
Weighted Factor Modelling: Six Examples of How to Use a Spreadsheet to Make Good Decisions

Updated: Dec 5, 2022



An earlier version of this post was published on the EA Forum by Peter Hurford.


We all make decisions every day. Some of these decisions are pretty inconsequential, such as what to have for an afternoon snack. Some of these decisions are quite consequential, such as where to live or what to dedicate the next year of your life to. Finding a way to make these decisions better is important.


We have been using the same method to help us make many of our major decisions for the past few years – everything from where to live to the decisions to create Charity Science Health and, later, Charity Entrepreneurship. The method isn’t particularly novel, but we think it is quite underused: multi-factor decision-making, also called weighted factor modelling.


Creating a weighted factor model (WFM) involves generating a set of criteria – often between three and twelve – and assigning a weight to each. Possible options are then scored on each of the criteria. The final score incorporates the option’s performance on each criterion and the weighting of each criterion, often by multiplying the two together. Hard factors (such as population size in absolute numbers) and soft factors (such as a score out of ten for population size) can be used in WFMs. It is a particularly useful tool, as it allows decision-makers to combine a large number of objective and subjective factors and identify which ones drive the results. However, it has some weaknesses. Like any decision-making tool, it is best used in combination with other methods.
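To make the arithmetic concrete, here is a minimal Python sketch of the scoring step, assuming each option gets a 1–10 score per criterion; the criteria names, weights, and scores are illustrative, not taken from any of our models.

```python
# A minimal sketch of a weighted factor model; all names and numbers are illustrative.

weights = {"cost_effectiveness": 3, "evidence_strength": 2, "scalability": 3}

options = {
    "Option A": {"cost_effectiveness": 8, "evidence_strength": 6, "scalability": 5},
    "Option B": {"cost_effectiveness": 5, "evidence_strength": 9, "scalability": 7},
}

def weighted_score(scores, weights):
    """Multiply each criterion score by its weight and sum the results."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

# Rank options from best to worst by weighted score
for name in sorted(options, key=lambda o: weighted_score(options[o], weights), reverse=True):
    print(name, weighted_score(options[name], weights))
```

In a spreadsheet, the same calculation is usually a SUMPRODUCT of the weights row and each option’s row of scores.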


In this article, we will explain this method as a ten-step process. Then we walk you through six examples of using weighted factor model spreadsheets to make better decisions. Finally, we will give two recommendations for how to improve your models. Here is the ten-step process of using weighted factor models to make better decisions (a short code sketch of the rating and reranking steps follows the list):

  1. Come up with a well-defined goal.

  2. Brainstorm many plausible solutions to achieve that goal.

  3. Create criteria through which you will evaluate those solutions.

  4. Create custom weights for the criteria.

  5. Quickly use intuition to prioritize the solutions on the criteria so far (e.g., high, medium, and low).

  6. Come up with research questions that would help you determine how well each solution fits the criteria.

  7. Use the research questions to do shallow research into the top ideas (you can review more or fewer ideas depending on how long the research takes per idea, how important the decision is, and how confident you are in your intuitions).

  8. Calculate the weighted scores of each option, then use research to rerate and rerank the solutions.

  9. Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable.

  10. Repeat steps 8 and 9 until sufficiently confident in a decision.
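To show how the middle steps fit together, here is a rough Python sketch of steps 5 to 9, assuming the same weighted-sum scoring as above: a quick intuitive pass over every idea, a shortlist, shallow research that rerates the shortlist, and a rerank. All ideas, criteria, weights, and scores are invented for illustration.

```python
# A hypothetical sketch of steps 5-9 of the process; everything below is made up.

INTUITION = {"high": 8, "medium": 5, "low": 2}  # rough mapping onto a 1-10 scale

weights = {"cost_effectiveness": 3, "evidence_strength": 2}

def rank(scored_options):
    """Return option names ordered by weighted score, best first."""
    total = lambda scores: sum(scores[c] * w for c, w in weights.items())
    return sorted(scored_options, key=lambda name: total(scored_options[name]), reverse=True)

# Step 5: quick intuitive pass over all ideas
first_pass = {
    "Idea 1": {"cost_effectiveness": INTUITION["high"], "evidence_strength": INTUITION["medium"]},
    "Idea 2": {"cost_effectiveness": INTUITION["medium"], "evidence_strength": INTUITION["high"]},
    "Idea 3": {"cost_effectiveness": INTUITION["low"], "evidence_strength": INTUITION["low"]},
}
shortlist = rank(first_pass)[:2]  # drop the weakest ideas before researching

# Steps 7-8: shallow research replaces the intuitive scores for the shortlist
researched = {
    "Idea 1": {"cost_effectiveness": 7, "evidence_strength": 6},
    "Idea 2": {"cost_effectiveness": 6, "evidence_strength": 9},
}
print(rank({name: researched[name] for name in shortlist}))  # step 9: pick the top ideas
```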


WHICH CHARITY SHOULD I START?



The definitive example of this process came when we first prioritized which charity to start, in the early days of Charity Entrepreneurship.


Come up with a well-defined goal: I want to start an effective global poverty charity, where effective is taken to mean a low cost per life saved comparable to current GiveWell top charities.


Brainstorm many plausible solutions to achieve that goal: For this, we decided to start by looking at the intervention level. Since there are thousands of potential interventions, we placed a lot of emphasis on which might plausibly be highly effective, and chose to look at GiveWell’s priority programs plus a few more ideas, recommended by various sources and experts, that we thought were worthy additions.


Create criteria through which you will evaluate those solutions / create custom weights for the criteria: For this decision, we spent a full month of our six-month project thinking through the criteria. We weighted criteria based on both importance and the expected variance that would occur between our options. We decided to strongly value cost-effectiveness, flexibility, and scalability. We moderately valued strength of evidence, metric focus, and indirect effects. We weakly valued logistical possibility and other factors.


Come up with research questions that would help you determine how well each solution fits the criteria: We came up with the following list of questions and research process (a piece of CE history!).


Use the research questions to do shallow research into the top ideas, calculate the weighted scores of each option, then use research to rerate and rerank the solutions: Since this choice was important and we were pretty uninformed about the different interventions in the beginning, we did shallow research into all of the choices. We then produced the following spreadsheet:



Afterwards, it was pretty easy to drop 22 of the 30 possible choices and go with a top eight (the eight that scored 7 or higher on our scale).

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable / Repeat steps 8 and 9 until sufficiently confident in a decision: We then researched the top eight more deeply, with the aim of turning them into concrete charity ideas rather than amorphous interventions. When reranking, we came up with a top five and wrote up more detailed reports: SMS immunization reminders, tobacco taxation, iron and folic acid fortification, conditional cash transfers, and a poverty research organization. A key part of this narrowing was talking to relevant experts, which we wish we had done earlier in the process, as it could have eliminated some unpromising options more quickly.

As we researched further, it became clearer that SMS immunization reminders performed best on our criteria: highly cost-effective, with a high strength of evidence and easy testability. The other four finalists were also excellent opportunities, and we strongly encouraged other teams to create charities in those areas. The rest is history: Charity Science Health was started (and later merged into Suvita) for immunization reminders, iron and folic acid fortification is being carried out at large scale in India by Fortify Health, and as of 2023 we still recommend tobacco taxation as a promising intervention for a new charity to implement.


WHICH CONDO SHOULD I BUY?



Come up with a well-defined goal: I want to buy a condo that is (a) a good place to live and (b) a reasonable investment.


Brainstorm many plausible solutions to achieve that goal: For this, I searched around on Zillow and found several candidate properties.


Create criteria through which you will evaluate those solutions: For this decision, I looked at the purchasing cost of the condo, the HOA fee, whether or not the condo had parking, the property tax, how much I could expect to rent the condo out, whether or not the condo had a balcony, whether or not the condo had a dishwasher, how bright the space was, how open the space was, how large the kitchen was, and Zillow’s projection of future home value.


Create custom weights for the criteria: For this decision, I wanted to turn things roughly into a personal dollar value, where I could calculate the benefits minus the costs. The costs were the purchasing cost of the condo turned into a monthly mortgage payment, plus the annual HOA fee, plus the property tax. The benefits were the expected annual rent plus half of Zillow’s expectation for how much the property would increase in value over the next year, to be a touch conservative. I also added some more arbitrary bonuses: a +$500 bonus if there was a dishwasher, a +$500 bonus if there was a balcony, and up to +$1000 depending on how much I liked the size of the kitchen. I also added +$3600 if there was a parking space, since the space could be rented out to others, as I did not have a car. Solutions would be graded on a benefits-minus-costs model.
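As a concrete illustration, here is how that benefits-minus-costs calculation might look in code. The bonus amounts follow the paragraph above; the property figures are invented, and the mortgage is annualized (an assumption) so that all costs and benefits are on a per-year basis.

```python
# A rough sketch of the condo model. Bonus amounts follow the post; the property
# figures and the annualized mortgage cost are invented for illustration.

def condo_score(annual_mortgage, annual_hoa, property_tax,
                expected_annual_rent, zillow_appreciation,
                has_dishwasher, has_balcony, has_parking, kitchen_bonus):
    """Benefits minus costs, in personal dollars per year."""
    costs = annual_mortgage + annual_hoa + property_tax
    benefits = expected_annual_rent + 0.5 * zillow_appreciation  # only half, to be conservative
    benefits += 500 * has_dishwasher + 500 * has_balcony         # flat bonuses
    benefits += 3600 * has_parking                               # the spot could be rented out
    benefits += kitchen_bonus                                    # 0 to 1000, by judgment
    return benefits - costs

# A hypothetical property
print(condo_score(annual_mortgage=18_000, annual_hoa=3_600, property_tax=4_000,
                  expected_annual_rent=24_000, zillow_appreciation=8_000,
                  has_dishwasher=True, has_balcony=False, has_parking=True,
                  kitchen_bonus=750))
```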

Quickly use intuition to prioritize the solutions on the criteria so far: Ranking the properties was very straightforward. I could skip to plugging in numbers directly from the property data and the photos.


Come up with research questions that would help you determine how well each solution fits the criteria: For this, the research was just to go visit the property and confirm the assessments.

Use the research questions to do shallow research into the top ideas, calculate the weighted scores of each option, then use research to rerate and rerank the solutions: Pretty easy – not much changed when I actually went to investigate.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: For this, I just ended up purchasing the highest-ranking condo, which was a mostly straightforward process. Property A wins! This is a good example of how easy it is to adapt the process and how you can weight criteria in nonlinear ways.

HOW SHOULD WE FUNDRAISE?



Come up with a well-defined goal: I want to find the fundraising method with the best return on investment.

Brainstorm many plausible solutions to achieve that goal: For this, our Charity Science Outreach team conducted a literature review of fundraising methods and asked experts, creating a list of 25 different fundraising ideas.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: The criteria we used here were pretty similar to the criteria we later used for picking a charity – we valued ease of testing, the estimated return on investment, the strength of the evidence, and the scalability potential roughly equally.

Come up with research questions that would help you determine how well each solution fits the criteria: We created a rubric with the following questions:

  • What does the research say about it (e.g., expected fundraising ratios, success rates, necessary prerequisites)?

  • What are some relevant comparisons to similar fundraising approaches? How well do they work?

  • What types/sizes of organizations is this type of fundraising best for?

  • How common is this type of fundraising, in nonprofits generally and in similar nonprofits (global health)?

  • How would we run a minimum cost experiment in this area?

  • What is the expected time, cost, and outcome for the experiment?

  • What is the expected value?

  • What is the expected time cost to get the best time per $ ratio (e.g., would we have to have 100 staff or a huge budget to make this effective)?

  • What further research should be done if we were going to run this approach?

Use the research questions to do shallow research into the top ideas, calculate the weighted scores of each option, then use research to rerate and rerank the solutions: After reviewing, we were able to narrow the 25 down to eight finalists: legacy fundraising, online ads, door-to-door, niche marketing, events, networking, peer-to-peer fundraising, and grant writing.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: We did MVPs of all eight of the top ideas and eventually decided that three of the ideas were worth pursuing full-time: online ads, peer-to-peer fundraising, and legacy fundraising.


WHO SHOULD WE HIRE?



Come up with a well-defined goal: I want to hire the employee who will contribute the most to our organization.

Brainstorm many plausible solutions to achieve that goal: For this, we had the applicants who applied to our job ad.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: We thought broadly about what qualities a good hire would have, and decided to heavily weight values fit and prior experience with the job, and then roughly equally value autonomy, communication skills, creative problem solving, the ability to break down tasks, and the ability to learn new skills.

Quickly use intuition to prioritize the solutions on the criteria so far: We started by ranking applicants based on their resumes and written applications. (Note that to protect the anonymity of our applicants, the following information is fictional.)


Come up with research questions that would help you determine how well each solution fits the criteria: The initial written application was already tailored toward this, but we designed a Skype interview to further rank our applicants.

Use the research questions to do shallow research into the top ideas, calculate the weighted scores of each option, then use research to rerate and rerank the solutions: After our Skype interviews, we reranked all the applicants.


Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: While “MVP testing” may not be a polite term to apply to people, we do a form of it by offering our applicants only a one-month trial before converting them to permanent hires.


WHICH TELEVISION SHOW SHOULD WE WATCH?



Come up with a well-defined goal: Our friend group wants to watch a new TV show together that we’d enjoy the most.

Brainstorm many plausible solutions to achieve that goal: We each submitted one TV show, which created our solution pool.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: For this decision, the criteria were the enjoyment value of each participant, weighted equally.

Use the research questions to do shallow research into the top ideas, calculate the weighted scores of each option, then use research to rerate and rerank the solutions: For this, we watched the first episode of each television show and then all ranked each one.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: We then watched the winning television show, which was Black Mirror. Fun!


WHICH STATISTICS COURSE SHOULD I TAKE?


Come up with a well-defined goal: I want to learn as much statistics as possible, as fast as possible, since I don’t have the time to take every course.

Brainstorm many plausible solutions to achieve that goal: For this, we searched around on the internet and found ten online classes and three books.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: For this decision, we heavily weighted breadth and time cost, moderately weighted depth and monetary cost, and weakly weighted how interesting the course was and whether the course provided a tangible credential that could go on a resume.

Quickly use intuition to prioritize the solutions on the criteria so far: By looking at the syllabi and tables of contents, and reading around online, we came up with some initial rankings:


Use the research questions to do shallow research into the top ideas, calculate the weighted scores of each option, then use research to rerate and rerank the solutions: For this, the best we could do was to try a little of each of our top class choices, while avoiding purchasing the expensive ones unless the free ones did not meet our criteria.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: Only the first three felt deep enough. Only one of them was free, but we were luckily able to find a way to audit the two expensive classes. After a review of all three, we ended up going with “Master Statistics with R”.


IMPROVING YOUR MODELS

There are many ways to learn and improve your modeling; here we will briefly discuss two of the more meta options. Each of these can improve your WFMs and help them come to better conclusions in the end.


Selecting the right factors

The hardest part of making a good WFM is probably picking the right factors to model. This is partly because the ideal factors will differ according to the exact problem you are aiming to solve and partly due to the abstract nature of the factors themselves. Factors in the model can include anything from hard data, like CEA results, to very soft judgment calls such as a general sense of logistical difficulty. Many decision-making tools you learn about can become future columns in your weighted factor models.


Three criteria separate good factors from bad ones:

Relevance: First and most obvious, the factor has to be relevant. If we are trying to determine which charity to fund, the number of letters in the intervention name could in theory be a column, but it would not correlate with our endline goal. A much more relevant criterion might be how many studies have been conducted on the intervention.

Cross applicability: As discussed in the previous chapter, tools must cross apply. A column like “estimated lives saved from malaria” would work if you were only considering malaria interventions, but it would only apply to some of the options if you were considering many different global health interventions. Another thing to watch out for is columns that do not differentiate between options. If a factor is scored as “medium” for all options, it does not add value to the decision-making process.

Practicality: Can you get data on it? A column that would take ten years to fill out is not helpful for making a decision that you need to make in a month. Is it more objective or subjective? Can others understand what the column indicates? These sorts of factors can allow your model to be interpreted and criticized by outsiders.

For highly important decision spreadsheets, you can even make a weighted factor model for possible metrics, as we did for our animal welfare points system. Our detailed metrics sheet outlined how different metrics performed on the various criteria, and the animal spreadsheet used our top metrics to compare the lives of different animals.


Selecting the right weights

A similar process to the one you use to select factors can be used to weight them. Certain factors will be better than others and more important to your endline conclusion. Having a row of weights at the top of your spreadsheet allows you to adjust them quickly and easily based on how important each factor is. It also helps to order the factors by how heavily they are weighted, so that the most important factors are at the front of the sheet.
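One quick use of an adjustable weights row, sketched below with invented options and numbers, is a sensitivity check: recompute the ranking under a few alternative weightings and see whether the top option holds up.

```python
# A hypothetical sensitivity check: rerun the ranking under alternative weightings
# and see whether the winner changes. All names and numbers are illustrative.

options = {
    "Option A": {"cost_effectiveness": 8, "evidence_strength": 5},
    "Option B": {"cost_effectiveness": 6, "evidence_strength": 9},
}

weight_sets = [
    {"cost_effectiveness": 3, "evidence_strength": 1},
    {"cost_effectiveness": 2, "evidence_strength": 2},
    {"cost_effectiveness": 1, "evidence_strength": 3},
]

for weights in weight_sets:
    totals = {name: sum(scores[c] * w for c, w in weights.items())
              for name, scores in options.items()}
    winner = max(totals, key=totals.get)
    print(weights, "->", winner)
```

If the same option wins under every plausible weighting, the decision is robust; if the winner flips, the weights deserve more careful thought.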


Finally, remember that WFMs are just one of many great decision-making tools you can use to make better decisions. Their strengths and weaknesses, alongside those of many other decision-making tools, are discussed in our Handbook "How to Launch a High-Impact Nonprofit".
