“Smart can also mean wise, kind, inspiring - and cost-effective. And that has a charm all its own.” - Nancy Gibbs
When researching what charity to found, you’ll undoubtedly spend a lot of time conducting cost-effectiveness analyses (CEAs) if you want to achieve the most good possible. Technically, all other criteria are proxies for achieving the most good per dollar spent. So, here’s a crash course on evaluating cost-effectiveness.
Is this intervention the cheapest way to achieve your goal?
Once you are confident that an intervention will have the intended effect, you need to calculate its cost-effectiveness and compare it against all the other possible interventions. Remember, you don’t just want to do good, you want to be ambitiously altruistic, and given that you have limited resources, that means figuring out how to do the most with what you have.
Take, for example, the cost-effectiveness of two interventions for helping the blind. One might be to train a seeing-eye dog that guides a blind person and makes their life easier. This costs thousands of dollars. Another might be to treat trachoma in the developing world, completely curing the blindness of dozens of people for the same amount. So you could increase your impact more than tenfold by switching from training seeing-eye dogs to treating trachoma victims. If you don’t, your choice leaves dozens of people living with blindness for the rest of their lives. That’s pretty big. There are millions of similar examples where, if you think strategically, you can help many more people simply by founding a more cost-effective charity.
OVERHEAD USUALLY ISN’T THE PROBLEM
On first thought, you might assume that all charities with high overhead ratios have low cost-effectiveness. After all, if a charity is lining its own pockets, its higher-ups are being greedy and some of your donation just goes to paying inflated salaries. But this is a red herring. If a charity takes half the money it raises and gives it to just one executive, but takes the other half and spends it ten times more effectively than any other charity, then a donation there is still five times more impactful than the next best alternative. To illustrate, consider a charity that provides anti-malarials but whose CEO is paid $120,000 US. Then consider a charity whose CEO is paid nothing; it’s entirely volunteer-run, and its costs are simply the cost of the medicine it gives out. The catch is that the medicine is homeopathic, something proven not to work. It’s clear that you should give to the former charity, despite its higher overhead.
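The arithmetic behind the "five times more impactful" claim is simple enough to make explicit. A minimal sketch in Python, with all numbers hypothetical round figures chosen for illustration:

```python
# Hypothetical illustration of the overhead example above: a $100 donation to
# a charity with 50% overhead but a 10x-more-effective program still beats the
# next best alternative fivefold.
donation = 100.0
impact_per_dollar_elsewhere = 1.0              # baseline: next best charity

program_spend = donation * 0.5                 # half goes to the executive
impact = program_spend * 10 * impact_per_dollar_elsewhere   # 10x effectiveness

advantage = impact / (donation * impact_per_dollar_elsewhere)
print(advantage)  # 5.0
```

The point survives any choice of baseline: overhead only matters via its effect on total cost per unit of impact.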
This is not to say that looking at overhead is entirely useless. Exorbitant overhead ratios may be a bad sign, indicating underlying corruption or inefficiency. However, cost-effectiveness depends on both the effectiveness of the intervention and its total costs.
Here is an extremely simplified cost-effectiveness (CE) equation:
CE = Total cost of intervention / Counterfactual impact

Where:

Counterfactual impact = Metric after intervention - Baseline metric

or, equivalently:

Counterfactual impact = Improvement in chosen metric resulting from the intervention

Counterfactuals will be more fully explained in Chapter 10.
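As a toy illustration, the simplified equation above can be computed directly. A minimal sketch in Python, where the program cost and metric values are entirely hypothetical:

```python
def cost_effectiveness(total_cost, metric_after, baseline_metric):
    """Cost per unit of counterfactual impact (lower is better)."""
    counterfactual_impact = metric_after - baseline_metric
    return total_cost / counterfactual_impact

# Hypothetical program: $50,000 spent, improving the chosen metric
# (say, DALYs averted) from 0 at baseline to 2,500 afterward.
print(cost_effectiveness(50_000, 2_500, 0))  # 20.0 (dollars per DALY)
```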
Of course, these calculations are never as simple as all that. For example, suppose bednets reduce the incidence of malaria by half. If it costs $3 per bednet, then you might think that it costs $6 per case of malaria prevented. However, it’s not malaria itself you are trying to prevent but, most importantly, deaths caused by malaria. How many cases of the disease do you have to prevent in order to prevent a death? This changes from country to country depending on the health care available and the rates of malaria-carrying mosquitoes in the area. It also depends on how many people had bednets before you arrived with a new shipment. As you can imagine, the calculations get complicated quickly.
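To see how the arithmetic compounds, here is a sketch of a cost-per-death-averted calculation built from the bednet numbers above. The case-fatality figure is entirely hypothetical; real values vary by country for exactly the reasons just described:

```python
def cost_per_death_averted(cost_per_net, cases_prevented_per_net, deaths_per_case):
    cost_per_case = cost_per_net / cases_prevented_per_net   # $6 in the example
    cases_per_death = 1 / deaths_per_case                    # cases to prevent one death
    return cost_per_case * cases_per_death

# A $3 net that prevents 0.5 cases on average, in a region where
# (hypothetically) 1 in 500 malaria cases is fatal:
print(cost_per_death_averted(3.0, 0.5, 1 / 500))  # 3000.0
```

Change the fatality rate or the baseline bednet coverage and the bottom line moves by an order of magnitude, which is why every input needs its own research.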
Are cost-effectiveness analyses reliable?
If we knew the true cost-effectiveness of a charity, that number would be practically the only important factor to consider. However, it’s important not to conflate the true cost-effectiveness with the estimated cost-effectiveness. The estimated figure is a simple and imperfect approximation based on the information available. Because we cannot predict the future, the estimated figure is useful in that it allows us to move forward with our calculations. However, beware: the cost-effectiveness calculation is never 100% accurate.
What do you do if Intervention Pie in the Sky says it has a $5 per DALY (If you’re still confused about DALYs and QALYs, you should re-read chapter 7) cost-effectiveness but has only one observational study (a pretty weak form of evidence) backing up that claim, while Intervention Conservative Estimate says it has a $50 per DALY cost-effectiveness but cites 20 randomized controlled trials on the subject (a pretty strong set of evidence)? Some might decide to go with Pie in the Sky because $5 per DALY is ten times more cost-effective than Intervention Conservative Estimate’s $50 per DALY. Those people might argue that, even if Pie in the Sky turns out to be twice, or even three times less cost-effective, it’s still better than Intervention Conservative Estimate.
This train of thought is common among those new to calculating impact but, fortunately, other people have already blazed this trail, so we can learn from their experiences. The truth is that cost-effectiveness analyses are usually based on very poor data, and charities frequently cite misleading and overly optimistic figures. Even when researchers input numbers that seem conservative, the results are still often completely wrong, and almost always in the optimistic direction: the true cost-effectiveness turns out worse than estimated.
These analyses commonly miss the mark not just by a factor of ten but often by factors of hundreds. GiveWell, one of the most famous charity evaluators, had this exact experience when researching deworming. Each year GiveWell’s calculations found the intervention to be less and less cost-effective, stabilizing years later at significantly less cost-effective than the original estimates. This happens time and again.
There are some tactics you can use when trying to estimate a figure as accurately as possible. For example, after each guess, ask yourself what probability you would assign to your number being too high, and what probability to its being too low, then adjust your next guess accordingly. If, say, I think it’s 90% likely that my guess is too low, I should raise my guess. Aim to reach the point where you’re 50% sure your guess could be too high and 50% sure it could be too low.
While this helps, you will still most likely end up with an overestimate of your effect, because of what is called the optimizer’s curse. The curse is that even if each intervention you compare is equally likely to be an overestimate as an underestimate, you will only look more deeply into the apparently most cost-effective interventions, thus introducing an upward bias into whichever one you select.
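The optimizer’s curse is easy to demonstrate with a small simulation. In this sketch (all parameters invented), ten interventions have identical true cost-effectiveness and every estimate is unbiased, yet the intervention we select as "best" is systematically overestimated:

```python
# Simulate the optimizer's curse: unbiased estimates, biased selection.
import random

random.seed(0)
TRUE_VALUE = 10.0   # identical true impact per dollar for every intervention

best_estimates = []
for _ in range(10_000):
    # Ten unbiased noisy estimates, equally likely to be high or low.
    estimates = [random.gauss(TRUE_VALUE, 3.0) for _ in range(10)]
    best_estimates.append(max(estimates))  # we pursue the apparent winner

mean_of_best = sum(best_estimates) / len(best_estimates)
print(mean_of_best)  # well above the true value of 10.0
```

No individual estimate is biased; the bias comes purely from picking the maximum, which is exactly what comparing CEAs and funding the winner does.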
You can also limit the importance of your errors by focusing most of your research time on the things that will dramatically affect your analysis. Don’t spend a long time improving your model of something relatively insignificant.
Many people and organizations come up with optimistic and pessimistic estimates for each factor, then use them to calculate an optimistic and a pessimistic overall outcome. But this produces very extreme boundaries that are not very realistic, because even in the most pessimistic plausible scenario, not every single thing goes as wrong as possible. We at Charity Science use getguesstimate.com to help us come up with more accurate estimates. This software runs a simulation many times, sampling between the optimistic and pessimistic scenarios for each factor, and averages the simulated outcomes. This produces a bell curve that is closer to reality, solving the problem of extreme boundaries.
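The idea can be sketched in a few lines of Python (Guesstimate’s actual models are richer, and every number here is invented): instead of multiplying all the optimistic values together and all the pessimistic values together, sample each factor independently many times and look at the resulting distribution:

```python
# Monte Carlo sketch: sample each uncertain factor between its optimistic
# and pessimistic bounds, rather than combining all extremes at once.
import random

random.seed(1)

def one_scenario():
    cost_per_net = random.uniform(2.0, 5.0)            # optimistic..pessimistic
    cases_prevented_per_net = random.uniform(0.3, 0.7)
    return cost_per_net / cases_prevented_per_net      # cost per case prevented

samples = sorted(one_scenario() for _ in range(100_000))
mean = sum(samples) / len(samples)
print("mean cost per case:", mean)
print("90% interval:", samples[5_000], "to", samples[95_000])
```

The naive worst case here would be $5 / 0.3 ≈ $16.70 per case, but the simulation shows that outcomes near the extremes are rare, and the central estimate is far more informative.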
HUMANS ARE TERRIBLE AT PREDICTING AND ESTIMATING
Buehler et al. (Buehler, R., Griffin, D. and Ross, M. 1995. It's about time: Optimistic predictions in work and love. Pp. 1-32 in European Review of Social Psychology, Volume 6, eds. W. Stroebe and M. Hewstone. Chichester: John Wiley & Sons.) asked participants to predict how long it would take them to write an academic paper. Each student provided the estimated times by which they assigned a 50%, 75%, and 99% probability of having completed their papers. Just 45% of participants finished by the time they had been 99% sure they would be done, even though they had written many papers in the past and were intimately familiar with their own writing habits. This experiment shows the stark reality that even something simple, with few variables and concrete feedback, is incredibly difficult to predict or estimate. When it comes to charitable work at a societal level, these calculations are vastly more complicated, leaving even more room for error. Just like the time predictions in the Buehler et al. study, cost-effectiveness analyses are almost always too optimistic.
How can I account for low-quality evidence?
There are no hard-and-fast rules about how much to discount a CEA based on how much (or how little) evidence there is backing it up, but the table below can give you some idea. These charities are all likely to be about equal in true expected value. And remember - it’s hard to be too pessimistic when it comes to CEAs.
How do I account for government spending in cost-effectiveness analyses?
When considering government spending on a program, as opposed to your nonprofit’s direct costs, you should take into account where the money would be spent counterfactually if not spent on this intervention. If a government is funding the program, the counterfactual impact will depend largely on how well the government spends funds in general: how well would the money be spent if it wasn’t used to fund this intervention? Which department would it have come from? After you estimate these parameters (even if there is plenty of uncertainty), you can start to model how much of the government’s spending you should include in your CEA.
When you create your cost-effectiveness analysis, you should discount the weight given to government costs, and enable external parties to adjust that figure according to how well they think the money would have been spent counterfactually. Generally, a government’s average spending, even from a poverty-focused branch, will not be as effective as that of the most effective charities.
When deciding what percentage of the intervention external impact-focused donors should fund and what percentage the government should fund, it makes sense to have the government fund as high a percentage as possible, so long as the government’s spending tends to be less effective than that of the external donors.