

This is one of the documents describing Charity Entrepreneurship’s 2019/2020 research process. A summary of the full process is here.


This article explains why and how Charity Entrepreneurship uses cost-effectiveness analyses (CEAs) as part of our research process. A CEA consists of one or more calculations estimating the ratio of the cost of a given intervention to its impact. CEAs are particularly useful because they allow us to compare different interventions quantitatively. Despite their usefulness, CEAs are not our only evaluative method, since they can be prone to errors and can fail to adjust for prior views.

Charity Entrepreneurship uses CEAs at three stages of its research. At the first stage (idea sort), each intervention is assessed for 20 minutes using all four of our methodologies, including CEA. At the second stage (prioritization report), we spend two hours assessing each intervention using just one of our four methodologies: we apply CEA in our mental health and happiness research area. Finally, we use all four methodologies again in greater depth for the 80-hour assessment of each of the top interventions (intervention report). To conduct a CEA, lead researchers compile different key parameters and factors affecting the impact of an intervention into a sheet, and use these to calculate the ratio of cost to good done. Depending on the time allocated, the model will involve more or fewer parameters and factors, and will use more or fewer external sources to estimate them.

Table of contents:

1. What is a cost-effectiveness analysis?
2. Why are cost-effectiveness analyses useful?
3. Why doesn’t Charity Entrepreneurship rely exclusively on cost-effectiveness analyses?
4. How much weight do we give our cost-effectiveness analyses?
5. How does Charity Entrepreneurship use cost-effectiveness analyses?
6. How long are Charity Entrepreneurship’s cost-effectiveness analyses?
7. Deeper reading


1. WHAT IS A COST-EFFECTIVENESS ANALYSIS?

A cost-effectiveness analysis is a form of analysis most commonly used in economics, health economics, and charity evaluation. It consists of one or more calculations and results in a ratio of the cost of a given action or intervention relative to its impact. Costs are generally measured in dollars, with impact often measured in something like DALYs or lives saved. More cost-effective interventions generally have a lower ratio of costs to good done, and are considered better than less cost-effective interventions, all else being equal.

Summary: Cost-effectiveness is a measure of how much cost a given activity or action entails compared to the amount of good done by that activity or action. (1)
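
As a toy illustration of how the ratio is computed (all numbers are hypothetical and not drawn from any real intervention):

```python
# Toy cost-effectiveness calculation (all numbers hypothetical).
program_cost_usd = 250_000        # total cost of running the program
people_reached = 40_000           # individuals covered
dalys_averted_per_person = 0.02   # estimated health impact per person reached

total_dalys_averted = people_reached * dalys_averted_per_person  # 800 DALYs
cost_per_daly = program_cost_usd / total_dalys_averted           # $312.50

print(f"Cost-effectiveness: ${cost_per_daly:,.2f} per DALY averted")
```

A lower cost per DALY averted means a more cost-effective intervention, all else being equal.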

Example CEA



It is important to distinguish between the “true cost-effectiveness” of an action and the “modeled cost-effectiveness.” The true cost-effectiveness of an action—if it’s known—would be a highly relevant metric and could be weighted very heavily when making a decision. However, the closest an evaluation-focused organization can usually get to ascertaining the true cost-effectiveness of an intervention is through constructing a model, which is almost by definition an imperfect estimation. This is because we often lack important data about the world, or a sufficient amount of it. Models can certainly be helpful and can be used as a type of evidence that an intervention should or shouldn’t be considered. However, we think that it’s important to take into account data from other types of models as well, given the shortcomings of cost-effectiveness models. 



2. WHY ARE COST-EFFECTIVENESS ANALYSES USEFUL?

When attempting to compare the effectiveness of different interventions, it can be useful to create a formal and detailed model with a single endline number, i.e. one unique final result.

Benefits of CEAs (listed here from strongest to weakest): 

  • Clearly connect to endline goals

  • Can be used to compare interventions that are otherwise difficult to compare

  • Allow formal sensitivity analysis

  • Encourage quantitative analysis more broadly

  • Are underutilized in many evaluations

  • Give a transparent picture of the evaluator’s rationale

  • Are a respected tool in multiple fields

  • Consider scope

  • Reduce some biases

  • Can lead to novel conclusions

CEAs clearly connect to endline goals: Ultimately, doing the most good per investment is our goal. A CEA may be an imperfect model, but it speaks directly to our key question. Compared to other methods, such as consulting experts or a weighted factor model, it has the clearest theoretical connection to good done, even if in practice model errors weaken it.

CEAs can be used to compare interventions that are otherwise difficult to compare: Doing an analysis that results in a ratio is useful because it allows for a direct numerical comparison to be made. In other words, CEAs provide a way to quantitatively compare interventions that may seem qualitatively incomparable (e.g. from different cause areas).

CEAs allow formal sensitivity analysis: A sensitivity analysis can locate the most important assumptions, variables, and considerations affecting the endline conclusion. A formal sensitivity analysis can be done quickly and easily on a CEA, showing the key parameters that are the most important to get right. In other words, it identifies the factors that have the most substantial effect on the impact. 
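
A minimal sketch of a one-at-a-time sensitivity analysis on a toy CEA model (the parameter names, values, and ranges are invented for illustration):

```python
# One-at-a-time sensitivity analysis on a toy CEA model.
def cost_per_outcome(params):
    """Toy CEA: dollars spent per unit of good done."""
    total_cost = params["cost_per_person"] * params["people_reached"]
    total_good = (params["people_reached"]
                  * params["uptake_rate"]
                  * params["effect_per_person"])
    return total_cost / total_good

best_guess = {"cost_per_person": 5.0, "people_reached": 10_000,
              "uptake_rate": 0.4, "effect_per_person": 0.1}
# pessimistic/optimistic bounds for each uncertain parameter
ranges = {"cost_per_person": (3.0, 9.0),
          "uptake_rate": (0.2, 0.7),
          "effect_per_person": (0.05, 0.2)}

baseline = cost_per_outcome(best_guess)
swings = {}
for name, (low, high) in ranges.items():
    outputs = [cost_per_outcome({**best_guess, name: value})
               for value in (low, high)]
    swings[name] = max(outputs) - min(outputs)  # effect on the endline

# The parameter with the largest swing is the most important to get right.
most_sensitive = max(swings, key=swings.get)
print(most_sensitive, round(swings[most_sensitive], 1))
```

Tools like Guesstimate run a richer version of this automatically, but the principle is the same: vary one input across its plausible range and see how much the endline moves.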

CEAs encourage quantitative analysis more broadly: By default, most people (including experts) do not think in quantitative terms. For example, when asked if an event will happen, most people think of this as a binary question (yes/no) rather than thinking about the probability of the event happening. CEAs require quantitative inputs for each variable, which encourages quantitative thinking and calibration (e.g. an event being 20% vs. 80% likely). 

CEAs are underutilized in many evaluations: CEAs are often unused in situations that could benefit from them, likely in part because quantified analysis in general, and CEAs in particular, take considerable time and require a decent mathematical understanding. In charitable areas with less established research bases, even individuals trained in formal quantitative methods often make little use of CEAs. This means a CEA can add a useful viewpoint that would otherwise be unconsidered or under-considered.

CEAs give a transparent picture of the evaluator’s rationale: Cost-effectiveness models provide a high level of transparency of thought. Since each input is identified and clearly quantified, an outsider can quickly see where assumptions are being made and can therefore more easily assess the validity of the conclusions.

CEAs are a respected tool in multiple fields: Experts in many fields are in strong agreement that CEAs are a useful tool. These include experts in economics, medicine, and – most relevant for our purposes – charity evaluation.

CEAs consider scope: A major concern with many models is that scope is frequently not taken into account. If one charity has the potential to grow one thousand times bigger than another charity, a different type of model may not successfully reflect that it could be one thousand times more important to start the former charity over the latter. Humans are notoriously bad at properly understanding scope. (2)

CEAs reduce some biases: CEAs are less susceptible to certain human biases that affect other analyses. For example, a well-constructed CEA can reduce the base rate fallacy, the conjunction fallacy, and hyperbolic discounting.

CEAs can lead to novel conclusions: CEAs can often lead to unintuitive conclusions and can, thus, lead to the consideration of new ideas or approaches that might have been quickly ruled out by other methodologies or “commonsense” approaches.




3. WHY DOESN’T CHARITY ENTREPRENEURSHIP RELY EXCLUSIVELY ON COST-EFFECTIVENESS ANALYSES?

Concerns with reliance on CEAs in charity evaluation have been discussed in depth in other posts, with the most comprehensive coverage of the theoretical concerns outlined by GiveWell and the most comprehensive coverage of the practical concerns by Saulius Šimčikas.

In their post “Why we can’t take expected value estimates literally (even when they’re unbiased)”, GiveWell states:

“The mistake (we believe) is estimating the ‘expected value’ of a donation (or other action) based solely on a fully explicit, quantified formula, many of whose inputs are guesses or very rough estimates. We believe that any estimate along these lines needs to be adjusted using a 'Bayesian prior’; that this adjustment can rarely be made (reasonably) using an explicit, formal calculation; and that most attempts to do the latter, even when they seem to be making very conservative downward adjustments to the expected value of an opportunity, are not making nearly large enough downward adjustments to be consistent with the proper Bayesian approach.”

Flaws of CEAs (listed here from strongest to weakest):

  • Subject to the “optimizer's curse”

  • Necessarily involve value judgments

  • Model uncertainty

  • Prone to mistakes

  • May not be generalizable to other contexts  

  • Make it hard to model flow-through effects

  • Can be misleading in many ways

  • The interventions we analyze are somewhat preselected for cost-effectiveness

  • Subject to researcher bias

  • May bias you towards interventions with more measurable results

  • 90% confidence intervals can be misleading

CEAs are subject to the “optimizer's curse”: All estimates are prone to error, and these errors compound. An intervention whose CEA yields a high cost-effectiveness is more likely to have had errors in its favor. This means that the most and least cost-effective interventions are likely to regress to the mean upon further examination. Overweighting CEAs in our decision making could lead us to neglect good opportunities that did not have as many favorable errors. This is less of a problem in richer information environments.

CEAs necessarily involve value judgments: It is surprising how much value judgments can differ. For example, GiveWell assumes that the "value of averting the death of an individual under 5 [years of age]" is 50 times larger than the value of "doubling consumption for one person for one year." Reasonable estimates could be as large as six times this number, using life-satisfaction years. If all value judgments are subjective preferences that vary among individuals, then CEAs are only generalizable insofar as the researcher’s values align with the reader’s.

CEAs model uncertainty: Cost-effectiveness models are necessarily simplifications of reality. This is both a strength and a weakness: simplification lets us reach a clearer understanding faster, but it also means the model does not accurately capture reality. Adjustments in the variables used will change the final value of the CEA. One way to combat this is to create several models and see if they converge.

CEAs are prone to mistakes: Mistakes are inevitable, due to human error and/or poor information quality. Although small mistakes usually only translate to small problems on their own, these mistakes compound in a multivariate model, thus exaggerating the consequences. For example, GiveWell once found five separate errors in a DCP2 DALY figure for deworming that contributed to an overestimation of the intervention’s cost-effectiveness by one hundred times.

CEAs may not be generalizable to other contexts: Some CEAs rely heavily on randomized controlled trials (RCTs) for their data, and in some cases, this can be problematic. If an RCT was conducted in one particular region or with one particular method, the effect size may change dramatically in different regions or with other methods.

CEAs make it hard to model flow-through effects: Researchers have written that it is difficult to properly model flow-through effects in CEAs. Indeed, a common tactic is to ignore flow-through effects entirely. There are solutions to this problem; however, they all take vast amounts of time or are prone to error. 

CEAs can be misleading in many ways: If researchers fail to consider important factors or are not transparent in their reasoning, CEAs can yield misleading results. For example, if a CEA concerns an expected value, the probability of success must be clarified. If only pure expected value is reported, there is no difference between a 50% chance of saving 10 children and a 100% chance of saving five children. This would fail to consider any level of risk aversion. 
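
The trade-off in that example is just arithmetic, sketched here:

```python
# Expected value alone hides risk: these two options have identical EVs.
ev_risky = 0.50 * 10    # 50% chance of saving 10 children
ev_certain = 1.00 * 5   # certainty of saving 5 children

# Identical expected values, very different risk profiles;
# a risk-averse donor may still prefer the certain option.
assert ev_risky == ev_certain == 5.0
```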

The interventions we analyze are somewhat preselected for cost-effectiveness: As the sources for our charity ideas were largely from within the EA community, these ideas will have been created with cost-effectiveness in mind. For our longer reports, we will have already narrowed down the ideas to more promising interventions, and thus, the variance will be lower. This means that random error will account for more of the variance, making CEAs a weaker tool.

CEAs are subject to researcher bias: CEAs are resistant to certain biases, but are susceptible to others. If the researcher conducting a particular CEA has a favorable view of the intervention, for example, he or she may (consciously or unconsciously) bias the results in its favor. A researcher’s desire to find novel, cost-effective interventions may also have this result. 

CEAs may bias you towards interventions with more measurable results: Effects that are difficult to measure may increase the error rate or be neglected. This can lead to an underestimation of the effectiveness of interventions with hard-to-measure outcomes.

Ninety-percent confidence intervals can be misleading: Depending on how well calibrated researchers are, the worst-case scenario, the best-case scenario, and the 90% confidence interval (CI) may all be incorrect. CIs are particularly susceptible, as we are likely to underestimate the range of uncertainty that is actually accurate. Worst and best cases are no better, as they may rely on many unlikely events all happening, meaning the probability of either occurring is minimal.



4. HOW MUCH WEIGHT DO WE GIVE OUR COST-EFFECTIVENESS ANALYSES?

Given that CEAs have many benefits and flaws, it is important to use them only in conjunction with other methodologies. CEAs are one of the four components of our evaluation process; the others are the weighted factor model, informed consideration (roughly, team intuition), and expert view. We also consider the convergence of these four components, i.e. in what direction the different models point overall. We expect to weight our CEAs more than 20% but likely not more than 33% in our overall assessments (depending on the charity/intervention). We expect our CEAs to be more useful in areas where quantitative differences can be very large and where analysis based on our other evaluative criteria is less reliable.





5. HOW DOES CHARITY ENTREPRENEURSHIP USE COST-EFFECTIVENESS ANALYSES?

We considered several software programs and combinations for our cost-effectiveness modeling. The two easiest to rule out were Google Documents, for back-of-the-envelope calculations (BOTECs), and STATA, a complex modeling package: these had either too little or too much complexity for our purposes. We considered Google Sheets and Guesstimate in greater detail, and discuss them below.

Google Sheets: Google Sheets is fast and simple to work with, and it is easy to understand without much prior or complex knowledge. While spreadsheets are a common way to generate number-based models, they lack a few of the key features we need for our CEAs.

Guesstimate: Guesstimate is a less commonly used system, but has advanced Monte Carlo and sensitivity analysis features. It is too slow to use for very quick CEAs, but can be handy for models with high levels of uncertainty.

How we use the two in combination: For our 20-hour CEAs, we first build the model in a spreadsheet and then remodel the data in Guesstimate for both sensitivity analysis and simulated endline point estimates. Using two models decreases the odds that an error in one model will have a very significant effect on the overall outcome, particularly since the software packages require somewhat different formatting.




We use consistent formatting across all of our CEAs, and have tried to keep it somewhat consistent with GiveWell’s formatting. This way, anyone familiar with GiveWell’s CEAs will have an easier time understanding ours (and vice versa). ​



Certain cells are color coded to reflect the sources of those numbers.

Yellow: Value and ethical judgments
These numbers could change if the reader has different values from the researcher. For example, reasonable people could disagree about the answer to the question “How many years of happiness is losing the life of one child under five worth?”. When making these judgments, we generally consult the available literature, but there often is no clear, consistent, agreed-upon answer. 

Green: Citation-based numbers
These numbers are based on a specific citation. If we found and considered multiple citations, the best will be hyperlinked to the number, and the others will be included in the reference section. If a number is an average of two other numbers, both numbers will be entered into the sheet, and the average will become a calculated number with a different color format.

Blue: Calculated number
These numbers are calculations generated from other numbers within the sheet. Calculated numbers involve no more than five variables, for readability and easier sanity checking. Generally, it is harder to make errors when several subtotals are created instead of one very large, multi-variable calculation at the end.

Orange: Estimated numbers
Sometimes, no specific numbers can be found for a parameter. In this case, the number is estimated by one or more staff members. These estimates will often be the numbers within a CEA that we have the lowest confidence in. 




Discounting is a term we use for adjusting an estimate by a factor not captured in the direct number. We try to keep our discounts clear and separate from the original numbers in the CEA, as these discounts are generally subjective. Discounting is common and can be seen in many other detailed CEAs (for example, GiveWell’s). The items listed below are not the only types of discounting used in our models, but they are some of the most common ones.

Evidence discounting: If a source suggests a number but the evidence behind it is extremely weak, we might apply a certainty discount. This is based on the assumption that, in general, estimates regress toward the mean as the evidence behind them gets stronger. Thus, using a very weakly evidenced number in one estimate and a strongly evidenced number in another would systematically favor the areas with weaker evidence, as those numbers will tend to be more optimistic.

Generalizability discounting: Often, a source will be based on a situation that is not identical, or even similar, to the one we are considering. For example, if a study was run in one country, the results would not be identical if it were run in another country, even with all other factors held constant. Thus, when generalizing evidence more than is common in our other comparable CEAs, we apply a generalizability discount.

Bias discounting: If a citation comes from a source that we suspect has some sort of bias, we might discount this number. For example, every charity has a strong incentive to make their program and progress look better. Thus, charity-reported numbers tend to be far more optimistic than the same activity analyzed by a study or outside actor. 

Time discounting: Time discounting is the practice of discounting future benefits compared to immediate effects. Even with zero time preference, in terms of utility, it can still make sense to discount based on time. For example, income in the near term can be invested and used for increased consumption in the future. Additionally, there is always some probability that an accidental death will occur before the future utility is realized, and therefore, it is worth less in the present. 
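
A standard exponential discount formula captures this; the 4% rate below is an arbitrary assumption for illustration, not a rate we endorse:

```python
# Present value of a benefit realized t years from now, at annual discount rate r.
def present_value(future_benefit: float, r: float, t: float) -> float:
    return future_benefit / (1 + r) ** t

# 100 units of benefit delivered in 10 years, at a (hypothetical) 4% annual rate
print(round(present_value(100.0, r=0.04, t=10), 2))  # → 67.56
```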




Sheets: Each charity idea will have its own CEA sheet, named after the charity idea being evaluated within it. These sheets will be collected into a single CE-CEA 2020 spreadsheet for each of the four cause areas we are considering.

Summary sheet: The first sheet will be a summary sheet that allows quick comparison between the charities. It will describe the three factors that could most change the CEA (as determined by a sensitivity analysis), as well as the factors the CEA’s creator considers least certain. The summary sheet will include two endlines. One is a metric that is easily understandable and directly connected to the intervention – for example, “number of chickens’ life years lost from being in a caged vs. a cage-free system.” The other endline will be a cross-comparable metric that can be used across the entire cause area; this metric can be used to determine which interventions look most cost-effective in a given area. There is also a column describing the overall uncertainty level, which is the CEA creator’s estimate of how confident we are in this CEA relative to others within the cause area.

Rows, columns, and sections: The spreadsheet will generally be read across a given row, with the first non-empty column containing a title or description. Column usage will generally be consistent across multiple rows. Specific sections will be put into boxes to increase readability. Generally, the last column and row of a given section will be used for notes or description. Every CEA will generally have a benefits section, a costs section, and a counterfactuals section.

Optimistic, pessimistic, and best guess: Throughout the spreadsheet, an optimistic, pessimistic, and best-guess estimate will be identified. The most time will be put into the best-guess numbers. The endline summary will be generated using a Monte Carlo simulation. The optimistic and pessimistic estimates will be used for the range of the 90% confidence interval. The relative position of the best-guess within this range will be used to determine the curve. 
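
A rough sketch of this simulation step, substituting a simple triangular distribution for the fitted curve (all parameter names and ranges are hypothetical):

```python
import random
import statistics

random.seed(0)  # for reproducibility

def sample_cost_per_outcome():
    # triangular(pessimistic, optimistic, best_guess) for each input;
    # a fuller model would fit a curve so the bounds match a 90% CI
    cost_per_person = random.triangular(3.0, 9.0, 5.0)
    uptake_rate = random.triangular(0.2, 0.7, 0.4)
    effect_per_person = random.triangular(0.05, 0.2, 0.1)
    return cost_per_person / (uptake_rate * effect_per_person)

draws = sorted(sample_cost_per_outcome() for _ in range(10_000))
median = statistics.median(draws)
ci_low, ci_high = draws[499], draws[9499]  # empirical 90% interval
print(f"median {median:.0f}, 90% interval [{ci_low:.0f}, {ci_high:.0f}]")
```

Guesstimate performs this kind of Monte Carlo propagation natively; the sketch just shows what the endline summary is summarizing.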

Sensitivity analysis: A sensitivity analysis will be conducted on each CEA to determine which factors most affect the estimate. The CEA creator then pulls out the factors that both have a large effect and seem more likely to change based on new information or research. These factors will be listed on the summary sheet.

Referencing: The most important or relevant reference will be linked to the cell. All references will be stored in a section of the reference sheet, and each cause-level CEA spreadsheet will have a consistent reference page as its last sheet. We track references using the same system applied across our other methodologies.

Endline metrics: Our endline metrics are sufficiently complex, cause-specific, and detailed that they require their own report. Our metrics report explains how metrics are applied across the different methods used in our research process (i.e. informed consideration, expert view, CEA, and weighted factor model).

Example CEA: This cost-effectiveness analysis produced during our 2020 animal welfare research shows some of the features and organization of a Charity Entrepreneurship CEA.






6. HOW LONG ARE CHARITY ENTREPRENEURSHIP’S COST-EFFECTIVENESS ANALYSES?

At Charity Entrepreneurship we use four different CEA lengths throughout our charity idea evaluation process. CEAs are done in multiple lengths for two main reasons. The first is that there are simply too many ideas to conduct deep CEAs on each one: our initial brainstorming often results in hundreds of ideas, so conducting even a 10-hour CEA on each would require multiple years of research time. The second reason is that we want to see how the results of shorter CEAs compare to those of larger, more intensive CEAs. If a two-hour CEA yields results that are inconsistent with our endline recommended charity ideas, this could suggest that the CEA methodology is not effective for narrowing down ideas.




6.2. FIVE-MINUTE CEAS

The goal of a five-minute CEA is to very quickly get a quantitative sense of the cost-effectiveness of an intervention. This CEA only considers key factors and only uses estimates based on intuition or citations that can be found in a very quick internet search.

At this level of CEA, results can vary by orders of magnitude; therefore, very high scores are considered almost certain to be caused by errors. One way we account for this is by evaluating interventions using percentiles rather than raw CEA scores. This limits the effect that errors can have on the overall score an idea receives and prevents the Z-score distribution from being too skewed by a particularly high or low CEA. One drawback of making these corrections is that ideas that genuinely have remarkably high cost-effectiveness may be undervalued. However, we believe that such ideas should also score well on other metrics.
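
The percentile step can be sketched as follows, with invented scores; note how the implausible outlier is capped at the top rank rather than dominating any average it enters:

```python
# Rank hypothetical raw CEA scores into percentiles.
raw_scores = {"idea_a": 12.0, "idea_b": 35.0, "idea_c": 41.0, "idea_d": 9_999.0}

ranked = sorted(raw_scores, key=raw_scores.get)  # worst to best
n = len(ranked)
percentiles = {idea: 100 * (rank + 1) / n for rank, idea in enumerate(ranked)}

print(percentiles)
# → {'idea_a': 25.0, 'idea_b': 50.0, 'idea_c': 75.0, 'idea_d': 100.0}
```

Whether idea_d’s true cost-effectiveness is 41 or 9,999, its contribution to a combined score is the same, which is exactly the error-damping effect described above.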

We are skeptical about whether doing such brief CEAs yields helpful results. However, we use them in conjunction with other approaches, and they only represent one-fourth of the systems used to narrow down from hundreds to dozens of ideas. 

For five-minute CEAs:

  • Numbers are mostly either estimated or based on the first result in Google

  • Only a few key numbers are calculated within a single row 

  • Only the largest factors are considered

  • Citations are not tracked​

Expected outcomes include:

  • A quick Google spreadsheet-based CEA that is understandable to the creator

  • An endline number that is cross-comparable

Advice for conducting five-minute CEAs:

  • Before conducting CEAs we recommend doing a calibration exercise 

  • Unless you are using a specific study that reports cost data and has a clear effect size, you will almost always only be able to use intuitions and Fermi calculations

  • With regard to effectiveness, think about the theory of change for the intervention, how many people you would need to reach, how many among them would take up the behavior/treatment/ask, and how that will translate into the metric you care about

  • With regard to cost, identify the largest factor and think about it in terms of approximate product cost and salary cost for each person reached

  • It can be worth spending 1-2 hours preparing a CEA template with key parameters.

    • For example, for family planning research, useful parameters include the efficacy rates of various contraceptives based on a quick literature review. If a large number of interventions on the list target specific populations, consider looking at their share within the larger population and how many people you can reach at different levels (a clinic, a school, etc).

  • Consider using an alarm to respect the five-minute limit, as it is easy to go over time with this methodology

    • Try not to use more time, in order to ensure consistency in your mental model for all the interventions. Since a lot of factors might require on-the-spot judgment calls, it is useful to make sure these remain constant across interventions.

  • When all the CEAs have been completed, it is a good idea to have another researcher look over them to see if anything seems counterintuitive

    • Results that have extreme values are often due to error, so it’s helpful to have a second pair of eyes. 

  • Different methods result in different outcomes. Considering multiple methods reduces the likelihood that a promising intervention is missed.




6.3. TWO-HOUR CEAS

The goal of two-hour CEAs is to compare multiple charity ideas within a given area and to sort them in approximate order of promise before conducting a deeper report. We used two-hour sorting CEAs in one of the four cause areas we considered (mental health), which will later allow comparison with the other two-hour sorting methodologies used in the remaining areas.

Biggest differences between a two-hour CEA and a five-minute CEA:

  • Key numbers are based on sources instead of intuition 

  • More factors are considered

  • External CEAs are searched for and values from these are used as inputs 

  • Citations are tracked (but are not organized for readability)

Expected outcomes include:

  • A Google spreadsheet-based CEA that is understandable by the research team

  • An endline number that can be used to compare the idea to others and is somewhat reliable

Advice for conducting two-hour CEAs:

  • Even within two hours you may fail to find certain relevant data, you may discover that there isn’t sufficient evidence, or you may have too low confidence in the evidence you find to create a robust CEA. If this is the case, it is best practice to use either your prior estimate for cost-effectiveness or the average of the top 30 interventions. This can be done using a Bayesian approach and adjusting towards these priors as explained by GiveWell.
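
One simple version of such a prior adjustment combines two normal distributions by precision (inverse variance). The numbers below are invented for illustration; this is a sketch of the general Bayesian idea, not GiveWell’s actual model:

```python
# Shrink a noisy CEA estimate toward a prior via precision weighting.
def posterior_mean(prior_mean, prior_sd, estimate, estimate_sd):
    w_prior = 1.0 / prior_sd ** 2        # precision of the prior
    w_estimate = 1.0 / estimate_sd ** 2  # precision of the noisy CEA estimate
    return (w_prior * prior_mean + w_estimate * estimate) / (w_prior + w_estimate)

# A weakly evidenced CEA of 50 (in some cross-comparable unit) gets pulled
# almost all the way back to a prior of 5, e.g. the top-interventions average.
print(round(posterior_mean(prior_mean=5.0, prior_sd=2.0,
                           estimate=50.0, estimate_sd=20.0), 2))  # → 5.45
```

The wider the uncertainty on the CEA estimate relative to the prior, the more the posterior stays near the prior, which is the downward adjustment GiveWell argues for.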


6.4. 20-HOUR CEAS


The goal of the 20-hour CEA is to provide one of the four major inputs into our conclusion about whether it is worth founding a charity based on the idea being evaluated. Twenty-hour CEAs are done fairly late in our research process and are not begun until 36 hours have already been put into other approaches on a given charity idea. At this point, there is a fairly strong understanding of the idea and a small CEA (created earlier, in the two-hour CEA process) to build on. The 20-hour CEA will use all the elements described in this document.

Elements that are unique characteristics of the 20-hour CEA include: 

  • More factors are considered and at a greater depth

    • Counterfactuals are considered formally

    • A higher percentage of the numbers in the model are based on multiple citations

    • Deeper consideration is used when considering the strength of each source

    • External CEAs are used as a minor input 

  • There is greater readability

    • Endlines are given in both intuitive and unintuitive metrics

    • Intuitive and readable color coding is used

    • Citations are polished

      • The model includes the relevant citations for each number in Google Sheets and Guesstimate without explaining why these sources were used or what the research behind these numbers looks like

    • Assumptions and possible issues are stated in another document

      • This document is where the numbers used are explained, including why certain sources were considered and/or what the research behind the numbers looks like

  • More things are double-checked

    • Sensitivity analyses are double-checked

    • A second model is created in Guesstimate to error-check 

    • External checking is done

How a 20-hour CEA compares to other methods used (in an 80-hour report) 

  • [10 hours] Broad undirected reading and crucial considerations (informed consideration)

  • [16 hours] Directed research (weighted factor model)

  • [10 hours] Finding and talking to experts (experts)

  • [20 hours] CEA creation (CEA)  

  • [4 hours] Directed research (weighted factor model) 

  • [10 hours] Summary writing and internal contemplation (informed consideration) 

  • [10 hours] Showing endline report to experts (experts)

Expected outcomes of the 20-hour CEA include:

  • One cross-comparable spreadsheet + Guesstimate model duplication, both of which are readable and publishable to the general public

    • A Google Sheets model, made before the Guesstimate model (Google Sheets is more reliable than Guesstimate, so you are less likely to lose all of your numbers in a saving error)

  • An endline number we can use to recommend the intervention




External CEAs vary widely in both quality and formatting, and thus have a wide range of possible uses. Because of this diversity, external CEAs are rarely directly comparable to CEAs created by other organizations. We see external CEAs as falling into roughly three levels: informative, suggestive, and predictive.

Informative CEAs: Many CEAs, even of low quality, can be informative to generate ideas or obtain citations for key numbers. For this level of CEA we do not take the endline as informative or even suggestive of the intervention’s impact, but if we are already investigating an area, we will consider the variables and citations used in informative CEAs when creating our own CEAs. Often, quick or back-of-the-envelope calculations fall into this category. 

Suggestive CEAs: Many CEAs are of reasonable quality but do not assess the same metrics we do, or are not built to apply to the same situation. We often see these CEAs as suggestive that a charity idea could be promising: they provide sufficient evidence that our views often update based on their results. For example, we view the DCP3 CEAs as suggestive, meaning that if an intervention looks cost-effective on their models, we think that related charities and intervention areas could be promising. We do not take the endline numbers literally or even as comparable: for example, if DCP3 says intervention A is better than B but both are cost-effective, we would do our own comparative research.

Predictive CEAs: Some CEAs are sufficiently high quality or close to the methodology and endlines we are considering that they can be taken as predictive. If a predictive CEA was done on an intervention we are considering, we would often give it considerable weight in our process and potentially use many of the same numbers and inputs when comparing. We would view CEAs like this as useful in predicting which areas are better if the same organization has completed multiple CEAs. CEAs we find predictive include those done by GiveWell.




7. DEEPER READING

EA concepts: Cost-effectiveness analysis
GiveWell: Sequence vs. Cluster (classic)
GiveWell: Our Criteria for Top Charities (their heuristics) 
GiveWell: Cost-Effectiveness Overview
GiveWell: GiveWell's Cost-Effectiveness Analyses (past examples)
ACE: How ACE uses CEA
Peter Hurford: How Do EA Orgs Account for Uncertainty in Their Analysis?
Peter Hurford: Five Ways to Handle Flow-Through Effects
GiveWell: Guide to GiveWell CEAs
Eva Vivalt: How Much Can We Generalize from Impact Evaluations?
Christian Smith: The Optimizer’s Curse & Wrong-Way Reductions
