

This is one of the documents on Charity Entrepreneurship’s 2019/2020 research process. A summary of the full process is here.



In this document, we explain why and how CE uses expert views (EpV) as part of its research process. Garnering expert views consists of speaking to experts who may have broad, domain-focused, or specific knowledge of the field. It is particularly useful because experts can often give a broad overview of a topic, allowing researchers to gain a comprehensive understanding of an idea. However, it is not the only method we rely on because human judgment often suffers from cognitive biases. 

CE uses expert views at three stages of our research. At the first stage (idea sort), each intervention is assessed for twenty minutes using all the methodologies, including expert views. At the second stage (prioritization report), two hours are spent on each intervention using this method, but only for our family planning research area. Finally, expert views are one of the four methods used for the eighty-hour assessment of each of the top interventions (intervention report). Concretely, the lead researchers use this method by holding one-hour interviews with broad experts at the initial stage, domain experts at the second stage, and a mix of experts with various levels of knowledge, including specific knowledge, at the last stage of the research.

Table of contents:

​1. Who are experts
2. Why is this a helpful methodology
3. Why they are not our only endline perspective
4. How much weight we give experts
5. Our process for speaking to experts
6. Different levels of expert depth
7. Summary
Deeper reading


Speaking to experts is a common way to gain a lot of information about a topic quickly. Experts can synthesize a large amount of knowledge into layman’s terms that are much easier to understand than a meta-analysis or other formal synthesis. When we speak of experts, we are referring to three different groups of individuals:

  1. Specialist experts

  2. Domain experts

  3. Broad experts

Specialist experts are often highly versed, but in a very specific situation or content area. For example, a fish disease specialist would fall under specialist experts. They can provide a piece of the picture but often not a broad comparison. If your goal is to start a charity that helps the most fish, they would not be able to compare disease to transportation issues, and often would not even offer a guess. However, they can provide highly specific information about disease rates in a given species and situation that you have identified as promising. 

Domain experts are experts who have a sense of a single area. They might know about many different possible factors that affect a single type of fish but would not be able to compare a fish-based intervention to a chicken-based one. Heads of nonprofits in a given area would be good examples of domain experts. 

Broad experts can provide comparisons across different domains. For example, a funder who supports half a dozen different fish organizations might have a strong sense of how disease compares to transportation, even if she does not have the specialist expert’s detailed sense of specific diseases. 

We see all of these experts as very helpful but in very different situations. 

Example conversation notes (GiveWell and Charity Science Health)

Example of synthesized expert data (Charity Entrepreneurship 2018)



Experts are in many ways the broadest source of information. They rarely give specific conclusions, but rather a broad overview of a large field, covering a lot of ground. Their views are often easy to explain in terms of conclusions but hard to explain in terms of the factors that went into forming them.

Reasons they are a helpful resource (in rough order of strength)

  • Utilize a large number, and a variety, of evidence sources

  • Apply common sense filters

  • Quickly assess weaknesses

  • Are insensitive to single-number model errors

  • Use new sources of information

  • Are a respected source of information

  • Offer multisession engagement

  • Can directly compare possible strategies

  • Provide field-level convergence

Utilize a large number, and a variety, of evidence sources: Experts have formed views using a wide range of information sources. They often form their perspectives based on a number of studies, conversations, personal experiences, and other sources. These diverse sources are combined into a single view, which has a number of advantages, including making their conclusions more robust and grounded. 

Apply common sense filters: When you are new to a field you do not have a strong common sense filter; however, experts have often seen many projects come and go and have a strong sense of things that will be more successful or impactful. Experts often have a sense in their field of what things might go wrong in a project or lead it to failure. These filters can be helpful and informative in prioritization among areas and making long-term plans. 

Quickly assess weaknesses: As a result of being able to talk directly to an expert and lay out specific situations and combinations of ideas, it is easier to identify flaws in reasoning or possible areas in which an idea could fail. This information would often be hard to find from informal research or even deeper, more systematic research. 

Are insensitive to single-number model errors: One of the biggest concerns with multiplication-heavy models such as CEAs is that a single error, such as a mistyped number, can have a large effect on the result. Experts, on the other hand, are rarely overly affected by a single model or a single number and tend to be slow to update on shocking conclusions. They more intuitively and directly apply the “extraordinary claims require extraordinary evidence” heuristic. 

Use new sources of information: Experts often have connections and knowledge about what resources are worth considering for further research or which other experts are worth talking to. This methodology, when applied, thus lends itself to finding more information and getting a clear path of who to talk to next or what resources to read. Experts sometimes have access to studies or other research that is not yet available or easily accessible in the public domain. Well-positioned experts can often have access to information a full year before it is publicly available. They also often have details on what studies are being worked on and will be completed in the near future.

Are a respected source of information: Talking to and getting viewpoints from experts is a respected and even expected tool to use when researching an area. It is also common practice in many fields including charity evaluation. 

Offer multisession engagement: Experts are one of the few sources of information that can directly engage in back-and-forth discussion, which means that after speaking to them with a more basic idea, you can converse again about the changes or advancements that have happened in your thinking.

Can directly compare possible strategies: Experts can be given highly specific plans and compare different elements of them much more quickly than a more formal model like a CEA can. For example, if you are considering three interventions in three different countries with three different partner organizations, the number of permutations quickly becomes overwhelming for a formal model. Experts, however, can compare multiple iterations and suggest which combination seems to have the highest impact or is the most promising to research further.   

Provide field-level convergence: Experts can give a sense of whether many individuals within a field have a fairly unified view on something (e.g., if all three experts you speak to agree on a topic) or if there is a variety of views on a topic (e.g., three experts give three different answers). If an area has a high level of convergence, it is good to get these conclusions, and if it does not, that leaves open more areas that should be considered or researched. ​
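To make the convergence idea concrete, a crude agreement measure could be sketched as below. This is purely illustrative; the function name and example answers are our own, not part of CE's process:

```python
from collections import Counter

# Illustrative sketch (function and data names are assumptions, not CE's):
# a crude measure of field-level convergence, i.e., how much a set of
# expert answers to the same question agree on a single view.

def convergence(answers):
    """Return the fraction of experts giving the most common answer.

    1.0 means full agreement; 1/len(answers) means every expert
    gave a different answer.
    """
    if not answers:
        return 0.0
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / len(answers)

# Three experts agree: high convergence, safe to treat as a field view.
convergence(["disease", "disease", "disease"])  # -> 1.0

# Three experts, three answers: low convergence, more research needed.
convergence(["disease", "transport", "stocking density"])  # -> 0.333...
```

A real synthesis would also weigh how informed each expert is, but even this simple fraction distinguishes the "all three experts agree" case from the "three different answers" case described above.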




Despite experts being a helpful source of information, they are not our only endline perspective. When viewing an evidence hierarchy table, there is a reason why EpV often sits near the bottom.

Experts have a number of weaknesses that have been demonstrated to negatively affect their judgment; studies in some areas, such as predicting the future, have shown that “many of the experts did worse than random chance, and all of them did worse than simple algorithms.” These concerns limit experts’ usefulness and make us confident that they should not be the only perspective used. Many of our biggest concerns with experts are cognitive biases that cross-apply to the vast majority of human judgments. Not all of the following concerns apply to every expert, but they are generalized concerns that apply to a large number of experts. 

  • Unequal application of rigor

  • Inconsistent and unclear epistemology

  • Cognitive bias

    • Anchoring 

    • Groupthink

    • Illusion of control (weaknesses with randomness) 

    • Confirmation bias

    • Many other human biases

  • Lack of transparency in argument generation 

  • Memory concerns

  • Limited specificity

  • Lack of decisiveness 

Unequal application of rigor: A major concern with experts is unequal application of rigor. Given all the information currently available, a motivated actor can find evidence supporting almost any viewpoint. Thus a fairly weak argument could hold a lot of weight in an expert’s view if he or she has not considered it skeptically or if it fits a prior worldview the expert holds. Similarly, if an expert does not like an idea, he or she finds it easy to be significantly more critical of it than would be justified relative to other ideas or viewpoints he or she holds. This rigor concern makes it highly challenging to take expert conclusions without a deeper sense of how they react to, for example, any new idea.

Inconsistent and unclear epistemology: Another factor that makes expert judgment a weaker source of evidence is the relative rarity of a formal or consistent epistemic system. Experts often have views about how to weigh different types of evidence, but few have thought about this problem explicitly, and very few have publicly laid out how they would compare and integrate different pieces of evidence into their endline viewpoints. 

Cognitive bias: There are a number of cognitive biases that affect humans. Experts are fundamentally just more informed humans and thus generally suffer from the same biases. Some evidence suggests that experts can be affected even more strongly by some biases than the general population. One mitigating factor is that if multiple experts are spoken to, their biases will not necessarily overlap, and their average quality of judgment tends to do better than that of a single expert. There are hundreds of biases that can affect judgment and decision-making, but some that seem particularly relevant to experts when considering charity ideas are: 

Anchoring: This is when an individual depends too heavily on an initial piece of information (the “anchor”) when making decisions (1). Experts can often anchor on a specific idea for a charity early in a conversation, or before the conversation has even started. Many experts will have projects they have already supported or invested time into, and any new idea will generally be compared to these existing projects with a high level of comparative skepticism toward competing or different ideas. 

Groupthink: This is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity results in an irrational or dysfunctional decision-making outcome. Group members try to minimize conflict and reach a consensus decision without critical evaluation of alternative viewpoints, by actively suppressing dissenting viewpoints and by isolating themselves from outside influences (2). In the case of charity ideas, if an idea has not been previously tested or considered by experts in the field, they will often be more inclined to dismiss it than if the same concept were presented by someone connected to their in-group. Although this is a useful heuristic for experts to use, it can make them underweight new ideas relative to more established ones, particularly if the new ideas are generated using an intelligent process (e.g., CEAs). 

Illusion of control: This is the tendency for people to overestimate their ability to control events; for example, it occurs when someone feels a sense of control over outcomes that he or she demonstrably does not influence (3). This connects closely to experts having difficulty distinguishing real effects from randomness or noise. The way this connects to charity ideas is that experts will often put more weight on personal experiences they have had; for example, if idea A has worked in the past, idea A will always work, and if idea B failed in the past, idea B is likely to fail. These assumptions are often held without careful consideration of the environmental factors that differ, or of non-results factors; for example, a higher chance of failure might be worth it if the win is several times larger.  

Lack of transparency in argument generation: Experts have formed their views using a considerable number of sources and experiences. A byproduct of this is the great difficulty of tracking down the basis for a given viewpoint. This can make it very challenging to confirm or disprove a given idea or even know how much weight it should be given. This is not the fault of the expert but is a flaw inherent in expert-based information. 

Memory concerns: Evidence has demonstrated that memory is a fallible tool, but generally when speaking to an expert there is a high level of reliance on the expert’s memory. A remembered version of a study or conversation could be significantly different than the original. It is also hard or impossible to detect these memory effects given the lack of transparency.  

Limited specificity: Many experts are unwilling to give specific estimates such as a percentage-based chance of success. Experts are often unwilling to make claims that could be used in other methodologies such as CEAs, particularly if those claims cannot be anonymized. 

Lack of decisiveness: Similar to the specificity concern, experts are often unwilling to make decisive claims even when taking a neutral or unsure stance has its own ramifications. This is often a taught practice in academia, and it can be a good habit when it comes to truth-seeking, although it impairs comparison among different options.  




Experts are quite an important source of information, and a lot of fundamental information comes from EpV as interpreted by others. However, experts suffer from many human biases that more algorithmic systems are less affected by. Expert views are one of our five perspectives, the others being the weighted factor model (WFM), team intuition, CEA, and our prior view. We expect expert views to generally carry more than one-fifth of the weight, but likely not more than one-third of the weight of the model. Given the considerable variation depending on the specific charity idea and cause area, we expect experts to be stronger in areas where it is harder to get solid numbers and commonsense intuition can serve as an effective guide.
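As a rough illustration of how expert views could carry between one-fifth and one-third of a five-perspective blend, a minimal sketch follows. The function name, the example scores, and the even split of the remaining weight across the other four perspectives are our own assumptions, not CE's actual model:

```python
# Illustrative sketch only; CE's real weighting model is not specified here.
# Perspective names come from the text; scores and the even split of the
# remaining weight across the other four perspectives are assumptions.

def combine_perspectives(scores, expert_weight=0.25):
    """Weighted average of the five perspective scores (each 0-100).

    `expert_weight` is kept in the range the text describes: more than
    one-fifth but not more than one-third of the model's total weight.
    """
    assert 0.2 < expert_weight <= 1 / 3, "expert weight outside stated range"
    others = ["wfm", "team_intuition", "cea", "prior"]
    other_weight = (1 - expert_weight) / len(others)  # even split (assumption)
    total = scores["expert_views"] * expert_weight
    total += sum(scores[name] * other_weight for name in others)
    return total

scores = {"expert_views": 80, "wfm": 60, "team_intuition": 70, "cea": 50, "prior": 65}
combine_perspectives(scores)  # blended score for one intervention
```

In practice the per-perspective weights would themselves shift by cause area, as the paragraph above notes, rather than staying fixed.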






Who do we count as an expert, and how do we speak to experts? We have two main processes for generating expert names. Our list of experts is generally created opportunistically rather than systematically.


  1. Finding via research. As we are conducting directed or undirected research, the names of key people in the area often come up. These names are noted down, and the people are later contacted as experts. 

  2. By recommendation. Often when we speak to experts we ask for others who would be helpful to talk to. Thus many of our experts come from peer recommendations.  

The types of experts we end up speaking to in practice fall into the following categories. 

  1. Broad experts: Generally, broad experts are the smallest group we speak to. These are people involved in comparing a range of interventions within a given area. Often their knowledge will be more generalized compared to that of domain or specialist experts. An example of a broad expert might be a researcher at an evaluation organization such as GiveWell or Copenhagen Consensus, or an author of the Disease Control Priorities reports. However, they could also be a large funder or cross-area researcher who has a good sense of the space overall. These experts would be considered “broad experts” in a cause area such as global poverty.  

  2. Domain experts: We generally speak to many domain experts, and they often constitute the largest group of experts we speak to. Such an individual could be the author of multiple studies in a given area, someone who worked on a synthesis of the intervention or domain being considered, or someone who conducted an evaluation or meta-analysis of the area or an adjacent area. These experts would be considered “domain experts” within an intervention area such as vaccination reminders.

  3. Specialist experts: We often speak to a small number of specialist experts. These are individuals who are highly informed but in a small niche within a given charity idea or intervention we are considering. This might be a biology specialist who has deep information about food fortification uptake rates. It could be a person experienced in an element of running an intervention on the ground, such as country-level experience, or an individual who has written a study related to the execution of a given intervention. We consider these experts specialists who could have insight into an element of, but not the whole, intervention. ​




When contacting an expert for the first time, we generally use an email similar to the following. 

Dear _____

_____ person recommended that I speak to you because of your background in _____ OR I am researching _____ and I read your paper on _____, which was very _____ (compliment, like “interesting” or “well done”). Based on this, I thought you might know the answers to a few questions about the topic. 

I am a research associate at an organization that researches and funds new nonprofits which put that research into action (our website here). Previously we worked mainly in global poverty and animal welfare, but we have expanded to mental health so we can make this issue a higher priority in the global sphere. We are based out of London and so far have work underway in Oxford, London, the US, India, and Asia. 

Some questions I have are: ____ and ____. Would you happen to have the time to jot down some quick answers to the above, or might it be easier to discuss via Skype? We’d really love to have your input and research inform our funding decisions and which charities to work on in this sphere. 

Best regards,




Experts are ultimately just people like anyone else, so most standard conversational rules apply to them. A few elements to highlight are:

  • Be humble – When you are talking to experts, you are new to the field, so aim to come across as surprisingly informed for a nonexpert. Ask for only a little of their time, such as a single Skype call rather than a longer commitment. Offer to keep any comment anonymous, whether they flag it when reviewing the conversation notes or during the conversation itself. Try to take a broad interest in the topic as a whole, even if it is not directly tied to the question you are asking.  

  • Be prepared – Being thoughtful with an expert’s time is important. If they have written a whole book on a given topic, for example, you should at least review a summary before talking to them about it. The same goes for website content they have created. As well as reading content beforehand, think about the most important questions and which ones could be cut if you run out of time. Have a backup way of contacting them in case the first one fails (e.g., Google phone credit you can use to call someone if Skype is not working). 

  • Frame opposing views using a citation – If you want to push on a point or perspective that an expert has claimed, do not describe it as “you are skeptical of point A”; instead tie it directly to your research (“A different expert I spoke to was skeptical of point A”). 

  • Go deeper – Try to cover the key questions on your agenda, but if something comes up that seems important you can ask more questions relating to that area. Ask follow-up questions, such as “You said that . . . Why do you think that is?” 

  • Ask comparative questions – Few experts will have a great sense of what the percentage chance of something happening is or a clear expected value for a given intervention, but they often give excellent answers to more comparative questions. “Does X seem like it would cost more per person than Y?” is easier than giving an exact number for either. 

  • Ensure that they have answered your question – If you ask questions such as “What are the main strengths and weaknesses of x intervention?” it is quite easy for them to forget the initial question once they have been talking about the strengths of the intervention for a few minutes. Follow up on this with something like “Thanks for outlining the strengths of x intervention; what do you think are the main weaknesses?”

  • Give them space to think – Don’t move on to the next question immediately after they stop answering the previous one. Leave a small pause so that they can add something else if they think of it.




At the start of the interview, we ask the expert if they are comfortable having it recorded. This means we don’t have to take notes during the interview and can focus entirely on the questions and their answers, and it also makes summarizing the interview easier because we can, for example, go back and listen to their answer to a given question again.

To be safe, we usually record in two different ways.

  1. Recording the interview through Skype. More information on this can be found here.

    1. When recording through Skype, remember to click “Stop Recording” at the top of the screen before ending the call, otherwise there is a risk that the recording will not be saved.

    2. Skype only keeps the recording for thirty days, so you should save the recording to your computer immediately. We also store these recordings in our Research Agenda -> <cause area> -> Expert Interviews folder. 

  2. Recording the audio of the interview using an external program.

    1. We recommend MP3 Skype Recorder for Windows users and Ecamm Call Recorder for Mac users. Both can record automatically.



It is critical to use the research team’s extensive interactions with researchers to identify additional mentors. At the same time, the additional effort for a CE researcher should be limited. Hence, the review of researchers as potential mentors should be quick and fully integrated into the existing process for conversations with researchers. Otherwise, the likelihood of poor retention or poor data quality is high.



  1. Brief survey: Each research conversation is reviewed with a very brief survey/review. This survey is fully integrated into the normal conversation-tracking process. 

  2. One-time assessment: In March/April 2020, or whenever suitable, the CE research and curriculum teams will reevaluate the initial reviews. Researchers with potential will be contacted appropriately and eventually sent the Mentor Application Form.





CE Expert Assessment

The survey is kept intentionally short to ensure consistent application. It will be automatically graded and return a suitability score.

Questions to ask 

  • 35 percent general, cause-neutral questions

  • 45 percent customized to the cause (e.g., mental health)

  • 20 percent customized to the person (e.g., expert X)
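The document says the survey is automatically graded into a suitability score but does not give the rubric, so the following is a purely hypothetical sketch of such a grading step. The 1-5 rating scale, the 0-100 score, and the follow-up threshold are all assumptions:

```python
# Hypothetical grading sketch; none of these values come from CE's
# actual survey. Assumes each survey question is rated on a 1-5 scale.

def suitability_score(ratings):
    """Map a list of 1-5 question ratings onto a 0-100 suitability score."""
    if not ratings:
        return 0.0
    average = sum(ratings) / len(ratings)
    return (average - 1) / 4 * 100  # rescale the 1-5 range onto 0-100

def worth_follow_up(ratings, threshold=60):
    """Flag researchers whose score clears an (assumed) follow-up threshold."""
    return suitability_score(ratings) >= threshold

suitability_score([5, 4, 4, 3])  # ratings from one conversation review -> 75.0
```

Keeping the grading this mechanical matches the goal stated above: the survey stays short and consistently applied, and the one-time assessment can simply rank the stored scores.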




A little bit about the project: I am surveying <cause area> experts to get a sense of what would be the best areas to research and launch charity startups in. 

  1. Could I record our interview for the purpose of making more accurate conversation notes later?

  2. Give an outline of the interview (types of questions, length, etc.) 

    1. Do you have any questions about the interview?

  3. What got you interested in <cause area>?

  4. How long have you been working in <cause area>?

  5. All things considered (cost effectiveness, execution difficulty, what existing organizations are already doing, etc.) what specific organizations would you like to see founded in the next five years?

  6. All things considered, what intervention or organization do most people think is effective but in your opinion is not? Why?

  7. Are there any areas you think are neglected by current actors in the field? Why do you think these areas are neglected?

  8. Are there any areas that seem unusually cost effective and evidence based relative to others in the sphere? 

At the end:

  1. What are good resources to read (blogs, books, podcasts)?

  2. Do you know anyone who would be interested in talking to, mentoring, or supporting a new charity founded in this space?

  3. Who else in the movement do you think would give valuable information about these sorts of questions?

  4. Would it be possible for you to introduce me to them?

  5. Do you have any questions for me?

  6. This conversation was really helpful; would it be possible for me to write up a summary of some of the points we talked about and send you a copy for review? Some experts are happy for us to put up a published set of conversation notes on the topic. But we can also anonymize the conversation or combine it with points from other experts into a more overall view (e.g., five out of twelve experts think that this is the most promising intervention), depending on what you are most comfortable with.




Customized cause area questions:
There are some broad areas of mental health we are considering. For each area, we consider two questions.
1) How effective the area seems generally (for starting new charities) (below average - average - above average - the best intervention) 
2) What might be the most promising specific things to do within an area. For example, one of the broad areas is therapy, and one specific area that some people think is promising is online apps for lower-income countries. 

  • Areas

    • Task Shifting 

    • Peer Support

    • App-Based Therapy

    • Skype-Based Therapy

    • Direct Therapy

    • Therapy

    • Social Change 

    • Lifestyle Change 

    • Screening 

    • Research

    • Medication

      • Diet Supplementation

    • Medical Procedures 

    • Government Lobbying

      • Do you know of any existing organizations that are lobbying for subjective well-being/mental health?

    • Corporate Campaigns

    • Lifestyle Improvements (e.g. weighted blankets)

  • Do you think there are important broad areas that are not covered under one of these headings?

  • Do you know of any broad resources that compare different global mental health interventions?

  • How do you think different metrics of subjective well-being compare to DALYs? Which do you think is the strongest measure in the mental health space? 

Customized person questions:
These will be customized questions to ask a specific person. 
For example, a question to ask someone who mainly works in global health:

  • What is your perception of the funding landscape for mental health compared to global poverty? Is a new organization likely to be constrained by funding, or would possible sources of funding have good counterfactuals? 

Or asking someone with more knowledge about specific types of mental health interventions:

  • How do you think task shifting compares to computer-based therapies in terms of cost effectiveness? ​




Customized cause area questions:

There are some broad areas of animal advocacy we are considering. For each area we have two questions we are considering:
1) How effective does an area seem (for starting new charities)?
2) What might be the most promising interventions within an area? For example, within the area of food technology, one intervention that may be promising is lobbying governments to ensure fair labeling of plant- and cell-based products.

  • What is the effectiveness of starting a new charity in the following areas, and what are the best and worst interventions?

    • Corporate outreach    

      • Meat reduction campaigns (e.g., meatless Mondays in universities)

      • Chronic welfare improvement campaigns (e.g., environmental conditions)

      • Acute welfare improvements (e.g., slaughter)

    • Governmental outreach

      • Welfare improvement lobbying (e.g., environmental conditions); follow-up questions for welfare improvements:

        • Acute (e.g., slaughter, handling, transport) vs. chronic (e.g., environmental conditions)

        • Increasing follow-through rates of existing campaigns vs. new campaigns

        • Positive welfare interventions (e.g., environmental enrichment) vs. negative welfare interventions (e.g., environmental conditions)

      • Meat reduction lobbying (banning the advertising of meat, a meat tax, banning imports from lower-welfare countries, etc.)

    • Wild animal suffering (WAS)

      • Research 

      • Targeted interventions

    • Veg outreach

    • Neglected areas, e.g., crustacean interventions, farmed rodents, bugs for human use (entomophagy, snails, silkworms, insects used in research, etc.)

    • Research

      • Building the evidence base 

      • Targeted research (e.g., institutional ask)

    • Food technology

  • Do you think there are important broad areas that are not covered under one of these headings?

  • Do you know of any broad resources that compare different animal interventions?

Customized person questions:

These will be customized questions to ask a specific person. 
For example, a question to ask someone who works with funders/works for a fund:

  • Is there an intervention that the funders you work with are particularly excited about funding?

    • Is there some metric that these funders are always looking for in the interventions they fund (e.g., high cost effectiveness vs. neglected areas)?

Or asking someone with knowledge of our priority countries:

  • What are the top considerations and challenges we should keep in mind when working in <country>?




Broad questions

  1. How long have you been working in/researching family planning and what got you interested in it?

  2. All things considered (cost-effectiveness, execution difficulty, what existing organizations are already doing, etc.), if a nonprofit was starting tomorrow, what specific program would you like to see implemented?

  3. All things considered, what intervention or organization do most people think is effective but in your opinion is not? Why?

Comparative questions

  1. There are some broad areas of family planning we are considering. For each area, we would like to get a sense of

    1. How effective the area seems generally (for starting new nonprofits) (below average - average - above average - the best) 

    2. What might be the most and least promising specific things to do within an area? For example, you might think contraceptive supply and distribution is average compared to other broad areas, and within that, community health workers are most promising and social franchising is least promising.

    3. Contraceptives provision/distribution/supply 

    4. Incentives 

    5. Information/education

    6. Policy/advocacy 

    7. Research 

    8. Service delivery/quality 

    9. Social and behavioral change

    10. Women’s and girls’ empowerment

  2. Which contraceptive is the most/least important to promote among, say, IUDs, pills, emergency contraception, condoms, implants, injectables, sterilization, and natural/traditional methods?

  3. Do you know of any broad resources that compare different family planning interventions?


Funding questions

  1. What kind of programs do you think are particularly easy to get funded, and which are not?

  2. Do you think there is funding available for new organizations to scale evidence-based programs?



For every conversation, we ask the expert if we can take notes and share them, either named or anonymized as input from one of several experts we have interviewed on the topic. We offer to send them a copy of any notes we take so they can comment if they feel we misunderstood anything. We also offer to send them a copy of our full report if they are interested in seeing other experts’ views or other synthesized research we conduct on this topic. If they are interested, we would love their feedback on the full report. 

Conversation notes are summarized into an easily readable document and then sent back to the expert for confirmation that we did not misunderstand or misrepresent anything, similar to GiveWell’s conversation notes.
In an experiment comparing Otter (an automatic transcription service) with manual conversation notes (written while listening to the audio), we found that editing the automatic transcription into a readable form takes slightly longer than writing up conversation notes manually.




Although the bulk of the expert report will be the conversation notes, the project lead will also synthesize these thoughts into a one-page, easy-to-read summary. This can include both a narrative explanation of concepts that came up repeatedly across conversations and table-based data with rough quantification of what experts thought. An example of this can be seen here. The section above the “additional details” reflects the level of detail that is helpful for a specific-idea expert synthesis.



Our final use of experts comes when our full report is nearing completion. We send the full report to any expert who indicated an interest in seeing the endline results and ask for any feedback they have.






Speaking to an expert for five minutes is not possible; even finding and contacting them would take close to that amount of time. However, one broad expert conversation can cover a lot of ground. If there are 300 interventions in a given cause area and each is allotted five minutes, this adds up to roughly twenty-five hours of total time. In practice, the budget breaks down as seven hours to find and contact experts, two hours to prepare the most important questions that would give helpful information across an intervention area, ten hours to interview the top five who respond, and eight hours to synthesize the notes and score interventions based on the responses. This works out to around five minutes per intervention in a given area, or about five experts for the area.
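As a quick sanity check, the time budget above can be sketched in a few lines. All figures come from the text; note that the individual tasks sum to slightly more than the twenty-five-hour headline figure:

```python
# Quick sanity check of the five-minute-level time budget described above.
# All figures come from the text; nothing here is new data.

interventions = 300
minutes_each = 5

# Headline figure: 300 interventions x 5 minutes = 25 hours.
headline_hours = interventions * minutes_each / 60

# Task breakdown from the text (in hours).
tasks = {
    "find and contact experts": 7,
    "prepare key questions": 2,
    "interview the top five respondents": 10,
    "synthesize notes and score interventions": 8,
}
total_hours = sum(tasks.values())  # sums to 27, slightly over the headline

# Effective time per intervention given the actual breakdown.
minutes_per_intervention = total_hours * 60 / interventions

print(headline_hours, total_hours, round(minutes_per_intervention, 1))
# -> 25.0 27 5.4
```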

Interviews at this level of depth will be used in our research agenda phase. We will contact five broad experts to help us narrow down our long list of ~300 <cause area> intervention ideas. For this level of depth, we will use an automatic transcription service (Otter seems best). We will send this transcription to the experts for review following our Skype call, explaining that this is an automatic transcription that has not been edited and will not be published. 

Expected outcome

  • Have spoken to several broad experts to get their sense of the area

  • Summarized conversation and how it affects the intervention (one paragraph)

  • Synthesized information from the interviews that will enable rating of each idea 


  • Contact at least three times the number of experts you want to talk to. 

  • Reach out to them before you start any of the other methodologies. Even if EpV turns out to be the third or last methodology you use, experts will take time to respond, so you can pursue the other methods in the meantime.

  • Send your questions to them before your Skype call so they are aware of what kind of questions you will be asking and can prepare if they feel they need to.

Lessons learned
The categorization of interventions into broad categories (which you will ask experts to rate) is a highly important part of this process, and one we should have put more thought into, for three main reasons:

  1. It was hard to come up with broad categories that every intervention could fall under. Categorization was nevertheless necessary: most experts would not have enough time to rate each individual intervention idea, so we had to ask them to rate broad categories instead.

  2. Because not all interventions fit neatly into the broad category areas, we sometimes had to infer how an expert would score an intervention from specific comments made during the interview.

  3. In some cases, it was difficult to score individual interventions because they could fall under multiple broad category areas. For example, in animals, we had many wild animal suffering interventions, so we created the broad category “Wild Animal Suffering” (WAS) and had experts rate it. We also had many interventions to prevent acute suffering, so we created the broad category “Acute Suffering Interventions” and had experts rate that as well. It was then difficult to tell, for example, which category the intervention “using snap traps over glue traps for wild rodents” would best fall under: wild animal suffering, or acute suffering? A few instances like this made some interventions difficult to score.

Potential solutions

  1. Send experts the whole list of interventions and get them to rate each one individually, but this would be very time consuming for them.

  2. Add subcategories (e.g., from the example above “Acute WAS Interventions” and “Chronic WAS Interventions”).

  3. Send them a Google form survey that better illustrates the type of interventions that would fall under each category and get them to input their ratings there.

  4. Ask experts about one representative intervention and infer from this the score for the rest of the interventions in that category/subcategory.

  5. If we were to publish the results from the five-minute expert surveys, experts might be more interested in rating all of the individual interventions.

  6. Remove the expert survey from the five-minute process as a standalone factor, but integrate it with informed considerations.

What we think would be best to do in the future: incorporate both a Google form survey and the Skype interview into the EpV process. For example, Skype with experts and ask open-ended questions to get a sense of their values, then send them a Google form survey in which they rank subcategories of interventions, with explanations of what sort of intervention falls into each category. 




Given the generalized information gained from the five-minute process, the most helpful expert to speak to next is a single domain expert. This interview would take one hour for prep, contacting the expert, and summarizing the notes afterward, plus one hour speaking directly to the expert about the key questions that would be hard for a broader expert to answer. If an expert can cover more than one cause area, more time can be spent preparing for their interview. Over a given cause area this would result in one expert per domain, or five to thirty experts in total. This stage will only happen for family planning this year; in other cause areas, we will use a different two-hour methodology. 

Interviews at this level of depth will be used to help us understand a more specific charity idea such as what a promising country or approach might look like. These interviews, which will often be with domain experts, will help us determine, for example, what country would be most promising to run a conditional cash transfer (CCT) program for intrauterine devices (IUDs) in. For this level of depth, we will manually write up conversation notes (because these notes will be published, they need to be higher quality than for the other levels of depth). We will send this conversation summary to the experts for review before publishing. We will also invite these specialist experts to review the report as a whole. 

Expected outcome

  • Have spoken to at least one domain expert to get their sense of the area

  • Summarized conversation and how it affects the intervention (one paragraph)

  • Synthesized information from the interviews that will enable rating of each idea 


  • At this stage, you could offer the following options to the experts you are interviewing:

    • You can send your questions in a Google document and they can write their answers there, with a quick follow-up Skype call to ask any further questions if necessary

    • One-hour Skype interview 

      • If they choose this, you should send your questions to them before your call so they are aware of what kind of questions you will be asking and can prepare if they feel they need to.




At the twenty-hour level, roughly the same process would be used as at the five-minute level. Of this time, two hours would be needed to find and contact experts, two hours to prepare the most important questions that would give helpful information across an intervention area, six hours to interview the top five who respond, four hours to synthesize the notes and score interventions based on the responses, and six hours to send the results to the experts and gather their feedback on both their personal notes and the report as a whole. This would result in five experts spoken to about a single intervention. 

These expert interviews will often be with specialist experts: for example, an animal advocate in Taiwan, or a fish disease specialist who can help us determine whether paying farmers to use vaccines to treat diseased tilapia in Taiwan is a promising intervention for farmed fish. One of the experts contacted will be a person leading an organization implementing similar interventions in the same region as the recommended charity. The goal of this interview is to find out whether they could be influenced to change their program to a more cost-effective one (recommended by CE). Such a change would alter where we plan to allocate resources. For example, influencing a charity that fortifies flour with iron to also add folic acid would alter the score of a folic acid fortification intervention, and could lead us to start a tobacco taxation charity instead.

For this level of depth, we will manually write up conversation notes (because these notes will be published, they need to be higher quality than for the other levels of depth). We will send this conversation summary to the experts for review before publishing. We will also invite these specialist experts to review the report as a whole. 

Timeline relative to other methods (eighty-hour report) 

  • Ten hours – Broad undirected reading and crucial considerations, CC (IC)

  • Sixteen hours – Directed research (WFM)

  • Ten hours – Finding and talking to experts (EpV)

  • Twenty hours – CEA creation (CEA) 

  • Four hours – Directed research (WFM) 

  • Ten hours – Summary writing and internal contemplation (IC) 

  • Ten hours – Showing endline report to experts (EpV)
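The timeline above can be tallied to confirm it fills the eighty-hour budget and to see what share each method receives (a small sketch; the stage labels are taken from the list above):

```python
# Tally the eighty-hour report timeline listed above.
stages = [
    ("broad undirected reading and crucial considerations (IC)", 10),
    ("directed research (WFM)", 16),
    ("finding and talking to experts (EpV)", 10),
    ("CEA creation (CEA)", 20),
    ("directed research, second pass (WFM)", 4),
    ("summary writing and internal contemplation (IC)", 10),
    ("showing endline report to experts (EpV)", 10),
]

total = sum(hours for _, hours in stages)                  # 80 hours in total
epv_hours = sum(h for name, h in stages if "EpV" in name)  # 10 + 10 = 20

print(total, epv_hours, epv_hours / total)
# -> 80 20 0.25
```

Expert views thus take a quarter of the overall eighty-hour budget, split evenly between interviewing and endline review.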

Expected outcome

  • One-page summary of the synthesized expert conversations 

  • Conversation notes for all experts interviewed

  • Expert feedback given on the final report

  • An expert or two highlighted who would be willing to mentor or speak to a new charity


  • At this stage, you could offer the following options to the experts you are interviewing:

    • You can send your questions in a Google document and they can write their answers there, with a quick follow-up Skype call to ask any further questions if necessary.

    • One-hour Skype interview 

      • If they choose this, you should send your questions to them before your Skype so they are aware of what kind of questions you will be asking and can prepare if they feel they need to.



Experts we interview are not the only source of expert data we use. If there are previously written interviews, conversation notes, or other direct sources of expert data, we also include these in our expert report. Such data would be searched for during the directed research phase of the project but would be included in the expert report’s evidence section.



By the time a charity is recommended in an area, we will have spoken to five broad experts in the cause area, as well as six domain or specialist experts. Some of these experts will also have reviewed the overall report and given comments or suggested improvements. We will also have taken into account any publicly available expert surveys or summaries of other related conversation notes. 

These experts are spoken to using a consistent methodology. Our conversation notes, as well as our summarized interpretation of the conversations, are published in a single section but clearly differentiated from each other. 



1) External resources on how to be a good interviewer
2) Evidence on good forecasting practices from the Good Judgment Project: An accompanying blog post
3) The Black Swan: The Impact of the Highly Improbable
4) Expert Political Judgment: How Good Is It? How Can We Know?
5) Future Babble: Why Expert Predictions Fail and Why We Believe Them Anyway
6) Online Bettors Can Sniff out Weak Psychology Studies.
