
IDEA SORT REPORT

This is one of the documents from Charity Entrepreneurship’s 2019/2020 research process. A summary of the full process is here.

Table of contents:

1. Goal
2. Explanation
3. Idea generation
4. Method

 

1. GOAL


Sorting hundreds of ideas down to around thirty for further research.

2. EXPLANATION

There are millions of possible charity ideas, and thousands within our top cause areas. The first step of our research process is narrowing this list down to a far more manageable number: even putting a single hour into thousands of ideas would take years. Many different heuristics and processes can be used to narrow down ideas. Ideally, our sorting method would:

  1. Accurately sort to the top the ideas most likely to become recommended charities.

  2. Take up minimal time that could be used on deeper research reports. 

  3. Give us information on how to perform future sorting more effectively.

  4. Be cross-applicable to all causes.

 

3. IDEA GENERATION

The first stage in the process is idea generation, which we split by cause area so that the main researcher in each area generates its ideas. This allows each researcher to gather relevant knowledge for future reports and capitalizes on their individual specialisms. Idea generation occurs in four main ways:
 

  1. Borrowing ideas from other organizations 

  2. Listing problems and solutions

  3. Expert review

  4. Spontaneous generation


Borrowing ideas from other organizations 
The initial stage of idea generation is locating relevant sources. Sources in global poverty, for example, include GiveWell, The Life You Can Save, our previous research, and Disease Control Priorities. These sources may also lead us to other intervention ideas from different organizations or help us find experts for interviews. 

Listing problems and solutions
The next technique we use is listing conditions or problems and brainstorming or searching for a way to solve them. This includes treatments for medical conditions or ways to relieve some of the suffering felt by farmed animals.

Expert review
This occurs during our expert interviews, in which we ask experts to review our ideas list and add any ideas they think are missing. We also ask whether they know of any sources from which we could draw or compare new ideas.

Spontaneous generation
This covers any ideas that occur spontaneously to one of our researchers. These come from sporadic sources and can be added at any point in our process. If one occurs after the initial sort has been completed, we quickly apply the methods described below to see whether it is worth researching. Ideas generated at a later point will likely have better estimates of their value and thus appear worse, likely because noisy early estimates tend to skew optimistic. However, it is better to include them, even with an unequal application of rigor, than to ignore potentially valuable ideas. Techniques a researcher can use to brainstorm more ideas, for example in animal causes, include:

  • Reevaluating the expert surveys we conducted in the past to see if they suggest anything or trigger ideas,

  • Thinking about interventions applied to humans in terms of possibly being applicable to animals,

  • Looking at the welfare point system and considering each column to see if there is a way to target that specific weakness,

  • Thinking about whether there are any generalized interventions that were mostly in a non-animal space but could affect animals dramatically,

  • Scrolling through our blog posts and the results that come up on the Effective Altruism (EA) Forum when searching “Animals,” etc.,

  • Looking up a list of all animal groups to see if anything is there that we do not have on the list,

  • Asking a member of Rethink Priorities if they have a list that we could pull ideas from,

  • Asking other members of staff to come up with off-the-wall ideas,  

  • Looking at a farm animal health website and searching for interventions recommended to reduce, for example, foot pad burn or feather pecking in poultry.

 

4. METHOD

 

In 2020, we started with over 1,000 ideas across four cause areas. We wanted to sort these down to a more reasonable number of around one hundred, knowing we would only be able to conduct deep reports on a few dozen. To do this, we used four different methods. These methods are used throughout our research process at different depths and stages, so that we go deeper and deeper into fewer and fewer ideas. Each method is applied within a cause area but not between cause areas.

Each of the four methods is described in detail in the following documents:

  • Cost-effectiveness analysis (CEA)

  • Expert view (EpV)

  • Weighted factor model (WFM)

  • Informed consideration (IC)


The order of the methods is somewhat different for each cause area. This lets us test which ordering of methodologies works best (a small sketch of these orderings follows the list).

  • Animals (WFM) → (CEA) → (EpV) → (IC)

  • Mental health (CEA) → (WFM) → (IC) → (EpV)

  • Health policy (EpV) → (IC) → (WFM) → (CEA)

  • Family planning (IC) → (EpV) → (CEA) → (WFM)
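
For illustration only, the orderings above can be written as a simple mapping. This is a hypothetical Python sketch; the names are illustrative and not part of our actual tooling.

    # Hypothetical representation of the per-cause-area method orderings
    # listed above (illustrative only).
    METHOD_ORDER = {
        "animals":         ["WFM", "CEA", "EpV", "IC"],
        "mental_health":   ["CEA", "WFM", "IC", "EpV"],
        "health_policy":   ["EpV", "IC", "WFM", "CEA"],
        "family_planning": ["IC", "EpV", "CEA", "WFM"],
    }

    for cause, order in METHOD_ORDER.items():
        print(f"{cause}: {' -> '.join(order)}")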


To sort the large set of ideas, each methodology is applied in a five-minute variant, so that a total of twenty minutes is put into each idea. Five minutes is of course not enough to speak to an expert, so for the expert view, cause area experts are asked about several ideas at once rather than about each specific idea. For IC, the bulk of the minutes likewise goes to cause-level rather than idea-specific research. CEA and WFM are conducted on an idea-by-idea basis.

The scores from each section are then combined by averaging the intervention's percentile score on each factor. The overall score indicates how an idea performs across our metrics compared to the average intervention; ideas that are consistently in the top fifth percentile (the top 5%) will be rated highly.
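
To make the combination step concrete, here is a minimal Python sketch of rank-based percentile averaging. It assumes ties are simply broken by position and uses toy scores; it illustrates the approach described above rather than reproducing our actual spreadsheet.

    import numpy as np

    def percentile_scores(values):
        # Rank-based percentile of each value within one method (0 to 1).
        # Ties are broken by position; a fuller version might average ranks.
        values = np.asarray(values, dtype=float)
        ranks = values.argsort().argsort()
        return (ranks + 0.5) / len(values)

    # Toy scores for five ideas under each five-minute method (illustrative).
    cea = [0.3, 125_000, 723_000_000, 40, 2.5]  # unbounded
    wfm = [22, 31, 18, 36, 25]                  # bounded between 4 and 40
    epv = [3, 8, 5, 9, 4]                       # e.g. a 1-10 expert rating
    ic = [6, 7, 4, 9, 5]                        # 1-10

    # Combine by averaging each idea's percentile across the four methods.
    combined = np.mean(
        [percentile_scores(m) for m in (cea, wfm, epv, ic)], axis=0
    )
    print(combined.round(2))  # ideas consistently near the top score highest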

Using percentiles means that if the variance in scores in one section is higher, high- or low-scoring ideas are neither penalized nor overly benefited. For example, the WFM varies between four and forty and the IC between one and ten, so simple addition would overweight the WFM by a factor of four. We considered Z-scores to achieve the same effect but found that they fail to capture the variance in some variables, such as the CEA, whose values may vary between 0.01 and 5,000. Because the range of the CEA is unbounded, outliers can be many orders of magnitude greater than the mean. This causes the CEA Z-scores of well-performing but not extraordinary ideas to cluster around 0. For example, we found that two ideas with CEAs of about 125,000 and 0.3 had a 0.0004 difference in Z-scores, both around 0.035, even though the actual difference was five orders of magnitude. Another idea scored as highly as 723,000,000, three orders of magnitude greater than the first one mentioned above, resulting in a Z-score of 2.35. If we suspected these CEAs to be highly predictable, this sensitivity would be beneficial; however, at this level of depth, a CEA could easily be off by several orders of magnitude. Using percentiles better “sandboxes” these methodologies by bounding their scores between 0 and 1. It reduces the effect of errors in particularly large estimates, since an order-of-magnitude change may shift a percentile score by only 0.05, and it captures more of the variance among moderately well-performing ideas, as shown in the example above. When percentiles were used, the three ideas above scored in the ninety-ninth, ninety-fifth, and sixteenth percentiles, which much better captures the variance among them.
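
The effect described above can be reproduced on illustrative numbers. In this sketch, only the three cited CEA figures (0.3; 125,000; 723,000,000) come from the text; the rest are made up to fill out a batch:

    import numpy as np

    # Illustrative CEA values: moderate ideas plus the three cited figures.
    cea = np.array([0.01, 0.3, 2, 15, 80, 400, 5_000, 125_000, 723_000_000])

    z = (cea - cea.mean()) / cea.std()                # Z-scores
    pct = (cea.argsort().argsort() + 0.5) / len(cea)  # percentile scores

    for value, z_score, p in zip(cea, z, pct):
        print(f"{value:>15,.2f}  z = {z_score:+.3f}  percentile = {p:.2f}")
    # The outlier dominates the mean and standard deviation, so 0.3 and
    # 125,000 receive nearly identical Z-scores, while their percentile
    # scores separate them cleanly.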

It may seem harsh to eliminate an idea after just twenty minutes; however, in practice, every research team does this informally each time it prioritizes one area over another. Although a near-limitless amount of research could be conducted, given the limited resources going toward these issues, we need to prioritize the ideas that look most likely to eventually lead to a charity that changes the world. 

Accuracy: We have some data on quicker and slower prioritization methods from our past research; our research team can often tell very quickly that an idea is promising. In about half of the cases, top charities came from ideas that looked very strong even during their first hour of research. The other half of the ideas, however, were somewhat surprising, performing better or worse than we initially expected.

Time: Twenty minutes is less time than we put into our first-round prioritization in earlier years, but it is spent in a much more structured way that we believe will strengthen the process per minute spent. We are also considering a larger number of ideas at the start of this year, due partly to a larger initial brainstorming phase and partly to working on more than a single cause area.

Learning: One of the biggest changes we have made to our process this year is taking a more epistemically humble and long-term approach to building a research process. Given that there are so many possible heuristics one could use, and so many different lengths of time a researcher could put into each one, we plan to run extensive analyses of how well each methodology's scores correlate with the eventual charity idea recommendations. For example, a five-minute CEA might correlate far more strongly with the ideas that are eventually recommended than the five-minute expert view does, or vice versa. We expect this learning to make our process even more efficient and empirically tested in future years.
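
As a hypothetical illustration of this planned analysis, a method's quick scores could be correlated with eventual recommendations using a point-biserial correlation. The data below are invented for the sketch; our actual analysis may look different.

    import numpy as np
    from scipy.stats import pointbiserialr

    # Toy data: five-minute CEA percentile scores for eight ideas, and
    # whether each idea was eventually recommended (hypothetical values).
    cea_percentile = np.array([0.90, 0.10, 0.70, 0.50, 0.30, 0.95, 0.20, 0.60])
    recommended = np.array([1, 0, 1, 0, 0, 1, 0, 0])

    r, p_value = pointbiserialr(recommended, cea_percentile)
    print(f"point-biserial r = {r:.2f} (p = {p_value:.2f})")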

Publishing plan
At this stage in the process, we will publish four idea sort reports, each with a complete list of all ideas considered and the top ideas selected for further research. A template for the report can be found here.
