Deference Culture in EA
As other writers have noted, the EA (Effective Altruism) movement has a fairly strong deference culture. In many ways this makes sense: many EAs come from a background of reading about and being compelled by organizations like GiveWell, which are built on the premise that charity evaluation is hard and benefits from a full-time team doing it, and that a typical donor has the most impact by deferring to this research. This culture of deference has become quite strong in EA; I frequently have conversations with highly involved EAs who are still deferring on major topics (e.g. cause area, career choice) without investigating them personally.
On the one hand, it is impossible to be an expert in everything, and making hard decisions about doing good is no different. On the other hand, a culture that defers too heavily, or defers using the wrong metrics, can become homogenized and miss great opportunities. Pieces written about deferral often end up falling on either the “you should defer more” side or the “you should defer less” side.
I think the right amount of deferral depends on background expertise and available time. (I don’t think any previous writers would disagree with this, but their posts are typically not interpreted that way.) If you are a tinkerer who has put thousands of hours into learning about cars, you are in a very different position with respect to deferring to a mechanic than someone who drives but has never looked under the hood. The same piece of advice, “defer less” or “defer more,” would not be equally applicable to those two people. One might defer too little (e.g. the person who has never opened their hood being confident they can fix something), while the other might defer too much (the tinkerer being disappointed to learn that the local mechanic knows far less than expected about a rare engine part).
Another component is discerning which questions are more objectively answerable versus which rest on values or unclear epistemic trade-offs. To use GiveWell as an example: if you want to save the most lives possible with a high degree of confidence, one of their top choices in fighting malaria is a very strong bet, and deferring to their research is advisable. However, they are far less confident about their trade-offs between income and lives saved, so it makes less sense to defer on that topic.
So when does it concretely make sense to defer in EA? Let's examine some clear examples on either side and then work our way to more ambiguous cases.
High deference - new EA
John is brand new to EA and has read a single book on the topic. Although he loves the concepts, he feels overwhelmed by all the new information and does not plan on engaging with it deeply. He is already well into a solid career and does not imagine EA becoming a big part of his life. Nonetheless, he wants his donations to have the maximum impact from a fairly standard view of saving more lives and reducing pain. He defers to the EA community and ends up donating 10% to GiveWell-recommended charities, seeing this as a safe, impactful option that does not take much time.
I think John has made the ideal call here: an optimal decision given the amount of time and energy he wants to put into the topic. But let's look at the same level of deference from a much more involved EA.
High deference - experienced EA
Sally has been involved in the EA movement for a number of years; she led her local university chapter for a couple of years before joining an EA organization full time. She has spent several hundred hours engaging with EA content and has a fairly deep understanding of where the cruxes of disagreement lie between EAs. However, when it comes to donating she still feels uncertain. She sees problems with the movement and its granting, and knows of some unique opportunities that most EAs are not aware of. She puts in several dozen hours investigating a couple of these opportunities. However, she also knows that the full-time grantmakers are even more experienced in this area and likely have access to more information. She thus decides to split her donation evenly between the EA Funds, trusting that they will ultimately have better judgment than her.
Although this results in roughly the same level of deferral as John's case, it seems like a real loss to me. Sally fits the profile of someone who could have been a helpful grantmaker had she happened to take a different job, and she would likely have far more impact by independently weighing opportunities to find the best one. She is like the tinkerer in the car example above. In addition, the judgment calls made by the EA funders are considerably more value-sensitive than John's rough alignment with GiveWell. Sally might decide to fund one of the funds fully after considering the debate between cause areas, or donate specifically to an overlooked opportunity that larger grantmakers might miss.
A central claim here is that someone's deference should decrease as they become more knowledgeable in an area. Someone who has been working full-time in EA for years should probably take the time to thoroughly think through their cause prioritization. Someone who is going to pick a career primarily based on impact should do enough research to have a good sense of the options, not just pick something from the top of a list.

Let's look at some examples of questions where it might make sense to use an informed view rather than deferring, as your experience in the EA movement grows. As with my room for more funding post, I do not expect this table to be perfectly accurate or cross-applicable, but I do think it's a more helpful guide or frame of reference than the generic “everyone should defer more” or “everyone should defer less” advice. When something is in the “choices to investigate” column, the action involves looking at the original sources of arguments and their best critiques (in the first case, for example, that would mean reading some of GiveWell's content, spot-checking a few of their assumptions, looking up critics of GiveWell, and comparing the other big charity evaluators to see their differences). I do not mean just asking your local EA chapter leader “is GiveWell the best charity evaluator?” That would simply be a different form of deferral, and I am suggesting direct consideration would be valuable.
| Experience level | Example choices to defer | Example choices to investigate |
|---|---|---|
| An EA who has read one book and has put in ~1 hour or less a week for under a year | What are the best specific charities? | Is GiveWell the best charity evaluator? |
| An EA who has read three books on the topic and been involved in a chapter for one to two years | | What EA cause area is best to focus on? |
| An EA who has led a chapter for two years and worked at an EA org for one | What should a specific organization's plan be? | What are my ethical and epistemic tradeoffs? |
| An EA who has been working full time in EA and considering meta-issues for years | Sub-comparisons between charities doing similar work (e.g. AMF vs Malaria Consortium) | What are the biggest weaknesses of current EA views, and how should my actions change based on that? |
This table shows how, as someone gains more expertise in an area, they should defer less and less, particularly on topics that are value-sensitive or that relatively few EAs are considering independently. It's also worth noting that EA is a young movement, and there are likely lots of things the movement as a whole is missing. In a culture of high deference, only a relatively small number of people are positioned to notice these gaps. With more informed, independent thinkers, gaps can be noticed that would otherwise be missed. There are lots of reasons why a high-deferral community might develop bad norms.
Overall, I think EA would benefit from a more spectrum-based understanding of deferral, with specific questions and levels of knowledge (like the table above) being the factors discussed, instead of overall views or vague claims about when and when not to defer.