Back in April 2018, I spent some time trying to understand the hierarchy/structure/classification of cause areas. I did this at the suggestion of Vipul Naik, who wanted to (1) categorize the cause areas covered on the Cause Prioritization Wiki so that the wiki had more structure than a jumble of 100+ cause areas, and (2) make the analysis of cause areas more systematic. (I believe he was also interested in this because the Donations List Website that he created needed a better ontology of cause areas.)
Some of the outputs of that investigation are:
- A list of existing classifications of philanthropy
- A directed acyclic graph of existing cause areas, where an edge A → B means "A has B as a sub-cause" or, equivalently, "if I am claiming that I work on B, then I can also claim that I am working on A" (see the small sketch after this list)
- A list of potential properties with which to classify existing cause areas
- A table of "form of altruism" vs "beneficiary group" ("form of altruism" and "beneficiary group" are two of the "potential properties" in the previous list, so this table crosses these two properties, resulting in a two-dimensional grid)
- A generic linkdump and rambling on taxonomies
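As a concrete illustration of the sub-cause relation in the DAG mentioned above, here is a minimal Python sketch (the causes and edges are made up for illustration, not taken from the actual graph): edges point from a parent cause to its sub-causes, and claiming to work on a cause lets you also claim every ancestor cause.

```python
# Hypothetical sub-cause edges: parent cause -> list of sub-causes.
sub_causes = {
    "existential risk reduction": ["AI safety", "biosecurity"],
    "AI safety": ["technical AI safety", "AI policy"],
    "technical AI safety": ["agent foundations research"],
}

def claimable_causes(cause: str) -> set[str]:
    """All causes I can claim to work on if I work on `cause`
    (the cause itself plus every ancestor in the DAG)."""
    claimable = {cause}
    for parent, children in sub_causes.items():
        if cause in children:
            claimable |= claimable_causes(parent)
    return claimable

print(claimable_causes("agent foundations research"))
# e.g. {'agent foundations research', 'technical AI safety',
#       'AI safety', 'existential risk reduction'}
```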
I came away from the above investigation feeling pretty confused about the nature of cause areas. Given just a description of reality, it didn't seem obvious to me that one should carve things up into "cause areas" and take "cause area" as the basic unit of analysis/prioritization (which is what cause prioritization is all about).
Some thoughts/intuitions that contribute to this feeling are:
- As explained (EA Forum link; HT Edo Arad) by Owen Cotton-Barratt back in 2014, there are at least two meanings of "cause area". My impression is that since then, effective altruists have not really distinguished between these different meanings, which suggests to me that some combination of the following things is happening: (1) the distinction isn't too important in practice; (2) people are using "cause area" as a shorthand for something like "the established cause areas in effective altruism, plus some extra hard-to-specify stuff"; (3) people are confused about what a "cause area" even is, but lack the metacognitive ability to notice this.
- A cause area can be made to seem big or small by lumping together more and more things in the world (or, alternatively, by excluding more things from itself). Do we compare "animal welfare improvement" against "agent foundations research", or against "technical AI safety work", or against "technical, strategy, or policy work in AI safety", or against "existential risk reduction", or against "applied mathematics related to futuristic technology"?
- More generally, if we take some basic unit of action like "1 person-year of work", then we can form sets of actions and call those sets "cause areas" (these sets don't necessarily form a partition, i.e. there might be actions contained in multiple causes and actions not contained in any cause). But then we can imagine defining some arbitrary "cause area" that just picks out the highest-value actions, and declaring it "the most important cause" (see the toy sketch after this list). Of course, finding which actions are contained in this "most important cause" would be difficult, and the task of cause prioritization would seem to reduce to this search process.
- I can imagine an argument taking place where the opponent of a cause area picks some ineffective actions within the cause area while a supporter picks effective actions, so they disagree regarding the overall effectiveness of the cause area despite agreeing about the effectiveness of specific actions. There might even be a motte-and-bailey argument, where the supporter draws a tighter boundary around the cause when attacked, and loosens the boundary at other times to be able to call their preferred interventions effective. (I don't actually know if such arguments are taking place, so this is just a theoretical concern at the moment.)
- One way in which looking at cause areas might be useful is from an evaluator's perspective of asking "what skills/domain expertise do I need to be able to evaluate specific programs/research topics?" If skillsets tend to "unlock" a bunch of potential programs at once, then there might be a natural-seeming boundary around those programs, which might correspond to our intuitive notion of a cause area. But this seems to depend on the order in which various skills are acquired. To take an extreme case, if someone had expertise in many domains but lacked some general skill (like generalist research skills, knowledge of statistics, or programming experience), then by learning that general skill they would suddenly "unlock" a whole bunch of "cause areas" at once.
- I think reductionism and "dissolving the question" type moves have been useful in many situations, and I have a vague intuition that the notion of cause area can be reduced in some way.
- In practice, the Open Philanthropy Project (which is apparently doing cause prioritization) has fixed a list of cause areas, and is prioritizing among much more specific opportunities within those cause areas. (I'm actually less sure about this as of 2021, since Open Phil seems to have made at least one recent hire specifically for cause prioritization.)
- I've noticed that as I learn more about a cause area, I get more opinionated about activities within it. A naive analysis cannot distinguish effectiveness within a cause area, and instead puts a uniform score over the whole cause area, whereas a more sophisticated analysis puts precise scores over each action within a cause area. So it feels like "cause prioritization" is just a first step, and by the end it might not even matter what cause areas are. It seems like what actually matters is producing a list of individual tasks ranked by how effective they are.
- In this 80,000 Hours podcast episode, Toby Ord talks about the idea of risk factors, as distinguished from risks. This seems to further complicate the situation.
- Some recent Katja Grace posts that are relevant and that make me even more confused: Are the consequences of groups usually highly contingent on their details? and Infinite possibilities.
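To make the set-of-actions framing from earlier in this list more concrete, here is a minimal toy sketch in Python (all action names and effectiveness numbers are made up purely for illustration) of how a "cause area", if it is just an arbitrary set of actions, can be gerrymandered to contain exactly the highest-value actions, so that "find the best cause" collapses into "find the best actions":

```python
# Hypothetical basic units of action ("1 person-year of work" each),
# with entirely made-up effectiveness values.
actions = {
    "distribute bednets": 8.0,
    "agent foundations research": 9.5,
    "corporate cage-free campaigns": 7.0,
    "generic awareness raising": 1.0,
    "policy work on AI safety": 6.5,
}

# Conventional "cause areas" as sets of actions. Note that these sets need not
# form a partition: actions can appear in several causes or in none.
cause_areas = {
    "global health": {"distribute bednets"},
    "AI safety": {"agent foundations research", "policy work on AI safety"},
    "animal welfare": {"corporate cage-free campaigns"},
}

# An action not covered by any of the conventional cause areas:
uncovered = set(actions) - set().union(*cause_areas.values())
print(uncovered)  # {'generic awareness raising'}

# A gerrymandered "cause area" that simply collects the top-k actions by value.
# Declaring this set "the most important cause" is trivially true but unhelpful:
# identifying its members just is the original prioritization problem.
k = 2
most_important_cause = set(sorted(actions, key=actions.get, reverse=True)[:k])
print(most_important_cause)
# e.g. {'agent foundations research', 'distribute bednets'}
```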
Why does any of this matter? Here are a few reasons that come to mind:
- Practically, projects like the Cause Prioritization Wiki, the Donations List Website, and other efforts to categorize cause areas require an organizational system that makes sense.
- From a more philosophical or emotional perspective, I feel dissatisfied with my current understanding.
- In terms of public discourse, people are actually using the concept of "cause area" to do further thinking. If the idea of a cause area is not a reliable one, then all of this further thinking is done on a shaky foundation, which seems worrying. I feel like these two comments by Buck Shlegeris and this post by Katja Grace are possibly doing this thing, or giving less careful thinkers the idea that this is a sound move.
I am curious to hear people's thoughts on this. I would also appreciate pointers to existing discussions (I feel like I've been paying attention, but it seems plausible to me that I've missed some).
Thanks to Vipul Naik for funding part of my work on this post, and for funding my work on cause areas that led to this post. Thanks also to Edo Arad for pushing me to finish this post.