Summary: I argue for a very broad, inclusive EA, based on the premise that the culture of a region is more important than any specific group within that region, and that a broad and inclusive EA will help shift the overall culture of the world in a better direction. As a concrete strategy, I propose a division into low- and high-level EA - a division which I argue already exists within EA - and then selling people on low-level EA (using EA concepts within their chosen cause area to make that cause more effective), even if they are already committed to causes which traditional EA would consider low-impact or ineffective. I argue that in the long term, this will both boost the general effectiveness of all altruistic work done in the world and bring more people into high-level EA as well.
Related post / see also: All causes are EA causes, by Ian David Moss.
An analogy
Suppose that you were a thinker living in a predominantly theocratic world, where most people were, if not exactly hostile to science, then at least utterly uninterested in it. You wanted to further scientific understanding in the world, and were deciding between two kinds of strategies:
1) Focus on gathering a small group of exceptional individuals to do research and to directly further scientific progress, so that the end result of your life's work would be the creation of a small elite academy of scientists who did valuable research.
2) Focus on spreading ideas and attitudes that made people more amenable to the idea of scientific inquiry, so that the end result of your life's work would be your society shifting towards modern Western-style attitudes to science about a hundred years earlier than they would otherwise.
(I am not assuming that these strategies would have absolutely no overlap: for instance, maybe you would start by forming a small elite academy of scientists to do impressive research, and then use their breakthroughs to impress people and convince them of the value of science. But I am assuming that there are tradeoffs between the two goals, and that you ultimately have to choose to focus more on one or the other.)
Which of these outcomes, if successful, would do more to further scientific progress in the world?
It seems clear to me that the second outcome would: most obviously because, if people become generally pro-science, that will lead to the creation of many elite scientific academies, not just one. An academy composed of exceptional individuals may produce a lot of important results, but it still has to contend with the population's general indifference, and many of its discoveries may eventually be forgotten entirely. The combined output of a whole civilization's worth of scientists is unavoidably going to outweigh the accomplishments of any small group.
Mindsets matter more than groups
As you have probably guessed, this is an analogy for EA, and a commentary on some of the debates I've seen about whether to make EA broad and inclusive, or narrow and weird. My argument is that, in the long term, a civilization where core EA concepts - such as evaluating charities based on their tractability - have permeated the whole culture will do a lot more good than a civilization where just a small group of people focuses on particularly high-impact interventions. Just as with one elite scientific academy versus a civilization of science-minded people, the civilization that has been permeated by EA ideas will form lots of groups focused on high-impact interventions.
This could be summed up as the intuition that civilizational mindsets are more important than any group or individual. (Donella Meadows ranks a system's mindset or paradigm among the most effective points at which to intervene in a system.) Any given group can only do so much, but a mindset will consistently lead to the formation of many different groups. Consider, for instance, the spread of environmentalist ideas over the last century or so: we are now at a point where these ideas are so taken for granted that many different people think environmentalist charities are self-evidently a good idea and that people who do such work are praiseworthy. Or consider the spread of the idea that education is important, with the result that an enormous number of education-focused charities now exist. E.g. Charity Navigator alone lists close to 700 education-focused charities and over 400 environment-focused charities.
If EA ideas were thought to be similarly obvious, we could have hundreds of EA organizations - or thousands or tens of thousands, given that I expect Charity Navigator to only list a small fraction of all the existing charities in the world.
Now, there are currently a lot of people working on what many EAs would probably consider ineffective causes, and who have emotional and other commitments to those causes. Many of those people would likely resist the spread of EA ideas, as EA implies that they should change their focus to doing something else.
I think it would be bad if this happened - and I don't mean "it's bad that we can't convert these people to more high-impact causes". I mean "I consider everyone who tries to make the world a better place to be my ally, and I'm happy to see people do anything that contributes to that; and if they have personal reasons for sticking with some particular cause, then I would at least want to enable them to be as effective as possible within that cause".
In other words, if someone is committed to getting guide dogs to blind people, then I think that's awesome! It may not be the highest-impact thing to do, but I do want to enable blind people to live the best possible lives too, and having altruists work to enable that is many times better than having those altruists do nothing at all. And if this is the field that they are committed to, then I hope that they will use EA concepts within that field: figure out whether there are neglected approaches to helping blind people (could there be something even better than guide dogs?), gather more empirical data to verify existing assumptions about which dog breeds and training techniques are best for helping blind people, consider things like job satisfaction and personal fit in deciding whether they personally want to train guide dogs / do administrative work in matching those dogs to blind people / earn to give, etc.
If EA ideas do spread in this way to everybody who does altruistic work, then that will make all altruistic work more effective. And as the ideas become more generally accepted, the proportion of people who take EA ideas for granted and consider them obvious will also increase. Such people are more likely to apply the ideas to the question of career choice before they're committed to any specific cause. Both outcomes - all altruistic work becoming more effective, and more people going into more high-impact causes - are fantastic.
A concrete proposal for mindset-focused EA strategy
Maybe you grant that all of this sounds like a good idea in principle, but how would you apply it in practice?
My proposal is to explicitly talk about two kinds of EA (these may need catchier names):
1. High-level EA: taking various EA concepts, such as tractability, neglectedness, and room for more funding, and applying them generally to find whatever cause or intervention can be expected to do the most good in the world.
2. Low-level EA: taking some specific cause for granted, and using EA concepts to find the most effective ways of furthering that specific cause.
With this distinction in place, we can talk about how people can do high-level EA if they are interested in doing the most good in the world in general, or, if they are interested in some specific cause, apply low-level EA within that cause. And, to some extent, this is what's already happening within the EA community: while some people are focused specifically on high-level EA and general cause selection, a lot of others "dip their toes" into high-level EA for a bit to pick their preferred cause area (e.g. global poverty, AI, animal suffering), and then do low-level EA within their chosen cause area from that moment forward. As a result, we already have detailed case studies of applying low-level EA to specific areas: e.g. Animal Charity Evaluators is a low-level EA organization within the cause of animal charity, and has documented ways in which it has applied EA concepts to that cause.
The main modification is to talk about this distinction more explicitly, and to phrase things so as to make it more obvious that people from all cause areas are welcome to apply EA principles to their work. Something like the program of EA Global events could be kept mostly the same, with some of the programming focused on high-level EA content and some of it focused on low-level EA; just add in some talks/workshops/etc. on applying low-level EA more generally. (Have a workshop about doing this in general, find a guide dog charity that has started applying low-level EA to its work and have its leader give a talk on what they've done, etc.) Of course, to spread EA ideas more effectively, some people would need to focus on making contact with existing charities that are outside the current umbrella of EA causes and, if the people in those charities are receptive to it, work together with them to figure out how they could apply EA to their work.
Comments
Hey Kaj,
I agree with a lot of these points. I just want to throw some counter-points out there for consideration. I'm not necessarily endorsing them, and don't intend them as a direct response, but thought they might be interesting. It's all very rough and quickly written.
1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they're already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.
It becomes particularly difficult because the leaders, who do the broad outreach, want to focus on high-level EA. It's more transparent and open to pitch high-level EA directly.
There are probably ways you could implement a division without incurring these problems, but it would need some careful thought.
2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It's what makes us different from simply "competent" do-gooding, and often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in scale of influence to offset this.
3) Often the best way to promote general ideas is to live them. With your example of promoting science, people often seem to think the Royal Society was important in building the scientific culture in the UK. It was an elite group of scientists who just went about the business of doing science. Early members included Newton and Boyle. The society brought like-minded people together, and helped them to be more successful, ultimately spreading the scientific mindset.
Another example is Y Combinator, which has helped to spread norms about how to run startups, encourage younger people to do them, reduce the power of VCs, and have other significant effects on the ecosystem. The partners often say they became famous and influential due to Reddit -> Dropbox -> Airbnb, so much of their general impact was due to having a couple of concrete successes.
Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on doing is just having a huge impact - finding the "Airbnb of EA" or the "Newton of EA".
Thanks!
Yeah, agreed. Though part of what I was trying to say is that, as you mentioned, we have the high/low distinction already - "implementing" that distinction would just be giving an explicit name to something...