
When planning a project, a key question is what success looks like. 

What does Effective Altruism look like if it is successful? 

I think a lot of the answer is that, as a social movement, it’s successful if its ideas are adopted and its goals are pursued - not just by proponents, but by the world. Which leads me to a simple conclusion: Effective Altruism is at least three orders of magnitude too small - and probably more than that. And I think the movement has been hampered by thinking at a scale far, far too small to maximize its impact - which is the goal.

I’ll talk about three specific aspects where I think EA can and should scale.

  1. Dollars donated
  2. Cause areas being worked on
  3. People who are involved

I want to lay out the case for what it looks like for this to change and, inter alia, suggest how all three of these are connected - because I don’t think any of them will grow anywhere near enough without the other two.

1. Funding

Funding today

At present, EA donors have an uncommitted pool of tens of billions of dollars - though it can’t all be liquidated tomorrow. But that isn’t enough to fully fund everything we know is very valuable. GiveWell has recently raised its funding bar to 8x GiveDirectly, and has a $200m shortfall this year. We fully expect that there will be more money in the future - but no-one seems to be claiming that the amounts available would be enough to, say, raise the standard of living in sub-Saharan Africa by even a factor of two - from roughly one thirtieth to one fifteenth of the level in the United States. That would take far more money.

The funding we should aim for

The obvious goal should be to get to the point where we’re funding everything more effective than direct cash transfers - and funding the cash transfers as well. We’re talking about a minimum of tens of billions of dollars per year, plausibly far more. This cannot possibly be sustained by finding more billionaires, unless wealth inequality rises even faster than interest in EA. We need more people - and clearly, if the money is supposed to go to improving the world, not everyone can be hired by EA orgs.

Instead, I claim we need to go back and enable the original vision of many EA founders, who were simply looking to maximize their charitable impact. That is, we need people to earn-to-give in the normal sense - having a job they like, living a normal life, and donating perhaps 10% of their income to effective charity. That’s a sustainable vision for a movement, one that can be embraced by hundreds of millions or billions of people. And as an aside, if the ideas are widely embraced, they are also far more likely to be embraced by politicians allocating international aid, creating even more value democratically[1].

Scale likely requires diversifying cause areas

Alongside this, if and as effective altruism gets larger, the set of things effective altruists focus on will need to expand. If the EA donor base grows enough, we will fill the current funding gap for EA organizations, and a broad base of supporters will include a small segment who work more directly on these issues. But scaling the current interventions gets us only so far - there will be a need for more causes. We will hopefully fill the funding gaps for scaling newer ideas quickly, and then need to expand again. Once we can save every life that can be saved for $10,000, we will need to move on to progressively more expensive interventions: interventions that address welfare, preventative healthcare for the uninsured in wealthy countries, and so on. If we successfully scale, the world will be a better and very different place.

2. Cause Areas

As mentioned, there are a few reasons to expand cause areas over time. But before doing so, there is a conceptual elephant in the room: Effective Altruism embraces cause-neutrality. Historically, cause neutrality has meant that we should find the single biggest and most important cause area, and focus on that. It’s good advice for individuals or small groups, and it means we shift quickly from one thing to the next - global poverty, animal welfare, existential risk. I claim there are important reasons to temper the focus on the single highest-leverage area, especially as each area grows.

Decreasing Marginal Returns

First, as resources grow, we expect to find decreasing marginal returns in each area, and eventually we will want to shift to prioritizing other causes. As funding increases, the “cheapest” opportunities to identify effective interventions disappear. And as areas move from needing generalists to needing implementation, the need changes from dedicated people to funding that pays direct workers to get things done.

Second, as the number of people involved in, say, effective global biorisk reduction increases, the bar for entry rises: the work requires much more specific skill sets, and most people are unable to contribute to it directly. Over the past decade, it seems that effective altruists who once focused on global health switched next to animal welfare, then perhaps biosecurity or AI safety. And that makes sense, since those areas changed from needing intensive investigation to needing organizations and funding[2]. Not only does this suggest a high probability that other areas will continue to be identified and undergo this transition, it also implies that a broader pool of people looking at diverse areas makes identifying new opportunities easier.

Different Talents

Next, heterogeneous talents have different optimal avenues for direct work. A naive version of Effective Altruism (one that few people well-versed in the movement, or simply sensitive to reality, would agree with) would tell petroleum engineers to try to switch to AI safety research, or perhaps to alternative meat engineering. But those skills are largely unrelated. So even if such an engineer were willing to retrain and find work in those areas, which seems unlikely, as a novice they would be risking their career for a small chance of being impactful in a new domain. Instead, I suspect we could point to the need for petroleum engineers in geothermal energy production, with clearly positive impact, and point to the ability to earn-to-give if they are hoping to support even more impactful areas. In fact, EA organizations already have diverse needs - from public relations to operations to finance to general middle management. Moving people from these roles into “higher priority” areas isn’t going to help.

Different Values

Third, values differ, and a broad-based movement benefits from encouraging diversity and disagreement[3]. For example, there are people who strongly discount the future. People who do not assign moral weight to animals. People who view mental suffering as more important than physical suffering. People who embrace person-affecting views. Average utilitarians. Negative utilitarians. And so on. And these different views lead to different conclusions about what maximizes “the good” - meaning that different causes should be prioritized, even after accepting all of the fundamental goals of effective altruism.

There are even people who disagree with those fundamental claims and, for example, feel that they should give locally - not just due to the over-used and confused claims about local knowledge, but because they hold deontological or virtue-ethical positions. These are sometimes compatible with effective altruism, but often are not. So aside from my own questions about how to address moral uncertainty, I think that a fundamental part of benefiting others is respecting their current preferences, even when I disagree with them, and allowing them to benefit from thinking about effectiveness. Not everything needs to be EA. As long as they aren’t doing harm, there seems to be little reason to discourage donations or activism that improves the world a little just because we disagree with their priorities, or think we know better. We probably want to encourage allies instead of criticizing them - and in many cases, point out that the disagreements are minor, or based on straw men.

Big Tent EA

Some criticisms of EA have been that it’s too demanding. This seems wrong. Not only do very few effective altruists embrace a maximalist and burdensome utilitarian view of ethical obligations, but we should be happy to encourage consequentialist altruistic giving across domains, even when it isn’t optimal. While I would be appalled if GiveWell decided that education in the United States was a top global priority for charity, I’m perfectly happy for money donated to schools to be donated more effectively. First-world poverty is much more expensive to solve on a per-person basis than helping subsistence farmers escape a poverty trap in the developing world - but I still hope people donating to that cause can be shown how to fund effective poverty-reduction strategies rather than donating to the Salvation Army. This seems like a useful expansion of Effective Altruist ideas and ideals, even if it doesn’t optimize along every dimension of our priorities. Effective Altruists addressing those areas seems like a potentially large benefit, even ignoring the indirect effects of exposing people to the questions of cause prioritization.

Finally, expanding the tent for Effective Altruism seems positive for the world - and as I’ll argue below, it is unlikely to be damaging to core EA priorities. I would hope to see effective altruism as a movement encouraging broad adoption of the very basic version of what we promote: applying consequentialist thinking to giving.

3. People

I’d estimate that Effective Altruism is strongly embraced by, at most, 10,000 people. That is small - if we take 80,000 Hours’ logic to its conclusion, it implies less than a billion hours of committed time by effective altruists. There are 8 billion people in the world. If we’re trying to continue to improve the world, we need to scale further.
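To spell out that arithmetic (a back-of-the-envelope check, counting 80,000 hours as one full career per person):

$$10{,}000 \text{ people} \times 80{,}000 \text{ hours each} = 8 \times 10^{8} \text{ hours} < 10^{9} \text{ hours}$$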

That means Effective Altruism absolutely cannot be an “elitist” group. And to be clear, we aren’t interested in a big tent movement because it’s strategically valuable. We are interested in a big tent movement because a moral statement - that people can and should try to improve the world with their resources - applies universally. So we welcome people who “just” want to do good better with their charitable giving. Several EA orgs, such as Giving What We Can, do a good job making that clear - but I think many parts of EA have clearly missed the message.

Bigger EA means not everyone is working directly

As mentioned above, not everyone will work in EA orgs. Yes, everyone I know working in Effective Altruism is critically limited by the difficulty of finding people with idiosyncratic skills or narrow expertise outside our focus areas to work on key problems - and scaling would definitely make that easier. The need for people is not, however, limited to direct work. We want community members who embrace earning-to-give - again, not in the sense of maximizing their incomes to give, but more simply working at generally beneficial and/or benign jobs and giving money effectively. We want to make the world better, safer, and happier, and that means bringing the world along - not deciding for it.

To put it in slightly different terms, you don’t get to make, much less optimize, other people’s decisions. Organizations like 80,000 Hours offer advice to more dedicated EAs about using their careers for good, but they aren’t game guides for life. And we need to be clear that not everyone in the world should work directly on the highest-value interventions, especially given that talents and capabilities differ. Some - the vast majority of all people, in fact - should have normal jobs. If scaling EA means that everyone needs to work directly on EA causes, we’re sharply limited in how much we can scale.

Objections

There are a variety of objections to my claims, which I will sort into two general buckets: normative and positive. The first sort of disagreement is that the claims here are wrong about what we should do - that we shouldn’t embrace people with different values, or that we should push people to maximize impact even when they are doing other things. I’m not going to debate those. The second sort are predictive disagreements - that if we embrace the strategies I’m suggesting, Effective Altruism as a movement will be less effective. I think these are both more important, and possible to discuss more productively.

Specifically, as mentioned above, I claim that big-tent effective altruism is unlikely to damage core EA priorities. 

First, I think it’s very plausible that the multiplier effect of changing charitable giving in developed countries could have very large impacts. The half trillion dollars a year given charitably in the United States doesn’t need to become much more effective to have a huge impact - and moving a couple billion dollars from something completely ineffective to solving “first-world problems” isn’t going to save as many lives as donations to AMF, but it will still have a larger impact than what many or even most people in EA work on.

Second, I think that with even minimal care, people are unlikely to be misled into thinking that analyses comparing charities in terms of impact per dollar imply that lower-impact charities are nearly as effective as top GiveWell charities. Relatedly and finally, there could be a concern that we would push people who would otherwise be more impactful to focus on near-term issues, without them realizing they are not being as impactful as they could be. This similarly seems unlikely, given the degree to which EA tends to be (brutally) honest about impact. Though as I’ll mention below, slightly less abusive and better-informed criticism of other people’s views and choices is probably needed.

I would be interested in hearing if and why others disagree.

Concrete next steps

I don’t think that this vision directly impacts most people’s priorities within EA. People who want to work on global development, AI safety, biorisk, or animal welfare should continue to do so. 

But I do hope that it impacts their vision for EA, and the way they interact with others both inside and outside of EA. Yes, perceptions matter. And if we don’t have room for people who are interested and thinking about what they should do - even if they decide to choose different careers, to prioritize differently than we do, or to “only” donate 5% or 1% of their income to effective causes - then, in my view, we’re making it very unlikely that the movement is as successful as it could be. Worse, potentially, we won’t be able to find out when we’re making mistakes, because anyone who disagrees will have been told they aren’t EA enough.

So as I’ve said before, I never want to hear anyone told “that’s not EA,” or see people give unsolicited criticism of someone else’s choice of causes. A movement that grows and remains viable needs to be one where we can be honest, but also not insult others' choices and values. Unfortunately, I see this happen. I hear from people who started getting involved in local groups, got turned off, and almost left - and I can reasonably assume they aren’t the only ones who had that experience. If EA doesn’t allow people to be partly on board, or says that certain things aren’t good enough, we’re cutting off diversity, alienating allies, and making our work on outreach and publicizing the ideas less impactful. So if I’m right, the vision many people - especially newcomers and the younger, more enthusiastic EA devotees - seem to have is definitely not an effective way to scale a movement. And as I argued, we want to scale.
 

  1. ^

    For EA to be embraced politically, in a democratic society, it also needs to be embraced by at least a large part of the population - i.e. it requires scaling.

  2. ^

    As an aside, the changing focus of the initiators doesn’t mean that each problem has been solved, or made less valuable - just that there are even more neglected or higher-leverage opportunities. We still need funding, and people, to finish solving these high-leverage and important problems.

  3. ^

    Effective Altruism has done some work on this front, but far more is needed.

Comments

Thank you for writing this! I strongly agree that we should broaden the tent. 

EA's biggest weakness in my opinion is that almost nobody knows what it is. I've spoken to many hundreds of athletes about EA in the last 2 years and only a handful had any idea what it was (hadn't heard the term) before I explained it. These are people with large audiences and cultural clout, who could be outsized levers in bringing the ideas to hundreds of millions.

However, EA as it presents itself right now seems quite exclusive. I don't believe that broadening the tent would lessen the direction or determination of those who are "pure" EA, but would gather a much more powerful groundswell around it. 

Thanks - I agree with this, though I'd note that within DC and academic circles, the movement is far better known, which probably accentuates rather than addresses the elitism.

Given that, I would be interested in any thoughts on what a populist movement around EA looks like, and how we could build a world where giving effectively was a socially reinforced norm - especially if we can figure out how that could happen without needing central direction, and without encouraging fanaticism and competition about who gives the most or is the most dedicated.

Thanks for writing this article. You make a number of good points; however, I don’t think you quite grapple with the strongest arguments against this.

  • The more you prioritise near-termist causes, the more you’ll be tempted to grow the movement to maximise donations to charity, whilst the more you prioritise long-termism, the more you’ll prioritise high-skilled recruitment for specific skills we need (actually, even near-termists may want to prioritise niche outreach, as donations are heavy-tailed too)
  • We face a dilemma - if everything counts as EA, then EA will lose its distinctiveness - but at the same time we don’t want to come off as narrow-minded. I think part of the solution is acknowledging there are ways of doing altruism effectively outside of Effective Altruism. Then it makes sense for us to decide that we don’t need to do everything (this originally said ‘anything’ as a typo)
  • I really wish it weren’t the case, but broadening the movement too much would reduce the nuance in discussions by default.
  • I think it makes sense for Giving What We Can to focus on mass outreach and Effective Altruism to remain narrower, rather than doing everything under the same banner - an event for everyone is an event for no-one.
  • I disagree that there is a meaningful trade-off here, and would love to more narrowly pinpoint what we're predicting differently. As I said in the post, I don't think that EAs will suddenly stop talking about prioritization and needs, and I don't think that this is negative sum - I think it's far more likely that a more accepting EA movement makes it easier to recruit high-skilled workers, not harder.
  • I agree that we need to "acknowledg[e] there are ways of doing altruism effectively outside of Effective Altruism," or as I put it in the post, "Not everything needs to be EA." But for exactly that reason, I strongly disagree that "it makes sense for us to decide that we don’t need to do anything" - at the very least, as you just said, we need to acknowledge it. And as I argued, we need to do more, and accept that people might want to do different things, and still be happy to ally with them, and congratulate them for doing good things, instead of criticizing them.
  • I don't think that EA would be horribly hurt by reducing nuance by a few percent, at least not the way it is hurt by actively sabotaging itself by rejecting and too often insulting people who would be allies.
  • First, I don't think that you can be exclusive and demeaning and still have an EA movement. Second, an event open to everyone is, to potentially overextend your analogy, a large event. We don't need everything under the same banner, but I think that stealing the EA banner to mean something narrow is a disservice to the movement. Yes, we can absolutely have various narrower banners - we already do for alternative proteins, wild animal suffering, biorisk, AI safety, global health, etc. Shifting the movement overall to be narrow, instead of the far more general project of applying consequentialist thinking to giving, isn't helpful.

Sorry, I don’t have time to respond to all your points, but I agree that the EA movement can’t be demeaning. That isn’t the same, though, as optimising for a certain audience.

(Fixed typo: wrote demanding instead of demeaning).

Really enjoyed this post and agree with it pretty much across the board. Two points that I especially liked.

As long as they aren’t doing harm, there seems to be little reason to discourage donations or activism that improves the world a little just because we disagree with their priorities, or think we know better. We probably want to encourage allies instead of criticizing them - and in many cases, point out that the disagreements are minor, or based on straw men.

and

A movement that grows and remains viable needs to be one where we can be honest, but also not insult others' choices and values.

Strong agree; EA being enormous would be good.

I hope we successfully make EA enormous quickly; I hope we pursue "making EA enormous" interventions beyond just being more welcoming on the margin.

I agree that EA being enormous eventually would be very good. 🙂

However, I think there are plenty of ways that quick, short-term growth strategies could end up stunting our growth. 😓

I also think that being much more welcoming might be surprisingly significant due to compounding growth (as I explain below). 🌞

It sounds small, "be more welcoming", but a small change in angle between two paths can result in a very different end destination. It is absolutely possible for marginal changes to completely change our trajectory!

We probably don't want effective altruism to lose its nuances. I also think nuanced communication is relatively slow (because it is often best done, at least in part, in many conversations with people in the community)[1]. I think that we could manage a 30% growth rate and keep our community about a nuanced version of effective altruism, but we probably couldn't triple our community's size every year and stay nuanced.

However, growth compounds. Growing "only" 30% is not really that slow if we think in decades!

If we grow at a rate of 30% each year, then we'll be 500,000 times as big in 50 years as we are now.[2]
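As a quick check of the arithmetic behind that figure (nothing here beyond compounding 30% for 50 years):

$$1.3^{50} \approx 4.98 \times 10^{5} \approx 500{,}000$$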

Obviously growth will taper off (we're not going to grow exponentially forever), but I think the point at which it tapers off is a very big deal. That saturation point, the maximum community size we hit, matters more for EA ending up enormous than the exact growth rate. We can probably grow by focusing on "slow" growth strategies, and still end up enormous relatively soon (30% is actually very fast growth, but can be achieved without loads of the sorts of activities you might typically think of as fast-growth strategies).[3]

I actually think one of the biggest factors in how big we grow is how good an impression we leave on people who don't end up in our community. We will taper off earlier if we have a reputation for being unpleasant. We can grow at 30% with local groups doing a lot of the work to leave a lot of people with a great impression whether or not they decide to engage much with the community after they've formed that first impression.

If we have a reputation for being a lovely community, we're much more likely to be able to grow exponentially for a long time.

Therefore, I do think being really nice and welcoming is a really huge deal, and more short-term strategies for fast growth - ones that leave people confused and often feeling negatively about us - could, in the end, result in our size capping out much earlier.

Whether or not we have the capacity for all the people who could be interested in effective altruism right now (only being able to grow so fast in a nuanced way limits our capacity), we still do have the capacity to leave more people with a good impression.

More of my thoughts on what could be important to focus on are here.

  1. ^

    Books and articles don't talk back and so can't explore the various miscellaneous thoughts that pop up for a person who is engaging with EA material, when that material is thought-provoking for them. 

  2. ^

     (I find the weird neatness of these numbers quite poetic 😻)

  3. ^

    This isn't to say I'm against broad outreach efforts. It's just to say that it is really important to lay the groundwork for a nuanced impression later on with any broad outreach effort.  

I actually think being welcoming to a broad range of people and ideas is really about being focused on conveying to people who are new to effective altruism that the effective altruism project is about a question. 

If they don't agree with the current set of conclusions, that is fine! That's encouraged, in fact. 

People who disagree with our current bottom line conclusions can still be completely on board with the effective altruism project (and decide whether their effective altruism project is helped by engaging with the community for themselves).

If, in conversations with new people, the message that we get across is that the bottom line is not as important as the reasoning processes that get us there, then I think we naturally will be more welcoming to a broader range of people and ideas in a way that is genuine.

Coming across as genuine is such an important part of leaving a good impression so I don't think we can "pretend" to be broader spectrum than we actually are. 

We can be honest about exactly where we are at while still encouraging others to take a broader view than us by distinguishing the effective altruism project from the community.

I think there is a tonne of value in making sure we are advocating for the project and not the community in outreach efforts with people who haven't interacted much with the community.

 If newcomers don't want to engage with our community, they can still care a tonne about the effective altruism project. They can collaborate with members of the community to the extent it helps them do what they believe is best for helping others as much as they can with whatever resources they are putting towards the effective altruism project.

I'd love to see us become exceptionally good at going down tangents with new people to explore the merits of the ideas they have. This makes them and us way more open to new ideas that are developed in these conversations. It also is a great way to demonstrate how people in this community think to people who haven't interacted with us much before.

How we think is much more core to effective altruism than any conclusion we have right now (at least as I see it). Showing how this community thinks will, eventually, lead people we have these conversations with to conclusions we'd be interested in anyway (if we're doing those conversations well).

Strongly agree that being more welcoming is critical! I focused more on the negatives - not being hostile to people who are potential allies - but I definitely think both are important.

That said, I really hate the framing of "not having capacity for people" - we aren't, or should not be, telling everyone that they need to work at EA organizations to be EA-oriented. Even ignoring the fact that career capital is probably critical for many of the people joining, it's OK for EAs to have normal jobs and normal careers and donate - and if they are looking for more involvement, reading more, writing / blogging / talking to friends, and attending local meet-ups is a great start.

I agree with that. 🙂

I consider myself a part of the community and I am not employed in an EA org, nor do I intend to be anytime soon, so I know that having an EA job or funding is not needed for that.

 I meant the capacity to give people a nuanced enough understanding of the existing ideas and thinking processes as well as the capacity to give people the feeling that this is their community, that they belong in EA spaces, and that they can push back on anything they disagree with.

It's quite hard to communicate the fundamental ideas and how they link to current conclusions in a nuanced way. I think integrating people into any community without fracturing it or losing the trust that community members have with each other (while still allowing new community members to push back on old ideas that they disagree with) takes time, and can only be done, I think, if we grow at a slow enough pace.

 

(I strongly agree that we should be nice and welcoming. I still think trying to make EA enormous quickly is good if you can identify reasonable such interventions.)

I also think faster is better if the end size of our community stays the same. 👌🏼 I also think it's possible that faster growth increases the end size of our community too. 🙂 

Sorry if my past comment came across a bit harshly (I clearly have just been over-thinking this topic recently 😛)![1]

 I do have an intuition, which I explain in more detail below, that lots of ways of growing really fast could end up making our community's end size smaller. 😟

Therefore, I feel like focusing on fast growth is much less important than focusing on laying the groundwork to have a big end capacity (even if it takes us a while to get there).

It's so easy to get caught up in short-term metrics, so I think bringing the focus to short-term fast growth could take away attention from whether short-term growth is costing us long-term growth.

 I don't think we're in danger of disappearing given our current momentum. 

I do think we're in danger of leaving a bad impression on a lot of people though and so I think it is important to manage that as well as we can. My intuition is that it will be easier to work out how to form a good impression if we don't grow very fast in a very small amount of time. 

Having said that, I'm also not against broad outreach efforts. I simply think that when doing broad outreach, it is really important to keep in mind whether the messages being sent out lay the groundwork for a nuanced impression later on (it's easy to spread memes that make more nuanced communication much harder).

However, I think memes about us are likely to spread if we're trying to do big projects that attract media attention, whether or not we are the ones to spread those broad outreach messages. 

I could totally buy into it being important to do our best to try and get the broad outreach messages we think are most valuable out there if we're going to have attention regardless of whether we strategically prepare for it. 

I have concrete examples in my post here of what I call "campground impacts" (our impact through our influence on people outside the EA community). If outreach results in a "worse campground", then I think our community's net impact will be smaller (so I'm against it). If outreach results in a "better campground", then I think our community's net impact will be bigger (so I'm for it). If faster-growth strategies result in a better campground, then I'm probably for them; if they result in a worse campground, then I'm probably against them. 😛
 

  1. ^

    I went back and edited it after Zach replied to more accurately convey my vibe but my first draft was all technicalities and no friendly vibes which I think is no way to have a good forum discussion! (sorry!)

    (ok, you caught me, I mainly went back to add emojis, but I swear emojis are an integral part of good vibes when discussing complex topics in writing 😛🤣: cartoon facial expressions really do seem better than no facial expressions to convey that I am an actual human being who isn't actually meaning to be harsh when I just blurt out some  unpolished thoughts in a random forum comment😶😔🤔💡😃😊)

A shorter explainer on why focusing on fast growth could be harmful:

Focusing on fast means focusing on spreading ideas fast. Ideas that are fast to spread tend to be 1-dimensional.

Many 1d versions of the EA ideas could do more harm than good. Let's not do much more harm than good by spreading unhelpful, 1-dimensional takes on extremely complicated and nuanced questions.

Let's spread 2-dimensional takes on EA that are honest, nuanced, and intelligent, where people think for themselves.

The 2d takes should include the fundamental concepts (scope insensitivity, cause neutrality, etc.) that are most robust. Ones where people recognize that no-one has all the answers yet, because these are hard questions - but also recognize that smart people have done some thinking, and that is better than no thinking.

Let's get an enormous EA sooner rather than later.

But not so quickly that we end up accidentally doing a lot more harm than good!

We don't need everyone to have a 4-dimensional take on EA.

Let's be more inclusive. No need for all the moral philosophy for these ideas to be constructive.

However, it is easy to give an overly simplistic impression. We are asking some of the hardest questions humanity could ask. How do we make this century go well? What should we do with our careers in light of this?

Let's be inclusive, but slowly enough to give people a nuanced impression. And slowly enough to provide some social support to people questioning their past choices and future plans.

This all sounds reasonable. But maybe if we're clever we'll find ways to spread EA fast and well. In the possible worlds where UGAP or 80K or EA Virtual Programs or the EA Infrastructure Fund didn't exist, EA would spread slower, but not really better. Maybe there's a possible world where more/bigger things like those exist, where EA spreads very fast and well. 

I doubt anyone disagrees with either of our above two comments. 🙂

I just have noticed that when people focus on growing faster, they sometimes push for strategies that I think do more harm than good, because we all forget the higher-level goals mid-project.

I'm not against a lot more faster-growth strategies than currently get implemented.

I am against focusing on faster growth, because the higher-level goal of "faster growth" makes it easy to miss some big-picture considerations.

A better higher-level goal, in my mind, is to focus on fundamentals (like scope insensitivity, cause neutrality, or the Pareto principle applied to career choice and donations) over conclusions.

I think this would result in faster growth with much less of the downsides I see in focusing on faster growth.

I'm not against faster growth, I am against focusing on it. 🤣

Human psychology is hard to manage. I think we need to have helpful slogans that come easily to mind because none of us are as smart as we think we are. 🤣😅 (I speak from experience 🤣)

Focus on fundamentals. I think that will get us further.

Hmm, it’s funny - this post comes at a moment when I’m heavily considering moving in the opposite direction with my EA university group (towards being more selective and focused on EA-core cause areas). I’d like to know what you think of my reasoning for doing so.

My main worry is that as the interests of EA members broaden (e.g. to include helping locally), the EA establishment will have fewer concrete recommendations to offer and people will not have a chance to truly internalize some core EA principles (e.g. amounts matter, doubt in the absence of measurement).

That has been an especially salient problem for my group, given that we live in a middle-income country (Colombia) and many people feel most excited about helping within our country. However, when I’ve heard them make plans for how they would help, I struggle to see what difference we made by presenting them with EA ideas. They tend to choose causes more by their previous emotional connection than by attributes that suggest a better opportunity to help (e.g. by using the SNT framework). My expectation is that if we put more emphasis on the distinctive aspects of EA (and the concrete recommendations they imply), people will have a better chance to update on the ways that mainstream EA differs from what they already believed, and we will have a better shot at producing some counterfactual impact.

(Though, as a caution, it’s possible that the tendency for members of my group not to realize when EA ideas differ from their own may come from my particular aversion to openly questioning or contradicting people, rather than from the members’ interest in less-explored areas for helping.)

A few points. First, I think we need to be clear that effective altruism is a movement encouraging the use of evidence to do as much good as we can - and choosing what to work on should happen after gathering evidence. Listening to what senior EA movement members have concluded is a shortcut, and in many cases an unfortunate one. So the thing I would focus on is not the EA recommendations, but the concept of changing your mind based on evidence. It's fine for people to decide to focus locally instead of internationally, or to do good but not the utmost good - but even when we congratulate them for doing good things, they shouldn't tell themselves it's the most effective choice.

Second, it sounds like a basic EA fellowship would be beneficial. Instead of having members have conversations about EA, have them read and discuss sources and topics directly, and ask them to think about where they agree and disagree with the readings. And to address one of your issues more specifically, I definitely think that you should have more discussions about cause area choice - not specific cause areas themselves. Some questions might include: what makes a cause more effective than another? How would you know? If evidence exists, what makes people disagree about where to focus? What sorts of evidence are needed to know if an intervention will generalize?

Lastly, I think that people should spend less time thinking about how *they* can help, and more about what needs to be prioritized globally. I think the idea that people should maximize their personal impact is often misleading, compared to asking what matters, and then figuring out where you can counterfactually contribute most. And even for people who want to dedicate their careers to effective interventions, sometimes - often - that means building career capital in the short term, rather than direct work.

I agree that focusing on epistemics leads to conclusions worth having. I am personally skeptical of fellowships unless they are very focused on first principles and, when discussing conclusions, great objections are allowed to take the discussion completely off topic for three hours.

Demonstrating reasoning processes well and racing to a bottom-line conclusion don't seem very compatible to me.

Changing minds and hearts is a slow process. I unfortunately agree too much with your statement that there are no shortcuts. This is one key reason why I think we can only grow so fast.

Growing this community in a way that allows people to think for themselves in a nuanced and intelligent way seems necessarily a bit slow (so glad that compounding growth makes being enormous this century still totally feasible to me!).

Hi, thanks for the detailed reply. I mostly agree with the main thrust of your comment, but I think I feel less optimistic about what happens when you actually try to implement it.   

Like, we've had discussions in my group about how to prioritize cause areas, and in principle everyone agrees that we should work on causes that are bigger, more neglected, and tractable. But when it comes to specific causes, it turns out that the unmeasured effects are the most important thing, and the flow-through effects of the intervention I've always liked turn out to compensate for its lack of demonstrated impact.

I'm not saying it's impossible to have those discussions, just that for a group constrained on people who've engaged with EA cause prioritization arguments, being able to rely on the arguments that others have put forward (like we can often do for international causes) makes the job much easier. However, I'm open to the possibility that the best compromise might simply be to let people focus on local causes and double down on cultivating better epistemics.

(P.S.: I now realize that answering every comment on your post might be quite demanding, so feel free to not answer. I'll make sure to still update on the fact that you weren't convinced by my initial comment. If anything, at least your comment is making me consider alternatives that don't restrict growth and avoid the epistemic consequences I delineated. I'm not sure if I'll find something that pleases me, but I'll muse on it a little further.)

You don't need to convince everyone of everything you think in a single event. 🙂 You probably didn't form your worldview in the space of two hours either. 😉

When someone says they think giving locally is better, ask them why. Point out exactly what you agree with (e.g. it is easier to have an in-depth understanding of your local context) and why you still hold your view (e.g. that there are such large wealth disparities between countries that there is some really low-hanging fruit, like basic preventative measures against diseases such as malaria, and so you currently guess you can still make more of a difference by donating elsewhere).

If you can honestly communicate why you think what you do, the reasons your view differs from the person you are talking to, in a patient and kind way, I think your local group will be laying the groundwork for a much larger movement of people who care deeply about helping others as much as they can with some of their resources. A movement that also thinks about the hard but important questions in a really thoughtful and intelligent way.

The best way to change other people's minds for me is to keep in mind that I haven't got everything figured out and this person might be able to point to nuance I've missed.

These really are incredibly challenging topics that no-one in this community or in any community has fully figured out yet. It didn't always happen in the first conversation, but every person whose mind I have ever ended up changing significantly over many conversations added nuance to my views too.

Each event, each conversation, can be a small nudge or shift (for you or the other person). If your group is a nice place to hang out, some people will keep coming back for more talks and conversations.

Changing people's mind overnight is hard. Changing their minds and your mind over a year, while you all develop more nuanced views on these complicated but still important questions, is much more tractable and, I think, impactful.

If it's a question of giving people either a sense of this community's epistemics or the bottom line conclusion, I strongly think you are doing a lot more good if you choose epistemics.

Every objection is an opportunity to add nuance to your view and their view.

If you successfully demonstrate great epistemics and people keep coming back, your worldviews will converge based on the strongest arguments from everyone involved in the many conversations happening at your local group.

Focus on epistemics and you'll all end up with great conclusions (and if they are different to the existing commonly held views in the community, that's even better, write a forum post together and let that insight benefit the whole movement!).

Oh, I totally agree that giving people the epistemics is mostly preferable to handing them the bottom line. My doubts come more from my impression that forming good epistemics in a relatively unexplored environment (e.g. cause prioritization within Colombia) is probably harder than in other contexts.
I know that at least our explicit aim with the group was to exhibit the kind of patience and rigour you describe, and that I ended up somewhat underwhelmed with the results. I initially wanted to try to parse out where our differing positions came from, but this comment eventually got a little long and rambling.
For now I'll limit myself to thanking you for making what I think is a good point.

I don't know if this is what you mean by cultivating better epistemics, but it seems super plausible to me that the comparative advantage of a Colombian EA university group is to work towards effective solutions to problems in Colombia. If you think most of your members will continue to stay in Colombia, and some of them might go into careers that could potentially be high impact for solving Colombian issues, that seems like a much more compelling thing to do than be the Nth group talking about AI or which GiveWell charity is better.

I agree EA could have much more impact if it expanded. Relevant is Peter McClusky’s "Future of Earning to Give" argument for thinking big.

One way to proceed would be to try to learn from those who have been doing this long before any mention of EA. As just one example, in the United States, Catholic Charities is the second leading provider of services to the needy, topped only by the federal government. The Catholic Church has been involved in these kinds of projects in one way or another for 2,000 years. They might be worth talking to, not to mention being a potentially strong ally. (PS: I'm not religious)

I think there’s a need to separate out an effective giving movement (which should and can be enormous) from a more careers-based / direct-impact-focused effective altruism movement (which probably can’t become enormous).

Given the historical origin of the term, the directly impactful careers folks are probably the ones who would need a new name. And while I think organizations are already specialized or specializing along the lines of careers and direct impact versus funding and socialization between members, I don't really think that it makes sense to split a movement or philosophical orientation by the methods used to pursue the goals.

I think effective giving and high-impact careers aren’t just methods of pursuing the same goals - they also represent different levels of commitment to doing good, different levels of moral demandingness.

I think some people will see the high-impact career focus in EA and will consider career changes to be a big sacrifice, and so will feel alienated and not engage with EA at all, even though they could have taken up effective giving if they had come across it separately.

What does Effective Altruism look like if it is successful?

Ideally, a very well-coordinated group that acts like a single, rational, wise, EA-aligned agent. (Rather than a poorly coordinated set of individuals who compete for resources and status by unilaterally doing/publishing impressive, risky things related to anthropogenic x-risks, while being subject to severe conflicts of interest).

Strongly disagree - groups of people aren't single agents. Worse, historically, deciding to pursue a movement's goal even when it turns out to be fundamentally incompatible with human, economic, and other motives leads to horrific things.

groups of people aren't single agents

I agree. But that doesn't mean that the level of coordination and the influence of conflicts of interest in EA are not extremely important factors to consider/optimize.

deciding to pursue a movement's goal even when it turns out to be fundamentally incompatible with human, economic, and other motives leads to horrific things.

Can you explain this point further?

  • Yes, coordination is good, and should be considered. But I don't think that a well-coordinated group can or should look anything like a single agent.
     
  • Communism was really pretty bad, as were many other movements with strong views about what should happen and little attention paid to humans.
     

I'm not sure what exactly we disagree on. I think we agree that it's extremely important to appreciate that [humans tend to behave in a way that is aligned with their local incentives] when considering meta interventions related to anthropogenic x-risks and EA.
