At the heart of Effective Altruism is a commitment to doing "as much good as possible", or maximizing counterfactual impact. EAs break counterfactual impact down into three components: scale (how big is the problem?), tractability (how solvable is it?), and neglectedness (how neglected is it?).
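For concreteness, one standard way to operationalize this decomposition (this is 80,000 Hours' factorization of the ITN framework, not something the post itself spells out) is as a product of three ratios whose middle terms cancel, so that marginal cost-effectiveness is literally scale × tractability × neglectedness:

$$
\frac{\text{good done}}{\text{extra dollar}}
=
\underbrace{\frac{\text{good done}}{\%\,\text{of problem solved}}}_{\text{scale}}
\times
\underbrace{\frac{\%\,\text{of problem solved}}{\%\,\text{increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\%\,\text{increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
$$

Note that neglectedness enters as "how much does an extra dollar grow the resources already devoted to the problem", which is why crowdedness matters so much in what follows.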

The "bad" kind of neglectedness is when something is neglected precisely because it's not tractable. The "good" kind is when a problem remains unsolved or a task undone either because no market or commercial incentive exists to solve it (e.g. animal welfare) or because the commercial incentive is small relative to the social benefits of solving it (e.g. pandemic preparedness).

EA thinks on the margin. The central question it asks is: holding most of the external world constant, how can I use my time and resources to have the highest personal or organizational impact? But as EA gets exported from the grungiest and geekiest corners of Berkeley and Oxford into the "real" world, thinking on the margin presents a practical problem, especially with respect to neglectedness.

This might be a good time to differentiate between 'ea', the philosophy of effective altruism, and 'EA', the Effective Altruism movement, largely funded by Dustin Moskovitz via Open Philanthropy. I see the latter as just one instantiation of the former idea.

Notice that EA, or any social movement that is at some level cause-neutral and cost-effective, has to factor in neglectedness. If you must allocate scarce resources between problems to do the most good, you can't remain indifferent to how others are allocating their resources, since you also care about maximizing impact per dollar spent.

Any movement also has to self-perpetuate to be successful and accomplish its goals. Past a particular size, any movement that imbues its adherents with a desire to work on neglected problems will become inimical to its own growth.

The following might drive home the intuition: if we lived in a world in which the EA movement was 10x its current size, would shrimp welfare as a cause be more neglected or less, relative to how neglected it seems today? Someone who comes along having internalized the 'ea' message of doing the most good with their resources would, all else equal, be less enticed by EA cause areas, because a larger EA renders a cause less neglected soon after declaring it a 'top priority'.

This tension manifests clearly in the current oversubscription problem in EA jobs. Operations roles at EA organizations that pay well under $100,000 receive thousands of applications, with extensive selection processes spanning 3-6 months. On the bright side (for EA), this is a marker of success. When a job gets tagged as "EA", it confers credibility and status as one of the "highest impact jobs" out there. This is basically the thesis of EA come true: aligning incentives such that the gap between optimizing for status and optimizing for impact is as narrow as possible.

However, the more successful EA gets, the less likely it is that EA jobs are the most impactful jobs out there. Some in EA defend the status quo with a canonical line about power laws: since these jobs are so much higher impact than everything else, they're just not worried about oversubscription; on this view, the marginal value of an additional applicant does not diminish even with thousands of applicants.

But this seems implausible for most roles with bounded autonomy, even in exceptionally impactful organizations. The exceptions are high-leverage roles like leading organizations or specialized technical positions. For a marketing manager or an operations coordinator, it's hard to argue that the delta between the best and second-best candidate in a pool of 2,000 qualified applicants justifies this insistence on working for an EA organization.
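A toy model (my illustration, not the post's) makes the diminishing returns concrete. Suppose, purely for the sake of argument, that applicant quality is i.i.d. standard normal; then the expected quality of the best of N applicants grows only like √(2 ln N), so doubling an already-large pool barely moves the needle:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def expected_best(n_applicants: int, trials: int = 2_000) -> float:
    """Monte Carlo estimate of the expected quality (in standard
    deviations above the mean) of the best candidate in a pool of
    n_applicants, assuming quality is i.i.d. standard normal."""
    pools = rng.standard_normal((trials, n_applicants))
    return pools.max(axis=1).mean()

for n in (10, 100, 1_000, 2_000):
    # Prints roughly 1.54, 2.51, 3.24, 3.45 respectively.
    print(f"{n:>5} applicants -> expected best ~ {expected_best(n):.2f} sd")
```

On this toy model, growing the pool from 10 to 100 applicants buys about a full standard deviation of expected candidate quality, while growing it from 1,000 to 2,000 buys roughly 0.2; the value of each marginal applicant shrinks rapidly.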

This points to a deeper challenge for EA, best seen through the lens of public choice theory. EA is not just a handful of grantmakers trying to allocate resources; it is also the social and intellectual capital of the movement - the people who generate ideas and execute projects. For example, a substantial portion of EA's intellectual capital is now building careers in AI safety.

If you build a career in Area X, you will naturally be slower to update downward on X's relative importance. You'll see more arguments for X's significance, develop deeper understanding of X's complexities, and be better positioned to articulate why X matters. Even with purely altruistic motives, you might think: "I understand X deeply now, so I need to make sure others appreciate its importance."

This creates a form of intellectual and institutional lock-in. When EA identifies a cause area and invests in it, it's not just allocating money - it's creating careers, expertise, and institutional infrastructure. Any movement sufficiently large and invested in specific causes will face pressure to maintain these structures, potentially at the expense of pure cause neutrality.

One might argue for a distinction between grantmaking organizations at the highest level of EA – which strive for cause-neutrality – and the organizations they fund that work on specific problems. But this is likely a distinction without a difference. The same institutional forces that make it hard for individual EA professionals to remain purely cause-neutral affect the movement's central institutions through network effects, shared discourse, and the need to maintain stable organizational structures.

One potential solution is to transform EA into a movement that primarily focuses on raising and allocating capital, rather than providing subsidized labor to "important causes." Under this model, EA would leverage market mechanisms and incentives to achieve its goals, with movement-building efforts centered on earning to give. 

While some might object that ambitious EA projects require high-trust, value-aligned teams since impact can't be tracked purely through metrics, this argument deserves more scrutiny. Yes, corporations at the highest level have a clearer optimization target in profits, but at each lower level of the hierarchy they face the same challenges of incentive alignment and Goodharting that EA organizations do. Despite this, good companies manage to build effective hierarchies and get important things done. EA could similarly harness incentives and competitive dynamics to its advantage.

Comments (2)

I really liked this post, and found the second half of it especially insightful.

Executive summary: The Effective Altruism (EA) movement faces an inherent tension between its focus on neglected causes and its own growth, as increased attention to previously neglected areas makes them less neglected and potentially less impactful for new participants.

Key points:

  1. EA's success in attracting talent has led to severe oversubscription in EA jobs, challenging the assumption that these positions remain the highest-impact opportunities.
  2. The movement creates institutional lock-in through career paths and expertise development, making it difficult to maintain pure cause neutrality.
  3. EA organizations face public choice theory challenges as they build infrastructure and careers around specific cause areas.
  4. Proposed solution: EA could shift focus to primarily raising and allocating capital rather than providing subsidized labor, leveraging market mechanisms instead.
  5. Current model of requiring value-aligned teams may be unnecessarily restrictive, as corporations successfully handle similar incentive alignment challenges.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
