
I remain deeply convinced by Effective Altruism (EA), even if I don't always align with the community's conclusions on how to act, because I see EA as a question with no definitive answer. In this article I argue that CEA's guiding principles[1] (commitment to others, a scientific mindset, openness, integrity, and collaboration) are admirable, but I believe they are insufficient on their own.

Many people believe they are doing good, as it's difficult to live with the idea of being a bad person. For example, Hitler likely believed that Jews were a problem and that eliminating them was for the greater good (a view I absolutely reject). Although we don't consider him an altruist, he might have claimed to embody values like those cited above, albeit in a distorted way. This suggests that these values, as they currently stand, are not enough to guide ethical action. If such values can be misinterpreted to justify atrocities, they need supplementation.

Pause for a moment and consider: What additional value might prevent well-intentioned individuals from committing harmful acts under the guise of altruism?

While pointing out a problem is easy, fixing it is more complex. It takes longer to rebuild than to destroy. This is why I ask for your help in finding the right solution. If you have time, please share your thoughts in the comments.

My Thoughts on the Missing Value

I believe the missing element is responsibility. I'm not risk-averse when it comes to doing good. For example, given a 5% chance of saving one million lives versus a 99% chance of saving 50,000, I would logically choose the former, since its expected value is higher. However, we also need to account for the risk of doing harm. Even with good intentions, we are still accountable for any negative consequences of our actions. For example, if the first scenario also carried a 50% probability of killing 2,000 people, we should choose the second option. Of course, in real life things aren't this clear.
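The arithmetic behind this hypothetical can be made explicit. A minimal sketch, using only the illustrative numbers above (none of these figures describe any real intervention):

```python
# Option A: 5% chance of saving 1,000,000 lives.
# Option B: 99% chance of saving 50,000 lives.
ev_a = 0.05 * 1_000_000   # 50,000 lives saved in expectation
ev_b = 0.99 * 50_000      # 49,500 lives saved in expectation
assert ev_a > ev_b        # on benefits alone, A wins

# Now suppose Option A also carries a 50% chance of killing 2,000 people.
# Subtracting expected harm flips the ranking.
ev_a_net = ev_a - 0.50 * 2_000   # 49,000 in expectation, net of harm
assert ev_a_net < ev_b           # accounting for harm, B wins

print(ev_a, ev_b, ev_a_net)
```

The point is only that the same expected-value logic that justifies risk-tolerance on the benefit side must be applied symmetrically to expected harms.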

To properly assess risks, we need to evaluate both the potential benefits and harms. Some might argue that overthinking negative outcomes can be counterproductive, and I agree: caution should be proportional to the potential impact. Effective altruists, who aim to have the greatest impact, should therefore bear a great deal of responsibility. The PlayPump is a well-known example of a well-intentioned project that caused harm, but there are other, less obvious cases that require careful evaluation. Often, these adverse effects cannot be captured through traditional methods like randomized controlled trials (RCTs) because their impact is external. One option would be a democratic assessment of each risk, as the wisdom of crowds can surpass expert opinions. (If enough people are interested in this proposal, like a comment about it, and we could try making a poll for this week's theme: global health vs. animal welfare.)
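The "wisdom of crowds" intuition can be illustrated with a toy simulation: averaging many noisy but unbiased estimates of a risk tends to beat a single, individually less noisy expert. All numbers below are illustrative assumptions, not real data, and the result depends on the crowd's errors being independent and unbiased:

```python
import random

random.seed(0)
true_risk = 0.12  # the (unknown) true probability of harm

# 1,000 laypeople, each individually quite noisy around the truth
crowd = [random.gauss(true_risk, 0.10) for _ in range(1000)]
crowd_estimate = sum(crowd) / len(crowd)

# one expert, less noisy individually
expert_estimate = random.gauss(true_risk, 0.03)

# the standard error of the crowd mean is roughly 0.10 / sqrt(1000) ~ 0.003,
# an order of magnitude tighter than the expert's 0.03 spread
print(abs(crowd_estimate - true_risk), abs(expert_estimate - true_risk))
```

This is only a sketch of the statistical mechanism, not evidence that a poll would outperform domain experts on any particular EA question; correlated or biased errors break the result.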

How I Apply This to EA

Responsibility, in my view, means that decisions must be made. In debates like animal welfare vs. global health, I believe the EA community has remained too neutral for too long. Responsibility requires recognizing that neutrality itself can have negative consequences, and that these should be avoided. Therefore, EA should take a clear stance on what it currently considers the most cost-effective option. Not taking a position can dilute our efforts and reduce effectiveness, especially within each cause area.

This doesn't mean the community can't change its mind as new information becomes available. If decision processes are made public, they can evolve as our understanding and knowledge grow. We could imagine a democratic vote on key decisions and on how to apply them.

For a more concrete example, I appreciate that for each EA fund there’s a section explaining "Why you might choose not to donate to this fund." However, I believe we should take this further by providing a dedicated section on "Which fund is the most cost-effective?" This would help guide decisions more effectively and responsibly.

How I Apply This in Different Areas of EA - Summary

If we add responsibility as a core value, EA’s aim should be to take the most effective actions while considering how to avoid worst-case scenarios in each cause area:

  • Global Health vs. Animal Welfare: The worst-case scenario, in my view, is a healthy and wealthy human species continuing to exploit other sentient beings. While both causes are important, we should prioritize preventing such exploitation. As always, we should be cautious about dichotomous choices; for example, directing funds to global health may still be better than not donating at all.
  • Accelerating AGI vs. Regulating AGI: The worst-case scenario is AGI exterminating sentient beings, so we should focus on regulating AGI. Regulation wouldn't necessarily slow its progress, but accelerating development without adequate oversight poses extreme risks. As always, we should be cautious about dichotomous choices: addressing immediate challenges, such as AI recommendation algorithms, can improve our chances of managing AGI risks in the future. This doesn't mean ignoring AGI risks, but rather shifting focus to more short-term, tractable issues.
  • Global Health vs. Long-termism: The worst-case scenario is helping current generations while neglecting future ones. Toby Ord's argument for long-term thinking should be used to advocate for preventing existential and catastrophic risks like nuclear war, pandemics, and ecological disasters. We should also consider solutions that improve health while reducing long-term risks, such as addressing the overconsumption of animal products.
  • Animal Welfare vs. Animal Breeding: The worst case would be marginally improving animal conditions without addressing the root issue: the sheer volume of animals we raise. We could prioritize educating the public and shifting political discussions to change this. For example, regulating AI recommendation algorithms might be more effective by shifting information flows to raise awareness of these issues.

As always, we should be cautious about viewing each cause in isolation, as this prevents us from seeing systemic issues and their solutions.

To Dive Deeper

Global Health vs. Animal Welfare

This week’s topic is global health vs. animal welfare.

First, I think the term "animal welfare" should be reconsidered. A term like "animal breeding" might better capture the broader impacts, as this issue is not just about the welfare of animals but also affects global health, the environment, and other areas.

I don't believe the worst consequence of funding global health is that poor people would start eating more meat. The real issue is that by directing funds to global health, EA, which is supposed to maximize good, is diverting money from more cost-effective animal welfare initiatives. Philanthropic funds are limited, and every dollar spent on global health is one that isn't going to animal charities. It seems to me that animal welfare is more cost-effective. Therefore, I argue that donations should prioritize animal welfare; only if the donor is unwilling should global health be considered.

Why AI Regulation Could Be More Effective Than Traditional Animal Welfare Initiatives

There’s a significant information gap regarding how meat is produced, its environmental and health impacts, and its hidden costs, which has fueled overconsumption. Investing in public awareness would likely be more impactful than campaigns to free chickens from cages. For instance, AI recommendation algorithms could promote content that highlights these negative effects, potentially influencing millions or even billions of consumers.

On Global Health

In terms of global health, the key question should be: Which approach is most cost-effective? For example, how do we compare "Giving Green" (long-term focus), GiveWell (near-term focus), and The Life You Can Save (broader approach)? If we follow Toby Ord’s reasoning, I believe that Giving Green may be the most important, as it focuses on long-term impact.

AI: An Urgent but Misguided Focus?

The worst-case scenario for the EA community’s work on AI is focusing too much on long-term risks while neglecting the rapid advancements happening now. AI is evolving faster than climate change, and we must address near-term issues—like AI in military applications and information manipulation—so we can influence long-term outcomes.

Systemic Solutions

The EA community often focuses on solving specific cause areas, but given our limited resources, we should also consider systemic solutions that offer high-impact and reduced adverse effects. Two systemic solutions stand out:

  1. Focusing on information flows (AI recommendation algorithms): This could be a game-changer for global health, animal welfare, ecology, and institutional decision-making.
  2. Addressing animal breeding: Tackling this issue would have far-reaching benefits, impacting animal welfare, global health, ecological sustainability, and pandemic risk.

Conclusion

Effective Altruism has made significant strides in tackling some of the world’s most pressing issues, but as the community grows, so too must its values. While the foundational principles of commitment, scientific thinking, openness, integrity, and collaboration are essential, they are not sufficient on their own. Adding the value of responsibility would help ensure that our decisions are not only impactful but also cautious of unintended consequences. It would push the EA community to take clear, accountable stances on difficult trade-offs and act decisively, while remaining open to change as new evidence emerges.

Responsibility also means thinking systemically: recognizing that choices in one area affect others, and that solutions can and should address multiple cause areas simultaneously. Whether we're debating global health versus animal welfare, or how to handle AI risks, it's crucial that we not only aim for the greatest good but also take care to avoid the worst outcomes. By integrating responsibility into our values, we can navigate the complexities of doing good more effectively and ethically.

In this article, I am expressing my opinion on subjects I admit not knowing much about. I am not advocating for these precise measures, but I believe we should research these ideas as a community. Effective altruism should still aim for the most effective altruistic actions. What I want people to remember is simple: with great power comes great responsibility. While this principle primarily applies to those with power within the community or to wealthy individuals, it is crucial for every organization claiming to be inspired by effective altruism.

The future of EA depends on making tough choices and embracing the accountability that comes with them. Only then can we truly maximize the good we hope to achieve.

Notes

This article was improved with the assistance of ChatGPT, as my writing skills aren't as strong as I'd like them to be. This is my first post, and I hope you find it interesting. It summarizes my thoughts on Effective Altruism, which I’ve been acquainted with for a few years. I'm open to learning and evolving my views, so please feel free to comment if you have a different perspective, criticism, or suggestions.

  1. ^

Comments

This all makes sense to me - I am fairly new, but I also think that EAs already think a lot about the downsides of their actions (the pattern of "advantages of x minus disadvantages of x means that the expected value is Y" seems pretty common, the Rethink Priorities portfolio builder tool (https://rethinkpriorities.org/publications/portfolio-builder-tool) also has "expected negative value" bits, and people seem to care a lot about downstream ripple effects from e.g. health interventions). Are there some specific examples of EAs ignoring downsides that motivated this post?

Hello,

Yes, I believe we do see some people in the community talking about downstream ripple effects. Here I wanted to point out that it is in neither the values nor the principles of CEA, which I believe would be important. I do not have specific examples (it seems to me that the RP portfolio tool is made to reflect your opinion of risk, not a factual risk). If I had to give some examples, I would say that a lot of EA orgs aren't clear about their choices and the possible downsides of their work, like GiveWell, The Life You Can Save, Giving Green, etc. For example, on neither GiveWell's nor The Life You Can Save's website can you find the GHD vs. AW dilemma, and on Giving Green's you can't find possible downsides of advancing or decarbonizing energy. I really appreciate the work that they are doing. More problematic for me are companies that have a great impact, like OpenAI: they are making money developing a dangerous technology without acknowledging the risk they are taking. I realize my criticism isn't worth a lot, since I am too idealistic for the real world, where funders probably don't want to hear about GHD vs. AW. Nonetheless, I believe responsibility should be part of the EA values, and we should be more cautious about areas where the community has a great impact (because it took hold of neglected issues like AI).

Executive summary: The author argues that Effective Altruism (EA) should add "responsibility" as a core value to its existing principles, to help guide ethical decision-making and prevent unintended harm.

Key points:

  1. Current EA values are insufficient to prevent misuse; adding "responsibility" could help avoid harmful actions done with good intentions.
  2. EA should take clearer stances on key issues like global health vs. animal welfare to increase effectiveness.
  3. Decisions should consider both potential benefits and risks of harm, with caution proportional to impact.
  4. The author suggests prioritizing animal welfare over global health based on cost-effectiveness.
  5. Systemic solutions like regulating AI algorithms and addressing animal breeding could have wide-ranging benefits.
  6. EA needs to balance long-term goals with addressing urgent near-term issues, especially regarding AI development.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

If you believe that we should implement direct democracy in EA and begin by trying it on this dilemma (AW vs. GHD), like this comment :) If enough people are interested, I'll post a group link in the comments so that we can think together on how to go forward with it.
