EA claims to be utilitarian by urging giving to the most downtrodden, for example, low-income people in Africa. My definition of utilitarian is very different: "What will end up doing the most good for humankind?" And that would, for example, prioritize donating to SENG, which helps troubled intellectually gifted kids in developed nations live up to their potential. Those kids are thus more likely to develop cures for diseases, develop helpful yet ethical uses of artificial intelligence, and become wiser, more ethical leaders, which benefits humankind more than the causes EA typically touts.

[anonymous]

Welcome to the EA Forum, Marty. Thanks for posting.

A few thoughts:

  1. EA does not claim to be utilitarian
  2. Urging giving to the most downtrodden is only one common recommendation in EA (and in fact the community is often criticised internally and externally for not giving enough to the most downtrodden)
  3. My definition of utilitarianism is not restricted to humans (far from it!)
  5. This community discourages confidently stating controversial opinions without much supporting evidence/argument - I think people would be more receptive if you spoke with more humility, e.g. instead of saying "And that would," saying "And that might" or "And I think that would"
  5. Lines of reasoning that point to apparent panaceas are obviously very alluring, but given that you can make similar arguments for many other apparent panaceas (economic growth, empathy training, rationality workshops, global health, improving science, etc.), I think you need a much stronger argument to persuade people that your proposed solution should be prioritised above all others
  6. I still like your basic point and it looks like the Future Fund did too since they funded Pratibha Poshak (who recently posted on this forum asking for support, incidentally)

Welcome to the forum. I see that this is your first post. As others have mentioned, there would still be some fleshing out to do, but thanks either way!

I think one of the reasons why this proposal isn't really part of the EA mainstream is that EAs tend to differentiate into cautious global health and development people and speculative risk-takers who go into domains such as AI, biosecurity, institutional decision-making, etc. There are a few people in the middle, e.g., the work of Charity Entrepreneurship or Innovations for Poverty Action could be categorized as speculative global health, but it's not that common.

Interesting. I agree that second- or third-order effects, such as the good done later by people you have helped, are an important consideration. Maximising such effects could be an underexplored effective giving strategy, and the organization you refer to looks like a group of people trying to do that. However, to really assess an organization's effectiveness, especially if it focuses on educational or social interventions, some empirical evidence is needed.

  • Does SENG follow up on the outcomes of aid recipients?
    • How do these outcomes compare with those of similar people in similar situations who didn't receive help?
  • What programs does SENG run?
    • How much does each cost per recipient helped?

Perhaps this article I've recently written will be helpful. It offers a number of examples of what I believe is more effective altruism than what the EA movement mainly touts: https://medium.com/@mnemko/more-effective-altruism-d05feba47ce3
