I believe altruism itself is one of the most important, neglected, and tractable topics facing the world today.
Altruism is possible at any level of poverty/wealth. When I was working as a pediatrician at a hospital in Afghanistan, we often suffered shortages of antibiotics, due to armed opposition groups blocking our delivery trucks. During one shortage, there were four children in the pediatric ICU who had meningitis. Together, the families would buy the antibiotics in the market and share a vial among their children. Antibiotic dosing by weight meant bigger children received a bigger proportion. On the day the mother of the tiniest baby was to buy the antibiotics, there was also a shortage in the market. I took her aside and explained to her that if she shared the antibiotics she purchased, she would not have enough for her own baby the next day. She was adamant the vial was to be shared.
Meanwhile, there seems to be an unprecedented upsurge in egoism in the world. From all walks of life, people seem increasingly drawn to self-serving activities which benefit themselves or, at best, the people of their own community, with little to no consideration of the impact on humanity, non-human animals, the environment, or the future. There are obvious examples of such activities: nationalism, religious wars, partisan politics, capitalism, elite athletics, consumerism, space tourism, tourism, corporate climbing, and stock trading. Less obvious examples include egoistic leisure pursuits, "bullshit jobs" (David Graeber), the belief that "charity begins at home" (versus other beneficiaries whose need may be more dire), and even the pursuit of publication over ethical meaningfulness or general usefulness in academia.
I wonder whether the effective altruism (EA) community has considered the potential in humanity. It is clear the leaders in effective altruism have given, and are giving, considerable thought to the most important concerns of our time and of the long-term future, and how to address them most efficiently. When I stumbled on the EA organizations, after a favourite author (Rutger Bregman) took the Giving What We Can pledge, I found a school of thought I had been contemplating for years but could never adequately articulate.
Oddly, however, I think I see a missing piece in the work. I hope I am wrong in this observation, but although EA is doing considerable research on artificial intelligence alignment, longtermism, animal welfare, reducing existential risk, and other "large scale, neglected, and tractable" concerns, increasing the practice of simple altruism does not seem to be prioritized.
I would imagine EA would be more effective if more people could be convinced to work in, donate to, or otherwise support the field of effective altruism, or simply to be more altruistic. Additionally, and unsurprisingly to anyone in the EA community, general happiness would improve, both among new EAs and among the recipients of their efforts.
I understand there are several barriers to behaving altruistically; overcoming them is the foundation of my idea.
I believe it is difficult to redirect a person's belief from egoism to altruism. One factor perpetuating egoism is the currently common perception of oneself as the underdog, or victim. I think this perception is driven by mainstream and social media sensationalizing the negative, alarming viewers and readers into victimhood; they then consolidate their efforts to protect themselves, their people, and their property from threat. By threat, I do not mean only existential risks of pandemic or nuclear war, but threats as mild as "inflation is increasing" and "pesticide use on food products may be harmful." Improving one's self-identity as an agent of change, rather than a victim, may move people from egoism to altruism.
Another barrier to utilitarian work in altruism seems to be the lack of understanding that we are a global community. This barrier seems nearly insurmountable, with ongoing religious ideology, nationalism, economic disparity, and the unfortunate legacy of racist, patriarchal history preventing the belief that every person's life is equally worthy. There is some evidence for the contact hypothesis as a means of reducing prejudice between groups; how it could be made effective on a global scale could be an area for research.
As indicated, the extreme disparity in privilege, wealth, and power between individuals, and between and within nations, hampers altruistic pursuits. The "haves" retain what they have and seek more, while the "have nots" continually try to gain what they don't have. Another danger of social media is the constant barrage of images of people with more, and of products we do not have, causing us to believe we are all "have nots." The benefits of equity are eloquently explained in "The Spirit Level" (Wilkinson and Pickett); however, the book does not seem to have garnered a large enough following to effect change. Why?
I do not pretend to know what might improve universal altruism, but I have some ideas. Storytelling is known to be more motivating than statistics, or even facts, at enticing people to give. Motivational interviewing is a guiding style of communication which may be beneficial in moving people from egoism to altruism. It is well accepted that altruism gives pleasure to the altruist ("warm glow" altruism), which seems to be an excellent argument for recruitment. Each of these methods might be a starting point for EA promotion, but scaling up might be the challenge. How to scale up or share these ideas more broadly would also be a good research investment, I think.
As far as I can tell from my (admittedly limited) review of the EA organizations, the advocacy strategy consists of Toby Ord's book, podcasts, social media posts, emails, 80,000 Hours consultations, and the research arising from the Global Priorities and Future of Humanity Institutes. Is it possible that those who work in EA are so surrounded by like-minded individuals that they think the discipline is adequately served?
What I see missing is promotion of the universal benefits of equality, altruism, and goodwill. Here I mean simple altruism, not necessarily effective altruism. Imagine if only 20% of the population worked for the greater good. Or if every person spent 20% of their time at it? Convincing more of the world's population to do right by each other, the environment, animals, and the future, in whatever capacity possible, seems to me to be the best investment the EA community could make. Working at a local soup kitchen may not be the most effective or efficient altruistic pursuit, but what if everyone did something similar, maximizing their personal fit? I have trouble thinking of a downside, but am open to counterarguments.
I think improving general altruism would require a multimodal approach. Areas of research could include the psychology of altruism in human behaviour, including the factors which strengthen and weaken it; the features of behaviour-change models, and which models are most effective; and the effectiveness of various information-sharing methods, such as social media, storytelling, arts and entertainment, academia, famous champions, advocacy, policy, and education. Improving altruism, utilitarianism, and general work for the greater good could be addressed by increasing the number of altruistic policymakers and advisors to governmental and intergovernmental agencies on the large scale, as well as by amplifying small-scale altruistic activities on social media and slowly improving the social culture of altruism.
What happened to the baby with meningitis? Mercifully, the shipment of antibiotics came the next day. The idea that one poor Afghan mother would risk her own baby for the benefit of all the children strengthens my belief that humanity is deeply altruistic, if given the opportunity.
I think the fact that it may be difficult to increase the level of altruism in the population is an important consideration. Many groups have tried to increase the level of altruism, and while there have been some successes, my sense is that history indicates it is hard.
Another consideration is that the large differences in effectiveness between charities mean it may be more impactful to try to make existing donations and altruistic work more effective (e.g., among people who aren't using their altruistic resources effectively) than to try to increase the level of altruism.
My colleague Lucius Caviola and I recently wrote a paper where we discussed how to prioritise between cultivating altruism, effectiveness, and other virtues (the context is slightly different, but I think the paper is still broadly relevant to this post).
Thanks for your comment, and thanks for the excellent paper! I don't disagree with any of it. I am, perhaps, disappointed that you feel improving general altruism is too difficult to approach. It was a question about which I have no information, so I would be very interested to read any literature you have available on the attempts and failures to do so.
Regarding your second point, I also categorically agree that IF the number of altruistic people is limited, their efforts should always be directed to the most effective causes. I just cannot get past the (perhaps idealist) idea that if more people were persuaded to increase their "moral expansiveness", per your paper, there would be no basis for the disparity, discord, and conflict we see within and between races, genders, abilities, religious affiliations, and nations, and our general desire to contribute and help others would simultaneously improve.
Thanks, that's kind! To be clear, we distinguish between altruism per se and moral expansiveness. You can be an altruist but have a narrow moral circle, i.e., be morally partial; conversely, you can be not so altruistic (sacrifice few resources for others) but distribute those few resources impartially. And I think that moral circle expansion is more tractable than increasing altruism, i.e., asking people to sacrifice substantially more resources for others. Also, I should make clear that I don't hold these views with a high degree of certainty.
That's a good point. I've also been wondering about how this differs between cultures (which has to be taken into consideration when designing interventions), especially after reading The WEIRDest People in the World by Joseph Henrich. A quote from the book:
He also gives many examples of the "impersonal prosociality" (i.e., trust, fairness, honesty, and cooperation with anonymous others, strangers, and impersonal institutions such as the government) that we have in WEIRD societies vs. non-WEIRD societies, such as:
That's a beautiful story, thanks for sharing
Thank you for posting! Many kudos for contributing to the frontpage discussion rather than lurking for years like many people (including me).
I agree with most of your assessment here. But I think rather than "simple altruism", it would be better to focus on "altruistic intent". Making this substitution doesn't change much; the major differences are just that it includes EA itself and excludes cynically motivated giving. The thing I think we care about is people trying to do good, not specifically doing non-EA things.
That said, increasing altruistic intent is, I think, included under the heading of broad longtermism. I don't have a source for this, but my impression is that not that much work goes towards broad longtermism because it seems really hard, not that urgent, and EAs tend to be bad at the key skills involved, like persuasion and politics.
I agree! With both your points on renaming it "altruistic intent", and the reasons behind.
I thought perhaps improving altruistic intent must be somewhere on the EA radar, but in the very superficial reading I have done to date, I had not found it. I will look more specifically now at broad longtermism. To be honest, I was also hoping the EA community had more skills in persuasion and politics, and was already working on it.
Finally, thank you for acknowledging my neophyte attempts on a front page post. It took a lot of internal debate and self-talk to write it ;)
Since I don't see it mentioned already, one of the major "cause areas" analyzed by, e.g., 80,000 Hours is "meta-EA" (promoting effective altruism). I understand that one of your points is about trying to promote altruism more generally, and I too have wondered about the potential benefits/tradeoffs of watering down the EA message to try to reach more people (e.g., encouraging people to at least donate to decently effective charities, perhaps "try to find the most effective charity for the problem you want to focus on [even if the overall problem area isn't really that important]"). While I definitely think there are some ways this could be done better, I don't know exactly what they are, and I have also thought of/seen a few counterpoints (non-exhaustive):
There is a chance that it could lead to message confusion/dilution regarding EA.
Persuading people in general to be more altruistic when they wouldn’t have been otherwise (counterfactually) may be fairly difficult.
(Closely tied with (1)) It may legitimately be the case that (counterfactually) persuading/causing one person to aspire to EA is more impactful than persuading 50 people to be more generally altruistic if, for example, the 50 people begin donating to seeing eye dog charities whereas the one EA person is donating to a schistosomiasis charity.
Promoting EA might be done better under a non-EA banner or it may not be EA’s niche/comparative advantage.
Yes, these are all sound counterpoints. Together, they suggest the idea is, at least, neglected. I think your point 2 was also made by Stefan_Schubert in a comment above. I would be very interested to see research in the area, if there is any. I agree your points 1 & 3 would be a problem if the number of altruistic people were finite, but what if everyone behaved altruistically, to the benefit of others? To the point that it would not matter if some people chose to donate to seeing eye dog charities?
I can appreciate your argument that promoting general altruism might not fit under the EA banner, specifically because it lacks the "effective" intent, but I would argue that it could be one of the "hits-based", fat-tailed investments EA has been seeking. What if it were tractable and scalable to make people generally nicer to each other, and to foster the desire to help each other, non-human animals, the environment, and the future, impartially?
Some questions this post made me realise I didn't have answers to, which seem useful to have answers to (there may already be research on them somewhere):
It seems to me that knowing the answers to these questions might help me judge whether this area is something I think EAs should be pursuing. I may look into these myself in the next few days but I'm putting them here in case someone already knows relevant info about the topic.
Yes! All this, and it was better summarized by you, thank you. I am looking for these answers.