Thank you to Emma Abele, Lennart Justen, Lara Thurnherr, Nikola Jurković, Thomas Woodside, Aris Richardson, Jemima Jones, Harry Taussig, Lucas Moore, and James Aung for inspiration and feedback. All mistakes are mine.
Summary
- I don’t think powerful emotions related to the desire to do good in the world (e.g. feelings of awe, outrage, or fear) surface enough in EA relative to how important they can be for some people.
- I argue that we should give such feelings of ‘emotional altruism’ more consideration.
- Possible benefits of a greater emphasis on emotional altruism include: Improving internal community health; Accelerating EA engagement; Conveying a more sympathetic community image; Reaffirming healthy motivations; Creating an inspiring work setting; and Instilling personal well-being/purpose.
- Ways to affirm emotional altruism include: More within-community discussions; Talking more about beneficiaries in outreach; Crafting experiences that allow powerful emotions to bubble up; More powerful writing; More videos and virtual reality content; New creative artwork; and Deliberately having standout conversations.
Introduction
Effective altruism grapples with profound themes. Themes like suffering at unimaginable scales, futures that transcend our notions of what “good” could look like, and the end of everything we’ve ever known do not occupy the realm of ordinary thoughts. And for many, myself included, these themes do not inspire ordinary emotions. Rather, they are entangled with deep, visceral emotions – emotions that can cast us into deep reflection, that can bring us to tears, and that may underlie the very reason we are in EA.
The feelings I’m gesturing at are delicate, and so is the art of conveying their immensity. Unfortunately, I’m not here to offer them the poetic justice they deserve. Rather, I’m here to claim that these emotions don’t get enough attention, and discuss why I think giving them more attention could benefit the EA community.
What emotions am I talking about?
I’ll give you a sense of what emotions I’m talking about when I use terms like “visceral motivation” and “emotional altruism” by sharing what thoughts can, at times, evoke these feelings for me. But a caveat first: I’m not referring to any one emotion. I’m thinking about a constellation of different emotions, all sharing an ability to inspire good. Feelings that come to mind for me include:
- A sense of awe: Holy shit, so much is at stake. The lives of billions, wait trillions, of beings may be at stake in this century. My care-o-meter simply doesn’t go high enough to feel these numbers anywhere near where they should register. But, every once in a while, I get a glimpse of the moral vastness such a number demarcates – and it’s a vastness I can’t overlook.
- A sense of horror: What if we fail? What if we, humanity, fall short and the lives of billions are ended and foreclosed? It’s not written in the stars that humanity goes on and flourishes— what if we live in a world where that doesn’t happen?
- A sense of guilt: Who am I to have been born so lucky? At no expense of my own, I’m among the most well-off human beings who have ever existed. What did I do to deserve this, and how can I possibly repay my good fortune?
- A sense of loving-kindness: Think of the billions of conscious creatures walking this world right now, at the whims of their environment. What a peculiar condition we find ourselves in, craving to make sense of the world, yet so often victims of it. If I could relieve the suffering of all beings, I would. May all find peace and joy in their cosmic blip.
- A sense of outrage: How can so many of us act like everything is fine? Can’t you see all the people dying only because people didn’t give a shit?? Can’t you see all the animals being mass-produced only to be slaughtered for our normalized cuisine?? Open your eyes!
- A sense of gratitude: Oh, how lucky I am to be alive; to be empowered to do something with my aliveness. What an incredible opportunity I have to impact the lives of so many around me; what an incredible group of people I’ve surrounded myself with.
- A sense of yearning: The story of humanity could just be getting started. We could do so, so much more. Creating a profoundly good world is actually possible, and I so desperately want to work towards that world.
- A sense of siblinghood: We are all in this together. We've all been dealt a really unfair hand, and we have to make the most of it, together. The problems tower over us. Alone we cannot survive or thrive, but together we stand a chance.
- A sense of ancestral duty: We stand on the shoulders of giants. People have been fighting to make their own microcosms better for millennia; the torch has been passed to us and it flickers in the wind. We owe it to every generation before us to pass the torch to the next generation.
Some people may relate to a few of these emotions, some none, and some might wish to add their own. (Please share what feelings resonate for you in the comments!) I don’t think the specifics matter for the argument I’m making. Crucially, there exist raw, deeply personal emotions that underpin one’s desire to do good, and I think many EAs feel such emotions in the face of suffering or the vision of a brighter future. This visceral response to a cruel world or a vast potential is honed by rational thought to land at effective altruism, but for many the motivation is emotional at its core.
This isn’t the case for everyone: some people may arrive at EA following a series of rational arguments void of strong emotional appeals. And that’s fine. I do not wish to prescribe a “correct” motivation to do good in the world, I just want to speak to the emotional one. Even if it is not present at all times or accessible in all settings, I think powerful emotions form the bedrock of many people’s connections to effective altruism.
Yet the emotional weightiness of EA can feel muffled, glossed over at times. This can be at a personal level: in moments where life is asking a lot, it can be difficult to find those pure wells of motivation. That seems normal. But I also sense this at a community level: aren’t we here for the same reason? Don’t so many of us feel the same yearnings?
There are fair reasons to quell strong emotional motivations. At times we need to dilute our emotions with a pragmatism that allows us to make progress on the very thing our emotions are telling us is important. And talking about these emotions can be scary. They’re personal, and disclosing them can leave one feeling vulnerable.
But I think the resonance we can feel with others who share our hopes for a better future is worth stepping out of our comfort zone for. What's more, if we gloss over the emotional weightiness of what's at stake and how it makes us feel, I think we undersell the EA project. We risk giving outside observers a distorted picture of why we do what we do, and we risk not inviting in people who also feel that the world is fucked up.
We should be mindful of muffling the emotional grandeur embedded in the problems we tackle. I’m not claiming EA needs a seismic shift in the extent to which it relies on emotional vs. rational appeal, or that all members should gush about their most personal motivations for doing what they do. But, for those among us who draw a lot from emotional reservoirs, a little more emotionality could go a long way.
Possible benefits of a greater emphasis on emotional motivation
- Improving internal community health:[1] For better or worse, EAs judge one another. It can be easy to see others behaving in a way that seems utterly unrelatable and question their motivations. Anecdotally, friends have shared how unsettling they’ve found some encounters at EA ‘rationalist’ events, in part because they lacked the sense that everyone was there for the same reason: helping others. This uncertainty leaves an avoidable uneasiness. I became more sympathetic to parts of the EA community I found foreign after a late-night conversation with an AI safety field builder about free spending. “We’re sprinting,” I remember being told, “because we think we might fail. There’s a world in which we all die because we held back – because we didn’t do enough.” That hit. That, I could relate to.
- Accelerating EA engagement: EA outreach content solidly conveys EA ideas, but I don’t think it sincerely conveys how one can experience EA as something more than a cool idea. I, and quite a few others I’ve talked to, needed something like 500 Million, But Not a Single More or On Caring to really grasp onto EA. If it weren’t for those, we might not have stuck with EA or our engagement would have been much slower.
- Conveying a more sympathetic community image: People will judge anyone claiming to ‘do the most good’ critically. As a community of mostly consequentialists, the EA community has an uphill climb to be likable.[2] We’re only making it harder for people to like us if we don’t radiate the genuine altruistic source of our actions.
- Reaffirming healthy motivation: I think if those of us with strong emotional motivations talked about them more openly, we could collectively steer ourselves towards healthier, less guilt-based motivation.
- Creating an inspiring work setting: I relish working with and under people who have reflected on why they’re in EA. It makes me want to work my ass off. It doesn’t need to be much either: just a nod that our feelings are pointing us somewhere similar. I also feel this when I read posts from community members I’ve never met. “On Future People, Looking Back at 21st Century Longtermists” made me want to hug Joseph Carlsmith – and then dedicate myself to my work.
- Instilling personal well-being/purpose: Cultivated correctly, I think emotional motivations to do good can be the foundation for a deep sense of well-being and purpose.
Ways to affirm emotional motivation in effective altruism
Below are ways I think the effective altruism community could better tap into people’s emotional motivations to do good in this world. I’m excited about these, but I think they need to be approached delicately. Conversations, content, or experiences in the wrong settings can come across as weird or culty. Some of the recommendations below need more careful consideration than I’m giving them here.
More within-community discussions
I wish it were more common to talk about our EA motivations with fellow community members. Why do you dedicate yourself to this? Such conversations should be approached delicately, but, if you ever really resonate with someone, I’d encourage you to go out of your comfort zone and steer the conversation to the feel-y realm.
Example: Weekly work check-ins to share motivations and try to cultivate a shared sense of purpose. Some people may also draw a lot from ‘metta,’ or loving-kindness, meditations.
Talk about beneficiaries in outreach
When we talk about EA with those unfamiliar with it, we should remind people of the end goal: helping sentient beings. Give others something to grasp onto. “Why are you going to this EA Global conference?” “I want to help people in our generation and future generations, and connecting with this community seems like one of the best ways I can do so.” EA is instrumental in helping others, but I’m worried our appeals often ascribe it terminal value[3] (not that EA isn’t something to celebrate!).
Example: When talking about biosecurity, paint a picture of the millions of people who could die awful deaths if a virus were released, not just the different levels at which one could intervene.
Craft experiences that allow for powerful emotions
It’s hard to overstate the importance of setting and vibes when conveying emotional motivations. The same words in a fluorescent-lit classroom and a moonlit field evoke very different feelings. With this in mind, I think smaller, more intimate retreats can often offer the most transformative experiences. They allow for a change in setting and a sense of shared purpose that even EAG events can’t match. (EAG after-parties or late-night walks, however, are a different story.)
Example: Organize events (e.g. small retreats) that take aspiring EAs out of their default settings. Be deliberate in designing programming (e.g. late-night walks) that allows for these conversations, but don’t force it.
More powerful writing
Powerful writing has already left a mark on many EAs. I and many others cite works like On Caring as formative in our EA engagement. I’d love to see more people try to capture their own motivations (or challenges) in personal pieces, or even fiction.
Examples: more writing like On Caring, 500 Million, But Not a Single More, The value of a life, and Killing the Ants. I’d love to see more thoughtful pieces detailing why you are involved in EA. I’d also like to see more people experiment with poetry, fiction, and a variety of framings of EA concepts (they need not explicitly reference EA). For example, I’d love to see more explorations of:
- Our psychological pulls away from rational compassion, like scope insensitivity, an aversion to prioritization, and a narrow moral circle.[4] (I’ve created this interactive module on the topic, which I hope to write about more soon).
- Empathizing with the human condition across time – in the past, present, and future.
- The possibility that the entire future of humanity might actually hinge on the next few centuries.
- The novelty of the altruistic situation we find ourselves in, in the 21st century. Never before has it been so easy to be aware of suffering at a global scale and to help at a global scale.
- How good the future could be.
- The possibility that humanity could actually, really fail. There’s no second try.
More videos and virtual reality content
For some people, I think the best video could be better than the best piece of writing for conveying the ‘why’ of EA. And I think the best virtual reality content could be even better.[5]
Examples: Videos like The Egg, The Last Human, There’s No Rule That Says We’ll Make It, and novel virtual reality content have enormous power to convey EA concepts and their corresponding emotional weight.
New creative artwork
Cold take: art in EA seems underexplored. Another cold take: art evokes strong feelings in some people. This post does a nice job sourcing and categorizing some existing art connected to EA. Creating good art that overlaps with EA seems difficult, and my naive recommendation would be to focus more on conveying certain EA-related ideas or mindsets (e.g. coordination problems).
Example: Try to create a comic or graphic novel that conveys EA ideas.
Deliberately having standout conversations
Having standout conversations about people’s personal emotions and motivations towards EA feels like a skill. I perceive attributes of the best conversations to be: consensual (people want to engage), sheltered (people understand that nothing they say will harm them), curious (people try to genuinely understand where the other is coming from), and loving (radiating goodwill).
Example: I’d be excited about people thinking deliberately about how they could improve on such conversations and bringing that energy to EA spaces (e.g., fellowships, board meetings).
Meta-ideas
- Contests: Running a contest to source motivational content would be exciting.
- Interviews with prominent EAs: I like the idea of transcribing (or recording) a series of interviews with prominent EAs about what inspires them to take EA so seriously. It would be cool to do this for people across a range of cause areas, especially those that are not the most ostensibly ‘emotional’ (e.g. AI safety, meta-EA). I think thought-leaders have a lot of power to shift the tone here.
- Research: Researching how other altruistic movements in the past have inspired more action on altruistic principles or whether there’s relevant wisdom in psychology literature. For example, how do you get people to bridge the gap from recognizing something as probably true to really acting on it?
Conclusion
Here’s a metaphor I find fitting for EA’s project:
When it comes to doing good, let emotions be your gas pedal, and careful reasoning your steering wheel.[6]
We’re good at remembering how unreliable our feelings are as guides for how to help other beings. But sometimes I worry we're also good at forgetting the feelings themselves. Don’t, I say. Let’s repurpose those feelings to do exactly what we want them to do.
We don’t need to impose emotional weightiness on concepts like existential risk, animal suffering, or whatever else we’re dedicating ourselves to. Emotional weightiness is already embedded in the parts of reality these concepts point to. We can cultivate emotional motivation that stays true to the pursuit of doing the most good.
This post is written from a place of and about a visceral desire to improve the world. In pure EA fashion, I’ve done gone and intellectualized that feeling. But I hope the sentiment still comes across. Promoting EA should stay closely coupled with promoting – or tapping into – the desire to improve the world. When we talk about effective altruism, we should allow it to be profound.
- ^
This could be its own post. I think there are growing rifts in our community, and I wish we focused more on what we have in common. The AI safety/rationalist communities also care a lot, even if they could convey this better. (AI safety/rationalist communities, please convey this better.)
- ^
See psychology research: Does maximizing good make us look bad? and review of prosocial behavior and reputation. Also, when’s the last time you saw a movie where the protagonist was a consequentialist?
- ^
Thomas Woodside’s speech to Yale EA captures a similar sentiment: “I don’t care about Yale EA”
- ^
See this recent publication on The Psychology of (In)Effective Altruism by Lucius Caviola, Stefan Schubert, and Joshua Greene.
- ^
I’d love to see people think more about what virtual reality could do here – and then do it. I remember being awe-struck at the grandeur VR could convey the first time I put on a headset and floated through the free trial of the International Space Station.
- ^
I forget where I found this and who to credit. Can anyone help me out?
YES.
I love this post. I think EA has a weakness when it comes to storytelling and grabbing hearts. We're great at appealing to the cerebral folk with careful reasoning and logic, but they're a small minority. If we want EA ideas to percolate deeply we need to be outcompeting the other heart-grabbers, which means appealing to emotion and layering the logic on top. IMHO.
At the risk of self-promotion, I wrote a motivational essay on EA a few years ago, Framing Effective Altruism as Overcoming Indifference.
Thanks for sharing! I hadn’t come across this but I like the framing.
Wonderful post Michel; thanks for your thoughts.
I think there's an understandable wariness of emotionality in EA, since it so often leads into Rousseau-ian romanticism, sentimentality, and virtue-signaling – the exact opposite of the Bentham-ite utilitarianism, scope-sensitivity, and rational thinking valued in EA. See, e.g., Paul Bloom's excellent 2016 book 'Against Empathy'.
However, I think it's important for EAs to take emotions more seriously for at least a couple of reasons. (Full disclosure: I've taught a seminar on 'Human Emotions' about ten times for upper-level undergrads, and we often end up discussing EA topics.)
First, emotional experiences are the basic building blocks of sentient utility. If we're trying to maximize the quality, quantity, and variety of sentient well-being, it might be important to understand the origins, nature, functions, triggers, and details of animal, human, and artificial emotions. Both from the outside (e.g. as students of emotion research) and from the inside (as people who have deep personal experience with the full range of human emotions -- including holding them at arm's length during mindfulness meditation or psychedelic experiences).
Second, emotional experiences, as you argue, can be great for recruiting new people, helping them understand the stakes and ethos of EA, and keeping current EAs happy, motivated, and inspired. I agree that more development of EA-oriented stories, art, music, videos, films, etc. could help. The challenge is to recruit the kinds of artsy creatives who will actually resonate with the hyper-rationality of the EA mind-set. The Venn diagram of people who can write great screenplays and people who actually understand long-termism and scope-sensitivity might be quite small, for example. But those are the kinds of people who could really help with EA movement-building – as long as they don't dilute, distort, or trivialize the core EA principles.
Thank you for the thoughtful comment Geoffrey. I agree it's a fine balance between being wary – but not dismissive – of emotions.
I spent the latter half of my psychology undergraduate harping against emotions – talking about how our evolution has left us with unreliable guides for who to care about and how much to care. (Basically regurgitating Against Empathy).
Yet here I am writing a whole post on emphasizing emotions. With enough nuance on which emotions we're talking about and in what settings, however, I think both views are compatible. (I appreciate your comment to that effect).
I also think your Venn diagram comment is apt. I agree it's a narrow overlap, but it's one I'd like to see a few more people with the right aptitudes lean into.
I really appreciated this post and it touches on themes I have wanted to write about as well. I have two related questions/comments:
Thanks for your thoughtful post!
Thanks!
Awesome post! I often think that EA needs to better incorporate this huge part of human experience into its discourse, and I think it can go beyond simply motivating people.
This essay also touched on a lot of themes of the Replacing Guilt essay series, which also came out of the EA community.
I agree that emotional approaches can have upsides. But they can also have downsides. For instance, Paul Bloom has a book-length discussion (which covers EA) on the downsides of empathy. Likewise, it's well-known that moral outrage can have severely negative consequences. I think this post would have benefitted from more discussion on the potential costs of a more emotional strategy, since it seems a bit one-sided to only discuss the potential benefits. (And the comments to this post seem pretty one-sided as well.)
Organizational purpose consultant here. You would not believe the human potential left on the table by orgs that don't tap into our deeper, non-rational/personal motivations.
Nice piece, and I mostly agree as written.
One concern I have about raising the salience of emotions in EA is that they might more frequently turn into prospective incentives (generally bad) instead of retrospective motivation (generally good).
What I mean is that people might (naturally and likely unconsciously) gravitate towards projects and cause areas that inspire positive emotions (say, awe about the future) and away from those that induce negative ones (say, distress about suffering).
This seems right and is something EAs who resonate a lot with emotional pitches should be cautious of. A recommendation would be to first do cause prioritization at a rational, pragmatic level. Then, when digging into the problem you deem most important, try to find parts you can emotionally latch onto to keep you motivated.
Yes, totally agree!
Fantastic topic and writing. Very well argued, and its ethos comes through loud and clear.
(I'm not looking for self-advancement here. Rather, it is my hope that this musical inspiration be a "gas pedal" for others as it has been for me over the years.)
Figuring out the intersection of EA and music is such a brilliant thought that I had yet to have, and getting to connect two worlds I am so deeply involved with seems quite interesting to me – I love that you gave this a go. Want to open it up to be collaborative so I can try to place songs that evoke a similar feeling for me? If not, no worries – I can begin crafting my own.
But also, I noticed that there are multiple connections to visual media throughout (Day One - Interstellar, The Night We Met - 13 Reasons Why, Ashitaka and San - Princess Mononoke, etc.), and given that I resonate with the thoughts connected to these other works, I was wondering whether it would be good to make a sort of "Compassionate Longtermist Emotion Building" list of movies and shows? I can easily see this list including Interstellar and Mononoke (I have qualms with 13 Reasons, but fully feel the beauty of the song) and feel like I have at least a couple more strong additions to go from there (beyond the obvious Don't Look Up). Would love to hear your thoughts on this!
Great post, thanks for writing this up! I'm especially impressed by the compilation and description of different types of motivating emotions, seems quite comprehensive and very relatable to me.
I have one question about a minor-ish point you make:
"This isn’t the case for everyone: some people may arrive at EA following a series of rational arguments void of strong emotional appeals."
I've been wondering about that sort of reasoning quite a bit in the past (often in response to something an EA-minded person said). How can you arrive at EA-ish conclusions and goals solely through a series of rational arguments? Do you not need emotions to feature at some point in order to define and justify how and why you seek to "make the world a better place"? (In other words: how can you arrive at the "ought" solely through rational argument?)
Great article – this is how to reach people: use the tools of creativity to shine a light on the reasoning. For example, Max Tegmark is concerned about slaughterbots. The Black Mirror slaughterbot episode gave everyone who saw it an emotional shiver down their spine. The deep creative skillset large brands tap into should be used on the world's most pressing problems. I've been working on this for the last few years and have started a creative agency to do exactly that.
This post is great! I had the idea for a similar post, but you put it better than I could.
I hope more diverse messaging attracts a variety of people to EA and makes them more engaged overall.
Cf. the excellent philosophy paper "On the aptness of anger", which I always recommend.
That was a very interesting essay! I love the distinction of aptness vs instrumentality.
However, the closing paragraphs take an odd turn – essentially, the author tries to make a move to say, "I reserve the right to be angry because it is one of the few/last instruments by which I can be at all productive." While I agree with that assessment, it seems to do a disservice to the author's argument to draw attention back to the instrumentality of anger. The whole strength of her argument was that there is some place for anger in discourse – just as we grant to aesthetics, beauty, and senses of justice – that stands before considerations of instrumentality.
Lastly, it is also interesting that this essay expresses some disdain for consequentialism as oppressive. That is another intricate dynamic that may be pertinent to EA.
I like it! One of my hopes in writing this post was sourcing posts people have related to this theme so I appreciate you sharing this.
Carl Sagan was inspiring and a great educator.
We must inspire people to think better (or be less wrong), and bring out the best in others (or do good better). We can't reason or lecture people into changing their behavior.
Here's an example:
If your next car were 10x more powerful, would you want more safety features, traffic rules, and driver training? Would you trust car companies alone to address all the risks created by these 10x more powerful cars? What safety features, regulations, and public education will be needed when social media, AI, nanotechnology, robotics, or genetic engineering becomes 10x more powerful? Do you trust companies alone to address all the risks created by new technology?
Perhaps more importantly, what would you do to help humanity become 10x better at being objective, understanding and respecting others, and helping others?
Lastly, (self-promotion coming...) my post about inspiring humanity to be its best:
https://forum.effectivealtruism.org/posts/7srarHqktkHTBDYLq/bringing-out-the-best-in-humanity