Lots of folks have written about burnout in EA. It's been a problem since nearly the beginning of the movement. Folks get excited to tackle big problems, get overwhelmed, burn out when they realize they can't fix everything themselves, and sometimes even leave EA because it's all too much to handle.

I've recently become aware of Active Hope, a free, seven-week online course that teaches people how to respond to big problems in the world—like the kind we try to tackle in EA—from a place of hope rather than despair. Each week of the course is approximately one hour of videos and text materials, with exercises to take into your life.

I've not completed the course, but I did work through the first week of it. Here's what I can say:

  • I've never really had a problem with burnout from working on EA issues (I'm focused on x-risk from AI).
  • I think that's because I have a mindset and practices that support me in ways that prevent getting into a headspace where burnout is possible.
  • Among the key things I think help the most:
    • I remember that everything is connected, and everything I do has impacts I may never even know about.
    • I know that I can only ever do my best, because I literally could not have done any more.
    • I leave space for others to do their part. I work where I have a comparative advantage and don't worry about the things I can't meaningfully address (yet!).
    • I have gratitude and appreciation for everything I and others do, even when it's not enough to fully address the world's problems. I need only do as much as I can and no more.
  • The course teaches you how to develop a mindset like mine by introducing practices that cultivate it.

The course has other interesting synergies with EA. Its actual purpose is not to help people who've burnt out in EA, but to help people who feel helpless in the face of the world's problems get motivated to do something more than wallow in despair. Although the creators of the course seem mostly concerned about issues like climate change and racism rather than global poverty, animal suffering, and x-risks, at the core they have a similar message to EA: you can do good in the world, and you can have a larger marginal impact by prioritizing how you allocate your personal resources. So I think many folks will find it quite compatible with EA ideas, even if it was created by folks who, as far as I know, are unaware of EA.

I learned about this course because someone at my sangha mentioned it and we're considering running an in-person course based on the materials. I bring this up because, although the practices in the course are secular, they take inspiration from Buddhist practices. If you're interested in the intersection of Buddhism and EA, consider joining the Buddhists in EA Facebook group.

Comments

I have 15+ hours of experience running Active Hope workshops, and 50+ as a participant. Happy to chat if anyone wants to dive in more deeply.

Specifically, I've done a bunch of thinking on how to adapt the deep ecology-based models of Active Hope workshops to an orthodox AI safety audience. For more info on the general framework, further resources, and a list of 7 self-reflection prompts to try it out, feel free to take a look at this writeup I made for the attendees of a workshop at LessWrong Community Weekend 2022.

This course seems valuable. Thanks for sharing!

I could see this being an inspiring resource for EAs who struggle with imposter syndrome (myself included), especially when paired with posts like "Seven ways to become unstoppably agentic".

I'm curious why this has gotten so many downvotes but no comments indicating disagreement.

It has net positive karma, but the vote count makes it clear there are downvotes.
