AI existential risk has been in the news recently. Many people have become interested in the problem, and some want to know what they can do to help. At the same time, existing routes to getting advice, such as AI Safety Support, 80,000 Hours, AGI Safety Fundamentals, and AI Safety Quest, are becoming overwhelmed. With this in mind, we've created a new FAQ as part of Stampy's AI Safety Info, based mostly on ideas from plex, Linda Linsefors, and Severin Seehrich. We're continuing to improve these articles and welcome feedback.
By starting at the root of the tree and clicking the linked articles at the bottom of each page, you can navigate to the article that best applies to your situation. The tree also branches out into the rest of AISafety.info.
Or you can just look at the full list here:
- I’m convinced that AI existential safety is important and want to contribute. What can I do to help?
- I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?
- I would like to focus on AI alignment, but it might be best to prioritize improving my life situation first. What should I do?
- I want to take big steps to contribute to AI alignment (e.g. making it my career). What should I do?
- How can I do conceptual, mathematical, or philosophical work on AI alignment?
- How can I use a background in the social sciences to help with AI alignment?
- How can I do organizational or operations work around AI alignment?
- How can I work on AGI safety outreach in academia and among experts?
- How can I work on public AI safety outreach?
- How can I work on AI policy?
- I’m interested in providing significant financial support to AI alignment. How should I go about this?
- How can I work on assessing AI alignment projects and distributing grants?
- How can I work on helping AI alignment researchers be more effective, e.g. as a coach?
- What should I do with my idea for helping with AI alignment?
- What subjects should I study at university to prepare myself for alignment research?
- I’d like to do experimental work (i.e. ML, coding) for AI alignment. What should I do?
Note that Severin is a coauthor of this post, though I haven't found a way to add his EA Forum account to a crosspost from LessWrong.