Against the overwhelming importance of AI Safety

Friends, Romans, countrymen, lend me your ears;
I come to bury Caesar, not to praise him.

Julius Caesar, Act III, Scene II

AI Safety (and its various related framings: AI risk, AI alignment, AI notkilleveryoneism, etc.) has become one of the leading cause areas in Effective Altruism over the last few years, if not the leading one. I think this is wrong, and that it rests on under-scrutinised arguments.

I think working on AI can be important, and that the field of AI will matter greatly for the 21st century, but I strongly doubt that AI is overwhelmingly the most important cause for Effective Altruism to support.

I think that EA should become less focused on AI Safety, and that, in expectation, grant money should be redirected from AI Safety to other causes, such as Global Health and Animal Welfare.

This sequence aims to explore these claims and explain why I hold them.