This is a linkpost for https://irrationalitycommunity.substack.com/p/against-ai-as-an-extinction-threat
I wrote a post on my Substack attempting to compile the best arguments against AI as an existential threat.
Some arguments I discuss include: international game theory dynamics, reference class problems, Knightian uncertainty, superforecaster and domain expert disagreement, the issue with long-winded arguments, and more!
Please tell me why I'm wrong, and if you like the article, subscribe and share it with friends!
Why would Knightian uncertainty be an argument against AI as an existential risk? If anything, our deep uncertainty about the possible outcomes of AI should lead us to be even more careful.
The section "International Game Theory" does not seem to me like an argument against AI as an existential risk.
If the USA and China decide to have a non-cooperative AI race, my sense is that this would increase existential risk rather than reduce it.
Yep, I think this is true. The point is that, given AI stays aligned (which is stated there), the best thing for a country to do would be to accelerate capabilities. You're right, however, that it's not an argument against AI being an existential threat (I'll make a note to make this clearer); it's more a point in favor of acceleration.
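To make the dominant-strategy logic concrete, here is a minimal sketch of a toy two-country race game. The payoff numbers are purely hypothetical, and it assumes (as above) that alignment holds: under a prisoner's-dilemma-like structure, accelerating is each country's best response no matter what the other does, even though mutual pausing would leave both better off.

```python
# Toy 2x2 race game with hypothetical payoffs, assuming AI stays aligned.
# Entries are (row player's payoff, column player's payoff).
payoffs = {
    ("accelerate", "accelerate"): (1, 1),
    ("accelerate", "pause"):      (3, 0),
    ("pause",      "accelerate"): (0, 3),
    ("pause",      "pause"):      (2, 2),
}

strategies = ["accelerate", "pause"]

def best_response(opponent_strategy):
    # Row player's best reply to a fixed opponent strategy.
    return max(strategies, key=lambda s: payoffs[(s, opponent_strategy)][0])

# "accelerate" is the best reply to either opponent move, i.e. a dominant
# strategy, even though (pause, pause) gives both players a higher payoff.
for opp in strategies:
    print(f"Best response to {opp}: {best_response(opp)}")
```

Of course, this only captures the unilateral incentive; it says nothing about whether the resulting race equilibrium is safe, which is exactly the concern raised above.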