Marc Andreessen's "Why AI Will Save the World" has rapidly gained readership, helped by his 1.2 million followers on Twitter. In the piece, he levels many underhanded insults at the AI Safety community and offers a shallow analysis of millennialism. The piece also falls into the trap of arguing "AI won't have intentions and therefore won't want to kill us, so there's no need to consider x-risk from AGI." There is so much wrong with this argument, but I would love to hear the EA community's responses to the piece. I am hoping to engage Andreessen in an interview or debate in the future, but for now I would really love to hear the EA and AI Safety communities' gut checks and counterarguments to the various points he makes.
Similar arguments are likely to be leveled elsewhere, so sharing effective responses seems high-value for communication purposes.
Just as I believe the risks from AI are overblown, so too, I believe, are the potential benefits. In particular, the following claim strikes me as absurd:
Inflicting bloodshed on your enemy is a large part of how wars are won. AI advisers might reduce the blood spilled on your side, but they will more than make up for it with more accurate killing of people on the other side of the battlefield.
In general, Marc treats AI as some perfect, flawless being that never makes mistakes, which is not how software, or intelligence, actually works. I think AI will be a positive for humanity (eventually), but the techno-utopian dream will never entirely materialise.
I wrote a piece about one of his flawed arguments here, in which he implicitly groups AI with other technologies that turned out to be safe in order to argue that AI is safe too. Hope it's helpful to you!
https://forum.effectivealtruism.org/posts/yQHzdmXa7KBB52fBz/ai-risk-and-survivorship-bias-how-andreessen-and-lecun-got