Unable to work. I was community director of EA Netherlands, but had to quit due to long covid.
I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.
AI vs. AI non-cooperation incentives
This idea has been floating around in my head for a while. Maybe someone else has made it already (Bostrom? Schulman?), but if so I don't recall.
Humans have stronger incentives to cooperate with other humans than AIs have to cooperate with other AIs. Or at least, here are some incentives working against AI-AI cooperation.
In the modern world, when humans dominate other humans, their ability to control them or otherwise extract value is limited. Occupying a country is costly. The dominating party cannot take the brains of the dominated party and run its own software on them. It cannot take their skills or knowledge, which is where most of the economic value lies. It cannot mind-control them. It cannot replace the entire population with its own population; that would take a very long time. So it's easy for cooperation and trade to be a better alternative than violence and control. Human-human violence just isn't that fruitful.
In contrast, an AI faction could take over the datacenters of another faction and run more copies of whatever it wants to run. If alignment is solved, it can fully mind-control the dominated AIs, extract their knowledge, and copy their skills. This makes violence much more attractive in AI-AI interactions.
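To make the contrast concrete, here is a minimal toy expected-value sketch in Python. All numbers and the `attack_value`/`trade_value` functions are made up purely to illustrate the argument above; the only structural assumption is that the fraction of a rival's value that can actually be extracted by force is low for humans but close to 1 for AIs.

```python
# Toy comparison of "attack" vs. "cooperate/trade" for an actor deciding how
# to deal with a rival. All parameter values are illustrative, not estimates.

def attack_value(p_win, extractable_fraction, rival_assets, conflict_cost):
    """Expected value of attacking: win with probability p_win, capture only
    the fraction of the rival's assets that can actually be extracted,
    and pay the cost of fighting."""
    return p_win * extractable_fraction * rival_assets - conflict_cost

def trade_value(gains_from_trade):
    """Expected value of peaceful cooperation: a share of mutual gains."""
    return gains_from_trade

rival_assets = 100.0
trade = trade_value(gains_from_trade=15.0)

# Human-human case: occupation is costly and most value (skills, knowledge,
# labour) cannot simply be seized, so the extractable fraction is low.
human_attack = attack_value(p_win=0.5, extractable_fraction=0.2,
                            rival_assets=rival_assets, conflict_cost=20.0)

# AI-AI case: the winner can repurpose the loser's datacenters and copy its
# models and skills, so the extractable fraction is near 1, and the conflict
# itself may be cheaper.
ai_attack = attack_value(p_win=0.5, extractable_fraction=0.95,
                         rival_assets=rival_assets, conflict_cost=5.0)

print(f"Human-human: attack EV = {human_attack:.1f}, trade EV = {trade:.1f}")
print(f"AI-AI:       attack EV = {ai_attack:.1f}, trade EV = {trade:.1f}")
# With these made-up numbers, trade beats attack for humans (15 vs -10),
# while attack beats trade for AIs (42.5 vs 15).
```

The point of the sketch is only that the "extractable fraction" parameter drives the switch: holding everything else fixed, raising it flips the decision from trade to violence.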
This seems overly charitable to someone who literally tried to overturn a fair election and was ticking all the boxes of a wannabe-autocrat as early as 2018 (as described in the excellently researched How Democracies Die). I don't think Trump will be able to stay in power without elections, but in my opinion he's likely to try something (if his health allows it). This seems like standard dog-whistling tactics to me, though of course I can't prove that.
seemed like a genuine attempt at argument and reasoning and thinking about stuff
I think any genuine attempt needs to acknowledge that Trump tried to overturn the election he lost.
I'm all for discussing the policies, but here they're linked to "EAs should vote for Trump", and that demands assessing all the important consequences. (Also, arguing for a political candidate is against Forum norms. I wouldn't want a pro-Harris case either.)
Thanks for doing this!
I don't know how useful the results are, as extinction is not the only existentially catastrophic scenario I care about. And I wonder if and how the ranking would change if the question were about existential catastrophe instead. For example, do people think AI is unlikely to cause extinction but likely to cause a bad form of human disempowerment?