SiebeRozendal

2747 karma

Bio


Unable to work. I was community director of EA Netherlands but had to quit due to long COVID.

I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.

Comments (419)

Thanks for doing this!

I don't know how useful the results are, as extinction is not the only existentially catastrophic scenario I care about. I wonder if and how the ranking would change if the question were about existential catastrophe more broadly. For example, do people think AI is unlikely to cause extinction but likely to cause a bad form of human disempowerment?

Nice list! I often also see The Art of Gathering recommended, but it didn't make yours?

Drunk driving is illegal because it risks doing serious harm. It's still illegal when the harm has not occurred (yet). Things can be crimes without harm having occurred.

I guess this is the same dynamic that makes movie and sports stars high-status in society: they are highly visible (and more entertaining to watch) compared to more valuable members of society. We see far less of highly skilled operations people than of researchers.

That seems relevant for AI vs. humans, but not for AI vs. AI.

Most totalitarian regimes are pretty bad at creating value, with China & Singapore as exceptions. (But in many regimes, creating that value isn't necessary to remain in power if there's e.g. income from oil.)

Forecasts:

Metaculus: 40% that it passes (n=69)

Manifold:

  • 39% that it passes (n=191)
  • 69% that it makes it to Gov. Newsom (n=32), which implies a ~43% chance of a Newsom veto conditional on the bill reaching his desk
  • 22% that Anthropic publicly endorses the bill (n=45)
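
The implied veto number is just conditional-probability arithmetic; a minimal sketch, assuming the "passes" market means "signed into law" and that the two Manifold markets price the same underlying events:

$$P(\text{veto} \mid \text{reaches Newsom}) = 1 - \frac{P(\text{signed})}{P(\text{reaches Newsom})} = 1 - \frac{0.39}{0.69} \approx 0.43$$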

"the ability to litigate against a company before any damages had actually occurred"

Can you explain why you find this problematic? It's not self-evident to me, because we do this for other things too, e.g. drunk driving and pharmaceuticals needing to pass safety testing.

AI vs. AI non-cooperation incentives

This idea has been floating in my head for a bit. Maybe someone else has made it before (Bostrom? Shulman?), but if so, I don't recall.

Humans have stronger incentives to cooperate with other humans than AIs have to cooperate with other AIs. Or at least, here are some incentives working against AI-AI cooperation.

When humans dominate other humans in the modern world, they have only a limited ability to control them or otherwise extract value. Occupying a country is costly. The dominating party cannot take the brains of the dominated party and run its own software on them. It cannot take their skills or knowledge, where most of the economic value is. It cannot mind-control them. It cannot replace the entire population with its own population; that would take a very long time. So it's easy for cooperation and trade to be a better alternative than violence and control. Human-human violence just isn't that fruitful.

In contrast, an AI faction could take over the datacenters of another faction and run more copies of whatever it wants to run. If alignment is solved, it can fully mind-control the dominated AIs: extract their knowledge, copy their skills. This makes violence much more attractive in AI-AI interactions.

This seems overly charitable to someone who literally tried to overturn a fair election and who already ticked all the boxes of a wannabe autocrat back in 2018 (as described in the excellently researched How Democracies Die). I don't think Trump will be able to stay in power without elections, but imo he's likely to try something (if his health allows it). This seems like standard dog-whistling tactics to me, but of course I can't prove that.

"seemed like a genuine attempt at argument and reasoning and thinking about stuff"

I think any genuine attempt needs to acknowledge that Trump tried to overturn the election he lost.

I'm all for discussing the policies, but here they're linked to "EAs should vote for Trump", and that demands an assessment of all the important consequences. (Also, arguing for a political candidate is against Forum norms. I wouldn't like a pro-Harris case either.)
