3 Answers

PauseAI largely seeks to emulate existing social movements (like the climate justice movement) but takes an essentially cargo-cult approach to how social movements work. For a start, there is currently no scientific consensus around AI safety the way there is around climate change, so any action imitating the climate justice movement is extremely premature. Blockading an AI company's office while talking about existential risk from artificial general intelligence won't convince any bystander; it will just make you look like a doomsayer caricature. It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.

Because of this, many in PauseAI are attempting coalition politics, bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual-property lobbyists). But the space of possible AI policies is high-dimensional, so any such coalition, built with little understanding of political strategy, risks focusing on policies and AI systems that have little to do with existential risk (such as image generators), or that might even prove entirely counterproductive (by further entrenching centralization in the hands of the Big Four¹ and discouraging independent research by EA-aligned groups like EleutherAI).

¹: Microsoft/OpenAI, Amazon/Anthropic, Google/DeepMind, Facebook/Meta

  1. Pausing AI development is not a good policy to strive for. Nearly all regulation slows AI progress; that is what regulation does by default, since it forces you to do other things instead of simply pushing forward. But a pause brings no additional benefit, whereas most other regulation does (a model registry, a chip registry, mandatory red-teaming, dangerous-capability evals, model-weight security standards, etc.). I don't know what the ideal policies are, but a "pause" with no other asks does not seem like the best one.
  2. Pausing AI development for any meaningful amount of time is incredibly unlikely to happen. They will claim they are shifting the Overton window, but frankly, they mainly seem to do a bunch of protesting, doing things like calling Sam Altman and Dario Amodei evil.
  3. Pause AI, the organization, performs frankly juvenile stunts that make EA/AI-safety advocates look less serious. Screaming that people are evil is extremely unnuanced, juvenile, and very unlikely to build the bridges needed to actually accomplish things. It makes us look like idiots. I think EAs too often prefer doing research from their laptops to getting out into the real world and doing things; but doing things doesn't just mean protesting. It means crafting legislation like SB 1047. It means increasing the supply of mech-interp researchers by training them. It means lobbying for safety standards on AI models.
  4. Pause AI's premise is very "doomy" and only makes sense if you assign extremely high probability to AI extinction and believe the only way to prevent it is an indefinite pause on AI progress. Most people (including within EA) are far less confident about how any particular AI path will play out, about what will or won't work, and about what good policies are. The Pause AI movement is very much "soldier" mindset rather than "scout" mindset.

They have no experience, and no experienced people driving the ship, in a domain where experience and relationships in DC are extremely important. They are meeting with offices, yes, but it's not clear they are meeting with the right offices or the right staffers. In terms of ROI, they are likely not cost-effective: the money could probably be better spent on two highly competent, experienced, plugged-in people rather than on a bunch of junior people.
