9 Answers

PauseAI largely seeks to emulate existing social movements (like the climate justice movement) but has an essentially cargo-cult understanding of how social movements work. For a start, there is currently no scientific consensus around AI safety the way there is around climate change, so any action imitating the climate justice movement is extremely premature. Blockading an AI company's office while talking about existential risk from artificial general intelligence won't convince any bystander; it will just make you look like a doomsayer caricature. It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.

Because of this, many in PauseAI are trying to do coalition politics, bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists). But the space of possible AI policies is high-dimensional, so any such coalition, assembled with little understanding of political strategy, risks focusing on policies and AI systems that have little to do with existential risk (such as image generators), or that might even prove entirely counterproductive (by further entrenching centralization in the hands of the Big Four¹ and discouraging independent research by EA-aligned groups like EleutherAI).

¹: Microsoft/OpenAI, Amazon/Anthropic, Google/DeepMind, Facebook/Meta

Hi Matrice! I find this comment interesting. Considering the public are in favour of slowing down AI, what evidence points you to the conclusion below?

“Blockading an AI company's office while talking about existential risk from artificial general intelligence won't convince any bystander; it will just make you look like a doomsayer caricature.”

Also, what evidence do you have for the comment below? For example, I met the leader of the voice actors' association in Australia and we agreed on many topics, including the need for an AISI. In fact, I'd argue you've got something important wrong here: talking to policymakers about existential risk instead of catastrophic risks can be counterproductive, because there aren't many useful policies to prevent existential risk (besides pausing).

“The space of possible AI policies is high-dimensional, so any such coalition, assembled with little understanding of political strategy, risks focusing on policies and AI systems that have little to do with existential risk”

gw
"slowing down AI" != "slowing down AI because of x risk"
Matrice Jacobine
In addition to what @gw said on the public being in favor of slowing down AI, I'm mostly basing this on reactions to news about PauseAI protests on generic social media websites. The idea that LLM scaling without further technological breakthroughs will lead with certainty to superintelligence in the coming decade is controversial by EA standards, fringe by general AI community standards, and roundly mocked by the general public. If other stakeholders agree with the existential risk perspective, then that is of course great and should be encouraged. To develop further on what I meant (though see also the linked post): I am extremely skeptical that allying with copyright lobbyists is good by any EA/longtermist metric, when ~nobody thinks art generators pose any existential risk and big AI companies are already negotiating deals with copyright giants (or the latter are even creating their own AI divisions, as with Adobe Firefly or Disney's new AI division), while independent EA-aligned research groups like EleutherAI are heavily dependent on the existence of open-source datasets.

There is enough of a scientific consensus that extinction risk from AGI is real and significant. Timelines are arguably much shorter for AGI than for climate change, so the movement needs to be ramped up in months to years, not years to decades.

It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.

I'd say more like the late 20th century (late 1980s?) in terms of scientific consensus, and the mid-21st century (2040s?) in terms of how close global catastrophe is.

Re the broad coalition - the focus is on pausing AI, which will help...

Matrice Jacobine
Most surveys of AI/ML researchers (with significant selection effects and very high variance) indicate p(doom)s of ~10% (spread among a variety of global risks beyond the traditional AI-go-foom scenario), and (like Ajeya Cotra's report on AI timelines) a predicted AGI date in mid-century by one definition, and in the next century by another. Pausing the scaling of LLMs above a given magnitude will do ~nothing for non-x-risk AI worries. Pausing any subcategory below that (e.g. AI art generators, open-source AI) will do ~nothing (and will probably be a net negative) for x-risk AI worries.
Greg_Colbourn
10% chance of a 10%[1] chance of extinction happening within 5 years[2] is more than enough to be shutting it all down immediately[3]. It's actually kind of absurd how tolerant people are of death risk here relative to risks from the pharmaceutical, nuclear or aviation industries.

  1. ^

    I outline here why 10% should be used rather than 50%.

  2. ^

    Eyeballing the graph here, it looks like at least 10% by 2030.

  3. ^

    I think it's more like a 90% [p(doom|AGI)] chance of a 50% chance [p(AGI in 5 years)].
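The arithmetic here is just the product of the two subjective probabilities; a quick sketch using the figures given in the comment (which are, of course, contested):

```latex
% Headline (conservative) estimate:
% P(AGI within 5 years) \times P(doom \mid AGI)
0.10 \times 0.10 = 0.01 \quad \text{(a 1\% chance of extinction within 5 years)}

% The commenter's own preferred figures (footnote 3):
0.50 \times 0.90 = 0.45 \quad \text{(a 45\% chance)}
```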
Matrice Jacobine
Crucially, p(doom)=1% isn't the claim PauseAI protesters are making. Discussed outcomes should be fairly distributed over probable futures, if only to make sure your preferred policy is an improvement on most or all of those (this is where I would weakly agree with @Matthew_Barnett's comment).
Greg_Colbourn
1% is very conservative (and based on broad surveys of AI researchers, who are mostly building the very technology causing the risk, so are obviously biased against the risk being high). The point I'm making is that even a 1% chance of death by collateral damage would be totally unacceptable coming from any other industry. Supporting a Pause should therefore be a no-brainer. (Or, to be consistent, we should be dismantling ~all regulation of ~all industry.)
Matrice Jacobine
Industry regulations tend to be based on statistical averages (i.e., from a global perspective, on certainties), not on multiplications of subjective Bayesian guesses. I don't think the general public's acceptance of industry regulations commits them to Pascal-mugging-adjacent views. After all, 1% existential risk (or at least global catastrophic risk) from climate change, biodiversity collapse, or zoonotic pandemics seems plausible too. If you have any realistic amount of risk aversion, whether the remaining 99% of futures (even from a strictly strong-longtermist perspective) are improved by pausing (worse, by flippant militant advocacy for pausing on alarmist slogans that will carry extreme reputation costs in the 99% of worlds where no x-risk from LLMs happens) is important!

1% (again, conservative[1]) is not a Pascal's Mugging. A 1%(+) catastrophic (not extinction) risk is plausible for climate change, and a lot is being done there (arguably enough that we are on track to avert catastrophe if action[2] keeps scaling).

flippant militant advocacy for pausing on alarmist slogans that will carry extreme reputation costs in the 99% of worlds where no x-risk from LLMs happens

It's anything but flippant[3]. And x-risk isn't from LLMs alone: "System 2" architecture and embodiment, two other essential ingredients, are well on track too. I'm happy to bear any reputation costs in the event that we live through this. It's unfortunate, but if there is no extinction, then of course people will say we were wrong. But there might well be no extinction only because of our actions![4]

  1. ^

    I actually think it's more like 50%, and can argue this case if you think it's a crux.

  2. ^

    Including removing CO₂ from the atmosphere and/or deflecting solar radiation.

  3. ^

    Please read the PauseAI website.

  4. ^

    Or maybe we will just luck out [footnote 10 on linked post].

Matrice Jacobine
To be clear, my point is that 1/ even inside the environmental movement, calling for an immediate pause on all industry from the same argument you're using is extremely fringe; 2/ the reputation costs in 99% of worlds will themselves increase existential risk in the (far more likely) case that AGI happens when (or after) most experts think it will.
Greg_Colbourn
1/ Unaligned ASI existing at all is equivalent to "doom-causing levels of CO2 over a doom-causing length of time". We need an immediate pause on AGI development to prevent unaligned ASI. We don't need an immediate pause on all industry to prevent doom-causing levels of CO2 over a doom-causing length of time.

2/ It's really not 99% of worlds. That is way too conservative. Metaculus puts a 25% chance on weak AGI happening within 1 year and 25% on strong AGI happening within 3 years.
Matrice Jacobine
Metaculus (being significantly more bullish than actual AI/ML experts and populated with rationalists/EAs) puts a <25% chance on transformative AI happening by the end of the decade and a <8% chance of this leading to the traditional AI-go-foom scenario, so <2% p(doom) by the end of the decade. I can't find a Metaculus poll on this, but I would halve that to <1% for whether such transformative AI would be reached by simply scaling LLMs.
Greg_Colbourn
The first of those has a weird resolution criterion of 30% year-on-year world GDP growth ("transformative" more likely means no humans left, after <1 year, to observe GDP imo; I give the 30+% growth over a whole year scenario little credence because of this). For the second one, I think you need to include the "AI Dystopia" scenario as doom as well (it sounds like an irreversible catastrophe for the vast majority of people), making it 27%. (And again re LLMs: x-risk isn't from LLMs alone. "System 2" architecture and embodiment, two other essential ingredients of AGI, are well on track too.)
Matrice Jacobine
If there are no humans left after AGI, then that's also true for "weak general AI". "Transformative AI" is also a far better target for what we're talking about than "weak general AI". The "AI Dystopia" scenario is significantly different from what PauseAI rhetoric is centered on. PauseAI rhetoric is also very much centered on just scaling LLMs, not acknowledging the other ingredients of AGI.

You don't have to go as far back as the mid-19th century to find a time before scientific consensus on global warming. You only need to go back to 1990 or so.

Greg_Colbourn
Yes, I was thinking of James Hansen's 1988 testimony to the US Senate as equivalent to some of the Senate hearings on AI last year.
  1. Pausing AI development is not a good policy to strive for. Nearly all regulation slows down AI progress; that's what regulation does by default. It makes you slow down by having to do other stuff instead of just going forward. But a pause brings no additional benefit, whereas most other regulation does (like model registries, chip registries, mandatory red-teaming, dangerous-capability evals, model-weight security standards, etc.). I don't know what the ideal policies are, but it doesn't seem like a "pause" with no other asks is the best one.
  2. Pausing AI development for any meaningful amount of time is incredibly unlikely to occur. They will claim they are shifting the Overton window, but frankly, they mainly seem to do a bunch of protesting where they do things like call Sam Altman and Dario Amodei evil.
  3. Pause AI, the organization, does, frankly, juvenile stunts that make EA/AI-safety advocates look less serious. Screaming that people are evil is extremely unnuanced, juvenile, and very unlikely to build the bridges necessary to really accomplish things. It makes us look like idiots. I think EAs too often prefer to do research from their laptops as opposed to getting out into the real world and doing things; but doing things doesn't just mean protesting. It means crafting legislation like SB 1047. It means increasing the supply of mech interp researchers by training them. It means lobbying for safety standards on AI models.
  4. Pause AI's premise is very "doomy" and only makes sense if you have extremely high AI extinction probabilities and believe the only way to prevent extinction is an indefinite pause to AI progress. Most people (including those inside EA) have far less confidence in how any particular AI path will play out, and are far less confident in what will or won't work and what good policies are. The Pause AI movement is very "soldier" mindset and not "scout" mindset.
  1. This assumes that the alignment/control problems are (a) solvable, and (b) solvable in time. I'm sceptical of (a), let alone (b).

    None of the regulations you mention ("model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.") matter without at least a conditional Pause for when red lines are crossed (and arguably we've already crossed many previously stated red lines, with no consequences in terms of slowing down or pausing).
     
  2. This and the following point are addressed by other
...

Hi Marcus, I'm in the mood for a bit of debate, so I'm going to take a stab at responding to all four of your points :)

LMK what you think!

1. This is an argument against a pause policy, not against the Pause org or a Pause movement. I think discerning funders need to see the differences, especially if you are thinking on the margin.

2. "Pausing AI development for any meaningful amount of time is incredibly unlikely to occur." < I think anything other than AGI in less than 10 years is unlikely to occur, but that isn't a good argument not to work on safety. Scale a...

MarcusAbramovitch
1. I don't think there is a need for me to show the relationship here. 2/3. https://youtu.be/T-2IM9P6tOs?si=uDiJXEqq8UJ63Hy2 came up as the first search result when I searched "pause ai protest" on YouTube. In it, they chant things like "OpenAI sucks! Anthropic sucks! Mistral sucks!" and "Demis Hassabis, reckless! Dario Amodei, reckless!" I agree that working on safety is a key moral priority. But working on safety looks a lot more like the things I linked to in #3. That's what doing work looks like. This seems to be what a typical protest looks like; I've seen videos of others. I consider these juvenile and unserious, and unlikely to build the necessary bridges to accomplish outcomes. I'll let others form their opinions.

The provided source doesn't show PauseAI affiliated people calling Sam Altman and Dario Amodei evil.

MarcusAbramovitch
Correct, I may have misremembered. The things they definitely say, at least in this video, are "OpenAI sucks! Anthropic sucks! Mistral sucks!" and "Demis Hassabis, reckless! Dario Amodei, reckless!" I would submit that I am at the very least directionally correct.
Ben_West🔸
"Demis Hassabis, reckless!" honestly feels to me like a pretty tame protest chant. I did a Google search for "protest" and this was the first result. Signs say things like "one year of genocide funded by UT", which seems both substantially more extreme and less epistemically valid than calling Demis "reckless". My sense from your other points is that you just don't actually want PauseAI to accomplish their goals, so it's kind of over-determined for you; but if I wanted to tell a story about how a grassroots movement successfully got an international pause on AI, various people chanting that the current AI development process is reckless seems pretty fine to me.
MarcusAbramovitch
Actually, I'm uncertain whether pausing AI is a good idea, and I wish the Pause AI people had a bit more uncertainty as well (on both their p(doom) and on whether pausing AI is a good policy). I look at people who have 90%+ p(doom) as, at the very least, uncalibrated, the same way I look at people who are dead certain that AI is going to go positively brilliantly and that we should be racing ahead as fast as possible. It's as if neither group is doing any/enough reading of history. In the case of my tribe, I would submit that this kind of protesting, including/especially the example you posted, makes your cause seem dumb/unnuanced/ridiculous to onlookers who are indifferent or know little. Lastly, I was just responding to the prompt "What are some criticisms of PauseAI?". It's not exactly the place for a "fair and balanced view"; but also, I think it is far more important to critique your own side than the opposite side, since you speak the same language as your own team, so they will actually listen to you.
Ben_West🔸
Fair enough! fwiw I would not have guessed that most Pause AI supporters have a p(doom) of 90%+. My guess is that the crux between you is actually that they believe it's worth pushing for a policy even if you don't. I think it's possible you will change your mind in the future. (But people should correct me if I'm wrong!)
Greg_Colbourn
What is a reasonable p(doom|ASI) to hold and not conclude that pausing AI is a good idea? Or: what % chance of death are you personally willing to accept for a shot at immortality/utopia? Would it be the same if it was framed as a game of Russian roulette?

Strong +1 on #3

Throwaway81
I can try to answer 3 for Marcus. Imagine that AI policy is a soccer game for professional soccer players. You've put in a lot of practice, know the rules, and know how to work well with your teammates. You're scoring some goals. Then someone from a pick-up league who is just learning to play soccer comes along and tries to join the team, or, in this case, isn't even aware there is a team. If we let them on the team, not only do we look bad to the other team, but since policy is a team sport, they drive our overall impact down: they're dead weight we now have to guard against, because the things they do that they think are helpful are not, depleting energy and resources better spent on scoring goals.
Greg_Colbourn
I think in terms of this analogy, there are no midfielders, let alone strikers, on the pitch amongst the professionals. No one is even really trying to score goals. Maybe they are going for corners at best. Many are even colluding with the other team and their supporters to make money throwing the match.
Throwaway81
That's just completely false. Sorry I can't say more.

The fact that you can't say more is part of the problem. There needs to be an open global discussion of an AGI Moratorium at the highest levels of policymaking, government, society and industry.

I agree with many of the things other people have already mentioned. However, I want to add one additional argument against PauseAI, which I believe is quite important and worth emphasizing clearly:

In general, hastening technological progress tends to be a good thing. For example, if a cure for cancer were to arrive in 5 years instead of 15 years, that would be very good. The earlier arrival of the cure would save many lives and prevent a lot of suffering for people who would otherwise endure unnecessary pain or death during those additional 10 years. The difference in timing matters because every year of delay means avoidable harm continues to occur.

I believe this same principle applies to AI, as I expect its main effects will likely be overwhelmingly positive. AI seems likely to accelerate economic growth, accelerate technological progress, and significantly improve health and well-being for billions of people. These outcomes are all very desirable, and I would strongly prefer for them to arrive sooner rather than later. Delaying these benefits unnecessarily means forgoing better lives, better health, and better opportunities for many people in the interim.

Of course, there are exceptions to this principle, as it’s not always the case that hastening technology is beneficial. Sometimes it is indeed wiser to delay the deployment of a new technology if the delay would substantially increase its safety or reduce risks. I’m not dogmatic about hastening technology and I recognize there are legitimate trade-offs here. However, in the case of AI, I am simply not convinced that delaying its development and deployment is justified on current margins.

To make this concrete, let’s say that delaying AI development by 5 years would reduce existential risk by only 0.001 percentage points. I would not support such a trade-off. From the perspective of any moral framework that incorporates even a slight discounting of future consumption and well-being, such a delay would be highly undesirable. There are pragmatic reasons to include time discounting in a moral framework: the future is inherently uncertain, and the farther out we try to forecast, the less predictable and reliable our expectations about the future become. If we can bring about something very good sooner, without significant costs, we should almost always do so rather than being indifferent to when it happens.

However, if the situation were different—if delaying AI by 5 years reduced existential risk by something like 10 percentage points—then I think the case for PauseAI would be much stronger. In such a scenario, I would seriously consider supporting PauseAI and might even advocate for it loudly. That said, I find this kind of large reduction in existential risk from a delay in AI development to be implausible, partly for the reasons others in this thread have already outlined.

This argument is highly dependent on your population ethics. From a longtermist, total positive utilitarian perspective, existential risk is many orders of magnitude worse than delaying progress, as it affects many orders of magnitude more (potential) people.

Matthew_Barnett
I think it would require an unreasonably radical interpretation of longtermism to believe, for example, that delaying something as valuable as a cure for cancer by 10 years (or another comparably significant breakthrough) would be justified, let alone overwhelmingly outweighed, because of an extremely slight and speculative anticipated positive impact on existential risk. Similarly, I think the same is true about AI, if indeed pausing the technology would only have a very slight impact on existential risk in expectation.

I've already provided a pragmatic argument for incorporating at least a slight amount of time discounting into one's moral framework, but I want to reemphasize and elaborate on this point for clarity. Even if you are firmly committed to the idea that we should have no pure rate of time preference (meaning you believe future lives and welfare matter just as much as present ones), you should still account for the fact that the future is inherently uncertain. Our ability to predict the future diminishes significantly the farther we look ahead. This uncertainty should generally lead us to favor not delaying the realization of clearly good outcomes unless there is a strong and concrete justification for why the delay would yield substantial benefits.

Longtermism, as I understand it, is simply the idea that the distant future matters a great deal and should be factored into our decision-making. Longtermism does not (and should not) imply that we should essentially ignore enormous, tangible and clear short-term harms just because we anticipate extremely slight and highly speculative long-term gains that might result from a particular course of action.

I recognize that someone who adheres to an extremely strong and rigid version of longtermism might disagree with the position I'm articulating here. Such a person might argue that even a very small and speculative reduction in existential risk justifies delaying massive and clear near-term benefits. However, I
Karthik Tadepalli
I don't care about population ethics so don't take this as a good faith argument. But doesn't astronomical waste imply that saving lives earlier can compete on the same order of magnitude as x risk?
Matrice Jacobine
https://nickbostrom.com/papers/astronomical-waste/
Matthew_Barnett
I'm curious how many EAs believe this claim literally, and think a 10 million year pause (assuming it's feasible in the first place) would be justified if it reduced existential risk by a single percentage point. Given the disagree votes to my other comments, it seems a fair number might in fact agree to the literal claim here. Given my disagreement that we should take these numbers literally, I think it might be worth writing a post about why we should have a pragmatic non-zero discount rate, even from a purely longtermist perspective.
Habryka
I think fixed discount rates (i.e. a discount rate where every year, no matter how far away, reduces the weighting by the same fraction) of any amount seem pretty obviously crazy to me as a model of the future. We use discount rates as a proxy for things like "predictability of the future" and "constraining our plans towards worlds we can influence", which often makes sense, but I think even very simple thought experiments produce obviously insane conclusions if you use practically any non-zero fixed discount rate in situations where it comes apart from the proxies (as is virtually guaranteed to happen in the long-run future). See also my comment here: https://forum.effectivealtruism.org/posts/PArvxhBaZJrGAuhZp/report-on-the-desirability-of-science-given-new-biotech?commentId=rsqwSR6h5XPY8EPiT
Davidmanheim
I've referred to this latter point as "candy bar extinction": using fixed discount rates, a candy bar today is better than preventing extinction with certainty after some number of years. (And with moderately high discount rates, the number of years isn't even absurdly high!)
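As a quick illustration of the arithmetic (my own illustrative numbers, not Davidmanheim's): with a fixed annual discount rate $r$, a payoff $V$ realized $N$ years out has present value $V/(1+r)^N$, so a candy bar worth 1 unit today beats averting an extinction-sized loss of $V$ units once $(1+r)^N > V$:

```latex
\frac{V}{(1+r)^N} < 1
\quad\Longleftrightarrow\quad
N > \frac{\ln V}{\ln(1+r)}

% e.g. V = 10^{31} (a common astronomical-waste figure), r = 5\%:
N > \frac{31 \ln 10}{\ln 1.05} \approx \frac{71.4}{0.0488} \approx 1460 \text{ years}
```

So even with an astronomically large $V$, a 5% fixed discount rate says preventing certain extinction roughly 1,500 years out is worth less than a candy bar now.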
Matthew_Barnett
I agree there's a decent case to be made for abandoning fixed exponential discount rates in favor of a more nuanced model. However, it's often unclear what model is best suited to handle scenarios involving a sequence of future events T_1, T_2, T_3, …, T_N where our knowledge about T_i is always significantly greater than our knowledge about T_{i+1}.

From what I understand, many EAs seem to reject time discounting partly because they accept an empirical premise that goes something like this: "The future becomes increasingly difficult to predict as we look further ahead, but at some point there will be a 'value lock-in', a moment when key values or structures become fixed, and after this lock-in the long-term future could become highly predictable, even over time horizons spanning billions of years." If this premise is correct, it might justify using something like a fixed discount rate for time periods leading up to the value lock-in, but then something like a zero rate of time discounting after the anticipated lock-in.

Personally, I find the concept of a value lock-in to be highly uncertain and speculative. Because of this, I'm skeptical of the conclusion that we should treat the level of epistemic uncertainty about the world, say, 1,000 years from now as being essentially the same as the uncertainty about the world 1 billion years from now. While both timeframes might feel similarly distant from our perspective (both being "a long time from now"), I ultimately think there's still a meaningful difference: predicting the state of the world 1 billion years from now is likely much harder than predicting the state of the world 1,000 years from now.

One reasonable compromise model between these two perspectives is to tie the discount rate to the predicted amount of change that will happen at a given point in time. This could lead to a continuously increasing discounting rate for years that lead up to and include AGI, but then eventually a falling discounting
Habryka
Yeah, this is roughly the kind of thing I would suggest if one wants to stay within the discount rate framework.

if delaying AI by 5 years reduced existential risk by something like 10 percentage points—then I think the case for PauseAI would be much stronger

This is the crux. I think it would reduce existential risk by at least 10 percentage points (probably a lot more). And 5 years would just be a start; obviously any Pause should (and in practice would) only be lifted conditionally. I take it your AGI timelines are relatively short? And I don't think your reasons for expecting the default outcome from AGI to be good are sound (as you even allude to yourself).

I do in fact believe that delaying AI by 5 years would reduce existential risk by something like 10 percentage points.

Probably this thread isn't the best place to hash it out, however. 

Matthew_Barnett
I think this is a reasonable point of disagreement. Though, as you allude to, it is separate from the point I was making. I do think it is generally very important to distinguish between:

1. Advocacy for a policy because you think it would have a tiny impact on x-risk, which thereby outweighs all the other side effects of the policy, including potentially massive near-term effects, because reducing x-risk simply outweighs every other ethical priority by many orders of magnitude.

2. Advocacy for a policy because you think it would have a moderate or large effect on x-risk, and is therefore worth doing because reducing x-risk is an important ethical priority (even if it isn't, say, one million times more important than every other ethical priority combined).

I'm happy to debate (2) on empirical grounds, and (1) on ethical grounds. I think the ethical philosophy behind (1) is quite dubious and resembles the type of logic that is vulnerable to Pascal-mugging. The ethical philosophy behind (2) seems sound, but the empirical basis is often uncertain.

What PauseAI wants to ban or "pause" seems fairly weakly defined and not necessarily tied to any actual threat level. Their stated goals focus on banning scaling of LLM architectures with known limitations that make "takeover" scenarios unlikely (limited context windows, lack of recursive self-updating independent of training, dependence on massive datacentres to run) and known problems (inscrutability, and an obvious lack of consistent "alignment") that remain problems with smaller models if you try to use them for anything sensitive. It's not clear what "more powerful than GPT-4" actually means. Nor is it clear what level of understanding would result in un-pausing, or how it would be obtained without any models to study.

Banning LLMs above a certain scale might even have the perverse effect of encouraging companies to optimize performance or reinvent learning in other, riskier ways; or of setting back our ability to understand extremely powerful LLMs when someone develops them outside a US/EU legislative framework anyway; or of preventing positive AI developments that could save thousands of lives (or, from the point of view of a longtermist who believes existential risk is currently nonzero, including from non-AI factors, but might drop to zero in future because of friendly AI, perhaps 10^31 lives!).

Beyond that, from the perspective of being an effective giving target, I think PauseAI suffers from the same shortcomings most lobbying outfits do (influencing government and public opinion in a direction opposed to economic growth is hard, it's unclear what results a marginal dollar achieves, and the other side has a lot more dollars and connections to ramp up activity in an equal and opposite direction if they feel their business interests are threatened), so there's no reason to believe they're effective even if one agrees their goal is well defined and correct.

You could also question the motivations of some of the people arguing for AI pauses (hi Elon; we see the LLM you launched shortly after signing the letter saying that LLMs ahead of yours were dangerous and should be banned...), although I don't think this applies to the PauseAI organization specifically.

>PauseAI suffers from the same shortcomings most lobbying outfits do...

I'm confused about this section: yes, this kind of lobbying is hard, and the impact of a marginal dollar is very unclear. The acc side also has far more resources (probably; we should be wary of this becoming a Bravery Debate).

This doesn't feel like a criticism of PauseAI. Limited tractability is easily outweighed by a very high potential impact.  

1
David T
This is the case only if you assume that protesting is incapable of having a negative impact on outcomes, whether directly or indirectly by causing people supporting the other, richer and better-connected side to put more effort into regulatory capture. Other people have made more specific claims about the nature of PauseAI's campaigns; I'm pointing out that this is a battle where their expected outcome isn't necessarily positive even if they're pretty good...

(Relevant context: the incoming US administration is ambivalent at best towards the likes of Altman, but is extremely hostile to doom and safety narratives, to the point that it sees partisan advantage in being seen to reject them in favour of economic growth; it also sees arms races as things to participate in and win.)

And even if one ignores the abundance of evidence that protest movements often have negative impacts (particularly in the short term, which is what PauseAI cares about) and that this might be one of those cases, the Pascalian argument that the payoffs are so high that the lack of tractability is irrelevant to its effectiveness only works if there is literally no plausibly more effective way to achieve the same goal.

I wrote some criticism in this comment. Mainly, I argue that 
(1) A pause could be undesirable: it could be net-negative in expectation (with high variance depending on implementation specifics), and PauseAI should take this concern more seriously.
(2) Fighting doesn't necessarily bring you closer to winning. PauseAI's approach *could* be counterproductive even for the aim of achieving a pause, whether or not it's desirable. From my comment:

Although the analogy of war is compelling and lends itself well to your post's argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism to your fighting style seriously should be paramount. 

What is the ultimate counterfactual here? I'd argue it's extinction from AGI/ASI in the next 5-10 years with high probability. Better to fight this and lose than just roll over and die. 

To be clear - I'm open to more scouting being done concurrently (and open to changing my mind), but imo none of these answers are convincing or reassuring.

1
Tao
This is missing the point of my 2nd argument. It sure sounds better to "fight and lose than roll over and die." But I'm saying that "fighting" in the way that PauseAI is "fighting" could make it more likely that you lose. Not saying "fighting" in general will have this effect. Or that this won't ever change. Or that I'm confident about this. Just saying: take criticism seriously, acknowledge the uncertainty, don't rush into action just because you want to do something. Unrelated to my argument: not sure what you mean by "high probability", but I'd take a combination of these views as a reasonable prior: XPT.
2
Greg_Colbourn
Who else is pushing for a global Pause/Stop/Moratorium/Non-Proliferation Treaty? Who else is doing that in a way such that PauseAI might be counterfactually harming their efforts? Again, no action on this, or waiting for others to do something "better", are terrible choices when the consequences of insufficient global action are that we all die in the relatively near future.

Do you think it's possible for you to be convinced that building ASI is a suicide race, short of an actual AI-mediated global catastrophe? What would it take?

~50%. I think XPT is a terrible prior. Much better to look at the most recent AI Impacts Survey, or the CAIS Statement on AI Risk.

They don't have any experience, and no one with experience is driving the ship, where experience and relationships in DC are extremely important. They are meeting with offices, yes, but it's not clear that they are meeting with the right offices or the right staffers. They are likely not cost-effective, because in ROI terms the money could probably be better spent on two highly competent, experienced, plugged-in people rather than a bunch of junior people.

Hi! Interesting comment. To what extent does this also describe most charities spinning out of Ambitious Impacts incubation program?

1
Throwaway81
I'm not familiar with that program, sorry.
4
Throwaway81
Ah, formerly CE. No, I think that formerly-CE is not well suited for US-policy-focused spinouts. There aren't any people on staff who can advise on that well (I've been involved in a couple of policy consultation projects for them, and it seemed the advisors just had no grasp of what was going on in US policy/advocacy). I think their classic charities are good though!

Another org in the same space, made up of highly competent, experienced, plugged-in people, would certainly be welcome, and could plausibly be more effective.

3
Greg_Colbourn
I think a bottleneck for this is finding experienced/plugged in people who are willing to go all out on a Pause.

I plan on donating to PauseAI, but I've put considerable thought into reasons not to donate.

I gave some arguments against slowing AI development (plus why I disagree with them) in this section of my recent post, so I won't repeat those.

  1. There's not that much evidence that protests are effective. There's some evidence, including a few real-world natural experiments and some lab experiments, but this sort of research doesn't have a good replication record.
  2. Research generally suggests that peaceful protests work, while violent protests reduce public support. If violent protests backfire, maybe some types of peaceful protests also backfire. And we don't have enough granularity in the research to identify under what specific conditions peaceful protests work, so maybe PauseAI-style protests will backfire.
  3. Polls suggest there is broad public support for a pause. If there's already public support, doesn't that weaken the case for protesting? Perhaps that means we should be doing something else instead.
  4. Protesting might not be a good use of resources. It might be better to lobby policy-makers.

I understand that this topic gets people excited, but commenters are confusing a Pause policy with a Pause movement with the organisation called PauseAI.

Commenters are also confusing 'should we give PauseAI more money?' with 'would it be good if we paused frontier models tomorrow?'

I've never seen a topic in EA get a subsection of the community so out of sorts. It makes me extremely suspicious. 

Commenters are also confusing 'should we give PauseAI more money?' with 'would it be good if we paused frontier models tomorrow?'

I think it is reasonable to hold that we should only give PauseAI more money if, as necessary conditions, (1) we think that pausing AI is desirable and (2) PauseAI's methods are relatively likely to achieve that outcome, conditional on having the resources to do so. I would argue that many of the comments highlight that neither assumption is clearly satisfied for many of the forum participants. In fact, I think it is reasonable to stress disagreement with (2) in particular.

I strongly agree. Almost all of the criticism in this thread seem to start from assumptions about AI that are very far from those held by PauseAI. This thread really needs to be split up to factor that out. 

As an example: If you don't think shrimp can suffer, then that's a strong argument against the Shrimp Welfare Project. However, that criticism doesn't belong in the same thread as a discussion about whether the organization is effective, because the two subjects are so different.  

Pause AI seems to not be very good at what they are trying to do. For example, this abysmal press release, which makes Pause AI sound like tinfoil-hat-wearing nutjobs, and which I already complained about in the comments here.

I think they've been coasting for a while on the novelty of what they're doing, which helps obscure that only a dozen or so people are actually showing up to these protests, making them an empty threat. This is unlikely to change as long as these protests focus on the highly speculative threat of AI x-risk, which people do not viscerally feel as a threat and which does not carry authoritative scientific backing comparable to something like climate change. People might say they're concerned about AI on surveys, but they aren't going to actually hit the streets unless they think it's meaningfully and imminently going to harm them.

In today's climate, the only way to build a respectably sized protest movement is to put x-risk on the back burner and focus on attacking AI more broadly: there are a lot of people who are pissed at gen-AI in general, like people mad at data plagiarism, job loss, and enshittification. They are making some steps towards this, but I think there's a feeling that doing so would align them politically with the left and make enemies among AI companies. They should either embrace this, or give up on protesting entirely.

Press release is from Stop AI, which I think is a separate outfit?

4
Jeff Kaufman 🔸
It looks like they have one person in common: StopAI team ∩ PauseAI team is Guido Reichstadter. But he's listed on the former as "protestor" and on the latter as "volunteer", and I think "separate outfit" is right.

Marcus says:

But a pause gets no additional benefit whereas most other regulation gets additional benefit (like model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.) 

Matrice says:

Due to this, many in PauseAI are trying to do coalition politics bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists). 

These seem to be hinting at an important crux. On the one hand, I can see that cooperating with people who have other concerns about AI could water down the content of one's advocacy.

On the other hand, might it be easier to get a broader coalition behind a pause, or some other form of regulation that many others in an AI-concerned coalition would view as a win? At least at a cursory level, many of the alternatives Marcus mentioned sound like things that wouldn't interest other members of a broader coalition, only people focused on x-risk. 

Whether x-risk-focused advocates alone can achieve enough policy wins against the power of Big AI (and corporations interested in harnessing it) is unclear to me. If other members of the AI-concerned coalition have significantly more influence than the x-risk group -- such that a coalition-based strategy would excessively "risk focusing on policies and AI systems that have little to do with existential risk" -- then it is unclear to me whether the x-risk group had enough influence to go it alone either. In that case, would they have been better off with the coalition even if most of the coalition's work only generically slowed down AI rather than bringing specific x-risk reductions?

My understanding is that most successful political/social movements employ a fairly wide range of strategies -- from elite lobbying to grassroots work, from narrow focus on the movement's core objectives to building coalitions with those who may have common opponents or somewhat associated concerns. Ultimately, elites care about staying in power, and most countries important to AI do have elections. AI advocates are not wrong that imposing a bunch of regulations of any sort will slow down AI, make it harder for AI to save someone like me from cancer 25-35 years down the road, and otherwise impose some real costs. There has to be enough popular support for paying those costs.

So my starting point would be an "all of the above" strategy, rather than giving up on coalition building without first making a concerted effort. Maybe PauseAI the org, or pause advocacy the idea, isn't the best way to go about coalition building or to build broad-based public support. But I'm not seeing much public discussion of better ways.
