Since I started PauseAI, I’ve encountered a wall of paranoid fear from EAs and rationalists that the slightest wrongthink, or any willingness to use persuasive speech as an intervention, will taint a person’s mind for life with self-deception-- that “politics” will kill their mind. I saw people shake with fear at the thought of joining a protest against an industry they believed would destroy the world if unchecked, because they didn’t want to be photographed next to an “unnuanced” sign. They were afraid of sinning by saying something wrong. They were afraid of sinning by even trying to talk persuasively!
The worry about destroying one’s objectivity was often phrased to me as “being a scout/not being a soldier”, referring to Julia Galef’s book Scout Mindset. I think her own metaphor gives us everything we need to dispel this fear. Scouts are important for success in battle because accurate information is needed to draw up a good battle plan. But those battle plans are worthless without soldiers to fight the battle! “Everyone Should Be a Mapmaker and Fear That Using the Map to Actually Do Something Could Make Them a Worse Mapmaker” would be a much less rousing title, but this is how many EAs and rationalists have chosen to interpret the book.
Even a scout can’t be only a scout. If a scout reports what they found to a superior officer, and the officer wants to pretend they didn’t hear it, a good scout doesn’t just stay curious about the situation or note that the superior officer has chosen a narrative. They fight to be heard! Because the truth of what they saw matters to the war effort. The success of the scout and the officer and the soldier is all ultimately measured in the outcome of the war. Accurate intel is important for something larger than the map-- for the battle.
Imagine if the insecticide-treated bednets hemmed and hawed about the slight chance of harm from their use in anti-malaria interventions. Would that help one bit? No! What helps is working through foreseeable issues ahead of time at the war table, then actually trying the intervention with each component fully committed. Bednets are soldiers, and all our thinking about the best interventions would be useless if there were no soldiers to actually carry the interventions out. Advocating for the PauseAI proposal and opposing companies who are building AGI through protests is an intervention, much like spreading insecticide-treated bednets, but instead of bednets the soldiers are people armed with facts and arguments that we hope will persuade the public and government officials.
Interventions that involve talking, thinking, persuasion, and winning hearts and minds require commitment to the intervention and not simply to the accuracy of your map or your reputation for accurate predictions. To be a soldier in this intervention, you have to be willing to be part of the action itself and not just part of the zoomed-out thinking. This is very scary for a contingent of EAs and rationalists today who treat thinking and talking as sacred activities that must follow the rules of science or LessWrong and not be used for anything else. Some of them would like to entirely forbid "politics" (by which they generally mean trying to persuade people of your position and get them on your side) or "being a [rhetorical] soldier" out of the fear that people cannot compartmentalize persuasive speech acts from scout thinking and will lose their ability to earnestly truth-seek.
I think these concerns are wildly overblown. What are the chances that amplifying the message of an org you trust in a way the public will understand undermines your ability to think critically? That's just contamination thinking. I developed the PauseAI US interventions with my scout hat on. When planning a protest, I'm an officer. At the protest, I'm a soldier. Lo and behold, I am not mindkilled. In fact, it's illuminating to serve in all of those roles-- I feel I have a better and more accurate map because of it. Even if I didn't, a highly accurate map simply isn't necessary for all interventions. Advocating for more time for technical safety work and for regulations to be established is kind of a no-brainer.
It's noble to serve as a soldier when we need humans as bednets to carry out the interventions that scouts have identified and officers have chosen to execute. Soldiers win wars. The most accurate map made by the most virtuous scout is worth nothing without soldiers to do something with it.
This analysis seems roughly right to me. Another piece of it, I think, is that being a 'soldier' or a 'bednet-equivalent' probably feels low status to many people (sometimes me included) because:
To be clear, I don't endorse this; I am just pointing out something I notice within myself and others. I think the second one is mostly just bad, and we should do things that are good regardless of whether they have 'EA vibes'. The first one I think is somewhat reasonable (e.g. I wouldn't want to pay someone to be a full-time protest attendee just to bring up the numbers), but I think soldiering can be quite challenging and laudable, and part of a portfolio of types of actions one takes.
I'd like to add another bullet point:
- personal fit
I think that protests play an important role in the political landscape, so I joined a few, but walking through streets in large crowds and chanting made me feel uncomfortable. Maybe I'd get used to it if I tried more often.
Yes, this matches what potential attendees report to me. They are also afraid of being “cringe” and don’t want to be associated with noob-friendly messaging, which I interpret as status-related.
This deeply saddens me because one of the things I most admired about early EA and found inspirational was the willingness to do unglamorous work. It’s often neglected so it can be very high leverage to do it!
I feel this way—I recently watched some footage of a PauseAI protest and it made me cringe, and I would hate participating in one. But also I think there are good rational arguments for doing protests, and I think AI pause protests are among the highest-EV interventions right now.
This is a valuable post, but I don't think it engages with a lot of the concern about PauseAI advocacy. I have two main reasons why I broadly disagree:
1. AI safety is an area with a lot of uncertainty. Importantly, this uncertainty isn't merely about the nature of the risks but about the impact of potential interventions.
Of all interventions, pausing AI development is, some think, a particularly risky one. There are dangers like:
People at PauseAI are probably less concerned about the above (or more concerned about model autonomy, catastrophic risks, and short timelines).
Although you may have felt that you did your "scouting" work and arrived at a position worth defending as a warrior, others' comparably thorough scouting work has led them to a different position. Their opposition to your warrior-like advocacy, then, may not come (as your post suggests) from a purist notion that we should preserve elite epistemics at the cost of impact, but from a fundamental disagreement about the desirability of the consequences of a pause (or other policies), or of advocacy for a pause.
If our shared goal is the clichéd securing-benefits-and-minimizing-risks, or even just minimizing risks, one should be open to thoughtful colleagues' input that one's actions may be counterproductive to that end-goal.
2. Fighting does not necessarily get one closer to winning.
Although the analogy of war is compelling and lends itself well to your post's argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism to your fighting style seriously should be paramount.
I still concede that a lot of people dismiss PauseAI merely because they see it as cringe. But I don't think this is the core of most thoughtful people's criticism.
To be very clear, I'm not saying that PauseAI people are wrong, or that a pause will always be undesirable, or that they are using the wrong methods. I am responding to
(1) the feeling that this post dismissed criticism of PauseAI without engaging with object-level arguments, and the feeling that this post wrongly ascribed outside criticism to epistemic purism and a reluctance to "do the dirty work," and
(2) the idea that the scout-work is "done" already and an AI pause is currently desirable. (I'm not sure I'm right here at all, but I have reasons [above] to think that PauseAI shouldn't be so sure either.)
Sorry for not editing this better, I wanted to write it quickly. I welcome people's responses though I may not be able to answer to them!
Love this!
My experience in animal protection has shown me the immense value of soldiers and FWIW I think some of the most resolute soldiers I know are also the scouts I most look up to. Campaigning is probably the most mentally challenging work I have ever done. I think part of that is constantly iterating through the OODA loop, which is cycling through scout and soldier mindsets.
Most animal activists I know in the EA world were activists first and EA second. It would be interesting to see more EAs tapping into activist actions, which are often a relatively low lift. And I think embracing the soldier mindset is part of making that happen.
seems locally invalid.[1]
[1] 'locally invalid' means 'this is not a valid argument', separate from the truth of the premises or conclusion.