Disclaimer: I’m not an expert in politics, unions, or AI safety policy.

It seems like a significant number of people who deem AI safety important see “AI policy” as an important potential solution. From the outside, though, AI policy seems vaguely defined, without clear proposals or obvious solutions. Here are some reasons why I, with my limited knowledge of these topics, think unionization might benefit AI policy in the interest of AI safety.

Firstly, it seems like policymakers and the general public see AI as a very important topic, one that might benefit from regulation. Stories such as this one already provide an easy sell for why unionizing the people who work on AI is a good idea - even for non-AI-safety-related reasons.

Historically, it seems to me that government regulation is often toothless and that the incentives of politicians and the general public are often not aligned. Regulation often tends to be self-regulation, and companies have significant power to steer public policy. Proposals such as democratizing the decisions that OpenAI makes are ultimately still reliant on OpenAI playing along. Unions are about recognizing that workers have significant power to influence companies when they know how to access it. A unionized workforce has actual, impactful leverage over a company’s direction.

One assumption I make is that opening up decision-making to a larger group (such as the workers), as opposed to just the C-suite, decreases the likelihood of the scenarios that are often alluded to (such as “companies don’t care about safety if it negatively affects profits”).

I know that unions are a very divisive topic, especially in the US, but I would be interested in other people’s takes on this. I do recognize that in the current political climate, this might be a complete pipe dream.

Comments

It's not obvious that unions or workers will care as much about safety as management. See this post for some historical evidence.

I agree that this should be a consideration. Based on the small amount of data I have from talking with employees at major AI labs about this, I currently think that overall their workers are less concerned about safety than their management, so I'm worried this could be counterproductive.

You might find Haydn's work on the subject interesting:

Overall then, the AI community has achieved some successes as workers organising and bargaining with their employers. This may be attributed to the organisational products and services (1), organisational production technology (3), and the general economic conditions (4) all being favourable – though the structure of bargaining has not been (2) – and the AI community having been fairly committed to collective action (5).

Factors likely to continue include the products and services being consumer-facing and difficult to stockpile; reliance on high-skilled labour; and the unequal bargaining structure. However it is unclear what the balance of talent supply and demand will be, and to what extent the AI community will continue to be committed to collective action. These are key questions for further research.

However, I am not that optimistic about unions specifically, because in general they seem focused mainly on benefiting their own members rather than taking into account impacts on broader society. In the same way that fossil fuel unions, police unions, or longshoremen unions have interests that significantly diverge from those of society as a whole, I would expect AI employee unions to still want their employers to aggressively commercialize.

Thanks for this! This has occurred to me too - I've not heard labour power discussed as a lever in AI governance (though maybe I've just missed that discussion), and it seems like something people should at least consider, as strikes and labour organizing have effectively changed company norms/actions in the past. 

Thanks for making this post! It's a very thought-provoking topic that certainly merits more discussion and investigation!

My response is in three parts. First, I'll share some of my thoughts on why, in my view, we should expect unionized companies to act more safely. Then, I'll share some doubts I have about tractability on fast timelines. Lastly, I'll offer an alternative proposal.

1.

To gently push back against other commenters here, I think there's a case to be made that workers' incentives should lean much more toward safety than management's.

Management has incentives to signal that they care about safety, but incentives against appropriate caution in their actions. Talking about safety and credibly demonstrating it has PR benefits for a company. But the company that deploys powerful AI captures a massive portion of the upside, while the downside risk is carried by everyone. Thus, our prior should be that company management will lean toward far more risk-taking than if they were acting in the public interest.

Workers (at least those without stock options) don't have the same incentive. They might miss out on a raise or bonus from slowed progress, but they may also lose their jobs from fast progress if it allows for their automation. (As one example, I suspect that content moderation contractors like those interviewed in the podcast linked by OP won't be used in the same quantity for GPT-5; most of that work will be performed by GPT-4.) Since workers represent a larger proportion of the human population at risk (i.e. all of us), we should expect their voices in company direction to better represent that risk, provided they behave rationally.

Of course, there are countless examples of companies convincing their workers to act against their own interests. But successful workplace organizing could be an effective way to balance corporate messaging with alternative narratives. Even if AI safety doesn't wind up as a top priority of the union, improvements to job security--a standard part of union contracts--could make employees more likely to voice safety concerns or to come out as whistleblowers.

2.

That said, I can think of some reasons to doubt that labor organizing at AI companies is tractable:
 

For one, despite their popular support and some recent unionizations, union membership in the USA is low and declining (by percentage). This means that unions lack resources and most workers are inexperienced with them. It also demonstrates that organizing is hard, especially in a world where many people only meet their colleagues through a screen, and where remote work and independent contracting make it easier for companies to replace workers. It's possible that a new wave of labor organizing could overcome these challenges, but I think it's unlikely that we'll have widespread unions at tech companies within the next five years. (I hope I'm wrong.)

As we approach AGI, power will shift entirely from labor to capital. In a world without work, labor organizing isn't feasible. In my model, this is a gradual process, with worker value slowly declining well before AGI as more and more tasks are automated. This will be true across all industries, but the companies building AI tools will be the first to use them, so their workers are vulnerable to replacement a bit sooner.

An organized workplace might slow down the company's AI development or releases. This would likely be a positive if it occurred everywhere at once, but otherwise it would make it more likely that a non-unionized company develops AGI. This would be difficult to coordinate given the current legal status of unions in the USA: if only 49% of employees at a company vote in favor, they get no legally protected union.

On slower timelines, these issues may be overcome, but fast timelines are likely where most AI risk lies. In short, it seems unlikely that political momentum will build in time to prevent the deployment of misaligned AGI.
 

3.

An alternative that may be less affected by these issues is to organize specifically around the issue of AI safety. I could imagine workers coming together in the spirit of the Federation of Atomic Scientists, perhaps forming the "Machine Learning Workers for Responsible AI". This model would not fit legally recognized unionization in the USA, but it could be a way to build solidarity across the industry and add weight to asks such as the call to pause giant AI experiments.

I expect that the MLWRAI could scale quickly, especially with an endorsement from, say, the Future of Life Institute. It would be able to grow in parallel across all AI companies, even internationally, and it should avoid the political backlash that unions attract. Employees supporting the MLWRAI would not have the legal protections of those in unions, but firing such employees would attract scrutiny. Given sufficient public support or regulatory oversight, that could be enough incentive for companies to voluntarily cooperate with the MLWRAI.
 

An inter-company and international workers' organization would support coordination across companies by reducing the concern that slowing down or investing in safety would allow others to race ahead. It would also provide an avenue for employees to influence company decisions without the majority support required for a union. With the support of the public and/or major EA organizations, even a small minority of workers could have the leverage to push company decisions toward AI safety, worldwide.

Very interesting to hear your and the other commenters' additions!

I do really like your MLWRAI proposal as an alternative to unions; given that almost all important AI labs at the very least pay lip service to AI safety concerns, it seems like they could put their money where their mouth is by supporting rather than suppressing something like MLWRAI.
