It might still be better than the counterfactual if an AI arms race were likely to happen soon anyway. I'd prefer the AI leader to have some safety red tape (even if it's largely ignored by leadership and staff) rather than be a purely for-profit entity.

Nonetheless, there's a terrible irony in the organization with the mission "ensure that artificial general intelligence benefits all of humanity" not only kicking off the corporate arms race, but seemingly rushing to win it.

It's clear that the non-profit wrapper was inadequate to constrain the company. In hindsight, perhaps the right move would have been to invest more in AI governance early on, and perhaps to seek to make OpenAI a government body. Though I expect taking AI risk to DC in 2015 would have been a tough sell.

Thanks for making this post! It's a very thought-provoking topic that certainly merits more discussion and investigation!

My response is in three parts. First, I'll share some of my thoughts on why, in my view, we should expect unionized companies to act more safely. Then, I'll share some doubts I have about tractability on fast timelines. Lastly, I'll offer an alternative proposal.

1.

To gently push back against other commenters here, I think there's a case to be made that workers' incentives should lean much more toward safety than management's.

Management has incentives to signal that they care about safety, but also incentives against appropriate caution in their actions. Talking about safety and credibly demonstrating it has PR benefits for a company. But the company that deploys powerful AI captures a massive portion of the upside, while the downside risk is carried by everyone. Thus, our prior should be that company management will lean toward far more risk-taking than if they were acting in the public interest.

Workers (at least those without stock options) don't have the same incentive. They might miss out on a raise or bonus from slowed progress, but they may also lose their jobs from fast progress if it allows their work to be automated. (As one example, I suspect that content moderation contractors like those interviewed in the podcast linked by OP won't be used in the same quantity for GPT-5, since most of that work will be performed by GPT-4.) Since workers represent a larger proportion of the human population at risk (i.e. all of us), we should expect their voices in company direction to better represent that risk, provided they behave rationally.

Of course, there are countless examples of companies convincing their workers to act against their own interests. But successful workplace organizing could be an effective way to balance corporate messaging with alternative narratives. Even if AI safety doesn't wind up as a top priority of the union, improvements to job security--a standard part of union contracts--could make employees more likely to voice safety concerns or to come out as whistleblowers.

2.

That said, I can think of some reasons to doubt that labor organizing at AI companies is tractable:
 

- For one, despite their popular support and some recent unionizations, union membership in the USA is low and declining (by percentage). This means that unions lack resources and most workers are inexperienced with them. It also demonstrates that organizing is hard, especially in a world where many people only meet their colleagues through a screen, and where remote work and independent contracting make it easier for companies to replace workers. It's possible that a new wave of labor organizing could overcome these challenges, but I think it's unlikely that we'll have widespread unions at tech companies within the next five years. (I hope I'm wrong.)

- As we approach AGI, power will shift entirely from labor to capital. In a world without work, labor organizing isn't feasible. In my model, this is a gradual process, with the value of labor slowly declining well before AGI as more and more tasks are automated. This will be true across all industries, but the companies building AI tools will be the first to use them, so their workers are vulnerable to replacement a bit sooner.

- An organized workplace might slow down the company's AI development or releases. This would likely be a positive if it occurred everywhere at once, but otherwise it would make it more likely that a non-unionized company develops AGI. Coordinating this would be difficult given the current legal status of unions in the USA: if only 49% of employees at a company vote in favor, they get no legally protected union.

On slower timelines, these issues may be overcome, but fast timelines are likely where most AI risk lies. In short, it seems unlikely that political momentum will build in time to prevent the deployment of misaligned AGI.
 

3.

An alternative that may be less affected by these issues is to organize specifically around the issue of AI safety. I could imagine workers coming together in the spirit of the Federation of Atomic Scientists, perhaps forming a "Machine Learning Workers for Responsible AI" (MLWRAI). This model would not fit legally recognized unionization in the USA, but it could be a way to build solidarity across the industry and add weight to asks such as the call to pause giant AI experiments.

I expect that the MLWRAI could scale quickly, especially with an endorsement from, say, the Future of Life Institute. It could grow in parallel across all AI companies, even internationally, and it should avoid the political backlash that unions attract. Employees supporting the MLWRAI would not have the legal protections of union members, but firing such employees would attract scrutiny. Given enough public support or regulatory oversight, that scrutiny could be sufficient incentive for companies to voluntarily cooperate with the MLWRAI.
 

An inter-company and international workers' organization would support coordination across companies by reducing the concern that slowing down or investing in safety would allow others to race ahead. It would also provide an avenue for employees to influence company decisions without the majority support required for a union. With the support of the public and/or major EA organizations, even a small minority of workers could have the leverage to push company decisions toward AI safety, worldwide.