In response to my last post about why AI value alignment shouldn't be conflated with AI moral achievement, a few people said they agreed with my point but they would frame it differently. For example, Pablo Stafforini framed the idea this way:

it seems important to distinguish between normative and human specifications, not only because (arguably) “humanity” may fail to pursue the goals it should, but also because the team of humans that succeeds in building the first AGI may not represent the goals of “humanity”. So this should be relevant both to people (like classical and negative utilitarians) with values that deviate from humanity’s in ways that could matter a lot, and to “commonsense moralists” who think we should promote human values but are concerned that AI designers may not pursue these values (because these people may not be representative members of the population, because of self-interest, or because of other reasons).

I disagree with Pablo's framing because I don't think that "the team of humans that succeeds in building the first AGI" will likely be the primary force in the world responsible for shaping the values of future AIs. Instead, I think that (1) there isn't likely to be a "first AGI" in any meaningful sense, and (2) AI values will likely be shaped more by market forces and regulation than the values of AI developers, assuming we solve the technical problems of AI alignment.

In general, companies usually cater to what their customers want, and when they don't, they're generally outcompeted by companies that will. Companies are also heavily constrained by laws and regulations. I think these constraints (market forces and regulation) will apply to AI companies too. Indeed, we have already seen these constraints play a role in shaping the commercialization of existing AI products, such as GPT-4. It seems best to assume that this situation will largely persist into the future, and I see no strong reason to expect a fundamental discontinuity with the development of AGI.

There are some reasons to think that the values of AI developers matter a lot. Perhaps most significantly, AI development appears likely to be highly concentrated at the firm level due to the empirically high economies of scale in AI training and deployment, lessening the ability of competitors to unseat a frontier AI company. In the extreme case, AI development may be taken over and monopolized by the government. Moreover, AI developers may become very rich in the future, having created an extremely commercially successful technology, giving them disproportionate social, economic, and political power in our world.

The points given in the previous paragraph do support a general case for caring somewhat about the morality or motives of frontier AI developers. Nonetheless, I do not think these points are compelling enough to support the claim that future AI values will be shaped primarily by the values of AI developers. It still seems to me that a better first-pass model is that AI values will be shaped by a variety of factors, including consumer preferences and regulation, with the values of AI developers playing a relatively minor role.

Given that we are already seeing market forces shaping the values of existing commercialized AIs, it is confusing to me why an EA would assume this fact will at some point no longer be true. To explain this, my best guess is that many EAs have roughly the following model of AI development:

  1.  There is "narrow AI", which will be commercialized, and its values will be determined by market forces, regulation, and to a limited degree, the values of AI developers. In this category we find GPT-4 from OpenAI, Gemini from Google, and presumably at least a few future iterations of these products.
  2.  Then there is "general AI", which will at some point arrive, and is qualitatively different from narrow AI. Its values will be determined almost solely by the intentions of the first team to develop AGI, assuming they solve the technical problems of value alignment.

My advice is that we should probably just drop the second step, and think of future AI as simply continuing from the first step indefinitely, albeit with AIs becoming incrementally more general and more capable over time.

This perspective matters a great deal for crafting effective policy. Under the first step, if we want to influence the values of AIs, it's more important to shape the institutions and legal framework in which AIs operate. Under the second step, the most important things are solving the alignment problem and ensuring that the first team to develop AGI has good intentions and isn't self-interested.

If we drop the second step and think instead that the first step will persist indefinitely, then our model of how best to influence future AI values becomes quite different. As a result, we should be much less concerned about the intentions and moral virtues of specific AI leaders (e.g. Sam Altman) and much more concerned about the market structures and institutions governing AI.






Executive summary: AI values will likely be shaped more by market forces and regulation than the personal values of AI developers.

Key points:

  1. There won't likely be a singular "first AGI"; AI capabilities will improve incrementally over time.
  2. Companies usually cater to customers and markets, not personal values of developers. This will likely hold for AI companies too.
  3. AI development will likely concentrate at the firm level, giving some power to developers. But market forces and regulation will still play a major role.
  4. We should drop the idea that "general AI" will be qualitatively different. The commercial forces shaping today's AI will likely persist.
  5. This perspective suggests we should focus more on market structures and institutions rather than motives of specific AI leaders.



This comment was auto-generated by the EA Forum Team.

I think this is a good summary of my post.

I'm of two minds on this. On the one hand, I definitely think this is a better way to think about what will determine AI values than "the team of humans that succeeds in building the first AGI".

But I also think the development of powerful AI is likely to radically reallocate power, potentially towards AI developers. States derive their power from a monopoly on force, and I think there is likely to be a period before the obsolescence of human labor in which these monopolies are upset by whoever is able to most effectively develop and deploy AI capabilities. It's not clear who this will be, but it hardly seems guaranteed to be existing state powers or property holders, and AI developers have an obvious expertise and first-mover advantage.
