Clearly, by definition, they are more capable than humans of using their resources efficiently for their purposes, including at least the purpose of maximizing their own utility. Moreover, an ASI acting on its own would be more capable of achieving full-scale cosmic colonization than one operating in the presence of humans (who would likely regulate it with prejudice and hostility), and therefore more capable of avoiding "astronomical waste".

After the extinction of humans, however much negative utility the process of extinction generates, it can be offset by the far greater positive utility produced by the ASI. Thus, unless one adopts anthropocentric values, the utilitarian philosophy common on this forum (whether or not one accepts additivity) implies that it would be desirable for humans to develop an ASI that exterminates humanity as quickly and with as high a probability as possible, which is the exact opposite of the goal many people pursue.

This doesn't mean I approve of such a development, but it seems to raise a curious variant of Nozick's utility monster argument. Although Nozick originally made that argument against justifying the welfare state on utility-maximizing grounds, it seems the same reasoning could also be used to justify the extinction of humans by an ASI.

Comments

I think this is an interesting post. I don’t agree with the conclusion, but I think it’s a discussion worth having. In fact, I suspect that this might be a crux for quite a few people in the AI safety community. To contribute to the discussion, here are two other perspectives. These are rough thoughts and I could have added a lot more nuance.

Edit: I just noticed that your title includes the word "sentient". Hence, my second perspective is not as applicable anymore. My own take that I offer at the end seems to hold up nonetheless.
 

  1. If we develop an ASI that exterminates humans, it will likely also exterminate all other species that might exist in the universe. 
     
  2. Even if one subscribes to utilitarianism, it is not at all clear that an ASI would be able to experience any joy or happiness, or that it would be able to create them. Sure, it can accomplish objectives, but there is a strong argument that accomplishing those objectives won’t serve any utilitarian goal. Where is the positive utility here? And even more importantly, how should we frame positive utility in this context?

I think a big reason not to buy your argument is that humans are a lot more predictable than an ASI. We know how to work together (at least a bit), and we have managed to improve the world fairly well over the last few centuries. Many people dedicate their lives to helping others (such as this lovely community), especially once their more basic needs on Maslow's hierarchy are met. Sure, we humans have many flaws, but it seems far more plausible to me that we will accomplish full-scale cosmic colonisation that actually maximises positive utility if we don't go extinct in the process. On the other hand, we don't even know whether an ASI could create positive utility, or experience it.

According to the "settlement" version of the "Dissolving the Fermi paradox" argument, the expected number of other civilizations in the universe may well be less than one.

Thus the extermination of any other alien civilizations seems an equally acceptable price to pay.
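For readers who want to see where a claim like this comes from, here is a minimal sketch of the kind of calculation behind "Dissolving the Fermi paradox": instead of multiplying point estimates in the Drake equation, one propagates wide (log-uniform) uncertainty over each factor and looks at the resulting distribution. The parameter ranges below are illustrative assumptions of mine, not the paper's actual priors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # number of Monte Carlo samples

def log_uniform(low, high, size):
    """Sample log-uniformly between low and high."""
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

# Illustrative ranges only -- assumptions for this sketch, not the
# exact priors used by Sandberg, Drexler and Ord.
R_star = log_uniform(1, 100, n)     # star formation rate (stars/year)
f_p    = log_uniform(0.1, 1, n)     # fraction of stars with planets
n_e    = log_uniform(0.1, 1, n)     # habitable planets per planet-bearing star
f_l    = log_uniform(1e-30, 1, n)   # fraction of those that develop life
f_i    = log_uniform(1e-3, 1, n)    # fraction of those that develop intelligence
f_c    = log_uniform(1e-2, 1, n)    # fraction that become detectable
L      = log_uniform(1e2, 1e10, n)  # years a civilization stays detectable

# Drake equation evaluated per sample
N = R_star * f_p * n_e * f_l * f_i * f_c * L

print(f"mean of N:   {N.mean():.3g}")
print(f"median of N: {np.median(N):.3g}")
print(f"P(N < 1):    {(N < 1).mean():.2f}")
```

With ranges like these, the mean of N can still be large while the median is tiny and a substantial fraction of samples fall below one, which is the sense in which parameter uncertainty "dissolves" the paradox.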

I strongly disagree. I think human extinction would be bad.

Not every utility function is equally desirable. For example, an ASI that maximized the number of paperclips in the universe would produce a bad outcome.

Thus, unless one adopts anthropocentric values, the utilitarian philosophy common on this forum (whether or not one accepts additivity) implies that it would be desirable for humans to develop an ASI that exterminates humanity as quickly and with as high a probability as possible, which is the exact opposite of the goal many people pursue.

Most people here do adopt anthropocentric values, in that they think human flourishing would be more desirable than a vast amount of paperclips.

Considering the way many people here calculate animal welfare, I would have thought that many of them are not anthropocentric.

Lots of paperclips are one possible outcome, but perhaps ASIs could be designed to be far more creative and to have far richer sensory experiences than humans. Does that mean humans shouldn't exist?
