
(This post also has a Russian version, translated from the present original by K. Kirdan.)

Many longtermists seem hopeful that our successors (or any advanced civilization/superintelligence) will eventually act in accordance with some moral truth.[1] While I’m sympathetic to some forms of moral realism, I believe that such a scenario is fairly unlikely for any civilization and even more so for the most advanced/expansionist ones. This post briefly explains why.

To be clear, my case by no means implies that we should not act according to what we think might be a moral truth. I simply argue that we can't assume that our successors -- or any powerful civilization -- will "do the (objectively) right thing". And this matters for longtermist cause prioritization.
 

Epistemic status: Since I believe the ideas in this post to be less important than those in future ones within this sequence, I wrote it quickly and didn’t ask anyone for thorough feedback before posting, which makes me think I’m more likely than usual to have missed important considerations. Let me know what you think!

Update April 10th: When I first posted this, the title was "It Doesn't Matter what the Moral Truth might be". I realized this was misleading: it made it look like I was making a strong normative claim about what matters, while my goal was actually to predict what might happen, so I changed the title.

Rare are those who will eventually act in accordance with some moral truth

For agents to do what might objectively be the best thing to do, all of the following conditions need to be met:

  1. There is a moral truth.
  2. It is possible to “find it” and recognize it as such.
  3. They find something they recognize as a moral truth.
  4. They (unconditionally) accept it, even if it is highly counterintuitive.
  5. The thing they found is actually the moral truth. No normative mistake.
  6. They succeed at acting in accordance with it. No practical mistake.
  7. They stick to this forever. No value drift.

I think these seven conditions are generally quite unlikely to be all met at the same time, mainly for the following reasons:

  • (Re: condition #1) While I find compelling the argument that (some of) our subjective experiences are instantiations of objective (dis)value (see Rawlette 2016; Vinding 2014), I am highly skeptical about claims of moral truths that are not completely dependent on sentience.
  • (Re: #2) I don’t see why we should assume it is possible to “find” (with a sufficient degree of certainty) the moral truth, especially if it is more complex than – or different from – something like “pleasure is good and suffering is bad.”
  • (Re: #3 and #4) If they “find” a moral truth and don’t like what it says, they might very well not act in accordance with it.[2]
  • (Re: #3, #4, #5, and #7) Within a civilization, we should expect the agents who have the values that are the most adapted/competitive to survival, replication, and expansion to eventually be selected for (see, e.g., Bostrom 2004; Hanson 1998), and I see no reason to suppose the moral truth is particularly well adapted to those things.

Even if they’re not rare, their impact will stay marginal

Now, let’s actually assume that many smart agents converge on THE moral truth and effectively optimize for whatever it says. The thing is that, for reasons analogous to those in the last bullet point above, we may expect civilizations -- or groups/individuals within a civilization -- that adopt the moral truth to be less competitive than those whose values are the most adaptive and adapted to space colonization races.

My subsequent post investigates this selection effect in more detail, but here is an intuition pump: Say Denmark wants to follow the moral truth, which is to maximize the sum V − D, where V is something valuable and D something disvaluable. Meanwhile, France just wants something close to “occupy as much space territory as possible”. While the Danes face a trade-off between (A) spreading and building military weapons/defenses as fast as possible and (B) investing in “colonization safety”[3] to make sure they actually end up optimizing for what the moral truth says, the French don't face this trade-off and can just go all-in on (A), which gives them an evolutionary advantage. The significance of this selection effect, here, depends on whether the moral truth is among – or close to – the most “expansion-conducive” intrinsic goals civilizations can plausibly have, and I doubt that it is.
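To make the compounding nature of this advantage concrete, here is a minimal toy sketch. It is not from the original argument: the growth rate, time horizon, and "safety fraction" are all made-up illustrative numbers, and it simply assumes each civilization expands exponentially using whatever share of its resources it does not divert into colonization safety.

```python
# Toy model of the Denmark/France intuition pump.
# All parameters are illustrative assumptions, not claims from the post.

def resources_after(years: int, growth_rate: float, safety_fraction: float) -> float:
    """Resources controlled after `years`, starting from 1 unit.

    Only the share of resources NOT spent on "colonization safety"
    compounds, so a higher safety_fraction means slower expansion.
    """
    effective_rate = growth_rate * (1 - safety_fraction)
    return (1 + effective_rate) ** years

# "Denmark" diverts 20% of its resources into colonization safety;
# "France" goes all-in on expansion.
denmark = resources_after(years=1000, growth_rate=0.01, safety_fraction=0.2)
france = resources_after(years=1000, growth_rate=0.01, safety_fraction=0.0)

print(f"Denmark: {denmark:,.0f} units of resources")
print(f"France:  {france:,.0f} units of resources")
print(f"France's share of colonized space: {france / (denmark + france):.0%}")
```

Under these made-up numbers, even a modest, constant diversion of resources compounds into France ending up with the overwhelming majority (here roughly 88%) of colonized space, which is the kind of selection effect the argument points at.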

Conclusion

Acting in accordance with some moral truth requires the successful completion of many non-obvious steps, which is unlikely.

Also, values don’t come out of nowhere. They are the product of evolutionary processes. We should expect the most adaptive and adapted values to be the most represented, at least among the most expansionist societies. And how true a moral theory might be seems fairly orthogonal to how competitive it is,[4] such that we -- a priori -- have no good reason to expect (the most powerful) civilizations/agents to do what might be objectively good.

If I'm roughly correct, this implies that the "discoverable moral reality" argument in favor of assuming the future will be good (see Anthis 2022) is pretty bad. This probably also has more direct implications for longtermist cause prioritization that will be addressed in subsequent posts within this sequence.

Acknowledgment

My work on this sequence so far has been funded by the Existential Risk Alliance.

All assumptions/claims/omissions are my own. 


 

  1. ^

    This is informed by informal interactions I had, plus my recollections of claims made in some podcasts I can’t recall. I actually can’t find anything fleshing out this exact idea, surprisingly, and I don’t think it’s worth spending more time searching. Please, share in the comments if you can think of any!

  2. ^

    Interestingly, Brian Tomasik (2014) writes: “Personally, I don't much care what the moral truth is even if it exists. If the moral truth were published in a book, I'd read the book out of interest, but I wouldn't feel obligated to follow its commands. I would instead continue to do what I am most emotionally moved to do.”

  3. ^

    After a thorough evaluation, they might even realize that the best way to maximize their utility requires avoiding colonizing space (e.g., because the expected disvalue of conflict with France or conflict with alien civilizations is too high).

  4. ^

    This comment thread discusses an interesting argument by Wei Dai that challenges this claim.

Comments (9)

In Six Plausible Meta-Ethical Alternatives, I wrote (as one of the six alternatives):

  1. Most intelligent beings in the multiverse share similar preferences. This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.

I think in this post you're not giving enough attention to the possibility that there's something that we call "doing philosophy" that can be used to discover all kinds of philosophical truths, and that you can't become a truly powerful civilization without being able to "do philosophy" and be generally motivated by the results. Consider that philosophy seems to have helped the West become the dominant civilization on Earth, for example by inventing logic and science, and more recently have led to the discovery of ideas like acausal extortion/trade (which seem promising albeit still highly speculative). Of course I'm very uncertain of this and have little idea what "doing philosophy" actually consists of, but I've written a few more words on this topic if you're interested.

Very interesting, Wei! Thanks a lot for the comment and the links. 

TL;DR of my response: Your argument assumes that the first two conditions I list are met by default, which is I think a strong assumption (Part 1). Assuming that is the case, however, your point suggests there might be a selection effect favoring agents that act in accordance with the moral truth, which might be stronger than the selection effect I depict for values that are more expansion-conducive than the moral truth. This is something I haven't seriously considered and this made me update! Nonetheless, for your argument to be valid and strong, the orthogonality thesis has to be almost completely false, and I think we need more solid evidence to challenge that thesis (Part 2).

Part 1: Strong assumption

This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts.

My understanding is that this scenario says the seven conditions I listed are met because it is actually trivial for a super-capable intergalactic civilization to meet those (or even required for it to become intergalactic in the first place, as you suggest later).

I think this is plausible for the following conditions:

  • #3 They find something they recognize as a moral truth.
  • #4 They (unconditionally) accept it, even if it is highly counterintuitive.
  • #5 The thing they found is actually the moral truth. No normative mistake.
  • #6 They succeed at acting in accordance with it. No practical mistake.
  • #7 They stick to this forever. No value drift.

You might indeed expect that the most powerful civs figure out how to overcome these challenges, and that those who don't are left behind.[1] This is something I haven't seriously considered before, so thanks!

However, recall the first two conditions:

  1. There is a moral truth. 
  2. It is possible to “find it” and recognize it as such. 

How capable a civilization is doesn't matter when it comes to how likely these two are to be met. And while most metaethical debates focus only on 1, saying 1 is true is a much weaker claim than saying 1&2 is true (see, e.g., the naturalism vs non-naturalism controversy, which is I think only one piece of the puzzle).

Part 2: Challenging the orthogonality thesis

Then, about the scenario you depict, you say:

There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.

Maybe, but what I argue is that they are (occasional) "sophisticated minds" with values that are more expansion-conducive than the (potential) moral truth (e.g., because they have simple unconstrained goals such as "let's just maximize for more life" or "for expansion itself"), and that they're the ones who tend to take over.

But then you make this claim, which, if true, seems to sort of debunk my argument:

you can't become a truly powerful civilization without being able to "do philosophy" and be generally motivated by the results.

(Given the context in your comment, I assume that by "being able to do philosophy", you mean "being able to do things like finding the moral truth".) 

But I don't think this claim is true.[1] However, you made me update and I might update more once I read the posts of yours that you linked! :)

  1. ^

    I remain skeptical because this would imply the orthogonality thesis is almost completely false. Assuming there is a moral truth and that it is possible to "find" it and recognize it as such, I tentatively still believe that extremely powerful agents/civs with motivations misaligned with the moral truth are very plausible and not rare. You can at least imagine scenarios where they started aligned but then value drifted (without that making them significantly less powerful).

I remain skeptical because this would imply the orthogonality thesis is almost completely false. 

The orthogonality thesis could be (and I think almost certainly is) false with respect to some agent-generating processes (e.g., natural selection) and true with respect to others (e.g. Q-learning).

Do you have any reading to suggest on that topic? I'd be curious to understand that position more :)

My impression was that philosophers tended to disagree a lot on what moral truths are?

Consider that philosophy seems to have helped the West become the dominant civilization on Earth, for example by inventing logic and science 

I'd argue that the process of Western civilization dominating the Earth was not a very moral process, and was actually pretty immoral, despite the presence of logic and science. It involved several genocides (in the Americas), colonization, the Two World Wars... In the process, some good things definitely happened (medicine progressing, for instance), but mostly for humans. The status of farmed animals seems to have consistently worsened with factory farming stepping in.

So I'd argue that the West dominating the world happened because it was more powerful, not because it was more moral (see The End of the Megamachine by Fabian Scheidler for more on this topic).

In that view, science and logic matter because they allow you to have more power. They allow you to have a more truthful picture of how the universe works, which allows making stuff like firearms and better boats and antibiotics and nuclear bombs. But this is the process of "civilizations competing with each other" described above. It's not a comparison based on "who is acting closer to what is morally good"?

Several objections, in no particular order:

  • I think your first section is correct, but you conclude far too much from it: failing to act in perfect accord with the moral truth does not mean you're not influenced by it at all. Humans fail your conditions 4-7 and yet are occasionally influenced by moral facts in ways that matter.
  • On the one hand, you assume that civilizations are agents which can simply decide to adopt this or that strategy; on the other hand, you expect intense selection within civilizations, such that their members behave so as to maximize their own reproductive success. But these can't both be true: you can throw all of your surplus into expanding as fast as possible, or you can spend it on internal competition, or you can do something in between, but you can't spend it all on both. 
  • I don't think 

    > Within a civilization, we should expect the agents who have the values that are the most adapted/competitive to survival, replication, and expansion to eventually be selected for

    is the right conclusion to draw from Hanson's paper. The selection effects he's talking about act on different regions of the frontier of a given civilization. Those living in the interior (who will be the vast majority of the population in the extreme long-run) may be disproportionately descended from fast expanders, but will not face the same pressure themselves. 
  • There are many different units of selection, and they can't all be subject to arbitrarily intense selection pressure simultaneously: cancer cells don't build spaceships. 

Insightful! Thanks for taking the time to write these.

failing to act in perfect accord with the moral truth does not mean you're not influenced by it at all. Humans fail your conditions 4-7 and yet are occasionally influenced by moral facts in ways that matter.

Agreed, and I didn't mean to argue against that, so thanks for clarifying! Note however that the more you expect the moral truth to be fragile/complex, the further from it you should expect agents' actions to be.

you expect intense selection within civilizations, such that their members behave so as to maximize their own reproductive success.

Hum... I don't think the "such that..." part logically follows. I don't think this is how selection effects work. All I'm saying is that those who are the most bullish on space colonization will colonize more space.

I'm not sure what to say regarding your last two points. I think I need to think/read more, here. Thanks :)

All I'm saying is that those who are the most bullish on space colonization will colonize more space.

Sure, but that doesn't tell you much about what happens afterwards. If the initial colonists' values are locked in ~forever, we should probably expect value drift to be weak in general, which means frontier selection effects have a lot less variation to work with. 

At the extreme lower limit with no drift at all, most agents within a mature civilization are about as expansionist as the most expansionist of the initial colonists - but no more so. And this might not be all that much in the grand scheme of things. 

At the other end, where most of the space of possible values gets explored, maybe you do get a shockwave of superintelligent sociopaths racing outwards at relativistic speeds - but you also get a vast interior that favors (relatively speaking) long-term survival and material efficiency.

Hi Jim,

Nice post!

(Re: condition #1) While I find compelling the argument that (some of) our subjective experiences are instantiations of objective (dis)value (see Rawlette 2016; Vinding 2014), I am highly skeptical about claims of moral truths that are not completely dependent on sentience.

Under hedonism, any and all claims of moral truths are completely dependent on sentience. So 100 % weight on hedonism (in principle) makes condition 1 met, right?

(Re: #2) I don’t see why we should assume it is possible to “find” (with a sufficient degree of certainty) the moral truth, especially if it is more complex than – or different from – something like “pleasure is good and suffering is bad.”

In general, the greater the amount of resources of a civilisation, the more resources could be directed towards finding the moral truth. So should we expect more powerful civilisations to be better at finding the moral truth? I suppose one could argue that such civilisations may be selected such that they direct fewer resources towards finding the moral truth, but I do not think that is obvious, because finding the moral truth also has selection effects. For example, finding the moral truth selects for not going extinct (to continue searching for it), and space colonisation arguably decreases extinction risk.
