
Cross-posted on LessWrong.

Sorta related, but not the same thing: Problems and Solutions in Infinite Ethics


I don't know a lot about physics, but there appears to be a live debate in the field about how to interpret quantum phenomena.

There's the Copenhagen view, under which wave functions collapse into a definite state, and the many-worlds view, under which wave functions never collapse and instead branch into different "worlds" as time moves forward. I'm pretty sure I'm missing important nuance here; this explainer (a) does a better job explaining the difference.

(Wikipedia tells me there are other interpretations apart from Copenhagen and many-worlds – e.g. De Broglie–Bohm theory – but from what I can tell the active debate is between many-worlders and Copenhagenists.)

Eliezer Yudkowsky is in the many-worlds camp. My guess is that many folks in the EA & rationality communities also hold a many-worlds view, though I haven't seen data on that.

An interesting (troubling?) implication of many-worlds is that there are many very-similar versions of me. For every decision I've made, there's a version where the other choice was made.

And importantly, these alternate versions are just as real as me.

(I find this a bit mind-bending to think about; I again refer to this explainer (a) which does a better job than I can.)

If this is true, it seems hard to ground altruistic actions in a non-selfish foundation. Everything that could happen is happening, somewhere. I might desire to exist in the corner of the multiverse where good things are happening, but that's a self-interested motivation. There are still other corners, where the other possibilities are playing out.

Eliezer engages with this a bit at the end of his quantum sequence:


Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the twelfth century, which are also beyond your ability to affect. But the twelfth century is not your responsibility, because it has, as the quaint phrase goes, “already happened.” I would suggest that you consider every world that is not in your future to be part of the “generalized past.”
Live in your own world. Before you knew about quantum physics, you would not have been tempted to try living in a world that did not seem to exist. Your decisions should add up to this same normality: you shouldn’t try to live in a quantum world you can’t communicate with.

I find this a little deflating, and incongruous with his intense calls to action to save the world. Sure, we can work to save the world, but under many-worlds, we're really just working to save our corner of it.

Has anyone arrived at a more satisfying reconciliation of this? Maybe the thing to do here is bite the bullet of grounding one's ethics in self-interested desire, but that doesn't seem to be a popular move in EA.


3 Answers

For every decision I've made, there's a version where the other choice was made.

Is that actually something the many-worlds view implies? It seems like you're conflating "made a choice" with "quantum split"?


(I don't know any of the relevant physics.)

I think so? (I'm also lacking the relevant physics.)

From the explainer I linked to:


Looking down on the double slit experiment from outside you can ask questions like “what is the probability that the photon will go through each slot?”.  You have no “givens” to affect your probabilities so you say “50/50”, and you’re right.  The photon goes through both, but since there’s only one photon (conserved number of photons), it does it in a particular (somewhat obvious) way: it combines the states “left/not right” and “not left/right”.
Now say you're presented …
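(My own gloss, not something from the explainer: in standard notation, the single photon's post-slit state is the superposition

$$|\psi\rangle = \frac{1}{\sqrt{2}}\big(|\text{left}\rangle + |\text{right}\rangle\big),$$

and the Born rule recovers the 50/50 odds: $P(\text{left}) = |\langle\text{left}|\psi\rangle|^2 = 1/2$.)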
zdgroff
The explainer doesn't seem to imply that the choice is equivalent to a quantum split, unless I'm missing something? I've had Jeff's reservation every time I've heard this argument. It seems like it would just be a huge coincidence for our decisions to actually correspond to splits. A subjective sense of uncertainty may not equal an actual lack of determinism at the atomic level.
Tessa A 🔸
My impression (also not a physicist) is that there's no obvious connection between a wave function collapsing somewhere in the universe and your neurons churning through a decision about which door you'd rather walk through. Under many-worlds, every quantum-possible universe exists, but that doesn't mean your experience of decision-making is distributed equally and oppositely across those worlds. If you like the look of the right door better than the left door, then probably most of your selves will go through that door. (If you're interested in a fictional exploration of these issues, Ted Chiang's "Anxiety Is the Dizziness of Freedom" is excellent.)
Milan_Griffes
fwiw, my concern isn't premised on "all futures / choices being equally likely."  I think the concern is closer to something like "some set of futures are going to happen (there's some distribution of Everett branches that exists and can't be altered from the inside), so there's not really room to change the course of things from a zoomed-out, point-of-view-of-the-universe perspective." I'll give the Chiang story a look, thanks!

So assuming the Copenhagen interpretation is wrong and something like MWI or zero-world or something else is right, it's likely the case that there are multiple, disconnected causal histories. This is true to a lesser extent even in classical physics, due to the expansion of the universe and the gradual shrinking of Hubble volumes (light cones), so even a die-hard Copenhagenist should consider what we might call generally acausal ethics.

My response is generally something like this, keeping in mind my ethical perspective is probably best described as virtue ethics with something like negative preference utilitarianism applied on top:

  • Causal histories I am not causally linked with still matter for a few reasons:
    • My compassion can extend beyond causality in the same way it can extend beyond my city, country, ethnicity, species, and planet (moral circle expansion).
    • I am unsure what I will be causally linked with in the future (veil of ignorance).
    • Agents in other causal histories can extend compassion for me in kind if I do it for them (acausal trade).
  • Given that other causal histories matter, I can:
    • act to make other causal histories better in those cases where I am currently causally connected but later won't be (e.g. MWI worlds that will later split off from the one I find myself in, but that share a common history prior to the split),
    • engage in acausal trade: when the tradeoffs are nil or small, create in my own causal history more of what is wanted in other causal histories, knowing that my causal history will receive the same in exchange,
    • otherwise generally act to increase the measure (or, if the universe is finite, the count) of causal histories that are "good" ("good" could mean something like "want to live in" or "enjoy" or something else a bit beyond the scope of this analysis); a rough sketch of this last point follows the list.
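Here's a minimal numerical sketch of that last point. The framing is mine, not established physics: I'm assuming a branch ("causal history") can be summarized as a (measure, goodness) pair, so that an action is scored by the measure-weighted goodness of the branches it leads to.

```python
# Toy model (illustrative assumptions only): an action induces a
# distribution over causal histories; score it by measure-weighted goodness.

def measure_weighted_value(branches):
    """branches: list of (measure, goodness) pairs whose measures sum to 1."""
    return sum(measure * goodness for measure, goodness in branches)

# Hypothetical branch distributions for two actions (numbers made up):
do_nothing = [(0.5, 0.2), (0.5, 0.8)]
act        = [(0.8, 0.8), (0.2, 0.2)]

print(measure_weighted_value(do_nothing))  # 0.5
print(measure_weighted_value(act))         # 0.68
```

On this framing, "increasing the measure of good causal histories" just means preferring the action whose branch distribution scores higher.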

I personally think many-worlds is an unhelpful philosophy. I strongly conjecture that atoms evolve in a way where they all mutually connect to their independent degrees of freedom.

This happens in a way that requires some creative thinking about how to sample and interpret the data, akin to how signal processing uses the Nyquist frequency. That's just a little hard to do, so ideas like many-worlds have emerged to justify why it's complex.

So, to simplify your problem: I help someone, but somewhere else there is someone else who I wasn't able to help. Wat do?

You're in this precise situation regardless of quantum physics; I guarantee you won't be able to save everyone in your personal future light cone either. So I think that should simplify your question a bunch.

Why would this change your metaethical position? The reason you'd want to help someone else shouldn't change if I make you aware of some additional people somewhere whom you're not capable of helping.

The reason you’d want to help someone else shouldn’t change if I make you aware of some additional people somewhere whom you’re not capable of helping.

Interestingly, Eliezer claims here that that is precisely what caused the change in his case:

If my memory serves me, I converted to average utilitarianism as a direct result of believing in a Big World.

That's from more than ten years ago; I don't know whether that's still his position.

3 Comments

If you want to make a decision, you will probably agree with me that it's more likely that you'll end up making that decision, or at least that it's possible to alter the likelihood that you'll make a certain decision by thinking (otherwise your question would be better stated as "if physics is deterministic, does ethics matter?"). And under many-worlds, if something is more likely to happen, then there will be more worlds where that happens, and more observers that see it happen (I think this is how it's usually posed, anyway). So while there'll always be some worlds where you're not altruistic, no matter what you do, you can change how many worlds are like that.
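To make that concrete, here's a toy simulation. It assumes (my simplification, not real physics) that "deliberating" can be modeled as raising a single propensity p, and that the fraction of sampled worlds stands in for branch measure:

```python
import random

# Toy simulation: sample many "worlds"; in each one you act altruistically
# with propensity p. Raising p raises the fraction (measure) of worlds in
# which the altruistic act happens, though some non-altruistic worlds remain.

def fraction_altruistic(p, n_worlds=100_000, seed=0):
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n_worlds)) / n_worlds

print(fraction_altruistic(0.3))  # ~0.30 before deliberating
print(fraction_altruistic(0.9))  # ~0.90 after deliberating
```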

Thanks, I haven't thought about this enough to say with confidence, but it seems plausible that many-worlds implies determinism such that this is really a question about determinism / living in a deterministic system.

Ah, Knobe et al. 2005 seems relevant. I haven't read it yet.
