
Crossposted from the Global Priorities Project

Introduction

It is commonly objected that the “long-run” perspective on effective altruism rests on esoteric assumptions from moral philosophy that are highly debatable. Yes, the long-term future may overwhelm aggregate welfare considerations, but does it follow that the long-term future is overwhelmingly important? Do I really want my plan for helping the world to rest on the assumption that the benefit from allowing extra people to exist scales linearly with population when large numbers of extra people are allowed to exist?

In my dissertation on this topic, I tried to defend the conclusion that the distant future is overwhelmingly important without committing to a highly specific view about population ethics (such as total utilitarianism). I did this by appealing to more general principles, but I did end up delving pretty deeply into some standard philosophical issues related to population ethics. And I don’t see how to avoid that if you want to independently evaluate whether it’s overwhelmingly important for humanity to survive in the long-term future (rather than, say, just deferring to common sense).

In this post, I outline a relatively atheoretical argument that affecting long-run outcomes for civilization is overwhelmingly important, and attempt to side-step some of the deeper philosophical disagreements. It won’t be an argument that preventing extinction would be overwhelmingly important, but it will be an argument that other changes to humanity’s long-term trajectory overwhelm short-term considerations. And I’m just going to stick to the moral philosophy here. I will not discuss important issues related to how to handle Knightian uncertainty, “robust” probability estimates, or the long-term consequences of accomplishing good in the short run. I think those issues are more important, but I’m just taking on one piece of the puzzle that has to do with moral philosophy, where I thought I could quickly explain something that may help people think through the issues.

In outline form, my argument is as follows:

  1. In very ordinary resource conservation cases that are easy to think about, it is clearly important to ensure that the lives of future generations go well, and it’s natural to think that the importance scales linearly with the number of future people whose lives will be affected by the conservation work.
  2. By analogy, it is important to ensure that, if humanity does survive into the distant future, its trajectory is as good as possible, and the importance of shaping the long-term future scales roughly linearly with the expected number of people in the future.
  3. Premise (2), when combined with the standard set of (admittedly debatable) empirical and decision-theoretic assumptions of the astronomical waste argument, yields the standard conclusion of that argument: shaping the long-term future is overwhelmingly important.
As in other discussions of this issue (such as Nick Bostrom’s papers “Astronomical Waste” and “Existential Risk Prevention as Global Priority,” and my dissertation), this discussion is going to assume that we’re talking about good accomplished from an impartial perspective, and will not attend to deontological, virtue-theoretic, or justice-related considerations.

A review of the astronomical waste argument and an adjustment to it

The standard version of the astronomical waste argument runs as follows:
  1. The expected size of humanity's future influence is astronomically great.
  2. If the expected size of humanity's future influence is astronomically great, then the expected value of the future is astronomically great.
  3. If the expected value of the future is astronomically great, then what matters most is that we maximize humanity’s long-term potential.
  4. Some of our actions are expected to reduce existential risk in not-ridiculously-small ways.
  5. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to reduce existential risk in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.
  6. Therefore, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.
I’ve argued for adjusting the last three steps of this argument in the following way:

4’.   Some of our actions are expected to change our development trajectory in not-ridiculously-small ways.

5’.   If what matters most is that we maximize humanity’s future potential and some of our actions are expected to change our development trajectory in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

6’.   Therefore, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

The basic thought here is that what the astronomical waste argument really shows is that future welfare considerations swamp short-term considerations, so that long-term consequences for the distant future are overwhelmingly important in comparison with purely short-term considerations (apart from long-term consequences that short-term consequences may produce).

Astronomical waste may involve changes in quality of life, rather than size of population

Often, the astronomical waste argument is combined with the idea that the best way to minimize astronomical waste is to minimize the probability of premature human extinction. How important it is to prevent premature human extinction is a subject of philosophical debate, and the debate largely rests on whether it is important to allow large numbers of people to exist in the future. So when someone complains that the astronomical waste argument rests on esoteric assumptions about moral philosophy, they are implicitly objecting to premise (2) or (3). They are saying that even if human influence on the future is astronomically great, maybe changing how well humanity exercises its long-term potential isn’t very important, because maybe it isn’t important to ensure that there are a large number of people living in the future.

However, the concept of existential risk is wide enough to include any drastic curtailment to humanity’s long-term potential, and the concept of a “trajectory change” is wide enough to include any small but important change in humanity’s long-term development. And the value of these existential risks or trajectory changes need not depend on changes in the population. For example,

  • In “The Future of Human Evolution,” Nick Bostrom discusses a scenario in which evolutionary dynamics result in substantial decreases in quality of life for all future generations, and the main problem is not a population deficit.
  • Paul Christiano outlined long-term resource inequality as a possible consequence of developing advanced machine intelligence.
  • I discussed various specific trajectory changes in a comment on an essay mentioned above.

There is limited philosophical debate about the importance of changes in the quality of life of future generations

The main group of people who deny that it is important that future people exist have “person-affecting views.” These people claim that if I must choose between outcome A and outcome B, and person X exists in outcome A but not outcome B, it’s not possible to affect person X by choosing outcome A rather than B. Because of this, they claim that causing people to exist can’t benefit them and isn’t important. I think this view suffers from fatal objections which I have discussed in chapter 4 of my dissertation, and you can check that out if you want to learn more. But, for the sake of argument, let’s agree that creating “extra” people can’t help the people created and isn’t important.

A puzzle for people with person-affecting views goes as follows:

Suppose that agents as a community have chosen to deplete rather than conserve certain resources. The quality of life for the persons who exist now or will come into existence over the next two centuries will be “slightly higher” than it would be under a conservation alternative (Parfit 1987, 362; see also Parfit 2011 (vol. 2), 218). Thereafter, however, for many centuries the quality of life would be much lower. “The great lowering of the quality of life must provide some moral reason not to choose Depletion” (Parfit 1987, 363). Surely agents ought to have chosen conservation in some form or another instead. But note that, at the same time, depletion seems to harm no one. While distant future persons, by hypothesis, will suffer as a result of depletion, it is also true that for each such person a conservation choice (very probably) would have changed the timing and manner of the relevant conception. That change, in turn, would have changed the identities of the people conceived and the identities of the people who eventually exist. Any suffering, then, that they endure under the depletion choice would seem to be unavoidable if those persons are ever to exist at all. Assuming (here and throughout) that that existence is worth having, we seem forced to conclude that depletion does not harm, or make things worse for, and is not otherwise “bad for,” anyone at all (Parfit 1987, 363). At least: depletion does not harm, or make things worse for, and is not “bad for,” anyone who does or will exist under the depletion choice.
The seemingly natural thing to say if you have a person-affecting view is that because conservation doesn’t benefit anyone, it isn’t important. But this is a very strange thing to say, and people having this conversation generally recognize that saying it involves biting a bullet. The general tenor of the conversation is that conservation is obviously important in this example, and people with person-affecting views need to provide an explanation consonant with that intuition.

Whatever the ultimate philosophical justification, I think we should say that choosing conservation in the above example is important, and this has something to do with the fact that choosing conservation has consequences that are relevant to the quality of life of many future people.

Intuitively, giving N times as many future people higher quality of life is N times as important

Suppose that conservation would have consequences relevant to 100 times as many people in case A as it would in case B. How much more important would conservation be in case A? Intuitively, it would be 100 times more important. This generally fits with Holden Karnofsky’s intuition that a 1/N probability of saving N lives is about as important as saving one life, for any N:
I wish to be the sort of person who would happily pay $1 for a robust (reliable, true, correct) 10/N probability of saving N lives, for astronomically huge N - while simultaneously refusing to pay $1 to a random person on the street claiming s/he will save N lives with it.
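Spelling out the arithmetic behind that intuition (using the 1/N framing from the sentence above rather than the 10/N figure in the quote), the expected number of lives saved does not depend on N:

\[
\mathbb{E}[\text{lives saved}] \;=\; \frac{1}{N} \times N \;=\; 1 \quad \text{for every } N,
\]

so a robust 1/N chance of saving N lives carries the same expected value as saving one life for certain, no matter how large N is. The principle below generalizes this thought from saving lives to raising quality of life.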
More generally, we could say:

Principle of Scale: Other things being equal, it is N times better (in itself) to ensure that N people in some position have higher quality of life than other people who would be in their position than it is to do this for one person.

I had to state the principle circuitously to avoid saying that things like conservation programs could “help” future generations, because according to people with person-affecting views, if our "helping" changes the identities of future people, then we aren't "helping" anyone and that's relevant. If I had said it in ordinary language, the principle would have said, “If you can help N people, that’s N times better than helping one person.” The principle could use some tinkering to deal with concerns about equality and so on, but it will serve well enough for our purposes.
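As a rough formalization of the principle (my own gloss, with V standing for impartial value and “better off” abbreviating the circumlocution above about occupying a position with higher quality of life than whoever would otherwise be in it):

\[
V(N \text{ people better off}) - V(\text{baseline}) \;=\; N \times \bigl[\, V(\text{one person better off}) - V(\text{baseline}) \,\bigr].
\]

Note that both sides compare outcomes containing the same number of people, so nothing in this equation presupposes that creating extra people is good or that it benefits them.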

The Principle of Scale may seem obvious, but even it is debatable; you wouldn’t find philosophical consensus about it. For example, some philosophers who claim that additional lives have diminishing marginal value would claim that in situations where many people already exist, it matters much less if a person is helped. I attack these perspectives in chapter 5 of my dissertation, and you can check that out if you want to learn more. But, in any case, the Principle of Scale does seem pretty compelling—especially if you’re the kind of person who doesn’t have time for esoteric debates about population ethics—so let’s run with it.

Now for the most questionable steps: Let’s assume with the astronomical waste argument that the expected number of future people is overwhelming, and that it is possible to improve the quality of life for an overwhelming number of future people through forward-thinking interventions. If we combine this with the Principle of Scale and wave our hands a bit, we get the conclusion that shifting quality of life for an overwhelming number of future people is overwhelmingly more important than any short-term consideration. And that is very close to what the long-run perspective says about helping future generations, though importantly different because this version of the argument might not put weight on preventing extinction. (I say “might not” rather than “would not” because if you disagree with the people with person-affecting views but accept the Principle of Scale outlined above, you might just accept the usual conclusion of the astronomical waste argument.)
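To make the hand-waving slightly more concrete, here is an illustrative comparison; the variables are placeholders I am introducing for the example, not estimates made in this post:

\[
\frac{\text{expected value of a trajectory change}}{\text{expected value of a short-term benefit}} \;\approx\; \frac{p \cdot N_{\text{future}} \cdot \Delta q}{N_{\text{present}} \cdot \Delta q'},
\]

where N_future is the expected number of future people whose quality of life the trajectory change would affect, N_present is the number of people the short-term intervention helps, p is the probability that the intervention actually shifts the trajectory, and Δq and Δq' are the per-person gains. By the Principle of Scale, the numerator scales linearly with N_future; by the astronomical waste premise, N_future exceeds N_present by an enormous factor, so the ratio stays huge even for small p and modest Δq.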

Does the Principle of Scale break down when large numbers are at stake?

I have no argument that it doesn’t, but I note that (i) this wasn’t Holden Karnofsky’s intuition about saving N lives, (ii) it isn’t mine, and (iii) I don’t really see a compelling justification for it. The main reason I can think of for wanting it to break down is not liking the conclusion that affecting long-run outcomes for humanity is overwhelmingly important in comparison with short-term considerations. If you really want to avoid the conclusion that shaping the long-term future is overwhelmingly important, I believe it would be better to accommodate this idea by appealing to other perspectives and a framework for integrating the insights of different perspectives—such as the one that Holden has talked about—rather than altering this perspective. If you are in that position, my hope is that reading this post will cause you to put more weight on the perspectives that place great importance on the future.

Summary

To wrap up, I’ve argued that:
  1. Reducing astronomical waste need not involve preventing human extinction—it can involve other changes in humanity’s long-term trajectory.
  2. While not widely discussed, the Principle of Scale is fairly attractive from an atheoretical standpoint.
  3. The Principle of Scale—when combined with other standard assumptions in the literature on astronomical waste—suggests that some trajectory changes would be overwhelmingly important in comparison with short-term considerations. It could be accepted by people who have person-affecting views or people who don’t want to get too bogged down in esoteric debates about moral philosophy.
The perspective I’ve outlined here is still philosophically controversial, but it is at least somewhat independent of the standard approach to astronomical waste. Ultimately, any take on astronomical waste—including ignoring it—will be committed to philosophical assumptions of some kind, but perhaps the perspective outlined would be accepted more widely, especially by people with temperaments consonant with effective altruism, than perspectives relying on more specific theories or a larger number of principles.

Comments (8)

Nice post. It's also worth noting that this version of the far-future argument appeals even to negative utilitarians, strongly anti-suffering prioritarians, Buddhists, antinatalists, and others who don't think it's important to create new lives for reasons other than holding a person-affecting view.

I also think even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future. The most likely so-called "extinction" event in my mind is human replacement by AIs, but AIs would be their own life forms with their own complex galaxy-colonization efforts, so I think work on AI issues should be considered part of "changing the direction of the future" rather than "making sure there is a future".

I think it's an open question whether "even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future." But I broadly agree with the other points. In a recent talk on astronomical waste stuff, I recommended thinking about AI in the category of "long-term technological/cultural path dependence/lock in," rather than the GCR category (though that wasn't the main point of the talk). Link here: http://www.gooddoneright.com/#!nick-beckstead/cxpp, see slide 13.

Thanks Nick. I like the abstraction to see precisely which features allow you to run these arguments.

Although my best guess agrees with it, I am a little more hesitant about the principle of scale than you are. There are some reasons for scepticism:

1) Very many population axiologies reject it. Indeed it looks as though it will cut somewhere close to where a suitable separability axiom would -- which already gets you to summing utility functions (not necessarily preference-based ones). But perhaps I'm wrong about quite where it cuts; it could be interesting to explore this.

2) As well as doing the work in this argument, the principle of scale is a key part of what can make you vulnerable to Pascal's Mugging. I'd hope we can resolve that without giving up this principle, but I don't think it's entirely settled.

3) You say you see no great justification for the principle to break down when large numbers are at stake. But when not-so-large numbers are at stake, there are very compelling justifications to endorse the principle (and not just for improving quality of life). And these reasons do apply for a larger range of ethical views than would agree with it at large scale. So you might think that you only believed it for these reasons, and have no reason to support it in their absence.

Re 1, yes it is philosophically controversial, but it also does speak to people with a number of different axiologies, as Brian Tomasik points out in another comment. One way to frame it is that it's doing what separability does in my dissertation, but noticing that astronomical waste can run without making assumptions about the value of creating extra people. So you could think of it as running that argument with one less premise.

Re 2, yes it pushes in an unbounded utility function direction, and that's relevant if your preferred resolution of Pascal's Mugging is to have a bounded utility function. But this is also a problem for standard presentations of the astronomical waste argument. As it happens, I think you can run stuff like astronomical waste with bounded utility functions. Matt Wage has some nice stuff about this in his senior thesis, and I think Carl Shulman has a forthcoming post which makes some similar points. I think astronomical waste can be defended from more perspectives than it has been in the past, and it's good to show that. This post is part of that project.

Re 3, I'd frame it this way: "We use this all the time and it's great in ordinary situations. I'm doing the natural extrapolation to strange situations." Yes, it might break down in weird situations, but it's the extrapolation I'd put most weight on.

Yes, I really like this work in terms of pruning the premises. Which is why I'm digging into how firm those premises really are (even if I personally tend to believe them).

It seems like the principle of scale is in fact implied by separability. I'd guess it's rather weaker, but I don't know of any well-defined examples which accept scale but not separability.

I do find your framing of 3 a little suspect. When we have a solid explanation for just why it's great in ordinary situations, and we can see that this explanation doesn't apply in strange situations, it seems like the extrapolation shouldn't get too much weight. Actually most of my weight for believing the principle of scale comes from the fact that it's a consequence of separability.

One more way the principle might break down:

4) You might accept the principle for helping people at a given time, but not as a way of comparing between helping people at different times.

Indeed in this case it's not so clear most people would accept the small-scale version (probably because intuitions are driven by factors such as the fact that improving lives earlier gives more time for indirect effects that improve lives later).

Assuming I'm understanding the principle of scale correctly, I would have thought that the Average View is an example of something where Scale holds but Separability fails, since it seems that whenever Scale is applied, the population is the same size in both cases (via a suppressed other-things-equal clause).

"Reducing astronomical waste need not involve preventing human extinction—it can involve other changes in humanity’s long-term trajectory."

Glad to see this gaining more traction in the x-risk community!
