I enjoyed William MacAskill's book What We Owe the Future, and I consider myself persuaded that we should regard future people as an important moral consideration. MacAskill did a good job of emphasizing the importance of the future without sounding fanatical, which I think was wise from a public relations point of view. In his 2019 post "'Longtermism'", MacAskill laid out his definitions of longtermism, strong longtermism, and very strong longtermism:

(i) longtermism, which designates an ethical view that is particularly concerned with ensuring long-run outcomes go well;

(ii) strong longtermism, which, like my original proposed definition, is the view that long-run outcomes are the thing we should be most concerned about; 

(iii) very strong longtermism, the view on which long-run outcomes are of overwhelming importance.

Which stance you adopt on longtermism depends to a large extent on how much you discount future people. If you fully accept the premise that future people matter as much as current people, then I can't help but believe you ought to advocate for an even stronger version of longtermism, on which long-term considerations are many, many orders of magnitude more important than any short-term consideration. Long-term considerations would be so important that ethical prescriptions would converge on the principle "All actions should maximize long-term well-being."

This is similar to the conclusion that Bostrom (2003) reaches in his article "Astronomical Waste," in which he says "[t]he utilitarian imperative ‘Maximize expected aggregate utility!’ can be simplified to the maxim ‘Minimize existential risk!’." The argument rests largely on the premise that it may be possible to colonize other galaxies and create trillions and trillions of humans, perhaps even simulated ones, who live happy lives. Bostrom provides some rough estimates of the potential loss of life and utility:

As a rough approximation, let us say the Virgo Supercluster contains 10^13 stars. One estimate of the computing power extractable from a star and with an associated planet-sized computational structure, using advanced molecular nanotechnology, is 10^42 operations per second. A typical estimate of the human brain’s processing power is roughly 10^17 operations per second or less. Not much more seems to be needed to simulate the relevant parts of the environment in sufficient detail to enable the simulated minds to have experiences indistinguishable from typical current human experiences.

Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.

While this estimate is conservative in that it assumes only computational mechanisms whose implementation has been at least outlined in the literature, it is useful to have an even more conservative estimate that does not assume a non-biological instantiation of the potential persons. Suppose that about 10^10 biological humans could be sustained around an average star. Then the Virgo Supercluster could contain 10^23 biological humans. This corresponds to a loss of potential of over 10^13 potential human lives per second of delayed colonization.
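
For anyone who wants to check the arithmetic, Bostrom's figures fall out of a few multiplications and divisions. Here is a minimal sketch in Python that reproduces them, using only the rough order-of-magnitude inputs from the quote above; the variable names are mine.

```python
# Back-of-the-envelope reproduction of the order-of-magnitude arithmetic in
# Bostrom's "Astronomical Waste". All inputs are the rough figures quoted above.

SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600            # ~3.2e9 seconds

# Simulated-minds scenario
stars        = 1e13   # stars in the Virgo Supercluster
ops_per_star = 1e42   # ops/s from a planet-sized computer powered by one star
ops_per_mind = 1e17   # ops/s to run one human-level mind

simulated_minds = stars * ops_per_star / ops_per_mind      # ~1e38 concurrent minds
per_second_sim  = simulated_minds / SECONDS_PER_CENTURY    # ~3e28, i.e. roughly 10^29

# Biological-humans scenario
humans_per_star  = 1e10
biological_total = humans_per_star * stars                 # ~1e23 humans
per_second_bio   = biological_total / SECONDS_PER_CENTURY  # ~3e13, i.e. over 10^13

print(f"Simulated minds supported at once:     {simulated_minds:.0e}")
print(f"Potential lives lost per second (sim): {per_second_sim:.0e}")
print(f"Biological humans supportable:         {biological_total:.0e}")
print(f"Potential lives lost per second (bio): {per_second_bio:.0e}")
```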

While this type of radical longtermism has the same ethical basis as utilitarianism, accepting it could lead to moral prescriptions wildly different from current utilitarian evaluations. At best, a great deal of uncertainty is introduced into certain scenarios. In Philippa Foot's original trolley problem, it seems obvious from a consequentialist perspective that one ought to pull the lever. In Peter Singer's drowning child scenario, it seems clear one ought to save the child. Reanalyzing these scenarios from a radical longtermist perspective, however, it is not clear to me which choice minimizes existential risk.

I do not know whether more people increase or decrease existential risk; if I had to guess, I would say they increase it. If more people mean more existential risk, it may be morally impermissible to switch the trolley track, and it may even be morally bad to save the drowning child. Both seem like extremely undesirable conclusions. If the child grows up to become a longtermist, they would likely reduce existential risk, but people who live normal lives indifferent to existential risk may well contribute to it. Considered apart from the above argument, it does not seem so radical to believe that a random person is a net increaser of existential risk; once the implications are understood, though, motivated reasoning may kick in. Saving Singer's drowning child is, after all, something like the philosophical foundation of effective altruism. I would save the child personally.

We could say that, because we are uncertain, we should do things with immediate good results. But if we take Bostrom's estimate of "10^29 potential human lives per second" seriously, then any immediate good is overwhelmed if we have even the smallest hunch that the child could contribute to existential risk. Perhaps a society that permits unnecessary deaths is itself a bad form of value lock-in, but if fewer people are good, that might count as a positive side effect, strange as that sounds.
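
To make the "smallest hunch" point concrete, here is a toy expected-value comparison in the same spirit. The probability I plug in for the child marginally delaying colonization is an invented illustrative number, not an estimate of anything.

```python
# Toy illustration of how Bostrom-scale numbers swamp immediate goods.
# The probability and delay below are purely illustrative assumptions.

LIVES_PER_SECOND_OF_DELAY = 1e29  # Bostrom's rough figure quoted above

lives_saved_now     = 1      # the drowning child
p_child_adds_delay  = 1e-15  # made-up "smallest hunch" probability
delay_added_seconds = 1      # suppose one extra second of delayed colonization

expected_future_lives_lost = (p_child_adds_delay
                              * delay_added_seconds
                              * LIVES_PER_SECOND_OF_DELAY)

print(expected_future_lives_lost)  # 1e14 expected lives, dwarfing the 1 life saved
```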

Whereas I previously believed that the Total View in population ethics was correct, that increasing fertility was morally good, and that the Repugnant Conclusion was acceptable, I am now unsure. Perhaps most bizarrely, it seems that minor catastrophic events could be overall positives if they delay absolute catastrophes. A major pandemic or nuclear winter might delay misaligned AI, for example. Extreme climate change might divert research from risky physics experiments or dissuade a hostile alien invasion. I am not saying that all of these are true, but many major moral dilemmas no longer appear clearly good or clearly bad to me. Are these arguments illogical? Does anyone else have these sorts of thoughts about longtermism?

Comments

Interesting. I'm inclined to say a world where you save lives would decrease existential risks -- more time to focus on AI alignment and combat global warming when there is less death. But generally, I think we should be really, really worried when speculative moral philosophy tells us to ignore drowning children.
