
There is a growing shift within EA toward longtermism, which is a natural consequence of expanding our moral circle to include future sentient beings. Because the future could host a vast number of sentient beings, longterm causes become particularly relevant.

However, no matter how large that number of future beings might be, it does not necessarily follow that longterm causes should always be prioritized over short-term ones, which focus on the well-being of present humans and other animals. Yet some claim that longterm causes are the most important ones (a view called “strong longtermism”).

In this article, I will present some arguments for why I disagree with strong longtermism, and for strengthening the case for short-term causes.

As a disclaimer, I will add that my limited knowledge of longtermism is mostly based on readings like The Precipice, various posts from the EA Forum, and some texts by Will MacAskill and Hilary Greaves. I don't have the same degree of confidence in all my arguments, and it's possible that some of them, if not all, have already been covered in What We Owe the Future (which I haven't had the chance to read yet) or in the earlier literature. Nevertheless, I will share them in case they add some value to the debate. Additionally, not all of my points are necessarily critiques of strong longtermism, but I have included them to make the article more cohesive.

Having a finite future should not be a reason to despair

Although I haven’t yet read What We Owe the Future, I have listened to various podcasts where Will MacAskill discusses its ideas, for example, this one with Sam Harris. Here they consider what would happen if human beings realised that they could no longer reproduce. In this scenario, the last generation of humans is depicted as one without any hope. After all, what’s the point of making progress if there are no future generations to enjoy the fruits of our labour?

This question seems similar to asking: What’s the point of living a life if there’s nothing after we die? And in my view, the answer to both questions is quite simple: Because we can still enjoy life today.

I don’t think humans do all the incredible things we do just to leave a legacy for future generations. Although that may play a role in our motivations, I believe we do them mostly because it’s in our nature to explore, create, compete and so on. There is more biology in our motivations than there is longterm altruism.

Of course, if we knew that we were the last humans to ever live, our lifestyles would change drastically, but I would expect most of us would still do our best to survive and thrive for as long as we can. Just like any other living organism would.

We don’t need to live for trillions of years or expand across the Universe to believe that our civilization was worthwhile. If humanity’s journey ends on Earth, we shouldn’t think that we failed, or that it was all for nothing.

Richard Dawkins once said: “We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born.” Even though he wasn’t referring to the trillions of humans that will never exist if civilization were to end, I think the reasoning is similar.

-> Conclusion: We do not need a “global after-life” in order to thrive as a civilization. In other words, we don’t need billions of future generations to exist in order to give meaning to our lives today.

With this, I am not advocating for ignoring existential risks! But we may need more modest goals as a civilization, which brings me to my second point.

Maximizing human potential is not a moral obligation

Longtermists often assume that humanity should maximize its longterm potential. This may involve expanding our scientific knowledge and technological capabilities to their physical limits; exploring new ways to experience art and to interact with each other; or discovering new ways of living in general.

As an ex-astrophysicist, when I think about travelling to other planets or expanding outside of the Solar System or even the Milky Way, I can’t help but feel a deep sense of excitement. But I also realise that this excitement may not be universally felt.

Firstly, this goal of maximizing our technological capabilities is probably only held by a small fraction of current humans. And if we could survey the entire world, we would probably collect many different views of what humanity as a whole should aim for.

Secondly, I assume that humans’ desire to explore and expand across the Universe can be explained in biological terms, as mentioned briefly in the previous section. However, future humans may have the capacity to modify their natural instincts, and may not feel the same excitement as we do. In fact, they may actually have a totally different view of what humanity should aim for.

-> Conclusion: The choice of reaching the limits of our scientific and technological potential is somewhat arbitrary. There is no rush and no moral obligation to pursue it.

There may be other goals that, even if also somewhat arbitrary, could be more universally shared, but I will come to these later on. Since I may be completely wrong in this conclusion, for now I’ll assume that maximizing humanity’s longterm potential is a worthwhile goal.

We may not live for very long

The Universe will not remain hospitable forever. It is possible that our existence will end prematurely, by means of a Big Crunch, a Big Rip, or some other unexpected scenario. Currently, the most likely cosmological scenario is the Big Freeze, in which the Universe expands forever and galaxies move further and further apart until each one is left alone in its own observable Universe. Eventually, stars stop forming, and black holes keep growing for a long time until they too evaporate.

But before any of these scenarios reach their judgement day, our planet and Solar System will also experience a lot of complications. To give a striking example, in a few billion years, while our galaxy is merging with Andromeda, the Sun will start growing into a red giant.

Despite all the adversities on Earth and in the Solar System, we could still live for at least a few billion more years. But what seems clear is that humans (and whatever post-human entities might come after us) will live for a finite period of time. This is important when attempting to quantify the reach of human potential: it gives us a limited timeframe in which to realise it. And, as Toby Ord explains in The Precipice, the biggest existential risks by far are those we create ourselves.

Sure, we may survive the Russian roulette that the next 100 years may become. But after that, the dangers of the following century might be even greater, since our technological capacity will be even more powerful. Of course, we should still try to avoid existential risks. But we should also consider the possibility that… we may fail. Maybe there isn’t much we can do to avoid the eventual collapse of our civilization, in which case looking into the very far future is futile.

Technological development will unveil wonderful possibilities, as well as terrible ones. And we are assuming that we are capable of balancing the two. But thermodynamics imposes an asymmetry that plays strongly against our survival: It’s much easier to destroy a house of cards than it is to build one. In just a second, a nuclear bomb can destroy an entire city that took a century to build. Technology will allow us to build bigger and nicer houses of cards, but it will also make it much easier to destroy them.

As we unveil more and more incredible scientific achievements, the extinction risks are likely to keep growing. Therefore, assuming the worst-case scenario depicted in The Precipice, in which humanity does nothing further to avoid the risks, we have a 1/6 chance of going extinct during the next 100 years. Roughly, this implies that the expected value of any intervention realised by the end of this period should be multiplied by a factor of 5/6. The expected value of any intervention during the following 100 years should be scaled down even further, given the larger cumulative probability of extinction. And so on.
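
To make the compounding explicit, here is a back-of-the-envelope sketch (my own illustration, assuming a constant per-century extinction probability p, which, as noted below, need not hold):

$$ P(\text{civilization survives } n \text{ centuries}) = (1-p)^n, \qquad p = \tfrac{1}{6} \;\Rightarrow\; \left(\tfrac{5}{6}\right)^{10} \approx 0.16 $$

So, under this assumption, the expected value of benefits that only materialise ten centuries from now would be discounted to roughly a sixth of their face value.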

Of course, those numbers depend on how much action we take to prevent extinction. The catastrophe rate could, in fact, decrease over time. However, assuming a timeline of constant technological development, my intuition is that the risks are unlikely to decrease quickly enough. This implies that, when we claim that the number of lives in the distant future will be vast, we should also consider that their probability of existing may be quite low.

Each forthcoming generation may have an increasing chance of being the last. Perhaps then, we should reduce our focus on the future millennia, and instead, ensure that the next few hundred years of our existence are as pleasant as possible, and the possible ending as painless as it can be.

-> Conclusion: Even if we survive the existential risks of the next few hundred years, it may not be possible to avoid the collapse of a highly technological civilization. This could greatly reduce the expected value of longterm interventions.

But again, I may be wrong in this conclusion, and there may be interventions with a large, positive expected value despite the vast extinction risks. Either way, for now I’ll assume that we will manage to control all possible natural and anthropogenic risks. Yet even then, achieving our longterm survival may impose some constraints, as I’ll explain in the next section.

Having more humans is not necessarily better

I don’t want to dive into population ethics. Instead, here I just want to suggest the following conundrum: The more humans that exist, the larger the human potential we can reach, but also the larger the anthropogenic existential risk.

We are already living in a situation where having more humans imposes a larger negative impact on the planet, which increases the severity of climate change risks. But that is not the point I’m making, since this issue could technically be solved by reaching net-zero greenhouse gas emissions and adopting other similar restrictions that make our way of life sustainable.

Having more humans implies that there is a larger chance that some of them will do something atrocious, either accidentally, or intentionally. And the number of lives that that atrocious action can affect is likely to increase as technological capacity increases.

To be able to survive all of these risks, there may be an optimal number of humans at any given time, which limits our human potential. So, even if we still aim to maximize our potential, we may find that we have to limit how many humans we produce. The number of future humans would then be much lower than we currently estimate.

On the other hand, one way in which having more humans could help avoid existential risks is if we settled on different planets. If different settlements become sufficiently independent from each other, a catastrophic event in one of them would not affect the others. But I don’t see this as a solution to the problem. We would have the same problem, just multiple times: each of those settlements (assuming they all aim to maximize their technological capacities) would face similar existential risks.

I would also consider the possibility that humanity manages to avoid natural death. In a world where people could live forever, birth rates would potentially be very low for long periods of time. In the extreme case, the cumulative number of humans ever born would stagnate. And if our minds become fully digital, the definition of an individual being may become less clear: we could instead become part of a cloud of common consciousness. In either of these hypothetical scenarios, the number of future individual sentient beings would be reduced.

-> Conclusion: Either we accept that we will never reach our full technological potential, or we may have to limit the number of humans that can exist at any given time. Either way, it’s possible that the number of future lives will be lower than we currently estimate.

Alternatively, if we want to live for as long as possible, we may need to put strong constraints on our technological development, as I’ll consider in the next section.

Having more technology is not necessarily better

As is often said in the context of investing: “Past performance is no guarantee of future results”. The fact that technological progress has brought overall positive outcomes so far (despite some significant negative ones like the risk of a nuclear war), does not imply it always will.

We may create an AGI that can beat us at any task we do. Or tweak our genes to the point that all humans are equally good at every mental or physical skill. At that point, the natural motivators that push us to thrive and do beautiful, complex things could be turned off.

In other words, technology may take away the meaning in our lives.

The EA community (in particular 80,000 Hours) often encourages young students to start a career in fields like AI and biotechnology safety. However, much of the research done explores “how to implement a technology”, rather than “whether we should implement it in the first place”.

One could also argue that these technologies are going to be developed anyway, and therefore we should work on improving them to ensure that they in turn improve our lives. But if our capacity to alter future technological development is so limited, our capacity to avoid existential risks may also be limited.

As an example highlighted by Dylan Matthews from Future Perfect, Open Philanthropy made a significant early investment in OpenAI with the aim of increasing AI safety. But, whatever progress was achieved in AI safety, this investment also helped to develop some of the most advanced AI systems that exist today.

-> Conclusion: Rather than investing resources in exploring how to properly develop and use disruptive technologies, we may first need to investigate whether these technologies should exist in the first place.

Once more, I’ll allow that I may be wrong here, and I will not base any further arguments on this point for the rest of the article.

Human potential starts in current lives

In The Precipice, Toby Ord claims that losing 100% of humanity is far worse than losing 99% (which is based on an argument originally put forward by Derek Parfit in Reasons and Persons). The first time I read it, I intuitively agreed with it: All future human lives (potentially trillions of them) rely on the survival of that remaining 1%.

However, now I have come to believe that losing 100% of humanity is only about 1% worse than losing 99%. In other words, each random 1% portion of humanity that is lost is roughly equally tragic, regardless of whether it was the very last 1% or not.

To explain my argument, I will use a simplifying assumption: let’s measure human potential in human lives (and let’s assume that we will keep having children in the good old-fashioned way, disregarding any possible 3-D human printers or digital transhumans).

Now, imagine that we could travel to the end of our finite existence, measure the number of humans that will ever live from now until that point, and trace back their family trees to pinpoint their original ancestors. If we did this, we would see that any random sample of 1% of current humans is responsible for producing roughly 1% of the final number of humans to exist between now and the end of our civilisation.

Therefore, losing 99% of humanity today means losing 99% of the total human potential, which would be roughly 99 times worse than losing the remaining 1%. There are other considerations that could make the last 1% somewhat special. For example, the survivors would be less affected by planetary boundaries, and possibly experience (after a few generations of recovery) a higher birth rate than any other 1% would, had the catastrophe not happened. But I don’t think these considerations greatly affect the point I’m making.
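
In case it helps, the proportionality claim can be written as a one-line sketch (my own formalisation of the assumption above, namely that each surviving fraction of humanity produces a matching fraction of all humans who will ever live):

$$ \text{potential lost} \approx f \cdot N_{\text{total}}, \qquad \frac{\text{loss from } f = 0.99}{\text{loss from } f = 0.01} \approx \frac{0.99\, N_{\text{total}}}{0.01\, N_{\text{total}}} = 99 $$

where f is the fraction of humanity killed today and N_total is the number of humans who would ever have lived had no catastrophe occurred. Under this assumption, going from a 99% loss to a 100% loss only adds about 1% of N_total.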

If my argument is correct, it leads to some relevant consequences.

Firstly, when comparing short term and long term interventions, we often think that the latter may save orders of magnitude more lives than the former. But this is not necessarily true, considering that, for every family that we save today (for example, from dying of preventable diseases), we are not only saving that family but also their future descendants.

Secondly, given that our existence is finite, the earlier we manage to save lives, the more potential descendants will be saved in the final count of lives that ever existed.

Finally, this may strengthen the case for fighting climate change. In conversations about existential risk, climate change is often considered less threatening than other risks, because while it may kill a large fraction of humans (say, 90%), it is unlikely to destroy the entire human race. But, following my reasoning above, if a dystopian AI scenario could kill 100% of humans, that would make it just 10% worse than climate change.

Here I have assumed that human potential can be measured in the number of lives. But similar reasoning can be applied to artistic, scientific, or any kind of outcome that could be considered worth fighting for.

-> Conclusion: When comparing short term and long term causes, we have to keep in mind that, for every human saved today, we are also saving their potential descendants. So the earlier we save lives, the more human potential we can ever reach over the finite existence of humanity.

Again, my reasoning could be wrong. So I will not assume this conclusion for the rest of the article.

Reducing suffering may be a better goal than increasing happiness

Previously I argued that maximizing human potential is an arbitrary goal, and that there could be other possible goals to consider. A reasonable alternative would be to minimize suffering. And, to begin with, our goal for the mid-term future could be to end extreme suffering.

I don’t have very strong moral reasons to justify why minimizing suffering is a better choice than maximizing happiness. However, I do believe that humanity has a better agreement on what suffering is than on what happiness is.

It’s complicated to come up with a recipe for global human happiness. People have very different goals and needs in life, and these needs depend heavily on their upbringing and on their historical and geopolitical context. However, almost everybody would agree that being burned, electrocuted or tortured in any way is a truly awful experience.

Hence, there is a natural asymmetry by which reducing extreme physical suffering becomes a more universally shared goal than maximizing happiness.

The case for emotional suffering might be a bit more complicated, but I would still bet that, if given the choice, people would rather avoid chronic episodes of extreme depression than enjoy the same number of episodes of extreme bliss (at least among people who have experienced depression).

For non-human animals, a similar argument applies: We may not know how to make pigs extremely happy, but we can clearly recognise what causes them to suffer immensely.

-> Conclusion: Maximizing human happiness is definitely a worthwhile goal. But minimizing suffering may be a more widely-shared objective.

This conclusion is not necessarily misaligned with strong longtermism. In fact, it may strengthen the case for fighting longterm suffering risks. However, in the next section, I will give an argument for why short term suffering is still of great relevance.

Our capacity to suffer may be significantly smaller in the future

I said before that technology is likely to cause our extinction, despite all our efforts. But if, on the other hand, we manage to survive, I believe that technological development will be enough to “cure suffering”. In other words, we will overcome our natural capacity to suffer physical or emotional pain.

Reducing physical and emotional suffering has been one of the main outcomes of modern medicine, and I would expect future developments to continue in the same direction. In fact, we may also “cure death” at some point. Clearly, technology has the potential to change lives in such drastic ways that comparing current lives with those in the future may be pointless.

As an example, saving a person from a fire today will prevent an extreme amount of suffering. In the future, however, humans may not feel physical pain and may have access to surgery that can quickly regenerate burned skin. If this is true, then we should prioritise saving people from fires today.

Let’s assume that we reach this level of medical development within the next ~100 years. Then, if we want to compare current human well-being with future human well-being, we should ask ourselves:

  • How many “humans capable of suffering” exist today? About 8 billion.
  • How many “humans capable of suffering” will ever exist? Roughly, those alive today plus however many humans are born within the next ~100 years. This could total around 15 or 20 billion humans, which is up to about 3 times the number of humans alive today (and not orders of magnitude larger, as is often quoted in the longtermism debate); see the rough sketch below.
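
As a rough sanity check on that second bullet (my own back-of-the-envelope numbers, assuming global births of roughly 100 to 130 million per year, gradually declining over the century):

$$ N_{\text{capable of suffering}} \;\approx\; \underbrace{8 \text{ billion}}_{\text{alive today}} \;+\; \underbrace{(0.10 \text{ to } 0.13 \text{ billion/yr}) \times 100 \text{ yr}}_{\text{born in the next }\sim 100 \text{ yr}} \;\approx\; 18 \text{ to } 21 \text{ billion} $$

That is in the same ballpark as the 15 to 20 billion quoted above: a factor of 2 or 3 above today's population, not orders of magnitude more.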

Even if we never fully cure emotional and physical pain, it’s at least a fair assumption that we will significantly reduce them. This would decrease the expected value of long-term future interventions compared to short-term ones, since there is more suffering to alleviate in the short term.

In fact, if this assumption is roughly in the right ballpark, the next ~100 years may become the peak period of potential human suffering of all time, since there will be more humans capable of suffering than ever before. Any future generations after that, despite having larger populations, would experience significantly lower levels of suffering.

This conclusion can be extended to factory-farmed animals. The total consumption of animal products is currently increasing, but this trend is likely to change within the next ~100 years. We are already starting to explore alternative ways to produce meat products. These alternatives may soon become more energy-efficient, less polluting, and eventually significantly cheaper than animal products.

Therefore, the suffering of farmed animals over the coming ~100 years is possibly the highest it will ever be. Note that while the next ~100 years will only be the peak of potential human suffering, they will be the peak of actual farmed-animal suffering. And after that point, our best bet to reduce any type of suffering would possibly be among wild animals.

An additional remark here is that if we build AGI within the next few hundred years, we might quickly have a myriad of new entities that are capable of suffering. However, in my view (albeit with little evidence to back it up), artificial conscious beings will not experience the same level of suffering as biological ones. Suffering is a consequence of natural selection, and digital consciousness will not require such mechanisms to achieve its goals. The experience of a reinforcement learning agent getting a negative reward can’t be compared to the extreme suffering of an animal being attacked by a predator.

-> Conclusion: If we manage to avoid existential risks in the coming ~100 years, our technological development will be enough to cure, or significantly reduce, emotional and physical human suffering. Therefore, there may be more suffering to avoid in the short term than in the long term future. A similar conclusion may apply to farmed animals (although not to wild animals).

Final remarks

As more people in EA focus their work on longterm causes, the number of strong longtermists is likely to grow. But, given all my arguments above, I believe this could be a deviation from doing the most good we can.

Moreover, even if strong longtermism is currently adopted by only a small fraction of the EA community, its views seem to attract a significant amount of negative criticism from the public. Understandably, it may make EA appear contradictory or inconsistent (e.g. is the value of our donations larger in developing countries, or should we donate to AI research in affluent countries?). Also, given that the EA community is highly skewed towards technical careers (especially computer science), it may lead one to think that EA’s views are also conveniently skewed to support investing in AI or other technologies. While these criticisms may apply to only a small part of the movement, I strongly believe this is an inaccurate and unfortunate picture of EA.

Despite my criticisms, I do believe that some resources should indeed be devoted to research institutions (which happen to be mostly in affluent countries) investigating how to mitigate existential risks. If we can, we should absolutely try to avoid dystopian scenarios that imply longterm suffering. But, given the list of arguments above, I think that short-term EA causes have a similar value. And we may want to focus our longterm work on reducing the suffering of the coming ~100 years, rather than that of the very distant future.

Acknowledgements

Thanks to Mel Brennan for her comments and edits, and to Andrew Alonso y Fernández, Edouard Mathieu, Max Roser, Rafael Ruiz de Lira, Fiona Spooner, and Lars Yencken for their comments, suggestions, and especially for their criticism of my criticism.

Comments

I see some problems with the claims made in the section named "Human potential starts in current lives".

"Therefore, losing 99% of humanity today means losing 99% of the total human potential, which would be roughly 99 times worse than losing the remaining 1%."

Similarly, it seems like you're making the claim that a 100% loss of all lives is only slightly worse than 99% of lives because each 1% of people today contributes 1% to the final population of humanity.

But I think this claim rests on the assumption that 99% of humans dying would reduce the final population by 99%.

You mentioned that if 99% of humans died, the remaining 1% could repopulate the world by having a higher birth rate but then went on to say that this possibility didn't affect your point much.

But I think it would have a huge effect. If humanity lasts 1 billion years and 99% of humans died at some point, even if it took 1,000 years to repopulate the Earth, that would only be a millionth of all of history, and the population wouldn't change much in the long term. Although the death of 99% of the population might affect the genes of future people, I think the effect on the population size would be negligible. Therefore, I think the assumption is false.

If the assumption were correct, 100% of humanity dying would only be slightly worse than 99% dying. But since the 1% would probably rapidly repopulate the world, 99% dying would probably have a negligible impact on the total long-term population. Meanwhile, if 100% died the entire future population would be lost. Therefore 100% is far worse than 99%. 

Hi Stephen. I think I should have made this part clearer (I guess a chart would help). Consider the following scenarios:

A)  In Universe A nothing catastrophic happens today. You can pick any 1% of the world and trace the cumulative number of humans they produce between today and the end of time.

B)  In Universe B, a catastrophe happens today, leaving only 1% alive. You can trace the cumulative number of humans they produce between today and the end of time.

My intuition is that the cumulative number of humans that will ever exist at the end of time is similar in A and B. This applies to any random 1% of humans from Universe A. With this in mind, losing 99% of humanity today is roughly 99 times worse than losing any 1% (including the last).

I agree that the total number of humans who will ever live at the end of time is similar in A and B. Therefore I think there is almost no difference between A and B in the long term.

The number of humans who will ever live is similar in scenarios A and B. But keep in mind that in scenario A we have randomly picked only 1% of all existing humans. The catastrophe that takes place in scenario B removes 99% of all humans alive, which in turn removes around 99% of all humans that could have lived at the end of time. That is an enormous difference in the long term. And that is the main point of that section: Saving lives now has an enormous impact in the long term.

"The catastrophe that takes place in scenario B removes 99% of all humans alive, which in turn removes around 99% of all humans that could have lived at the end of time."

That would only happen if the population never recovered. But since I would expect the world to rapidly repopulate, I therefore would expect the long-term difference to be insignificant.

The survivors in B would eventually catch up with the living population of the world today, yes. However, the survivors in B would never catch up with the cumulative population of the universe where there was no catastrophe. While the survivors in B were recovering, the counterfactual universe would have been creating more humans (as well as new pieces of art, scientific discoveries, etc.). It is impossible for B to catch up, regardless of how long you wait. All the human potential of the 99% who died in the catastrophe is lost forever.

It's true that universe B might never fully catch up because 99% of a single generation was lost. But over 1 billion years, we would expect about 40 million generations to live. Even if a few generations were lost, as long as there is a recovery the total loss won't be high.

Whether and to what extent the survivors could catch up with the counterfactual universe strongly depends on the boundary conditions. Universe A could have expanded to other planets by the time B fully recovers. We are comparing the potential of a full, and fully developed, humanity with that of a small post-apocalyptic fraction of humanity. I agree with you that planetary boundaries (and other physical constraints) could reduce the potential of a random 1% in A with respect to B. But I suppose it can also go the other way: the survivors in B could produce fewer humans than any 1% of A, and keep this trend for many (even all) future generations. My intuition here is very limited.

There is a growing shift within EA toward longtermism, which is a natural consequence of expanding our moral circle to include future sentient beings.

While not claiming to be an authority on longtermism, or anything close, my first impression so far is that this is yet another topic which appeals to academics because of its complexity, but serves mostly to distract us from the far simpler fundamental challenges that we should be focused on. For example...

If we don't take control of the knowledge explosion, there's not going to be a long term, and thus no need for longtermism.

If I understand correctly, longtermism seems to assume that we can accept and largely ignore the status quo of an ever accelerating knowledge explosion, defeat the multiplying threats that emerge from that explosion one by one by one without limit, and thus some day arrive at the long term which we are supposed to be concerned about.

If that's at least a somewhat accurate summary of longtermism, not buying it.

I think the argument for longtermism is pretty straightforward: if we have a long future then most people who will ever exist will live in the future. If we value all people across all times equally, then we should care far more about the future than the present.

Also, what do you mean by 'knowledge explosion'?

Hi Phil. I'm also not an authority on the topic, but I think your summary of longtermism is not accurate. You seem to be worried about the effects of the knowledge explosion, which means that you also care about the future. Maybe you disagree with strong longtermism (as I do, for the reasons above) or think that we should worry about the not-so-distant future. I would say that is still to some extent (a fraction of) longtermism. So even if you don't buy the whole package, you may still agree with a part of longtermism.
