Cross-posted from LessWrong

In this Rational Animations video, we discuss s-risks (risks from astronomical suffering), which involve an astronomical number of beings suffering terribly. Researchers on this topic argue that s-risks have a significant chance of occurring and that there are ways to lower that chance.

The script for this video was a winning submission to the Rational Animations Script Writing contest (https://forum.effectivealtruism.org/posts/p8aMnG67pzYWxFj5r/rational-animations-script-writing-contest). The first author of this post, Allen Liu, was the primary script writer, with the second author (Writer) and other members of the Rational Animations writing team giving significant feedback. Outside reviewers, including authors of several of the cited sources, provided input as well. Production credits are at the end of the video. You can find the script of the video below.


Is there anything worse than humanity being driven extinct? When considering the long term future, we often come across the concept of "existential risks" or "x-risks": dangers that could effectively end humanity's future with all its potential. But these are not the worst possible dangers that we could face. Risks of astronomical suffering, or "s-risks", hold even worse outcomes than extinction, such as the creation of an incredibly large number of beings suffering terribly. Some researchers argue that taking action today to avoid these most extreme dangers may turn out to be crucial for the future of the universe.

Before we dive into s-risks, let's make sure we understand risks in general. As Swedish philosopher Nick Bostrom explains in his 2013 paper "Existential Risk Prevention as Global Priority",[1] one way of categorizing risks is to classify them according to their "scope" and their "severity". A risk's "scope" refers to how large a population the risk affects, while its "severity" refers to how much that population is affected. To use Bostrom's examples, a car crash may be fatal to the victim themselves and devastating to their friends and family, but not even noticed by most of the world. So the scope of the car crash is small, though its severity is high for those few people. Conversely, some tragedies could have a wide scope but be comparatively less severe. If a famous painting were destroyed in a fire, it could negatively affect millions or billions of people in the present and future who would have wanted to see that painting in person, but the impact on those people's lives would be much smaller.

In his paper, Bostrom analyzes risks which have both a wide scope and an extreme severity, including so-called "existential risks" or "x-risks". Human extinction would be such a risk: affecting the lives of everyone who would have otherwise existed from that point on and forever preventing all the joy, value and fulfillment they ever could have produced or experienced. Some other such risks might include humanity's scientific and moral progress permanently stalling or reversing, or us squandering some resource that could have helped us immensely in the future.

S-risk researchers take Bostrom's categories a step further. If x-risks are catastrophic because they affect everyone who would otherwise exist and prevent all their value from being realized, then an even more harmful type of risk would be one that affects more beings than would otherwise exist and that makes their lives worse than non-existence: in other words, a risk with an even broader scope and even higher severity than a typical existential risk, or a fate worse than extinction.

David Althaus and Lukas Gloor, in their article from 2016 titled "Reducing Risks of Astronomical Suffering: A Neglected Priority",[2] claim that such a terrible future is a possibility worth paying attention to, and that we might be able to prevent it or make it less likely. Specifically, they define s-risks as "risks of events that bring about suffering in cosmically significant amounts", relative to how much preventable suffering we expect in the future on average.

Because s-risks involve larger scopes and higher severities than anything we've experienced before, any examples of s-risks we could come up with are necessarily speculative and can sound like science fiction. Remember, though, that some risks that we take very seriously today, such as the risk of nuclear war, were first imagined in science fiction stories. For instance, in "The World Set Free", written by H.G. Wells in 1914,[3] bombs powered by artificial radioactive elements destroy hundreds of cities in a globe-spanning war.

So, we shouldn't ignore s-risks purely because they seem speculative. We should also keep in mind that s-risks are a very broad category: any specific s-risk story might sound unlikely to materialize, but together they can still form a worrying picture. With that in mind, we can do some informed speculation ourselves.

Some s-risk scenarios are like current examples of terrible suffering, but on a much larger scale: imagine galaxies ruled by tyrannical dictators in constant, brutal war, devastating their populations; or an industry as cruel as today's worst factory farms being replicated across innumerable planets in billions of galaxies. Other possible s-risk scenarios involve suffering of a type we don't see today, like a sentient computer program somehow accidentally being placed in a state of terrible suffering and being copied onto billions of computers with no way to communicate to anyone to ease its pain.

These specific scenarios are inspired by "S-risks: An introduction" by Tobias Baumann,[4] a researcher at the Center for Reducing Suffering. The scenarios have some common elements that illustrate Althaus and Gloor's arguments as to why s-risks are a possibility that we should address. In particular, they involve many more beings than currently exist, they are brought on by technological advancement, the suffering arises as part of some larger process, and they are all preventable with enough foresight.

Here are some arguments for why s-risks might have a significant chance of occurring, and why we can probably lower that chance.

First, if humanity or our descendants expand into the cosmos, then the number of future beings capable of suffering could be vast: maybe even trillions of times more than today. This could come about simply by increasing the number of inhabited locations in the universe, or by creating many artificial or simulated beings advanced enough to be capable of suffering.

Second, as technology continues to advance, so does the capability to cause tremendous and avoidable suffering to such beings. We see this already happening with weapons of mass destruction or factory farming. Technology that has increased the power of humanity in the past, from fire to farming to flight, has almost always allowed for both great good and great ill. This trend will likely continue.

Third, if such suffering is not deliberately avoided, it could easily come about. While this could be from sadists promoting suffering for its own sake, it doesn't have to be. It could be by accident or neglect, for example if there are beings that we don't realize are capable of suffering. It could come about in the process of achieving some other goal: today's factory farms aren't explicitly for the purpose of causing animals to suffer, but they do create enormous suffering as part of the process of producing meat to feed humans. Suffering could also come about as part of a conflict, like a war on a much grander scale than anything in humanity's past, or in the course of beings trying to force others to do something against their will.

Finally, actions that we take today can reduce the probability of future suffering occurring. One possibility is expanding our moral circle.[5] By making sure to take as many beings as possible into account when making decisions about the future, we can avoid bringing about astronomical suffering simply because a class of morally relevant beings was ignored. We particularly want to prevent the idea of caring for other beings from becoming neglected or controversial. Another example, proposed by David Althaus, is reducing the influence of people Althaus describes as "malevolent", who share traits with history's worst dictators. Additionally, some kinds of work that prevent suffering today will also help prevent suffering in the future, like curing or eliminating painful diseases and strengthening organizations and norms promoting peace.

Maybe you're not yet convinced that s-risks are likely enough that we should take them seriously. Tobias Baumann, in his book "Avoiding the Worst: How to Prevent a Moral Catastrophe", argues that the total odds of an s-risk scenario are quite significant. But even small risks are worth our attention if there's something we can do about them. The risk of dying in a car accident within the next year is around 1 in 10,000,[6] but we still wear seatbelts because doing so is an easy and effective way to reduce that risk.

Or perhaps you think that even a life with a very large amount of suffering is still preferable to non-existence. This is a reasonable objection, but remember that the alternative to astronomical suffering doesn't have to be extinction. Consider a universe with many galaxies of people living happy and fulfilled lives, but one galaxy filled with people in extreme suffering. All else being equal, it would still be a tremendous good to free the people of that galaxy and allow them to live as they see fit.[7] Just how much attention we should pay to preventing this galaxy of suffering depends on your personal morals and ethics, but almost everyone would agree that it is at least something worth doing if we can. And suffering doesn't need to be our only moral concern for s-risks to be important. It's very reasonable to care about s-risks while also believing that the best possible futures are full of potential and worth fighting for.

Althaus and Gloor further argue that s-risks could be even more important to focus on than existential risks. They picture possible human futures as a lottery. Avoiding risks of extinction is like buying tickets in that lottery and getting a better chance of humanity living to see one of those futures, but if the bad outweighs the good in the average future represented by each of those tickets, then buying more of them isn't worth it. Avoiding s-risks increases the average value of the tickets we already have, and makes buying more of them all the more valuable. Additionally, if an s-risk scenario comes about, it may be extremely difficult to escape, so we need to act ahead of time. Being aware and prepared today could be critical for stopping astronomical suffering before it can begin.
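
One hedged way to make the lottery picture concrete (this expected-value framing is our own illustration, not an equation from Althaus and Gloor) is to treat an extinct future as contributing roughly zero value, so that

$$\mathbb{E}[\text{value of the future}] \approx P(\text{survival}) \times \mathbb{E}[\text{value} \mid \text{survival}]$$

Reducing extinction risk raises the first factor, which only increases the expected value if the second factor is positive; reducing s-risks raises the second factor directly, and in doing so makes raising the first factor more clearly worthwhile.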

There are many causes in the world today demanding our limited resources, and s-risks can seem too far off and abstract to be worth caring about. But we've presented arguments that s-risks have a significant chance of occurring, that we can lower that chance, and that doing so will help make the future better for everyone. If we can alter humanity's distant future at all, it's well worth putting in time and effort now to prevent these worst-case scenarios, while we still have the chance.

  1. ^

    Bostrom, Nick (2013). “Existential Risk Prevention as Global Priority” (https://existential-risk.com/concept.pdf)

  2. ^

    Althaus, David and Lukas Gloor (2016). “Reducing Risks of Astronomical Suffering: A Neglected Priority” (https://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/)

  3. ^

    Wells, Herbert George (1914). The World Set Free. E. P. Dutton & Company.

  4. ^

    Baumann, Tobias (2017). “S-risks: An introduction” (https://centerforreducingsuffering.org/research/intro/)

  5. ^
  6. ^

    National Highway Traffic Safety Administration, US Department of Transportation. “Overview of Motor Vehicle Crashes in 2020”

  7. ^

    Example from: Sotala, Kaj and Lukas Gloor (2017). “Superintelligence as a Cause or Cure for Risks of Astronomical Suffering” (https://longtermrisk.org/files/Sotala-Gloor-Superintelligent-AI-and-Suffering-Risks.pdf)

Comments

Great video! Question - can we all just agree that factory farming is an S-Risk actualised? Then we can put to the side its supposedly "sci-fi" vibe while acknowledging how horrific factory farming is.

Part of the definition of astronomical suffering is that it's greater than any instances of suffering to date. But factory farming was unprecedented compared to anything before it, so I think the definition of s-risk could be applied retroactively to it.

I wouldn't consider factory farming to be an instance of astronomical suffering, as bad as the practice is, since I don't think the suffering from one century of factory farming exceeds hundreds of millions of years of wild animal suffering. However, perhaps it could be an s-risk if factory farming somehow continues for a billion years. For reference, here is the definition of s-risk from a 2017 talk by CLR (the Center on Long-Term Risk):

“S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.”

I definitely very strongly disagree that factory farming should be thought of as an S-risk. It's not good, but the moral badness of that seems like absolutely nothing compared to digital consciousnesses being essentially trapped in hell.

So, uh, does it follow that human extinction [or another x-risk which is not an s-risk] realised could be desired in order to avoid an s-risk? (e.g. VHEMT)

Reposting a comment I made last week

Some people make the argument that the difference in suffering between a worst-case scenario (s-risk) and a business-as-usual scenario is likely much larger than the difference in suffering between a business-as-usual scenario and a future without humans. This suggests focusing on ways to reduce s-risks rather than increasing extinction risk.

A helpful comment from a while back: https://forum.effectivealtruism.org/posts/rRpDeniy9FBmAwMqr/arguments-for-why-preventing-human-extinction-is-wrong?commentId=fPcdCpAgsmTobjJRB

Personally, I suspect there's a lot of overlap between risk factors for extinction risk and risk factors for s-risks. In a world where extinction is a serious possibility, it's likely that there would be a lot of things that are very wrong, and these things could lead to even worse outcomes like s-risks or hyperexistential risks.

I think theoretically you could compare (1) worlds with s-risk and (2) worlds without humans, and find that (2) is preferable to (1) - in a similar way to how no longer existing is better than going to hell. One problem is many actions that make (2) more likely seem to make (1) more likely. Another issue is that efforts spent on increasing the risk of (2) could instead be much better spent on reducing the risk of (1).

I think it definitely does, if we're in a situation where an S-risk is on the horizon with some sufficient (<- subjective) probability. Also consider https://carado.moe/when-in-doubt-kill-everyone.html (and the author's subsequent updates)

... of course, the whole question is subjective as in moral.

“You didn’t trust yourself,” Hirou whispered. “That’s why you had to touch the Sword of Good.”

Executive summary: S-risks, involving astronomical suffering, may be more important to focus on than existential risks; researchers argue s-risks have a significant chance of occurring but can be made less likely through actions today.

Key points:

  1. S-risks have a wider scope and higher severity than existential risks, affecting more beings than would otherwise exist and making their lives worse than non-existence.
  2. S-risks are a possibility due to potential cosmic expansion, advancing technology, suffering occurring through neglect or as a side effect, and the fact that they are preventable with foresight.
  3. Actions to reduce s-risks include expanding our moral circle, reducing the influence of malevolent actors, and continuing work that prevents suffering today.
  4. Even if s-risks have low probability, they are worth addressing; the alternative to s-risks does not have to be extinction.
  5. Avoiding s-risks increases the value of ensuring humanity's survival, and s-risk scenarios may be extremely difficult to escape once they arise.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

I'm surprised the video doesn't mention cooperative AI and avoiding conflict among transformative AI systems, as this is (apparently) a priority of the Center on Long-Term Risk, one of the main s-risk organizations. See Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda for more details.
