As per title. I often talk to people who hold views that I think should straightforwardly imply a larger focus on s-risk than they actually give it. In particular, people often seem to endorse something like a rough symmetry between the goodness of good stuff and the badness of bad stuff, sometimes referring to this short post that offers some arguments in that direction. I'm confused by this and wanted to quickly jot down my thoughts. I won't try to make them rigorous, and I make various guesses about the additional assumptions people usually hold; I might be wrong about those.

 

Views that IMO imply putting more weight on s-risk reduction:

  1. Complexity of values: Some people think that the most valuable things possible are probably fairly complex (e.g. a mix of meaning, friendship, happiness, love, child-rearing, beauty, etc.) rather than really simple (e.g. rats on heroin, which is roughly what people imagine when they hear "hedonic shockwave"). People also often have different views on what's good. I think people who believe in complexity of values often nonetheless think suffering is fairly simple: extreme pain seems simple and also just extremely bad. (Some people think that the worst suffering is also complex; they are excluded from this argument.) On a first pass, it seems very plausible that complex value is much less energy-efficient than suffering. (In fact, people commonly define complexity by computational complexity, which translates directly to energy-efficiency.) To the extent that this is true, it should increase our concern about the worst futures relative to the best futures, because the worst futures could be much worse than the best futures are good.

    (The same point is made in more detail here.)
     
  2. Moral uncertainty: I think it's fairly rare for people to think the best happiness is much better than the worst suffering is bad. People often have a mode at "they are the same in magnitude" and then uncertainty towards "the worst suffering is worse". If that is so, you should be marginally more worried about the worst futures relative to the best futures (see the toy calculation after this list). The case for this is more robust if you incorporate other people's views into your uncertainty: I think it's extremely rare for someone's distribution to skew towards the best happiness being better in expectation.[1]

    (Weakly related point here.)
  3. Caring about preference satisfaction: I feel much less strongly about this one because thinking about the preferences of future people is strange and confusing. However, I think if you care strongly about preferences, a reasonable starting point is anti-frustrationism, i.e. caring about unsatisfied preferences but not about the satisfied preferences of future people. That's because otherwise you might end up thinking, for example, that it's ideal to create lots of people who crave green cubes and give them lots of green cubes. I at least find that outcome a bit bizarre. It also seems asymmetric: creating people who crave green cubes and not giving them green cubes does seem bad. Again, if this is so, you should marginally weigh futures with lots of dissatisfied people more than futures with lots of satisfied people.
    To be clear, there are many alternative views, possible ways around this, etc. Taking into account the preferences of non-existent people is extremely confusing! But I think this might be an underappreciated problem that people who mostly care about preferences need to find some way around if they don't want to weigh futures with dissatisfied people more highly.
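
As a toy calculation for point 2: the credences and the "10x" figure below are entirely made up for illustration, but they show how even a modest skew towards "suffering can be worse" pushes the expected downside range above the expected upside range.

```python
# Toy calculation only: invented credences, not numbers from the post.
# Ratio = (badness of the worst suffering) / (goodness of the best happiness).
credences = {
    1: 0.5,   # "they are the same in magnitude"
    10: 0.5,  # "the worst suffering is 10x worse"
}

expected_ratio = sum(ratio * p for ratio, p in credences.items())
print(expected_ratio)  # 5.5
```

Any distribution with a mode at "equal in magnitude" and residual uncertainty only towards "the worst suffering is worse" gives an expected ratio above 1, which is the asymmetry point 2 relies on.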

I think point 1 is the most important because many people have intuitions around complexity of value. None of these points imply that you should focus on s-risk. However, they are arguments for weighing s-risk more heavily. I wanted to put them out there because people often bring up "symmetry of value and disvalue" as a reason they don't focus on s-risk.

  1. ^

    There's also moral uncertainty 2.0: people tend to disagree more about what's most valuable than about what's bad. For example, some people think only happiness matters, while others think justice, diversity, etc. also matter; roughly everybody thinks suffering is bad. You might think a reasonable way to aggregate is to focus more on reducing suffering, which everyone agrees on, at least whenever the most efficient way of increasing happiness trades off against justice or diversity.

Comments

Related to your point 1:

I think one concrete complexity-increasing ingredient that many (but not all) people would want in a utopia is for their interactions with other minds to be authentic; that is, they want the right kind of "contact with reality."

So, something that would already seem significantly suboptimal (to some people at least) is a scenario with lots of private experience machines where everyone is living a varied and happy life, but everyone's life follows pretty much the same template and the other characters in one's simulation aren't genuine, in the sense that they don't exist independently of one's interactions with them. (That is, the simulation is solipsistic: other characters may be computed to be the most exciting response to you, and their memories from "off-screen time" are fake.) While this scenario would already be a step up from "rats on heroin" or "brains in a vat with their pleasure hotspots wire-headed," it's still probably not the type of utopia many of us would find ideal. Instead, as social creatures who value meaning, we'd want worlds (whether simulated/virtual or not doesn't seem to matter) where the interactions we have with other minds are genuine: the other minds wouldn't just be characters programmed to react to us, but real minds with real memories and "real" (insofar as this is a coherent concept) choices. Utopian world setups that allow for this sort of "contact with reality" presumably cannot be packed too tightly with sentient minds.

By contrast, dystopias can be packed tightly. For dystopias, it matters less whether they are repetitive, lack options/freedom, or have solipsistic aspects. (If anything, those features can make a particular dystopia more horrifying.)

To summarize, here's an excerpt from my post on alignment researchers arguably having a comparative advantage in reducing s-risks:

Asymmetries between utopia and dystopia. It seems that we can “pack” more bad things into dystopia than we can “pack” good things into utopia. Many people presumably value freedom, autonomy, some kind of “contact with reality.” The opposites of these values are easier to implement and easier to stack together: dystopia can be repetitive, solipsistic, lacking in options/freedom, etc. For these reasons, it feels like there’s at least some type of asymmetry between good things and bad things – even if someone were to otherwise see them as completely symmetric.

Another argument for asymmetric preference views (including antifrustrationism) and preference-affecting views over total symmetric preference views is that the total symmetric views are actually pretty intrapersonally alienating or illiberal in principle, and possibly in practice in the future with more advanced tech or when we can reprogram artificially conscious beings.

Do you care a lot about your family or other goals? Nope! I can make you care way more about having a green cube, and approve way more of your new life centered on green cubes, abandoning your family and goals. You'll be way better off. Even if you disprefer the prospect now, I'll make sure you're way more grateful afterwards, with your new preferences. The gain will outweigh the loss.

Basically, if you can manipulate someone's mind to have additional preferences that you ensure are satisfied, then as long as the extra satisfaction exceeds the frustration from involuntarily manipulating them, on such views this is better for them than leaving them alone.

Asymmetric and preference-affecting views seem much less vulnerable to this, as long as we count as bad the frustration involved in manipulating or eliminating preferences, including preferences against certain kinds of manipulation and elimination. For example, killing someone in their sleep, and thereby eliminating all their preferences, should still typically be bad for someone who would disprefer it, even if they never find out. The killing both frustrates and eliminates their preferences more or less simultaneously, but we assume the frustration still counts as bad. And on these views, new satisfied preferences wouldn't make up for that frustration.

This is the problem of replacement/replaceability, applied intrapersonally to preferences and desires.

I have two points regarding point 2. Firstly, what matters is the relationship between the expected happiness and the expected suffering, not the best happiness and the worst suffering. There is no particular reason that these relationships should be the same. It may be that the worst suffering outweighs the best happiness, and also that the expected happiness outweighs the expected suffering.

Secondly, why do you think people would skew towards the suffering dominating? My intuition is that the expected happiness will generally dominate. I've noticed there is a subset of EAs who seem to have an obsession with suffering, and with the related position of anti-natalism, but I do not think EAs are representative of the broader population in this regard, and I do not think this subset of EAs is epistemically justified.

Yes, you're totally right that I was just speaking about the range and not the expectation! That's part of the reason why I said none of the points I made are decisive for working on s-risk. I was only providing arguments against the position that the range is symmetric, which I often see people take.

I don't understand the point about the complexity of value being greater than the complexity of suffering (or disvalue). Can you possibly motivate the intuition here? It seems to me like I can reverse the complex valuable things that you name and get their "suffering equivalents" (e.g. friendship -> hostility, happiness -> sadness, love -> hate, etc.), and they don't feel significantly less complicated.

I don't know exactly what it means for these things to be less complex; I'm imagining something like writing a Python program that simulates the behaviour of two robots in a way that is recognisable to many people as "friends" or "enemies" and measuring the length of the program.
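
For concreteness, here is a very rough sketch of that thought experiment. The behaviours, the function names, and the use of compressed source length as a stand-in for "length of the program" are all invented for illustration, not anything established in the thread.

```python
# Toy sketch only: a crude way to compare how much code a "friendly" vs a
# "hostile" policy seems to need, using compressed source length as a very
# rough Kolmogorov-style proxy for complexity.
import inspect
import zlib

def friendly_step(self_state, other_state):
    # "Friendship" arguably needs some model of the other agent:
    # track shared history, guess what they want, act to help.
    history = self_state.setdefault("shared_history", [])
    history.append(other_state["last_request"])
    guessed_goal = max(set(history), key=history.count)  # crude model of the other mind
    return {"action": "assist", "target": guessed_goal}

def hostile_step(self_state, other_state):
    # A crude "hostile" policy can ignore the other's inner life entirely.
    return {"action": "harm", "target": other_state["position"]}

def description_length(fn):
    # Compressed source length of the policy function.
    return len(zlib.compress(inspect.getsource(fn).encode()))

print(description_length(friendly_step), description_length(hostile_step))
```

Of course, one could just as well write a hostile policy that models the other agent in equal detail (which is the point above about reverses not feeling less complicated), so the sketch only makes the "measure the length of the program" idea concrete rather than settling anything.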

It's not that there aren't similarly complex reverses; it's that there's a type of bad that basically everyone agrees can be extremely bad, i.e. extreme suffering, and there's no (or much less) consensus on a good of similar complexity that can be as good as extreme suffering is bad. For example, many would discount pleasure/joy when it's based on false beliefs, like being happy that your partner loves you when they actually don't, whether because they just happen not to love you and are deceiving you, or because they're a simulation with no feelings at all. Extreme suffering wouldn't get discounted (much) if it were based on inaccurate beliefs.

A torturous solipsistic experience machine is very bad, but a happy solipsistic experience machine might not be very good at all, if people's desires aren't actually being satisfied and they're only deceived into believing they are.

Executive summary: The post argues that several common philosophical views - including the complexity of value, moral uncertainty, and caring about preference satisfaction - actually imply putting more priority on avoiding catastrophic suffering (s-risk) than maximizing positive value.

Key points:

  1. Complexity of value but simplicity of suffering implies suffering is more energy-efficient, so worst cases outweigh best cases.
  2. Moral uncertainty leans towards suffering being worse than happiness is good.
  3. Caring about preferences suggests anti-frustrationism, which weighs dissatisfaction more than satisfaction.
  4. These undermine common appeals to symmetry as a reason not to focus on s-risk.
  5. They suggest moral views often point to more concern for avoiding the worst rather than maximizing the best.
  6. None definitively say s-risk should dominate, but shift priority towards avoiding downside risks.

 

 

This comment was auto-generated by the EA Forum Team.
