Summary: The alleged inevitable convergence between efficiency and methods that involve less suffering is one of the main arguments I’ve heard in favor of assuming the expected value of the future of humanity is positive, and I think it is invalid. While increased efficiency has luckily converged with less biological suffering so far, this seems to be due to the physical limitations of humans and other animals rather than to their suffering per se. And while past and present suffering beings all have severe physical limitations making them “inefficient”, future forms of sentience will likely make this past trend completely irrelevant. Future forms of suffering might even be instrumentally very useful and therefore “efficient”, such that we could make the reverse argument. Note that the goal of this post is not to argue that technological progress is bad, but simply to call out one specific claim that, despite its popularity, is, I think, just wrong.
The original argument
While I’ve mostly encountered this argument in informal conversations, it has been fleshed out (I think pretty well) by Ben West (2017)[1] (emphasis mine):
[W]e should expect there to only be suffering in the future if that suffering enables people to be lazier [(i.e., if it is instrumentally “efficient”)]. The most efficient solutions to problems don’t seem like they involve suffering. [...] Therefore, as technology progresses, we will move more towards solutions which don’t involve suffering[.]
Like most people I’ve heard use this argument, he illustrates his point with the following two examples:
- Factory farming exists because the easiest way to get food which tastes good and meets various social goals people have causes cruelty. Once we get more scientifically advanced though, it will presumably become even more efficient to produce foods without any conscious experience at all by the animals (i.e. clean meat); at that point, the lazy solution is the more ethical one.
- (This arguably is what happened with domestic work animals on farms: we now have cars and trucks which replaced horses and mules, making even the phrase “beat like a rented mule” seem appalling.)
- Slavery exists because there is currently no way to get labor from people without them having conscious experience. Again though, this is due to a lack of scientific knowledge: there is no obvious reason why conscious experience is required for plowing a field or harvesting cocoa, and therefore the more efficient solution is to simply have nonconscious robots do these tasks.
- (This arguably is what happened with human slavery in the US: industrialization meant that slavery wasn’t required to create wealth in a large chunk of the US, and therefore slavery was outlawed.)
Why this argument is invalid
While I tentatively think the claim that “the most efficient solutions to problems don’t seem like they involve suffering” is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument fall apart.
Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples here (enslaved humans and exploited animals), suffering itself is not the limiting factor; rather, it is the physical limitations of these biological beings, relative to machines that could do a better job at their tasks.
I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.
The fact that suffering has been correlated with inefficiency so far seems to be a lucky coincidence that allowed for the end of some forms of slavery/exploitation of biological sentient beings.
Potential future forms of suffering (e.g., digital suffering)[2] do not seem to similarly correlate with inefficiency, such that there seems to be absolutely no reason to assume future methods will engender less suffering by default.
In fact, there are reasons to assume the exact opposite, unfortunately. We may expect digital sentience/suffering to be instrumentally useful for a wide range of activities and purposes (see Baumann 2022a; Baumann 2022b).
Ben West himself acknowledges the following in a comment under his post:
[T]he more things consciousness (and particularly suffering) are useful for, the less reasonable [my “the most efficient solutions to problems don’t seem like they involve suffering” point] is.
For the record, he even wrote the following in a comment under another post six years later:
The thing I have most changed my mind about since writing the [2017] post of mine [...] is adjacent to the "disvalue through evolution" category: basically, I've become more worried that disvalue is instrumentally useful. E.g. maybe the most efficient paperclip maximizer is one that's really sad about the lack of paperclips.
While I find his particular example not very convincing (compared to examples in Baumann 2022a or other introductions to s-risks), he seems to agree that we might expect suffering to be somewhat “efficient” in the future.
I should also mention that in the comments under his 2017 post, a few people have made a case somewhat similar to the one I make in the present post (see Wei Dai’s comment in particular).
The point I make here is therefore nothing very original, but I thought it deserved its own post, especially given that people have kept making strong claims based on this flawed argument since those comments were written in 2017. (Not that I expect my post to make the whole EA community realize this argument is invalid and that I’ll never hear of it again, but it seems worth throwing this out there.)
I also do not want readers to perceive this piece as a mere critique of West’s post but as a
- “debunking” of an argument longtermists make quite often, despite its apparent invalidity (assuming I didn’t miss any crucial consideration; please tell me if you think I did!), and/or as a
- justification for the claim made in the title of the present post, or potentially for an even stronger one, like “Future technological progress negatively correlates with methods that involve less suffering.”
Again, the point of this post is not to argue that the value of the future of humanity is negative because of this, but simply that we need other arguments if we want to argue for the opposite. This one doesn’t pan out.
[1] In fact, West makes two distinct arguments: (A) We’ll move towards technological solutions that involve less suffering thanks to the most efficient methods involving less suffering, and (B) We’ll move towards technological solutions that involve less suffering thanks to technology lowering the amount of effort required to avoid suffering. In this post, I only argue that (A) is invalid. As for (B), I tentatively think it checks out (although it is pretty weak on its own), for what it’s worth.
[2] One could also imagine biological forms of suffering in beings that have been optimized to be more efficient, such that they’d be much more useful than enslaved/exploited sentient beings we’ve known so far.
Thanks Jim! I think this points in a useful direction, but I'm not sure I would describe this argument as "debunked". Instead, I think I would say that the following claim from you is the crux:
As an example of why this claim is not obviously true: Quicksort is provably the most efficient way to sort a list, and I'm fairly confident it doesn't involve suffering. If you told me that you had an algorithm which suffered while sorting a list, I would feel fairly confident that this algorithm would be less efficient than quicksort (i.e., suffering is anti-correlated with efficiency).
Will this anti-correlation generalize to more complex algorithms? I don't really know. But I would be surprised if you were >90% confident that it would not.
Interesting, thanks Ben! I definitely agree that this is the crux.
I'm sympathetic to the claim that "this algorithm would be less efficient than quicksort" and that this claim is generalizable.[1] However, if true, I think it only implies that suffering is, by default, inefficient as a motivation for an algorithm.
Right after making my crux claim, I reference some of Tobias Baumann's (2022a, 2022b) work, which gives some examples of how significant amounts of suffering may be instrumentally useful or even required in cases such as scientific experiments where sentience plays a key role (i.e., where the suffering is useful not as a motivator for an efficient algorithm but for other reasons). Interestingly, his "incidental suffering" examples are more similar to the factory farming and human slavery examples than to the Quicksort example.
[1] To be fair, it's been a while since I've read about stuff like suffering subroutines (see, e.g., Tomasik 2019) and their plausibility, and people might have raised considerations going against that claim.
I think it would be helpful if you provided some of those examples in the post.
Yeah, I find some of Baumann's examples plausible, but in order for the future to be net negative we don't just need some examples; we need the majority of computation to be suffering.[1]
I don't think Baumann is trying to argue for that in the linked pieces (or if they are, I don't find it terribly compelling); I would be interested in more research looking into this.
[1] Or maybe the vast majority to be suffering. See, e.g., this comment from Paul Christiano about how altruists may have outsized impact in the future.
I do not mean to argue that the future will be net negative. (I even make this disclaimer twice in the post, aha.) :)
I simply argue that the "convergence between efficiency and methods that involve less suffering" argument in favor of assuming it'll be positive is unsupported.
There are many other arguments/considerations to take into account to assess the sign of the future.
Ah yeah sorry, what I said wasn't precise. I mean that it's not enough to show that there exists one instance of suffering being instrumentally useful; you have to show that this is true in general.
(Unless I misunderstood your post?)
If I want to prove that technological progress generally correlates with methods that involve more suffering, yes! Agreed.
But while the post suggests that this is a possibility, its main point is that suffering itself is not inefficient, such that there is no reason to expect progress and methods that involve less suffering to correlate by default (a much weaker claim).
This makes me realize that the crux is perhaps the part below more than the claim we discussed above.
Sorry for the confusion and thanks for pushing back! Helps me clarify what the claims made in this post imply and don't imply. :)
Interesting post, Jim!
I think suffering may actually be a limiting factor. There is a point beyond which worsening the conditions in factory farms would not increase productivity, because the increase in mortality and disability (and therefore suffering) would not be compensated by the decrease in costs. In general, if pain is sufficiently severe, animals will be physically injured, which limits how useful they will be.
Thanks Vasco! Perhaps a nitpick, but suffering still doesn't seem to be the limiting factor per se here. If farmed animals were philosophical zombies (i.e., were not sentient but still had the exact same needs), that wouldn't change the fact that one needs to keep them in conditions that are decent enough to make a profit off of them. The limiting factor is their physical needs, not their suffering itself. Do you agree?
I think the distinction is important because it suggests that suffering itself appears as a limiting factor only insofar as it is strong evidence of physical needs that are not met. And while both strongly correlate in the present, I argue that we should expect this to change.
Thanks for clarifying!
Yes, I agree.
Nice post - I think I agree that Ben's argument isn't particularly sound.
Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation? If not, I imagine you could look at this through a different lens and consider one historical perspective which says something like "One large driver of humanity's moral circle expansion/moral improvement has been technological progress, which has reduced resource competition and allowed groups to expand concern for others' suffering without undermining themselves". This seems fairly plausible to me, and would suggest that you might expect technological progress to correlate with methods involving less suffering.
I wonder if this theory might highlight points of resource contention where one might expect there to be less concern for digital suffering. Off the top of my head, examples might include AI arms races, early-stage space colonisation, and perhaps some form of partial civilisational collapse.
Thanks!
Hmm... not sure. I feel like my claims are very weak and hold even in future worlds without autonomous advanced AIs.
Agreed, but this is more similar to argument (B), fleshed out in footnote [1], which is not the one I'm assailing in this post.