Crossposted on my blog

I have a new paper out titled “The Sequence Argument Against the Procreation Asymmetry,” for the journal Utilitas. Sorry for the extreme clickbait title of the paper—very salacious—academic papers are all titled that way.

It’s my third published paper which means I am now a Published Scholar and a Very Important Person who should be deferred to across the board. In it, I argue against the procreation asymmetry, which is the idea that while it’s good to make existing people better off, there’s nothing good about creating a happy person. Asymmetry defenders love the slogan “make people happy, not happy people,” which must be true because it’s catchy.

Suppose you have a child named Jake. According to the procreation asymmetry, having Jake might be good because it helps you or others, but it can’t be good because it helps Jake—after all, he wouldn’t have otherwise existed, so he can’t be worse off (for an explanation of what’s wrong with this reasoning, see here).

In other words, while the childless cat ladies might be saddened by not having kids, it wouldn’t be better for the kids if they had them. This paper is about pwning the childless cat ladies!

Patrick Flynn, friend of the blog, has like 10 billion children—the paper argues that the childless cat ladies should do the same (or at least, there's some strong moral reason to do so, though perhaps it's outweighed by other stuff).

 

The argument is as follows. Imagine that you have a button. If you press this button, it will make an existing person better off and create a happy person. Perhaps the button will allow an existing person to live an extra year and create a well-off person. Surely pressing this first button would be a good thing.

Then imagine that there’s a second button that would rescind the benefit to the existing person but provide a much greater benefit to the newly created person—the one created by the first button. Perhaps the first button creates a vial of medicine that you had planned to give to an existing person that would give them an extra year of life. The second button instead gives the medicine to the newly created person, giving them an extra 70 years of life.

Now that the person is guaranteed to exist, pressing the second button is clearly worthwhile. Taking an action that provides dramatically greater benefit for someone you’ve created than for a stranger is worth it. But together, these actions simply create a happy person—this is because the second button rescinds the benefit that would have gone to an existing person to instead benefit the newly created person. Thus, you should press two buttons which together simply make a happy person.

If this is right then you should press a single button that just creates a happy person. This button would be the same as a button that simply pressed both of the other two buttons. But if you should push each of two buttons, you should press a single button that presses those two buttons.

Thus, the argument goes roughly like:

  1. You should press the first button that creates a happy person and makes an existing person better off.
  2. You should press the second button that rescinds the benefit to the existing person and makes the newly created person vastly better off.
  3. If you should press two buttons which collectively do X, then you should press one button that does X.

From these, it follows that one should create a happy person.

The second premise is extremely obvious. Note that for the argument to go through, it doesn't have to be that whenever you can benefit either the person you created or a stranger, you should benefit your offspring so long as they'd benefit more. It just has to be that if you can provide some vastly greater benefit—perhaps one a million times as great—to the person you created rather than a stranger, you should do so.

It also doesn’t rely on the idea that you should harm a stranger to benefit your offspring. Button 2 is supposed to be pressed before the benefit has been given, so pressing it doesn’t take away a good from anyone; it just gives a good to your offspring rather than a stranger.

You might reject 1) and think that it’s a bit bad to create a happy person, so that you should only do it if it produces huge benefits to existing people. But that’s consistent with the argument. For 1) it need not be that it’s always worth pressing a button to benefit an existing person and create a happy person, just that there’s some degree of benefit that if given to an existing person is worth creating a happy person. For instance, creating a happy person is worth curing a person’s terminal illness.

One might reject 1) and think that the button is worth pressing, but only so long as you won’t later have the option to press 2). But this is super implausible. The second button is worth pressing, so to go this route, you’d have to think that the addition of extra worthwhile options sometimes makes other options worse. In the paper, I give the following dialogue:

Person 1: Hey, I think I will create a person by pressing a button. It is no strain on us and it will make the life of a random stranger better, so it will be an improvement for everyone.

Person 2: Oh, great! I will offer you a deal. If you do that, and you do not give the gift to the stranger, instead, I'll give your baby 20 million times as much benefit as the gift would have.

Person 1: Thanks for the offer. It is a good offer, and I would take it if I were having the baby. But I am not having a baby now, because you offered it.

Person 2: What? But you don't have to take the offer.

Person 1: No, no. It is a good offer. That is why I would have taken it. But now that you have offered me this good offer, that would be worth taking, it makes it so that having a baby by button is not worth doing.

Clearly, person 1 is being irrational here (for a similar principle, see Hare 2016, p. 460). If, after taking some action, one gets another good option, that would not make the original action not worth taking. The fact that some action allows one to do other worthwhile things counts in favor of it, not against it. As Huemer (2013, p. 334) notes, it is perfectly rational to refuse to take an action because you predict that if you take it, you will do other things that you should not. But it is clearly irrational to refuse to take an action on the grounds that, if you do, you will do other worthwhile things.

Additionally, the procreation asymmetry isn’t just deontic—it’s not just about whether one has moral reasons to procreate—but also axiological. To buy the asymmetry, you must think the world is no better with the mere addition of an extra happy person (if the world is better because of the addition of an extra happy person, then it seems there’s a reason to create happy people, as you have some reason to make the world better).

But if the asymmetry is axiological, then so long as the better-than relation is transitive—meaning that if A is better than B and B is better than C, then A is better than C—this escape route can’t work. So long as we accept:

  1. The world where the first button is pressed is better than the world where no buttons are pressed.
  2. The world where both buttons are pressed is better than the world where only the first button is pressed.

It will follow that:

  1. The world where both buttons are pressed is better than the world where no buttons are pressed.

But the world where both buttons are pressed just has an extra happy person! Thus, the addition of an extra happy person makes the world better. This can’t be avoided by holding that your reason to press the first button evaporates if the second button exists, because the premises are about states of the world, not a person’s reasons.
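If you like, the transitivity step can be checked mechanically. Here is a toy Python sketch (mine, not the paper's; the world labels and the closure helper are my own illustration) that records only the two pairwise judgments the premises assert and then closes them under transitivity:

```python
# Toy sketch of the transitivity argument (illustrative only; the world
# names and the closure helper are mine, not the paper's formalism).

def transitive_closure(better):
    """Given (a, b) pairs meaning 'a is better than b', add every pair
    implied by transitivity."""
    closed = set(better)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

# The two premises about states of the world:
premises = {
    ("only button 1 pressed", "no buttons pressed"),   # premise 1
    ("both buttons pressed", "only button 1 pressed"), # premise 2
}

judgments = transitive_closure(premises)

# Transitivity yields the conclusion: the world with both buttons
# pressed, which differs from the original only by one extra happy
# person, is better than the world with no buttons pressed.
print(("both buttons pressed", "no buttons pressed") in judgments)  # True
```

Nothing deep is happening in the code, of course; it just makes vivid that once the two pairwise premises are granted, the conclusion is forced.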

The only remaining premise is “If you should press two buttons which collectively do X, then you should press one button that does X.” This one seems obvious enough; if you should press two buttons that together do X, then it seems that you should press a single button which has the effect of pressing the other two buttons. But that means you should press a single button that does X. It would be bizarre to think that, for instance, you should press two buttons that give someone a cake, but not press one button that does that—whether you should take actions is given by what the actions do, not the order in which you press buttons.

But even if you reject this principle, I think there’s a powerful argument against the procreation asymmetry. The axiological argument I gave before didn’t make reference to any such principle—so long as the world is a better place because of the pressing of the first button, and it’s a better place if the second button is pressed after the first, then by transitivity, the addition of an extra happy person is good. Thus, the person who denies that it’s good to create a happy person is left in the awkward position of denying that you should do things that make the world better at no cost to anyone. Nuts!

Finally, if the procreation asymmetry is right, it would be weird if, despite having no reason to create a happy person, one has reason to take a sequence of actions that only creates a happy person. Yet that’s what the first two premises show.

From here, then, we’re off to the races. Starting with these sorts of principles about buttons, we can start deriving pretty dramatic conclusions like we were RFK Junior and pretty dramatic conclusions were people he was having affairs with. Assume that we go for the more ambitious versions of the principles, according to which the first button is worth pressing if it makes an existing person better off and creates a person with positive welfare, and the second button is worth pressing if it produces greater benefit to the newly created person than an existing person. From here, we can get more dramatic results.

From this it will follow not merely that there’s a reason to make very happy people but that you should make any person with net positive welfare ceteris paribus. Suppose we want to show that you should create a person with 1 microutil if all else is equal (that’s a very small amount). Well, a button that gives a person an extra .25 microutils and creates a person with .25 microutils would be good—it would benefit one person, create one well off person, and harm no one. But then a second button that gets rid of the .25 microutil benefit to the existing person to instead provide a .75 microutil benefit to the newly created person would be worth pressing. So then you should take a sequence of actions that creates a person with a single microutil which, by the earlier sequential principles, implies you should take a single action which creates a person with a microutil.
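As a sanity check on the microutil arithmetic, here is a toy welfare ledger (my own illustration; the names and the ledger structure are not from the paper):

```python
# Toy welfare ledger for the microutil version of the argument
# (illustrative only; the person names are mine). Units are microutils.

welfare = {"existing person": 0.0}

# Button 1: give the existing person +0.25 and create a person with 0.25.
welfare["existing person"] += 0.25
welfare["new person"] = 0.25

# Button 2: rescind the 0.25 benefit to the existing person and give
# the newly created person 0.75 instead.
welfare["existing person"] -= 0.25
welfare["new person"] += 0.75

# Net effect of the sequence: the existing person is exactly where they
# started, and a person with 1 microutil of welfare now exists.
print(welfare)  # {'existing person': 0.0, 'new person': 1.0}
```

The point of the bookkeeping: each button press looks individually worthwhile, yet the sequence as a whole does nothing except create a person with 1 microutil.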

People normally think that it’s only good to create a person if they’ll have a very good life. For instance, most people think you have no reason to have kids just to give them up for adoption. But this shows that’s wrong. So long as we accept the Pareto principle—that if something is good for one person and bad for no one, it’s good overall—you’ll have a reason to create any person with net positive welfare. Here’s why.

From the earlier steps, we proved that you should create any person with net positive welfare. By Pareto, if pressing a further button harmed the created person but produced some other, greater benefit, it would be worth pressing. But this means one should take a sequence of actions that simply creates anyone with positive welfare.

Now, maybe you reject that one should create anyone with positive welfare. Perhaps you think that it’s only good to create someone with welfare level more than some amount X. Well, this won’t avoid counterintuitive conclusions: the above reasoning shows that there’s a reason to create any person if their net welfare level is more than X.

Finally, the argument shows that it’s a good thing if a person has a child with welfare level N, so long as having the child decreases their welfare level by less than N. By the above reasoning, you should create anyone with positive welfare. So, for instance, if a person has 50 units of well-being and 49 units of suffering, they’re worth creating. But then I appeal to the following principle:

Offspring Agony Passing On: one should endure some amount of suffering as long as it averts a greater amount of suffering from being experienced by their offspring.

Here, when I’m talking about what should be done, I’m describing what action one has most reason to take—what would be the best thing to do. They’re not necessarily required to do this. But if this is right, then if a person creates someone with 50 units of well-being and 49 units of suffering, and then takes on the 49 units of suffering themselves, that would be a good thing to do. Thus, it’s good to create a person so long as your lost well-being in creating them isn’t greater than their well-being level. (From here, we can derive the repugnant conclusion—details in the paper).

Even if you reject that it’s good to create any person with positive well-being, the above argument will show that if X is the minimum well-being level at which it’s good to create a person, it’s good to create a person with well-being level X+V as long as doing so costs you less than V units of well-being.

It also shows that it’s as good to create a happy person with well-being level W as to benefit an existing person by W (I didn’t make this argument in the paper though, because I didn’t think of it at the time). Here are two plausible principles:

  1. If you can benefit your offspring more than a stranger, it’s better to benefit your offspring.
  2. If A is a better action to take than B and C is a good action to take, it’s better to take A and C than just B.

For example, if it’s better to give you 100 dollars than 50, and good to help a little old lady cross the street, then it’s better to give you 100 dollars and help a little old lady across the street than just to give you 50 dollars.

But if these two are right then it’s better to create a happy person with well-being level W*—where W* is any amount more than W—than to give an existing person an extra W units of well-being. For example, suppose that you’re deciding between giving an existing person an extra 50 units of well-being or creating a person with 51 units. It’s better to create the person given that:

  1. It would be good to create a person with .5 units of well-being (as per the above reasoning).
  2. It would be better to give the newly created person 50.5 units of well-being than a stranger 50 units of well-being.
  3. Therefore, it would be better to create a person with .5 units of well-being and give them 50.5 units of well-being than to give a stranger 50 units of well-being.
  4. Therefore, it would be better to create a person with 51 units of well-being than to give a stranger 50 units of well-being.
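The arithmetic in steps 1–4 can be checked with the same kind of toy ledger (my own illustration; the numbers follow the example in the text):

```python
# Toy comparison for the 51-vs-50 argument (numbers from the example
# above; the ledger itself is my own illustration, not the paper's).

# Option A: create a person with 0.5 units of well-being, then give
# them a further 50.5 units (better than giving a stranger 50, per
# step 2 above).
created_total = 0.5 + 50.5

# Option B: give an existing stranger an extra 50 units.
stranger_benefit = 50.0

print(created_total)                     # 51.0
print(created_total > stranger_benefit)  # True
```

So the composed option just is "create a person with 51 units," and by the earlier sequential reasoning it beats benefiting the stranger by 50.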

If this is right, then having kids is extremely valuable! Probably for most people, having kids is by far the best thing they ever do. Having a kid who lives a happy life for 40 years is about as good as giving 40 years worth of extra happy life to an existing person. The fertility crisis is thus a terrible thing, even if it doesn’t degrade the quality of our institutions and eviscerate pensions. It’s bad because there are tens of people who will never be.

It’s open access! Hope you enjoy!

Comments

Congratulations on the publication!

 

FWIW, I don't find the denial of Sequential Desirability very counterintuitive, if and when it's done for certain person-affecting reasons, precisely because I am quite sympathetic to those person-affecting reasons. The discussion in the comments here seems relevant.

Also, a negative utilitarian would deny the coherence of Generative Improvement, because there's no positive utility. You could replace it with an improvement and generating a person with exactly 0 utility, or with utility less than the improvement. But from there, Modification Improvement is not possible.

Not all negative utilitarians deny that there exists such a thing as pleasure; they generally deny that it matters as much as pain. The view that there are no good states is crazy.

What do you make of the point I made here about why denying sequential desirability is implausible (it implies that you should press buttons A and B but shouldn't press one button C which simply presses A and B), and the reasoning for why your view commits you to denying the transitivity of the better-than relation? (I also make a third point in the paper.)

By what standard are you judging it to be crazy? I don't think the view that there are no good states is crazy, and I'm pretty sympathetic to it myself. The view that it's good to create beings for their own sake is totally unintuitive to me (although I wouldn't call it or really any other view crazy).

How I would personally deal with your hypothetical under the kind of person-affecting views to which I'm sympathetic is this:

We don't have reason to press the first button if we'd expect to later undo the welfare improvement of the original person with the second button. This sequence of pressing both isn't better on person-affecting intuitions than doing nothing. When you reason about what to do, you should, in general, use backwards induction and consider what options you'll have later and what you'd do later.

If you don't use backwards induction, you will tend to do worse than otherwise and can be exploited, e.g. money pumped. This is true even for total utilitarians.

I address that in the article. First of all, so long as we buy the transitivity of the better-than relation, that won't work. Second, it's highly counterintuitive that the addition of extra good options makes an action worse.

I find it crazy and I think nearly all people do. 

First of all, so long as we buy the transitivity of the better-than relation, that won't work.

This isn't true. I can just deny the independence of irrelevant alternatives instead.

Second, it's highly counterintuitive that the addition of extra good options makes an action worse. 

It's highly counterintuitive to you. It's intuitive to me because I'm sympathetic to the reasons that would justify it in some cases, and I outlined how this would work on my intuitions. The kinds of arguments you give probably aren't very persuasive to people with strong enough person-affecting intuitions, because those intuitions justify to them what you find counterintuitive.

I find it crazy and I think nearly all people do. 

This doesn't seem like a reason that should really change anyone's mind about the issue. Or, at least not the mind of any moral antirealist like me.

I suppose a moral realist could be persuaded via epistemic modesty, but if you are epistemically modest, then this will undermine your own personal views that aren't (near-)consensus (among the informed). For example, you should give more weight to nonconsequentialist views.

//This isn't true. I can just deny the independence of irrelevant alternatives instead.//

That doesn't help. The world where only button 1 is pressed is better than the world where neither is pressed, and the world where both are pressed is better than the world where only button 1 is pressed, so by transitivity, an extra happy person is good.

You can always deny any intuition, but I'd hope this would convince people without fairly extreme views.

Your argument is implicitly assuming IIA.

On a person-affecting view violating IIA but not transitivity, we could have the following:

  1. button 1 ≻₁ neither, when exactly these two options are available
  2. both buttons ≻₂ button 1, when exactly these two options are available
  3. both buttons ≻₃ neither, when exactly these two options are available
  4. button 1 ≻₄ both buttons ≻₄ neither, when exactly these three options are available

There's no issue for transitivity, because the 4 cases involve 4 distinct relations (distinguished by their subscripts), each of which is transitive. The 4 relations don't have to agree.

I was assuming both buttons are available.  Specifically, suppose Bob exists:

  1. Bob getting an extra 1 util and Todd being created with a util is better than that not happening.  
  2. Todd being created with 3 utils is better than the scenario in 1.  

I'm guessing there isn't much more we can gain by discussing further, and we'll have to agree to disagree. I'll just report my own intuitions here and some pointers, reframing things I've already said in this thread and elaborating.

It's useful to separate the outcomes from the actions here. Let's label the outcomes:

Nothing: the result of pressing neither button.

A: Bob getting an extra 1 util and Todd being created with a util, the result of only button 1 being pressed.

B: Todd being created with 3 utils, the result of both buttons being pressed.

 

On my person-affecting intuitions, I'd rank the outcomes as follows (using a different betterness relation for each set of outcomes, violating the independence of irrelevant alternatives but not transitivity):

  1. When only Nothing and A are available, A > Nothing.
  2. When only A and B are available, B > A.
  3. When only Nothing and B are available, Nothing ~ B.
  4. When all three outcomes are available, Nothing ~ B. I'm undecided on how to compare A to Nothing and B, other than that its comparison with Nothing and its comparison with B are the same. I have some sympathy for different ways of comparing A to the other two.

 

Now, I can say how I'd act, given the above.

If I already pressed button 1 and Nothing is no longer attainable, then we're in case 2, so pressing button 2 and so pressing both buttons is better than only pressing button 1, because it means choosing B over A.

If starting with all three options still available, and I expect with certainty that if I press button 1, Nothing will no longer be available and I will then press button 2 — say because I know I will follow the rankings in the previous paragraph at that point — then the outcome of pressing button 1 is B, by backward induction. Then I would be indifferent between pressing button 1 and getting outcome B, and not pressing it and getting Nothing, because B ~ Nothing.[1]

If starting with all three options still available, and for whatever reason I think there's a chance I won't press button 2 if I press button 1, then using statewise dominance reasoning:

  1. If and because A > Nothing (and because B ~ Nothing) at this point, pressing button 1 would be better than not pressing either button.
  2. If and because A < Nothing (and because B ~ Nothing) at this point, pressing button 1 would be worse than not pressing either button.
  3. If and because A ~ Nothing (and because B ~ Nothing) at this point, I'd be indifferent.

Similarly if I'm not 100% sure that button 2 will actually even be available after pressing button 1.

 

My intuitions are guided mostly by something like the (actualist[2]) object interpretation and participant model of Rabinowicz and Österberg (1996)[3] and backward induction.

  1. ^

    We might say I'm in case 3 here, because I've psychologically ruled out A knowing I'd definitely pick B over A. But B ~ Nothing whether we're in case 3 or case 4.

  2. ^

    For more on actualism as a population ethical view, see Hare (2007) and Spencer (2021). I'm developing my own actualist(-ish) view, close to weak actualism in those two papers. I'm also sympathetic to Thomas (2019) and Pummer (2024).

  3. ^

    Rabinowicz and Österberg (1996) write:

    To the satisfaction and the object interpretations of the preference-based conception of value correspond, we believe, two different ways of viewing utilitarianism: the spectator and the participant models.

    According to the former, the utilitarian attitude is embodied in an impartial benevolent spectator, who evaluates the situation objectively and from the 'outside'. An ordinary person can approximate this attitude by detaching himself from his personal engagement in the situation. (...)

    The participant model, on the other hand, puts forward as a utilitarian ideal an attitude of emotional participation in other people's projects: the situation is to be viewed from 'within', not just from my own perspective, but also from the others' points of view. The participant model assumes that, instead of distancing myself from my particular position in the world, I identify with other subjects: what it recommends is not a detached objectivity but a universalized subjectivity.

    and

    the object interpretation presupposes a subjectivist (or 'projectivist') theory of value. Values are not part of the mind-independent world but something that we project upon the world, or — more precisely — upon the whole set of possible worlds. In this sense, our intrinsic value claims, while not world-bound in their range of application, constitute an expression of a particular world-bound perspective: the perspective determined by the preferences we actually have.

Not all negative utilitarians deny that there exists such a thing as pleasure, they generally deny that it matters as much as pain.  The view that there are no good states is crazy

Denying that pleasure is a "good state" is not the same as denying that pleasure exists.

"pwning the childless cat ladies" I know this is just a joke in passing and not the point of the paper, but this is sexist (in the sense that it comes off hostile to women or at least gender-nonconforming women) and sexism should be avoided for both substantive and PR reasons. 

I do not think a joking throwaway reference to a statement from the upcoming vice president is offensive.
