
Abstract

When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk avoidant risk function. This, in turn, has been claimed to require the use of a risk avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued human survival. I raise objections to the claim that respect for others’ risk attitudes requires risk avoidance when choosing for future generations. In particular, I argue that there is no known principle of interpersonal aggregation that yields acceptable results in variable population contexts and is consistent with a plausible ideal of respect for others’ risk attitudes in fixed population cases.

Introduction

The long-run future is highly uncertain, as are the effects of present actions on posterity. In order to be able to make reasonable decisions that take account of the potential long-term impact of our choices today, we therefore need to know how to rationally manage uncertainty in decision-making.

According to orthodox decision theory, a rational agent in conditions of uncertainty prefers those acts that maximize expected utility (Arnauld and Nicole 1662; Bernoulli 1738; Ramsey 1926; von Neumann and Morgenstern 1947; Savage 1972). The utility function is assumed to be a cardinal measure of the agent’s strength of preferences over outcomes, and its expectation is taken relative to a probability function representing known chances and/or the strength of the agent’s beliefs about the state of the world.
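In symbols (a standard textbook formulation, with notation of my own choosing rather than the paper's): if an act $f$ yields outcome $x_i$ with probability $p_i$ for $i = 1, \dots, n$, its expected utility is

$$\mathrm{EU}(f) = \sum_{i=1}^{n} p_i \, u(x_i),$$

and the orthodox view holds that a rational agent prefers acts with higher expected utility.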

In the recent philosophical literature, an influential alternative to expected utility theory is defended by Buchak (2013), building on earlier work by Quiggin (1982). Buchak argues for the rationality of maximizing risk-weighted expected utility (REU). On this view, a rational agent’s preferences over uncertain prospects depend not only on the probabilities she assigns to the different possible states of the world and the desirability of the different possible outcomes, but also independently on her attitude toward risk, as captured by a risk function on probabilities.
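To make the contrast with expected utility concrete, here is the familiar rank-dependent formulation associated with Quiggin (1982) and Buchak (2013), again in notation of my own choosing: order the possible outcomes of an act $f$ from worst to best, $u(x_1) \le \dots \le u(x_n)$, with probabilities $p_1, \dots, p_n$. Then

$$\mathrm{REU}(f) = u(x_1) + \sum_{i=2}^{n} r\!\left(\sum_{j=i}^{n} p_j\right)\bigl[u(x_i) - u(x_{i-1})\bigr],$$

where $r$ is the agent’s risk function, a non-decreasing map from $[0,1]$ to $[0,1]$ with $r(0)=0$ and $r(1)=1$. Setting $r(p) = p$ recovers expected utility. A risk-avoidant agent has a convex risk function such as $r(p) = p^2$: for a fair coin flip between utilities $0$ and $100$, she values the gamble at $0 + r(\tfrac{1}{2}) \cdot 100 = 25$ rather than at its expected utility of $50$, in effect underweighting the probability of the better outcome.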

More recently, Buchak (2016, 2017, 2019) has argued that moral contexts require that we adopt a particular attitude toward risk. We are required, she claims, to exhibit a high degree of risk avoidance as a default. This default, she claims, should especially guide those of our decisions whose largest impacts are on future individuals, such as decisions about climate change. This approach has also been claimed to have important consequences for evaluating the prospect of continued human survival and actions aimed at ensuring that our species endures. In particular, Pettigrew (2022) argues that consequentialists are pushed in the direction of favouring premature human extinction over continued human survival in light of the significant risks associated with the persistence of human beings as a dominant species.

The core idea that motivates this line of argument is that of respect for others’ risk attitudes. When our actions affect others, we ought not simply impose our own idiosyncratic attitude toward risk on them, and should instead choose in a way that takes account of the risk attitudes of the people we potentially affect. So goes the thought. I will offer reasons to think that a plausible ideal of respect for others’ risk attitudes does not support the kind of conclusions outlined above, and may even be irrelevant when thinking about the impact of present actions on the long-run future. The problem is that there is no known principle of interpersonal aggregation suitable to variable population contexts that is consistent with a plausible ideal of respect for others’ risk attitudes in fixed population cases.

I begin in section 2 by outlining the theory of risk-weighted expected utility. In section 3, I then set out Buchak’s view that we should default to a risk avoidant risk attitude when making choices on behalf of others, and I outline claims made by Buchak and Pettigrew about the implications of this principle for actions whose most important effects concern future people. In section 4, I emphasize an important choice that we face when we aim to respect others’ risk attitudes when choosing on behalf of a group of persons: namely, whether to aggregate first across persons and then across outcomes, or vice versa. In section 5, I note that there is good reason to think that respect for others’ risk attitudes requires that we adopt the latter approach, but that this approach is incompatible with consequentialism. In section 6, I argue that this approach also threatens to break down in variable population cases of the kind we inevitably confront in making decisions about the long-run future. The same is not true of a procedure that aggregates across persons within outcomes and then across outcomes, but this procedure cannot be justified by appeal to a plausible ideal of respect for others’ risk attitudes. Section 7 provides a summary and conclusion.

Read the rest of the paper


Comments

FWIW, Richard Pettigrew has written a condensed version of their paper on the EA Forum.
