*In Defence of Fanaticism* is a Global Priorities Institute Working Paper by Hayden Wilkinson. This post is part of my **sequence** of GPI Working Paper summaries.

Hilary Greaves and William MacAskill think objections to fanaticism are among the strongest counterarguments to strong longtermism. Such objections also underpin some of the strongest counterarguments to expected value theory. Thus, contemplating fanaticism is critical for comparing neartermist and longtermist causes. One of Greaves and MacAskill’s responses to this counterargument cites Hayden Wilkinson’s *In Defence of Fanaticism*, suggesting perhaps we should be fanatical on balance.

Here I’ve done my best to summarize Wilkinson’s argument, making it more easily accessible while sacrificing as little argumentative strength as possible.

# Introduction

### Dyson’s Wager (modified for brevity)

Say you have $2,000 and must choose to donate it to either a charity that will certainly prevent one death from malaria or a charity that will research ‘positronium,’ which has a nonzero probability of bringing astronomically many blissful lives into existence in the far future. Expected value theory suggests you should give the $2,000 to the speculative positronium research. That conclusion is *fanatical*.
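To make the expected-value comparison concrete, here is a minimal sketch with made-up numbers (the probability and payoff below are my own illustrative choices, not figures from the paper):

```python
# Illustrative figures only (not from Wilkinson's paper): the malaria
# charity saves 1 life with certainty; positronium research yields
# 1e20 blissful lives with probability 1e-10, and nothing otherwise.
ev_safe = 1.0 * 1.0        # certain payoff: 1 life
ev_risky = 1e-10 * 1e20    # = 1e10 expected lives

assert ev_risky > ev_safe  # expected value theory favours the long shot
```

Because the payoff can always be chosen large enough to swamp any probability, the comparison comes out fanatical however small the probability is.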

### Fanaticism Definition

*Fanaticism*: For any tiny (finite) probability *∊* > 0 and for any finite value *v*, there is some finite *V* large enough that *L*_{risky} is better than *L*_{safe}.

- *L*_{risky}: value *V* with probability *∊* > 0; value 0 otherwise
- *L*_{safe}: value *v* with probability 1

Like positronium research, *L*_{risky} offers a slim probability (*∊*) of astronomical value (*V*), while, like the malaria charity, *L*_{safe} offers a modest value (*v*) with certainty.

Previous justifications for fanatical verdicts relied on expected value theory. However, Wilkinson thinks there is good reason to accept fanaticism, even if we reject expected value theory. Hence, Wilkinson argues that denying fanaticism, as defined above, has implausible consequences—without relying on expected value theory. Wilkinson assumes totalism: we prefer outcomes with a higher total value (as opposed to, for example, average value). The Egyptology and Indology Objections (discussed later) rely on totalism.

# Intuitions against fanaticism

### Fanaticism is so counterintuitive it must be false.

It seems highly implausible that one should give up a guaranteed good payoff for a tiny chance of something better, no matter how tiny the chance.

However, intuitions about probabilities are often misguided and fall prey to various fallacies^{[1]}. Additionally, intuitive decision-making often ignores probabilities^{[2]}, radically over- or under-estimates probabilities^{[3]}, and treats low probabilities as having probability 0 (Wilkinson includes multiple examples of this in different contexts^{[4]}). For instance, one study found jurors are just as likely to convict a defendant based on fingerprint evidence if that evidence has a 1 in 100 probability of being a false positive as if the probability were 1 in 1,000 or even 1 in 1 million^{[5]}.

Thus, intuitions about low probabilities lead us to foolishly ignore probabilities roughly 1% or lower. Our intuitive judgments against fanaticism may be similarly foolish, which warrants at least considering the case in favor of it.

# A continuum argument

Consider the following two lotteries:

- *L*_{0}: value 1 (e.g., one life is saved) with probability 1 (certainty)
- *L*_{1}: value 10^{10} (e.g., vastly more lives are saved) with probability 0.999999 (near-certainty); value 0 otherwise

Intuitively, *L*_{1} seems better. But now consider *L*_{2}, which has a slightly lower probability of success but, if successful, saves many more lives.

- *L*_{2}: some vastly greater value with probability 0.999999^{2}; value 0 otherwise

This seems better than *L*_{1}. We could continue with *L*_{3}, *L*_{4}, and so on until some *L*_{n} such that 0.999999^{n} is less than *∊*, for any arbitrarily small *∊*.

Intuition suggests that *vastly* increasing the payoff can compensate for *slightly* decreasing the probability, meaning each lottery in this sequence is better than the last, making the final lottery better than the first. But, the final lottery’s probability of any positive payoff is less than *∊*. So we have Fanaticism.
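To see how long the chain must be before its probability drops below an arbitrarily small *∊*, here is a quick check, assuming (as in the lotteries above) that each step multiplies the success probability by 0.999999:

```python
import math

# Each step in the sequence L_1, L_2, ... multiplies the probability of
# a positive payoff by 0.999999. Find the first n at which the
# probability 0.999999**n drops below a chosen epsilon.
epsilon = 1e-6  # an arbitrary "tiny" probability
n = math.ceil(math.log(epsilon) / math.log(0.999999))

assert 0.999999 ** n < epsilon         # L_n's probability is below epsilon
assert 0.999999 ** (n - 1) >= epsilon  # and n is the first such step
print(n)  # roughly 13.8 million slight reductions
```

The point is not the number of steps but that some finite *n* always exists, so the sequence really does terminate in a fanatical lottery.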

### Transitivity and Minimal Tradeoffs Definitions

This continuum argument rests on two intuitive principles:

**Transitivity: If *L*_{a} ≥ *L*_{b} and *L*_{b} ≥ *L*_{c}, then *L*_{a} ≥ *L*_{c}.**

- That is, if a lottery (*L*_{a}) is at least as good as a second lottery (*L*_{b}), and the second lottery is at least as good as a third lottery (*L*_{c}), then the first lottery is at least as good as the third.

**Minimal Tradeoffs: We can make tradeoffs between probability and value.**

- For instance, we can always compensate for a slight decrease in the probability of success with a vastly greater payoff. *(This is highly simplified and deformalized, so if you disagree with this principle or want more detail, please see the section on “Minimal Tradeoffs” on page 13.)*

Given the continuum argument, to reject fanaticism, you must reject one of these two principles.

# A dilemma for the unfanatical

To reject fanaticism, you must also reject *Scale Independence* (as defined below) or allow your lottery comparisons to be absurdly sensitive to tiny changes.

### Scale Independence Definition

*Scale Independence*: For any lotteries *L*_{a} and *L*_{b}, if *L*_{a} ≥ *L*_{b}, then *k*·*L*_{a} ≥ *k*·*L*_{b} for any positive, real *k*.

- That is, if *L*_{a} is at least as good as *L*_{b}, then after multiplying the value of both by the same factor (*k*), *L*_{a} is still at least as good as *L*_{b}.

Scale Independence seems highly plausible. After all, if you’re multiplying the values of both lotteries by *k*, why should their value relative to each other change?

For fanaticism to be false, there must be some probability *∊* > 0 and value *v* that make *L*_{risky} no better than *L*_{safe}, no matter how big *V* is.

- *L*_{risky}: value *V* with probability *∊*; 0 otherwise
- *L*_{safe}: value *v* with probability 1

Thus, to reject fanaticism, you must alter your comparisons of lotteries based on their scale—i.e., discount larger values (*V*) beyond the extent that their probability (*∊*) is lower^{[6]}. This violates Scale Independence.
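One standard way to reject fanaticism is a bounded utility function, and bounded utility can make lottery rankings flip when both payoffs are rescaled. The function and numbers below are my own illustrative choices, not the paper's:

```python
import math

def u(x):
    # A bounded, concave utility function (an illustrative choice,
    # not one endorsed by the paper): u(x) = 1 - exp(-x).
    return 1.0 - math.exp(-x)

def safe_beats_risky(safe_value, risky_value, p):
    # True if the certain payoff beats the risky lottery under u.
    return u(safe_value) > p * u(risky_value)

# At one scale the safe lottery wins...
assert safe_beats_risky(2.0, 3.0, 0.9)
# ...but after multiplying both payoffs by k = 0.1, the risky one wins,
# violating Scale Independence.
assert not safe_beats_risky(0.2, 0.3, 0.9)
```

At small scales the bounded function is nearly linear, so expected value dominates; at large scales the bound kicks in and penalizes the big payoff, which is exactly the scale-dependent discounting described above.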

### Absurd sensitivity to tiny changes

You can avoid this violation by asserting that a guaranteed value (*v*), no matter how small, is always better than a greater value (*V*) with probability *∊*. However, that assertion means there must be a successive pair of probabilities (*p*_{i} and *p*_{i+1}) between 1 and *∊* such that no value at *p*_{i} (no matter how large) is better than any value at *p*_{i+1} (no matter how small). If there were no such pair, we could create a sequence similar to that in the continuum argument.

Since *p*_{i} and *p*_{i+1} can be arbitrarily close together, there could be astronomical value at *p*_{i} and minuscule value at *p*_{i+1}, and the latter would still be better. Thus, rejecting fanaticism, our evaluations of lotteries become absurdly sensitive to tiny changes in probability, which is both intuitively implausible and impractical for decision-making.

# Egyptology and Indology

Wilkinson presents two objections in this section. As presented in his paper, these objections rely on totalism (that we prefer outcomes with a higher total value).

## The Egyptology Objection

The Egyptology Objection is that denying fanaticism makes your moral decisions depend on events that aren't altered by your choice, including those in distant galaxies or ancient Egypt.

### Background Independence Definition

*Background Independence*: For any lotteries *L*_{a} and *L*_{b}, and any outcome *Ơ*, if *L*_{a} ≥ *L*_{b}, then *L*_{a} + *Ơ* ≥ *L*_{b} + *Ơ*.

- That is, if *L*_{a} is at least as good as *L*_{b}, then after adding the constant value of an outcome (*Ơ*) to both lotteries, *L*_{a} is still at least as good as *L*_{b}.

Some rejections of fanaticism^{[7]} violate Background Independence and thus fall prey to the Egyptology Objection.

For instance, say *Ơ* is an unaffected “background outcome” that occurred in ancient Egypt. If we violate Background Independence, our choice between *L*_{a} and *L*_{b} might change based on *Ơ*. Thus, we fall prey to the Egyptology Objection if we violate Background Independence.
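As a sketch of how such a violation plays out (the bounded utility function and the numbers are my own illustration, not the paper's): with bounded utility, a large unaffected background value can flip which lottery is ranked higher:

```python
def u(x):
    # A bounded utility function (an illustrative choice, not the
    # paper's): u(x) = x / (1 + x), which approaches 1 for large x.
    return x / (1.0 + x)

def eu_safe(s, c):
    # Certain payoff s on top of unaffected background value c.
    return u(s + c)

def eu_risky(r, p, c):
    # Payoff r with probability p (else nothing), on top of background c.
    return p * u(r + c) + (1 - p) * u(c)

s, r, p = 1.0, 100.0, 0.02
# With no background value, the safe lottery looks better...
assert eu_safe(s, 0.0) > eu_risky(r, p, 0.0)
# ...but against a large unaffected background (ancient Egypt going
# well), the ranking flips — violating Background Independence.
assert eu_safe(s, 1000.0) < eu_risky(r, p, 1000.0)
```

The choice between the two lotteries here depends entirely on the size of *c*, a value nothing about the choice can affect.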

### The Less Severe Egyptology Objection

But even rejections of fanaticism that don’t violate Background Independence give rise to a less severe Egyptology Objection. Wilkinson uses statistical distributions^{[8]} to show how, if you are *uncertain* about an event in ancient Egypt (*B*), rejecting fanaticism can lead to situations where *L*_{risky} + *B* is considered better^{[9]} than *L*_{safe} + *B*, even though *L*_{risky} is not better than *L*_{safe}, again making our moral decisions depend on unaffected events, such as those in ancient Egypt.

## The Indology Objection

Take the same *L*_{risky} + *B* and *L*_{safe} + *B* from above, where the former is considered better^{[9]}, yet *L*_{risky} is not better than *L*_{safe}. (For fanaticism to be false, such lotteries must exist.) But let’s change *B* to represent a *very* uncertain value of an event in the ancient Indus Valley.

You could research Indology for years and pin down *B*’s real value as *b*, with certainty. If so, instead of choosing between *L*_{risky} + *B* and *L*_{safe} + *B*, you’d choose between *L*_{risky} + *b* and *L*_{safe} + *b*.

If you accept Background Independence, you’d make the same choice between *L*_{risky} + *b* and *L*_{safe} + *b* as you’d make between *L*_{risky} and *L*_{safe}, so no matter what *b* you find, you’d make the same decision.

However, even though *L*_{risky} is not better than *L*_{safe}, you risk making the wrong decision if you choose using *L*_{risky} + *B* and *L*_{safe} + *B*, because, if you reject fanaticism, the uncertainty makes the former considered better^{[9]}. Thus, you should perhaps spend years finding *b*—even though, if you accept Background Independence, you will make the same decision between *L*_{risky} + *b* and *L*_{safe} + *b* as you’d make between *L*_{risky} and *L*_{safe}, no matter the value of *b*! This seems more absurd than accepting fanaticism.

# Conclusion

To recap, to deny Fanaticism…

- We must deny either Transitivity or Minimal Tradeoffs and accept the counterintuitive verdicts that follow.
- We must either violate Scale Independence or become absurdly sensitive to tiny differences in probability and value.
- We must accept a less severe version of the Egyptology Objection: in some cases, morally correct judgments depend on our beliefs about far-off, unaffected events, such as those in ancient Egypt.
- We must either accept the severe version of the Egyptology Objection by denying Background Independence, or face the Indology Objection: we sometimes ought to make decisions that we know we would reject if we learned more, no matter what we might learn.

Hence, rejecting fanaticism, as intuitive as it may initially feel, has highly unintuitive ramifications. The cure is worse than the disease.

We should accept that it is better to produce some tiny probability of infinite moral gain (or arbitrarily high gain), no matter how tiny the probability, than it is to produce some modest finite gain with certainty.

Accepting fanaticism also removes some of the counterarguments to expected value theory and its implications (which, for example, arguably includes strong longtermism).

[1] Wilkinson lists "the Conjunction Fallacy (Tversky & Kahneman 1983), the Gambler’s Fallacy (Chen et al. 2016), the Hot Hand Fallacy (Gilovich et al. 1985), and the Base Rate Fallacy (Kahneman & Tversky 1982)."

[2, 3] "We intuitively overestimate some probabilities due to availability bias (Tversky & Kahneman 1974), and underestimate others out of indefensible optimism (Hanoch et al. 2019)."

[4] "When presented with a medical operation that posed a 1% chance of permanent harm, many respondents considered it no worse than an operation with no risk at all (Gurmankin & Baron 2005). And in yet another context, subjects were unwilling to pay any money at all to insure against a 1% chance of catastrophic loss (McClelland et al. 1993)."

[5, 6] See the bottom of page 16 and page 17 for a much more formal line of reasoning as to why this is true.

[7] "One such proposal is expected utility theory with a utility function that is concave and/or bounded (e.g., Arrow 1971). As Beckstead and Thomas (n.d.: 15–16) point out, this results in comparisons of lotteries being strangely dependent on events that are unaltered in every outcome and indeed some irrelevant to the comparison."

[8] The math and full line of reasoning are on pages 23 to 27.

[9] I mean 'considered better' by Stochastic Dominance, which "says that if two lotteries have exactly the same probabilities of exactly the same (or equally good) outcomes, then they are equally good; and if you improve an outcome in either lottery, keeping the probabilities the same, then you improve that lottery. And that’s hard to deny!" (See page 10 for the formal definition.)

Thanks for this interesting summary! These are clearly really powerful arguments for biting the bullet and accepting fanaticism. But does this mean that Hayden Wilkinson would literally hand over their wallet to a Pascal’s mugger, if someone attempted to Pascal-mug them? Because Pascal’s mugger doesn’t have to be a thought experiment. It’s a script you could literally say to someone in real life, and I’m assuming that if I tried it on a philosopher advocating for fanaticism, then I wouldn’t actually get their wallet. Why is that? What’s the argument that lets you not follow through on that in practice?

Thanks for the helpful summary. I feel it’s worth pointing out that these arguments (which seem strong!) defend only fanaticism per se, but not a stronger claim that is used or assumed when people argue for longtermism: that we ought to follow Expected Value Maximization. It’s a stronger ask in the sense that we’re asked to take bets not of arbitrarily high payoffs, which can be 'gamed' to be high enough to be worth taking, but 'only' of some specific astronomically high payoffs, which are derived from (as it were) empirically determined information, facts about the universe that ultimately give the payoffs upper bounds. That said, it’s helpful to have these arguments to show that 'longtermism depends on being fanatical' is not a knock-down argument against longtermism. Here’s one example of that link being made: "...the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism" (Tarsney, 2019).

I’ll admit this was a lot to take in, and intuitively I’m inclined to reject fanaticism simply because it seems more reasonable, intuitively, to believe that high-probability interventions are always better than low-probability ones. This position, for me at least, is rooted in normalcy bias, and if there’s one thing Effective Altruism has taught me, it’s that normalcy bias can be a formidable obstacle to doing good.