TL;DR: Sometimes paired philosophical mistakes (mostly) cancel each other out, forming a protective equilibrium. A little knowledge is a dangerous thing: you don’t want people to end up in the situation of knowing enough to see through the illusory guardrails, but not enough to navigate successfully without the illusion. I suggest six such pairs, where it seems important to correct both mistakes simultaneously.
Introduction
I’m struck by how often two theoretical mistakes manage to (mostly) cancel each other out. For example, I think that common-sense ethical norms tend, in practice, to do a pretty good job (albeit with significant room for improvement), while resting upon significant theoretical falsehoods. These falsehoods may be part of a “local maximum”: if you corrected them, without making further corrections elsewhere, you could well end up with morally worse beliefs and practices.
This observation forms the kernel of truth in the claim that utilitarianism is self-effacing. Utilitarianism is not strictly self-effacing: I still expect that the global maximum can be achieved by having entirely true moral beliefs (or a close enough approximation).[1] But most people are stubbornly irrational in various ways, which may make it better for them to have false beliefs of a sort that limit the damage done by their other irrationality. These paired mistakes then constitute a protective equilibrium that stops such people from veering off into severe practical error (such as naive utilitarianism).
It’s important to note that these paired mistakes are not the only protective equilibria available. The corresponding paired truths also work! But a little knowledge is a dangerous thing: you don’t want people to end up in the situation of knowing enough to see through the illusory guardrails, but not enough to navigate successfully without the illusion.
In this post, I’ll suggest a few examples of such “paired mistakes”:
- Using “collectivist” reasoning as a fudge to compensate for irrational views about individual efficacy.
- Using near-termism as a fudge to compensate for irrational cluelessness about the long term.
- Ignoring small probabilities as a fudge against Pascalian gullibility.
- Using deontology as a fudge to compensate for irrational naive instrumentalism.
- Tabooing inegalitarian empirical beliefs as a fudge for irrational (and unethical) essentializing of social groups.
- Viewing all procreative decisions as equally good, as a fudge against unethical coercive interference.[2]
Further suggestions welcome!
1. Inefficacy and Anti-individualism
Many people have false views about individual efficacy and expected value (see my Five Fallacies of Collective Harm) that lead them to underestimate the strength of our individualistic moral reasons to contribute to collective goods (like voting for the better candidate) and to reduce our contributions to collective bads (like pollution and environmental damage—or voting for the worse candidate, for that matter).
If you make this mistake, it would be good to also make the paired mistake of believing that you have collectivistic moral reasons based on group contributions. There are no (non-negligible) such reasons, as I prove in ‘Valuing Unnecessary Causal Contributions’. But the false belief that there are such reasons can help motivate you to do as you ought, when you’re too confused about inefficacy to be able to get the practical verdicts right for the right reasons.
Conversely: if you correctly understand why collectivist reasons are such a silly idea, it’s very important that you also appreciate why there often are sufficient individualistic moral reasons to contribute to good things even when the chance of your act making a difference is very small. (Remember that All Probabilities Matter!)
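To make the structure of that individualist reasoning concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it is stipulated purely for illustration (the decisiveness probability, per-person benefit, and population size aren’t estimates of anything); the point is just that when the stakes scale with the number of people affected, even a tiny chance of making a difference can carry real expected value.

```python
# Rough expected-value sketch for a single vote.
# Every number below is stipulated purely for illustration, not an empirical estimate.

p_decisive = 1e-7            # assumed chance that your vote swings the election
benefit_per_person = 100     # assumed per-person value of the better outcome ($-equivalent)
population = 100_000_000     # assumed number of people affected by the result

expected_social_value = p_decisive * benefit_per_person * population
print(f"Expected social value of one vote: ${expected_social_value:,.0f}")
# -> Expected social value of one vote: $1,000
```

The particular figures don’t matter; what matters is that the probability of being decisive and the size of the stakes tend to move in opposite directions, so their product needn’t be negligible.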
2. Cluelessness and Anti-longtermism
Some people falsely believe that we cannot justifiably regard anything (even preventing nuclear war!) as having long-term positive expected value. I’ve previously argued that such cluelessness is less than perfectly rational, though it may itself be a useful protection against some forms of “naive instrumentalist” irrationality (see #4 below).
Still, if you make this mistake, it would be good to pair it with anti-longtermism, so you avoid decision paralysis and continue to do some good things—like trying to prevent nuclear war—albeit in partial ignorance of just how good these things are.
3. Pascalian Gullibility and Probability Neglect
Another form of misguided prior involves “Pascalian gullibility”: giving greater-than-infinitesimal credence to claims that unbounded value depends upon your satisfying another’s whims (e.g. their demand for your wallet), thereby yielding a high “expected value” for blind compliance.
If you are disposed to make this mistake, it would be good to pair it with another—namely, the disposition to simply ignore any sufficiently small probabilities, effectively rounding the Pascalian mugger’s threat down to the “zero” credence it ought to have received all along. But this latter disposition is itself a kind of mistake (at least when dealing with better-grounded small probabilities), as explained in my recent post: All Probabilities Matter. So it might be especially important to correct this pair.
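For a rough numerical contrast between the two dispositions, here’s a small sketch with stipulated numbers (the payoffs, credences, and probabilities below aren’t serious estimates of anything; they just display the structure):

```python
# Pascalian gullibility vs. blanket probability neglect, with stipulated numbers.

# Case 1: the mugger claims an astronomical payoff for handing over your wallet.
claimed_payoff = 1e30
gullible_credence = 1e-9                      # small, but insensitive to the size of the claim
print(gullible_credence * claimed_payoff)     # 1e+21 -- "pay the mugger!"

# If your credence shrinks at least as fast as the claimed payoff grows,
# the expected value of complying stays negligible:
calibrated_credence = 1e-3 / claimed_payoff
print(calibrated_credence * claimed_payoff)   # ~0.001 -- keep your wallet

# Case 2: a well-grounded long shot (say, a single vote, or a speculative safety project).
p_success = 1e-7
value_if_success = 1e9
print(p_success * value_if_success)           # ~100 -- real value lost if rounded down to zero
```

The asymmetry: rounding a probability down to zero is only harmless when it was never well-grounded in the first place.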
4. Naive Instrumentalism and Anti-consequentialism
Many people (from academic censors to those who think that utilitarianism would actually justify Sam Bankman-Fried’s crimes)[3] seem drawn to naive instrumentalism: the assumption that one’s moral goals are apt to be better achieved via Machiavellian means than by pursuing them with honesty and integrity, constraining one’s behaviour by tried-and-tested norms and virtues. Like most (all?) historical utilitarians, I reject naive instrumentalism as hubristic and incompatible with all we know of human fallibility and biased cognition. (See here for more on what sort of decision procedure I take to be rationally superior.)
Still, if you are—abhorrently—a naive instrumentalist, you’d best pair it with non-consequentialism to at least limit the damage your irrationality might otherwise cause!
5. Social Essentialism and Tabooed Empirical Inquiry
Most people are terrible at statistical thinking. As Sarah-Jane Leslie explains in ‘The Original Sin of Cognition: Fear, Prejudice, and Generalization’, people are natural “essentialists”, prone to generalize “striking [i.e. threatening] properties” to entire groups based on even a tiny proportion of actual threats. (She compares the generics “Muslims are terrorists” with “mosquitos carry the West Nile virus”.)
If you’re bad at thinking about statistical differences, and prone to draw unwarranted (and harmful) inferences about individuals on this basis, then it might be best for you to also believe that any sort of inquiry into group differences is taboo and morally suspect. You should just take it on faith that all groups are inherently equal, if anything more nuanced would corrupt you.[4]
But of course there’s no reason that any empirical possibility should prove morally corrupting to a clear thinker (rare though the latter may be). As I noted previously: “Just as opposition to homophobia shouldn’t be contingent on the (rhetorically useful but morally irrelevant) empirical claim that sexual orientation is innate, so our opposition to racial discrimination shouldn’t be contingent on empirical assumptions about genetics, IQ, or anything else.”[5] Group-level statistics just aren’t that relevant to how we should treat individuals, about whom we can easily obtain much more reliable evidence by directly assessing them on their own merits.
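Two quick numerical illustrations of the statistical points above, with made-up numbers (the base rate, prior, and assessment accuracies are all stipulated for the example): generics about “striking” properties get accepted at tiny base rates, and even a modest amount of direct evidence about an individual swamps a group-level prior.

```python
# Both calculations use stipulated numbers, purely to illustrate the structure.

# (i) "Striking" generics are accepted at tiny base rates (Leslie's mosquito example):
#     suppose only 1% of mosquitos carry West Nile virus, yet the generic
#     "mosquitos carry West Nile virus" strikes most people as true.
base_rate = 0.01
print(f"Chance a given mosquito carries the virus: {base_rate:.0%}")   # 1%

# (ii) Direct individual evidence swamps group-level statistics.
#      Suppose some trait has a 30% prior in a group, but a reasonably reliable
#      individual assessment comes back negative.
prior = 0.30
p_neg_given_trait = 0.10       # the assessment misses the trait 10% of the time
p_neg_given_no_trait = 0.90    # and correctly clears 90% of those without it

posterior = (prior * p_neg_given_trait) / (
    prior * p_neg_given_trait + (1 - prior) * p_neg_given_no_trait
)
print(f"Posterior after the individual assessment: {posterior:.0%}")   # ~5%
```

However the group numbers shake out, a single direct look at the individual does most of the evidential work.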
6. Illiberalism and Procreative Neutrality
Naive instrumentalists assume that illiberal coercion is often the best way to achieve moral goals. As a result, they imagine that pro-natalist longtermism must be a threat to reproductive rights (and to procreative liberty more generally).
I think this is silly because illiberalism is so obviously suboptimal. There’s just no excuse to resort to coercion when incentives work better (by allowing individuals to take distinctive features of their situation into account).
But for all the illiberal naive instrumentalists out there, perhaps it is best if they also mistakenly believe in procreative neutrality—i.e., the claim that there are no reasons of beneficence to bring more good lives into existence.
Should we lie?
Probably depends on your audience! I’m certainly not going to, because I’m committed to intellectual honesty, and I trust that my readers aren’t stupid. Plus, it’s dangerous for the lies to be too widespread: plenty of smart people are going to recognize the in-principle shortcomings of collectivism, neartermism, probability neglect, deontology, moralizing empirical inquiry, and procreative neutrality. We shouldn’t want such people to think that this commits them in practice to free riding, decision paralysis, Pascalian gullibility, naive instrumentalism, social essentialism, or procreative illiberalism. That would be both harmful and illogical.
So I think it’s worth making clear (i) that these pairs are (plausibly) mistakes, but (ii) that it could be even worse to correct only one member of a pair, since together they form a protective equilibrium. To avoid bad outcomes, you should try to move straight from one protective equilibrium to another, avoiding the dangers of just “a little knowledge”.
We should typically expect the accurate protective equilibrium to be practically superior to the thoroughly false one, since accurate beliefs do tend to be useful (with rare exceptions that one would need to make a case for). But if you don’t think you can manage to make it all the way to the correct pairing, maybe best to stick with the old fudge for now!
[1] E.g., although I’m (like everyone) probably wrong about some things, I’m confident enough about the broad contours of my moral theory. And I’m not aware of any reason to think that any alternative broad moral outlook would be more beneficial in practice than the sort of view I defend. The only real danger I see is if people only go part way towards my view, miss out on the protective equilibrium that the full view offers, and instead end up in a “local minimum” for practicality. That would be bad. And maybe it would be difficult for some to make it all the way to my view, in which case it could be bad for them to attempt it. But that’s very different from saying that the view itself is bad.
[2] I added this one after initial posting, thanks to Dan G.’s helpful comment on the public Facebook thread suggesting a general schema for paired mistakes involving (i) openness to wrongful coercion and (ii) mistakenly judging all options to be on a par.
[3] I think it’s interesting, and probably not a coincidence, that people with naive instrumentalist empirical beliefs are overwhelmingly not consequentialists. (A possible explanation: a commitment to actually do what’s expectably best creates stronger incentives to think carefully and actually get the answer right, compared to critics whose main motivation may just be to make the view in question look bad. Alternatively, the difference may partly lie in selection effects: consequentialism may look more plausible to those who share my empirical belief that it typically prohibits intuitively “vicious” actions. Though it’s striking that the censors actually endorse their short-sighted censorship. I’m not really sure how to explain why their empirical beliefs differ so systematically from those of free-speech-loving consequentialists.)
[4] I should stress that the “mistake” I’m attributing here is the taboo itself, not the resulting egalitarian beliefs. Due to the taboo, I have no idea what the first-order truth of the matter is. Maybe progressive dogma is 100% correct; it’s just that, for standard Millian reasons, we cannot really trust this in the absence of free and open inquiry into the matter. Still, if you would be corrupted by any result other than progressive orthodoxy, then it would also seem best to just take that on faith and not inquire any further. But the central error here, I want to suggest, is the susceptibility to corruption in the first place. That just seems really stupid.
[5] I always worry about people who think there’s such a thing as inherently “racist (empirical) beliefs”. Like, suppose we’re unpleasantly surprised, and the empirical claims in question turn out to be true. (Philosophers have imagined stranger things.) Are you suddenly going to turn into a racist? I’d hope not! But then you shouldn’t think that any mere empirical contingency of this sort entails racism. Obviously we should be morally decent, and treat individuals as individuals, no matter what turns out to be the case as far as mere group statistics are concerned. The latter simply don’t matter to how we ought to treat people, and everyone ought to appreciate this.
Of course, conventionally “racist” beliefs may be (defeasible) evidence of racism, in the sense that the belief in question isn’t evidentially supported but nonetheless appeals to racists. After all, if the only basis for the belief is wishful thinking, and it isn’t something one would wish for unless one were racist, then holding the belief is evidence of racism. But this reasoning doesn’t apply to more agnostic attitudes, because taboos prevent us from knowing what actually is evidentially supported: we know that people would say the same thing, for well-intentioned ideological reasons, no matter what the truth of the matter was. (Naive instrumentalism strikes again.)
Comments

If paired mistakes do tend to cancel each other out like this, one might wonder why that happens.
In these cases, there seem to be three questions in play; e.g.:
1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?
You claim that people get the answers to the first two questions wrong, but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.
It's possible that in the cases you discuss, people tend to have the firmest intuitions about question 3) ("the conclusion"). E.g. they are more convinced that we ought not to be Machiavellian than that consequentialism is correct/incorrect or that consequentialism entails/does not entail Machiavellianism.
If that's the case, then it would be unsurprising that the mistakes cancel each other out. E.g. someone who came to believe that consequentialism entails Machiavellianism would be inclined to reject consequentialism, since they would otherwise need to accept that we ought to be Machiavellian (which, by hypothesis, they don't).
(Effectively, I'm saying that people reason holistically, reflective-equilibrium-style, and not just from premises to conclusions.)
A corollary is that it may be less common than one might think for "a little knowledge" to be dangerous in this way. Suppose that someone initially believes that consequentialism is wrong (Question 1), that consequentialism entails Machiavellianism (Question 2), and that we ought not to be Machiavellian (Question 3). They then change their view on Question 1, adopting consequentialism. That creates an inconsistency among their three beliefs. But if they have firmer beliefs about Question 3 (the conclusion) than about Question 2 (the other premise), they'll resolve the inconsistency by rejecting the other incorrect premise, not by endorsing the dangerous conclusion that we ought to be Machiavellian.
My argument is of course schematic, and how plausible it is will no doubt vary depending on which of the six cases you discuss we consider. I do think that "a little knowledge" is sometimes dangerous in the way you suggest. Nevertheless, I think the mechanism I discuss is worth remembering.
In general, I think a little knowledge is usually beneficial, meaning our prior that it's harmful in an individual case should be reasonably low. However, priors can of course be overturned by evidence in specific cases.
Thanks, yeah, I think I agree with all of that!
This is a nice read. However, in your conclusion you ask the question "Should we lie?" While that may seem self-explanatory and intriguing, where is the place of diplomacy in this regard? As you've said, your type of audience matters, and others apart from your direct audience might (or will) see through the lies. So here lies the question: can diplomacy and frankness go pari passu?
Amazing read, Richard!