The following critique is a lightly modified version of the one found here. It builds on the recent post A Case Against Strong Longtermism by Vaden Masrani, but can be read and understood independently. If you’re sick of our rants on the forum, you can also listen to a podcast episode in which Vaden and I cover similar territory - albeit more quickly and with a large helping of cheekiness. We promise to move on to other topics after this (although Vaden is now threatening a response to his post - God help us. I guess that’s why downvoting exists). Much love to the community and all its members.
Thanks to Daniel Hageman, Vaden Masrani, and Mauricio Baker for their continual feedback and criticism as this piece evolved, and to Luke Freeman, Mira Korb, Isis Kearney, Alex HT, Max Heitmann, Gavin Acquroff, and Maximilian Negele for their comments and suggestions on earlier drafts. All errors, misrepresentations, and harsh words are my own.
The first paragraph of the final section is stolen from an upcoming piece I wrote for GWWC. Whoops.
TL;DR: Focusing on the long-term destroys the means by which we make progress — moral and otherwise.
The new moral philosophy of longtermism has staggering implications if widely adopted. In The Case for Strong Longtermism, Hilary Greaves and Will MacAskill write
The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers. (pg. 1; italics mine)
The idea energizing this philosophy is that most of our “moral value” lies thousands, millions, or even billions of years from now, because we can expect many more humans and animals to exist in the future than right now. In the words of Greaves and MacAskill: “If humanity’s saga were a novel we would still be on the very first page.” (pg. 1)
Longtermism is causing many to question why we should be concerned with the near-term impact of our actions at all. Indeed, if you are convinced by this calculus, then all current injustice, death, and suffering are little more than rounding errors in our moral calculations. Why care about parasitic worms in Africa if we can secure utopia for future generations?
EA has yet to take irreversible action based on these ideas, but the philosophy is gaining traction and therefore deserves commensurate criticism. Millions of dollars have already been donated to the cause of improving the long-term future: at the time of writing the Long-Term Future Fund has received just under $4.5 million USD in total, and the Open Philanthropy Project has dedicated a focus area to this cause in the form of “risks from advanced artificial intelligence.” While many millions more are still funneled through GiveWell, The Life You Can Save, and Animal Charity Evaluators, should Greaves and MacAskill prove sufficiently persuasive, such “near-term” efforts could vanish: “If society came to adopt these views, much of what we would prioritise in the world today would change.” (pg. 3)
This post is a critique of longtermism as expounded in The Case for Strong Longtermism. Prior criticism of the idea has typically revolved around the intractability objection, which, while agreeing that the long-term future should dominate our moral concerns, argues that we can’t have any reliable effect on it. While correct, this objection lets longtermism off far too lightly: it does not criticize longtermism as a moral ideal, but treats it as something good yet unrealizable.
The recent essay by Vaden Masrani does attempt to refute the two premises on which strong longtermism is founded. It argues that (i) the mathematics involved in the expected value calculations over possible futures are fundamentally flawed — indeed, meaningless — and (ii) that we should be biased towards the present because it is the only thing we know how to reliably affect. My criticisms will build on these.
I will focus on two aspects of strong longtermism, henceforth simply longtermism. First, the underlying arguments inoculate themselves from criticism by using arbitrary assumptions on the number of future generations. Second, ignoring short-term effects destroys the means by which we make progress — moral, scientific, artistic, and otherwise. In other words, longtermism is a dangerous moral ideal because it robs us of the ability to correct our mistakes.
Since the critique may come across as somewhat harsh, it’s worth spending a moment to frame it.
Motivation
My assailment of longtermism comes from a place of deep sympathy with and general support of the ideals of effective altruism. The community has both generated and advocated many great ideas, including evaluating philanthropic efforts based on impact rather than emotional valence, acknowledging that “doing good” is a difficult resource-allocation problem, and advocating an ethical system grounded in impartiality across all sentient beings capable of suffering. Calling attention to farmed animal welfare, rigorously evaluating charities, and encouraging the privileged among us to donate our wealth, have all been hugely important initiatives. Throughout its existence, EA has rightly rejected two forms of authority which have traditionally dominated the philanthropic space: emotional and religious authority.
It has, however, succumbed to a third — mathematical authority. Firmly grounded in Bayesian epistemology, the community is losing its ability to step away from the numbers when appropriate, and has forgotten that its favourite tools — expected value calculations, Bayes’ theorem, and mathematical models — are precisely that: tools. They are not in and of themselves a window onto truth, and they are not always applicable. Rather than respect the limits of their scope, however, EA seems to be adopting the dogma captured by the charming epithet shut up and multiply.
EA is now at risk of adopting a bad idea; one that, if fully subscribed to, I fear will lead to severe and irreversible damage — not only to the movement, but to the many people and animals whose suffering would be willfully ignored. As will be elaborated on later, rejecting longtermism would not cause a substantial shift in current priorities; many of the prevailing causes would remain unaffected. If, however, longtermism is widely adopted and its logic taken seriously, many of EA’s current priorities would be replaced with vague and arbitrary interventions to improve the course of the long-term future.
Let’s begin by examining the kinds of reasoning used to defend the premises of longtermism.
Irrefutable Reasoning
“For the purposes of this article”, write Greaves and MacAskill,
we will generally make the quantitative assumption that there are, in expectation, at least 1 quadrillion (10^15) people to come — 100,000 times as many people in the future as are alive today. This we [sic] be true if, for example, we assign at least a 1% chance to civilization continuing until the Earth is no longer habitable, using an estimate of 1 billion years’ time for that event and assuming the same per-century population as today, of approximately 10 billion people per century. (pg. 5)
This paragraph illustrates one of the central pillars of longtermism. Without positing such large numbers of future people, the argument would not get off the ground. The assumptions, however, are tremendously easy to change on the fly, and this makes the argument dangerously impermeable to reason. Just as the astrologer promises us that “struggle is in our future” and can therefore never be refuted, so too can the longtermist simply claim that there are a staggering number of people in the future, thus rendering any counterargument moot.
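To see how sensitive the headline figure is to these choices, here is a minimal sketch of the arithmetic behind the quoted assumption. The function and parameter names are mine, not the paper's; only the round numbers (a 1% chance of survival, roughly 1 billion habitable years, about 10 billion people per century) come from the quoted passage.

```python
# Sketch of the arithmetic behind the quoted 10^15 figure, using the
# paper's round numbers. The parameter choices are the point at issue:
# nudge any of them and the "expected number of future people" moves
# by orders of magnitude.

def expected_future_people(p_survival, years_habitable, people_per_century):
    """Expected future population under the paper's simple model."""
    centuries = years_habitable / 100
    return p_survival * centuries * people_per_century

print(expected_future_people(0.01, 1e9, 1e10))   # 1e+15: the quoted quadrillion
print(expected_future_people(0.10, 1e9, 1e10))   # 1e+16, with an equally arbitrary 10% survival chance
print(expected_future_people(0.001, 1e7, 1e10))  # 1e+12, with equally arbitrary smaller inputs
```

Nothing anchors the survival probability or the habitable-lifetime figure: move either of them and the expected number of future people, and with it every downstream expected-value calculation, moves by the same factor.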
Such unfalsifiable claims lead to the following sorts of conclusions:
Suppose that $1bn of well-targeted grants could reduce the probability of existential catastrophe from artificial intelligence by 0.001%. . . . Then the expected good done by [someone] contributing $10,000 to AI [artificial intelligence] safety would be equivalent . . . to one hundred thousand lives saved. (pg. 14)
Of course, it is impossible to know whether $1bn of well-targeted grants could reduce the probability of existential risk, let alone by such a precise amount. The “probability” in this case thus refers to someone’s (entirely subjective) probability estimate — “credence” — a number with no basis in reality, derived from some ad hoc amalgamation of beliefs. Notice that if one shifted one’s credence from 0.001% to 0.00001%, donating to AI safety would still be more than twice as effective as donating to the Against Malaria Foundation (AMF) (using GiveWell’s 2020 estimates).
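To make the comparison explicit, here is a rough sketch of the expected-value arithmetic. The grant size, the donation, and the assumed future population come from the quoted passage; the AMF cost-per-life figure of roughly $4,500 is my approximation of GiveWell's 2020 estimate and should be treated as such.

```python
# Rough sketch of the expected-value comparison implied by the quoted passage.
FUTURE_LIVES = 1e15        # the paper's assumed expected future population
GRANT = 1e9                # the hypothetical $1bn of well-targeted grants
DONATION = 1e4             # the individual's $10,000 contribution
COST_PER_LIFE_AMF = 4500   # rough approximation of GiveWell's 2020 AMF estimate (USD)

def expected_lives_ai_safety(credence):
    """Expected lives saved by the $10,000, if $1bn reduces extinction risk by `credence`."""
    return credence * FUTURE_LIVES * (DONATION / GRANT)

print(expected_lives_ai_safety(0.001 / 100))    # 100,000: the paper's figure
print(expected_lives_ai_safety(0.00001 / 100))  # 1,000
print(DONATION / COST_PER_LIFE_AMF)             # roughly 2 lives via AMF
```

By these numbers, dropping the credence by a factor of one hundred still leaves the AI safety donation hundreds of times "better" than AMF in expectation; the conclusion is driven by the size of the assumed future, not by anything measured.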
A reasonable retort here is that all estimates in this space necessarily include a certain amount of uncertainty; that, for example, the difference between GiveWell’s estimates and those for AI risk is a matter of degree, not of kind. This is correct — the differences are a matter of degree. But each of those degrees introduces more subjectivity and arbitrariness into the equation, and our incredulity and skepticism should rise in equal measure.
GiveWell’s estimates use real, tangible, collected data. Other studies may of course conflict with their findings, in which case we’d have work to do. Indeed, such criticism would be useful, for it would force GiveWell to develop more robust estimates. Needless to say, this process is entirely different from assigning arbitrary numbers to events about which we are utterly ignorant. My credence could be that working on AI safety will reduce existential risk by 5% and yours could be 10^-19%, and there’s no way to discriminate between them. Appealing to the beliefs of experts in the field does not solve the problem. From which dataless, magical sources are their beliefs derived?
Moreover, should your credence in the effectiveness of AI safety interventions be 10^-19%, I can still make that intervention look arbitrarily good, simply by increasing the “expected number of humans” in the future. Indeed, in his book Superintelligence, Nick Bostrom has “estimated” that there could be 10^64 sentient beings in the future. By those lights, the expected number of lives saved, even with a credence of 10^-19%, is still positively astronomical.
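A one-line version of the same point, keeping the naive expected-value model used above:

```python
# Both figures come from the text above; the model is the same simplistic one.
credence = 1e-19 / 100   # a credence of 10^-19 percent, written as a probability
future_lives = 1e64      # the Superintelligence figure quoted above
print(credence * future_lives)   # 1e+43 expected lives "at stake"
```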
As alluded to above, the philosophy validating the reliance on subjective probability estimates is called Bayesian epistemology. It frames the search for knowledge in terms of beliefs (which we quantify with numbers, and must update in accordance with Bayes’ rule, else risk rationality-apostasy!). It has imported valid statistical methods used in economics and computer science, and erroneously applied them to epistemology, the study of knowledge creation. It is ill-defined, is based on confirmation as opposed to falsification, leads to paradoxes, and relies on probabilistic induction, which is provably false. In other words, it has been refuted, and yet somehow manages to stick around (ironically, it’s precisely this aspect of Bayesianism which is so dubious: its inability to reject any hypothesis).
Bayesian epistemology unhelpfully borrows standard mathematical notation. Thus, subjective credences tend to be compared side-by-side with statistics derived from actual data, and treated as if they were equivalent. But prophecies about when AGI will take over the world — even when cloaked in advanced mathematics — are of an entirely different nature than, say, impact evaluations from randomized controlled trials. They should not be treated as equivalent.
Once one adopts Bayesianism and loses track of the different origins of various predictions, the attempt to compare cause areas becomes a game of “who has the bigger number.” And longtermism will win this game. Every time. Its victory is unavoidable because it abolishes the means by which one can disagree with its conclusions: it can always simply use bigger numbers. But we must remind ourselves that the numbers used in longtermist calculations are not the same as those derived from actual data. We should remember that mathematics is not an oracle unto truth. It is a tool, and one that in this case is inappropriately used. There are insufficient constraints when reasoning based solely on beliefs and big numbers; such reasoning is not informative and is not in any way tethered to a real data set, or to reality. Just as we discard poor, unfalsifiable justifications in other areas, so too should we dispense with them in moral reasoning.
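A sketch of why the game is unwinnable for anyone arguing from measured, near-term effects: under the same naive expected-value framing, for any credence, however small, and any near-term benefit, however large, there is always an assumed future population big enough to flip the comparison. The function below is illustrative only; nothing like it appears in the paper.

```python
# Under naive expected-value reasoning, the speculative intervention "wins"
# whenever credence * assumed_future_population exceeds the near-term benefit.
# Solving for the population shows the escape hatch: the required number can
# always be posited, since nothing ties it to data.

def future_population_needed(credence, near_term_lives_saved):
    """Smallest assumed future population at which the speculative
    intervention beats the near-term one in expected lives saved."""
    return near_term_lives_saved / credence

# Even granting a credence of only 10^-19 percent that the intervention works,
# positing ~2 * 10^21 expected future people is enough to "beat" an
# intervention that reliably saves two lives today.
print(future_population_needed(1e-19 / 100, 2))   # 2e+21
```

Since the assumed population is exactly the kind of number that can be revised upwards at will, the comparison is settled before it begins.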
The Antithesis of Moral Progress
If you wanted to implement a belief structure which justified unimaginable horrors, what sort of views would it espouse? A good starting point would be to prevent our critical capacities from evaluating the consequences of our actions, most likely by appealing to some vague and distant glorious future lying in wait. And indeed, this tool has been used by many horrific ideologies in the past.
Definitely and beyond all doubt, our future or maximum program is to carry China forward to socialism and communism. Both the name of our Party and our Marxist world outlook unequivocally point to this supreme ideal of the future, a future of incomparable brightness and splendor.
- Mao Tse Tung, “On Coalition Government”. Selected Works, Vol. III, p. 282. (italics mine)
Of course, the parallel between longtermism and authoritarianism is a weak one, if only because longtermism has yet to be instantiated. I don’t doubt that longtermism is rooted in deep compassion for those deemed to be ignored by our current moral frameworks and political processes. Indeed, I know it is, because the EA community is filled with the most kind-hearted people I’ve ever met.
Inadvertently, however, longtermism is almost tailor-made to disable the mechanisms by which we make progress.
Progress entails solving problems and generating the knowledge to do so. Because humans are fallible and our ideas are prone to error, our solutions usually have unintended negative consequences. These, in turn, become new problems. We invent pain relief medications which facilitate an opioid epidemic. We create the internet which leads to social media addiction. We invent cars which lead to car accidents. This is not to say we would have been better off not solving problems (of course we wouldn’t), only that solutions beget new — typically less severe — problems. This is a good thing. It’s the sign of a dynamic, open society focused on implementing good ideas and correcting bad ones.
Moral progress is no different. Abstract reasoning from first principles can be useful, but it will only get you so far. No morality prior to the industrial revolution could have foreseen the need to introduce eight-hour workdays or labour laws. No one 1,000 years ago could have foreseen factory farming, child pornography spread via the internet, or climate change. As society changes, it is crucial that we maintain the ability to constantly adapt and evolve our ethics in order to handle new situations.
The moral philosophy espoused by EA should be one focused on highlighting problems and solving them. On being open to changing our ideas for the better. On correcting our errors.
Longtermism is precisely the opposite. By “ignoring the effects contained in the first 100 (or even 1000) years,” we ignore problems with the status quo, and hamstring our efforts to create solutions. If longtermism had been adopted 100 years ago, then problems like factory farming, HIV/AIDS, and measles would have been ignored. Greaves and MacAskill argue that we should have no moral discount factor, i.e., a “zero rate of pure time preference”. I agree — but this is beside the point. While time is morally irrelevant, it is relevant for solving problems. Longtermism asks us to ignore problems now, and focus on what we believe will be the biggest problems many generations from now. Abiding by this logic would result in the stagnation of knowledge creation and progress.
It is certainly possible to accuse me of taking the phrase “ignoring the effects” too literally. Perhaps longtermists wouldn’t actually ignore the present and its problems, but their concern for it would be merely instrumental. In other words, longtermists may choose to focus on current problems, but the reason to do so is out of concern for the future.
My response is that attention is zero-sum. We are either solving current pressing problems, or wildly conjecturing about what the world will look like in tens, hundreds, and thousands of years. If the focus is on current problems only, then what does the “longtermism” label mean? If, on the other hand, we’re not only focused on the present, then the critique holds to whatever extent we’re guessing about future problems and ignoring current ones. We cannot know what problems the future will hold, for they will depend on the solutions to our current problems which, by definition, have yet to be discovered. The best we can do is safeguard our ability to make progress and to correct our mistakes.
In sum, given the need for a constantly evolving ethics, one of our most important jobs is to ensure that we can continue criticizing and correcting prevailing moral views. Focusing on the long-term future, however, removes the means by which we can obtain feedback about our actions now — the only reliable way to improve our current moral theories. Moral principles, like all ideas, evolve over time according to the pressure exerted on them by criticism. The ability to criticize, then, is paramount to making progress. Disregarding current problems and suffering renders longtermism impermeable to error-correction. Thus, while the longtermist project may arise out of more compassion for sentient beings than many other dogmas, it has the same nullifying effect on our critical capacities.
What now?
We are at an unprecedented time in history: We can do something about the abundance of suffering around us. For most of the human story, our ability to eradicate poverty, cure disease, and save lives was devastatingly limited. We were hostages to our environments, our biology, and our traditions. Finally, however, trusting in our creativity, we have developed powerful ideas on how to improve life. We now know of effective methods to prevent malaria, remove parasitic worms, prevent vitamin deficiencies, and provide surgery for fistula. We have the technology to produce clean meat to reduce animal suffering. We have constructed democratic institutions to protect the vulnerable and reduce conflict. These are all staggering feats of human ingenuity.
Longtermism would have us disavow this tradition of progress. We would stop solving the problems in front of us, only to focus on distant problems obscured by the impenetrable wall of time.
For what it’s worth, should the EA community abandon longtermism, I think many of its current priorities would remain unchanged; long-term causes do not yet dominate its portfolio. Causes such as helping the global poor and reducing suffering from factory farming would of course remain a priority. So too would interventions such as improving institutional decision making and reducing the threat of nuclear war and pandemics. Such causes are important because the problems exist and do not require arbitrary assumptions on the number of future people.
My goal is not necessarily to change the current focus of the EA community, but rather to criticize the beginnings of a philosophy which has the potential to upend the values which made it unique in the first place: the combination of compassion with evidence and reason. It is in danger of discarding the latter half of that equation.
We can look at their track record on other questions, and see how reliably (or otherwise) different people's predictions track reality.
I agree that below a certain level (certainly by 10^-19, and possibly as high as 10^-3) direct calibration-in-practice becomes somewhat meaningless. But we should be pretty suspicious of people claiming extremely accurate probabilities at the 10^-10 mark if they aren't even accurate at the 10^-1 mark.
In general I'm not a fan of this particular form of epistemic anarchy where people say that they can't know anything with enough precision under uncertainty to give numbers, and then act as if their verbal non-numeric intuitions are enough to carry them through consistently making accurate decisions.
It's easy to lie (including to yourself) with numbers, but it's even easier to lie without them.
I appreciate this is tangential to the main point of the post, but these asides strike me as (unintentionally) likely to leave the reader with a common-but-inaccurate impression, and I think it's worth correcting this impression as it arises in the name of general integrity and transparency.
Specifically, I think a reader of the above without further context would assume that longtermism is very new (say <2 years old)...
Thanks so much for writing this Ben! I think it's great that strong longtermism is being properly scrutinised, and I loved your recent podcast episode on this (as well as Vaden's piece).
I don't have a view of my own yet; but I do have some questions about a few of your points, and I think I can guess at how a proponent of strong longtermism might respond to others.
For clarity, I'm understanding part of your argument as saying something like the following. First, "[E]xpected value calculations, Bayes' theorem, and mathematical models" are tools — often useful, often totally inappropriate or inapplicable. Second, 'Bayesian epistemology' (BE) makes inviolable laws out of these tools, running into all kinds of paradoxes and failing to represent how scientific knowledge advances. This makes BE silly at best and downright 'refuted' at worst. Third, the case for strong longtermism relies essentially on BE, which is bad news for strong longtermism.
I can imagine that a fan of BE would just object that Bayesianism in particular is just not a tool which can be swapped out for something else when it's convenient. This feels like an important but tangential argument — this LW post might b...
I share your concerns with using arbitrary numbers and skepticism of longtermism, but I wonder if your argument here proves too much. It seems like you're acting as if you're confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I'm not sure you actually believe this. Do you?
It sounds like you're skeptical of AI safety work, but it also seems that what you're proposing is that we should be unwilling to commit to beliefs on some questions (like the number of people in the future) and then deprioritize longtermism as a result. But, again, doing so means acting as if we're committed to beliefs that would make us pessimistic about longtermism.
I think it's more fair to think that we don't have enough reason to believe longtermist work does much good at all, or more good than harm (and generally be much more skeptical of causal effects with little evidence), than it is to be extremely confident that the future won't be huge.
I think you do need to entertain arbitrary probabilities, even if you're not a longtermist, although I don't think you should commit to a single joint probability...
I would rephrase as "You say you refuse to commit to a belief about x, but seem to act as if you've committed to a belief about x". Specifically, you say you have no idea about the number of future people, but it seems like you're saying we should act as if we believe it's not huge (in expectation). The argument for strong longtermism you're trying to undermine (assuming we get the chance of success and sign roughly accurate, which to me is more doubtful) goes through for a wide range of numbers. It seems that you're committed to the belief that the expected number is less than 10^15, say, since you write in response "This paragraph illustrates one of the central pillars of longtermism. Without positing such large numbers of future people, the argument would not get off the ground".
Maybe I'm misunderstanding. How would you act differently if you were confident the number was far less than 10^15 in expectation, say 10^12 (about 100 times the current population), rather than having no idea?
Thanks for taking the time to write this :)
In your post you say "Of course, it is impossible to know whether $1bn of well-targeted grants could reduce the probability of existential risk, let alone by such a precise amount. The “probability” in this case thus refers to someone’s (entirely subjective) probability estimate — “credence” — a number with no basis in reality and based on some ad-hoc amalgamation of beliefs."
I just wanted to understand better: Do you think it's ever reasonable to make subjective probability estimates (have 'credences') over things? If so, in what scenarios is it reasonable to have such subjective probability estimates; and what makes those scenarios different from the scenario of forming a subjective probability estimate of what $1bn in well-targeted grants could do to reduce existential risk?
You say that "there are good arguments for working on the threat of nuclear war". As I understand your argument, you also say we cannot rationally distinguish between the claim "the chance of nuclear war in the next 100 years is 0.00000001%" and the claim "the chance of nuclear war in the next 100 years is 1%". If you can't rationally put probabilities on the risk of nuclear war, why would you work on it?
If you refuse to claim that the chance of nuclear war up to 2100 is greater than 0.000000000001%, then I don't see how you could make a good case to work on it over some other possible intuitively trivial action, such as painting my wall blue. What would the argument be if you are completely agnostic as to whether it is a serious risk?
To me this seems like you're making a rough model with a bunch of assumptions like that past use, threats and protocols increase the risks, but not saying by how much or putting confidences or estimates on anything (even ranges). Why not think the risks are too low to matter despite past use, threats and protocols?
Hey Ben, thanks a lot for posting this! And props for having the energy to respond to all these comments :)
I'll try to reframe points that others have made in the comments (and which I tried to make earlier, but less well): I suspect that part of why these conversations sometimes feel like we're talking past one another is that we're focusing on different things.
You and Vaden seem focused on creating knowledge. You (I'd say) correctly note that, as frameworks for creating knowledge, EV maximization and Bayesian epistemology aren't just useless--they're actively harmful, because they distract us from the empirical studies, data analysis, feedback loops, and argumentative criticism that actually create knowledge.
Some others are focused on making decisions. From this angle, EV maximization and Bayesian epistemology aren't supposed to be frameworks for creating knowledge--they're frameworks for turning knowledge into decisions, and your arguments don't seem to be enough for refuting them as such.
To back up a bit, I think probabilities aren't fundamental to decision making. But bets are. Every decision we make is effectively taking or refusing to take a bet (e.g. going outsi...
I'd like to make a point about the potential importance of working on current problems which I'm unsure has been made yet (apologies if I've missed it).
It seems to me that there are two possibilities here:
1. Working on current problems generates knowledge that will help make the long-run future go well.
2. Working on current problems will not generate much knowledge, or the knowledge generated will not help much in making the long-run future go well.
If number 1 is the case, a strong longtermist should agree with you and vadmas about the importance of working on current problems.
If number 2 is the case, a strong longtermist may not agree with you about the importance of working on current problems, either because they don't think that working on near-term problems will generate much knowledge or because they don't think the knowledge that would be generated will help that much in making the long-run future go well.
Now there are two points I would like to make.
Firstly, you and vadmas seem to assume number 2 is the case. It seems important to me to note that this is certainly not a given.
Secondly, you and vadmas seem to think that if number 2 is the case then the conclusion that we shouldn't work on near-term problems for knowledge creation in some way demonstrates the absurdity...
I have a few comments on the critique of Bayesian epistemology, a lot of which I think is mistaken.
- You say "It frames the search for knowledge in terms of beliefs (which we quantify with numbers, and must update in accordance with Bayes' rule, else risk rationality-apostasy!)". I don't think anyone denies that Bayes' theorem is true. It is mathematically proven. The most common criticism of Bayesianism is that it is "too subjective". I don't really understand what this means, but few sensible people deny Bayes' theorem.
- "It has imported valid statist
Thanks for writing this! I think it's important to question longtermism. I've actually found myself becoming slowly more convinced by it, but I'm still open to it being wrong. I'm looking forward to chewing on this a bit more (and you've reminded me I still have to properly read Vaden's post) but for now I will leave you with a preliminary thought.
Coming from an economics background, here's how to persuade me of longtermism:
Set up a social planner problem with infinite generations and solve for the optimal allocation in each period. Do three cases:
Would the third planner ignore the utility of all generations less than 1000 years in the future? If so, then you've proved strong longtermism.
On the point about the arbitrariness of estimates of the size of the future - what is your probability distribution across the size of the future population, provided there is not an existential catastrophe?
Another way to look at this. What do you think is the probability that everyone will go extinct tomorrow? If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff.
I found it helpful that you were so clear about these two aspects of what you are saying. My responses to the two are different.
On the first, I think resting on possibilities of large futures is a central part of th...
Cool. I do think that, translating your position into the ontology used by Greaves+MacAskill, it sounds less like "longtermism is wrong" and more like "maybe longtermism is technically correct; who cares?; the practical advice people are hearing sucks".
I think that's a pretty interestingly different objection and if it's what you actually want to say it could be important to make sure that people don't hear it as "longtermism is wrong" (because that could lead them to looking at the wrong type of thing to try to refute you).
OK Jack, I have some time today so let's dive in:
So, my initial reading of section 4.5 was that they get it very, very wrong.
Eg: "we assumed that the correct way to evaluate options in ex ante axiological terms, under conditions of uncertainty, is in terms of expected value". Any of the points above would disagree with this.
Eg: "[Knightian uncertainty] supports, rather than undermining, axiological strong longtermism". This is just not true. Some Knightian uncertainty methods would support (eg robust decision making) and some would not support (eg plan-and-adapt).
So why does it look like they get this so wrong?
Maybe they are trying to achieve something different from what we in this thread think they are trying to achieve.
My analysis of their analysis of Knightian uncertainty can shed some light here.
The point of Knightian (or deep) uncertainty tools is that an expected value calculation is the wrong tool for humans to use when making decisions under Knightian uncertainty: as a decision tool, an expected value calculation will not lead to the best outcome, the outcome with the highest true expected value. [Note: I use true expected value to mean the expected value if...
I'm not sold on the cluelessness-type critique of long-termism. The arguments here focus on things we might do now or soon to reduce the direct risk posed by various things such as AI, bio or nuclear war. But even if this is true, this doesn't undermine the expected value of other long-termist activities.
I wonder if you have come across the literature on complex cluelessness? GiveWell may use some real, tangible data, but they are missing lots of highly-relevant and important data, most obviously relating to the longer-term consequences of the health interventions. For example they don't know what the long-term population effects will be nor the corresponding moral value of these population effects. It also really doesn't seem fair to me to just assume that this would be zero in expectation, which Giv...
What do you think about using ranges of probabilities instead of single (and seemingly arbitrary) sharp probabilities and doing sensitivity analysis? I suppose when there's no hard data, there might be no good bounds for the ranges, too, although Scott Alexander has argued against using arbitrarily small probabilities.