
This project by CEARCH investigates the cost-effectiveness of resilient food pilot studies for mitigating the effects of an extreme climate cooling event. It consisted of three weeks of desktop research and expert interviews.

Headline Findings

  • CEARCH finds the cost-effectiveness of conducting a pilot study of a resilient food source to be 10,000 DALYs per USD 100,000, which is around 14× as cost-effective as giving to a GiveWell top charity[1] (link to full CEA).
  • The result is highly uncertain. Our probabilistic model suggests a 53% chance that the intervention is less cost-effective than giving to a GiveWell top charity, and an 18% chance that it is at least 10× more cost-effective. The estimated cost-effectiveness is likely to fall if the intervention is subjected to further research, due to optimizer’s curse[2].
  • The greatest sources of uncertainty are 1) the likelihood of nuclear winter and 2) the degree to which evidence from a pilot study would influence government decision-making in a catastrophe.
  • We considered two other promising interventions to mitigate the effects of an Abrupt Sunlight Reduction Scenario (ASRS): policy advocacy at the WTO to amend restrictions that affect food stockpiling, and advocating for governments to form plans and strategies for coping with the effects of ASRSs. We believe these could be very cost-effective interventions, although we expect them to be difficult to assess.
  • Our detailed cost-effectiveness analysis can be found here, and the full report can be read here.

Executive Summary

In an Abrupt Sunlight Reduction Scenario (ASRS) the amount of solar energy reaching Earth’s surface would be significantly reduced. Crops would fail, and many millions would be without food. Much of the research on mitigating the effects of these catastrophes is the product of ALLFED. Their work has been transformative for the field, but may be seen by some as overly pessimistic about the likelihood of agricultural shortfalls, and overly optimistic about the effectiveness of their suggested interventions. This project aims to provide a neutral “second opinion” on the cost-effectiveness of mitigating ASRSs.

Nuclear winter is probably the most likely cause of an ASRS, although the size of the threat is highly uncertain. Nuclear weapons have only once been used in war, and opinions differ on the likelihood of a future large-scale conflict. Even if there is a major nuclear war, the climate effects are contested. Much of the scientific literature on nuclear winter is the product of a small group of scientists who may be politically motivated to exaggerate the effects. Critics point to the long chain of reasoning that connects nuclear war to nuclear winter, and argue that slightly less pessimistic assumptions at each stage can lead to radically milder climate effects. We predict that there is just a 20% chance that nuclear winter follows large-scale nuclear war. Even so, this implies that nuclear winter represents over 95% of the total ASRS threat.

There are a number of ways to prepare: making plans; fortifying our networks of communication and trade; building food stockpiles; developing more resilient food sources. After a period of exploration, we decided to focus on the cost-effectiveness of conducting a pilot study for one resilient food source. “Resilient” food sources, such as seaweed, mass-produced greenhouses, or edible sugars derived from plant matter, can produce food when conventional agriculture fails, mitigating the food shortage. We know that these sources can produce edible food, but none have had pilot studies that identify the key bottlenecks in scaling up the process rapidly in a catastrophe. We believe that such a pilot study would increase the chances of the resilient food source being deployed in an ASRS, and that the lessons learnt in the study would enable the food source to be harnessed more productively.

We assume that resilient foods would not make a significant difference in milder scenarios, although this is far from certain[3]. We do not attempt to measure the benefits of resilient food sources in other climate and agricultural catastrophes[4].

Unlike previous analyses, ours accounts for specific reasons that resilient food technologies may not be adopted in a catastrophe, including disruptions to infrastructure, political dynamics or economic collapse. We attempt to model the counterfactual effect of the pilot itself on the deployment of the food source in a catastrophe. Due to lack of reliable data, however, we rely heavily on subjective discounts.

We estimate that a USD 23 million pilot study for one resilient food source would counterfactually reduce famine mortality by 0.66% in the event of an ASRS, preventing approximately 16 million deaths from famine. 

Mortality represents 80% of the expected burden of an ASRS in our model, with the remainder coming from morbidity and economic losses. There is some uncertainty about the scale of the economic damage, but we are confident that famine deaths would form the bulk of the burden. We do not consider the long-term benefits of mitigating mass global famine.

Our full CEA assesses the intervention in detail. We draw upon objective reference classes when we can, and we avoid anchoring on controversial estimates by using aggregates where possible.

Our final result suggests that there is a distinct possibility that the cost-effectiveness of developing a resilient food source is competitive with GiveWell top charities. The result is highly uncertain and is especially contingent on 1) the likelihood of nuclear winter and 2) the degree to which evidence from a pilot study would influence government decision-making in a catastrophe.

Cost-effectiveness

Link to full CEA.

Overall, we estimate that developing one resilient food source would cost approximately USD 23 million and would reduce the number of famine deaths in a global agricultural shortfall by 0.66%. We project that the intervention would have a persistence of approximately 17 years.

Given that we estimate the toll of a global agricultural shortfall to be approximately 98 billion DALYs, we obtain an estimated cost-effectiveness of 10,000 DALYs per USD 100,000, which is 14× as cost-effective as a GiveWell top charity[1].

The final cost-effectiveness calculation is approximately:

(Burden × affectable proportion × annual probability × reduction × persistence) / cost = (98,000,000,000 × 0.87 × 0.00025 × 0.0066 × 17) / 23,000,000 ≈ 0.10 DALYs per USD, or roughly 10,000 DALYs per USD 100,000

With the following figures:

  • Burden of an ASRS (DALYs): 98,000,000,000
  • Proportion of the burden that is affectable once the delay in conducting the study is accounted for: 0.87
  • Annual probability of an ASRS: 0.00025
  • Reduction in burden due to resilient food pilot study: 0.0066
  • Persistence of intervention (equivalent baseline years): 17
  • Cost of intervention (USD): 23,000,000
The calculation above is heavily simplified. Check the full CEA to see how the figures above were reached.
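For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the simplified calculation, using the point-estimate figures listed above (the full CEA derives each of them from many sub-estimates):

```python
# Minimal sketch of the simplified cost-effectiveness calculation above.
# All figures are the point estimates listed in the figures list.

burden_dalys = 98_000_000_000   # burden of an ASRS, DALYs
affectable = 0.87               # proportion of burden affectable after study delay
annual_prob = 0.00025           # annual probability of an ASRS
reduction = 0.0066              # reduction in burden due to the pilot study
persistence_years = 17          # persistence of the intervention (years)
cost_usd = 23_000_000           # cost of the intervention, USD

dalys_per_usd = (burden_dalys * affectable * annual_prob * reduction
                 * persistence_years / cost_usd)
print(f"{dalys_per_usd * 100_000:,.0f} DALYs per USD 100,000")  # ~10,000
```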

Overall uncertainty

Although the cost-effectiveness is estimated to be 14× that of a GiveWell top charity, our uncertainty analysis suggests there is only a 47% chance that the cost-effectiveness is at least 1× that of a GiveWell top charity, and an 18% chance that it is at least 10×. Hence the central cost-effectiveness estimate is heavily influenced by a minority of “right-tail” scenarios of very high cost-effectiveness.


We derived the above estimate by creating an alternative version of our CEA that incorporates uncertainty. Most input values were modeled as Beta- or lognormally-distributed random variables, and the adapted CEA was put through a 3,000-sample Monte Carlo simulation using Dagger. In most cases the input distributions were determined subjectively, using the same mean value as the point estimate used in the CEA. An important exception is the cost: instead of modeling cost as a distribution and dividing “effectiveness” by “cost” to get cost-effectiveness, we model the reciprocal of cost as a distribution and multiply “effectiveness” by this to get cost-effectiveness. This allows us to avoid obtaining a different central estimate from our CEA due to the fact that E[X/Y] ≠ E[X]/E[Y].
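As an illustration of this setup (not the Dagger model itself), here is a minimal Python sketch; the distribution parameters are placeholders, and only the overall structure - Beta/lognormal inputs, 3,000 samples, and multiplying by a distribution over the reciprocal of cost - mirrors the description above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3_000  # same number of samples as the report's Monte Carlo

# Placeholder input distributions (parameters are illustrative, not CEARCH's).
# Probabilities/proportions ~ Beta; positive quantities ~ lognormal.
burden      = rng.lognormal(np.log(98e9), 0.5, n)   # burden of an ASRS, DALYs
affectable  = rng.beta(87, 13, n)                   # mean 0.87
p_asrs      = rng.beta(2, 7998, n)                  # mean 0.00025
reduction   = rng.beta(2, 301, n)                   # mean ~0.0066
persistence = rng.lognormal(np.log(17), 0.3, n)     # years

# Reciprocal-of-cost trick: model 1/cost as a random variable and multiply,
# so the central estimate is not distorted by E[X/Y] != E[X]/E[Y].
inv_cost = rng.lognormal(np.log(1 / 23e6), 0.2, n)  # 1 / USD

ce = (burden * affectable * p_asrs * reduction * persistence
      * inv_cost * 100_000)            # DALYs per USD 100,000
givewell_bar = 10_000 / 14             # benchmark implied by the report's 14x figure

print(f"mean cost-effectiveness: {ce.mean():,.0f} DALYs per USD 100,000")
print(f"P(>= 1x GiveWell):  {(ce >= givewell_bar).mean():.0%}")
print(f"P(>= 10x GiveWell): {(ce >= 10 * givewell_bar).mean():.0%}")
```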

Link to full report // Link to cost-effectiveness analysis // Link to summary of expert interview notes

  1. ^
  2. ^

    The headline cost-effectiveness will almost certainly fall if this cause area is subjected to deeper research: (a) this is empirically the case, from past experience; and (b) theoretically, we suffer from optimizer's curse (where causes appear better than the mean partly because they are genuinely more cost-effective, but also partly because of random error favoring them, and when deeper research fixes the latter, the estimated cost-effectiveness falls).

  3. ^

    In mild agricultural shortfalls such as those that may be triggered by crop blight, VEI-7 volcanic eruption or extreme weather, adaptations like redirecting animal feed, rationing and crop relocation would in theory be sufficient to feed everyone. However, it’s plausible that resilient foods could ease these crises by filling gaps in available nutrition (protein, for example) or providing new sources of animal feed which make more crops available for human consumption.

  4. ^

    Resilient food sources could prove useful in a variety of agricultural shocks. Industrial sources such as cellulosic sugar plants could form a reliable food source in scenarios of 10+°C warming, or in the face of an engineered crop pathogen targeting the grass family [credit: David Denkenberger].

Comments (6)

Nice analysis, Stan!

Under your assumptions and definitions, I think your 20.2 % probability of nuclear winter if there is a large-scale nuclear war is a significant overestimate. You calculated it using the mean of a beta distribution. I am not sure how you defined it, but it is supposed to represent the 3 point estimates you are aggregating: 60 %, 8.96 % and 0.0355 %. In any case, 20.2 % is quite:

  • Different from the output of what I think are good aggregation methods:
    • The geometric mean of odds, which I think should be the default method to aggregate probabilities, results in 3.61 % (= 1/(1 + (0.6/(1 - 0.6)*0.0896/(1 - 0.0896)*0.000355/(1 - 0.000355))^(-1/3))), which is 17.9 % (= 0.0361/0.202) of your value.
    • The geometric mean, which performed the best among unweighted methods on Metaculus' data, results in 2.67 % (= (0.6*0.0896*0.000355)^(1/3)), which is 13.2 % (= 0.0267/0.202) of your value. Samotsvety used the geometric mean after removing the lowest and highest values to aggregate estimates related to the probability of nuclear war from 7 forecasters who often differed a lot between them, as is the case for the 3 probabilities you are aggregating.
  • Similar to the output of what I think are bad aggregation methods:
    • The maximum likelihood estimator (MLE) of the mean of a beta distribution with the 3 aforementioned probabilities as random samples results in 21.7 %. On Metaculus' data, beta_mean_weighted performed worse than geo_mean_odds_weighted, median_weighted and beta_median_weighted.
    • The 23.0 % (= (0.6 + 0.0896 + 0.000355)/3) I get for the mean of the 3 aforementioned probabilities. Again, on Metaculus' data, mean_weighted performed worse than geo_mean_odds_weighted, median_weighted and beta_median_weighted.
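The aggregates above that do not involve fitting a beta distribution can be reproduced with a short Python sketch (the 21.7 % beta-MLE figure would need an extra fitting step, which is omitted here):

```python
import numpy as np

# The three probabilities of nuclear winter given a large-scale nuclear war
# that are being aggregated.
p = np.array([0.60, 0.0896, 0.000355])

mean = p.mean()                              # ~0.230 (arithmetic mean)
geo_mean = np.exp(np.log(p).mean())          # ~0.0267 (geometric mean)

odds = p / (1 - p)
geo_mean_odds = np.exp(np.log(odds).mean())          # geometric mean of odds
p_from_odds = geo_mean_odds / (1 + geo_mean_odds)    # ~0.0361

print(f"arithmetic mean:                    {mean:.3f}")
print(f"geometric mean:                     {geo_mean:.4f}")
print(f"prob. from geometric mean of odds:  {p_from_odds:.4f}")
```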

A common thread here is that aggregation methods which ignore information from extreme predictions tend to be worse (although one should be careful not to overweight them). As Jaime said with respect to the mean (and I think the same applies to the MLE of the mean of a beta distribution fitted to the samples):

The arithmetic mean of probabilities ignores information from extreme predictions

The arithmetic mean of probabilities ignores extreme predictions in favor of tamer results, to the extent that even large changes to individual predictions will barely be reflected in the aggregate prediction.

As an illustrative example, consider an outsider expert and an insider expert on a topic, who are eliciting predictions about an event. The outsider expert is reasonably uncertain about the event, and assigns a probability of around 10% to it. The insider has privileged information about the event, and assigns to it a very low probability.

Ideally, we would like the aggregate probability to be reasonably sensitive to the strength of the evidence provided by the insider expert - if the insider assigns a probability of 1 in 1000 the outcome should be meaningfully different than if the insider assigns a probability of 1 in 10,000 [9].

The arithmetic mean of probabilities does not achieve this - in both cases the pooled probability is around 5%. The uncertain prediction has effectively overwritten the information in the more precise prediction.

The geometric mean of odds works better in this situation. The geometric mean of the odds 1:9 and 1:999 is roughly 1:95, while for 1:9 and 1:9999 it is roughly 1:300. Those correspond respectively to probabilities of 1.04% and 0.33% - showing the greater sensitivity to the evidence the insider brings to the table.

See (Baron et al, 2014) for more discussion on the distortive effects of the arithmetic mean of probabilities and other aggregates.

For these reasons, I would aggregate the 3 probabilities using the geometric mean of odds, in which case the final probability would be 17.9 % as large.

CEARCH finds the cost-effectiveness of conducting a pilot study of a resilient food source to be 10,000 DALYs per USD 100,000, which is around 14× as cost-effective as giving to a GiveWell top charity[1] (link to full CEA).

Based on my adjustment to the probability of nuclear winter, I would conclude the cost-effectiveness is 2.51 (= 14*0.179) times that of GiveWell's top charities (ignoring effects on animals), i.e. within the same order of magnitude. This would be in agreement with what I said in my analysis of nuclear famine about the cost-effectiveness of activities related to resilient food solutions:

I guess the true cost-effectiveness is within the same order of magnitude of that of GiveWell’s top charities

I should also note there are way more cost-effective interventions to increase welfare:

I have argued corporate campaigns for chicken welfare are 3 orders of magnitude more cost-effective than GiveWell’s top charities

In addition, life-saving interventions have to contend with the meat-eater problem:

From a nearterm perspective, I am concerned with the meat-eater problem, and believe it can be a crucial consideration. The people whose lives were saved thanks to resilient food solutions would go on to eat factory-farmed animals, which may well have sufficiently bad lives for the decrease in human mortality to cause net suffering. In fact, net global welfare may be negative and declining.

I estimated the annual welfare of all farmed animals combined is -4.64 times that of all humans combined[70], which suggests not saving a random human life might be good (-12 < -1). Nonetheless, my estimate is not resilient, so I am mostly agnostic with respect to saving random human lives. There is also a potentially dominant beneficial/harmful effect on wild animals.

Accordingly, I am uncertain about whether decreasing famine deaths due to the climatic effects of nuclear war would be beneficial or harmful. I think the answer would depend on the country, with saving lives being more beneficial in (usually low income) countries with lower consumption per capita of farmed animals with bad lives. I calculated the cost-effectiveness of saving lives in the countries targeted by GiveWell’s top charities only decreases by 8.72 % accounting for negative effects on farmed animals, which means it would still be beneficial (0.0872 < 1).

Some hopes would be:

  • Resilient food solutions mostly save lives in countries where there is low consumption per capita of animals with bad lives.
  • The conditions of animals significantly improving, or the consumption of animals with bad lives majorly decreasing in the next few decades[71], before an eventual nuclear war starts.
  • The decreased consumption of animals in high income countries during the 1st few years after the nuclear war persisting to some extent[72].

Bear in mind price-, taste-, and convenience-competitive plant-based meat would not currently replace meat.

I would also be curious to know about whether CEARCH has been mostly using the mean, or other methods underweighting low predictions, to aggregate probabilities differing a lot between them, both in this analysis and others. I think using the mean will tend to result in overestimating the cost-effectiveness, which might explain some of the estimates I consider intuitively quite high.

Thanks for the comment, Vasco!

We have been thinking about aggregation methods a lot here at CEARCH, and our views on them are evolving. A few months ago we switched to using the geometric mean as our default aggregation method - although we are considering switching to the geometric mean of odds for probabilities, based on Simon M's persuasive post that you referenced (although in many cases the difference is very small).

Firstly I'd like to say that our main weakness on the nuclear winter probability is a lack of information. Experts in the field are not forthcoming on probabilities, and most modeling papers use point-estimates and only consider one nuclear war scenario. One of my top priorities as we take this project to the "Deep" stage is to improve on this nuclear winter probability estimate. This will likely involve asking more experts for inside views, and exploring what happens to some of the top models when we introduce some uncertainty at each stage.

I think you are generally right that we should go with the method that works the best on relatively large forecasting datasets like Metaculus. In this case I think there is a bit more room for personal discretion, given that I am working from only three forecasts, where one is more than two orders of magnitude smaller than the others. I feel that in this situation - some experts think nuclear winter is an almost-inevitable consequence of large-scale nuclear war, others think it is very unlikely - it would just feel unjustifiably confident to conclude that the probability is only 2%. Especially since two of these three estimates are in-house estimates.

Thanks for the reply, Stan!

We have been thinking about aggregation methods a lot here at CEARCH, and our views on them are evolving. A few months ago we switched to using the geometric mean as our default aggregation method - although we are considering switching to the geometric mean of odds for probabilities, based on Simon M's persuasive post that you referenced (although in many cases the difference is very small).

Cool!

Firstly I'd like to say that our main weakness on the nuclear winter probability is a lack of information. Experts in the field are not forthcoming on probabilities, and most modeling papers use point-estimates and only consider one nuclear war scenario.

Right, I wish experts were more transparent about their best guesses and uncertainty (accounting for the limitations of their studies).

One of my top priorities as we take this project to the "Deep" stage is to improve on this nuclear winter probability estimate. This will likely involve asking more experts for inside views, and exploring what happens to some of the top models when we introduce some uncertainty at each stage.

Nice to know there is going to be more analysis! I think one important limitation of your current model, which I would try to eliminate in further work, is that it relies on the vague concept of nuclear winter to define the climatic effects. You calculate the expected mortality by multiplying:

  • Probability of a large nuclear war.
  • Probability of nuclear winter if there is a large nuclear war.
  • Expected mortality if there is a nuclear winter.

However, I believe it is better to rely on a more precise concept to assess the climatic effects, namely the amount of soot injected into the stratosphere, or the mean drop in global temperature over a certain period (e.g. 2 years) after the nuclear war. In my analysis, I relied on the amount of soot, estimating the expected famine deaths due to the climatic effects by multiplying:

  • Probability of a large nuclear war.
  • Expected soot injection into the stratosphere if there is a large nuclear war.
  • Expected famine deaths due to the climatic effects for the expected soot injection into the stratosphere.

Ideally, I would get the expected famine deaths by multiplying:

  • Probability of a large nuclear war.
  • Expected famine deaths if there is a large nuclear war. To obtain the distribution of the famine deaths, I would:
    • Define a logistic function describing the famine deaths as a function of the soot injected into the stratosphere (or, even better, the mean drop in global temperature over a certain period). In my analysis, I approximated the logistic function as a piecewise linear function.
    • Input into the above function a distribution for the soot injected into the stratosphere if there is a large nuclear war (or, even better, the mean drop in global temperature over a certain period if there is a large nuclear war). To obtain this soot distribution, I would:
      • Define a function describing the soot injected into the stratosphere as a function of the number of offensive nuclear detonations.
      • Input into the above function a distribution for the number of offensive nuclear detonations if there is a large nuclear war.

Luisa followed something like the above, although I think her results are super pessimistic.
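A minimal Monte Carlo sketch of this pipeline is below; every number in it (war probability, detonation counts, soot per detonation, logistic-curve parameters) is an illustrative placeholder rather than a figure from either analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# All parameters below are illustrative placeholders, not values from the
# report or the comments.
p_large_nuclear_war = 0.01  # probability of a large nuclear war over the period of interest

# 1. Number of offensive nuclear detonations if there is a large nuclear war.
detonations = rng.lognormal(np.log(1_000), 0.7, n)

# 2. Soot injected into the stratosphere (Tg) as a function of detonations.
soot_per_detonation = rng.lognormal(np.log(0.03), 0.5, n)  # Tg per detonation
soot_tg = detonations * soot_per_detonation

# 3. Logistic function mapping soot to the fraction of the population dying of famine.
def famine_death_fraction(soot, max_frac=0.5, midpoint_tg=40.0, steepness=0.1):
    return max_frac / (1 + np.exp(-steepness * (soot - midpoint_tg)))

deaths_frac = famine_death_fraction(soot_tg)

expected_deaths_frac = p_large_nuclear_war * deaths_frac.mean()
print(f"expected famine death fraction: {expected_deaths_frac:.3%}")
```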

I think you are generally right that we should go with the method that works the best on relatively large forecasting datasets like Metaculus. In this case I think there is a bit more room for personal discretion, given that I am working from only three forecasts, where one is more than two orders of magnitude smaller than the others.

Fair point, there is no data on which method is best when we are just aggregating 3 forecasts. That being said:

  • A priori, it seems reasonable to assume that the best method for large samples is also the best method for small samples.
  • Samotsvety aggregated predictions differing a lot between them from 7 forecasters[1], and still used a modified version of the geometric mean, which ensures predictions smaller than 10 % of the mean are not ignored. A priori, it seems sensible to use an aggregation method that one of the most accomplished forecasting groups uses.

I feel that in this situation - some experts think nuclear winter is an almost-inevitable consequence of large-scale nuclear war, others think it is very unlikely - it would just feel unjustifiably confident to conclude that the probability is only 2%. Especially since two of these three estimates are in-house estimates.

I think there is a natural human bias towards thinking that the probability of events whose plausibility is hard to assess (not lotteries) has to be somewhere between 10 % and 90 %. In general, my view is more that it feels overconfident to ignore predictions, and using the mean does this when samples differ a lot among them. To illustrate, if I am trying to aggregate N probabilities, 10 %, 1 %, 0.1 %, ..., and 10^-N, for N = 7:

  • The probability corresponding to the geometric mean of odds is 0.0152 % (= 1/(1 + (1/9)^(-(1 + 7)/2*7/7))), which is 1.52 times the median of 0.01 %.
  • The mean is 1.59 % (= 0.1*(1 - 0.1^7)/(1 - 0.1)/7), i.e. 159 times the median.

I think the mean is implausible because:

  • Ignoring the 4 to 5 lowest predictions among only 7 seems unjustifiable, and using the mean is equivalent to using the probability corresponding to the geometric mean of odds putting 0 weight in the 4 to 5 lowest predictions, which would lead to 0.894 % (= 1/(1 + (1/9)^(-(1 + 5)/2*5/7))) to 4.15 % (= 1/(1 + (1/9)^(-(1 + 4)/2*4/7))). 
  • Ignoring the 3 lowest and 3 highest predictions among 7 seems justifiable, and would lead to the median, whereas the mean is 159 times the median.
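For reference, the exact aggregates for this illustration can be computed directly; the exact geometric mean of odds comes out around 0.010 %, slightly below the roughly 0.015 % approximation above, and both are far below the mean:

```python
import numpy as np

# Seven probabilities spanning 10 % down to 10^-5 % (i.e. 1e-1 down to 1e-7).
p = np.array([1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7])

mean = p.mean()        # ~1.59e-2, i.e. ~159 times the median
median = np.median(p)  # 1e-4

odds = p / (1 - p)
gmo = np.exp(np.log(odds).mean())
p_gmo = gmo / (1 + gmo)  # ~1.0e-4, close to the median and far below the mean

print(f"mean: {mean:.4%}, median: {median:.4%}, geo. mean of odds: {p_gmo:.4%}")
```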

You say 2 % probability of nuclear winter conditional on large nuclear war seems unjustifiable, but note the geometric mean of odds implies 4 %. In any case, I suspect the reason even this would feel too high is that it may in fact be too high, depending on how one defines nuclear winter, but that you are overestimating famine deaths conditional on nuclear winter. You put a weight of:

  • 1/3 in Luisa's results multiplied by 0.5, but I think the weight may still be too high given how pessimistic they are. Luisa predicts a 5 % chance of at least 36 % deaths (= 2.7/7.5), which looks quite high to me.
  • 2/3 in Xia 2022's results multiplied by 0.75, but this seems like an insufficient adjustment given you are relying on 37.5 % famine deaths, and this refers to no adaptation. Reducing food waste, decreasing the consumption of animals, expanding cultivated area, and reducing the production of biofuels are all quite plausible adaptation measures to me. So I think their baseline scenario is quite pessimistic, unless you also want to account for deaths indirectly resulting from infrastructure destruction which would happen even without any nuclear winter. I have some thoughts on reasons Xia 2022's famine deaths may be too low and high here.

At the end of the day, I should say our estimates for the famine deaths are pretty much in agreement. I expect 4.43 % famine deaths due to the climatic effects of a large nuclear war, whereas you expect 6.16 % (20.2 % probability of nuclear winter if there is a large-scale nuclear war times 30.5 % deaths given nuclear winter).

  1. ^

    For the question "What is the unconditional probability of London being hit with a nuclear weapon in October?", the 7 forecasts were 0.01, 0.00056, 0.001251, 10^-8, 0.000144, 0.0012, and 0.001. The largest of these is 1 M (= 0.01/10^-8) times the smallest, whereas in your case the largest probability is 2 k (= 0.6/0.000355) times the smallest.

In the report, footnote 2 states:
"In mild agricultural shortfalls such as those that may be triggered by crop blight, VEI-7 volcanic eruption or extreme weather, adaptations like redirecting animal feed, rationing and crop relocation would in theory be sufficient to feed everyone".

How did you come to that conclusion? We're only aware of one academic study (Puma et al 2015) about food losses from a VEI 7 eruption, and it estimates 1-3 billion people without food per year (I think this is likely an overestimate, and I'm trying to do research to quantify this). So I'm just trying to figure out what you're basing the above statement on - does this take into consideration food price increases and who would be able to pay for food (even if there is technically enough)?

Also note that the recurrence intervals of super-eruptions used are an order of magnitude off from the Loughlin paper, which has since been changed (see discussion here: https://forum.effectivealtruism.org/posts/jJDuEhLpF7tEThAHy/on-the-assessment-of-volcanic-eruptions-as-global ). Also note that VEI 7 eruptions can sometimes have the same, if not greater, climatic impact as super-eruptions, as the magnitude scale is based on the quantity of ash erupted whilst the climatic impact is based on the amount of sulfur emitted (which can be comparable for VEI 7 and 8 eruptions). I mention these because, whilst nuclear war probabilities have huge uncertainties, our recurrence intervals for large eruptions from ice cores are now well constrained, so it might help with the calcs.

Thank you for your comment!

From a quick scan through Puma et al 2015, it seems like the argument is that many countries are net food importers, including many poor countries, so smallish shocks to grain production would be catastrophic as prices rise and importers have to buy more at higher prices, which they can't afford. I agree that this is a major concern and that it's possible that a sub-super eruption could lead to a large famine in this way. When I say "adaptations like redirecting animal feed, rationing and crop relocation would in theory be sufficient to feed everyone" I mean that with good global coordination we would be able to free up plenty of food to feed everyone. That coordination probably includes massive food aid or at the least large loans to help the world's poorest avoid starvation. More importantly, resilient food sources don't seem like a top solution in these kinds of scenarios. It seems cheaper to cull livestock and direct their feed to humans than to scale up expensive new food sources.

Thanks for the link - now you mention it I think I read this post at the beginning of the year and found it very interesting. In my analysis I'm assuming resilient foods only help in severe ASRS, where there are several degrees of cooling for several years. Do you think this could happen with VEI 7 eruptions?

I couldn't find the part where the Loughlin paper has been changed. Could you direct me towards it?

Hi Stan, thanks for your response. I understand your main thesis now - it seems logical provided those ideal circumstances (high global co-operation and normal trade).

VEI 7 eruptions could lead to up to 2-3 degrees of global cooling for ~5-10 years (but more elevated in the northern hemisphere). See here: https://doi.org/10.1029/2020GL089416 

More likely is two VEI 6 eruptions close together, which may provide longer-duration cooling of a similar amount, ~2 degrees, like in the mid-6th century (the Late Antique Little Ice Age).

The Loughlin chapter didn't account for the incompleteness of the geological record like papers published since have done with statistical methods (e.g. the Rougier paper I cite in that post), or with ice cores that are better at preserving eruption signatures compared with the geological record.
