
Vasco Grilo🔸

6418 karma · Joined · Working (0-5 years) · Lisbon, Portugal
sites.google.com/view/vascogrilo?usp=sharing

Bio

Participation
4

How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there, but you guess I might be underrating them?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.

Comments
1419

Topic contributions
25

Makes sense. Just to clarify, the data on deaths and disease burden from non-optimal temperature up to now are from GBD, whereas the projections of future death rates from non-optimal temperature are from Human Climate Horizons.

I think I remain confused as to what you mean with "all deaths from non-optimal temperature".

I mean the difference between the deaths at the predicted temperature and the deaths at the ideal temperature. From OWID:

[Schematic figure from OWID showing how temperature relates to mortality risk: risk increases at both extreme cold and extreme hot temperatures.]

The deaths from non-optimal temperature are supposed to cover all causes (temperature is a risk factor for death rather than a cause of death in GBD), not just extreme heat and cold (which only account for a tiny fraction of the deaths; see my last comment). I say "supposed" because it is possible the mortality curves above are not being modelled correctly, and this applies even more to the mortality curves in the future.

So to me it seems you are saying "I don't trust arguments about compounding risks and the data is evidence for that" whereas the data is inherently not set up to include that concern and does not really speak to the arguments that people most concerned about climate risk would make.

My understanding is that (past/present/future) deaths from non-optimal temperature are supposed to include conflict deaths linked to non-optimal temperature. However, I am not confident these are being modelled correctly.

I was not clear, but in my last comment I mostly wanted to say that deaths from non-optimal temperature account for the impact of global warming not only on deaths from extreme heat and cold, but also on cardiovascular or kidney disease, respiratory infections, diabetes and all others (including conflicts). Most causes of death are less heavy-tailed than conflict deaths, so I assume we have a better understanding of how they change with temperature.

Thanks for this, Vasco, thought-provoking as always!

Likewise! Thanks for the thoughtful comment.

Insofar as this is a correct representation of your argument

It seems like a fair representation.

a. Dying from heat stress is a very extreme outcome and people will act in response to climate change much earlier than dying. For example, before people die from heat stress, they might abandon their livelihoods and migrate, maybe in large numbers.

b. More abstractly, the fact that an extreme impact outcome (heat death) is relatively rare is not evidence for low impact in general. Climate change pressures are not like a disease that kills you within days of exposure and otherwise has no consequence.

Agreed. However:

  • I think migration will tend to decrease deaths because people will only want to migrate if they think their lives will improve (relative to the counterfactual of not migrating).
  • The deaths from non-optimal temperature I mentioned are supposed to account for all causes of death, not just extreme heat and cold. According to GBD, in 2021, deaths from environmental heat and cold exposure were 36.0 k (I guess this is what you are referring to by heat stress), which was just 1.88 % (= 36.0*10^3/(1.91*10^6)) of the 1.91 M deaths from non-optimal temperature. My post is about how these 1.91 M deaths would change.
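As a quick sanity check of that share, here is a minimal sketch; the figures are the GBD 2021 numbers quoted above, and the variable names are just for illustration.

```python
# Share of deaths from non-optimal temperature due to environmental heat and
# cold exposure (GBD 2021 figures as quoted above).
heat_and_cold_deaths = 36.0e3
non_optimal_temperature_deaths = 1.91e6

print(f"{heat_and_cold_deaths / non_optimal_temperature_deaths:.2%}")  # ~1.88 %
```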

a. You seem to suggest we are very uncertain about many of the effect signs. I think the basic argument why people concerned about climate change would argue that changes will be negative and that there be compounding risks is because natural and human systems are adapted to specific climate conditions. That doesn't mean they cannot adapt at all, but that does mean that we should expect it is more likely that effects are negative, at least as short-term shocks, than positive for welfare.

This makes sense. On the other hand, one could counter that global warming will be good because:

  • There are more deaths from low temperature than from high temperature.
  • The disease burden per capita from non-optimal temperature has so far been decreasing (see 2nd to last graph).

b. I think a lot of the other arguments on the side of "indirect risks are low" you cite are ultimately of the form (i) "indirect effects in other causes are also large" or (ii) "pointing to indirect effects make things inscrutable and unverifiable".  (i) might be true but doesn't matter, I think, for the question of whether warming is net-bad and (ii) is also true, but does nothing by itself on whether those indirect effects are real -- we can live in a world where indirect effects are rhetorically abused and still exist and indeed dominate in certain situations!

Agreed. I would just note that i) can affect prioritisation across causes.

Thanks for the comment, Stephen.

Vasco, how do your estimates account for model uncertainty?

I tried to account for model uncertainty by assuming a 10^-6 probability of human extinction given insufficient calorie production.

I don't understand how you can put some probability on something being possible (i.e. p(extinction|nuclear war) > 0), but end up with a number like 5.93e-14 (i.e. 1 in ~16 trillion). That implies an extremely, extremely high level of confidence.

Note there are infinitely many orders of magnitude between 0 and any astronomically low number like 5.93*10^-14. At least in theory, I can be quite uncertain while having a low best guess. I understand that greater uncertainty (e.g. a higher ratio between the 95th and 5th percentile) holding the median constant tends to increase the mean of heavy-tailed distributions (like lognormals), but it is unclear to what extent this applies. I have also accounted for that by using heavy-tailed distributions whenever I thought appropriate (e.g. I modelled the soot injected into the stratosphere per equivalent yield as a lognormal).
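As a minimal sketch of that effect (purely illustrative numbers, assuming a lognormal with the median fixed at 1):

```python
# For a lognormal with a fixed median, widening the 95th/5th percentile ratio
# increases the mean, since mean = median*exp(sigma^2/2).
from math import exp, log

median = 1  # arbitrary; only the ratio mean/median matters
for p95_to_p5 in [10, 100, 1000]:
    sigma = log(p95_to_p5) / (2 * 1.645)  # ln(p95/p5) = 2*1.645*sigma
    print(f"95th/5th = {p95_to_p5}: mean/median = {median * exp(sigma**2 / 2):.2f}")
```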

As a side note, 10 of the 161 (6.21 %) forecasters in the Existential Risk Persuasion Tournament (XPT), 4 experts and 6 superforecasters, predicted a nuclear extinction risk by 2100 of exactly 0. I guess these participants know the risk is higher than 0, but consider it astronomically low too.

Putting ~any weight on models that give higher probabilities would lead to much higher estimates.

I used to be persuaded by this type of argument, which is made in many contexts by the global catastrophic risk community. I think it often misses that the weight a model should receive is not independent of its predictions. I would say high extinction risk goes against the low prior established by historical conflicts.

I am also not aware of any detailed empirical quantitative models estimating the probability of extinction due to nuclear war.

Thanks for the update, Toby. I used to defer to you a lot. I no longer do. After investigating the risks myself in decent depth, I consistently arrived at estimates of the risk of human extinction orders of magnitude lower than your existential risk estimates. For example, I understand you assumed in The Precipice an annual existential risk for:

  • Nuclear war of around 5*10^-6 (= 0.5*10^-3/100), which is 843 k (= 5*10^-6/(5.93*10^-12)) times mine.
  • Volcanoes of around 5*10^-7 (= 0.5*10^-4/100), which is 14.8 M (= 5*10^-7/(3.38*10^-14)) times mine.
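For concreteness, a quick sketch reproducing the ratios above (all numbers are copied from the bullets; variable names are just for illustration):

```python
# Ratios between the annual existential risks I understand were assumed in
# The Precipice and my annual extinction risk estimates.
ord_nuclear = 0.5e-3 / 100   # 5*10^-6 per year
ord_volcano = 0.5e-4 / 100   # 5*10^-7 per year
mine_nuclear = 5.93e-12
mine_volcano = 3.38e-14

print(f"Nuclear war: {ord_nuclear / mine_nuclear:.3g} times mine")  # ~843 k
print(f"Volcanoes: {ord_volcano / mine_volcano:.3g} times mine")    # ~14.8 M
```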

In addition, I think the existential risk linked to the above is lower than their extinction risk. The worst nuclear winter of Xia et al. 2022 involves an injection of soot into the stratosphere of 150 Tg, which is just 1 % of the 15 Pg of the Cretaceous–Paleogene extinction event. Moreover, I think this would only be existential with a chance of 0.0513 % (= e^(-10^9/(132*10^6))), assuming:

  • An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such a catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:
    • An exponential distribution with a mean of 66 M years describes the time between:
      • 2 consecutive such catastrophes.
      • i) and ii) if there are no such catastrophes.
    • Given the above, i) and ii) are equally likely. So the probability of an intelligent sentient species evolving after human extinction in such a catastrophe is 50 % (= 1/2).
    • Consequently, one should expect the time between i) and ii) to be 2 times (= 1/0.50) as long as it would be if there were no such catastrophes.
  • An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
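A minimal sketch of the 0.0513 % figure under the assumptions in the list above (variable names are just for illustration):

```python
# Probability that no intelligent sentient species evolves within the
# ~1 billion years of habitability left, assuming the time from human
# extinction to such evolution is exponential with mean 132 M years.
from math import exp

mean_time_to_reevolution = 132e6  # years (= 2*66 M years)
habitable_window = 1e9            # years

print(f"{exp(-habitable_window / mean_time_to_reevolution):.4%}")  # ~0.0513 %
```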

Hi David,

| Existential catastrophe, annual | 0.30 % | 20.04 % | David Denkenberger, 2018 |
| Existential catastrophe, annual | 0.10 % | 3.85 % | Anders Sandberg, 2018 |

Based on my adjustments to CEARCH's analysis of nuclear and volcanic winter, the expected annual mortality of nuclear winter as a fraction of the global population is 7.32*10^-6. I estimated the deaths from the climatic effects would be 1.16 times as large as the ones from direct effects. In this case, the expected annual mortality of nuclear war as a fraction of the global population would be 1.86 (= 1 + 1/1.16) times the expected annual mortality of nuclear winter as a fraction of the global population, i.e. 0.00136 % (= 1.86*7.32*10^-6). So the annual losses in future potential mentioned in the table above are 221 (= 0.0030/(1.36*10^-5)) and 73.5 (= 0.0010/(1.36*10^-5)) times my expected annual death toll, whereas I would have expected the annual loss in future potential to be much lower than the expected annual death toll.
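A short sketch of the arithmetic in the paragraph above (variable names are illustrative):

```python
# Expected annual mortality from nuclear war as a fraction of the global
# population, combining climatic (nuclear winter) and direct deaths.
winter_mortality = 7.32e-6  # climatic effects (adjusted CEARCH estimate)
climatic_to_direct = 1.16   # ratio of climatic to direct deaths

war_mortality = (1 + 1 / climatic_to_direct) * winter_mortality
print(f"Expected annual mortality: {war_mortality:.5%}")  # ~0.00136 %
# Ratios of the annual losses in future potential in the table to this toll
# (the text rounds war_mortality to 1.36*10^-5, giving 221 and 73.5).
print(f"{0.0030 / war_mortality:.0f} and {0.0010 / war_mortality:.0f}")
```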

Great points, Matt.

I think essentially all (not just many) pathways from AI risk will have to flow through other more concrete pathways. AI is a general purpose technology, so I feel like directly comparing AI risk with other lower level pathways of risk, as 80 k seems to be doing somewhat when they describe the scale of their problems, is a little confusing. To be fair, 80 k tries to account for this by talking about the indirect risk of specific risks, which they often set to 10 times the direct risk, but these adjustments seem very arbitrary to me.

In general, one can get higher risk estimates by describing risk at a higher level. So the existential risk from LLMs is smaller than the risk from AI, which is smaller than the risk from computers, which is smaller than the risk from e.g. subatomic particles. However, this should only update one towards e.g. prioritising "computer risk" over "LLM risk" to the extent the ratio between the cost-effectiveness of "computer risk interventions" and "LLM risk interventions" is proportional to the ratio between the scale of "computer risk" and "LLM risk", which is quite unclear given the ambiguity and vagueness of the 4 terms involved[1].

To get more clarity, I believe it is better to prioritise at a lower level, assessing the cost-effectiveness of specific classes of interventions, as Ambitious Impact (AIM), Animal Charity Evaluators (ACE), the Centre for Exploratory Altruism Research (CEARCH), and GiveWell do.

  1. ^

    "Computer risk", "LLM risk", "computer risk interventions" and "LLM risk interventions".

Here is an example with text in a table aligned to the left (select all text -> cell properties -> table cell text alignment).

| Statistic | Annual epidemic/pandemic deaths as a fraction of the global population |
| --- | --- |
| Mean | 0.236 % |
| Minimum | 0 |
| 5th percentile | 1.19*10^-6 |
| 10th percentile | 3.60*10^-6 |
| Median | 0.0276 % |
| 90th percentile | 0.414 % |
| 95th percentile | 0.684 % |
| Maximum | 10.3 % |

Thanks for the post! I wonder whether it would also be good to have public versions of the applications (sensitive information could be redacted), as Manifund does, which would be even less costly than having external reviewers.

Thanks, Will! Relatedly, I noticed the import makes the text in tables go from aligned to the centre in docs to aligned to the left/right in the EA Forum editor.
