
michaelB

82 karma · Joined

Bio

Undergrad student at Stanford. I help run Stanford EA and the Stanford Alt. Protein Project.

Comments (9)

This comment reads to me as unnecessarily adversarial and as a strawman of the authors' position.

"It sounds to me like their real complaint is something like: How dare EA/utilitarianism prioritize other things over my pet causes, just because there's no reason to think that my pet causes are optimal?"

I think a more likely explanation of the authors' position includes cruxes like:

  • disagreeing with the assumption of maximization (and underlying assumptions about the aggregation of utility), such that arguments about optimality are not relevant
  • moral partiality, e.g. a view that people have special obligations towards those in their local community
  • weighting (in)justice much more strongly than the average EA, such that the correction of (certain) historical wrongs is a very high priority
  • disagreements about the (non-consequentialist) badness of e.g. foreign philanthropic interventions

Your description of their position may well be compatible with mine: they do write in a somewhat disparaging tone, and I expect to strongly disagree with many of the book's arguments (including for some of the reasons you point out). Still, it doesn't feel like you're engaging with their position in good faith.

Additionally, EA comprises a lot of nuanced ideas (e.g. the distinction between "classic (GiveWell-style) EA" and other strains of EA), and there isn't a canonical description of those ideas (though the EA Handbook does a decent job). While those nuances, counterarguments to naive objections, etc. might be obvious to community members, many of them aren't in easy-to-find descriptions of EA. And while in an ideal world all critics would pass their subjects' Ideological Turing Test, I'm wary of setting too high a bar for how much people need to understand EA ideas before they feel able to criticize them.

Thanks for the post! Minor quibble, but it bothers me that "people" in the title is taken to mean "British adults". I would guess that the dietary choices of Brits aren't super indicative of the dietary choices of people in general, and since the Forum isn't a British platform, I don't think Brits are the default reference class for "people".

  • Military/weapons technologies, in particular nuclear weapons, biological weapons, chemical weapons, and cyberattacks
  • Several infectious diseases, including COVID-19, Ebola, SARS, MERS, swine flu, HIV/AIDS, etc.
  • Gene-edited humans (see coverage of / responses to the twins modified by He Jiankui)

Some more examples of risks which were probably not extreme*, but which elicited strong policy responses:

  • Y2K (though this might count as an extreme risk in the context of corporate governance)
  • Nuclear power plant accidents (in particular Three Mile Island and Chernobyl)
  • GMOs (both risks to human health and to the environment; see e.g. legislation in the EU, India, and Hawai'i)
  • various food additives (e.g. Red No. 2)
  • many, many novel drugs/pharmaceuticals (thalidomide, opioids, DES, Fen-phen, Seldane, Rezulin, Vioxx, Bextra, Baycol...)

*I'm not really sure how you're defining "extreme risk", but the examples you gave all have meaningfully life-changing implications for tens of millions of people or more. These examples are smaller in severity and/or scope, but they still seem to have elicited strong policy responses due to overestimated risk (though this warrants being careful about ex ante vs. ex post risk) and/or unusually high concern about the topic.

Answer by michaelB

The 2014 NIH moratorium on funding gain-of-function research (which was lifted in 2017)

Answer by michaelB

The Asilomar Conference on Recombinant DNA, which Katja Grace has a report on: https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/

If you want to draw useful lessons for successful risk governance from this research, it also seems pretty important to collect negative examples from the same reference class, i.e. conditions of extreme risk where policies were proposed but not enacted/enforced, or not proposed at all. For example (in the spirit of your example of the DoD's UFO detection program), I don't know of any policy governing the risk from SETI-style attempts to contact intelligent aliens.

Are you interested only in public policies related to extreme risk, or examples from corporate governance as well? Corporate risk governance likely happens in a way that's meaningfully different from public policy, and might be relevant for applying this research to e.g. AI labs.