I'm dissatisfied with my explanation of why there is not more attention from EAs and EA funders on nuclear safety and security, especially relative to e.g. AI safety and biosecurity. This has come up a lot recently, especially after the release of Oppenheimer. I'm worried I'm not capturing the current state of affairs accurately and consequently not facilitating fully contextualized dialogue.
What is your best short explanation?
(To be clear, I know many EAs and EA funders are working on nuclear safety and security, so this is more a question of resource allocation than of inclusion in the broader EA cause portfolio.)
This gets a lot of things right, but (knowing some of the EAs who looked into this or work on it now) I would add a few points:
1. Lindy effect and stability: we're 70 years in without any use of nuclear weapons after the first, so we expect the situation is somewhat stable. Not very stable, but under this type of estimation the risk from newer technologies is higher, because we have less of a track record for them.
2. The current inside-view stability of the nuclear situation, where strong norms against use exist and are already being reinforced by large actors with deep pockets...
This characterization seems to me pretty at odds with recent EA work, e.g. from Longview but also from my colleague Christian Ruhl at FP, who tend to argue that the philanthropic space on nuclear risk is very funding-constrained and that plenty of good funding margins remain unfilled.
For anyone who is interested, Founders Pledge has a longer report on this (with a discussion of funding constraints as well as funding ideas that could absorb a lot of money), plus some related work on specific funding opportunities like crisis communications hotlines.
I agree that the nuclear risk field as a whole is less neglected than AGI safety (and probably than engineered pandemics), but I think resilience to nuclear winter is more neglected. That's why I think the overall cost-effectiveness of resilience work is competitive with AGI safety.