jackva

Climate Research Lead @ Founders Pledge

Answer by jackva

We've been doing a fair amount of work in this direction at Founders Pledge in "epistemically mildly" longtermist areas, such as climate or nuclear risk -- areas that are clearly much more uncertain than RCT-backed global health, but probably a fair amount less riddled with cluelessness than the most intractable longtermist interventions, where there is little agreement even on the sign of interventions.

We describe some of the ideas here (and, initially, in our Changing Landscape report); my colleague Christian Ruhl has also just published a report using some of those ideas to evaluate nuclear risk interventions, and I expect we will do a fair bit more of this work.

It is my impression that there is a lot of low-hanging fruit here -- methodology for prioritizing in high-uncertainty contexts could be a lot more developed than it is.

A lot of this is early-stage and pre-quantification (and the quantified work is not ready for publication yet), but it is something we are thinking about a lot (though, as per Jeff, we are quite small, and, as per Nuno, it might take seven years for something like this to become good!).

(Sorry for a post mostly containing links to FP work, but they seem relevant to the discussion here.)



Oh yeah, that is true, and I think both Christian and I think that even left-of-boom nuclear security philanthropy is super-neglected (as I like to say, it is more than two orders of magnitude smaller than climate philanthropy, which seems crazy to me).

Thank you, Vasco! I am not sure, and I might very well be missing something here, this being the end of a long week.

In my head, right-of-boom thinking is just applying expected-value thinking within a catastrophic scenario, whereas the motivation for GCR work generally comes from applying it at the cause level.

So, to me there seems to be a parallel between the multiplier for preparatory work on GCRs in general and the multiplier/differentiator within a catastrophic risk scenario.

To add: I think if we thought the difference in efficiency were only 30x, then societally the optimal response to most catastrophic risks would be essentially not to prepare at all.

And, philanthropically, things like investing in protection against engineered or natural pandemics, AI risk, nuclear war (in general, independent of the side of boom), etc. would all seem like fairly bad ideas as well (given that the 30x needs to be adjusted for the low probability of these events).

So, it seems to me that a 30x estimate is strongly at odds with the general belief underlying most longtermist effort: that societally we are predictably underinvesting in reducing low-probability catastrophic and existential risks.

To me, a discount of 30x seems vastly too low.

It seems true that in a right-of-boom situation massive resources would be mobilized (if they are still available), but effects like the ones Christian mentions are probably an argument for a much larger efficiency advantage of preemptive spending than a factor of 30x.

I don't have time to estimate this (but would be curious if you tried, Vasco!). But the factors underlying Christian's arguments -- path dependency causing much larger investments over time than initially committed; non-accelerability due to physical constraints on the speed of production or technological change; and necessary conditions that exist now but maybe not in a right-of-boom situation (silly example: you can't establish a red telephone between Washington and Moscow if the right-of-boom situation is a nuclear conflict between the two) -- together seem to imply a discount probably in the 1000s, maybe even infinite for some interventions (where no amount of money can buy a given desired outcome in a right-of-boom situation).

Sorry not to be able to send something more comprehensive, but I think this is a good introductory resource that is relatively recent and policy-oriented, and it probably has lots of good links/snowballing opportunities: https://www.energypolicy.columbia.edu/publications/bring-emissions-slashing-technologies-market-united-states-needs-targeted-demand-pull-innovation/.

Also, the whole literature on the "entrepreneurial state" and "mission economy" probably provides a good overview.

I also know that Schmidt Futures is quite into this, so talking to them could be good. 

Not sure how transferable it is, but there is a large and recent literature on how we did this for clean energy -- if relevant, I can look up a review paper when I am back from holiday (next week).

Thank you, upvoted!

How were the experts sampled?

Thanks for catching this -- I indeed understood this as "the harms from climate change are doubling", but I can see how your interpretation is more likely to be correct.

I find it very confusingly worded, given that it says just above "causing significant and immediate harms to billions of people" and then refers to "these harms".

[EDIT: Note that this might be a misunderstanding -- see Benny's reply below]

Thank you for this!

What is the mechanism by which the failure of alternative proteins would double the harm from climate change in 2050? APs are clearly a significant climate solution and worthy of more support (I regularly send donors in the direction of GFI), but this seems like a very strong claim.
