Great to see attempts to measure impact in such difficult areas. I'm wondering if there's a problem of attribution that looks like this (I'm not up to date on this discussion):
When you account for this properly, it's clear that each of these estimates is too high, because part of the impact and cost has to be attributed elsewhere.
A few off-the-cuff thoughts:
It seems there should be a more sophisticated, discounted measure of impact here for each organisation, one that takes these additional costs into account (a toy sketch below illustrates one option).
It certainly could be the case that at each stage the impact is high enough to justify the program at the discounted rate.
This might be a misunderstanding of what you're actually doing, in which case I would be excited to learn that you (and similar organisations) already account for this!
If no one is doing this, I don't mean to pick on any organisation in particular; it's just a thought about how these measures could be improved in general.
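To make the worry concrete, here's a toy illustration. All of the numbers and the three-organisation pipeline are entirely made up, and charging the outcome against total cost is just one simple discounting scheme among many:

```python
# Made-up pipeline: three organisations each contribute to the same
# outcome, say one career change valued at 100 units of impact.
costs = {"recruiter": 10_000, "trainer": 20_000, "placer": 5_000}
impact = 100.0

# Naive accounting: each organisation claims the full impact against
# its own cost alone, so the outcome is counted three times over.
naive = {org: impact / cost for org, cost in costs.items()}

# One simple discounted measure: charge the outcome against the
# pipeline's *total* cost (other attribution schemes are possible).
discounted = impact / sum(costs.values())

print(naive)       # each org looks better than the pipeline as a whole
print(discounted)  # impact per dollar once all costs are included
```

Every naive estimate here comes out higher than the discounted one, which is the attribution problem in miniature.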
Thanks, this is really helpful information about trusts and the 4% rule!
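(For anyone following along: as I understand it, the 4% rule says to withdraw 4% of the starting portfolio in year one and index that withdrawal to inflation thereafter. A toy sketch, with return and inflation numbers that are purely illustrative assumptions of mine:)

```python
# Toy illustration of the 4% rule (assumes a 5% nominal return and
# 2% inflation; both numbers are illustrative, not advice).
portfolio = 1_000_000.0
withdrawal = 0.04 * portfolio  # 4% of the starting balance

for year in range(30):
    portfolio = (portfolio - withdrawal) * 1.05  # withdraw, then grow
    withdrawal *= 1.02                           # index to inflation

print(f"Balance after 30 years: ${portfolio:,.0f}")
```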
On self-trust: I suspect a common pattern is that when you're young, you're 'idealistic' and want to do things like donate. When you're older, you feel like spending your money (if you have it) in ways that might not even make you particularly happy. I might even decide I'd rather give it all to my kids (if I have any). This makes me think there's a good chance I won't donate it later if I haven't pre-committed.
On safety: I am from Australia, so my context is probably quite different from that of many others. (On the whole, Australia tends to look after you if you get severely injured or run entirely out of money, which makes quick access to savings less of a pressing consideration for me.) But to the extent that quick access is an important consideration, why not keep a little money easily accessible and put most of it in a trust?
Here are some articles I think would make good scripts (I'll also be submitting one script of my own).
Summaries of the following papers:
I'd also suggest the following papers, for which I haven't seen summaries:
(Edit: Spacing)
I am writing these 8 summaries; message me if you want to see them early.
Ask him about counterfactuals: do his views have any implications for our ideas of counterfactual impact?
Ask him whether relative expectations can help us get out of wagers like this one from Hayden Wilkinson's paper:
Dyson's Wager
You have $2,000 to use for charitable purposes. You can donate it to either of two charities.
The first charity distributes bednets in low-income countries in which malaria is endemic. With an additional $2,000 in their budget this year, they would prevent one additional death from malaria. You are certain of this.
The second charity does speculative research into how to do computations using ‘positronium’ - a form of matter which will be ubiquitous in the far future of our universe. If our universe has the right structure (which it probably does not), then in the distant future we may be able to use positronium to instantiate all of the operations of human minds living blissful lives, and thereby allow morally valuable life to survive indefinitely long into the future. From your perspective as a good epistemic agent, there is some tiny, non-zero probability that, with (and only with) your donation, this research would discover a method for stable positronium computation and would be used to bring infinitely many blissful lives into existence.
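To see why naive expected value forces the fanatical verdict here, a toy calculation (the probability is my made-up stand-in for "some tiny, non-zero probability"):

```python
# Toy expected-value comparison for Dyson's Wager (illustrative numbers).
p_success = 1e-30                 # tiny, non-zero chance the research pays off
lives_if_success = float("inf")   # "infinitely many blissful lives"

ev_bednets = 1.0                               # one death prevented, for certain
ev_positronium = p_success * lives_if_success  # inf: any p > 0 yields infinity

print(ev_positronium > ev_bednets)  # True for every nonzero p_success
```

Naive expected value says to fund the positronium research no matter how tiny the probability is; the question is whether relative expectations can block that verdict.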
Recently, I was reading David Thorstad’s new paper “Existential risk pessimism and the time of perils”. In it, he models the value of reducing existential risk on a range of different assumptions.
The headline results: 1) most plausibly, existential risk reduction is not overwhelmingly valuable; it may still be quite valuable, but it probably doesn't swamp all other cause areas; and 2) thinking that extinction is more likely tends to weaken the case for existential risk reduction rather than strengthen it.
One of the results struck me as particularly interesting; I call it the repugnant solution:
If we can reduce existential risk to 0% per century across all future centuries, that act is infinitely valuable, even if the initial risk was absolutely tiny and each century is only barely of positive value. It is therefore better than basically anything else we could do.
Perhaps, in a Pascalian way, if we think there is a tiny chance that some particular action will permanently reduce existential risk to zero, that act too is infinitely valuable in expectation, and everything breaks.
This remains true even if we decrease the value of each century from "really amazingly great" to "only just net positive".
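To make the arithmetic concrete, here's a minimal sketch of the paper's simple constant-risk model as I understand it: value v per century, per-century extinction risk r, and expected value equal to the sum over centuries of P(survive to century t) × v. The horizon H and all the numbers are my own illustrative choices, not Thorstad's:

```python
# Simple constant-risk model: v per century, per-century risk r,
# EV = sum over centuries of P(alive at century t) * v.
def total_value(v, risks):
    ev, p_alive = 0.0, 1.0
    for r in risks:
        p_alive *= 1.0 - r
        ev += p_alive * v
    return ev

H = 100_000  # finite horizon standing in for "all future centuries"

# Result 2 (pessimism weakens the case): halving risk in every century
# is worth far more when background risk is low than when it is high.
for r in (0.01, 0.2):
    gain = total_value(1.0, [r / 2] * H) - total_value(1.0, [r] * H)
    print(f"r = {r}: value of halving risk forever ~ {gain:.0f}")  # ~ v/r

# The repugnant solution: risk permanently reduced to 0% makes total
# value grow without bound, even with barely-positive centuries.
print(total_value(0.001, [0.0] * H))  # ~ 0.001 * H; diverges as H grows
```

The last line is what makes it repugnant: a barely-positive future at permanently zero risk eventually beats any finite alternative.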
What do you think are the biggest wins in technical safety so far? What do you see as the most promising strategies going forward?