This is a special post for quick takes by MikhailSamin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

(I haven’t thought about this much and might be very wrong, but it seems worth putting the thought out there.) I feel like putting 🔸 at the end of social media names might be bad. I’m curious what the strategy was.

  • The willingness to do this might be anti-correlated with status: publicly displaying the pledge might be a less important part of the identity of more prominent people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)

  • I’d guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the pledge, too).

  • This anti-correlation with status might mean that people will associate the pledge with average (though altruistic) Twitter users, not with cool people they want to be more like.

  • You won’t see a lot of e/accs putting the 🔸 in their names. There might be downsides to a group of people being perceived as clearly outlined and having this as an almost political identity; it seems bad to take on directionally political markers that might do mind-killing things both to people with 🔸 and to people who might argue with them.

How do effectiveness estimates change if everyone saved dies in 10 years?

“Saving lives near the precipice”

Has anyone made comparisons of the effectiveness of charities conditional on the world ending in, e.g., 5-15 years?

[I’m highly uncertain about this, and I haven’t done much thinking or research]

For many orgs and interventions, the impact estimates would likely be very different from the default ones made by, e.g., GiveWell. I’d guess the ordering of the most effective non-longtermist charities might change a lot as a result.

It would be interesting to see how that ordering changes once at least some estimates account for the world ending in n years.

Maybe one could start by updating GiveWell’s estimates. For DALYs, one would need to recalculate the values in GiveWell’s spreadsheets that are derived from distributions which get capped or otherwise changed if the world ends (e.g., life expectancy). For the estimates of the relative value of averting deaths at certain ages, one would need to estimate and subtract something representing that the deaths still come at (age + n). The second-order and long-term effects would also be different, but estimating the impact there is probably more time-consuming.
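For illustration, here’s a minimal sketch of the life-expectancy cap (a hypothetical calculation: the 60-year figure and the hard cutoff at year n are my simplifying assumptions, not GiveWell’s numbers):

```python
# Sketch: years of life gained by averting a death now, if the world
# ends with certainty in n_doom years. Illustrative numbers only.

def years_saved(remaining_life_expectancy: float, n_doom: float) -> float:
    """Remaining life expectancy, capped by doom at year n_doom."""
    return min(remaining_life_expectancy, n_doom)

# Example: a beneficiary with ~60 remaining life-years in the default model.
for n in (5, 10, 15):
    print(f"world ends in {n:>2} years: {years_saved(60, n):.0f} years saved "
          f"(vs. 60 in the default estimate)")
```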

It seems like a potentially important question, since many people have short AGI timelines in mind. So it might be worthwhile to research this area, to give people the ability to weigh different estimates of charities’ impacts by their probability of an existential catastrophe.

Please let me know if someone has already worked this out or is working on it, if there’s some reason not to talk about this kind of thing, or if I’m wrong about something.

I think this could be an interesting avenue to explore. One very basic way to (very roughly) do this is to model p(doom) effectively as a discount rate. This could be an additional user input on GiveWell's spreadsheets.

So, for example, if your p(doom) is 20% in 20 years, then you could increase the discount rate by roughly one percentage point per year.

[Technically this will be somewhat off, since (I'm guessing) most people's p(doom) doesn't increase at a constant rate, in the way a fixed discount rate does.]
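For concreteness, a minimal sketch of that conversion (it assumes a constant annual hazard, which is exactly the simplification the caveat above flags; the 4% base discount rate is an assumed placeholder, not a claim about GiveWell’s inputs):

```python
# Convert a cumulative p(doom) over a horizon into a constant annual
# hazard rate, then fold it into an existing discount rate.

def annual_hazard(p_doom: float, horizon_years: float) -> float:
    """Constant yearly doom probability h with (1 - h)^T = 1 - p_doom."""
    return 1 - (1 - p_doom) ** (1 / horizon_years)

h = annual_hazard(0.20, 20)            # ~1.11% per year
base = 0.04                            # assumed base discount rate
combined = 1 - (1 - base) * (1 - h)    # survival-adjusted yearly discount
print(f"extra discount: {h:.2%}/yr, combined: {combined:.2%}/yr")
```

With these numbers the extra discount comes out to about 1.1% per year, matching the “roughly 1%” figure above.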

I think discounting QALYs/DALYs by the probability of doom makes sense if you want a better estimate of the QALYs/DALYs themselves, but it doesn’t help with estimating the relative effectiveness of charities, and so it doesn’t help allocate funding better.

(It would be nice to input a distribution over the world ending in the next n years and get the discounted values. But it’s the relative cost of ways to save a life that matters: we can’t save everyone, so we want to save the most lives and reduce suffering the most, and answering the question of how to do that means understanding what our actions lead to, so we can compare our options. Knowing how many people you’re saving is instrumental to saving the most people from the dragon. If it costs at least $15,000 to save a life, you don’t stop saving lives because that’s too much; human life is much more valuable. If we succeed, you can imagine spending stars on saving a single life. And if we don’t, we’d still like to reduce suffering the most and let as many people as we can live for as long as humanity lives; for that, we need estimates of the relative value of different interventions conditional on the world ending in n years with some probability.)
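A minimal sketch of the kind of comparison that last sentence asks for (the doom distribution and the two stylized interventions are invented for illustration, not real charity estimates):

```python
# Compare two stylized interventions under a distribution over doom years.
# A: averting a death (value accrues over the survivor's remaining life).
# B: averting chronic illness (smaller value per year, shorter duration).

# Illustrative pmf: 5% chance the world ends in each of years 1-10,
# remaining 50% mass on "no doom within the horizon" (bucketed at year 100).
doom_pmf = {t: 0.05 for t in range(1, 11)}
doom_pmf[100] = 1 - sum(doom_pmf.values())

def expected_impact(value_per_year: float, duration: float) -> float:
    """E[impact] when value accrues each year until doom or duration ends."""
    return sum(p * value_per_year * min(t, duration)
               for t, p in doom_pmf.items())

ev_a = expected_impact(1.0, 60)   # ~60 life-years at full value
ev_b = expected_impact(0.2, 40)   # 0.2 units/year for ~40 years
print(f"A: {ev_a:.2f}, B: {ev_b:.2f}, ratio A/B: {ev_a / ev_b:.2f}")
# Without any doom, the ratio would be 60 / 8 = 7.5; with this doom
# distribution it shifts to ~7.2 -- relative values change, not just totals.
```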
