Bio

You can send me a message anonymously here: https://www.admonymous.co/will

Comments

That sounds great too. Perhaps both axis labels should be possible, and the one to use should just be specified for each question asked.

I like the idea of operationalizing the Agree/Disagree axis as the probability that the statement is true. So "Agree" is 100%, neutral is 50%, and disagree is 0%. In that case, 20% vs 40% means something concrete.
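To make the mapping explicit, here's a minimal sketch (assuming a slider that reports a position in [-1, 1]; the linear mapping is just my illustration, not anything the Forum actually implements):

```python
def slider_to_probability(position: float) -> float:
    """Map a slider position in [-1, 1] to P(statement is true).

    -1 (Disagree) -> 0%, 0 (neutral) -> 50%, +1 (Agree) -> 100%.
    A hypothetical linear mapping, not the Forum's actual behavior.
    """
    if not -1 <= position <= 1:
        raise ValueError("position must be in [-1, 1]")
    return (position + 1) / 2

# Under this mapping, credences of 20% vs 40% correspond to
# slider positions of -0.6 vs -0.2 -- a concrete difference.
print(f"{slider_to_probability(-0.6):.0%}")  # 20%
print(f"{slider_to_probability(-0.2):.0%}")  # 40%
```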

How does marginal spending on animal welfare and global health influence the long-term future?

I'd guess that most of the expected impact in both cases comes from the futures in which Earth-originating intelligent life (E-OIL) avoids near-term existential catastrophe and goes on to create a vast amount of value in the universe by building a much larger economy, colonizing other solar systems and galaxies, and transforming the matter there into stuff that matters far more morally than lifeless matter ("big futures").

For animal welfare spending, then, perhaps most of the expected impact comes from the spending reducing the amount of suffering of animals and other non-human sentient beings (e.g. future AIs) in the universe, compared to the big futures without the late 2020s animal welfare spending. Perhaps the causal pathway is changing what people think about the moral value of animal suffering, which in turn positively affects what E-OIL does with the reachable universe in big futures (less animal suffering and a lower probability of neglecting the importance of sentient AI moral patients).

For global health spending, perhaps most of the expected impact comes from increasing the probability that E-OIL goes on to have a big future. Assuming the big futures are net positive (as I think is likely), this would be a good thing.

I think some global health spending probably has much more of an impact on this than others. For example, $100M would only put a dent in annual malaria deaths (~20,000 fewer deaths, a <5% reduction in annual deaths for one year), and it seems like that would have quite a small effect on existential risk. Whereas the same money spent on reducing the probability of a severe global pandemic in the 2030s (spending which seems like it could qualify as "global health" spending) plausibly could have a much more significant effect. I don't know how much $100M could reduce the odds of a global pandemic in the 2030s, but intuitively I'd guess it could make enough of a difference to do much more to reduce 21st century existential risk than reducing malaria deaths would.
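As a rough sanity check on those malaria numbers (a sketch; the ~$5,000-per-death-averted and ~600,000-annual-deaths figures are round numbers I'm supplying, not from any source cited here):

```python
# Back-of-the-envelope check on the malaria figures above.
spending = 100_000_000           # $100M
cost_per_death_averted = 5_000   # assumed: roughly a GiveWell-style top-charity cost
annual_malaria_deaths = 600_000  # assumed: roughly the recent global annual toll

deaths_averted = spending / cost_per_death_averted
reduction = deaths_averted / annual_malaria_deaths

print(f"{deaths_averted:,.0f} deaths averted")      # 20,000
print(f"{reduction:.1%} of annual malaria deaths")  # 3.3%, i.e. <5%
```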

How would the best "global health" spending compare to the "animal welfare" spending? Could it reduce existential risk by enough to do more good than the improved values achieved via animal welfare spending could do?

I think it plausibly could (i.e. the global health spending plausibly could do much more good), especially in the best futures: those in which AI turns out to do our moral philosophy really well, such that our current values don't get locked in but rather we figure out fantastic moral values after e.g. a long reflection, and terraform the reachable universe based on those values.

But I think that in expectation, $100M of global health spending would only reduce existential risk by a small amount, increasing the EV of the future by something like <0.001%. Intuitively, an extra $100M spent on animal welfare (given the relatively small size of current spending on animal welfare) could do a lot more good, increasing the value of the big future by a larger amount than the global health scenario's small increase in the probability of a big future.
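To make the comparison concrete, here's a toy expected-value sketch (every number is an illustrative placeholder I'm making up, not an estimate): the global health path adds value via a tiny bump to the probability of a big future, while the animal welfare path adds value via a small bump to the value of that future.

```python
# Toy EV comparison; all numbers are illustrative placeholders.
V = 1.0      # value of a "big future" (normalized)
p_big = 0.5  # baseline probability of a big future

# Global health path: slightly raises the probability of a big future.
dp = 0.00001  # +0.001 percentage points (placeholder)
ev_global_health = dp * V

# Animal welfare path: slightly raises the value of the big future itself.
dv = 0.0001   # +0.01% of the future's value (placeholder)
ev_animal_welfare = p_big * dv * V

print(ev_global_health, ev_animal_welfare)  # 1e-05 vs 5e-05
# With these placeholders the animal welfare term is ~5x larger, but the
# comparison hinges entirely on the relative sizes of dp and dv.
```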

Initially I was answering about halfway toward Agree from Neutral, but after thinking this out, I'm moving further toward Agree.

This is horrifying! A friend of the author shared this along with a just-published Business Insider article that links to this post:

https://www.businessinsider.com/dangerous-surgery-stop-blushing-side-effects-ruined-life-no-emotions-2024-2

I'm curious whether you or the other past participants you know who had a good experience with AISC are in a position to help fill the funding gap AISC currently has. Even if you (collectively) can't fully fund the gap, I'd see your donations as a pretty strong signal that AISC is worth funding. Or, if you'd prefer to fund other giving opportunities instead (whether in AIS or other cause areas), I'd find that valuable to know too.

But on the other hand, I've regularly met alumni who tell me how useful AISC was for them, which convinces me AISC is clearly very net positive.

Naive question, but does AISC have enough such past alumni that you could meet your current funding need by asking them for support? It seems like they'd be in the best position to evaluate the program and know that it's worth funding.

Nevertheless, AISC is probably about ~50x cheaper than MATS

~50x is a big difference, and I notice the post says:

We commissioned Arb Research to do an impact assessment
One preliminary result is that AISC creates one new AI safety researcher per around $12k-$30k USD of funding. 

Multiplying that number (which I'm agnostic about) by 50 gives $600k-$1.5M USD. Does your ~50x still seem accurate in light of this?

I'm a big fan of OpenPhil/GiveWell popularizing longtermist-relevant facts via sponsoring popular YouTube channels like Kurzgesagt (21M subscribers). That said, I just watched two of their videos and found a mistake in one[1] and took issue with the script-writing in the other one (not sure how best to give feedback -- do I need to become a Patreon supporter or something?):

Why Aliens Might Already Be On Their Way To Us

My comment:

9:40 "If we really are early, we have an incredible opportunity to mold *thousands* or *even millions* of planets according to our visions and dreams." -- Why understate this? Kurzgesagt already made a video imagining humanity colonizing the Milky Way Galaxy to create a future of "a tredecillion potential lives" (10^42 people), so why not say 'hundreds of billions of planets' (the number of planets in the Milky Way), 'or even more if we colonize other galaxies before other loud/grabby aliens reach them first'? This also seems inaccurate because the chance that we colonize between 1,000-9,999,999 planets (or even 1,000-9,999,999 planets) is less than the probability that we colonize >10 million (or even >1 billion) planets.

As an aside, the reason I watched these two videos just now was because I was inspired to look them up after watching the depressing new Veritasium video Do People Understand the Scale of the Universe? in which he shows a bunch of college students from a university with 66th percentile average SAT scores who do not know basic facts about the universe.

[1] The mistake I found in the most recent video, You Are The Center of The Universe (Literally), was that it said (9:10) that the diameter of the observable universe is 465,000 Milky Way galaxies side-by-side, but that's actually the radius of the observable universe, not the diameter.
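A quick arithmetic check (using round figures I'm supplying: observable-universe diameter ≈ 93 billion light-years, Milky Way diameter ≈ 100,000 light-years):

```python
# Is 465,000 Milky Ways the radius or the diameter of the observable universe?
observable_universe_diameter_ly = 93e9  # ~93 billion light-years (assumed round figure)
milky_way_diameter_ly = 1e5             # ~100,000 light-years (assumed round figure)

across_diameter = observable_universe_diameter_ly / milky_way_diameter_ly
across_radius = across_diameter / 2

print(f"{across_diameter:,.0f} Milky Ways across the diameter")  # 930,000
print(f"{across_radius:,.0f} Milky Ways across the radius")      # 465,000
```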

I also had a similar experience: I made my first substantial donation before learning that non-employer counterfactual donation matches existed.

It's the only donation I've regretted, since by delaying it 6 months I could have doubled the amount of money I directed to the charity at no extra cost to me.

Great point, thanks for sharing!

While I assume that all long-time EAs learn that employer donation matching is a thing, we'd do well as a community to ensure that everyone learns about it before donating a substantial amount of money, and clearly that's not the case now.

Reminds me of this insightful XKCD: https://xkcd.com/1053/

For each thing 'everyone knows' by the time they're adults, every day there are, on average, 10,000 people in the US hearing about it for the first time.
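For the curious, that figure falls out of the US birth rate (my reconstruction of the comic's arithmetic, using a round ~4M births/year):

```python
# Roughly XKCD 1053's arithmetic, with a round assumed birth rate.
us_births_per_year = 4_000_000  # assumed round figure
print(f"{us_births_per_year / 365:,.0f} people/day")  # ~10,959, i.e. ~10,000
```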
