Benjamin_Todd

I should maybe have been more cautious - how messaging will pan out is really unpredictable.

However, the basic idea is that if you're saying "X might be a big risk!" and then X turns out to be a damp squib, it looks like you cried wolf.

If there's a big AI crash, I expect there will be a lot of people rubbing their hands saying "wow those doomers were so wrong about AI being a big deal! so silly to worry about that!"

That said, I agree that if your messaging is just "let's end AI!", there are some circumstances under which you could look better after a crash, especially if it looks like your efforts contributed to it, or if it failed for reasons you predicted / the things you were protesting about (e.g. accidents happening, causing it to get shut down).

However, if the AI crash is for unrelated reasons (e.g. the scaling laws stop working, it takes longer to commercialise than people hope), then I think the Pause AI people could also look silly – why did we bother slowing down the mundane utility we could get from LLMs if there's no big risk?

I agree people often overlook that (and also future resources).

I think bio and climate change also have large cumulative resources.

But I see this as a significant reason in favour of AI safety, which has become less neglected in annual terms recently, but which is a very new field compared to the others (so its cumulative resources are still relatively small).

Also a reason in favour of the post-TAI causes like digital sentience.

Or you might like to look into Christian's grantmaking at Founders Pledge: https://80000hours.org/after-hours-podcast/episodes/christian-ruhl-nuclear-catastrophic-risks-philanthropy/

Thanks, that's helpful background!

I agree tractability of the space is the main counterargument, and MacArthur might have had good reasons to leave. Like I say in the post, I'd suggest thinking about this issue carefully if you're interested in giving to this area.

I don't focus exclusively on philanthropic funding. I added these paragraphs to the post to clarify my position:

I agree that a full accounting of neglectedness should consider all resources going towards the cause (not just philanthropic ones), and that 'preventing nuclear war' more broadly receives significant attention from defence departments. However, even considering those resources, it still seems about as neglected as biorisk.

And the amount of philanthropic funding still matters because certain important types of work in the space can only be funded by philanthropists (e.g. lobbying or other policy efforts you don't want to originate within a certain national government).

I'd add that if there's almost no EA-inspired funding in a space, there are likely to be some promising gaps for someone applying that mindset.

In general, it's a useful approximation to think of neglectedness as a single number, but the ultimate goal is to find good grants, and to do that it's also useful to break down neglectedness into different types of resources, and consider related heuristics (e.g. that there was a recent drop in funding).

--

Causes vs. interventions more broadly is a big topic. The very short version is that I agree doing cost-effectiveness estimates of specific interventions is a useful input into cause selection. However, I also think the INT framework is very useful. One reason is that it seems more robust. Another is that in many practical planning situations that involve accumulating expertise over years (e.g. choosing a career, building a large grantmaking programme), it seems better to focus on a broad cluster of related interventions.

E.g. you could do a cost-effectiveness estimate of corporate campaigns and determine that ending factory farming is the most cost-effective cause. But once you've spent 5 years building career capital in factory farming, the available interventions, or your calculations about them, will likely be very different.

It might take more than $1bn, but at around that level you could become a major funder of a cause like AI safety, so you'd already be getting significant benefits within a cause.

Agree you'd need to average 2x for the last point to work.

Though note the three pathways to impact - talent, intellectual diversity, OP gaps - are mostly independent, so you'd only need one of them to work.

Also agree that in practice there would be some funging between the two, which would limit the differences; that's a good point.

I'd also be interested in that. Maybe worth adding that the other grantmaker, Matthew, is younger. He graduated in 2015 so is probably under 32.

Intellectual diversity seems very important to figuring out the best grants in the long term.

If the community currently has, say, $20bn to allocate, you only need a 10% improvement to future decisions to be worth +$2bn.

Funder diversity also seems very important for community health, and therefore our ability to attract & retain talent. It's not attractive to have your org & career depend on such a small group of decision-makers.

I might quantify the value of the talent pool at around another $10bn, so again, you only need a ~10% increase here to be worth a billion, and over-centralisation seems like one of the bigger problems.

The current situation also creates a single point of failure for the whole community.

Finally, it still seems like OP has various kinds of institutional bottlenecks that mean they can't obviously fund everything that would be 'worth' funding in the abstract (and even more so to do all the active grantmaking that would be worth doing). They also have PR constraints that might make some grants difficult. And it seems unrealistic to expect any single team (however good they are) not to have some blind spots.

$1bn is only 5% of the capital that OP has, so you'd only need to find 1 grant they've missed for every 20 that OP makes, at only 2x the effectiveness of marginal OP grants, in order to get 2x the value.
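To spell out the rough arithmetic behind that (treating OP's capital as ~$20bn and writing v for the value per dollar of OP's marginal grants, both illustrative assumptions rather than precise figures):

$$\$1\text{bn} \times 2v \;=\; \$2\text{bn} \times v$$

i.e. $1bn deployed into gaps at 2x marginal OP effectiveness produces as much value as $2bn added at OP's margin, which is twice what the same money would have achieved as a simple top-up.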

One background piece of context is that I think grants often vary by more than 10x in cost-effectiveness.
