In June this year, Good Ventures announced that it would stop supporting certain sub-causes and would not, by default, expand into new cause areas. Neither Good Ventures nor Open Philanthropy published a list of the sub-causes or organisations that were losing support.

Both Good Ventures and Alexander Berger (on behalf of Open Philanthropy) expressed, as they have before, that they would like to see more diversity of funding across the cause areas they support.

From the Good Ventures blog: “Our hope is that other donors will be in a position to take on some of these opportunities and that, over the longer term, this will lead to healthier and more resilient ecosystems with more diversified bases of funding.”

It’s been a few months now. Wild Animal Initiative have shared the effect the funding shift had on them, and later announced that their funding gap was being filled by The Navigation Fund through 2026. But I haven’t heard from many other organisations.

Knowledge of the other areas where funding has been cut, and of the alternative funders who have stepped in, is currently dispersed through the community. I think it would be valuable to share this information more widely: it could help donors find out about important funding gaps, and organisations find out about possible alternative funders.

If you represent an organisation, and you are able to share your story, please do so in the answers below. Thank you!

PS — in my opinion, the EA movement wouldn’t be as vibrant and capable as it is today without Good Ventures and Open Philanthropy. I doubt anyone would take it as such, but to be clear: I’m not asking this question as a rhetorical dig at Good Ventures. More information here would be useful regardless of your opinion of Good Ventures’ decision to shift funding away from certain sub-causes.


A few months ago, Good Ventures, the primary funder behind Open Philanthropy, decided to exit grantmaking in farmed invertebrates and wild animals. These areas had supported much of Rethink Priorities' work over the last 18 months, including recent publications on shrimp welfare and farmed insect welfare. While The Navigation Fund has committed to sustaining our insect welfare portfolio through 2026, other invertebrate and wild animal projects lack secure funding, making additional support crucial for their continuation. The switch in funding approaches has also, in my (admittedly speculative) estimation, cost Rethink Priorities significant funding for digital sentience work; some funds have been raised in lieu, but no long-term commitments have been secured. My main concern is the long-term outlook for these areas: while there is some interest for the next year or two, sustained funding remains uncertain, and the overall impact opportunities now seem significantly diminished by the more uncertain and reduced funding landscape.

Based on this, it appears shrimp welfare was among the areas affected, and that TNF has filled SWP's funding gap until the end of 2026.

I'd be interested in updates about funding for the welfare of smaller animals in general!

@Habryka has stated that Lightcone has been cut off from OpenPhil/GV funding; my understanding is that OP/GV/Dustin do not like the rationalism brand because it attracts right-coded folks. Many kinds of AI safety work also seem cut off from this funding; reposting a comment from Oli:

As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don't think this is because of any COIs, it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any AI Open Phil funded policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will steer away from that.

Open Phil is also very limited in what they can say about what they can or cannot fund, because that itself is something that they are worried will make people annoyed with Dustin, which creates a terrible fog around how OP is thinking about stuff.[1]

Honestly, I think there might no longer be a single organization that I have historically been excited about that OpenPhil wants to fund. MIRI could not get OP funding, FHI could not get OP funding, Lightcone cannot get OP funding, my best guess is Redwood could not get OP funding if they tried today (though I am quite uncertain of this), most policy work I am excited about cannot get OP funding, the LTFF cannot get OP funding, any kind of intelligence enhancement work cannot get OP funding, CFAR cannot get OP funding, SPARC cannot get OP funding, FABRIC (ESPR etc.) and Epistea (FixedPoint and other Prague-based projects) cannot get OP funding, not even ARC is being funded by OP these days (in that case because of COIs between Paul and Ajeya).[2] I would be very surprised if Wentworth's work, or Wei Dai's work, or Daniel Kokotajlo's work, or Brian Tomasik's work could get funding from them these days. I might be missing some good ones, but the funding landscape is really quite thoroughly fucked in that respect. My best guess is Scott Alexander could not get funding, but I am not totally sure.[3]

I cannot think of anyone who I would credit with the creation or shaping of the field of AI Safety or Rationality who could still get OP funding. Bostrom, Eliezer, Hanson, Gwern, Tomasik, Kokotajlo, Sandberg, Armstrong, Jessicata, Garrabrant, Demski, Critch, Carlsmith, would all be unable to get funding[4] as far as I can tell. In as much as OP is the most powerful actor in the space, the original geeks are being thoroughly ousted.[5]

In general, my sense is that if you want to be an OP longtermist grantee these days, you have to be the kind of person that OP thinks is not and will not be a PR risk, who OP thinks has "good judgement" on public comms, who isn't the kind of person who might say weird or controversial stuff, and who is not at risk of becoming politically opposed to OP. This includes not annoying any potential allies that OP might have, or associating with anything that Dustin doesn't like, or that might strain Dustin's relationships with others in any non-trivial way.

Of course OP will never ask you to fit these constraints directly, since that itself could explode reputationally (and also because OP staff themselves seem miscalibrated on this and do not seem in-sync with their leadership). Instead you will just get less and less funding, or just be defunded fully, if you aren't the kind of person who gets the hint that this is how the game is played now.

And to provide some pushback on things you say, I think now that OP's bridges with OpenAI are thoroughly burned after the Sam firing drama, OP is pretty OK with people criticizing OpenAI (since what social capital is there left to protect here?). My sense is criticizing Anthropic is slightly risky, especially if you do it in a way that doesn't signal what OP considers good judgement on maintaining and spending your social capital appropriately (i.e. telling them that they are harmful for the world, or should really stop, is bad, but doing a mixture of praise and criticism without taking any controversial top-level stance is fine), but mostly it also isn't the kind of thing that OP will totally freak out about. I think OP used to be really crazy about this, but now is a bit more reasonable, and it's not the domain where OP's relationship to reputation-management is causing the worst failures.

I think all of this is worse in the longtermist space, though I am not confident. At the present it wouldn't surprise me very much if OP would defund a global health grantee because their CEO endorsed Trump for president, so I do think there is also a lot of distortion and skew there, but my sense is that it's less, mostly because the field is much more professionalized and less political (though I don't know how they think, for example, about funding on corporate campaign stuff which feels like it would be more political and invite more of these kinds of skewed considerations).

Also, to balance things, sometimes OP does things that seem genuinely good to me. The lead reduction fund stuff seems good and genuinely neglected, and I don't see that many of these dynamics at play there (though I do also genuinely care about it vastly less than OP's effect on AI Safety and Rationality things).

Also, Manifold, Manifund, and Manifest have never received OP funding — I think in the beginning we were too illegible for OP, and by the time we were more established and OP had hired a full-time forecasting grantmaker, I would speculate that we were seen as too much of a reputational risk, given e.g. our speaker choices at Manifest.
