Hey there~ I'm Austin, currently building https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
I'm not aware of any projects that aim to advise what we might call "Small Major Donors": people giving away perhaps $20k-$100k annually.
We don't advertise very much, but my org (Manifund) does try to fill this gap:
I encourage Sentinel to add a paid tier on their Substack, just as an easy mechanism for folks like you & Saul to give money, without paywalling anything. While eg $10/mo subscriptions are unlikely to meaningfully affect Sentinel's finances at this stage, I think getting dollars in the bank can be a meaningful proof of value, both to yourselves and to other donors.
@Habryka has stated that Lightcone has been cut off from OpenPhil/GV funding; my understanding is that OP/GV/Dustin do not like the rationalism brand because it attracts right-coded folks. Many kinds of AI safety work also seem cut off from this funding; reposting a comment from Oli:
Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia-inducing. I am quite confident of the broad trends here, but it's definitely more likely that I am getting things wrong here than in other domains where evidence is more straightforward to interpret, and people are less likely to shape their behavior in ways that include plausible deniability and defensibility.
...
As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don't think this is because of any COIs, it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any AI Open Phil funded policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will steer away from that.
Open Phil is also very limited in what they can say about what they can or cannot fund, because that itself is something that they are worried will make people annoyed with Dustin, which creates a terrible fog around how OP is thinking about stuff.[1]
Honestly, I think there might no longer be a single organization that I have historically been excited about that OpenPhil wants to fund. MIRI could not get OP funding, FHI could not get OP funding, Lightcone cannot get OP funding, my best guess is Redwood could not get OP funding if they tried today (though I am quite uncertain of this), most policy work I am excited about cannot get OP funding, the LTFF cannot get OP funding, any kind of intelligence enhancement work cannot get OP funding, CFAR cannot get OP funding, SPARC cannot get OP funding, FABRIC (ESPR etc.) and Epistea (FixedPoint and other Prague-based projects) cannot get OP funding, not even ARC is being funded by OP these days (in that case because of COIs between Paul and Ajeya).[2] I would be very surprised if Wentworth's work, or Wei Dai's work, or Daniel Kokotajlo's work, or Brian Tomasik's work could get funding from them these days. I might be missing some good ones, but the funding landscape is really quite thoroughly fucked in that respect. My best guess is Scott Alexander could not get funding, but I am not totally sure.[3]
I cannot think of anyone who I would credit with the creation or shaping of the field of AI Safety or Rationality who could still get OP funding. Bostrom, Eliezer, Hanson, Gwern, Tomasik, Kokotajlo, Sandberg, Armstrong, Jessicata, Garrabrant, Demski, Critch, Carlsmith, would all be unable to get funding[4] as far as I can tell. In as much as OP is the most powerful actor in the space, the original geeks are being thoroughly ousted.[5]
In general, my sense is that if you want to be an OP longtermist grantee these days, you have to be the kind of person that OP thinks is not and will not be a PR risk, and who OP thinks has "good judgement" on public comms, and who isn't the kind of person who might say weird or controversial stuff, and is not at risk of becoming politically opposed to OP. This includes not annoying any potential allies that OP might have, or associating with anything that Dustin doesn't like, or that might strain Dustin's relationships with others in any non-trivial way.
Of course OP will never ask you to fit these constraints directly, since that itself could explode reputationally (and also because OP staff themselves seem miscalibrated on this and do not seem in-sync with their leadership). Instead you will just get less and less funding, or just be defunded fully, if you aren't the kind of person who gets the hint that this is how the game is played now.
And to provide some pushback on things you say, I think now that OP's bridges with OpenAI are thoroughly burned after the Sam firing drama, OP is pretty OK with people criticizing OpenAI (since what social capital is there left to protect here?). My sense is criticizing Anthropic is slightly risky, especially if you do it in a way that doesn't signal what OP considers good judgement on maintaining and spending your social capital appropriately (i.e. telling them that they are harmful for the world, or should really stop, is bad, but doing a mixture of praise and criticism without taking any controversial top-level stance is fine), but mostly also isn't the kind of thing that OP will totally freak out about. I think OP used to be really crazy about this, but now is a bit more reasonable, and it's not the domain where OP's relationship to reputation-management is causing the worst failures.
I think all of this is worse in the longtermist space, though I am not confident. At present it wouldn't surprise me very much if OP would defund a global health grantee because their CEO endorsed Trump for president, so I do think there is also a lot of distortion and skew there, but my sense is that it's less, mostly because the field is much more professionalized and less political (though I don't know how they think, for example, about funding on corporate campaign stuff, which feels like it would be more political and invite more of these kinds of skewed considerations).
Also, to balance things, sometimes OP does things that seem genuinely good to me. The lead reduction fund stuff seems good, genuinely neglected, and I don't see that many of these dynamics at play there (I do also genuinely care about it vastly less than OP's effect on AI Safety and Rationality things).
Also, Manifold, Manifund, and Manifest have never received OP funding -- I think in the beginning we were too illegible for OP, and by the time we were more established and OP had hired a fulltime forecasting grantmaker, I would speculate that we were seen as too much of a reputational risk given eg our speaker choices at Manifest.
This looks awesome! $1k struck me as a pretty modest prize pool given the importance of the questions; I'd love to donate $1k towards increasing this prize, if you all would accept it (or possibly more, if you think it would be useful).
I'd suggest structuring this as 5 more $200 prizes (or 10 $100 honorable mentions) rather than doubling the existing prizes to $400 -- but really it's up to you, I'd trust your allocations here. Let me know if you'd be interested!
The act of raising funding from "EA general public" is quite rare at the moment - most orgs I'm familiar with get the vast majority of their funding from a handful of institutions (OP, EA Funds, SFF, some donor circles).
I do think fundraising from the public can be a good forcing function, and I wish more EA nonprofits tried to do so. Especially meta/EA internal orgs like 80k or EA Forum or EAG (or Lightcone), since there, "how much is a user willing to donate" could be a very good metric for how much value users are receiving from the org's work.
One of the best things that happened to Manifold early on was when our FTX Future Fund regrantor offered to cover up to half of our $2m seed round - contingent on us raising the other half from other sources. We then had to build the muscle of fundraising from regular Silicon Valley angels/VCs, which especially served us well when Future Fund went kaput.
Manifund tries to make public fundraising for EA projects much easier, and there have been a few success cases such as MATS and Act I - though in the end most of our dollars moved come from our regrantors.
If you are a mechanical engineer digging around for new challenges and you’re not put off by everyone else’s failure to turn a profit, I’d be enthusiastic about your building a lamp and would do my best to help you get in touch with people you could learn from.
If this describes you, I'd also love to help (eg with funding) -- reach out to me at austin@manifund.org!
Thanks for posting this! I appreciate the transparency from the CEA team around organizing this event and posting about the results; putting together this kind of stuff is always effortful for me, so I want to celebrate when others do it.
I do wish this retro had a bit more in the form of concrete reporting about what was discussed, or specific anecdotes from attendees, or takeaways for the broader EA community; eg last year's MCF reports went into substantial depth on these, which I really enjoyed. But again, these things can be hard to write up, perfect shouldn't be the enemy of good enough, and I'm grateful for the steps that y'all have already taken towards showing your work in public.
Thanks for the questions! Most of our due diligence happens in the step where the Manifund team decides whether to approve a particular grant; this generally happens after a grant has met its minimum funding bar and the grantee has signed our standard grant agreement (example). At that point, our due diligence usually consists of reviewing their proposal as written for charitable eligibility, as well as a brief online search, looking through the grant recipient's eg LinkedIn and other web presences to get a sense of who they are. For larger grants on our platform (eg $10k+), we usually have additional confidence that the grant is legitimate coming from the donors or regrantors themselves.
In your specific example, it's very possible that I personally could have missed cross-verifying your claim of attending Yale (with the likelihood of such a miss decreasing for larger grants). Part of what's different about our operations is that we open up the screening process so that anyone on the internet can chime in if they see something amiss; to date we've paused two grants (out of ~160) based on concerns raised by others.
I believe we're classified as a public charity and take on expenditure responsibility for our grants, via the terms of our grant agreement and the status updates we ask for from grantees.
And yes, our general philosophy is that Manifund as a platform is responsible for ensuring that a grant is legitimate under US 501c3 law, while being agnostic about the impact of specific grants -- that's the role of donors and regrantors on our platform.
I'd really appreciate you leaving thoughts on the projects, even if you decided not to fund them. I expect that most project organizers would also appreciate your feedback, to help them understand where their proposals as written are falling short. Copy & paste of your personal notes would be great!
That's good to know - I assume Oli was being somewhat hyperbolic here. Do you (or anyone else) have examples of right-of-center policy work that OpenPhil has funded?