The EA Infrastructure Fund isn’t currently funding-constrained. Hooray! This means that if you submit a strong application that fits within our “principles-first” effective altruism scope soon, we’d be excited to fund it, and won’t be constrained by a lack of money. We’re open to considering a range of grant sizes, including grants over $500,000 and below $10,000.[1]
In part, we’re writing this post because we spoke to a few people with projects we’d be interested in funding who didn’t know that they could apply to EAIF. If you’re unsure, feel free to ask me questions or just apply!
The rest of this post gives you some tips and ideas for how you could apply, including ideas we’re excited to receive applications for. I (Jamie) wrote this post relatively quickly; EAIF staff might make more such posts if people find them helpful.
🔍 What’s in scope?
- Research that aids prioritisation across different cause areas.
- Projects that build communities focused on impartial, scope-sensitive and ambitious altruism.
- Infrastructure, especially epistemic infrastructure, to support these aims.
- (More on this in this post and on our website, though the site needs a bit of a revamp. And please err on the side of applying: you don’t need to be fully ‘principles-first’ yourself; that’s a description of our strategy, not a requirement for applicants.)
💪 What makes an application strong?
- A great idea — promising theory of change and expected cost-effectiveness.[2]
- Evidence suggesting you are likely to execute well on the idea.
- (I’m simplifying a bit of course. See also Michael Aird’s tips here.)
The second part is straightforward enough; if your project has been ongoing for a while, we’d like to understand the results you’ve achieved so far. If it’s brand new, or you’re pivoting a lot, we’re interested in evidence about your broader achievements and skills that would set you up well to do a good job.
You might already have a great idea. If so, nice one! Please ignore the rest of this post and crack on with an application. If not, I’ll now highlight a few specific topics that we’re especially interested in receiving applications for at the moment.[3]
💡 Consider applying for projects in these areas
Epistemics and integrity
What’s the problem?
- EA is vulnerable to groupthink, echo chambers, and excessive deference to authority.
- A bunch of big EA mistakes and failures were perhaps (partly) due to these things.
- A lot of external criticism of EA stems from this.
What could be done?
- Training programmes and fellowships that help individual participants develop good epistemic habits or integrity directly (e.g. Scout Mindset, Fermi estimates, developing virtues), indirectly (e.g. helping them form their own views on cause prioritisation), or as part of a broader package. (A rough worked Fermi sketch follows this list.)
- Training, tools, or platforms for forecasting and prediction markets.
- Researching and creating tools that aid structured and informed decision-making.
- Developing filtering and vetting mechanisms to weed out applicants with low integrity or poor epistemics.
- New structures or incentives at the community level: integrating external feedback, incentivising red-teaming, or creating better discussion platforms.
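To make the ‘Fermi estimates’ idea concrete, here is a minimal sketch in Python with entirely made-up numbers: the function, the ‘impact units’, and every figure are illustrative assumptions, not estimates about any real programme and not how EAIF evaluates applications.

```python
# A minimal Fermi-style cost-effectiveness sketch. All numbers are
# illustrative placeholders, not estimates about any real programme.

def expected_cost_effectiveness(
    participants: int,
    completion_rate: float,      # fraction of participants who finish
    uplift_per_completer: float, # counterfactual value per completer, in arbitrary "impact units"
    total_cost: float,           # total programme cost in dollars
) -> float:
    """Impact units purchased per dollar, under very rough assumptions."""
    expected_impact = participants * completion_rate * uplift_per_completer
    return expected_impact / total_cost

# Hypothetical workshop series: 120 participants, 70% finish,
# each completer gains 5 impact units, total cost $60,000.
ratio = expected_cost_effectiveness(120, 0.70, 5.0, 60_000)
print(f"{ratio:.4f} impact units per dollar")  # -> 0.0070
```

The point of an estimate like this is less the final number than forcing each assumption into the open, where it can be questioned and varied.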
What have we funded recently?
- Elizabeth Van Nostrand and Timothy Telleen-Lawton recorded a discussion about why Elizabeth left EA and why Timothy is instead seeking a ‘renaissance’ of EA. They’re turning this into a broader podcast.
- EA Netherlands is working with Shoshannah Tekofsky to develop 5-10 unique rationality workshops to be presented to 100-240 Dutch EAs over a 12-month period, aiming to improve their epistemic skills and decision-making processes.
- André Ferretti launched the “Retrocaster” tool on Clearer Thinking, to enhance users’ forecasting skills. By obscuring data from sources like Our World in Data, Retrocaster invites users to forecast hidden trends.
Harri Besceli, another EAIF Fund Manager, wrote more thoughts on EA epistemics projects here. For-profit projects are beyond EAIF’s scope, but if you have a for-profit idea in this area, feel free to contact me.[4]
EA brand and reputation
What’s the problem?
- Since FTX, the public perception of EA has become significantly worse.
- This makes it harder to grow and do community outreach.
- Organisations and individuals are less willing to associate with EA; this reduces the benefits it provides and further worsens its reputation.
What could be done?
- Good PR. There’s a whole massive industry out there focused on exactly this, and presumably a bunch of it works. Not all PR work is dishonest.
- Empirical testing of different messages and frames to see what resonates best with different target audiences.
- More/better comms and marketing generally for promising organisations.
- Inward-facing interventions that help create a healthier self-identity, culture, and vision, or that systematically boost morale (beyond one-off celebratory posts).
- Support for high-quality journalism on relevant topics.
What have we funded recently?
- Yi-Yang Chua is exploring eight community health projects. Some relate to navigating EA identity; others might have knock-on effects for EA’s reputation by mitigating harms and avoiding scandals.
- Honestly not much. Please send us requests!
I’ve focused here on addressing the challenges of a poor brand and reputation, but the ideal would be to fix any underlying issues that cause harm and, in turn, a poor reputation. Proposals relating to those are of course welcome (e.g. on epistemics and integrity).
Funding diversification
What’s the problem?
- Many promising projects are bottlenecked by funding, from AI safety to animal welfare.
- Projects are often dependent on funding from Open Philanthropy, which makes their situation unstable and incentivises deference to OP’s views.
- There’s less funding in EA than there used to be (especially due to the FTX crash) or could be (especially given historical reliance on OP and FTX).
What could be done?
- Projects focused on broadly raising funding from outside the EA community.
- More targeted fundraising, like projects focusing specifically on high-net-worth donors, local donors in priority areas (e.g. India), or specific professions and interest groups (e.g. software engineers, alt protein startup founders, AI lab staff).
- Regranting projects.
- Projects focused on democratising decision-making within the EA community.
- Philanthropic advising, grantmaking, or talent pipelines to help address bottlenecks here.
What have we funded recently?
- Giv Effektivt hired its first full-time staff member to reach high-net-worth individuals and improve operations, media outreach, and SEO.
- EA Poland grew and promoted a platform for cost-effective donations to address global poverty, factory farming, and climate change.
- But so far we’ve almost exclusively received applications for broad, national effective giving initiatives, and there are so many more opportunities in this space!
Areas deprioritised by Good Ventures
Good Ventures announced that it would stop supporting certain sub-causes via Open Philanthropy. We expect that programmes focused on rationality or on supporting under-18s (aka ‘high school outreach’) are the affected areas most obviously in scope for EAIF; you can check this post for other possibilities.
We expect that Good Ventures’ withdrawal here leaves at least some promising projects underfunded, and we’d be excited to help fill (some of) the gap.
✨ This is by no means an exhaustive list!
There are lots of problems in effective altruism, and lots of bottlenecks faced by projects applying EA principles; if you’ve noticed an issue, let us know how you can help fix it by submitting an application.
For instance, if you’ve been kicking around for a few years — you’ve built up some solid career capital in top orgs, and have a rich understanding of the EA community, warts and all — then there’s a good chance we’d be excited to fund you to make progress on tackling an issue you’ve identified.[5]
And of course, other people have already done some thinking and suggested some ideas. Here are a few longlists of potential projects, if you want to scour for options[6]:
- Here’s a quick list we made earlier with different tabs from different EAIF staff (2024)
- Rethink Priorities’ General Longtermism Team’s longlist (2023)
- Finn Moorhouse’s list of “EA Projects I'd Like to See” (2022)
- FTX Future Fund’s list (2022) and crowdsourced suggestions (731 comments!)
- Charity Entrepreneurship’s “survey of 40 EAs” on the most promising areas to start new EA meta charities (2020)
- Probably a bunch of other posts in the EA Forum tags for “Opportunities to take action”, “Research agendas, questions, and project lists”, or “Community projects”
❓ Ask me almost anything
I’m happy to do an informal ‘ask me anything’ here — I encourage you to ask away in the comments section if there’s anything you’re unsure about or that is holding you back, and I expect to be able to respond to most/all of them. You can also email me (jamie@effectivealtruismfunds.org) or use my anonymous advice form, but posting your comment here is a public good if you’re up for it, since others might have the same question.
But if you already know everything you need to know…
🚀 Apply
See also: “Don’t think, just apply! (usually)”. By the way, EAIF’s turnaround times are much better than they used to be; typically 6 weeks or less.
The application form is here. Thanks!
- ^
We don’t have a hard upper bound at the moment. Historically, most of our grants have been between about $10,000 and $200,000. We’d be a bit hesitant to evaluate something much higher than $500,000, but we’re open to it. If it were over $1m, we’d likely encourage you to apply elsewhere, e.g. Open Philanthropy.
- ^
Scalability and high upside value can make an application more promising but are not requirements.
- ^
The first three of these are inspired by a few calls that Harri Besceli (another EAIF Fund Manager) carried out with people who work in EA community building or who have been engaged in the EA community for a long time. But the bullet points here are my own take; this isn’t a writeup of the findings, so to speak. I’m not trying to ‘make the case’ for any of these areas’ importance here; it’s fine if you disagree. I’m just flagging that we’d be excited to receive applications in these areas.
- ^
I also work at Polaris Ventures, which makes investments and might be interested.
- ^
Of course, this isn’t guaranteed; it still needs to be an in-scope, strong application. And we sometimes receive strong applications from people who are newer to effective altruism, too.
- ^
Caveats:
- With the exception of mine and CE’s, these lists all contain ideas that wouldn’t be in-scope for EAIF.
- Many of these lists were put together quickly or had a low bar for inclusion. Some are mostly outdated, some may focus on worldviews you disagree with, etc. You shouldn’t treat an idea being mentioned on one of these lists as a strong vote of confidence from anyone that it’s actually a good use of time. These are usually just ideas.
- Even if it is a great idea, you still need to have relevant skills and track record to be able to put in a 💪 strong application.
Re "epistemics and integrity" - I'm glad to see this problem being described. It's also why I left (angrily!) a few years ago, but I don't think you're really getting to the core of the issue. Let me try to point at a few things
centralized control and disbursion of funds, with a lot of discretionary power and a very high and unpredictable bar, gives me no incentive to pursue what I think is best, and all the incentive to just stick to the popular narrative. Indeed groupthink. Except training people not to groupthink isn't going to change their (existential!) incentive to groupthink. People's careers are on the line, there are only a few opportunities for funding, no guarantee to keep receiving it after the first round, and no clear way to pivot into a safer option except to start a new career somewhere your heart does not want to be, having thrown years away
lack of respect for "normies". Many EA's seemingly can't stand interacting with non-EA's. I've seen EA meditation, EA bouldering, EA clubbing, EA whatever. Orgs seem to want everyone and the janitor to be "aligned". Everyone's dating each other. It seems that we're even afraid of them. I will never forget that just a week before I arrived at an org I was to be the manager of, they turned away an Economist reporter at their door...
perhaps in part due to the above, massive hubris. I don't think we realise how much we don't know. We started off with a few slam dunks (yeah wow 100x more impact than average) and now we seem to think we are better at everything. Clearly the ability to discern good charities does not transfer to the ability to do good management. The truth is: we are attempting something of which we don't even know whether it is possible at all. Of course we're all terrified! But where is the humility that should go along with that?
Thanks! Sorry to hear the epistemics stuff was so frustrating for you and caused you to leave EA.
Yes, it's very plausible that the example interventions don't really get to the core of the issue -- I didn't spend long creating those, and they're meant more to spark ideas than to be confident recommendations on the best interventions. Perhaps I should have flagged this in the post.
Re "centralized control and disbursion of funds": I agree that my example ideas in the epistemics section wouldn't help with this much. Would the "funding diversification" suggestions below help here?
And I'd be intrigued to hear, if you're up for elaborating, why you don't think the sorts of "What could be done?" suggestions would help with the other two problems you highlight. (They're not optimised for addressing those two specific concerns, of course, but insofar as they all relate back to bad/weird epistemic practices, things like epistemics training programmes might help?) No worries if you don't want to or don't have time, though.
Thanks again!
Yes, I imagine funding diversification would help, though I'm not sure if it would go far enough to make EA a good career bet.
My own solution is to work myself up to the point where I'm financially independent from EA, so my agency is not compromised by someone else's model of what works.
And you're right that better epistemics might help address the other two problems, but only insofar as the interventions are targeted at "S1 epistemics", i.e. the stuff that doesn't necessarily follow from conscious deliberation. Most of the techniques in this category would fall under the banner of spirituality (the pragmatic type, without metaphysics). This is something the rationalist project has not addressed sufficiently. I think there's a lot of unexplored potential there.
I think it's great that EAIF is not funding constrained.
Here's a random idea I had recently if anyone is interested and has the time:
An org that organizes a common application for nonprofits applying to foundations. There is enormous economic inefficiency and inequality in matching private foundation (PF) grants to grantees. PF application processes are extremely opaque and burdensome. Attempts to create common applications have largely been unsuccessful, I believe mostly because they tend to be limited to a specific geographic region. Instead, I think it would be interesting to create different common applications by cause area. A key part of the common application could be incorporating outcome reporting specific to each cause area, which I believe would push PFs to make more impact-focused grants, making EAs happy.
Excellent idea. This would also incentivize writing an application that is generally convincing, instead of trying to hack the preferences of a specific fund.