Update: The EA Funds have launched!
This post introduces a new project that CEA is working on, which we’re calling the Effective Altruism Funds.
Some details about this idea are below. We’d really appreciate feedback about whether this is the kind of thing the community would like to see CEA working on. We’ve also been getting input from our mentors at Y Combinator, who are excited about the idea.
The Idea
EAs care a lot about donating effectively, but donating effectively is hard, even for engaged EAs. The easiest options are GiveWell-recommended charities, but many people believe that other charities offer an even better opportunity to have an impact. The alternative, for them, is to figure out: 1) which cause is most important; 2) which interventions in that cause are most effective; and 3) which charities executing those interventions are most effective yet still have a funding gap.
Recently, we’ve seen demand for options that allow individuals to donate effectively while reducing their total workload, whether by deferring their decision to a trusted expert (Nick Beckstead’s EA Giving Group) or randomising who allocates a group’s total donations (Carl Shulman and Paul Christiano’s donation lottery). We want to meet this demand and help EAs give more effectively at lower time cost. We hope this will allow the community to take advantage of the gains of labor specialization, rewarding a few EAs for conducting in-depth donation research while allowing others to specialize in other important domains.
The Structure
Via the EA Funds, donors will be able to allocate their donations to one or more funds, each with a particular focus area. Donations will be disbursed based on the recommendations of fund managers. If people don’t know which cause or causes they want to focus on, we'll have a tool that asks them a few questions about key judgement calls and then makes a recommendation, as well as more in-depth materials for those who want to dive deeper. Once people have chosen their causes, fund managers use their up-to-date knowledge of charities’ work to select charities.
We want to keep this idea as simple as possible to begin with, so we’ll have just four funds, with the following managers:
- Global Health and Development – Elie Hassenfeld
- Animal Welfare – Lewis Bollard
- Long-run Future – Nick Beckstead
- Movement-building – Nick Beckstead
(Note that the movement-building fund will be able to fund CEA, and that Nick Beckstead is a Trustee of CEA. The long-run future and movement-building funds continue the work that Nick has been doing running the EA Giving Group.)
It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy. First, these are the organisations whose charity evaluation we respect the most. The worst-case scenario, where your donation just adds to Open Philanthropy's funding within a particular area, is therefore still a great outcome. Second, they have the best available information about what grants Open Philanthropy is planning to make, and so a good understanding of where the remaining funding gaps are. If a fund manager sees an important gap that Open Philanthropy isn’t currently addressing, they can use the money in the EA Fund to fill it.
The Vision
One vision I have for the effective altruism community is that its members can function like a people’s foundation: any individual donor on their own might not have much power, but if the community acts together it can have the sort of influence that major foundations like the Gates Foundation have. The EA Funds help move us toward that vision.
In the first instance, we’re just going to have four funds, to see how much demand there is. But we can imagine various ways in which this idea could grow.
If the initial experiment goes well, then in the longer run we'd probably host a wider variety of funds. For example, we’re in discussion with Carl and Paul about running the Donor Lottery fund, which we think was a great innovation from the community. Ultimately, it could even be that anyone in the EA community can run a fund, with competition between fund managers in which whoever makes the best grants attracts more funding. This would overcome a downside of using GiveWell and Open Philanthropy staff members as fund managers: we potentially lose out on the benefits of a wider variety of perspectives.
Having a much wider variety of possible charities could also allow us to make donating hassle-free for effective altruism community members. Rather than each member making individual contributions to multiple charities, and having to figure out for themselves how to do so as tax-efficiently as possible, they could set up a direct debit through this platform, write in how much they want to contribute to which charities, and we could take care of the rest. On tax efficiency, we’ve already found that even professional accountants often misadvise donors about the size of the tax relief they can claim. At least at the outset, only US and UK donors will be eligible for tax benefits when donating through the funds.
Finally, we could potentially use this platform to administer moral trades between donors. At the moment, people just give to wherever they think is best. But this loses out on the potential for a community to have more impact, by everyone’s lights, than they could have otherwise.
For example, imagine that Alice and Bob both want to give $100 to charity, and value the possible donations as follows (e.g. Alice believes that a $100 donation to AMF produces 1 QALY):
|       | AMF | GiveDirectly | SCI |
|-------|-----|--------------|-----|
| Alice | 1   | 0.8          | 0.5 |
| Bob   | 0.5 | 0.8          | 1   |
This means that if Alice and Bob each gave to the charity they think is most effective (AMF and SCI, respectively), each would evaluate the total value as:
1 QALY (from their own donation) + 0.5 QALYs (from the other person’s donation)
= 1.5 QALYs
But if they traded, with both giving to GiveDirectly, each would evaluate the total value as:
0.8 QALYs (from their own donation) + 0.8 QALYs (from the other person’s donation)
= 1.6 QALYs
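The arithmetic above can be sketched in a few lines of code. This is just an illustration of the example's numbers, not part of any real EA Funds system; the table of subjective QALY values and the `total_value_by_donor` helper are both hypothetical.

```python
# Hypothetical values: how many QALYs each donor believes a $100
# donation to each charity produces (the table from the example above).
values = {
    "Alice": {"AMF": 1.0, "GiveDirectly": 0.8, "SCI": 0.5},
    "Bob":   {"AMF": 0.5, "GiveDirectly": 0.8, "SCI": 1.0},
}

def total_value_by_donor(allocation):
    """Total QALYs each donor believes the combined donations produce.

    `allocation` maps each donor to the charity they give their $100 to.
    Each donor values *all* donations made, by their own lights.
    """
    return {
        donor: sum(values[donor][charity] for charity in allocation.values())
        for donor in values
    }

# Each gives to their own top pick:
independent = total_value_by_donor({"Alice": "AMF", "Bob": "SCI"})

# Both trade to the mutually agreeable middle option:
traded = total_value_by_donor({"Alice": "GiveDirectly", "Bob": "GiveDirectly"})

print(independent)  # {'Alice': 1.5, 'Bob': 1.5}
print(traded)       # {'Alice': 1.6, 'Bob': 1.6}
```

By both donors' own valuations, the trade beats independent giving (1.6 vs 1.5 QALYs each), which is what makes it a "moral trade".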
The same idea could apply to the timing of donations, too: one party might prefer to donate earlier, while another prefers to invest and donate later.
We’re still exploring the EA Funds idea, so we welcome suggestions and feedback in the comments below.
Thanks for the feedback!
Two thoughts: 1) I don't think the long-term goal is that OpenPhil program officers are the only fund managers. Working with them was the best way to get an MVP version in place. In the long run, we want to use the funds to offer worldview diversification and to expand the funding horizons of the EA community.
2) I think I agree with you. However, since the OpenPhil program officers know what OpenPhil is funding, the funds should provide options that are at least as good as OpenPhil's funding. (See Carl Shulman's post on the subject.) The hope is that the "at least as good as OpenPhil" bar is higher than most donors can reach now, making the funds among the most effective options for individual donors.
Let me know if that didn't answer the question.
The article you link (quoted below) suggests the opposite should be true: individual donors should be able to do at least somewhat better than OpenPhil.