
In a comment on Benjamin Todd's article in favor of small donors, NunoSempere writes:

This article is kind of too "feel good" for my tastes. I'd also like to see a more angsty post that tries to come to grips with the fact that most of the impact is most likely not going to come from the individual people, and tries to see if this has any new implications, rather than justifying that all is good.

I am naturally an angsty person, and I don't carry much reputational risk, so this seemed like a natural fit.

I agree with NunoSempere that Benjamin's epistemics might be suffering from the nobility of his message. It's a feel-good encouragement to give, complete with a sympathetic photo of a very poor person who might benefit from your generosity. Because that message is so good and important, it requires a different style of writing and thinking than "let's try very hard to figure out what's true."

Additionally, I see Benjamin's post as a reaction to some popular myths. This is great, but we shouldn't mistake "some arguments against X are wrong" for "X is correct".

So as not to bury the lede: I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.

Funny enough, although this is framed as a "red-team" post, I think that Benjamin mostly agrees with that advice. You can choose to take this as evidence that the advice is robust to worldview diversification, or as evidence that I'm really bad at red-teaming and falling prey to justification drift.

In terms of epistemic status: I take my own arguments here seriously, but I don't see them as definitive. Specifically, this post is meant to counterbalance Benjamin's post, so you should read his post first, or at least read it later as a counterbalance against this one.

1. Our default view should be that high-impact funding capacity is already filled.

Consider Benjamin's explanation for why donating to LTFF is so valuable:

I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health.

I absolutely agree that those issues are very neglected, but only among the general population. They're not at all neglected within EA. Specifically, the question we should be asking isn't "do people care enough about this", but "how far will my marginal dollar go?"

To answer that latter question, it's not enough to highlight the importance of the issue; you would have to argue that:

  1. There are longtermist organizations that are currently funding-constrained,
  2. Such that more funding would enable them to do more or better work,
  3. And this funding can't be met by existing large EA philanthropists.

It's not clear to me that any of these points are true. They might be, but Benjamin doesn't take the time to argue for them very rigorously. Absent strong evidence, my default assumption is that funding capacity for extremely high-impact organizations well aligned with EA ideology will be filled by donors.

Benjamin does admirably clarify that there are specific programs he has in mind:

there are ways that longtermists could deploy billions of dollars and still do a significant amount of good. For instance, CEPI is a $3.5bn programme to develop vaccines to fight the next pandemic.

At face value, CEPI seems great. But at the meta-level, I still have to ask, if CEPI is a good use of funds, why doesn't OpenPhil just fund it?

In general, my default view for any EA cause is always going to be:

  • If this isn't funded by OpenPhil, why should I think it's a good idea?
  • If this is funded by OpenPhil, why should I contribute more money?

You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy's Law states, "no matter who you are, most of the smartest people work for someone else."

But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I did personally have an insight into a new giving opportunity, I would not proceed to donate; I would proceed to write up my thoughts on EA Forum and get feedback. Since there's an existing popular venue for crowdsourcing ideas, I'm even less willing to believe that large EA foundations have simply missed a good opportunity.

Benjamin might argue that OpenPhil is just taking its time to evaluate CEPI, and we should fill its capacity with small donations in the meantime. That might be true, but would still greatly lower the expected impact of giving to CEPI. In this view, you're accelerating CEPI's agenda by however long it takes OpenPhil to evaluate them, but not actually funding work that wouldn't happen otherwise. And of course, if it's taking OpenPhil time to evaluate CEPI, I don't feel that confident that my 5 minutes of thinking about it should be decisive anyway.

When I say "our default view", I don't mean that this is the only valid perspective. I mean it's a good place to start, and we should then think about specific cases where it might not be true.

2. Donor coordination is difficult, especially with other donors thinking seriously about donor coordination.

Assuming that EA is a tightly knit, high-trust environment, there seems to be a way to avoid this whole debate: don't try too hard to reason from first principles, just ask the relevant parties. Does OpenPhil think they're filling the available capacity? Do charities feel like they're funding-constrained despite support from large foundations?

The problem is that under Philanthropic Coordination Theory, there are altruistic reasons to lie, or at least not be entirely transparent. As GiveWell itself writes in their primer on the subject:

Alice and Bob are both considering supporting a charity whose room for more funding is $X, and each is willing to give the full $X to close that gap. If Alice finds out about Bob's plans, her incentive is to give nothing to the charity, since she knows Bob will fill its funding gap.

Large foundations are Bob in this situation, and small donors are Alice. Assuming GiveWell wants to maintain the incentive for small donors to give, they have to hide their plans.

But why would GiveWell even want to maintain the incentive? Why not just fill the entire capacity themselves? One simple answer is that GiveWell wants to keep more money for other causes. A better answer is that they don't want to breed dependence on a single large donor. As OpenPhil writes:

We typically avoid situations in which we provide >50% of an organization's funding, so as to avoid creating a situation in which an organization's total funding is "fragile" as a result of being overly dependent on us.

The optimistic upshot of this comment is that small donors are essentially matched 1:1. If GiveWell has already provided 50% of AMF's funding, then by giving AMF another $100, you "unlock" another $100 that GiveWell can provide without exceeding their threshold.
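To make that "matched 1:1" intuition concrete, here's a minimal sketch in Python. It treats the >50% rule quoted above as a hard cap, which is stronger than OpenPhil's actual stance (it's a rule of thumb, not a commitment), and the function name and numbers are purely illustrative.

```python
# A minimal sketch of the 1:1 "matching" intuition, assuming the >50% rule
# were a hard constraint rather than the rule of thumb it actually is.

def max_large_donor_grant(other_funding: float) -> float:
    """Largest grant the large donor can make while providing <= 50% of the total."""
    # If the large donor gives G and everyone else gives S,
    # G <= 0.5 * (G + S) simplifies to G <= S.
    return other_funding

unlocked = max_large_donor_grant(1_000_100) - max_large_donor_grant(1_000_000)
print(unlocked)  # 100 -- your extra $100 "unlocks" another $100 of large-donor funding
```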

But the most pessimistic upshot is that assuming charities have limited capacity, it will be filled by either GiveWell or other small donors. In the extreme version of this view, a donation to AMF doesn't really buy more bednets, it's essentially a donation to GiveWell, or even a donation to Dustin Moskovitz.

Is that so bad? Isn't donating to GiveWell good? That's the argument I'll address in the next section. [1]

3. Benjamin's views on funging don't make sense.

Okay, so maybe a donation to AMF is really a donation to GiveWell, but isn't that fine? After all, it just frees GiveWell to use the money on the next most valuable cause, which is still pretty good.

This seems to be the view Benjamin holds. As he writes, if you donate $1000 to a charity that is OpenPhil backed, "then that means that Open Philanthropy has an additional $1,000 which they can grant somewhere else within their longtermist worldview bucket." The upshot is that the counterfactual impact of your donation is equivalent to the impact of OpenPhil's next-best cause, which is probably a bit lower, but still really good.

The nuances here depend a bit on your model of how OpenPhil operates. There seem to be a few reasonable views:

  1. OpenPhil will fund the most impactful things up to $Y/year.
  2. OpenPhil will fund anything with an expected cost-effectiveness of above X QALYs/$.
  3. OpenPhil tries to fund every highly impactful cause it has the time to evaluate.

In the first view, Benjamin is right. OpenPhil's funding is freed up, and they can give it to something else. But I don't really believe this view. By Benjamin's own estimate, there's around $46 billion committed to EA causes. He goes on to say: "I estimate the community is only donating about 1% of available capital per year right now, which seems too low, even for a relatively patient philanthropist."

What about the second view? In that case, you're not freeing up any money since OpenPhil just stops donating once it's filled the available capacity.

The third view seems most plausible to me, and is equally pessimistic. As Benjamin writes further on:

available funding has grown pretty quickly, and the amount of grantmaking capacity and research has not yet caught up. I expect large donors to start deploying a lot more funds over the coming years. This might be starting with the recent increase in funding for GiveWell.

But what exactly is "grantmaking capacity and research"? It would make sense if GiveWell has not had time to evaluate all possible causes and institutions, and so there are some opportunities that they're missing. It would not make sense that GiveWell is unable to give more money to AMF due to a research bottleneck.

That implies that you might be justified in giving to a cause that OpenPhil simply hasn't noticed (note the concerns in section 1), but not justified in giving more money to a cause OpenPhil already supports. If Benjamin's view is that EA foundations are research bottlenecked rather than funding bottlenecked, small donations don't "free up" more funding in an impact-relevant way.

4. Practical recommendations

Where does this all leave us? Surprisingly, about back where we started. Benjamin already noted in his post that "there's an opportunity to do even more good than earning to give".

First of all, think hard about the causes that large EA foundations are unable to fund, despite being high impact. As Scott Alexander wrote:

It's not exactly true that EA "no longer needs more money" - there are still some edge cases where it's helpful; a very lossy summary might be "things it would be too weird and awkward to ask Moskovitz + Tuna to spend money on".

This is not exhaustive, but a short list of large-foundation limitations includes:

  • PR risk: It's not worth funding a sperm bank for Nobel Prize winners that might later get you labeled a racist. See also the Copenhagen Interpretation of Ethics: it might not be worth funding a highly imperfect intervention, even if it's net good.
    • More generally, it might not be worth funding an intervention that has a 90% chance of going well, but a 10% chance of going really poorly.
  • Small grants: When he launched Emergent Ventures, Tyler Cowen explained that "the high fixed costs of processing any request discriminate against very small proposals". E.g., it's not even worth OpenPhil's time to consider, evaluate and dispense a $500 grant.

To be clear, I don't think these are particular failings of OpenPhil, or EA Funds. Actually, I think that EA foundations do better on these axes than pretty much every other foundation. But there are still opportunities for small individual donors to exploit.

More positively, what are the opportunities I think you should pursue?

  • Fund individuals: As Dan Luu writes, some work depends entirely on who's doing it. If you know a specific person whose work you think is likely to be high-impact, and if some of that knowledge is not institutionally legible, you should consider just funding them yourself.

  • Fund weird things: A decent litmus test is "would it be really embarrassing for my parents, friends or employer to find out about this?" and if the answer is yes, more strongly consider making the grant.

    • Of course, the weird things are still subject to more conventional cost-effectiveness estimates.
  • Fund yourself: Instead of earning-to-give, earn-to-retire, and then do direct work yourself with the freedom to ignore what's "fundable" or laudable.

    • You might worry that "unfundable" work is unlikely to be high-impact, but again, you should think specifically about what work large foundations can't fund.

Outside of funding, try to:

  • Be more ambitious: There's some tradeoff curve between cost-effectiveness and scale. When EA was more funding constrained, a $1M grant with 10x ROI looked better than a $1B grant with 5x ROI, but now the reverse is true.
  • Be more entrepreneurial: Similarly, there's a tradeoff between making marginal improvements to a high-impact org and starting a new org with potentially lower impact. When EA was more talent constrained, working at existing EA orgs was higher impact. A lot of people would argue that it's still very high impact, but relatively speaking, the value of starting a brand new org is higher.
    • This doesn't mean starting Generic Longtermist Research Firm X, it means trying to do work outside the scope of current organizations.

But as I mentioned at the outset, that's all fairly conventional, and advice that Benjamin would probably agree with. So given that my views differ, where are the really interesting recommendations?

The answer is that I believe in something I'll call "high-variance angel philanthropy". But it's a tricky idea, so I'll leave it for another post.


  1. Is this whole section an infohazard? If thinking too hard about Philanthropic Coordination Theory risks leading to weird adversarial game theory, isn't it better for us to be a little naive? OpenPhil and GiveWell have already discussed it, so I don't personally feel bad about "spilling the beans". In any case, OpenPhil's report details a number of open questions here, and I think the benefits of discussing solutions publicly outweigh the harms of increasing awareness. More importantly, I just don't think this view is hard to come up with on your own. I would rather make it public and thus publicly refutable than risk a situation where a bunch of edgelords privately think donations are useless due to crowding-out but don't have a forum for subjecting those views to public scrutiny. ↩︎

Comments (53)

Thanks for red teaming – it seems like lots of people are having similar thoughts, so it’s useful to have them all in one place.

First off, I agree with this:

I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.

I say this in the introduction (and my EA Global talk). The point I’m trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbers) in the 98th percentile of impactful things you might do; while these things might be, say, 99.5-99.9th percentile. I agree my post might not have made this sufficiently salient. It's really hard to correct one misperception without accidentally encouraging one in the opposite direction.

The arguments in your post seem to imply that additional funding has near zero value. My prior is that more money means more impact, but at a diminishing rate.

Before going into your specific points, I’ll try to describe an overall model of what happens when more funds come into the community, which will explain why more money means more but diminishing impact.

Very roughly, EA donors try to fund everything above a ‘bar’ of cost-effectiveness (i.e. value per dollar). Most donors (especially large ones) are reasonably committed to giving away a certain portion of their funds unless cost-effectiveness drops very low, which means that the bar is basically set by how impactful they expect the ‘final dollar’ they give away in the future to be. This means that if more money shows up, they reduce the bar in the long run (though capacity constraints may make this take a while). Additional funding is still impactful, but because the bar has been dropped, each dollar generates a little less value than before.

Here’s a bit more detail of a toy model. I’ll focus on the longtermist case since I think it’s harder to see what’s going on there.

Suppose longtermist donors have $10bn. Their aim might be to buy as much existential risk reduction over the coming decades as possible with that $10bn, for instance, to get as much progress as possible on the AI alignment problem.

Donations to things like the AI alignment problem have diminishing returns – they’re probably roughly logarithmic. Maybe the first $1bn has a cost-effectiveness of 1000:1. This means that it generates 1000 units of value (e.g. utils, x-risk reduction) per $1 invested. The next $10bn returns 100:1, the next $100bn returns 10:1, the next $1,000bn is 2:1, and additional funding after that isn’t cost-effective. (In reality, it’s a smoothly declining curve.)

If longtermist donors currently have $10bn (say), then they can fund the entire first $1bn and $9bn of the next tranche. This means their current funding bar is 100:1 – so they should aim to take any opportunities above this level.

Now suppose some smaller donors show up with $1m between them. Now in total there is $10.001bn available for longtermist causes. The additional $1m goes into the 100:1 tranche, and so has a cost-effectiveness of 100:1. This is a bit lower than the average cost-effectiveness of the first $10bn (which was 190:1), but is the same as marginal donations by the original donors and still very cost-effective.

Now instead suppose another mega-donor shows up with $10bn, so the donors have $20bn in total. They’re able to spend $1bn at 1000:1, then $10bn at 100:1 and then the remaining $9bn is spent on the 10:1 tranche. The additional $10bn had a cost-effectiveness of 19:1 on average. This is lower than the 190:1 of the first $10bn, but also still worth doing.
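Here's a small sketch of this toy model in Python, using the made-up tranche numbers above; it just reproduces the 190:1 and 19:1 averages mechanically.

```python
# Toy model only: illustrative tranches of (size in $bn, value generated per $1).
TRANCHES = [(1, 1000), (10, 100), (100, 10), (1000, 2)]

def allocate(total_bn):
    """Return (total value, marginal cost-effectiveness) for total_bn of funding."""
    value, marginal, remaining = 0.0, 0, total_bn
    for size, ratio in TRANCHES:
        spent = min(remaining, size)
        if spent > 0:
            value += spent * ratio
            marginal = ratio
        remaining -= spent
        if remaining <= 0:
            break
    return value, marginal

v10, bar10 = allocate(10)   # the first $10bn of longtermist funding
v20, bar20 = allocate(20)   # after a second mega-donor adds $10bn

print(f"First $10bn: average {v10 / 10:.0f}:1, funding bar {bar10}:1")          # 190:1, bar 100:1
print(f"Next $10bn: average {(v20 - v10) / 10:.0f}:1, bar drops to {bar20}:1")  # 19:1, bar 10:1
```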

How does this play out over time?

Suppose you have $10bn to give, and want to donate it over 10 years.

If we assume hinginess isn’t changing & ignore investment returns, then the simplest model is that you’ll want to donate about $1bn per year for 10 years.

The idea is that if the rate of good opportunities is roughly constant, and you’re trying to hit a particular bar of cost-effectiveness, then you’ll want to spread out your giving. (In reality you’ll give more in years where you find unusually good things, and vice versa.)

Now suppose a group of small donors show up who have $1bn between them. Then the ideal is that the community donates $1.1bn per year for 10 years – which requires dropping their bar (but only a little).

One way this could happen is for the small donors to give $100m per year for 10 years (‘topping up’). Another option is for the small donors to give $1bn in year 1 – then the correct strategy for the megadonor is to only give $100m in year 1 and give $1.1bn per year for the remaining 9 (‘partial funging’).
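A quick sketch of those two schedules, again with the illustrative $10bn / $1bn figures (not a model of any actual donor's behaviour):

```python
# 'Topping up' vs 'partial funging': two ways for the community to spend
# $11bn evenly over 10 years. Figures are the illustrative ones above.
YEARS = 10
MEGA_TOTAL, SMALL_TOTAL = 10.0, 1.0            # $bn
TARGET = (MEGA_TOTAL + SMALL_TOTAL) / YEARS    # ideal community spend: $1.1bn/year

# Option 1: small donors give $100m per year alongside the mega-donor's $1bn.
topping_up = [(MEGA_TOTAL / YEARS, SMALL_TOTAL / YEARS)] * YEARS

# Option 2: small donors give their whole $1bn in year 1; the mega-donor gives
# only $100m that year, then $1.1bn per year for the remaining 9 years.
partial_funging = [(TARGET - SMALL_TOTAL, SMALL_TOTAL)] + [(TARGET, 0.0)] * (YEARS - 1)

for name, schedule in [("topping up", topping_up), ("partial funging", partial_funging)]:
    assert abs(sum(mega for mega, _ in schedule) - MEGA_TOTAL) < 1e-9  # mega-donor still gives $10bn
    totals = [round(mega + small, 2) for mega, small in schedule]
    print(name, totals)   # both schedules spend $1.1bn every year
```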

A big complication is that the set of opportunities isn’t fixed – we can discover new opportunities through research or create them via entrepreneurship. (This is what I mean by ‘grantmaking capacity and research’.)

It takes a long time to scale up a foundation, and longtermism as a whole is still tiny. This means there’s a lot of scope to find or create better opportunities. So donors will probably want to give less at the start of the ten years, and more towards the end when these opportunities have been found (and earning investment returns in the meantime). 

Now I can use this model to respond to some of your specific points:

At face value, CEPI seems great. But at the meta-level, I still have to ask, if CEPI is a good use of funds, why doesn't OpenPhil just fund it?

Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.

This doesn’t, however, mean donating to CEPI has no value. I think CEPI could make a meaningful contribution to biosecurity (and given my personal cause selection, likely similarly or more effective than donating to GiveWell-recommended charities).

An opportunity can be below Open Phil’s current funding bar if Open Phil expects to find even better opportunities in the future (as more opportunities come along each year, and as they scale up their grantmaking capacity), but that doesn’t mean it wouldn’t be ‘worth funding’ if we had even more money. 

My point isn’t that people should donate to CEPI, and I haven’t thoroughly investigated it myself. It’s just meant as an illustration of how there are many more opportunities at lower levels of cost-effectiveness. I actually think both small donors and Open Phil can have an impact greater than funding CEPI right now.

(Of course, Open Phil could be wrong. Maybe they won’t discover better opportunities, or EA funding will grow faster than they expect, and their bar today should be lower. In this case, it will have been a mistake not to donate to CEPI now.)


In general, my default view for any EA cause is always going to be:

If this isn't funded by OpenPhil, why should I think it's a good idea?

If this is funded by OpenPhil, why should I contribute more money?

It’s true that it’s not easy to beat Open Phil in terms of effectiveness, but this line of reasoning seems to imply that Open Phil is able to drive cost-effectiveness to negligible levels in all causes of interest.  Actually Open Phil is able to fund everything above a certain bar, and additional small donations have a cost-effectiveness similar to that bar.

In the extreme version of this view, a donation to AMF doesn't really buy more bednets, it's essentially a donation to GiveWell, or even a donation to Dustin Moskovitz.

You’re right that donations to AMF probably don't buy more bednets, since AMF is not the marginal opportunity any more (I think, not sure about that). Rather, additional donations to global health get added to the margin of GiveWell donations over the long term, which Open Phil and GiveWell estimate has a cost-effectiveness of about 7x GiveDirectly / saving the life of a child under 5 for $4,500.

You’re also right that as additional funding comes in, the bar goes down, and that might induce some donors to stop giving altogether (e.g. maybe people are willing to donate above a certain level of cost-effectiveness, but not below).

However, I think we’re a long way from that point. I expect Dustin Moskovitz would still donate almost all his money at GiveDirectly-levels of cost-effectiveness, and even just within global health, we’re able to hit levels at least 5x greater than that right now.

Raising everyone in the world above the extreme poverty line would cost perhaps $100bn per year (footnote 8 here), so we’re a long way from filling everything at a GiveDirectly level of cost-effectiveness – we’d need about 50x as much capital as now to do that, and that’s ignoring other cause areas.

There seem to be a few reasonable views:

1. OpenPhil will fund the most impactful things up to $Y/year.

2. OpenPhil will fund anything with an expected cost-effectiveness of above X QALYs/$.

3. OpenPhil tries to fund every highly impactful cause it has the time to evaluate.

I think view (2) is closest, but this part is incorrect:

What about the second view? In that case, you're not freeing up any money since OpenPhil just stops donating once it's filled the available capacity.

What actually happens is that as more funding comes in, Open Phil (& other donors) slightly reduces its bar, so that the total donated is higher, and cost-effectiveness a little lower. (Which might take several years.)

Why doesn’t Open Phil drop its bar already, especially given that they’re only spending ~1% of available capital per year? Ideally they’d be spending perhaps more like 5% of available capital per year. The reason this isn’t higher already is because growth in grantmaking capacity, research and the community will make it possible to find even more effective opportunities in the future. I expect Open Phil will scale up its grantmaking several fold over the coming decade. It looks like this is already happening within neartermism.

One way to steelman your critique, would be to push on talent vs. funding constraints. Labour and capital are complementary, but it’s plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable. If the ratio became sufficiently extreme, additional capital would start to have relatively little value. However, I think we could actually deploy billions more without any additional people and still achieve reasonable cost-effectiveness. It’s just that I think that if we had more labour (especially the types of labour that are most complementary with funding), the cost-effectiveness would be even higher.

Finally, on practical recommendations, I agree with you that small donors have the potential to make donations even more effective than Open Phil’s current funding bar by pursuing strategies similar to those you suggest (that’s what my section 3 covers – though I don’t agree that grants with PR issues are a key category). But simply joining Open Phil in funding important issues like AI safety and global health still does a lot of good.

In short, world GDP is $80 trillion. The interest on EA funds is perhaps $2.5bn per year, so that’s the sustainable amount of EA spending per year. This is about 0.003% of GDP. It would be surprising if that were enough to do all the effective things to help others.


 

One way to steelman your critique, would be to push on talent vs. funding constraints. Labour and capital are complementary, but it’s plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable

I'm not sure about this, but I currently believe that the human capital in EA is worth considerably more than the financial capital.

It's hard to know – most valuations of the human capital are bound up with the available financial capital. One way to frame the question is to consider how much the community could earn if everyone tried to earn to give. I agree it's plausible that would be higher than the current income on the capital, but I think it could also be a lot less.

 It's hard to know – most valuations of the human capital are bound up with the available financial capital. 

Agreed. Though I think I believe this much less now than I used to.  To be more specific, I used to believe that the primary reason direct work is valuable is because we have a lot of money to donate, so cause or intervention prioritization is incredibly valuable because of the leveraged gains. But I no longer think that's the but-for factor, and as a related update think there are many options at similar levels of compellingness as prioritization work. 

One way to frame the question is to consider how much the community could earn if everyone tried to earn to give

I like and agree with this operationalization. Though I'd maybe say "if everybody tried to earn to give or fundraise" instead.

I agree it's plausible that would be higher than the current income on the capital, but I think it could also be a lot less.

I agree it could also be a lot less, but I feel like that's the more surprising outcome? Some loose thoughts in this direction:

  • Are we even trying? Most of our best and brightest aren't trying  to make lots of money. Like I'd be surprised if among the 500 EAs most capable of making lots of money, even 30% are trying to make lots of money.
    • And honestly it feels less, more like 15-20%?
    • Maybe you think SBF is unusually good at making money, more than the remaining 400-425 or so EAs combined?
      • This at least seems a little plausible to me, but not overwhelmingly so.
    • I feel even more strongly about this for fundraising. We have HNW fundraisers, but people are very much not going full steam on this
      • Of the 500 EAs with the strongest absolute advantage for fundraising from non-EA donors, I doubt even 25 of them are working full-time on this.
  • Retrodiction issues. Believing that we had more capital than human capital at any point in EA's past would have been a mistake, and I don't see why now is different.
    • We had considerably less than ~$50B in your post a few years ago, and most of the gains appear to be in revenue, not capital appreciation
  • (H/T AGB) Age curves and wealth. If the income/wealth-over-time of EAs look anything like that of normal people (including normal 1%-ers), highest earnings would be in ages >40, highest wealth in ages >60. Our movement's members have a median age of 27 and a mean age of 30. We are still gaining new members, and most of our new recruits are younger than our current median. So why think we're over the middle point in lifetime earnings or donations?
    • Maybe you think crypto + early FB is a once-in-a-lifetime thing, and that is strong enough to explain the lifetime wealth effect?
      • I don't believe that. I think of crypto as a once-in-a-decade thing.
        • Maybe your AI timelines are short enough that once-in-a-decade is the equivalent to a once-in-a-lifetime belief for you?
          • If so, I find this at least plausible, but I think this conjunction is a pretty unusual belief, whether in EA or the world at large, so it needs a bit more justification.
    • I'm not sure I even buy that SBF specifically is past >50% of his earning potential, and would tentatively bet against.
  • Macabre thought-experiment: if an evil genie forced you to choose between a) all EAs except one (say a good grantmaker like Holden or Nick Beckstead) dying painlessly with their inheritance sent to the grantmaker vs b) all of our wealth magically evaporating, which would you choose?
    • For me it'd be b), and not even close
    • Another factor is that ~half of our wealth is directly tied to specific people in EA. If SBF + cofounders disappeared, FTX's valuation would plummet.

But I don't think that's the relevant comparison for the 'ETG versus direct work' question. If we have a lot of human capital that also means we could earn and give more through ETG.

The more relevant comparison is something like

is the typical EA's human capital more valuable in doing direct EA work than it could be in non-EA work? If not, he/she should ETG ... and her donation could hire (?non-EAs) to fulfill the talent gap

If the financial capital is $46B and the population is 10k, the average person's career capital is worth about ~$5M of direct impact (as opposed to the money they'll donate)? I have a wide confidence interval but that seems reasonable. I'm curious to see how many people currently going into EA jobs will still be working on them 30 years later.

I want to 'second' some key points you made (which I was going to make myself). The main theme is that these 'absolute' thresholds are not absolute; these are simplified expressions of the true optimization problem.

The real thresholds will be adjusted in light of available funding, opportunities, and beliefs about future funding.

See comments (mine and others') on the misconception of 'room for more funding'... the "RFMF" idea must be either an approximate relative judgment ('past this funding, we think other opportunities may be better') or a short-term capacity constraint ('we only have staff/permits/supplies to administer 100k vaccines per year, so we'd need to do more hiring and sourcing to go above this').

Diminishing returns ... but not to zero

The arguments in your post seem to imply that additional funding has near zero value. My prior is that more money means more impact, but at a diminishing rate.

It’s true that it’s not easy to beat Open Phil in terms of effectiveness, but this line of reasoning seems to imply that Open Phil is able to drive cost-effectiveness to negligible levels in all causes of interest. Actually Open Phil is able to fund everything above a certain bar, and additional small donations have a cost-effectiveness similar to that bar.

The bar moves

What actually happens is that as more funding comes in, Open Phil (& other donors) slightly reduces its bar, so that the total donated is higher, and cost-effectiveness a little lower. (Which might take several years.)

At face value, [an EA organization] seems great. But at the meta-level, I still have to ask, if [organization] is a good use of funds, why doesn't OpenPhil just fund it?

Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.

This is highly implausible. First of all, if it's true, it implies that instead of funding things, they should just do fundraising and sit around on their piles of cash until they can discover these opportunities.

But it also implies they have (in my opinion, excessively) high confidence that the hinge of history and astronomical waste arguments are wrong, and that transformative AI is farther away than most forecasters believe. If someone is going to invent AGI in 2060, we're really limited in the amount of time available to alter the probabilities that it goes well vs badly for humanity.

When you're working on global poverty, perhaps you'd want to hold off on donations if your investments are growing by 7% per year while GDP of the poorest countries is only growing by 2%, because you could have something like 5% more impact by giving 107 bednets next year instead of 100 bednets today.

For x-risks this seems totally implausible. What's the justification for waiting? AGI alignment does not become 10x more tractable over the span of a few years. Private sector AI R&D has been growing by 27% per year since 2015, and I really don't think alignment progress has outpaced that. If time until AGI is limited and short then we're actively falling behind. I don't think their investments or effectiveness are increasing fast enough for this explanation to make sense.

I think the party line is that the well-vetted (and good) places in AI Safety aren't funding-constrained, and the non-well-vetted places in AI Safety might do more harm than good, so we're waiting for places to build enough capacity to absorb more funding.

Under that worldview, I feel much more bullish about funding constraints for longtermist work outside of AI Safety, as well as more meta work that can feed into AI Safety later.

Within AI Safety, if we want to give lots of money quickly, I'd think about:

  • funding individuals who seem promising and are somewhat funding constrained
    • eg, very smart students in developing countries, or Europe, who want to go into AI Safety.
      • also maybe promising American undergrads from poorer backgrounds
    • The special case here is yourself if you want to go into AI Safety, and want to invest $s in your own career capital
  • Figure out which academic labs differentially improve safety over capabilities, and throw GPUs, research engineers, or teaching-time buyouts at their grad students
    • When I talked to an AI safety grad student about this, he said that Top 4 CS programs are not funding constrained, but top 10-20 are somewhat.
    • We're mostly bottlenecked on strategic clarity here; different AI Safety people I talk to have pretty different ideas about which research differentially advances safety over capabilities.
  • Possibly just throw lots of money at "aligned enough" academic places like CHAI, or individual AI-safety focused professors.
    • Unlike the above, here the focus is more on alignment than on a strategic understanding that what people are doing is good; we're just hoping that apparent alignment + trusting other EAs is "good enough" to be net positive.
  • Seriously consider buying out AI companies, or other bottlenecks to AI progress.

Other than #1 (which grantmakers are bottlenecked somewhat on due to their lack of local knowledge/networks), none of these things seem like "clear wins" in the sense of shovel-ready projects that can absorb lots of money and that we're pretty confident are good.

When I talked to an AI safety grad student about this, he said that Top 4 CS programs are not funding constrained, but top 10-20 are somewhat.

I've never been a grad student, but I suspect that CS grad students are constrained in ways that EA donors could fairly easily fix. They might not be grant-funding-constrained, but they're probably make-enough-to-feel-financially-secure-constrained or grantwriting-time-constrained, and you could convert AI grad students into AI safety grad students by lifting these constraints for them.

This has good content but I am genuinely confused (partly because this article's subject is complex and this is after several successive replies). 

Your point about timelines seems limited to AI risk. I don't see the connection to the point about CEPI.

Maybe biorisk has "fast timelines" similar to AI risk—is this what you mean?

I hesitate to assume this is your meaning, so I write this comment instead. I really just want to understand this thread better.
 

Sorry, I didn't mean to imply that biorisk does or doesn't have "fast timelines" in the same sense as some AI forecasts. I was responding to the point about "if [EA organization] is a good use of funds, why doesn't OpenPhil fund it?" being answered with the proposition that OpenPhil is not funding much stuff in the present (disbursing 1% of their assets per year, a really small rate even if you are highly patient) because they think they will find better things to fund in the future. That seems like a wrong explanation.

The point I’m trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbers) in the 98th percentile of impactful things you might do; while these things might be, say, 99.5-99.9th percentile.

I think this is a very useful way of putting it. I would be interested in anyone trying to actually quantify this (even to just get the right order of magnitude from the top). I suspect you have already done something in this direction when you decide what jobs to list on your job board.

I want to mildly push back on the "fund weird things" idea. I'm not aware of EA Funds grants having been rejected due to being weird. I think EA Funds is excited about funding weird things that make sense, and we find it easy to refer them to private donors. It's possible that there are good weird ideas that never cross our desk, but that's again an informational reason rather than weirdness.

Edit: The above applies primarily to longtermism and meta. If you're a large (>$500k/y) neartermist donor who is interested in funding weird things, please reach out to us (though note that we have had few to none weird grant ideas in these areas).

I agree EA is really good at funding weird things, but every in-group has something they consider weird. A better way of phrasing that might have been "fund things that might create PR risk for OpenPhil".

See this comment from the Rethink Priorities Report on Charter Cities:

Finally, the laboratories of governance model may add to the neocolonialist critique of charter cities. Charter cities are not only risky, they are also controversial. Charter cities are likely to be financed by rich-country investors but built in low-income countries. If rich developers enforce radically different policies in their charter cities, that opens up the charge that the rich world is using poor communities to experiment with policies that citizens of the rich world would never allow in their own communities. Whether or not this criticism is justified, it would probably resonate with many socially-minded individuals, thereby reducing the appeal of charter cities.

Note the phrasing "Whether or not this criticism is justified". The authors aren't worried that Charter Cities are actually neocolonialist, they're just worried that it creates PR risk. So Charter Cities are a good example of something small donors can fund that large EA foundations cannot.

I agree that EA Funds is in a slightly weird place here since you tend to do smaller grants. Being able to refer applicants to private donors seems like a promising counter-argument to some of my criticisms as well. Though in that case, is the upshot that I should donate to EA Funds, or that I should tell EA Funds to refer weird grant applicants to me?

Though in that case, is the upshot that I should donate to EA Funds, or that I should tell EA Funds to refer weird grant applicants to me?

If you're a <$500k/y donor, donate to EA Funds; otherwise tell EA Funds to refer weird grant applications to you (especially if you're neartermist – I don't think we're currently constrained by longtermist/meta donors who are open to weird ideas).

Regarding Charter Cities, I don't think EA Funds would be worried about funding them. However, I haven't yet encountered human-centric (as opposed to animal-inclusive) neartermist (as opposed to longtermist) large private donors who are open to weird ideas, and fund managers haven't been particularly excited about charter cities.

One possible source of confusion here is that EA grantmakers and (in the report) Rethink Priorities tend to think of charter cities (and for that matter, climate change) as a near-/medium- termist intervention in global health and development, whereas perhaps other EAs or EA-adjacent folks (including yourself?) think of it as a longtermist intervention.

This doesn't seem like it is common knowledge. Also, "weird things that make sense" does kind of screen off a bunch of ideas which make sense to potential applicants, but not to fund managers. 

It's possible that there are good weird ideas that never cross our desk, but that's again an informational reason rather than weirdness.

This is not the state of the world I would expect to observe if the LTFF were getting a lot of weird ideas. In that case, I'd expect some weird ideas to be funded, and some really weird ideas to not get funded.

Also, "weird things that make sense" does kind of screen off a bunch of ideas which make sense to potential applicants, but not to fund managers. 

It's up to the applicant to state their case to convince a (hopefully) risk neutral, intelligent and knowledgeable fund manager to give them money. If they don't do so convincingly enough then it's probably  because their idea isn't good enough.
 

This doesn't seem like it is common knowledge. 

To me, it feels like I (and other grantmakers) have been saying this over and over again (on the Forum, on Facebook, in Dank EA Memes, etc.), and yet people keep believing it's hard to fund weird things. I'm confused by this.

Also, "weird things that make sense" does kind of screen off a bunch of ideas which make sense to potential applicants, but not to fund managers. 

Sure, but that argument applies to individual donors in the same way. (You might say that having more diverse decision-makers helps, but I'm pretty skeptical and think this will instead just lower the bar for funding.)

[...] you would have to argue that:

  1. There are longtermist organizations that are currently funding-constrained,
  2. Such that more funding would enable them to do more or better work,
  3. And this funding can't be met by existing large EA philanthropists.

It's not clear to me that any of these points are true.

It seems to me that those points might currently be true of Rethink Priorities. See these relevant paragraphs from this recent EA Forum post on their 2021 Impact and 2022 Strategy:

If better funded, we would be able to do more high-quality work and employ more talented researchers than we otherwise would.

Currently, our goal is to raise $5,435,000 by the end of 2022 [...]. However, we believe that if we were maximally ambitious and expanded as much as is feasible, we could effectively spend the funds if we raised up to $12,900,000 in 2022.

Not all of this is for their longtermist work, but it seems that they plan to spend at least 26% of additional funding on longtermism in 2022-2023 (if they succeed at raising at least  $5,435,000), and up to about 41% if they raise $12,900,000.

It seems that they aren't being funded as much as they'd like to be by large donors. In the comments of that post, RP's Director of Development said that there have been several instances in which major grantmakers gave them only 25%-50% of the amount of funding they requested. Also, Linch, on Facebook, asked EAs considering donating this year to donate to Rethink Priorities. So I think there's good evidence that all of those points you mentioned are currently true. 

That being said, great funding opportunities like this can disappear very quickly, if a major grantmaker changes their mind.

Fund weird things: A decent litmus test is "would it be really embarrassing for my parents, friends or employer to find out about this?" and if the answer is yes, more strongly consider making the grant.

Things don't even have to be that weird to be things that let you have outsized impact with small funding.

A couple examples come to mind of things I've either helped fund or encouraged others to fund that for one reason or another got passed over for grants. Typically the reason wasn't that the idea was in principle bad, but that there were trust issues with the principals: maybe the granters had a bad interaction with the principals, maybe they just didn't know them that well or know anyone who did, or maybe they just didn't pass a smell test for one reason or another. But, if I know and trust the principals and think the idea is good, then I can fund it when no one else would.

Basically this is a way of exploiting information asymmetries to make donations. It doesn't scale indefinitely, but if you're a small time funder with plenty of social connections in the community there's probably work you could fund that would get passed over for being weird in the sense I describe above.

A lot's been said. Is this a fair summary: small donors can do a lot of good (and earning and giving can be much higher impact than other altruistic activities, like local community volunteering) but as the amount of 'EA dedicated' money goes up, small donors are less impactful and more people should consider careers which are directly impactful?

Thanks for the great post (and for your great writing in general)! It mostly makes a ton of sense to me, though I am a bit confused on this point:

"If Benjamin's view is that EA foundations are research bottlenecked rather than funding bottlenecked, small donations don't "free up" more funding in an impact-relevant way."

EA foundations might be research bottlenecked now, but funding bottlenecked in the future. So if I donate $1 that displaces a donation that OpenPhil would have made, then OpenPhil has $1 more to donate to an effective cause in the future, when we are funding constrained rather than research constrained.

So essentially, a $1 donation by me now is an exercise in patient philanthropy, with OpenPhil acting as the intermediary. 

 Does this fit within your framework, or is there something I'm missing? 

I don't think this "changes the answer" as far as your recommendation goes - we should fund more individuals, selves, and weirdos.

Hey, thanks. That's a good point.

I think it depends partially on how confident you are that Dustin Moskovitz will give away all his money, and how altruistic you are. Moskovitz seems great; I think he's pledged to give away "more than half" his wealth in his lifetime (though I can't currently find a good citation, and it might be much higher). My sense is that some other extremely generous billionaires (Gates/Buffett) also made pledges, and it doesn't currently seem like they're on track. Or maybe they do give away all their money, but it's just held by the foundation, not actually doled out to causes. And then you have to think about how foundations drift over time, and whether you think OpenPhil 2121 will have values you still agree with.

So maybe you can think of this roughly as: "I'm going to give Dustin Moskovitz more money, and trust that he'll do the right thing with it eventually". I'm not sure how persuasive that feels to people.

(Practically, a lot of this hinges on how good the next best alternatives actually are. If smart weirdos you know personally are only 1% as effective as AMF, it's probably still not worth it even if the funding is more directly impactful. Alternatively, GiveDirectly is ~10% as good as GiveWell top charities, and even then I think it's a somewhat hard sell that all my arguments here add up to a 10x reduction in efficacy. But it's not obviously unreasonable either.)

That’s helpful thank you! I think the mode is more “I’m going to give OpenPhil more money”. It only becomes “I’m going to give Dustin more money” if it’s true that Dustin adjusts his donations to OpenPhil every year based on how much OpenPhil disburses, such that funging OpenPhil = funging Dustin

But in any case I’d say most EAs are probably optimistic that these organizations and individuals will continue to be altruistic and will continue to have values we agree with.

And in any case, I strongly agree that we should be more entrepreneurial

Strong upvote, I think the "GiveDirectly of longtermism" is investing* the money and deploying it to CEPI-like (but more impactful) opportunities later on. 

* Donors should invest it in ways that return ≥15% annually (and plausibly 30-100% on smaller amounts, with current crypto arbitrage opportunities). If you don't know how to do this yourself, funging with a large EA donor may achieve this.

(Made a minor edit)

The claim that large EA donors are likely to return ≥15% annually, and plausibly 30%-100%, is incredibly optimistic. Why would we expect large EA donors to get so much higher returns on investment than everyone else, and why would such profitable opportunities still be funding-constrained? This is not a case where EA is aiming for something different from others; everyone is trying to maximize their monetary ROI with their investments.

Markets are made efficient by really smart people with deep expertise. Many EAs fit that description, and have historically achieved such returns doing trades/investments with a solid argument and without taking crazy risks. 

Examples include: crypto arbitrage opportunities like these (without exposure to crypto markets), the Covid short, early crypto investments (high-risk, but returns were often >100x, implying very favorable risk-adjusted returns), prediction markets, meat alternatives.

Overall, most EA funders outperformed the market over the last 10 years, and they typically had pretty good arguments for their trades.

But I get your skepticism and also find it hard to believe (and would also be skeptical of such claims without further justification).

Also note that returns will get a lot lower once more capital is allocated in this way. It's easy to make such returns on $100 million, but really hard on much larger amounts.

(Made some edits)

But the more you think everyone else is doing that, the more important it is to give now, right? Just as an absurd example, say the $46b of EA-related funds grows 100% YoY for 10 years; then we wake up in 2031 with $46 trillion. If anything remotely like that is actually true, we'll feel pretty dumb for not giving to CEPI now.

Yeah, I agree. (Also, I think it's a lot harder / near-impossible to sustain such high returns on a $100b portfolio than on a $1b portfolio.)

Thanks for the post. It has become one of the most interesting discussions currently on the Forum.
However, I'm not convinced by your argument that donor coordination among EAs is particularly hard (what makes it hard is that we might have conflicting goals, such as near-term vs. long-term, or environmentalism vs. wild animal suffering, etc. – and even so, EAs are the only ones talking about things like moral trade).

Actually, I'm particularly suspicious of the recommendation "fund weird things" - I mean, yeah, I agree you should fund a project that you think has high expected value and is neglected because only you know it, but... are you sure you paid all the relevant informational costs before getting to this conclusion? I guess I prefer to pay some EA orgs to select what wild things are worth funding.

I'll probably have to write a whole post to deal with this, but my TL;DR is: the movement / community Effective Altruism exists for us to efficiently deal with the informational costs and coordination necessary to do the most good. It isn't a movement created only to convince people they should do the most good (EAs often don't need to be convinced of this, but yeah, convincing others sure helps) or so they could feel less lonely doing it (but again, it helps) - I think we need a movement especially because we are trying to find out what is the most good you can do. It turns out it is more effective to do that in a community of highly skilled and like-minded (up to a point: diversity is an asset, too) people. So when someone says "fund weird things", I want to reply something like "Sure... but how do I do it effectively, instead of just like another normie?"

Of course, I'm afraid someone might accuse me of misunderstanding the case for "fund weird things", but my point is precisely that this advice should have some caveats added to prevent misunderstanding. Though I agree EAs should look for more low-hanging fruit in the wild, they should also think about how, as a group, they could coordinate to make the most of it.

You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy's Law states, "no matter who you are, most of the smartest people work for someone else."

But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I did personally have an insight into a new giving opportunity, I would not proceed to donate; I would proceed to write up my thoughts on EA Forum and get feedback. Since there's an existing popular venue for crowdsourcing ideas, I'm even less willing to believe that large EA foundations have simply missed a good opportunity.

I would like to respond specifically to this reasoning.

Consider the scenario that a random (i.e. probably not EA-affiliated) genius comes up with an idea that is, as a matter of fact, high value. 

Simplifying a lot, there are two possibilities here: (X) their idea falls within the window of what the EA community regards as effective, or (Y) it does not.

Probabilities for X and Y could be hotly debated, but I'm comfortable stating that the probability of X is less than 0.5 – i.e. we may have a high success rate within our scope of expertise, but the share of good ideas that EA can recognize as good is not that high.

The ideas that reach Openphil via the EA community might be good, but not all good ideas make it through the EA community.

I made a comment on Ben's post proposing that small donations to the Patient Philanthropy Fund may have very high expected impact, if you buy into the general argument for patient philanthropy. I'd be interested to hear your thoughts on this. It seems to me that PPF may be a good candidate for the "GiveDirectly of Longtermism".

My original comment:

One point I don't think has been mentioned in this post is that a small donation to the Patient Philanthropy Fund could end up being a much larger donation in the future, in real terms, due to likely investment returns. Couple that with probable exogenous learning over time on where/when best to give, and a small donation to PPF now really could do a phenomenal amount of good later on.

More on this in Founders Pledge's report.

On Open Phil aiming to not cover the vast majority of an organization's budget, from the 80,000 Hours podcast:

Rob Wiblin: A regular listener wrote in and was curious to know where Open Phil currently stands on its policy of not funding an individual organization too much, or not being too large a share of their total funding, because I think in the past you kind of had a rule of thumb that you were nervous about being the source of more than 50% of the revenue of a nonprofit. And this kind of meant that there was a niche where people who were earning to give could kind of effectively provide the other 50% that Open Phil was not willing to provide. What’s the status of that whole situation?

Holden Karnofsky: Well, it’s always just been a nervousness thing. I mean, I’ve seen all kinds of weird stuff on the internet that people… Games of telephone are intense. The way people can get one idea of what your policy is from hearing something from someone. So I’ve seen some weird stuff about it “Open Phil refuses to ever be more than 50%, no matter what. And this is becoming this huge bottleneck, and for every dollar you put in, it’s another dollar…” It’s like, what? No, we’re just nervous about it. We are more than 50% for a lot of EA organizations. I think it is good to not just have one funder. I think that’s an unhealthy dynamic. And I do think there is some kind of multiplier for people donating to organizations, there absolutely is, and that’s good. And you should donate to EA organizations if you want that multiplier. I don’t think the multiplier’s one-to-one, but I think there’s something there. I don’t know what other questions you have on that, but it’s a consideration.

Rob Wiblin: I mean, I think it totally makes sense that you’re reluctant to start approaching the 100% mark where an organization is completely dependent on you and they’ve formed no other relationships with potential backup supporters. They don’t have to think about the opinions of anyone other than a few people at Open Phil. That doesn’t seem super healthy.

Holden Karnofsky: Well, not only do they… I mean, it’s a lack of accountability but it’s also a lack of freedom. I think it’s an unhealthy relationship. They’re worried that if they ever piss us off, they could lose it and they haven’t built another fundraising base. They don’t know what would happen next, and that makes our relationship really not good. So it’s not preferred. It doesn’t mean we can never do it. We’re 95% sometimes.

Rob Wiblin: Yeah, it does seem like organizations should kind of reject that situation in almost any circumstance of becoming so dependent on a single funder that to some extent, they’re just… Not only is the funder a supporter, but they’re effectively managing them, or you’re going to be so nervous about their opinions that you just have to treat them as though they were a line manager. Because you know so much more about the situation than the funder probably does, otherwise they would be running the organization. But accepting that, so you’re willing to fund more than 50% of an organization’s budget in principle?

Holden Karnofsky: Yeah.

Rob Wiblin: But you get more and more reluctant as they’re approaching 100%. That does mean that there is a space there for people to be providing the gap between what you’re willing to supply and 100%. So maybe that’s potentially good news for people who wanted to take the earning to give route and were focused on longtermist organizations.

Holden Karnofsky: Yeah, and I think the reason it’s good news is the thing I said before, which is that it is good for there not to just be one dominant funder. So when you’re donating to EA organizations, you’re helping them have a more diversified funding base, you’re helping them not be only accountable to one group, and we want that to happen. And we do these fair-share calculations sometimes. So we’ll kind of estimate how much longtermist money is out there that would be kind of eligible to support a certain organization, and then we’ll pay our share based on how much of that we are. And so often that’s more like two thirds, or has been more like two thirds than 50%. Going forward it might fall a bunch. So I mean, that’s the concept. And I would say it kind of collapses into the earlier reason I gave why earning to give can be beneficial.

I have a similar thread on Ben's post here.

This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health.

I absolutely agree that those issues are very neglected, but only among the general population. They're not at all neglected within EA. Specifically, the question we should be asking isn't "do people care enough about this", but "how far will my marginal dollar go?"

To answer that latter question, it's not enough to highlight the importance of the issue, you would have to argue that:

  1. There are longtermist organizations that are currently funding-constrained,
  2. Such that more funding would enable them to do more or better work,
  3. And this funding can't be met by existing large EA philanthropists.

 

This is a good illustration of how tractability has been neglected by longtermists. Benjamin is only thinking in terms of importance and crowdedness, and not incorporating tractability.

There's some tradeoff curve between cost-effectiveness and scale. When EA was more funding constrained, a $1M grant with 10X ROI looked better than a $1B grant with 5x ROI, but now the reverse is true.

Could you explain what you mean by 10X ROI?

Yeah, that's a good question. It's underspecified, and depends on what your baseline is.

We might say "for $1 donated, how much can we increase consumption". Or "for $1 donated, how much utility do we create?" The point isn't really that it's 10x or 5x, just that one opportunity is roughly 2x better than the other.

https://www.openphilanthropy.org/blog/givewells-top-charities-are-increasingly-hard-beat

So if we are giving to, e.g., encourage policies that increase incomes for average Americans, we need to increase them by $100 for every $1 we spend to get as much benefit as just giving that $1 directly to GiveDirectly recipients.

That's not exactly "Return on Investment", but it's a convenient shorthand.
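For what it's worth, here's a rough sketch of how that "$100 for every $1" figure can be rationalized, assuming logarithmic utility of income and a roughly 100x income gap; the specific income numbers are my own illustrative assumptions, not taken from the linked post:

```python
# Sketch of the "$100 for every $1" comparison under a log-utility assumption.
# The income figures are round illustrative numbers, not data from the linked post.
import math

def welfare_gain(income: float, transfer: float) -> float:
    """Approximate well-being gain from extra income, assuming utility = log(income)."""
    return math.log(income + transfer) - math.log(income)

poor_recipient = welfare_gain(income=500, transfer=1)      # $1 to a GiveDirectly recipient
avg_american = welfare_gain(income=50_000, transfer=100)   # $100 of income to an average American

print(poor_recipient / avg_american)  # ~1.0: roughly the same benefit
```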

So it's like a benefit-to-cost ratio. I can see that with diminishing returns to more money, the benefit-to-cost ratio could be halved. So with $1 million in the early days of EA, we could have $10 million of impact, but now that we have $1 billion, we can have $5 billion of impact. It seems like the latter scenario is still much better. Am I missing something?

Uhh, I'm not sure if I'm misunderstanding or you are. My original point in the post was supposed to be that the current scenario is indeed better.

Ok, so we agree that having $1 billion is better despite diminishing returns. So I still don't understand this statement:

When EA was more funding constrained, a $1M grant with 10X ROI looked better than a $1B grant with 5x ROI

Are you saying that in 2011, we would have preferred $1M over $1B? Or does "look better" just refer to the benefit to cost ratio?

I think I see the confusion.

No, I meant an intervention that could produce 10x ROI on $1M looked better than an intervention that could produce 5x ROI on $1B, and now the opposite is true (or should be).
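To spell out the switch in a toy sketch (the grant sizes and multipliers are just the ones from the example, nothing more):

```python
# Toy comparison of the two prioritization rules. Grant sizes and multipliers
# are just the ones from the example above.

small_grant = {"name": "$1M at 10x", "cost": 1e6, "roi": 10}
large_grant = {"name": "$1B at 5x", "cost": 1e9, "roi": 5}
grants = [small_grant, large_grant]

# Funding-constrained: rank by cost-effectiveness (impact per dollar).
print(max(grants, key=lambda g: g["roi"])["name"])              # -> $1M at 10x

# Funding-rich: rank by total impact, since the money is there to spend.
print(max(grants, key=lambda g: g["cost"] * g["roi"])["name"])  # -> $1B at 5x
```

The two rankings disagree, which is the whole point: which one to use depends on whether money or opportunities are the binding constraint.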

By the way, sometimes rep risks signal something is just a bad idea.

PR risk: It's not worth funding a sperm bank for Nobel Prize winners that might later get you labeled a racist

Or you could just fund a gamete bank (why just sperm?) for very high-IQ / cognitively skilled / successful people - which would be way cheaper and more effective (you could buy the whole embryo if you want). Or just fund genetics research and ethical eugenics advocacy, which is way more scalable. That way, people could better tell the difference between things that have a bad rep because they are bad ideas and things that have one because they are associated with bad ideas.

My point: in the best case, you should be neutral to PR risks, and maybe even count them as a cost in your cost-benefit analysis rather than as a complement to neglectedness. But that's hard to do when you're looking for weird things by yourself.

Maybe we should collect the high-variance angel philanthropy ideas somewhere?

I was discussing this recently with someone; I think it could be highly valuable to crowdsource ideas related to EA.

The first point only applies to long-termist charities as far as I can tell. I think if you're earning to give and excited to give to GiveDirectly (or other charities that are 2-5x more effective) there's not a problem.

Agreed that my arguments don't apply to donations to GiveDirectly; it's just that they're 5-10x less effective than top GiveWell charities.

I think that part of my arguments don't apply to other GiveWell charities, but the general concern still does. If AMF (or whoever) has funding capacity, why shouldn't I just count on GiveWell to fill it?

I have also been thinking about whether GiveWell and other donors will fill the funding needs at AMF, and whether I should look for something in between AMF and GiveDirectly that needs funding.

1 Day Sooner (promoting greater use of Human Challenge Trials for vaccine development) is the global health group I'm personally most excited about. I would personally probably invest money in them over AMF if I was doing global health donations even if AMF is funding constrained, though I'd want to think carefully about both 1 Day Sooner and AMF if I was advising or donating considerable amounts of money in global health.  

I also feel fairly optimistic about lead reduction, based on this Rethink Priorities report.

Note that both subcause areas are less well-studied and carry higher uncertainty than AMF. I'm not aware of Pareto improvements over AMF along the risk/cost-effectiveness frontier.

For GiveWell and its top charities, excluding GiveDirectly, I think a lot depends on whether you expect GiveWell to have more room for more funding (RFMF) than funds anytime in the near future. The obvious question is why OpenPhil wouldn't fill in the gaps. Maybe if GiveWell's RFMF expands enough, OpenPhil won't want to spend that much on GiveWell-level interventions?

GiveWell gives some estimates here (Rollover Funding FAQ | GiveWell) saying they expect to have capacity to spend down their funds in 2023, but they admit they're conservative on the funding side and ambitious on the RFMF side.

If GiveWell will actually be funding constrained within a few years, I feel pretty good about donating to them, effectively letting them hold the money in OpenPhil investments until they identify spending opportunities at the 5-10x GD level (especially where donating now yields benefits like matching).

If they're ultimately going to get everything 5x+ funded by OpenPhil no matter what, then your argument that I'm donating peanuts to the huge pile of OpenPhil or Moskovitz money seems right to me.

GiveWell does say "If we’re able to raise funds significantly faster than we've forecast, we will prioritize finding additional RFMF to meet those funds." So it sounds like they're almost-committing to not letting donor money get funged by OpenPhil for more than a few years.

I am naturally an angsty person, and I don't carry much reputational risk

Relate! Although you're anonymous, I'm just ADD.

Point 1 is interesting to me:

  • longtermist/AI safety orgs could require a diverse ecosystem of groups working based on different approaches. This would mean the "current state of under-funded-ness" is in flux, uncertain, and leaning towards "some lesser-known group(s) need money".
  • lots of smaller donations could indicate/signal interest from lots of people, which could help evaluators or larger donors with something.

Another point: since I think funding won't be the bottleneck in the near future, I've refocused my career somewhat to balance more towards direct research.

(Also, partly inspired by your "Irony of Longtermism" post, I'm interested in intelligence enhancement for existing human adults, since the shorter timelines don't leave room for embryo whatevers, and intelligence would help in any timeline.)
