For the past few years, I've mostly heard from alignment grantmakers that they're bottlenecked by promising projects/people to fund, not by the amount of money available. Grantmakers generally had no trouble funding the projects/people they found object-level promising, with money left over. In that environment, figuring out how to turn marginal dollars into new promising researchers/projects - e.g. by finding useful recruitment channels or designing useful training programs - was a major problem.

Within the past month or two, that situation has reversed. My understanding is that alignment grantmaking is now mostly funding-bottlenecked. This is mostly based on word of mouth, but for instance, I heard that the recent Lightspeed Grants round received far more applications that passed the bar for basic promising-ness than it could fund. I've also heard that the Long-Term Future Fund (which funded my current grant) now has insufficient money for all the grants they'd like to make.

I don't know whether this is a temporary phenomenon or a longer-term one. Alignment research has gone mainstream, so we should expect both more interested researchers and more interested funders. It may be that researchers pivot a bit faster and funders catch up later, or the funding bottleneck may become the new normal. Regardless, grantmaking seems to be at least funding-bottlenecked right now.

Some takeaways:

  • If you have a big pile of money and would like to help, but haven't been donating much to alignment because the field wasn't money-constrained, now is your time!
  • If this situation is the new normal, then earning-to-give for alignment may look like a more useful option again. That said, at this point committing to an earning-to-give path would be a bet on this situation being the new normal.
  • Grants for upskilling, training junior people, and recruitment make a lot less sense right now from grantmakers' perspective. 
  • For those applying for grants, asking for less money might make you more likely to be funded. (Historically, grantmakers have consistently told me that most people ask for less money than they should; I don't know whether that will change going forward, but now is an unusually likely time for it to change.)

Note that I am not a grantmaker, I'm just passing on what I hear from grantmakers in casual conversation. If anyone with more knowledge wants to chime in, I'd appreciate it.

Comments

I've always been surprised that there is no fund you can donate to that is only for AI Alignment. You can either donate directly to an org or project or you can donate to a longtermist fund which is broader than just alignment.

I've tried to argue before that plenty of people are just not that cause-neutral and would want to donate to a fund just for alignment. And now that alignment has gone much more mainstream, it is even more important that we actually have a legible place for people to donate.

AI safety has gone mainstream, but most people in the world wouldn't have a clue what “longtermism” is.

FWIW, the LTFF is considering spinning off a fund focused on AI alignment (though no promises at this point in time).

Agreed. Lots of people aren't longtermist.

[anonymous]

I had also heard anecdotally from some AI orgs that they might not be able to hire as many people as they would like. This seems surprising given that this is a pivotal time for AI safety research and the field is still very young and neglected.

This is interesting but not so surprising to me. Increasing attention to AI work means growing orgs and more people working on it.

AI safety work (outside of big AI companies) is funded almost entirely by donations (as far as I know), with hardly any government money. The large majority of this is probably EA affiliated donations too.

So if the number of AI safety orgs increases and the number of people wanting to work on AI safety grows rapidly (including people outside EA), but donation growth doesn't keep up, then work will stall.

Also, this rings true from the OP:
"Grants for upskilling, training junior people, and recruitment make a lot less sense right now from grantmakers' perspective. "

"For those applying for grants, asking for less money might make you more likely to be funded" 

My guess is that it's good to still apply for lots of money, and then you just may not be funded the full amount? And one can say what one would do with more or less money granted, so that the grantmakers can take that into account in their decision.

Judging by the stats page, the LTFF currently has $2M in the bank, which is evidence against being funding constrained. If the LTFF had distributed ~all the funds and the managers had put up a post saying, e.g., “here are 20 more grants totalling $10M we'd really like to see funded”, the argument would have sounded more persuasive, and I would predict the funding gap could get closed relatively quickly by some crypto magnate.

I appreciate the author starting this conversation, and would really love to see a comment from the LTFF here.

Hi Mckiev! 

We consider the LTFF to be funding constrained because we're still giving out more in grants monthly than we're receiving in donations, despite having raised our internal bar twice since November 2022 (the first time because of an assessment of changes in the LT funding landscape overall, the second time due to worries about our own liquidity constraints).

If you're interested, you might like my new post on marginal grants at different levels of funding here, which may help you get a sense of whether donations to the LTFF are valuable relative to your best counterfactual use of money elsewhere.

Re: "2M in the bank," I think it is literally true that we have 1.8M in the bank, but the number listed is a lagging indicator, because we have a number of grants that were promised or effectively promised but not actually paid out. 

In this appendix, I also wrote some notes on our current approach to saving/donation smoothing (mostly we don't do it that much).

Thank you for pointing me to this post; now I better understand the situation. I hope you'll figure out how to distribute approved grants faster, and also how to raise more funds - I'd love to see most net-positive longtermist grants funded, and believe it's achievable.

 

I think you laid out a very compelling reason to donate to LTFF and I'm sorry I didn't see it earlier. Am I unusual in this regard? What share of current LTFF donors do you think are up to date on this? 

I think you laid out a very compelling reason to donate to LTFF and I'm sorry I didn't see it earlier.

Thanks for your kind words! And as for being sorry, hardly your fault, given that the relevant public writings are between a few days and 2 weeks old!

Am I unusual in this regard? What share of current LTFF donors do you think are up to date on this? 

I'm not sure. We've always been low on grantmaking capacity, at least since I joined in early 2022, and the capacity-to-applications ratio has gotten even worse this year. This has resulted in us prioritizing our limited LTFF time on grant evaluations rather than donor engagement or big-picture strategic calls. It has also resulted in a number of other deficiencies in the LTFF, like a lack of reliability in responding to time-sensitive applications; see some of Asya's reflections here.

We've onboarded a few more guest fund managers (grantmakers), so hopefully the grant evals will be faster. I've also recently decided to devote a significantly higher percentage of my time to EA Funds/LTFF rather than my historical "day job" at Rethink Priorities, at least until the situation is a bit more stable.

Anyway, one of my bigger priorities in the upcoming weeks/months is to understand donor concerns, communicate with donors and the EA public more, possibly pitch specific donors on our funding needs, etc. Hopefully this will give us more clarity on whether my tentative assessment (that the LTFF's current marginal grant is equal to or better than other marginal longtermist expenditures) is accurate, or whether donors are correctly responding to changes in the funding landscape by prioritizing other, better longtermist donation opportunities.

Glad to hear that you are increasing capacity! Regarding understanding donor concerns and pitching them: it seems like it should be relatively easy to hire someone for this role (unlike hiring a grant evaluator).

Hi Anton,

Trusting what Caleb Parikh said a week or so ago, it looks like the LTFF is funding constrained:

LTFF funding gap

  • The LTFF has a funding gap of $1M/month.[7]
  • Based on donations over the past few months, I estimate that each fund will receive (by default) roughly $120k per month ($720k over the next six months), which will be matched at a 2:1 rate by Open Philanthropy to give us a total of $360k/month.
  • This means we expect to be unable to fund around $640k/month of projects we believe should be funded.
  • This could be filled by an additional $213k in public donations each month (or $1.27M over the next six months).
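
A quick sanity check of this arithmetic (assuming, as my own reading rather than something stated in the quote, that the 2:1 Open Philanthropy match also applies to the additional public donations): each month, $120k of public donations plus the 2:1 match covers $360k of the $1M gap, leaving $640k unfunded; closing that at the same match rate takes roughly $640k / 3 ≈ $213k/month of extra public donations, i.e. about $1.28M over six months, which matches the quoted $1.27M up to rounding.

\[
3 \times 120\mathrm{k} = 360\mathrm{k}, \qquad 1000\mathrm{k} - 360\mathrm{k} = 640\mathrm{k}, \qquad 640\mathrm{k} / 3 \approx 213\mathrm{k}, \qquad 213\mathrm{k} \times 6 \approx 1.28\mathrm{M}.
\]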

On the other hand, the LTFF might still not be funding constrained if the above gap is expected to be easily filled thanks to its announcement.

On the other hand, the LTFF might still not be funding constrained if the above gap is expected to be easily filled thanks to its announcement.

I definitely hope so! On the other hand, we at least haven't received sufficiently large donations in the last week. But of course it's very possible/likely that people quite reasonably need a while to decide whether donating to us is worth it, so the situation may look less scary in the coming weeks.
