The Long-Term Future Fund (LTFF) is one of the EA Funds. Between Friday Dec 4th and Monday Dec 7th, we'll be available to answer any questions you have about the fund – we look forward to hearing from all of you!
The LTFF aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.
Grant recommendations are made by a team of volunteer Fund Managers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave and Asya Bergal. We are also fortunate to be advised by Nick Beckstead and Nicole Ross. You can read our bios here. Jonas Vollmer, who is heading EA Funds, also provides occasional advice to the Fund.
You can read about how we choose grants here. Our previous grant decisions and rationale are described in our payout reports. We'd welcome discussion and questions regarding our grant decisions, but to keep discussion in one place, please post comments related to our most recent grant round in this post.
Please ask any questions you like about the fund, including but not limited to:
- Our grant evaluation process.
- Areas we are excited about funding.
- Coordination between donors.
- Our future plans.
- Any uncertainties or complaints you have about the fund. (You can also e-mail us at ealongtermfuture[at]gmail[dot]com for anything that should remain confidential.)
We'd also welcome more free-form discussion, such as:
- What should the goals of the fund be?
- What is the comparative advantage of the fund compared to other donors?
- Why would you/would you not donate to the fund?
- What, if any, goals should the fund have other than making high-impact grants? Examples could include: legibility to donors; holding grantees accountable; setting incentives; identifying and training grant-making talent.
- How would you like the fund to communicate with donors?
We look forward to hearing your questions and ideas!
I am wondering how the fund managers are thinking longer-term about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn't been much renewed granting to independent individuals and projects (i.e. granting for a second or third time to grantees who have already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so, what do they tend to do?
I think LTFF is doing something valuable by giving people the freedom to not "sell out" to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I'm worried about a situation where receiving a grant from LTFF isn't enough to be sustainable, so that people go back to doing more "safe" things like working in academia or at an established org.
Any thoughts on this topic?
The LTFF is happy to renew grants so long as the applicant has been making strong progress and we believe working independently continues to be the best option for them. Examples of renewals in this round include Robert Miles, who we first funded in April 2019, and Joe Collman, who we funded in November 2019. In particular, we'd be happy to be the #1 funding source of a new EA org for several years (subject to the budget constraints Oliver mentions in his reply).
Many of the grants we make to individuals are for career transitions, such as someone retraining from one research field to another, or for one-off projects. So I would expect most grants not to be renewals. That said, the bar for renewals does tend to be higher. This is because we pursue a hits-based giving approach, so we are willing to fund projects that are likely not to work out -- but of course we will not want to renew a grant if it is clearly not working.
I think being a risk-tolerant funder is particularly valuable since most employers are, quite rightly, risk-averse. Firing people tends to be harmful to morale; internships or probation periods can help, but take a lot of supervisory time. This means people who might be... (read more)
Yeah, I am also pretty worried about this. I don't think we've figured out a great solution to this yet.
I think we don't really have sufficient capacity to evaluate organizations on an ongoing basis and provide good accountability. Like, if a new organization were to be funded by us and then grow to a budget of $1M a year, I don't feel like we have the capacity to evaluate their output and impact sufficiently well to justify giving them $1M each year (or even just $500k).
Our current evaluation process feels pretty good for smaller projects, and for granting to established organizations that have other active evaluators looking into them whom we can talk to, but it doesn't feel very well-suited to larger organizations that don't have existing evaluations done on them (there is a lot of due diligence work to be done there that I think requires more staff capacity than we have).
I also think the general direction of the LTFF specializing into something more like venture funding, with other funders stepping in for more established organizations, feels pretty good to me. I do think the current process has a lot of unnecessary uncertainty and risk in it, and I would like to ... (read more)
I agree with @Habryka that our current process is relatively lightweight, which is good for small grants but doesn't provide adequate accountability for large grants. I think I'm more optimistic about the LTFF being able to grow into this role. There's a reasonable number of people who we might be excited about working as fund managers -- the main thing that's held us back from growing the team is the coordination overhead of adding more individuals. But we could potentially split the fund into two sub-teams that specialize in smaller and larger grants (with different evaluation processes), or even create a separate fund in EA Funds that focuses on more established organisations. Nothing is certain yet, but it's a problem we're interested in addressing.
In the April 2020 payout report, Oliver Habryka wrote:
I'm curious to hear more about this (either from Oliver or any of the other fund managers).
Regardless of whatever happens, I've benefited greatly from all the effort you've put in your public writing on the fund Oliver.
Thank you!
I am planning to respond to this in more depth, but it might take me a few days longer, since I want to do a good job with it. So please forgive me if I don't get around to this before the end of the AMA.
I wrote a long rant that I shared internally that was pretty far from publishable, but then a lot of things changed, and I tried editing it for a bit, but more things kept changing -- enough that at some point I gave up on trying to edit my document to keep up with the new changes, and decided to instead just wait until things settle down, so I can write something that isn't going to be super confusing.
Sorry for the confusion here. At any given point it seemed like things would settle down soon, so that I would have a more consistent opinion.
Overall, a lot of the changes have been great, and I am currently finding myself more excited about the LTFF than I have in a long time. But a bunch of decisions are still to be made, so I will hold off on writing a bit longer. Sorry again for the delay.
If you had $1B, and you weren't allowed to give it to other grantmakers or fund prioritisation research, where might you allocate it?
$1B is a lot. It also gets really hard if I don't get to distribute it to other grantmakers. Here are some really random guesses. Please don't hold me to this, I have thought about this topic some, but not under these specific constraints, so some of my ideas will probably be dumb.
My guess is I would identify the top 20 people who seem to be doing the best work around long-term-future stuff, and give each of them at least $10M, which would allow each of them to reliably build an exoskeleton around themselves and increase their output.
My guess is that I would then invest a good chunk more into scaling up LessWrong and the EA Forum, and make it so that I could distribute funds to researchers working primarily on those forums (while building a system for peer evaluation to keep researchers accountable). My guess is this could consume another $100M over the next 10 years or so.
I expect it would take me at least a decade to distribute that much money. I would definitely continue taking in applications for organizations and projects from people and kind of just straightforwardly scale up LTFF spending of the same type, which I think could take another $40M over the next decade.
I think I... (read more)
The cop-out answer of course is to say we'd grow the fund team or, if that isn't an option, we'd all start working full-time on the LTFF and spend a lot more time thinking about it.
If there's some eccentric billionaire who will only give away their money right now to whatever I personally recommend, then off the top of my head:
For any long-termist org that (a) I'd usually want to fund at a small scale and (b) whose leadership's judgement I'd trust, I'd give them as much money as they can plausibly make use of in the next 10 years. I expect that even organisations that are not usually considered funding constrained could probably produce 10-20% extra impact if they invested twice as much in their staff (let them rent really close to the office, pay for PAs or other assistants to save time, etc.).
I also think there can be value in having an endowment: it lets the organisation make longer-term plans, can raise the organisation's prestige, and some things (like creating a professorship) often require endowments.
However, I do think there are some cases where it can be negative: some organisations benefit a lot from the accountability of donors, and being too well-funded can attract the wrong...
About 40%. This includes startups that later get acquired, where the parent company would not have been the first to develop transformative AI if the acquisition had not taken place. I think this is probably my modal prediction: the big tech companies are effectively themselves huge VCs, and their infrastructure provides a comparative advantage over a startup trying to do it entirely solo.
I think I put around 40% on it being a company that does already exist, and 20% on "other" (academia, national labs, etc).
Conditioning on transformative AI being developed in the next 20 years my probability for a new company developing it is a lot lower -- maybe 20%? So part of this is just me not expecting transformative AI particularly soon, and tech company half-life being plausibly quite short. Google is only 21 years old!
What processes do you have for monitoring the outcome/impact of grants, especially grants to individuals?
As part of CEA's due diligence process, all grantees must submit progress reports documenting how they've spent their money. If a grantee applies for renewal, we'll perform a detailed evaluation of their past work. Additionally, we informally look back at past grants, focusing on grants that were controversial at the time, or seem to have been particularly good or bad.
I’d like us to be more systematic in our grant evaluation, and this is something we're discussing. One problem is that many of the grants we make are quite small, so it just isn't cost-effective for us to evaluate all our grants in detail. Because of this, any more detailed evaluation we perform would have to be on a subset of grants.
I view there being two main benefits of evaluation: 1) improving future grant decisions; 2) holding the fund accountable. Point 1) would suggest choosing grants we expect to be particularly informative: for example, those where fund managers disagreed internally, or those which we were particularly excited about and would like to replicate. Point 2) would suggest focusing on grants that were controversial amongst donors, or where there were potential conflicts of interest.
It's important t... (read more)
I notice that all but one of the November 2020 grants were given to individuals as opposed to organisations. What is the reason for this?
To clarify, I'm certainly not criticising -- I guess it makes quite a bit of sense, as individuals are less likely than organisations to be able to get funding from elsewhere, so funding them may be better at the margin. However, I would still be interested to hear your reasoning.
I notice that the animal welfare fund gave exclusively to organisations rather than individuals in the most recent round. Why do you think there is this difference between LTFF and AWF?
Speaking just for myself on why I tend to prefer the smaller individual grants:
Currently when I look at the funding landscape, it seems that without the LTFF there would be a pretty big hole in available funding for projects to get off the ground and for individuals to explore interesting new projects or enter new domains. Open Phil very rarely makes grants smaller than ~$300k, and many other donors don't really like giving to individuals and early-stage organizations because they often lack established charity status, which makes donations to them non-tax-deductible.
CEA has set up infrastructure to allow tax-deductible grants to individuals and to organizations without charity status, and the fund itself seems well-suited to evaluating individuals and the small organizations they run, since we all have pretty wide networks and can pretty quickly gather good references on individuals who are working on projects that don't yet have an established track record.
I think in a world without Open Phil or the Survival and Flourishing Fund, much more of our funding would go to established organizations.
Separately, I also think that I personally view a lot of the intellectual work to be done on... (read more)
I largely agree with Habryka's comments above.
In terms of the contrast with the AWF in particular, I think the funding opportunities in the long-termist vs animal welfare spaces look quite different. One big difference is that interest in long-termist causes has exploded in the last decade. As a result, there's a lot of talent interested in the area, but there's limited organisational and mentorship capacity to absorb this talent. By contrast, the animal welfare space is more mature, so there's less need to strike out in an independent direction. While I'm not sure about this, there might also be a cultural factor -- if you're trying to perform advocacy, it seems useful to have an organisational brand behind you (even if it's just a one-person org). This seems much less important if you want to do research.
Tangentially, I see a lot of people debating whether EA is talent constrained, funding constrained, vetting constrained, etc. My view is that most orgs, at least in the AI safety space, can only grow at a relatively small rate (10-30%) per year while still providing adequate mentorship. This is talent constrained in the sense that having a larger applicant pool will help the ... (read more)
This might be a small point, but while I would agree, I imagine that strategically there are some possible orgs that could grow more quickly, and that by growing could eventually come to dominate the funding.
I think one thing that's going on is that, right now, funding constraints encourage individuals to create organizations that are efficient when small, as opposed to efficient when large. I've made this decision myself. Doing the latter would require a fair amount of trust that large funders would later be interested in the organization at that scale. Right now it seems like we only have one large funder, which makes things tricky.
What do you think has been the biggest mistake by the LTF fund (at least that you can say publicly)?
(I’m not a Fund manager, but I’ve previously served as an advisor to the fund and now run EA Funds, which involves advising the LTFF.)
In addition to what Adam mentions, two further points come to mind:
1. I personally think some of the April 2019 grants weren’t good, and I thought that some (but not all) of the critiques the LTFF received from the community were correct. (I can’t get more specific here – I don’t want to make negative public statements about specific grants, as this might have negative consequences for grant recipients.) The LTFF has since implemented many improvements that I think will prevent such mistakes from occurring again.
2. I think we could have communicated better around conflicts of interest. I know of some 2019 grants that donors perceived to be subject to a conflict of interest, when in fact there either wasn’t a conflict of interest or it was dealt with appropriately. (I can also recall one case where I think a conflict of interest may not have been dealt with well, but our improved policies and practices will prevent a similar potential issue from occurring again.) I think we’re now dealing appropriately with COIs (not in the sense that we refrain from any grants with a potential COI, but that we have appropriate safeguards in place that prevent the COI from impairing the decision). I would like to publish an updated policy once I get to it.
Historically I think the LTFF's biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren't funding interventions on climate change. We've received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it's important that donors have clear expectations regarding how their money will be used.
We've edited the fund page to make our focus areas more explicit, and EA Funds also added the Founders Pledge Climate Change Fund for donors who want to focus on that area (and Jonas emailed donors who made this complaint, encouraging them to switch their donations to the climate change fund). I hope this will help clarify things, but we'll have to be attentive to donor feedback, both via things like this AMA and via our donor survey, so that we can proactively correct any misconceptions.
Another issue I think we have is that we currently lack the capacity to be more proactively engaged with our grantees. I'd like us to do this for around 10% of our grant appli... (read more)
The very first sentence on that page reads (emphasis mine):
I personally think that's quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn't mention pandemics in that sentence? Perhaps you think "especially" is not strong enough?
An important reason why we don't make more grants to prevent pandemics is that we receive only a few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he’s equally excited about AI safety and biosecurity at the margin, and I personally mostly agree with him on this.
Here's a spreadsheet with all EA Funds grants (though without categorization). I agree a proper grants database would be good to set up at some point; I have now added this to my list of things we mig... (read more)
Thanks Jonas, glad to hear there are some related improvements in the works. For whatever it’s worth, here’s an example of messaging that I think accurately captures what the fund has done, what it’s likely to do in the near term, and what it would ideally like to do:
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks or promote the adoption of longtermist thinking. While many grants so far have prioritized projects addressing risks posed by artificial intelligence (and the grantmakers expect to continue this at least in the short term), the Fund is open to funding, and welcomes applications from, a broader range of activities related to the long-term future.
(Not sure if this is the best place to ask this. I know the Q&A is over, but on balance I think it's better for EA discourse for me to ask this question publicly rather than privately, to see if others concur with this analysis, or if I'm trivially wrong for boring reasons and thus don't need a response).
Open Phil's Grantmaking Approaches and Process has the 50/40/10 rule, where (in my mediocre summarization) 50% of a grantmaker's grants have to have the core stakeholders (Holden Karnofsky from Open Phil and Cari Tuna from Good Ventures) on board, 40% have to be grants where Holden and Cari are not clearly on board but could imagine being on board if they knew more, and up to 10% can be more "discretionary."
Reading between the lines, this suggests that up to 10% of funding from Open Phil will go to places Holden Karnofsky and Cari Tuna are not inside-view excited about, because they trust the grantmakers' judgements enough.
Is there a similar (explicit or implicit) process at LTFF?
I ask because
- part of the original pitch for EA Funds, as I understood it, was that it would be able to evaluate higher-uncertainty, higher-reward donation opportunities... (read more)
This is an important question. It seems like there's an implicit assumption here that the highest-impact path for the fund is to make the grants that the fund managers' inside views rate as highest impact, regardless of whether we can explain them. This is a reasonable position -- and thank you for your confidence! -- however, I think the fund being legible does have some significant advantages:
I'm not sure what the right balance of legibility vs inside view is for the LTFF. One possibility would be to split into a more inside view / trust-based fund, and a more legible and "safer" fund. Then donors can choose what... (read more)
How much room for additional funding does LTF have? Do you have an estimate of how much money you could take on and still achieve your same ROI on the marginal dollar donated?
Really good question!
We currently have ~$315K in the fund balance.* My personal median guess is that we could use $2M over the next year while maintaining this year's bar for funding. This would be:
Reasoning below:
Generally, we fund anything above a certain bar, without accounting explicitly for the amount of money we have. Under this policy, for the last two years the fund has given out ~$1.5M per year, or ~$500K per grant round, and has not accumulated a significant buffer.
This round had an unusually large number of high-quality applicants. We spent $500K, but we pushed two large grant decisions to our next payout round, and several of our applicants happened to receive money from another source just before we communicated our funding decision. This mak... (read more)
Do you have a vision for what the Long-Term Future Fund looks like in 3 to 10 years? Do you expect it to stay mostly the same (perhaps with more revenue), or to undergo large structural changes?
As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds, and I’ve also been thinking about the longer-term strategy for EA Funds as a whole.
Some thoughts on this question:
- LTFF strategy: There is no official 3-10 year vision or strategy for the LTFF yet, but I hope we will get there sometime soon. My own best guess for the LTFF’s vision (which I haven’t yet discussed with the LTFF) is: ‘Thoughtful people have the resources they need to successfully implement highly impactful projects to improve the long-term future.’ My best guess for the LTFF’s mission/strategy is ‘make judgment-driven grants to individuals and small organizations and proactively seed new longtermist projects.’ A plausible goal could be to allocate $15 million per year to effective longtermist projects by 2025 (where ‘effective’ means something like ‘significantly better than Open Phil’s last dollar, similar to the current quality of grants’).
- Grantmaking capacity: To get there, we need 1) more grantmaking capacity (especially for active grantmaking), 2) more ideas that would be impactful if implemented well, and 3) more people capable of implementing... (read more)
I found this point interesting, and have a vague intuition that EA Funds (and especially the LTFF) are really trying to do two different things:
Having something doing (1) seems really valuable, and I would feel sad if the LTFF reined back the kinds of things it funded to have a better public image. But I also notice, e.g. when giving donation advice to friends who broadly agree with EA ideas but aren't really part of the community, that I don't feel comfortable recommending EA Funds. And I think that a bunch of the grants seem weird to anyone with moderately skeptical priors. (This is pa... (read more)
Are there any areas covered by the fund's scope where you'd like to receive more applications?
I’d overall like to see more work that has a solid longtermist justification but isn't as close to existing longtermist work. It seems like the LTFF might be well-placed to encourage this, since we provide funding outside of established orgs. This round, we received many applications from people who weren’t very engaged with the existing longtermist community. While these didn’t end up meeting our bar, some of the projects were fairly novel and good enough to make me excited about funding people like this in general.
There are also lots of particular less-established directions where I’d personally be interested in seeing more work, e.g.:
These are largely a reflection of what I happen to have been thinking about recently and definitely not my fully-endorsed answer to this question-- I’d like to spend time talking to others and coming to more stable conclusions about specific work the LTFF should encourage more of.
These are very much personal takes; I'm not sure whether others on the fund would agree.
Buying extra time for people already doing great work. A lot of high-impact careers pay pretty badly: many academic roles (especially outside the US), some non-profit and think-tank work, etc. There are certainly diminishing returns to money, and I don't want the long-termist community to engage in zero-sum consumption of Veblen goods. But there are also plenty of things that are solid investments in your productivity, like having a comfortable home office, a modern computer, ordering takeaway or having cleaners, enough runway to not have financial insecurity, etc.
Financial needs also vary a fair bit from person to person. I know some people who are productive and happy living off Soylent and working on a laptop on their bed, whereas I'd quickly burn out doing that. Others might have higher needs than me, e.g. if they have financial dependents.
As a general rule, if I'd be happy to fund someone for $Y/year if they were doing this work by themselves, and they're getting paid $X/year by their employer to do this work, I think I should be happy to pay the difference $(Y-X)/year provided the applicant has
What is the LTFF's position on whether we're currently at an extremely influential time for direct work? I saw that there was a recent grant on research into patient philanthropy, but most of the grants seem to be made from the perspective of someone who thinks that we are at "the hinge of history". Is that true?
At least for me the answer is yes: I think the arguments for the hinge of history are pretty compelling, and I have not seen any compelling counterarguments. I think the comments on Will's post (which is the only post I know of arguing against the hinge of history hypothesis) are basically correct and remove almost all the basis I can see for Will's arguments. See also Buck's post on the same topic.
I think this century is likely to be extremely influential, but there's likely important direct work to do at many parts of this century. Both patient philanthropy projects we funded have relevance to that timescale-- I'd like to know about how best to allocate longtermist resources between direct work, investment, and movement-building over the coming years, and I'm interested in how philanthropic institutions might change.
I also think it's worth spending some resources thinking about scenarios where this century isn't extremely influential.
What are you not excited to fund?
Of course there are lots of things we would not want to (or cannot) fund, so I'll focus on things which I would not want to fund, but which someone reading this might have been interested in supporting or applying for.
Organisations or individuals seeking influence, unless they have a clear plan for how to use that influence to improve the long-term future, or I have an exceptionally high level of trust in them
This comes up surprisingly often. A lot of think-tanks and academic centers fall into this trap by default. A major way in which non-profits sustain themselves is by dealing in prestige: universities selling naming rights being a canonical example. It's also pretty easy to justify to oneself: of course you have to make this one sacrifice of your principles, so you can do more good later, etc.
I'm torn on this because gaining leverage can be a good strategy, and indeed it seems hard to see how we'll solve some major problems without individuals or organisations pursuing it. So I wouldn't necessarily discourage people from pursuing this path, though you might want to think hard about whether you'll be able to avoid value drift. But there's a big information asymmetry as a donor...
Like Adam, I’ll focus on things that someone reading this might be interested in supporting or applying for. I want to emphasize that this is my personal take, not representing the whole fund, and I would be sad if this response stopped anyone from applying -- there’s a lot of healthy disagreement within the fund, and we fund lots of things where at least one person thinks it’s below our bar. I also think a well-justified application could definitely change my mind.
- Improving science or technology, unless there’s a strong case that the improvement would differentially benefit existential risk mitigation (or some other aspect of our long-term trajectory). As Ben Todd explains here, I think this is unlikely to be as highly-leveraged for improving the long-term future as trajectory changing efforts. I don’t think there’s a strong case that generally speeding up economic growth is an effective existential risk intervention.
- Climate change mitigation. From the evidence I’ve seen, I think climate change is unlikely to be either directly existentially threatening or a particularly highly-leveraged existential risk factor. (It’s also not very neglected.) But I could be excited about funding... (read more)
What are you excited to fund?
A related question: are there categories of things you'd be excited to fund, but haven't received any applications for so far?
I think the long-termist and EA communities seem too narrow on several important dimensions:
Methodologically there are several relevant approaches that seem poorly represented in the community. A concrete example would be having more people with a history background, which seems critical for understanding long-term trends. In general I think we could do better interfacing with the social sciences and other intellectual movements.
I do think there are challenges here. Most fields are not designed to answer long-term questions. For example, history is often taught by focusing on particular periods, whereas we are more interested in trends that persist across many periods. So the first people joining from a particular field are going to need to figure out how to adapt their methodology to the unique demands of long-termism.
There are also risks from spreading ourselves too thin. It's important we maintain a coherent community that's able to communicate with each other. Having too many different methodologies and epistemic norms could make this hard. Eventually I think we're going to need to specialize: I expect different fields will benefit from different norms and heuristics. But right...
I've already covered in this answer areas where we don't make many grants but I would be excited about us making more grants. So in this answer I'll focus on areas where we already commonly make grants, but would still like to scale this up further.
I'm generally excited to fund researchers when they have a good track record, are focusing on important problems, and are doing research that is likely to slip through the cracks of other funders or research groups -- for example, distillation-style research, or work that is speculative or doesn't neatly fit into an existing discipline.
Another category, which is a bit harder to define, is grants we have a comparative advantage at evaluating. This could be because one of the fund managers happens to already be an expert in the area and has a lot of context, or because the application is time-sensitive and we're just about to start evaluating a grant round. In these cases the counterfactual impact is higher: these grants are less likely to be made by other donors.
LTF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long term future?
The LTFF chooses grants to make from our open application rounds. Because of this, our grant composition depends a lot on the composition of applications we receive. Although we may of course apply a different bar to applications in different areas, the proportion of grants we make certainly doesn't represent what we think is the ideal split of total EA funding between cause-areas.
In particular, I tend to see more variance in our scores between applications in the same cause-area than I do between cause-areas. This is likely because most of our applications are for speculative or early-stage projects. Given this, if you're reading this and are interested in applying to the LTFF but haven't seen us fund projects in your area before -- don't let that put you off. We're open to funding things in a very broad range of areas provided there's a compelling long-termist case.
Because cause prioritization isn't actually that decision relevant for most of our applications, I haven't thought especially deeply about it. In general, I'd say the fund is comparably excited about marginal work in reducing long-term risks from AI, biosafety, and general longtermist macrostrategy and capacity buildin... (read more)
What are the most common reasons for rejection for applications of the Long-Term Future Fund?
Filtering for obvious misfits, I think the majority reason is that I don't think the project proposal will be sufficiently valuable for the long-term future, even if executed well. The minority reason is that there isn't strong enough evidence that the project will be executed well.
Sorry if this is an unsatisfying answer-- I think our applications are different enough that it’s hard to think of common reasons for rejection that are more granular. Also, often the bottom line is "this seems like it could be good, but isn't as good as other things we want to fund". Here are some more concrete kinds of reasons that I think have come up at least more than once:
- Project seems good for the medium-term future, but not for the long-term future
- Applicant wants to learn the answer to X, but X doesn't seem like an important question to me
- Applicant wants to learn about X via doing Y, but I think Y is not a promising approach for learning about X
- Applicant proposes a solution to some problem, but I think the real bottleneck in the problem lies elsewhere
- Applicant wants to write something for a particular audience, but I don’t think that writing will be received well by that audience
- Project would be... (read more)
Hey! I definitely don't expect people starting AI safety research to have a track record doing AI safety work -- in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don't know the details of your situation, but in general I don't think "former physics student starting AI safety work" fits into the category of "project would be good if executed exceptionally well". In that case, I think most of the value would come from supporting the transition of someone who could potentially be really good, rather than from the object-level work itself.
In the case of other technical Ph.D.s, I generally check whether their work is impressive in the context of their field, whether their academic credentials are impressive, and what their references have to say. I also place a lot of weight on whether their proposal makes sense and shows an understanding of the topic, and on my own impressions of the person after talking to them.
I do want to emphasize that "paying a smart person to test their fit for AI safety" is a really good use of money from my perspective-- if the person turns out to be good, I've in some sense paid for a whole lifetime of high-quality AI safety research. So I think my bar is not as high as it is when evaluating grant proposals for object-level work from people I already know.
Do you think it's possible that, by only funding individuals/organisations that actually apply for funding, you are missing out on even better funding opportunities for individuals or organisations that didn't apply for some reason?
If yes, one possible remedy might be putting more effort into advertising the fund so that you get more applications. Alternatively, you could just decide that you won't be limited by the applications you receive and that you can give money to individuals/organisations who don't actually apply for funding (but could still use it well). What do you think about these options?
A common case is people who are just shy to apply for funding. I think a lot of people feel awkward about asking for money. This makes sense in some contexts - asking your friends for cash could have negative consequences! And I think EAs often put additional pressure on themselves: "Am I really the best use of this $X?" But of course as a funder we love to see more applications: it's our job to give out money, and the more applications we have, the better grants we can make.
Another case is people (wrongly) assuming they're not good enough. I think a lot of people underestimate their abilities, especially in this community. So I'd encourage people to just apply, even if you don't think you'll get it.
Do you feel that someone who had applied unsuccessfully and then re-applied for a similar project (but perhaps having gathered more evidence) would be more likely, less likely, or equally likely to get funding compared with someone submitting an identical application to that second one, but who had not been rejected before, having chosen not to apply the first time?
It feels easy to get into the mindset of "Once I've done XYZ, my application will be stronger, so I should do those things before applying", and if that's a bad line of reasoning to use (which I suspect it might be), some explicit reassurance might result in more applications.
Do you have any plans to become more risk tolerant?
Without getting too much into details, I disagree with some things you've chosen not to fund, and as an outsider view it as being too unwilling to take risks on projects, especially projects where you don't know the requesters well, and truly pursue a hits-based model. I really like some of the big bets you've taken in the past on, for example, funding people doing independent research who then produce what I consider useful or interesting results, but I'm somewhat hesitant around donating to LTF because I... (read more)
From an internal perspective I'd view the fund as being fairly close to risk-neutral. We hear around twice as many complaints that we're too risk-tolerant as that we're too risk-averse, although of course the people who reach out to us may not be representative of our donors as a whole.
We do explicitly try to be conservative around things with a chance of significant negative impact to avoid the unilateralist's curse. I'd estimate this affects less than 10% of our grant decisions, although the proportion is higher in some areas, such as community building, biosecurity and policy.
It's worth noting that, unless I see a clear case for a grant, I tend to predict a low expected value -- not just a high-risk opportunity. This is because I think most projects aren't going to positively influence the long-term future -- otherwise the biggest risks to our civilization would already be taken care of. Based on that prior, it takes significant evidence to update me in favour of a grant having substantial positive expected value. This produces similar decisions to risk-aversion with a more optimistic prior.
Unfortunately, it's hard to test this prior: we'd need to see how good the grants we didn't make w... (read more)
Can you clarify your models of which kinds of projects could cause net harm? My impression is that there is some thinking that funding many things would be actively harmful, but I don't feel like I have a great picture of the details here.
If there are such models, are there possible structural solutions to identifying particularly scalable endeavors? I'd hope that we could eventually identify opportunities for long-term impact that aren't "find a small set of particularly highly talented researchers", but things more like, "spend X dollars advertising Y in a way that could scale" or "build a sizeable organization of people that don't all need to be top-tier researchers".
Some things I think could actively cause harm:
More broadly, as Adam notes above, I think the movement grows as a function of its initial composition. I think that even if the LTFF had infinite money, this pushes against funding every project where we expect the EV of the object-level wo... (read more)
I agree with the above response, but I would like to add some caveats because I think potential grant applicants may draw the wrong conclusions otherwise:
If you are the kind of person who thinks carefully about these risks, is likely to change your course of action if you get critical feedback, and proactively syncs up with the main people/orgs in your space to ensure you’re not making things worse, I want to encourage you to try risky projects nonetheless, including projects that have a risk of making things worse. Many EAs have made mistakes that caused harm, including myself (I mentioned one of them here), and while it would have been good to avoid them, learning from those mistakes also helped us improve our work.
My perception is that “taking carefully calculated risks” won’t lead to your grant application being rejected (perhaps it would even improve your chances of being funded because it’s hard to find people who can do that well) – but “taking risks without taking good measures to prevent/mitigate them” will.
What crucial considerations and/or key uncertainties do you think the EA LTF fund operates under?
Several comments have mentioned that CEA provides good infrastructure for making tax-deductible grants to individuals, and also that the LTF often makes, and is well suited to make, grants to individual researchers. Would it make sense for either the LTF or CEA to develop some further guidelines on the practicalities of receiving and administering grants, aimed at individuals (or even non-charitable organisations) that are not familiar with this sort of income, to help funds get used effectively?
As a motivating example, when I recently received an L... (read more)
What would you like to fund, but can't because of organisational constraints? (e.g. investing in private companies is IIRC forbidden for charities).
What do you think is a reasonable amount of time to spend on an application to the LTFF?
What percentage of people who are applying for a transition grant from something else to AI Safety, get approved? Anything you want to add to put this number in context?
What percentage of people who are applying for funding for independent AI Safety research, get approved? Anything you want to add to put this number in context?
For example, if there is a clear category of people who don't get funding because they clearly want to do something different from saving the long-term future, then this would be useful contextual information.