I currently lead EA Funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer's.
You can give me positive and negative feedback here.
We don't have anything public, but we are conducting some retrospective evaluations that look at upskilling grants in particular, which we expect to publish eventually (though I don't know the timeline for publication right now).
Internally, we have taken some lightweight measures to assess the value of upskilling grants, and we think they are often pretty useful for accelerating people towards impactful work, both in and outside the specific area of their upskilling project - though we hope to have more concrete and shareable data in a few months' time.
I don't think we have much data on the effects of these grants several years out, as we have only been making them for 1-2 years, but I think that people often move into doing impactful work pretty quickly after their upskilling grant anyway.
Thanks for the suggestion. A decent fraction of applicants already outline different budgets for their projects, and we generally feel comfortable adjusting their budgets based on our willingness to pay. At the same time, we want to be mindful of not underfunding projects or leaving grantees with deals they would rather turn down but feel uncomfortable declining due to grantmaker-grantee power imbalances.
"IMHO, as much effort should be spent increasing the productivity of donated fund spending as is spent marketing for increased donations."
I think this is a good point. I estimate that we spend something like 100x more time evaluating grants and prioritising between them (which I see as trying to increase the productivity of donated funds) than fundraising. I expect we should actually spend more time fundraising.
Just fyi, the EA Infrastructure Fund grantmakers all work on the fund part-time and have full-time jobs elsewhere, with some doing 'meta' work and others doing 'object-level' work. You can see who is on the team here.
I am a bit worried about a narrative of "the forecasters think x-risk is low" when I know a bunch of excellent forecasters who have much higher AI x-risk probabilities.
For example, Samotsvety (who afaict have an excellent forecasting track record on domain-relevant questions) gave some estimates here (on Sep 8, 2022):
A few of the headline aggregate forecasts are:
- 25% chance of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe
- 81% chance of Transformative AI (TAI) by 2100, barring pre-TAI catastrophe
- 32% chance of AGI being developed in the next 20 years
I work as a grantmaker and have spent some time trying to improve the LTFF form. I am really only speaking for myself here and not other LTFF grantmakers.
I think this post made a bunch of interesting points, but I am just responding with my quick impressions, mostly where I disagree, as I think that will generate more useful discussion.
Pushes away the most motivated people
I think this is one of the points raised that I find most worrying (if true). I think it would be great to make grants that are useful for x-risk reduction to people who aren't motivated by x-risk but are likely to do useful instrumental work anyway. I feel a bit pessimistic about being able to identify such people in the current LTFF set-up (though it totally does happen), and feel more optimistic about well-scoped "requests for proposals" and "active grantmaking" (where the funder has a pretty narrow vision for the projects they want to fund, and is often approaching grantees proactively or is directly involved in the projects themselves). My best guess is that passive and broad grantmaking (which is the main product of the LTFF) is not the best way of engaging with these people, that we shouldn't optimise this kind of application form for them, and that we should instead invest in 'active' programs.
(I also find it a little surprising that you used community building as an example here. My personal experience is that the majority of productive community building I am aware of has been led by people who were pretty cause-motivated (though I may be less familiar with less cause-motivated CB efforts that the OP is excited about).)
The grand narrative claim
My sense is that most applicants (particularly ones in EA and adjacent communities) do not consider "what impact will my project have on the world?" to create an expectation of some kind of grand narrative. It's plausible that we are strongly selecting against people who are put off by this question but I think this is pretty unlikely (e.g. afaik this hasn't been given as feedback before and the answers I see people give don't generally give off a 'grand narrative vibe'). My best guess is that this is interpreted as something closer to "what are the expected consequences of your project?". Fwiw I do think that people find applying to funders intimidating but I don't think this question is unusually intimidating relative to other 'explain your project' type questions in the form (or your suggestions).
Confusion around the corrupting epistemics point
I didn't quite understand this point. Is the concern that people will believe they won't be funded without making large claims and are then put off applying, or that the question is indicative of the funders being much more receptive to overinflated claims, which results in more projects being run by people with poor epistemics (or something else)?
Edit: I have a lot of sympathy for the take above but I tried to write up my response around why I think lock-ins are pretty plausible.
I’m not sure right now whether the majority of the downside comes from lock-in, but I think that’s what I’m most immediately concerned about.
I assume by singularity you mean an intelligence explosion or extremely rapid economic growth. My default story for how this happens in the current paradigm involves people using AIs within existing institutions (or institutions that look pretty similar to today’s), in markets that look pretty similar to current markets, which (on my view) are unlikely to care about the moral patienthood of AIs, for reasons pretty similar to current market failures.
On the “markets still exist and we do things kind of like how we do now” view - I agree that in principle we’d be better positioned to make progress on problems generally if we had something like PASTA, but I feel like you need to tell a reasonable story for one of
I’m guessing your view is that progress will be highly discontinuous and society will look extremely different post-singularity to how it does now (kind of like going from the pre-agricultural revolution to now, whereas my view is more like the pre-industrial revolution to now).
I’m not really sure where the cruxes are on this view or how to reason about it well, but my high-level argument is that the “god-like AGI which has significant responsibility but still checks in with its operators” will still need to make some trade-offs across various factors, and unless it’s doing some CEV-type thing, outcomes will be fairly dependent on the goals that you give it. It’s not clear to me that the median world leader or CEO gives the AGI goals that concern the AI’s wellbeing (or its subsystems’ wellbeing), even if it’s relatively cheap to evaluate. I am more optimistic about an AGI controlled by a person sampled from a culture that has already set up norms around how to orient to the moral patienthood of AI systems than one that needs to figure it out on the fly. I do feel much better about worlds where some kind of reflection process is overdetermined.
My views here are pretty fuzzy and are often influenced substantially by thought experiments like “If a random tech CEO could effectively control all the world’s scientists, have them run at 10x speed, and had 100 trillion dollars, would factory farming still exist?”, which isn’t a very high epistemic bar to beat. (I also don’t think I’ve articulated my models very well, and I may take another stab at this later on.)
I have some tractability concerns but my understanding is that few people are actually trying to solve the problem right now and when few people are trying it’s pretty hard for me to actually get a sense of how tractable a thing is, so my priors on similarly shaped problems are doing most of the work (which leaves me feeling quite confused).
Some quick thoughts on AI consciousness work, I may write up something more rigorous later.
Normally when people have criticisms of the EA movement they talk about its culture or point at community health concerns.
One aspect of EA that makes me sadder is that there seem to be a few extremely important issues, on an impartial welfarist view, that don’t seem to get much attention at all, despite having been identified at some point by some EAs. I do think that EA has done a decent job of pointing at the most important issues relative to basically every other social movement that I’m aware of, but I’m going to complain about one of its shortcomings anyway.
It looks to me like we could build advanced AI systems in the next few years, and in most worlds we’d have little idea of what’s actually going on inside them. The systems may tell us they are conscious, or say that they don’t like the tasks we tell them to do, but right now we can’t really trust their self-reports. There’ll be a clear economic incentive to ignore self-reports that would create a moral obligation to use the systems in less useful/efficient ways. I expect the number of deployed systems to be very large, and it seems plausible that we lock in the suffering of these systems in a similar way to factory farming. I think there are stronger arguments for the topic’s importance that I won’t dive into right now, but the simplest case is just that the “big if true-ness” of this area seems very high.
My impression is that our wider society and community is not orienting to this topic in a sane way. I don’t remember ever coming across a junior EA seriously considering directing their career to work in this area. 80k has a podcast with Rob Long and a very brief problem profile (which seems kind of reasonable); AI consciousness (iirc) doesn’t feature in EA Virtual Programs or any intro fellowship that I’m aware of; and there haven’t been many (or any?) talks about it at EAG in the last year. I do think that most organisations could turn around and ask “well, what concrete action do you actually want our audience to take?”, and my answers are kind of vague and unsatisfying right now. I think we were at a similar point with alignment a few years ago, and my impression is that it had to be on the community’s mind for a while before we were able to pour substantial resources into it (though the field of alignment feels pretty sub-optimal to me, and I’m interested in working out how to do a better job this time round).
I get that there aren’t shovel-ready directions to push people to work on right now, but insofar as our community and its organisations brand themselves substantially as the groups identifying and prioritising the world’s most pressing problems, it sure does feel to me like more people should have this topic on their minds.
There are some people I know of dedicating some of their resources to making progress in this area, and I am pretty optimistic about them - they seem especially smart and thoughtful.
I don’t want all of EA to jump into this right now. I’m optimistic about having a research agenda in this space that I’m excited about, and maybe even a vague plan about what one might do about all this, by the end of this year - after which I think we’ll be better positioned to do field building. I am excited about people who feel especially well placed moving into this area - in particular, people with some familiarity with both mainstream theories of consciousness and ML research (particularly designing and running empirical experiments). Feel free to reach out to me or apply for funding at the LTFF.
I’m not an expert but I’d be fairly surprised if the Industrial Revolution didn’t do more to lift people in LMICs out of poverty than any known global health intervention even if you think it increased inequality. Would be open to taking bets on concrete claims here if we can operationalise one well.
Hi Markus,
For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).
We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:
You can also apply to one of Open Phil’s programs; Open Philanthropy’s program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people applying to EA Funds due to the FTX crash.