Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.


Comments

Answer by calebp

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees, though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from the subject lines of previous EA Funds emails),
  • the reason the request is urgent, and
  • the latest decision and payout dates that would work for you, such that if we can't meet these dates there is little reason to make the grant.

You can also apply to one of Open Philanthropy's programs; in particular, their program for grantees affected by the collapse of the FTX Future Fund may be relevant to people applying to EA Funds because of the FTX crash.

The main difference in actions so far is that the ARM Fund has focussed on active grantmaking (e.g. in AI x information security fieldbuilding), whereas the LTFF has a more democratic and passive grantmaking focus. I also don't think the ARM Fund has reached product-market fit yet; it has done a few things reasonably well, but I don't think it has a scalable product (unless we decide to do a lot more active grantmaking, though so far that has been more opportunistic).

This fund was spun out of the Long-Term Future Fund (LTFF), which makes grants aiming to reduce existential risk. Over the last five years, the LTFF has made hundreds of grants, specifically in AI risk mitigation, totalling over $20 million. Our team includes AI safety researchers, expert forecasters, policy researchers, and experienced grantmakers. We are advised by staff from frontier labs, AI safety nonprofits, leading think tanks, and others.

More recently, the ARM Fund has been doing active grantmaking in AI safety areas; we'll likely write more about this soon. I expect the funds to become much more differentiated in staff over the next few months (though that's not a commitment). Longer term, I'd like them to be pretty separate entities, but for now they share roughly the same staff.

If you perceive any sort of downside from it, you can always remove it again.

Aren't most of the downsides and upsides of norms hard to reverse (almost by definition)? Maybe you don't think the upside is in getting other people to also participate in using the signal, but my read of the OP is that this is mostly about creating norms.

I've only skimmed this post, but I think I agree with all of the main points. I would prefer it if EA meta orgs that provide benefits to people with money charged for some of their services. I do think the situation is significantly more complicated for orgs that receive substantial institutional funding, so I think the original post applies a bit less to orgs like CEA and more to specific EA groups or small-scale projects (including projects that the EAIF funds).

I suggested to various regional EA groups that they should try to cover some fraction of their costs from members, but there was quite a lot of pushback (e.g. concerns that fundraising would distract them from their main jobs).[1]

The EAIF is most interested in funding projects that shouldn't be funded via regular markets, or that might not be recognised as especially valuable (e.g. many public goods in the non-excludable, non-rivalrous sense).

@Harri Besceli feel free to push back on any of this if it conflicts with your impression of how the EAIF does/should work.

  1. ^

    I still think some version of this is workable, but it's not a priority for the EAIF to figure out right now.

I think focussing on pledges of future income (if you are targeting students) seems great; most students don't have much money and are used to living on a much lower amount than they will earn a few years after graduating (particularly people in engineering, CS, and math).

I'm aware of at least two efforts to run tabletop exercises on AI takeoff with decision makers, so I don't think this is particularly neglected, but I do think it's valuable.

  • Animal welfare (AW) seems clearly more neglected in terms of funding, both inside the effective giving space and on net.
  • There is plenty of room for more funding in the AW space; I would be surprised if $100M couldn't be spent down over the next 10 years at >50% of the cost-effectiveness of the current marginal dollar (on average).
  • Most of my uncertainty comes from some credence that human lives are vastly more important than non-human animal lives, or that global health (GH) interventions could accelerate medium-term growth, which would make GH work much more leveraged.

This is great! Thank you very much for writing this up. I'd be extremely excited for more local groups to self-fund retreats like this. I have seen similar events have large impacts on people's goals/career choices/etc. and they seem pretty viable to do without a huge amount of planning/money.

Answer by calebp

A few things that come to mind that I appreciate in people’s applications:

  • apply to several funders where possible
  • try to point to a concrete plan (even if it’s basic)
  • talk about any tests you’ve done for your plan already (e.g. have you spent some time trying to upskill outside of a grant)
  • talk about why a grant is better than applying to a program/internship/job (or it could be that it’s worse, but you aren’t ready to do those alternatives yet)
  • try to talk about an end-to-end theory of change for your work - this is mostly about showing that you’ve thought about how this project fits into a larger plan and that you’re thinking strategically about your career

To be clear, you don’t need to do any of these things to get funding, but I often find that applications improve after people consider some of these bullet points.
