I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.
With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.
Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.
I also post to LessWrong sometimes.
If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".
AI Safety Support has a list of funding opportunities. I'm pretty sure all of them are already in this post and its comments section, but it's plausible that'll change in the future.
Yeah, the "About sharing information from this report" section attempts to explain this. Also, for what it's worth, I approved all access requests, generally within 24 hours.
That said, FYI I've now switched the folder to being viewable by anyone with the link, rather than requiring people to request access, though we still have the policies in "About sharing information from this report". (This switch was partly because my sense of the risks vs benefits has changed, and partly because we apparently hit the maximum number of people a folder can be individually shared with.)
Description provided to me by one of the organizers:
This is a public platform for AI safety projects where funders can find you. You shop around for donations from donors that already have a high donor score on the platform, and their donations will signal-boost your project so that more donors and funders will see it.
See also An Overview of the AI Safety Funding Situation for indications of some additional non-EA funding opportunities relevant to AI safety (e.g. for people doing PhDs or further academic work).
FYI, if any readers want just a list of funding opportunities, including some that aren't in here, they could check out List of EA funding opportunities.
(But note that that list includes some things not relevant to AI safety, and excludes some funding sources from outside the EA community.)
$20 Million in NSF Grants for Safety Research
After a year of negotiation, the NSF has announced a $20 million request for proposals for empirical AI safety research.
Here is the detailed program description.
The request for proposals is broad, as is common for NSF RfPs. Many safety avenues, such as transparency and anomaly detection, are in scope.
Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.
Yeah, this seems to me like an important question. I see it as one subquestion of the broader, seemingly important, and seemingly neglected questions "What fraction of importance-adjusted AI safety and governance work will be done or heavily boosted by AIs? What's needed to enable that? What are the implications of that?"
I previously had a discussion focused on another subquestion of that, which is what the implications are for government funding programs in particular. I wrote notes from that conversation and will copy them below. (Some of this is also relevant to other questions in this vicinity.)
Thanks for making this!
What do the asterisks before a given resource mean? (E.g. before "Act of Congress: How America’s Essential Institution Works, and How It Doesn’t".) Maybe they indicate that you especially strongly recommend that resource?