
MichaelA

Senior Research Manager @ Rethink Priorities; also guest fund manager @ the EA Infrastructure Fund
12344 karma · Working (0-5 years) · Oxford, UK

Bio

I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.

With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.

Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".

Sequences
4

Nuclear risk research project ideas
Moral uncertainty
Risks from Nuclear Weapons
Improving the EA-aligned research pipeline

Comments
2486

Topic contributions
793

Thanks for making this!

What do the asterisks before a given resource mean? (E.g. before "Act of Congress: How America’s Essential Institution Works, and How It Doesn’t".) Maybe they mean you're especially strongly recommending that? 

AI Safety Support has a list of funding opportunities. I'm pretty sure all of them are already in this post + comments section, but it's plausible that'll change in future.

Yeah, the "About sharing information from this report" section attempts to explain this. Also, for what it's worth, I approved all access requests, generally within 24 hours.

That said, FYI I've now switched to the folder being viewable by anyone with the link, rather than requiring requesting access, though we still have the policies in "About sharing information from this report". (This switch was partly because my sense of the risks vs benefits has changed, and partly because we apparently hit the max number of people who can be individually shared on a folder.)

AI Safety Impact Markets

Description provided to me by one of the organizers: 

This is a public platform for AI safety projects where funders can find you. You shop around for donations from donors that already have a high donor score on the platform, and their donations will signal-boost your project so that more donors and funders will see it. 

See also An Overview of the AI Safety Funding Situation for indications of some additional non-EA funding opportunities relevant to AI safety (e.g. for people doing PhDs or further academic work). 

FYI, if any readers want just a list of funding opportunities and to see some that aren't in here, they could check out List of EA funding opportunities.

(But note that that includes some things not relevant to AI safety, and excludes some funding sources from outside the EA community.)

$20 Million in NSF Grants for Safety Research

After a year of negotiation, the NSF has announced a $20 million request for proposals for empirical AI safety research.

Here is the detailed program description.

The request for proposals is broad, as is common for NSF RfPs. Many safety avenues, such as transparency and anomaly detection, are in scope.

Announcing Manifund Regrants

Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.

Yeah, this seems to me like an important question. I see it as one subquestion of the broader, seemingly important, and seemingly neglected questions "What fraction of importance-adjusted AI safety and governance work will be done or heavily boosted by AIs? What's needed to enable that? What are the implications of that?"

I previously had a discussion focused on another subquestion of that, which is what the implications are for government funding programs in particular. I wrote notes from that conversation and will copy them below. (Some of this is also relevant to other questions in this vicinity.)

"Key takeaways 

  • Maybe in future most technical AI safety work will be done by AIs. 
  • Maybe that has important implications for whether & how to get government funding for technical AI safety work? 
    • E.g., be less enthusiastic about getting government funding for more human AI safety researchers?
    • E.g., be more enthusiastic about laying the groundwork for gov funding for AI assistance for top AI safety researchers later? 
      • Such as by more strongly prioritizing having well-scoped research agendas, or ensuring top AI safety researchers (or their orgs) have enough credibility signals to potentially attract major government funding?
    • This is a subquestion of the broader question “What should we do to prep for a world where most technical AI safety work can be done by AIs?”, which also seems neglected as far as I can tell. 
  • Seems worth someone spending 1-20 hours doing distillation/research/writing on that topic, then sharing that with relevant people.

Additional object-level notes

  • See [v. A] Introduction & summary – Survey on intermediate goals in AI governance for an indication of how excited AI risk folks are about “Increase US and/or UK government spending on AI reliability, robustness, verification, reward learning, interpretability, and explainability”.
  • But there may in future be a huge army of AI safety researchers in the form of AIs, or AI tools/systems that boost AI safety researchers in other ways. What does that imply, esp. for gov funding programs?
    • Reduced importance of funding for AI safety work, since it’ll be less bottlenecked by labor (which is costly) and more by a handful of good scalable ideas?
    • Funding for AI safety work is mostly important for getting top AI safety researchers to have huge compute budgets to run (and train?) all those AI assistants, rather than funding the people themselves or other things?
      • Perhaps this even increases the importance of funding, since we thought it’d be hard to scale the relevant labor via people but it may be easier to scale via lots of compute and hence AI assistance? 
    • Increased importance of particular forms of “well-scoped” research agendas/questions? Or more specifically, focusing now on whatever work it’s hardest to hand off to AIs but that best sets things up for using AIs? 
    • Make the best AI safety researchers, research agendas, and orgs more credible/legible to gov people so that they can absorb lots of funding to support AI assistants?
      • What does that require? 
      • Might mean putting some of the best AI safety researchers in new or existing institutions that look credible? E.g. into academic labs, or merging a few safety projects into one org that we ensure has a great brand? 
    • Start pushing the idea (in EA, to gov people, etc.) that gov should now/soon provide increasing amounts of funding for AI safety via compute support for relevant people?
    • Start pushing the idea that gov should be very choosy about who to support but then support them a lot? Like supporting just a few of the best AI safety researchers/orgs but providing them with a huge compute budget?
      • That’s unusual and seems hard to make happen. Maybe that makes it worth actively laying groundwork for this?

Research proposal

  • I think this seems worth a brief investigation of, then explicitly deciding whether or not to spend more time. 
  • Ideally this’d be done by someone with decent AI technical knowledge and/or gov funding program knowledge.
  • If someone isn’t the ideal fit for working on this but has capacity and interest, they could:
    • spend 1-10 hours
    • aim to point out some somewhat-obvious-once-stated hypotheses, without properly vetting them or fleshing them out
    • lean somewhat on conversations with relevant people or on sharing a rough doc with relevant people to elicit their thoughts
  • Maybe the goals of an initial stab at this would be:
    • Increase the chance that someone who does have strong technical and/or gov knowledge does further thinking on this
    • Increase the chance that relevant technical AI safety people, leaders of technical AI safety orgs, and/or people in government bear this in mind and adjust their behavior in relevant ways"