Give me feedback! :)
MATS is now hiring for three roles!
We are generally looking for candidates who:
Please apply via this form and share it with your networks.
TL;DR: MATS could support another 10-15 scholars at $21k/scholar with seven more high-impact mentors (Anthropic, DeepMind, Apollo, CHAI, CAIS).
The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and to connect them with the Berkeley AI safety research community.
MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research coaching, networking opportunities, community support, and logistical support to scholars. MATS supports mentors with logistics, advertising, applicant selection, and complementary scholar support and research management systems, greatly reducing the barriers to research mentorship.
The Winter 2023-24 Program will run Jan 8-Mar 15 in Berkeley, California and feature seminar talks from leading AI safety researchers, workshops on research strategy, and networking events with the Bay Area AI safety community. We currently have funding for ~50 scholars and 23 mentors, but could easily use more.
We are currently funding-constrained and accepting donations. We would love to include up to seven additional interested mentors from Anthropic, Apollo Research, CAIS, Google DeepMind, UC Berkeley CHAI, and elsewhere, along with 10-15 additional scholars at $21k/scholar.
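For a rough sense of the total ask implied by these figures (a back-of-the-envelope sketch, assuming $21k is the all-in per-scholar cost and that additional mentors add no separate per-scholar cost):

$$
10 \times \$21\text{k} = \$210\text{k} \qquad\text{to}\qquad 15 \times \$21\text{k} = \$315\text{k}
$$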
Buck Shlegeris, Ethan Perez, Evan Hubinger, and Owain Evans are mentoring in both programs. The links show their MATS projects, "personal fit" for applicants, and (where applicable) applicant selection questions, designed to mimic the research experience.
Astra seems like an obviously better choice for applicants principally interested in:
MATS has the following features that might be worth considering:
Speaking on behalf of MATS, we offered support to the following AI governance/strategy mentors in Summer 2023: Alex Gray, Daniel Kokotajlo, Jack Clark, Jesse Clifton, Lennart Heim, Richard Ngo, and Yonadav Shavit. Of these people, only Daniel and Jesse decided to be included in our program. After reviewing the applicant pool, Jesse took on three scholars and Daniel took on zero.
I think that one's level of risk aversion in grantmaking should depend on the upside and downside risks of grantees' action space. I see a potentially high upside to AI safety standards or compute governance projects that are specific, achievable, and verifiable, and that are rigorously determined by AI safety and policy experts. I see a potentially high downside to low-context, high-bandwidth efforts to slow down AI development that are unspecific, unachievable, or unverifiable and that generate controversy or opposition that could negatively affect later, better efforts.
One might say, "If the default is pretty bad, surely there are more ways to improve the world than harm it, and we should fund a broad swathe of projects!" I think that the current projects to determine specific, achievable, and verifiable safety standards and compute governance levers are actually on track to be quite good, and we have a lot to lose through high-bandwidth, low-context campaigns.
Thanks for publishing this, Arb! I have some thoughts, mostly pertaining to MATS:
Why do we emphasize acceleration over conversion? Because we think that producing a researcher takes a long time (with a high drop-out rate), often requires apprenticeship (including illegible knowledge transfer) with a scarce group of mentors (with a high barrier to entry), and benefits substantially from factors such as community support and curriculum. Additionally, MATS' acceptance rate is ~15%, and many rejected applicants are very proficient researchers or engineers, including some with AI safety research experience, who can't find better options (e.g., independent research is worse for them). MATS scholars with prior AI safety research experience generally believe the program was significantly better than their counterfactual options, or that it was critical for finding collaborators or co-founders (alumni impact analysis forthcoming). So, the appropriate counterfactual for MATS and similar programs seems to be, "Junior researchers apply for funding and move to a research hub, hoping that a mentor responds to their emails, while orgs still struggle to scale even with extra cash."