Per Andy Jones over at LessWrong:
If you think you could write a substantial pull request for a major machine learning library, then major AI safety labs want to interview you today.
I work for Anthropic, an industrial AI research lab focussed on safety. We are bottlenecked on aligned engineering talent. Specifically engineering talent. While we'd always like more ops folk and more researchers, our safety work is limited by a shortage of great engineers.
I've spoken to several other AI safety research organisations who feel the same.
I'm not sure what you mean by "AI safety labs", but Redwood Research, Anthropic, and the OpenAI safety team have all hired self-taught ML engineers. DeepMind has a reputation for being more focused on credentials. Other AI labs don't do as much research that's clearly focused on AI takeover risk.
I'm currently at DeepMind, and I'm not really sure where this reputation comes from. As far as I can tell, DeepMind would be perfectly happy to hire self-taught ML engineers for the Research Engineer role (though probably not the Research Scientist role; my impression is that this is similar at other orgs). The interview process is focused on evaluating skills, not credentials.
DeepMind does get enough applicants that not everyone makes it to the interview stage, so it's possible that self-taught ML engineers are getting rejected before they have a chance to show they know ML. But presumably Redwood / Anthropic / OpenAI face the same problem, which suggests there is some way for self-taught ML engineers to signal that they are worth interviewing. (As a simple example, if I personally thought someone was worth interviewing, my recommendation would serve as that signal; DeepMind would interview them, and at that point I predict their success would depend primarily on their skills rather than their credentials.)
If there's some signal of "worth interviewing" that DeepMind is failing to pick up on, I'd love to know about it; it's the sort of problem I'd expect DeepMind-the-company to want to fix.