Current spec is that you should spend at least 25% of each month in SF. We've a slight preference for folks who can be here full-time, but that's easily outweighed by a promising candidate.
It certainly loses us some talent. So far the feeling is that it's worth it for the cultural benefits, but that might change in future. We've definitely noticed that 'similar timezone' accounts for most of the friction with folks working remotely, so that might be the thing to specify rather than explicitly requiring on-site presence.
To provide a contrasting view, I surveyed the backgrounds of Anthropic's technical staff a while ago.
In particular, we had no ML PhDs as of when the survey was done (though we've hired two since!). I think Anthropic is an unusual organisation and our demographics won't generalise well to the broader community, but I do think the result is representative of the ongoing shift towards more empirical work.
The paperwork required to be entered into the lottery is almost trivial - see the step-by-step instructions here. Most orgs will want an immigration lawyer to do it though, because while it's an easy first step, it's an easy first step in a long and difficult process. If an org isn't used to handling H-1B cases, I expect the biggest hang-up will be finding and retaining an immigration lawyer in the first place.
Hand-in-hand with that, Anthropic is hiring, especially for great engineers. And we sponsor visas!
I appreciate the feedback, but the spec is intentionally over-broad rather than over-narrow. I and several other engineers in AI safety have made serious efforts to pin down exactly what 'great software engineering' is, and - for want of a better phrase - have found ourselves missing the forest for the trees. What we're after is a certain level of tacit, hard-to-specify skill and knowledge that we felt was best characterised by the litmus test given above.