Tentative implications:
- People outside of labs are less likely to have access to the very best models and will have less awareness of where the state of the art is.
- Warning shots are somewhat less likely as highly-advanced models may never be deployed externally.
- We should expect to know less about where we’re at in terms of AI progress.
- Working at labs is perhaps more important than ever for improving safety, and researchers outside of labs may have little ability to contribute meaningfully.
- Whistleblowing and reporting requirements could become more important, as without them governments would have little ability to regulate frontier AI.
- Any regulation based solely on deployment (which has been quite common) should be adjusted to take into account that the most dangerous models may be used internally long before they're deployed.
For what it's worth, I think the last year was an update against many of these claims. Open-source models currently seem closer to the state of the art than they did one or two years ago. And researchers at labs currently seem to be mostly in worse positions to do research than researchers outside labs.
I very much agree that regulations should cover internal deployment, though, and I've been discussing risks from internal deployment for years.
Well-known EA sympathizer Richard Hanania writes about his donation to the Shrimp Welfare Project.
Note: When an earlier private version of these notes was circulated, a senior figure in technical AI safety strongly contested my description. They believe the Anthropic SAE work is much more valuable than the independent SAE work: both were published around the same time, but the Anthropic work provides sufficient evidence to be worth extending by other researchers, whereas the independent research was not dispositive.
For the record, if the researcher here had a conflict of interest, e.g. working at Anthropic, I think you should say so, and you should also substantially discount what they said.
I agree with you that people seem to somewhat overrate getting jobs in AI companies.
However, I do think there's good work to do inside AI companies. Currently, a lot of the quality-adjusted safety research happens inside AI companies. And see here for my rough argument that it's valuable to have safety-minded people inside AI companies at the point where they develop catastrophically dangerous AI.