:)
Thanks for the comments!
Speaking from my experience in AI governance: there are some opportunities to work on projects that more experienced people have suggested. At GovAI, we have recently put together a list of ideas we'd like people to work on, and people on the GovAI fellowship program have been given suggested projects.
Overall, yes, I do think there are fewer such opportunities than there seem to be in technical areas. That makes sense to me: for AI governance research projects, the vast majority of junior people don't yet have the skills needed to execute them to a high standard.
Another potential difference is that you don't get do-overs: a more senior person can't later write a paper that follows exactly the same idea but executes it to a much higher standard, because there's more of a requirement that each paper bring original ideas. (Perhaps in technical subjects you can say, e.g., "previous authors have tried to get this method to work but the results weren't great, and we show that it actually works really well".)
Therefore, I don't think the problem is that we have bad norms. The deeper issue is that we need to find ways of accelerating the very slow process of junior researchers learning how to execute research projects to a high standard.
Good question. A few possible strategies:
(1) Make it really easy. Provide accessible software tools so that labs don't have to build everything from scratch.
(2) Sponsor relevant technical research. I'm especially thinking of research falling under "AI security", e.g. how easy is model-stealing, given different forms of access? (See the sketch after this list.)
(3) Have certain labs act as early adopters. They would experiment to find the best setup and set an example for other labs.
(4) More public advocacy in favour of structured access.
(5) Set up a conference track with a specific role for labs sharing large models in a structured way. The expectations for the paper's content would be different: e.g. the authors wouldn't need to have scientifically interesting findings already, but would explain everything included, e.g. "we have model checkpoints corresponding to XYZ different points in the training run". This would be analogous to a paper introducing a new dataset.
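To make the model-stealing question in (2) concrete, here's a minimal sketch (in Python with scikit-learn, which is just my choice for illustration, not any particular lab's setup): an "attacker" with nothing but query access trains a surrogate that mimics a "victim" model. How faithful the copy is depends on what the API returns (labels only vs. probabilities vs. weights), and that dependence on the form of access is exactly what this kind of research would try to quantify.

```python
# Toy sketch of model extraction ("model stealing") via query access.
# Everything here is a stand-in; no real lab API is involved.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# The "victim": a model a lab serves behind a structured-access API.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(X_train, y_train)

# The "attacker": choose query inputs, record the API's answers, and
# train a local surrogate on those input-output pairs.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5000, 20))
labels = victim.predict(queries)  # label-only access; returning
                                  # probabilities would make this easier
surrogate = LogisticRegression(max_iter=1000).fit(queries, labels)

# Fidelity of the stolen copy: how often it agrees with the victim.
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"Surrogate agrees with victim on {agreement:.0%} of held-out inputs")
```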