AI Safety Info (aka Stampy), the large community-written interactive FAQ, is launching a paid three-month fellowship for writers. Up to 5 fellows, working collaboratively with each other and with Rob Miles, will distil content from around the alignment ecosystem into answers which summarise key ideas and link out to the relevant resources.
Our theory of change: a single point of access to AI Safety
TL;DR: directing people to the right parts of the alignment ecosystem requires a lot of hard-to-find knowledge. We’re building a place you can throw a wide range of people at and they'll get what they need, which will make onboarding them much easier.
FAQs are usually somewhere on the spectrum between “too long so you can't find your question” and “too short so it doesn't answer your question”. We bypass this trade-off by using a custom interface, with LM-powered semantic search. The page starts off showing some example questions and, as you click to expand them, related questions pop up, creating a "tab explosion in one page" effect. If a reader has a specific question, they can easily search our collection of answers, or request an answer from our editors if there isn't one already written.
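The post doesn't spell out how the semantic search is implemented, so here is a minimal sketch of the general technique, assuming an off-the-shelf sentence-embedding model and a hypothetical handful of FAQ questions; the actual site's model, data, and ranking will differ.

```python
# Minimal sketch of embedding-based semantic search over FAQ questions.
# The model name, sample questions, and scoring below are illustrative
# assumptions, not a description of the actual site's implementation.
from sentence_transformers import SentenceTransformer, util

# Hypothetical sample of canonical questions from the FAQ.
questions = [
    "What is AI alignment?",
    "Why might a superintelligent AI be dangerous?",
    "How can I help with AI safety?",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
question_embeddings = model.encode(questions, convert_to_tensor=True)

def related_questions(query: str, top_k: int = 3):
    """Return the stored questions most semantically similar to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, question_embeddings, top_k=top_k)[0]
    return [(questions[hit["corpus_id"]], float(hit["score"])) for hit in hits]

# Example: related_questions("Is AI actually a risk?") should rank the
# "Why might a superintelligent AI be dangerous?" entry near the top.
```

The same similarity scores can drive both the search box and the related questions that pop up as a reader expands an answer.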
This fills a gap in the outreach and onboarding landscape: a place where a wide range of readers can all be sent the same link and be directed towards the information they need. We aim to cater to people who:
- are totally new to the field
- are unconvinced of the need for alignment research
- are interested and want to learn more, or
- are already on board but don’t know how to help
The project also hosts canonical living documents, like an overview of what each organization is working on, and an index of relevant videos. The goal is to be a central nexus for the growing AGI safety information ecosystem, making it easier to get people to the right place. This should mean more people are onboarded well, save people time, and ultimately result in more progress being made.
Distilling content for the site can also become a great on-ramp for aspiring researchers; writing alongside a community of co-learners provides social encouragement, feedback, and the motivation of producing content which will be read by a large audience.
Apply!
Our application process is:
- Join Rob Miles’s Discord (channel: #editing) and sign up to the team.
- Read the Get Involved and Editor's Hub pages.
- (optional) Join one of our weekly onboarding calls. The first is on Sunday the 18th at 8 pm UK time; future ones will be announced on Discord.
- (optional) Hang out in our virtual office while you work.
- (optional) Help onboard other contributors.
- Try your hand at contributing content (either writing new answers or improving existing ones) while receiving and offering feedback on Discord, from now until March 6th. (Let us know on Discord if you can only block off a shorter time for applying and we will take this into account when selecting people.)
- You can suggest changes to anything, request direct edit access to specific docs you’re authoring, or earn global edit access.
- Ping @reviewer when you're ready to put your answer into on-site review, and ping @feedback for feedback (you can volunteer to be @feedback yourself; this will help your application!).
- Submit an application form showcasing your contributions, update it as you add more, and finalize it by March 6th.
For our first cohort we'll pick up to five people who:
- Contribute consistently high-quality content (questions and answers!)
- Give great feedback to other writers (partially measured by our EigenKarma system: you can give stamp reactions to people for helpful feedback!)
- Help us develop this resource in creative ways (including contributing to the discussions on editor guidelines, organizing co-working sessions, onboarding other applicants, etc.)
The fellows will receive $2,500/month for three months of full-time editing work. Volunteer contributions are always welcome, and will help with applications to any future rounds of this program, which we'll run if this one is successful.
Other ways you can contribute
Spread the word: link your friends to this post, browse our FAQ, and share it when relevant questions come up in conversation or online. We’re not a comprehensive resource yet, but we do already have a good amount of seed content.
The system was built by a team of volunteers in our free time. The core system is working, but there are a handful of features which would improve the experience of using the site, and we’d love to have more actively involved developers join the team. We also have ideas for more ambitious expansions of the project, including a unified feed, integration with an alignment-dataset-finetuned LLM for search over all alignment literature, and eventually a conversational agent.
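The post only names the idea of LLM-assisted search over the alignment literature, so the following is a rough sketch of one common pattern (retrieval-augmented prompting) under assumed choices: the passages, embedding model, and build_prompt helper are all illustrative, and the call to the fine-tuned model itself is left out.

```python
# Rough sketch of retrieval-augmented search over alignment literature.
# The passages, embedding model, and prompt template are illustrative
# assumptions; the fine-tuned LLM call itself is omitted.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Hypothetical pre-chunked passages from the alignment dataset.
passages = [
    "Instrumental convergence: many final goals imply similar subgoals such as self-preservation.",
    "Reward misspecification: the reward function fails to capture what the designers actually want.",
]
passage_embeddings = model.encode(passages, convert_to_tensor=True)

def build_prompt(query: str, top_k: int = 2) -> str:
    """Retrieve the most relevant passages and pack them into a prompt for an LLM."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, passage_embeddings, top_k=top_k)[0]
    context = "\n\n".join(passages[hit["corpus_id"]] for hit in hits)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# The returned prompt would then be sent to whichever alignment-finetuned
# model the team ends up using.
```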
Thanks to Rick Schwall from Saving Humanity from Homo Sapiens for reaching out and funding this program!
Hi, I think you might find this system for organizing information helpful. It distinguishes four different purposes a user might have when looking for information, and the optimal way to serve each of them is a bit different. Like you say in the post, some information can be too dense for people who are new and onboarding, so I believe the information needs to be presented in different ways.
https://documentation.divio.com/structure/