The AGI Safety Fundamentals (AGISF): Alignment Course is designed to introduce the key ideas in AGI safety and alignment, and to provide a space and support for participants to engage with, evaluate, and debate these arguments. Participants will meet others who are excited to help mitigate risks from future AI systems, and explore opportunities for their next steps in the field.
The course is being run by the same team as in previous rounds, now under a new project called BlueDot Impact.
Apply here by 5th January 2023.
Time commitment
The course will run from February to April 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week capstone project.
The time commitment is around 4 hours per week, so participants can engage with the course alongside full-time work or study.
Course structure
Participants work through structured content alongside weekly, facilitated discussion groups, and are grouped according to their ML experience and prior knowledge of AI safety. In these sessions, participants engage in activities and discussions with one another, guided by a facilitator who is knowledgeable about AI safety and can help answer their questions.
The course is followed by a capstone project: an opportunity for participants to synthesise their views on the field, start thinking through how to put these ideas into practice, or begin building the skills and experience that will help them take the next step in their career.
The course content is designed by Richard Ngo (Governance team at OpenAI, previously a research engineer on the AGI safety team at DeepMind). You can read the curriculum content here.
Target audience
We are most excited about applicants who would be in a strong position to pursue technical alignment research in their career, such as professional software engineers and students studying technical subjects (e.g. CS/maths/physics/engineering).
That said, we consider all applicants and expect 25-50% of the cohort to consist of people with a variety of other backgrounds, so we encourage you to apply regardless. This includes community builders who would benefit from a deeper understanding of the concepts in AI alignment.
We will be running another course on AI Governance in early 2023, which we expect to have a different distribution of target participants.
Apply now!
If you would like to be considered for the next round of the courses, starting in February 2023, please apply here by Thursday 5th January 2023. More details can be found here. We will evaluate applications on a rolling basis and aim to let you know the outcome of your application by mid-January 2023.
If you already have experience working on AI alignment and would be keen to join our community of facilitators, please apply to facilitate.
Who is running the course?
AGISF is now being run by BlueDot Impact, a new non-profit project running courses that support participants in developing the knowledge, community and network needed to pursue high-impact careers. BlueDot Impact spun out of Cambridge Effective Altruism and was founded by the team primarily responsible for running previous rounds of AGISF. You can read more in our announcement post here.
We’re really excited about the amount of interest in the courses and think they have great potential to build awesome communities around key issues. As such, we have spent the last few months:
- Working with pedagogy experts to make discussion sessions more engaging
- Formalising our course design process with greater transparency for participants and facilitators
- Building systems that improve participant networking and help create high-value connections
- Collating downstream opportunities for participants to pursue after the courses
- Forming a team that can continue to build, run and improve these courses over the long-term
Applications for our other courses, including the AGISF: Governance Course, will open in early 2023!