DeepMind is now hiring for several Research Scientist positions on the Long-term Strategy and Governance Team, and we are accepting applications here. We are interested in a diverse range of backgrounds and skills, so even if you are unsure whether you have all the relevant qualifications, please consider submitting an application.
If you have questions, please reach out to us: dm-strategy-governance-inbox@google.com. We will reply as we are able.
The team is focused on helping DeepMind and the world prepare for a future with advanced AI. We map out AI's potential risks and opportunities from a long-term, global perspective. We work to envision and build recommendations for better governance of AI, identifying actions, norms, and institutional structures that could improve decision-making around advanced AI. We are building a collaborative team with expertise spanning political science, international relations, technology policy, economics, history, institutional design, ethics, and philosophy, as well as technical AI domains.
Minimum qualifications:
- PhD or equivalent research or practical experience in a relevant field.
- Strong analytical skills.
- The ability to come up to speed quickly on complex political, social, and technical topics.
- Team player with demonstrated ability to collaborate with a diverse range of stakeholders.
Preferred qualifications:
- Familiarity with the history and policy-making processes of major governments relevant to AI, and with the global political economy of AI.
- Talented generalist with a breadth of abilities, and specialist expertise in a relevant field (e.g. political science, international relations, technology policy, economics, history, institutional design, philosophy).
- Excellent communication, writing, and presentation skills.
- Technical AI knowledge.
- A deep passion for AI and the governance of AI.
I lead some of DeepMind's technical AGI safety work, and wanted to add two supporting notes: