Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups.
BSc. Biomedical Engineering > community building gap year on an Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but for my own career increasingly focusing on AI governance and field building.
How does one robustly set oneself up during one's studies and early career to contribute meaningfully to making transformative AI go well?
How can we increase the number of people worldwide working on the most pressing problems?
Community building and setting up new (university) groups.
How many safety-focused people have now left OpenAI since the board drama? I count seven, though I may be missing some: Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Cullen O'Keefe, Pavel Izmailov, and William Saunders.
This is a big deal. Many of the voices that could raise safety concerns at OpenAI when things really heat up are now gone. I don't know what happened behind the scenes, but they evidently judged this a good time to leave.
A possibly effective intervention: guaranteeing that if these people break their NDAs, all their legal fees will be covered. I have no idea how sensible this is, so agree/disagree voting is encouraged.
Interesting post. I've always wondered how sensitive the views and efforts of the EA community are to the arbitrary historical process that led to its creation and development. Are there any in-depth explorations that try to answer this question?
Or, since thinking about alternative history can only get us so far, are there any historical examples of EA-adjacent philosophies or movements? Mohism, a Chinese philosophy from around 400 BC, for example, sounds like a surprisingly close match in some ways.
Not really an answer to your questions, but I think this guide to SB 1047 gives a good overview of some related aspects.