Effective Altruism Georgetown is thrilled to present our "Intro to AI/ML Fundamentals" Reading Group! I am very excited to teach this curriculum over the next 5 weeks alongside Sazan Khalid (GU '25).

In sharing this with the broader EA community, Sazan and I hope to receive feedback and recommendations, and, if it's helpful, to provide a format for others to adapt or use.

For the finalized version of our curriculum, see the attached link. Furthermore, please see the cited EAs for further references on the materials that made our curriculum possible.


Contacts: dhw34@georgetown.edu; sk2153@georgetown.edu

#EffectiveAltruism #ArtificialIntelligence #MachineLearning #GeorgetownUniversity

Comments (3)

To the extent that the program is meant to provide an introduction to "catastrophic and existential risk reduction in the context of AI/ML", I think it should include some more readings on the alignment problem, existential risk from misaligned AI, transformative AI or superintelligence. I think Mauricio Baker's AI Governance Program has some good readings for this.

Thanks for your comment. Do you think it's worth introducing x-risk (and related areas) in this context? I ask because we envision this reading group as a lead-in to an intro fellowship or other avenues of early-stage involvement. Given this, we want to balance the materials we introduce against limited time, while also making people curious about the ideas discussed in the EA space.

My experience with EA at Georgia Tech is that a relatively small proportion of people who complete our intro program participate in follow-up programs, so I think it's valuable to have content you think is important in your initial program instead of hoping that they'll learn it in a later program. I think plenty of Georgetown students would be interested in signing up for an AI policy/governance program, even if it includes lots of x-risk content.
