Vael Gates

1048 karma · Joined

Comments (47)

I didn't give a disagreement vote, but I do disagree that aisafety.training is the "single most useful link to give anyone who wants to join the effort of AI Safety research", simply because there are a lot of different resources out there and "most useful" depends on the audience. I do think it's a useful link, but "most useful" is a high bar to meet!

Not directly relevant to the OP, but another post covering research taste: An Opinionated Guide to ML Research. Also see Rohin Shah's advice about PhD programs (search "Q. What skills will I learn from a PhD?") for some commentary.

Two authors gave me permission to publish their transcripts non-anonymously! Thus:

- Interview with Michael L. Littman

- Interview with David Duvenaud

Whoops, forgot I was the owner. I tried moving those files to the Drive folder, but also had trouble with it, so I'm happy to have them copied instead.

Thanks plex, this sounds great!

Update: Michael Keenan reports it is now fixed!

Thanks for the bug report, checking into it now. 

No, the same set of ~28 authors read all of the readings. 

The order of the readings was indeed specified:

  1. Concise overview (Stuart Russell, Sam Bowman; 30 minutes)
  2. Different styles of thinking about future AI systems (Jacob Steinhardt; 30 minutes)
  3. A more in-depth argument for highly advanced AI being a serious risk (Joe Carlsmith; 30 minutes)
  4. A more detailed description of how deep learning models could become dangerously "misaligned" and why this might be difficult to solve with current ML techniques (Ajeya Cotra; 30 minutes)
  5. An overview of different research directions (Paul Christiano; 30 minutes)
  6. A study of what ML researchers think about these issues (Vael Gates; 45 minutes)
  7. Some common misconceptions (John Schulman; 15 minutes)

Researchers had the option to read transcripts where available; we said that consuming the content in either format (video or transcript) was fine.
