
Last month we held an AI Safety Debate with the UCL EA Society.

I thought I'd share a few thoughts from running the event, both about community building (because I think the event went well) and about AI Safety more broadly. Not all of these thoughts are mine: thank you to Erin and Otto for sharing theirs.

A full YouTube recording is here:

Community-Building Takes

  • Entertainment Value: Around 60 to 70 people attended, roughly 10x our typical attendance. I think this is primarily because a debate is more interesting to watch than a speaker event or a workshop. Perhaps this was already obvious to others, but if you are looking for an event to reach a big audience, entertainment value is important.
  • Disagreeing about AI risk is okay: Beforehand, I was concerned that the event might be overly polarising. The opposite happened: despite disagreements about 'rogue AI' scenarios, the speakers broadly agreed that AI could be transformative for humanity, that misuse risks are serious, and that regulation/evals are important. This may not have happened if the people arguing against x-risk were e/accs.
  • X-Risk sentiment in the audience: At one point in the debate, one participant asked the audience who thought AI was an existential risk. From memory, around two-thirds of students put up their hands. This shouldn't be too surprising, given that the 'public' is worried about x-risk (e.g. here). (Although, obviously, this wasn't a representative sample.)

AI Things

  • AI Ethics folks aren't aware of the common ground: At one point in the debate, the "x-risk is a distraction" argument was brought up. In response, Reuben Adams pointed out that there is potential common ground between "ethics" and "safety" concerns, via evals. This seemed to genuinely surprise the Science and Technology Studies professor (Jack Stilgoe) who was arguing against x-risk. Perhaps this is a result of Twitter echo chambers? Who knows.
  • (Bio) Misuse risks were most convincing to the audience: Based on conversations afterwards, this seemed like a particularly persuasive threat model. I don't think this is particularly novel: I believe bio-terrorism was a prominent theme in the discussion of 'catastrophic risk' at the UK AI Summit last November.

Feel free to reach out if you are a community-builder and you'd like advice on organising a similar event.

Comments

X-Risk sentiment in the audience: At one point in the debate, one participant asked the audience who thought AI was an existential risk. From memory, around two-thirds of students put up their hands.

Do you have a rough sense of how many of these had interacted with AI Safety programming/content from your group? Like, was a substantial part of the audience just members from your group who had heard EA arguments about AIS?

I'd guess fewer than a quarter of the people had engaged with AIS (e.g. read some books/articles). Perhaps a fifth had heard about EA before. Most were interested in AI, though.
