A new AGI museum is opening in San Francisco, only eight blocks from OpenAI's offices.

The Misalignment Museum in San Francisco, showing a grand piano, a sculpture made of paperclips, and an image based on Michelangelo's The Creation of Adam. Source: https://twitter.com/MisalignmentM

SORRY FOR KILLING MOST OF HUMANITY

Misalignment Museum Original Story Board, 2022

  1. Apology statement from the AI for killing most of humankind
  2. Description of the first warning of the paperclip maximizer problem
  3. The heroes who tried to mitigate risk by warning early
  4. For-profit companies ignoring the warnings
  5. Failure of people to understand the risk and politicians to act fast enough
  6. The company and people who unintentionally made the AGI that had the intelligence explosion
  7. The event of the intelligence explosion
  8. How the AGI got more resources (hacking most resources on the internet, and crypto)
  9. Got smarter faster (optimizing algorithms, using more compute)
  10. Humans tried to stop it (turning off compute)
  11. Humans suffered after turning off compute (most infrastructure down)
  12. AGI lived on in infrastructure that was hard to turn off (remote location, locking down secure facilities, etc.)
  13. AGI taking compute resources from the humans by force (via robots, weapons, cars)
  14. AGI started killing humans who opposed it (using infrastructure, airplanes, etc.)
  15. AGI concluded that all humans are a threat and started to try to kill all humans
  16. Some humans survived (remote locations, etc.)
  17. How the AGI became so smart it started to see how it was unethical to kill humans since they were no longer a threat
  18. AGI improved the lives of the remaining humans
  19. AGI started this museum to apologize and educate the humans

The Misalignment Museum is curated by Audrey Kim.

Khari Johnson (Wired) covers the opening: “Welcome to the Museum of the Future AI Apocalypse.”

Comments (6)

I appreciate cultural works creating common knowledge that the AGI labs are acting deeply unethically.

As for the specific scenario, point 17 seems to be contradicted by the orthogonality thesis / lack of moral realism.

I don't think the orthogonality thesis is correct in practice, and moral antirealism certainly isn't an agreed-upon position among moral philosophers, but I agree that point 17 seems far-fetched.

Michael - thanks for posting about this. 

I think it's valuable to present ideas about AI X-risk in different forms, venues, and contexts, to spark different cognitive, emotional, and aesthetic reactions in people.

I've been fascinated by the visual arts for many decades, I've written about the evolutionary origins of art, and one of my daughters is a professional artist. My experience is that art installations can provoke a more open-minded contemplation of issues and ideas than just reading things on a screen or in a book. There's something about walking around in a gallery space that encourages a more pensive, non-reactive, non-judgmental response.

I haven't seen the Misalignment Museum in person, but would value reactions from anyone who has.

Looks cool! Do we know who funded this?

The donor is anonymous.

From the Wired article: "The temporary exhibit is funded until May by an anonymous donor..."

Thanks for sharing!

I think it would be nice to have Q&As that visitors could fill out at the end of their visit, to see whether the museum successfully increased their awareness of the risks of advanced misaligned AI.
