Tyler H.

Upvoted; I was considering making almost exactly the same post.

When philosophers[1] react this way to hearing the idea, it suggests a lot more work might have to be done on communication around AI risk.[2]

My giving has been directed by EA for almost a decade now, but I've only become familiar with the community, the Forum, and longtermism in the last year, so it's entirely possible there are already lots of amazing EAs working on this issue. 

That said, my experience was that it was very hard to find any concrete stories of how AGI becomes an existential threat, even when I was specifically looking for them. I read The Precipice last year partly in an effort to find them, and these were my thoughts at the time:

"In the EA community, AI risk is a super normal thing to talk about, and has been at the forefront of people's minds for several years now as potentially the biggest existential risk we will face in the next century. It's a normal part of the conversation, and the pathways through which AGI could threaten humanity are understood, at least on a basic level. 

This is just not at all true for most people. Like, it's easy to imagine how an asteroid impact could end humanity, or a nuclear war, or a pandemic, etc. So there's no need to spend time telling a story of how, for example, a nuclear war might start. But it just isn't easy to see, especially for a normal person new to the idea of existential risk, how AGI actually becomes a threat.

Because of this, most people just dismiss it out of hand. Many think of Terminator, which makes them even more dismissive of the risk. Among all the risks in the book, AI risk stands out as 1) the most difficult to realistically imagine, 2) the one people have probably thought the least about, and 3) the one Ord believes poses the greatest risk (by a wide margin), so it confused me that so little time was spent actually explaining it."

I'd be very interested to hear what work has been done on this issue, because it seems quite important, at least to me. If a growing number of people are introduced to EA by being told it's a group of people who are scared of "robot overlords," that's bad.

And a few people associating EA with "people scared of robot overlords" carries some risk of growing into many people, of course.[3]

  1. ^

    Kate Manne is a philosopher and the author of Down Girl (which I highly recommend), but many of the likes are from other philosophers as well.

  2. ^

    I suppose there's a chance some philosophers would actually be more dismissive of the idea of an AGI takeover than the average person, but the fact that smart people trained in critical thinking reacted so emotionally and dismissively made me stop and think, "Wow, maybe this is an even more dangerous communication/PR problem than I thought."

  3. ^

     My brain jumps to a scenario where John Oliver is introduced to EA in this way, dismisses all the other ideas out of hand, and does a segment that's more about Peter Singer, deworming, and repugnant conclusions than about EA, but is still disastrous for EA.