Existential risk
Discussions of risks which threaten the destruction of the long-term potential of life

Quick takes

42
2y
2
On Twitter and elsewhere, I've seen a bunch of people argue that AI company execs and academics are only talking about AI existential risk because they want to manufacture concern to increase investment, to distract from near-term risks, or to achieve regulatory capture. This is obviously false.

However, there is a nearby argument that is likely true: incentives drive how people talk about AI risk, as well as which specific regulations or interventions they ask for. This happens both explicitly and unconsciously. It's important (as always) to have extremely solid epistemics and to understand that even apparent allies may have (large) degrees of self-interest and motivated reasoning.

Safety-washing is a significant concern; similar things have happened a bunch in other fields, it has likely already happened a bunch in AI, and it will likely happen again in the months and years to come, especially if/as policymakers and/or the general public become increasingly uneasy about AI.
22
2y
TL;DR: Someone should probably write a grant proposal to produce a spreadsheet/dataset of past instances where people claimed a new technology would lead to societal catastrophe, with variables such as "multiple people working on the tech believed it was dangerous."

Slightly longer TL;DR: Some AI risk skeptics are mocking people who believe AI could threaten humanity's existence, saying that many people in the past predicted doom from some new tech. There is seemingly no dataset that lists and evaluates such past instances of "tech doomers." It seems somewhat ridiculous* to me that nobody has grant-funded a researcher to put together a dataset with variables such as "multiple people working on the technology thought it could be very bad for society."

*Low confidence: could totally change my mind

———

I have asked multiple people in the AI safety space whether they were aware of any kind of "dataset of past predictions of doom (from new technology)." There have been some articles and arguments floating around recently, such as "Tech Panics, Generative AI, and the Need for Regulatory Caution", in which skeptics say we shouldn't worry about AI x-risk because there are many past cases where people made overblown claims that some new technology (e.g., bicycles, electricity) would be disastrous for society. While I think it's right to consider the "outside view" on these kinds of things, I think most of these claims 1) ignore examples where there were legitimate reasons to fear the technology (e.g., nuclear weapons, maybe synthetic biology?), and 2) imply that current worries about AI are about as baseless as claims like "electricity will destroy society," whereas I would argue that the claim "AI x-risk is >1%" stands up quite well against most current scrutiny. (These claims also ignore the anthropic argument/survivorship bias, namely that if past doom predictions had ever been right we wouldn't be around to observe it, but this is less important.) I especially would like to see a
17
2y
I'm thinking about the matching problem of "people with AI safety questions" and "people with AI safety answers". Snoop Dogg hears Geoff Hinton on CNN (or wherever), asks "what the fuck?", and then tries to find someone who can tell him what the fuck.

I think normally people trust their local expertise landscape: if they think the CDC is the authority on masks they adopt the CDC's position, and if they think their mom group on Facebook is the authority on masks they adopt the mom group's position. But AI risk is weird because it's mostly unclaimed territory in their local expertise landscape. (Snoop also asks "is we in a movie right now?" because, for lots of people, movies are basically the only part of the local expertise landscape that has had any opinion on AI so far.) So maybe there's an opportunity here to claim that territory (after all, we've thought about it a lot!).

I think we have some 'top experts' who are available for, like, mass-media things (podcasts, blog posts, etc.) and 1-1 conversations with people they're excited to talk to, but who are otherwise busy / not interested in fielding ten thousand interview requests. Then I think we have tens (hundreds?) of people who are expert enough to field ten thousand interview requests, given that the standard is "better opinions than whoever they would talk to by default" rather than "speaking to the whole world" or whatever.

But just like connecting people who want to pay to learn calculus with people who know calculus and will teach it for money, there are significant gains from trade from having some sort of clearinghouse / place where people can easily meet. Does this already exist? Is anyone trying to make it? (Do you want to make it and need support of some sort?)
13
2y
1
Together with a few volunteers, I prepared a policy document for the Campaign for AI Safety to serve as the campaign's list of demands. It is called "Strong and appropriate regulation of advanced AI to protect humanity". It is currently geared towards Australian and US policy-makers, and I don't think this is its final version. I would appreciate any comments!
12
2y
6
Why aren't we engaging in direct action (including civil disobedience) to pause AI development?

Here's the problem. Yudkowsky: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

Here's one solution. FLI Open Letter: "all AI labs...immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

Here's what direct action in pursuit of that solution could look like (most examples are from the UK climate movement):
- Picketing AI offices (this already seems to be happening!)
- Mass non-disruptive protest
- Strikes/walk-outs (by AI developers/researchers/academics)
- Slow marches
- Roadblocks
- Occupation of AI offices
- Performative vandalism of AI offices
- Performative vandalism of art
- Sabotage of AI computing infrastructure (on the model of ecotage)

Theory of change: In the words of Martin Luther King Jr., activists seek to "create such a crisis and foster such a tension that a community...is forced to confront the issue". Activists create disruption, gain publicity, generate (moral) outrage, and set an agenda; they force people – civil society, companies, governments – to think about an idea they weren't previously thinking about. This in turn can shift the Overton window, enact social change, and lead to political/legislative/policy change – e.g. a government-enforced moratorium on AI development.

Final thoughts: AI-focused direct action on the model of climate activism currently seems extremely neglected and potentially highly effective. As a problem, the threat from AI is plausibly both more important and more tractable than climate change: a government-enforced global moratorium
10
2y
10
I suffer strongly from the following, and I suspect many EAs do too (all numbers are approximations to illustrate my point):
1. I think that AGI is coming within the next 50 years, with 90% probability, with medium confidence.
2. I think that there is a ~10% chance that development of AGI leads to catastrophic outcomes for humanity, with very low confidence.
3. I think there is a ~50% chance that development of AGI leads to massive amounts of flourishing for humanity, with very low confidence.
4. Increasing my confidence in points 2 & 3 seems very difficult and time-consuming, as the questions at hand are exceptionally complex, and even identifying personal cruxes will be a challenge.
5. I feel a moral obligation to prevent catastrophes and enable flourishing, where I have the influence to do so.
6. I want to take actions that accurately reflect my values.
7. Given the probabilities above, not taking strong, if not radical, action to try to influence the outcomes feels like a failure to embody my values, and a moral failure.

I'm still figuring out what to do about this. When you're highly uncertain it's obviously fine to hedge against being wrong, but again, given the numbers, it's hard to justify hedging all the way down to inaction. I am trying to learn more about AI safety, but I'm not spending very much time on it currently. I'm trying to talk to others about it, but I'm not evangelising it, nor necessarily speaking with a great sense of urgency. At the moment, it's low down my de facto priority list, even though I think there's a significant chance it changes everything I know and care about. Is part of this a lack of visceral connection to the risks and rewards? What can I do to feel like my values are in line with my actions?
11
2y
18
Who else thinks we should be aiming for a global moratorium on AGI research at this point? I'm considering ending every comment I make with "AGI research cessandum est", or "Furthermore, AGI research must be stopped".