Part of this long but highly interesting blog series stood out to me
What the heck happened here? Why such a big difference? Was it:
1. His spending was not high at the time the podcast was recorded.
2. It was high, but 80k/EA didn't know about it.
3. It was high, and 80k/EA did know, but it was introduced like this anyway.
Does anyone have a sense of what happened, or a link if this was discussed elsewhere?
Introduction
In this post, I share some thoughts from this weekend about the scale of farmed animal suffering, compared to the expected lives lost from engineered pandemics. I make the case that animal welfare as a cause has a 100x higher scale than biorisk. I'd happily turn this into a full post if you have more you'd like to add, either for or against.
Scale Comparisons
Farmed Animal Suffering. I was thinking about the scale of farmed animal suffering, which is on the order of 10^11 lives per year counting only land animals. These animals endure what might be among the worst conditions on the planet. My estimate for the moral weight of the average land animal is approximately 0.1% to 1% that of a human. At first glance, this suggests that farmed animal suffering is equivalent to the annual slaughter of between 100 million and 1 billion humans, without even considering the quality of their lives before death. I want to make the case that the scale of this could be 100x or 1000x that of engineered pandemics.
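A minimal sketch of that arithmetic (the 10^11 figure and the 0.1% to 1% moral-weight range are the assumptions stated above, not settled numbers):

```python
# Back-of-the-envelope: convert farmed land-animal lives into human-equivalent lives
animal_lives_per_year = 1e11          # order-of-magnitude estimate for land animals farmed per year
moral_weight_range = (0.001, 0.01)    # assumed moral weight of the average land animal vs. a human

human_equivalents = [w * animal_lives_per_year for w in moral_weight_range]
print(human_equivalents)  # roughly 1e8 to 1e9, i.e. 100 million to 1 billion human-equivalent lives per year
```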
Engineered Pandemics. In The Precipice, Toby Ord lists engineered pandemics as posing a 1 in 30 extinction risk this century. Since The Precipice was published in 2020, this equates to a 1 in 30 chance over 80 years, or approximately a 1 in 2,360 risk of extinction from engineered pandemics in any given year. If that happens, 10^10 human lives would be lost, resulting in an expected loss of approximately four million human lives per year.
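A quick check of that arithmetic. I'm assuming the 1-in-30 century risk is annualised geometrically over the ~80 remaining years of the century, which recovers the ~1 in 2,360 figure, and an extinction toll of 10^10 lives:

```python
# Annualise a 1-in-30 per-century extinction risk and compute expected lives lost per year
century_risk = 1 / 30
years_remaining = 80                                            # 2020 to 2100
annual_risk = 1 - (1 - century_risk) ** (1 / years_remaining)   # ~1 in 2,360
expected_deaths_per_year = 1e10 * annual_risk                   # ~4.2 million human lives per year
print(round(1 / annual_risk), round(expected_deaths_per_year))
```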
Reasons I might be wrong
Tractability & Neglectedness. If engineered pandemic preparedness is two orders of magnitude higher in neglectedness and/or tractability, that would outweigh the scale and make them tractable. I'd be happy to hear someone more knowledgeable give some comparisons here.
Extinction is Terrible. Human extinction might not equate to just 10^10 lives lost, due to future lives lost. Further, The Precipice only discusses extinction-level pandemics, but as suggested by Rodriguez here, one in 100
Mildly against the Longtermism --> GCR shift
Epistemic status: Pretty uncertain, somewhat rambly
TL;DR: replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics
Over the last ~6 months I've noticed a general shift amongst EA orgs towards framing their work on reducing risks from AI, bio, nukes, etc. less in terms of the logic of longtermism, and more in terms of Global Catastrophic Risks (GCRs) directly. Some data points on this:
* Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
* This post from Claire Zabel (OP)
* Giving What We Can's new Cause Area Fund being named "Risk and Resilience," with the goal of "Reducing Global Catastrophic Risks"
* Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
* Anecdotal data from conversations with people working on GCRs / X-risk / Longtermist causes
My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no-one wants a GCR, even if only thinking about people alive today.
Yet, I can't help but feel something is off about this framing. Some concerns (in no particular order):
1. From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case, but not the latter. Just looking at GCRs on their own mostly misses this nuance.
* (see Parfit's Reasons and Persons for the full thought experiment)
2. From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential"
* Preventing an extinction-level GCR might move u
AI Safety Needs To Get Serious About Chinese Political Culture
I worry that Leopold Aschenbrenner's "China will use AI to install a global dystopia" take is based on crudely analogising the CCP to the USSR, or perhaps even to American cultural imperialism / expansionism, and isn't based on an even superficially informed analysis of either how China is currently actually thinking about AI, or what China's long-term political goals or values are.
I'm no expert myself either, but my impression is that China is much more interested in its own national security and its own ideological notions of the ethnic Chinese people and Chinese territory; beyond e.g. Taiwan, there isn't an interest in global domination except to the extent that it prevents China being threatened by other expansionist powers.
This, or a number of other heuristics / judgements / perspectives, could substantially change how we think about whether China would race for AGI, and/or be receptive to an argument that AGI development is dangerous and should be suppressed. China clearly has a lot to gain from harnessing AGI, but it has a lot to lose too, just like the West.
Currently, this is a pretty superficial impression of mine, so I don't think it would be fair to write an article yet. I need to do my homework first:
* I need to actually read Leopold's own writing about this, instead of forming impressions from summaries of it,
* I've been recommended to look into what CSET and Brian Tse have written about China,
* Perhaps there are other things I should read or hear about this; feel free to make recommendations.
Alternatively, as always, I'd be really happy for someone who's already done the homework to write about this, particularly anyone specifically with expertise in Chinese political culture or international relations. Even if I write the article, all it'll really be able to be is an appeal to listen to experts in the field, or for one or more of those experts to step forwar
OpenAI have their first military partner in Anduril. Make no mistake—although these are defensive applications today, this is a clear softening, as their previous ToS banned all military applications. Ominous.
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people," alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual aid and political activism.
My go-to reaction to this critique has become something like "well you don't need to prioritize vast abstract future generations to care about pandemics or nuclear war, those are very real things that could, with non-trivial probability, face us in our lifetimes." I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It's a very good rebuttal to the "cold and heartless utilitarianism/Pascal's mugging" critique.
But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book about living every human life in sequential order reminded me of this. I wish there were more people responding to the “longtermism is cold and heartless” critique by making the case that no, longtermism at face value is worth preserving because it's the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors
We should expect the incentives and culture of AI-focused companies to make them uniquely terrible for producing safe AGI.
From a “safety from catastrophic risk” perspective, I suspect an “AI-focused company” (e.g. Anthropic, OpenAI, Mistral) is abstractly pretty close to the worst possible organizational structure for getting us towards AGI. I have two distinct but related reasons:
1. Incentives
2. Culture
From an incentives perspective, consider realistic alternative organizational structures to "AI-focused company" that nonetheless have enough firepower to host successful multibillion-dollar scientific/engineering projects:
1. As part of an intergovernmental effort (e.g. CERN’s Large Hadron Collider, the ISS)
2. As part of a governmental effort of a single country (e.g. Apollo Program, Manhattan Project, China’s Tiangong)
3. As part of a larger company (e.g. Google DeepMind, Meta AI)
In each of those cases, I claim that there are stronger (though still not ideal) organizational incentives to slow down, pause/stop, or roll back deployment if there is sufficient evidence or reason to believe that further development can result in major catastrophe. In contrast, an AI-focused company has every incentive to go ahead on AI when the case for pausing is uncertain, and minimal incentive to stop or even take things slowly.
From a culture perspective, I claim that without knowing any details of the specific companies, you should expect AI-focused companies to be more likely than the plausible alternatives above to have the following cultural elements:
1. Ideological AGI Vision: AI-focused companies may have a large contingent of "true believers" who are ideologically motivated to make AGI at all costs and
2. No Pre-existing Safety Culture: AI-focused companies may have minimal or no strong "safety" culture where people deeply understand, have experience in, and are motivated by a desire to avoid catastrophic outcomes.
The first one should be self-explanatory. Th
For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some AI capabilities people who believe in existential risk may (perhaps?) be motivated by the thought that the risk of civilisational collapse is high without AI, so it's worth taking the risk of misaligned AI to prevent that outcome.
If this really is cruxy for some people, it's possible this doesn't get noticed because people take it as a background assumption and don't tend to discuss it directly, so they don't realize how much they disagree and how crucial that disagreement is.