This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that leads people to alternatives like mutual aid and political activism.
My go-to reaction to this critique has become something like “well you don’t need to prioritize vast abstract future generations to care about pandemics or nuclear war, those are very real things that could, with non-trivial probability, face us in our lifetimes.” I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It’s a very good rebuttal to the “cold and heartless utilitarianism/pascal's mugging” critique.
But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book about living every human life in sequential order reminded me of this. I wish there were more people responding to the “longtermism is cold and heartless” critique by making the case that no, longtermism at face value is worth preserving because it's the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.
(I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That’s not out of the question. I just don’t think it’s the only way to reach that conclusion.)
I want to slightly push back against this post in two ways:
I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy; I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope sensitivity more than 99% of people that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many normal (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
Longtermists make tradeoffs between other common values and helping vast future populations, tradeoffs that most people disagree with, and without idiosyncratic EA values there is no reason that a caring person should make the same tradeoffs as longtermists. I think the EA value of "doing a lot more good matters a lot more" is really important, but it is still trading off against other values:
Helping people closer to you / in your community: many people think this has inherent value
Beneficentrism: most people think there is inherent value in being directly involved in helping people. Habitat for Humanity is extremely popular among caring and empathic people, and most of them would not think it better to make a larger overall difference by, e.g., subsidizing eyeglasses in Bangladesh.
Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI if you create the same welfare, because they place inherent value on justice. Both longtermists and GiveWell would consider the two similarly good, modulo secondary consequences and decision theory.
Discount rate, risk aversion, etc.: There is no reason that having a 10% chance of saving 100 lives in 6,000 years is better than a 40% chance of saving 5 lives tomorrow, if you don't already believe in zero-discount expected value as the metric to optimize. The reason to believe in zero-discount expected value is a thought experiment involving the veil of ignorance, or maybe the VNM theorem. It is not caring that is doing the work here, because both can be very caring acts; it is your belief in the thought experiment that connects your caring to the expected value.
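To make the arithmetic behind that last bullet explicit (a quick sketch; the 0.1% discount rate is my own illustrative number, not anything the commenter endorses):

```latex
\[
\text{Zero discount:}\quad
0.10 \times 100 = 10 \;\text{ expected lives}
\;>\;
0.40 \times 5 = 2 \;\text{ expected lives.}
\]
\[
\text{With annual discount rate } r:\quad
0.10 \times 100 \times (1+r)^{-6000}
\;\approx\; 10\, e^{-6} \;\approx\; 0.025 \;<\; 2
\quad\text{for } r = 0.001.
\]
```

Even a tiny discount rate flips the comparison, which is why the zero-discount assumption, rather than the caring itself, is carrying the conclusion.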
In conclusion, I think that while care and empathy can be an important motivator for longtermists, and it is valid for us to think of longtermist actions as the ultimate act of care, we are motivated by a conjunction of empathy/care and other attributes, and it is the other attributes that are by far the more important. For someone who has empathy/care and values beneficentrism and scope-sensitivity, preventing an extinction-level pandemic is an important act of care; for someone like me or a utilitarian, pandemic prevention is also an important act. But for someone who values justice more, applying more care does not make them prioritize pandemic prevention over helping a sex trafficking victim, and in the larger altruistically-inclined population, I think a greater focus on care and empathy conflicts with longtermist values more than it contributes.
[1] More important for me are: feeling a moral obligation to make others' lives better rather than worse, wanting to do my best when it matters, and wanting future glory and social status for producing so much utility.
Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom's Against Empathy book, and how when I read that I thought something like: "oh yeah empathy really isn't the best guide to acting morally," and whether that view contradicts what I was expressing in my quick take above.
I think I probably should have framed the post more as "longtermism need not be totally cold and utilitarian," and that there's an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively put ourselves in their shoes. And that it might even incorporate elements of justice or fairness if we consider them a disenfranchised group without representation in today's decision-making, whom we are potentially throwing under the bus for our own benefit, or something like that. So justice and empathy can easily be folded into longtermist thinking. This sounds like what you are saying here, except maybe I do want to stand by the claim that EA values aren't necessarily trading off against justice, depending on how you define it.
Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.
If we go extinct, they won't exist, so won't be real people or have any valid moral claims. I also consider compassion, by definition, to be concerned with suffering, harms or losses. People who don't come to exist don't experience suffering or harm and have lost nothing. They also don't experience injustice.
Longtermists tend to seem focused on ensuring future moral patients exist, e.g. through extinction risk reduction. But, as above, ensuring moral patients come to exist is not a matter of compassion or justice for those moral patients. Still, they may help (or harm!) other moral patients, including other humans who would exist anyway, animals, aliens or artificial sentience.
On the other hand, longtermism is still compatible with a primary concern for compassion or justice, including through asymmetric person-affecting views and wide person-affecting views (e.g. Thomas, 2019; these would probably focus on s-risks and quality improvements), negative utilitarianism (focus on s-risks) and perhaps even narrow person-affecting views. However, utilitarian versions of most of these views still seem prone, at least in principle, to endorsing killing everyone to replace us and our descendants with better-off individuals, even if each of us and our descendants would have had an apparently good life and would object. I think some (symmetric and perhaps asymmetric) narrow person-affecting views can avoid this, and maybe these are the ones that fit best with compassion and justice. See my post here.
That being said, empathy could mean more than just compassion or justice and could endorse bringing happy people into existence for their own sake, e.g. Carlsmith, 2021. I disagree that we should create people for their own sake, though, and my intuitions are person-affecting.
Other issues people have with longtermism are fanaticism and ambiguity; the probability that any individual averts an existential catastrophe is usually quite low at best (e.g. 1 in a million), and the numbers are also pretty speculative.
Yeah, I meant to convey this in my post but framing it a bit differently — that they are real people with valid moral claims who may exist. I suppose framing it this way is just moving the hypothetical condition elsewhere to emphasize that, if they do exist, they would be real people with real moral claims, and that matters. Maybe that's confusing though.
BTW, my personal views lean towards a suffering-focused ethics that isn't seeking to create happy people for their own sake. But I still think that, in coming to that view, I'm concerned with the experience of those hypothetical people in the fuzzy, caring way that utilitarians are charged with disregarding. That's my main point here. But maybe I just get off the crazy train at my unique stop. I wouldn't consider tiling the universe with hedonium to be the ultimate act of care/justice, but I suppose someone could feel that way, and thereby make an argument along the same lines.
Agreed there are other issues with longtermism — just wanted to respond to the "it's not about care or empathy" critique.
Being mindful of the incentives created by pressure campaigns
I've spent the past few months trying to think about the whys and hows of large-scale public pressure campaigns (especially those targeting companies — of the sort that have been successful in animal advocacy).
A high-level view of these campaigns is that they use public awareness and corporate reputation as a lever to adjust corporate incentives. But making sure that you are adjusting the right incentives is more challenging than it seems. Ironically, I think this is closely connected to specification gaming: it's often easy to accidentally incentivize companies to do more to look better, rather than doing more to be better.
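To gesture at the specification-gaming analogy with a toy example (entirely made-up numbers and a cartoon "company"; a sketch of the proxy-optimization failure mode, not a claim about any real actor):

```python
# Toy model of the incentive problem: the campaign can only reward what it
# observes (statements, published policies), not the underlying safety work.
# A company optimizing that proxy ends up looking safe rather than being safe.

def campaign_reward(pr_effort: float, safety_effort: float) -> float:
    # Only the visible, PR-ish signal enters the proxy reward.
    return 2.0 * pr_effort

def true_safety(pr_effort: float, safety_effort: float) -> float:
    # The quantity we actually care about depends on real safety work.
    return 1.0 * safety_effort

# The company splits 10 units of effort so as to maximize the proxy reward.
best_split = max(
    ((pr, 10 - pr) for pr in range(11)),
    key=lambda split: campaign_reward(*split),
)
print(best_split)                # (10, 0): every unit of effort goes to optics
print(true_safety(*best_split))  # 0.0: the objective we cared about is ignored
```

The failure is in what the reward can see, not in the optimizer: if a campaign's praise and blame track only visible signals, then visible signals are what get produced.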
For example, an AI-focused campaign calling out RSPs recently began running ads that single out AI labs for speaking openly about existential risk (quoting leaders acknowledging that things could go catastrophically wrong). I can see why this is a "juicy" lever — most of the public would be pretty astonished/outraged to learn some of the beliefs that are held by AI researchers. But I'm not sure if pulling this lever is really incentivizing the right thing.
As far as I can tell, AI leaders speaking openly about existential risk is good. It won't solve anything in and of itself, but it's a start — it encourages legislators and the public to take the issue seriously. In general, I think it's worth praising this when it happens. I think the same is true of implementing safety policies like RSPs, whether or not such policies are sufficient in and of themselves.
If these things are used as ammunition to try to squeeze out stronger concessions, it might just incentivize the company to stop doing the good-but-inadequate thing (i.e. CEOs are less inclined to speak about the dangers of their product when it will be used as a soundbite in a campaign, and labs are probably less inclined to release good-but-inadequate safety policies when doing so creates more public backlash than they were facing before releasing the policy). It also risks directing public and legislative scrutiny to actors who actually do things like speak openly about (or simply believe in) existential risks, as opposed to those who don't.
So, what do you do when companies are making progress, but not enough? I'm not sure, but it seems like a careful balance of carrots and sticks.
For example, animal welfare campaigns are full of press releases like this: Mercy for Animals "commends" Popeyes for making a commitment to broiler welfare reforms. Spoiler alert: it probably wasn't written by someone who thought that Popeyes had totally absolved themselves of animal abuse with a single commitment; rather, it served as a strategic signal to the company and to their competitors (basically, "If you lead relative to your competitors on animal welfare, we'll give you carrots. If you don't, we'll give you the stick."). If they had reacted by demanding more (which in my heart I may feel is appropriate), it would have sent a very different message: "We'll punish you even if you make progress." Even when it's justified [1], the incentives it creates can leave everybody worse off.
There are lots of other ways that I think campaigns can warp incentives in the wrong ways, but this one feels topical.
[1] Popeyes probably still does, in fact, have animal abuse in its supply chain.
So I'm sympathetic to this perspective, but I want to add a different perspective on this point:
an AI-focused campaign calling out RSPs recently began running ads that single out AI labs for speaking openly about existential risk (quoting leaders acknowledging that things could go catastrophically wrong). I can see why this is a "juicy" lever — most of the public would be pretty astonished/outraged to learn some of the beliefs that are held by AI researchers.
I don't think they view this as a 'juicy' lever; it might just be the right lever (from their PoV).
If some of these leaders/labs think that there is a non-trivial chance that AGI could cause an existential catastrophe in the near term (let's say 10%+ within ~10-20 years), then I think 'letting the public know' has very strong normative and pragmatic support. The astonishment and outrage would rightfully come from the instinctive response of 'wait, if you believe this, then why the hell are you working on it at all?'
So I guess it's not just the beliefs the public would find astonishing, but the seeming dissonance between beliefs and actions - and I think that's a fair response.
I think just letting the public know about AI lab leaders’ p(doom)s makes sense - in fact, I think most AI researchers are on board with that too (they wouldn’t say these things on podcasts or live on stage if not).
It seems to me this campaign isn’t just meant to raise awareness of X-risk though — it’s meant to punish a particular AI lab for releasing what they see as an inadequate safety policy, and to generate public/legislative opposition to that policy.
I think the public should know about X-risk, but I worry that using soundbites of it to generate reputational harm and counter labs’ safety agendas might make it less likely they speak about it in the future. It’s kind of like a repeated game: if the behavior you want in the coming years is safety-oriented, you should cooperate when your opponent exhibits that behavior. Only when they don’t should you defect.
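For what it's worth, here is the repeated-game intuition as a minimal sketch (the payoff numbers and the "reactive lab" strategy are invented for illustration; this is just standard tit-for-tat reasoning, not a model of any actual lab):

```python
# Minimal iterated game: a campaigner playing tit-for-tat against a lab.
# "C" (cooperate) = praise / engage constructively (campaigner), or keep
# publishing safety policies and talking about risk (lab).
# "D" (defect) = attack with soundbites (campaigner), or go quiet / drop
# the policy (lab). Payoff numbers are arbitrary illustrative values.

PAYOFFS = {  # (campaigner_move, lab_move) -> (campaigner_payoff, lab_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    # Cooperate first, then mirror the lab's previous move.
    return "C" if not history else history[-1][1]

def reactive_lab(history):
    # A lab that keeps engaging only while it isn't being punished for it.
    return "C" if not history or history[-1][0] == "C" else "D"

def play(campaigner, lab, rounds=10):
    history, totals = [], [0, 0]
    for _ in range(rounds):
        moves = (campaigner(history), lab(history))
        history.append(moves)
        for i, payoff in enumerate(PAYOFFS[moves]):
            totals[i] += payoff
    return totals

print(play(tit_for_tat, reactive_lab))    # [30, 30] -- cooperation is sustained
print(play(lambda h: "D", reactive_lab))  # [14, 9]  -- the lab stops engaging
```

The point of tit-for-tat here is simply that punishment is reserved for defection; punishing cooperation teaches the other player that cooperating doesn't pay.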
So for clarity I'm much closer to your position than the ctrl.ai position, and very much agree with your concerns.
But I think, from their perspective, the major AI labs are already defecting by scaling up models that are inherently unsafe despite knowing that this has a significant chance of wiping out humanity (my understanding of ctrl.ai, not my own opinion[1])
I'm going to write a response to Connor's main post and link to it here that might help explain where their perspective is coming from (based on my own interpretation) [update: my comment is here, which is my attempt to communicate what the ctrl.ai position is, or at least where their scepticism of RSPs has come from]
[1] fwiw my opinion is here
I would be most interested to hear what seasoned animal rights campaigners think about this, but I'm not sure this take matches with the way social norms have changed in the past.
First, I think it's useful to turn to what evidence we have. Animal rights and climate change campaigners have shown that, somewhat counterintuitively, more extreme, belligerent activism moves the Overton window and actually makes it easier for moderate campaigners. There is a post on the Forum and a talk at EA Nordic about this that I can't find right now.
"So, what do you do when companies are making progress, but not enough? I'm not sure, but it seems like a careful balance of carrots and sticks."
On the basis of what evidence we have, I would lean more towards piling on both more sticks and more carrots. I think the risk of AI lab heads going to ground publicly is close to zero. They don't want to lose the control of the discourse that they have right now. If one goes to ground, others will take over the public sphere anyway.
One slightly more extreme organisation can call out the hypocrisy of AI leaders not talking publicly about their p(doom), while another org can praise them for the speaking out they are doing. Sticks and carrots.
I'm not sure there can ever be "too much pressure" put on that would cause negative outcomes, but I could be wrong; it might help if you can point out a historical example. I think small victories can be followed by even more pressure.
Mercy for Animals would probably be OK with commending Popeyes one day for making progress, then haranguing them again the next day to do even better, but I could be wrong.
As a side note, I feel like we in the EA community might be at primary school level sometimes when discussing advocacy and activism. I would love to hear the take of some expert seasoned activists about where they think AI policy work and advocacy should go.
[This comment is no longer endorsed by its author]
I think the lesson we can draw from climate and animal rights that you mention - the radical flank effect - shows that extreme actions concerning an issue in general might make incremental change more palatable to the public. But I don’t think it shows that extreme action attacking incremental change makes that particular incremental change more likely.
If I had to guess, the analogue to this in the animal activist world would be groups like PETA raising awareness about the “scam” that is cage-free. I don’t think there’s any reason to think this has increased the likelihood of cage-free reforms taking place — in fact, my experience from advocating for cage-free tells me that it just worsened social myths that the reform was meaningless despite evidence showing it reduced total hours spent suffering by nearly 50%.
So, I would like to see an activist ecosystem where there are different groups with different tactics - and some who maybe never offer carrots. But directing the stick to incremental improvements seems to have gone badly in past movements, and I wouldn’t want to see the same mistake made here.
Thanks Tyler, nice job explaining. I think I've changed my mind on the specific case of attacking a small positive incremental change. Like you, I struggle to see how that's helpful. Better to praise the incremental change (or say nothing), then push harder.
Have retracted my previous comment.
I'm heartened as well that you have had experience in animal campaigns.
Some exciting news from the animal welfare world: this morning, in a very ideologically-diverse 5-4 ruling, the US Supreme Court upheld California's Proposition 12, one of the strongest animal welfare laws in the world!
Consider Using a Reading Ruler!
Digital reading rulers are tools that create parallel lines across a page of text, usually tinted a certain color, which scroll along with the text as you read. They were originally designed as a tool to aid comprehension for dyslexic readers, based on what was once a very simple strategy: physically moving a ruler down a page as you read.
There is some recent evidence showing that reading rulers improve speed and comprehension in non-dyslexic readers as well. Also, many reading disabilities probably lie on something of a spectrum, and I suspect it’s possible to have minor challenges with reading that slightly limit speed/comprehension but don’t create enough of a problem to be noticed early in life or qualify for a diagnosis.
Because of this, I suggest most regular readers at least try using one and see what they think. I’ve had the surprising experience that reading has felt much easier to me while using one, so I plan to continue to use reading rulers for large books and articles in the foreseeable future.
There are browser extensions that can offer reading rulers for articles, and the Amazon Kindle app for iOS added reading rulers two years ago. I’d be curious to hear if anyone else has had a positive experience with them.