I was quite surprised to see that 80k doesn't mention animals in their definition of 'impartial positive impact'.
Their definition: "We define ‘impartial positive impact’ as what helps the most people live better lives in the long term, treating everyone’s interests as equal."
I'm a bit unsettled by this. I hope they actually do assign value to non-human animals. But even if that's the case, failing to mention it would be weird.
In general, I'm concerned that longtermists don't value animals enough. From my experience visiting rationalist/longtermist events & spaces, veganism/vegetarianism is less popular than I would have thought. I consider vegetarianism one of the least costly virtue signals, which is why I would expect most healthy people concerned about animal welfare to be vegetarians.
On their page explaining their definition of positive impact in more depth, footnote 1 clarifies:
"We often say “helping people” here for simplicity and brevity, but we don’t mean just humans — we mean anyone with experience that matters morally — e.g. nonhuman animals that can suffer or feel happiness, even conscious machines if they ever exist."
I think it would be better to make it clearer that animals are included. But it's not the case that they exclude animals from moral consideration.
https://80000hours.org/articles/what-is-social-impact-definition/
That's helpful to know, thanks! I still think the word "people" is quite misleading, since readers rarely associate it with nonhuman animals. I also think there might be an additional reason for not mentioning animals: avoiding alienating people who don't care about animals but who are interested in longtermist causes.
Hi — thanks for raising this issue.
As has been pointed out, the page where we (80k) detail the definition of “social impact” in depth is explicit that we do consider animals to be a part of impartial social impact. It’s not just in a footnote. The body of the article mentions animals and non-human sentient beings several times, including in this paragraph:
>We mean that we strive to treat equal effects on different beings’ welfare as equally morally important, no matter who they are — including people who live far away or in the future. In addition, we think that the interests of many nonhuman animals, and even potentially sentient future digital beings, should be given significant weight, although we’re unsure of the exact amount. Thus, we don’t think social impact is limited to promoting the welfare of any particular group we happen to be partial to (such as people who are alive today, or human beings as a species).
Also note that in the core argument of our article on longtermism, we strove to make clear that we’re not just concerned with future humans, but all morally relevant beings:
We should care about how the lives of future individuals go.
The number of future individuals whose lives matter could be vast.
We have an opportunity to affect how the long-run future goes — whether there may be many flourishing individuals in the future, many suffering individuals in the future, or perhaps no one at all.
But there can be a trade-off between succinctness and complete precision. Being succinct isn’t trivial — writing that is accessible and engaging can be much more effective than verbose academic prose. The page you linked to is a summary of our career planning course, so it's necessarily even more succinct than usual and doesn't delve into the details of each claim. Of course, we don’t want to mislead people about what we believe, so these kinds of decisions are always a balancing act, and we won’t always get it right.
Your post is a good reminder of how some ways of communicating these ideas can give the wrong impression, so we’re going to review whether and to what extent we should make changes to be clearer about this issue. The feedback is much appreciated!
— Cody from 80k
Thanks for the response! It's great to see that animals are mentioned on many other pages on the website. I understand the difficulty of the tradeoffs between succinctness and precision.
If vegetarianism were cheap, it wouldn't be an effective signal. To signal some underlying property, you have to undertake actions that would be very costly if you didn't have that property.
This seems true only if you expect a decent share of our community members to be dishonest, or not to disvalue giving dishonest signals? Being vegetarian just to appear to care about animals, rather than because you care about them or because you're trying to cooperate with those who do, seems like it would be costly to people's consciences.
There are also indirect expected benefits to vegetarianism for nonhuman animals or other future moral patients with similarly limited agency: it increases the salience of their interests day-to-day and potentially reduces cognitive dissonance. So it's not just a signal that someone already cares about animals; it also makes them care more.
I'm not sure if I follow. I guess strict veganism would be a more effective signal then, since it would be more costly? But I see even fewer people in EA/rationalist circles being vegan rather than vegetarian.
Did anyone reading this ever get a suggestion from 80k to consider working on animal welfare? Is anyone working on animal welfare because of 80k?
If you "still think factory farming is an urgent problem [...]. But in the end, decided to focus on something else." I would say that it makes sense to not alienate non-vegans who could help with AI Safety by mentioning non-human animals.
I agree that it makes sense, although using the term "people" and clarifying only in a footnote that it also covers non-human animals seems a bit dishonest to me. I'm uncertain about the tradeoffs between telling the truth and trying to maximize impact here.