I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo's recent post arguing that some of GiveWell's grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I'll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.[1]
Let's say that a moral decision process is dogmatic if it's completely insensitive to the numbers on either side of the trade-off. Non-dogmatism rejects dogmatic moral decision processes.
A central example of a dogmatic belief is: "Making a single human happy is more morally valuable than making any number of chickens happy." The corresponding moral decision process would be, given a choice between spending money on making a human happy and spending it on making chickens happy, to spend it on the human no matter how many chickens could be made happy. Non-dogmatism rejects this decision-making process on the basis that it is dogmatic.
(Caveat: this seems fine for entities that are totally outside one's moral circle of concern. For instance, I'm intuitively fine with a decision-making process that spends money on making a human happy instead of spending money on making sure that a pile of rocks doesn't get trampled on, no matter the size of the pile of rocks. So maybe non-dogmatism says that so long as two entities are in your moral circle of concern -- so long as you assign nonzero weight to them -- there ought to exist numbers, at least in theory, for which either side of a moral trade-off could be better.)
And so when I see comments saying things like "I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative", I'm like... really? There are no empirical facts that could possibly cause the trade-off to go the other way?
Rejecting dogmatic beliefs requires more work. Rather than deciding that one side of a trade-off is better than the other no matter the underlying facts, you actually have to examine the facts and do the math. But, like, the real world is messy and complicated, and sometimes you just have to do the math if you want to figure out the right answer.
[1] Per the Wikipedia article on scope neglect, scope sensitivity would mean actually doing multiplication: making 100 people happy is 100 times better than making 1 person happy. I'm not fully sold on scope sensitivity; I feel much more strongly about non-dogmatism, which means that the numbers have to at least enter the picture, even if not multiplicatively.
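To make the distinction concrete, here's a minimal sketch in Python; the CHICKEN_WEIGHT constant is a made-up placeholder for illustration, not a claim about the correct moral weight of a chicken. The dogmatic rule ignores the numbers entirely, while the non-dogmatic rule lets them enter the comparison:

```python
# Illustrative sketch only. CHICKEN_WEIGHT is a hypothetical placeholder,
# not a claim about the correct moral weight of a chicken.
CHICKEN_WEIGHT = 1 / 1000


def dogmatic_choice(n_humans: int, n_chickens: int) -> str:
    # Dogmatic: completely insensitive to the numbers on either side.
    return "humans"


def non_dogmatic_choice(n_humans: int, n_chickens: int) -> str:
    # Non-dogmatic: the numbers enter the picture (here multiplicatively,
    # but any rule where large enough numbers can flip the answer qualifies).
    return "humans" if n_humans > CHICKEN_WEIGHT * n_chickens else "chickens"


print(dogmatic_choice(1, 10**9))      # 'humans', no matter the counts
print(non_dogmatic_choice(1, 10**9))  # 'chickens', since 10**9 / 1000 > 1
```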
EDIT: Rereading, I'm not really disagreeing with you. I definitely agree with the sentiment here:

And so when I see comments saying things like "I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative", I'm like... really? There are no empirical facts that could possibly cause the trade-off to go the other way?
(Edited) So, rather than the mere possibility that all tradeoffs between humans and chickens should favour humans, what I take issue with is >99% confidence in that position, or otherwise treating it as though it's true.
Whatever someone thinks makes humans infinitely more important than chickens[1] could actually be present in chickens in some similarly important form with non-tiny or even modest probability (examples here), or not actually be what makes humans important at all (more general related discussion, although that piece defends a disputed position). In my view, this should in principle warrant some tradeoffs favouring chickens.
Or, if they don't think there's anything at all that does this except, say, the mere fact of species membership, then this is just pure speciesism and seems arbitrary.

[1] Or whatever they think makes humans matter at all but chickens lack, so that chickens don't matter at all.
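To put toy numbers on that point (purely illustrative; the probability and conditional weight are assumptions, not estimates): if you assign a 10% probability that chickens have the morally relevant property, and conditional on that a chicken counts for 1/100 as much as a human, then a chicken's expected moral weight is 0.1 × 1/100 = 1/1000, so trade-offs that help more than about 1,000 chickens per human helped would, on those numbers, favour the chickens.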
I also disagree with those comments, but can you provide more argument for your principle? If I understand correctly, you are suggesting the principle that X can be lexicographically[1] preferable to Y if and only if Y has zero value. But, conditional on saying X is lexicographically preferable to Y, isn't it better for the interests of Y to say that Y nevertheless has positive value? I mean, I don't like it when people say things like "no amount of animal suffering, however enormous, outweighs any amount of human suffering, however tiny". But I think it is even worse to say that animal suffering doesn't matter at all, and that there is no reason to alleviate it even if it could be alleviated at no cost to human welfare.
Maybe your reasoning is more like this: in practice, everything trades off against everything else. So, in practice, there is just no difference between saying "X is lexicographically preferable to Y but Y has positive value", and "Y has no value"?
[1] From SEP: "A lexicographic preference relation gives absolute priority to one good over another. In the case of two-goods bundles, A ≻ B if a₁ > b₁, or a₁ = b₁ and a₂ > b₂. Good 1 then cannot be traded off by any amount of good 2."
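To illustrate that definition with a minimal sketch (the bundles and the cardinal weights below are arbitrary choices of mine, not from SEP), here is the lexicographic comparison next to a weighted cardinal one, showing that good 2 never compensates under the former but eventually does under the latter:

```python
# Bundles are (good_1, good_2) pairs, following the SEP two-goods example.

def lexicographic_prefers(a: tuple[float, float], b: tuple[float, float]) -> bool:
    # A ≻ B if a1 > b1, or a1 = b1 and a2 > b2: good 1 has absolute priority,
    # so no amount of good 2 can compensate for less of good 1.
    return a[0] > b[0] or (a[0] == b[0] and a[1] > b[1])


def cardinal_prefers(a, b, w1: float = 1.0, w2: float = 0.01) -> bool:
    # Weighted-sum comparison: enough of good 2 eventually outweighs good 1.
    # The weights are arbitrary placeholders for illustration.
    return w1 * a[0] + w2 * a[1] > w1 * b[0] + w2 * b[1]


print(lexicographic_prefers((1, 0), (0, 10**6)))  # True: good 1 dominates
print(cardinal_prefers((1, 0), (0, 10**6)))       # False: 1 < 0.01 * 10**6
```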
I think that in practice most people have ethical frameworks that include lexicographic preferences, regardless of whether they are happy to make other decisions using a cardinal utility framework.

I suspect most animal welfare enthusiasts presented with the possibility of organising a bullfight wouldn't respond with "well, how big is the audience?". I don't think their reluctance to determine whether bullfighting is ethical based on estimated utility tradeoffs reflects either a rejection of the possibility of human welfare or a speciesist bias against humans.
I like your framing though. Taken to its logical conclusion, you're implying:

(1) Some people have strong lexicographic preferences for X over improved Y.

(2) Insisting that the only valid ethical decision-making framework is a mathematical total utilitarian one, in which everything of value must be assigned cardinal weights, implies that to maintain this preference they must reject the possibility of Y having value.

(3) Acting within this framework implies they should also be indifferent to Y in all other circumstances, including when there is no tradeoff.

(4) Demanding that people with lexicographic preferences for X shut up and multiply is likely to lead to lower total utility for Y. And more generally, a world in which everyone acts as if anything they assign any value to may be multiplied and traded off against their own values sounds like a world in which most people will opt to care about as few things as possible.
This page could be a useful pointer?
Amish Shah is a Democratic politician who's running for Congress in Arizona. He appears to be a strong supporter of animal rights (see here).
He just won his primary election, and Cook Political Report rates the seat he's running for (AZ-01) as a tossup. My subjective probability that he wins the seat is 50% (Edit: now 30%). I want him to win primarily because of his positions on animal rights, and secondarily because I want Democrats to control the House of Representatives.
You can donate to him here.
Applicable campaign finance limits: According to this page, individuals can donate up to $5,400 to legislative candidates per two-year election cycle.
The page you linked is about candidates for the Arizona State House. Amish Shah is running for the U.S. House of Representatives. There are still campaign finance limits, though ($3,300 per election per candidate, where the primary and the general election count separately; see here).