MichaelStJules
You could also just replace everyone with beings with (much) more satisfied preferences on aggregate. Replacement, or otherwise killing everyone against their preferences, can be an issue for basically any utilitarian or consequentialist welfarist view that isn't person-affecting, including symmetric total preference utilitarianism.

I also left a comment related to neuron counts and things similar to experience size in point 2 here.

Ah sorry, I should have read more carefully. You were clearly referring to the intensity or size of the experience, not of the behaviour. 4 hours of sleep and commenting before going to the airport. :P

I wrote more about experience size in the comments on trammell's post.

My counter-assumption here is that where humans display anxiety, fear or self-protective behavior, both the behaviors themselves and the corresponding experience are likely to be more intense or bigger sized than those of a pig, chicken or shrimp that exhibits these behaviors[8].

What do you have in mind by "more intense or bigger sized"?

FWIW, here's how I would probably think about it:

  1. Do humans react with more urgency or desperation, or in more extreme ways?
    1. I would guess it would be similar across all mammals and birds, but it's hard to say for shrimp. I would actually just check the proxy for "panic-like behavior", which is present in pigs and chickens, but unknown in shrimp. I think "panic-like behavior" is one of the most important proxies here, because it's the best evidence about the capacity for disabling and maybe excruciating pain/suffering.[1]
  2. Do humans have a greater diversity of responses, or more flexible responses in anxiety, fear and self-protective behavior?
    1. Yes, but, in my view, this probably tracks intelligence and not how much an animal cares. I wouldn't say young children suffer less because of less flexible responses, say. Young children have the capacity to develop more responses and more flexible responses than other animals, though, and maybe that could matter, but I'm personally skeptical.
    2. Still, there's a third possibility: it might track the extent to which we can say an animal can care about anything at all, under a graded view of consciousness, which could favour humans more (more here).
  1. ^

    Disabling. Pain at this level takes priority over most bids for behavioral execution and prevents most forms of enjoyment or positive welfare. Pain is continuously distressing. Individuals affected by harms in this category often change their activity levels drastically (the degree of disruption in the ability of an organism to function optimally should not be confused with the overt expression of pain behaviors, which is less likely in prey species). Inattention and unresponsiveness to milder forms of pain or other ongoing stimuli and surroundings is likely to be observed. Relief often requires higher drug dosages or more powerful drugs. The term Disabling refers to the disability caused by ‘pain’, not to any structural disability.

    Excruciating. All conditions and events associated with extreme levels of pain that are not normally tolerated even if only for a few seconds. In humans, it would mark the threshold of pain under which many people choose to take their lives rather than endure the pain. This is the case, for example, of scalding and severe burning events. Behavioral patterns associated with experiences in this category may include loud screaming, involuntary shaking, extreme muscle tension, or extreme restlessness. Another criterion is the manifestation of behaviors that individuals would strongly refrain from displaying under normal circumstances, as they threaten body integrity (e.g. running into hazardous areas or exposing oneself to sources of danger, such as predators, as a result of pain or of attempts to alleviate it). The attribution of conditions to this level must therefore be done cautiously. Concealment of pain is not possible.


Future debate week topics?

  1. Global health & wellbeing (including animal welfare) vs global catastrophic risks, based on Open Phil's classifications.
  2. Neartermism vs longtermism.
  3. Extinction risks vs risks of astronomical suffering (s-risks).
  4. Saving 1 horse-sized duck vs saving 100 duck-sized horses.

I like the idea of going through cause prioritization together on the EA Forum.

I never found psychological hedonism (or motivational hedonism) very plausible, but I think it's worth pointing out that the standard version — according to which everyone is ultimately motivated only by their own pleasure and pain — is a form of psychological egoism and seems incompatible with sincerely being a hedonistic utilitarian or caring about others and their interests for their own sake.

From https://www.britannica.com/topic/psychological-hedonism :

Psychological hedonism, in philosophical psychology, the view that all human action is ultimately motivated by desires for pleasure and the avoidance of pain. It has been espoused by a variety of distinguished thinkers, including Epicurus, Jeremy Bentham, and John Stuart Mill, and important discussions of it can also be found in works by Plato, Aristotle, Joseph Butler, G.E. Moore, and Henry Sidgwick.

Because its defenders generally assume that agents are motivated only by the prospect of their own pleasures and pains, psychological hedonism is a form of psychological egoism.

More concretely, a psychological hedonist who cares about others, but only based on how it makes them feel, would prefer to never find out that they've caused harm or are doing less good than they could, if it wouldn't make them (eventually) feel better overall. They don't actually want to do good, they just want to feel like they're doing good. Ignorance is bliss.

They could be more inclined to get into or stay in an experience machine, knowing they'd feel better even if it meant never actually helping anyone else.

That being said, they might feel bad about it if they know they're in or would be in an experience machine. So, they might refuse the experience machine by following their immediate feelings and ignoring the fact that they'd feel better overall in the long run. This kind of person seems practically indistinguishable from someone who sincerely cares about others, but does so through and based on their feelings.

Thanks for writing this and for everyone else's support! ❤️


I favour animal welfare, but some (near-term future) considerations I'm most sympathetic to that could favour global health are:

  1. I'm not a hedonist. I care about every way any being can care consciously and terminally about anything. So, I care about others' (conscious or dispositionally conscious) hedonic states, desires, preferences, moral intuitions and other attitudes on their behalf. I'd guess that humans are much more willing to endure suffering, including fairly intense suffering, for their children and other goals than other animals are for anything. So human preferences might often be much stronger than other animals', if we normalize preferences by preferences about one's own suffering, say.
    1. This has some directly intuitive appeal, but my best guess is that this involves some wrong or unjustifiable assumptions, and I doubt that such preferences are even interpersonally comparable.[1]
    2. This reasoning could lead to large discrepancies between humans, because some humans are much more willing to suffer for things than others. The most fanatical humans might dominate. That could be pretty morally repugnant.
  2. Arguments for weighing ~proportionally with neuron counts:
    1. The only measures of subjective welfare that seem to me like they could ground interpersonal comparisons are based on attention (and alertness), e.g. how hard attention is pulled towards something important (motivational salience) or "how much" attention is used. I could imagine the "size" of attention, e.g. the number of distinguishable items in it, scaling with neuron counts, maybe even proportionally, which could favour global health on the margin.
      1. But probably with decreasing marginal returns to additional neurons, and I give substantial weight to the number of neurons not really mattering at all, once you have the right kind of attention.
    2. Some very weird and speculative possibilities of large numbers of conscious or value-generating subsystems in each brain could support weighing ~proportionally with neuron counts in expectation, even if you assign the possibilities fairly low but non-negligible probabilities (Fischer, Shriver & St. Jules, 2022); the sketch after this comment's footnote illustrates how.
      1. Maybe even faster scaling than proportional in expectation, but I think that leads to double counting I'd reject if it's even modestly faster than proportional.
  3. Animal welfare work has more steeply decreasing marginal cost-effectiveness.
  4. Cost-effectiveness estimates for marginal animal welfare work are more speculative than GiveWell's (RCT- and meta-analysis-based) estimates, at least for the more direct impacts considered. Maybe we're not skeptical enough of the causal effects of animal welfare work, and the welfare reforms would have happened soon anyway or aren't as likely to actually materialize as we think. I'm also inclined to give less weight to more extreme impacts when they're more ambiguous/speculative, similar to difference-making ambiguity aversion.
  5. I worry about lots of animal welfare work backfiring, and about support for apparently safer work funging with work that backfires, so itself backfiring too.
    1. My best guess is that animal agriculture is good for wild animals, especially invertebrates, because it reduces their populations and I have very asymmetric views. So plant-based substitutes, cultured meat and other diet change work could backfire, if and because it harms wild invertebrates more than it helps animals used for food.
    2. I worry that nest deprivation for caged laying hens could be much less intensely painful than the long-term pain from keel bone fractures, so cage-free could be worse because of the apparent increase in keel bone fractures.
      1. I think we should support more work to reduce keel bone fractures in laying hens, and CE/AIM wants to start a new charity for this.
  6. Saving human lives, e.g. through AMF, probably reduces wild animal populations, so seems good for animals overall if you care enough about invertebrates (relative to animals used for food) and think they'd be better off not existing.
    1. Maybe farmed insect welfare work is even better, though.
  1. ^

    People probably just have different beliefs/preferences about how much their own suffering matters, and those preferences are plausibly not interpersonally comparable at all.

    Some people may find it easier to reflectively dismiss or discount their own suffering than others for various reasons, like particular beliefs or greater self-control. If interpersonal comparisons are warranted, it could just mean these people care less about their own suffering in absolute terms on average, not that they care more about other things than average. Other animals probably can't easily dismiss or discount their own suffering much, and their actions follow pretty directly from their suffering and other felt desires, so they might even care more about their own suffering in absolute terms on average.

    We can also imagine moral patients with conscious preferences who can't suffer at all, so we'd have to find something else to normalize by to make interpersonal comparisons with them.

    I discuss interpersonal comparisons more here.
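To make point 2.2 above concrete, here's a minimal sketch, with toy numbers of my own rather than anything from Fischer, Shriver & St. Jules (2022), of how even a low probability on a "many value-generating subsystems" hypothesis can make expected moral weight scale roughly with neuron counts:

```python
# Toy model (illustrative assumptions only): expected moral weight as a
# mixture of two hypotheses about value-generating subsystems.

HUMAN_NEURONS = 86e9     # commonly cited whole-brain estimate
CHICKEN_NEURONS = 2.2e8  # commonly cited whole-brain estimate

def expected_weight(neurons, p_subsystems, subsystems_per_neuron=1.0):
    """With probability (1 - p): one unified subject, weight 1 regardless
    of brain size. With probability p: value-generating subsystems in
    proportion to neuron count."""
    unified = (1 - p_subsystems) * 1.0
    subsystems = p_subsystems * subsystems_per_neuron * neurons
    return unified + subsystems

p = 0.001  # fairly low but non-negligible
ratio = expected_weight(HUMAN_NEURONS, p) / expected_weight(CHICKEN_NEURONS, p)
print(f"expected human:chicken weight ratio ~ {ratio:.0f}")  # ~391
```

Once p times the subsystem count is large relative to 1, the subsystem term dominates both expectations, so the ratio of expected weights approaches the neuron-count ratio (~391 here) almost regardless of how small p is.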

My sequence might also be helpful. I didn't come up with too many directly useful estimates, but I looked into implications of desire-based and preference-based theories for moral weights and prioritization, and I would probably still prioritize nonhuman animals on such views. I guess most importantly:

  1. For endorsed/reflective/cognitive/belief-like desires or preferences, like life satisfaction and responses to hypotheticals like QALY tradeoff questions, I'm pretty skeptical of interpersonal utility comparisons in general, even between humans. I'm somewhat skeptical of comparisons for hedonic states between different species. I'm sympathetic to comparisons for "felt desires" across species, based on how attention is affected (motivational salience) and "how much attention" different beings have.[1] (More here, partly in footnotes)
  2. Perhaps surprisingly and controversially, I suspect many animals have simple versions of endorsed/reflective/cognitive/belief-like desires or preferences. It's not obvious they matter (much) less for being simpler, but this could go either way. (More here and here)
  3. Humans plausibly have many more preferences and desires, about many more things, than other animals, but this doesn't clearly favour humans dramatically.
    1. If we measure the intensity of preferences and desires by their effects on attention, then the number of them doesn't really seem to matter. Often our preferences and desires are dominated by a few broad terminal ones, like spending time with loved ones and their welfare, being happy and free from suffering, and career aspirations.
    2. I'm not aware of particularly plausible/attractive ways to ground interpersonal comparisons otherwise.
    3. Normalization approaches that don't ground interpersonal comparisons usually don't favour humans at all, though some specific ones might.
  4. Uncertainty about moral weights favours nonhumans, because we understand and value things by reference to our own experiences, so we should normalize moral weights by the value we assign to our own experiences and can take expected values over that (More here).
  5. We could assume that how much we believe (or act like) our own suffering (or hedonic states or felt desires) matters is proportional to the intensity of our suffering (e.g. based on attention), across moral patients, including humans and other animals. I could see humans coming out quite far ahead this way, based on things like how much parents care about their children, people's ethical beliefs (utilitarian, deontological, religious), other important goals, and people's apparently greater willingness to suffer for these than other animals' willingness to suffer for anything.
    1. There's some intuitive appeal to this approach, but the motivating assumption seems probably wrong to me, and reasonably likely to not even be justifiable as a rough approximation.[2]
    2. It also could lead to large discrepancies between humans, because some humans are much more willing to suffer for things than others. The most fanatical humans might dominate. That could be pretty morally repugnant.
  1. ^

    The quantity of attention, in roughly the most extreme case in my view, could scale proportionally with the number of (relevant) neurons, so humans would have, as a first guess, ~400 times as much moral weight as chickens. OTOH, I'd actually guess there are decreasing marginal returns to additional neurons, e.g. it could scale more like the logarithm or the square root of the number of neurons. And it might not really scale with the number of neurons at all. The sketch at the end of this comment illustrates how much these scaling assumptions differ.

  2. ^

    People probably just have different beliefs about how much their own suffering matters, and these beliefs are plausibly not interpersonally comparable at all.

    Some people may find it easier to reflectively dismiss or discount their own suffering than others for various reasons, like particular beliefs or greater self-control. If interpersonal comparisons are warranted, it could just mean these people care less about their own suffering in absolute terms on average, not that they care more about other things than average. Other animals probably can't easily dismiss or discount their own suffering much, and their actions follow pretty directly from their suffering and other felt desires, so they might even care more about their own suffering in absolute terms on average.

    We can also imagine moral patients with conscious preferences who can't suffer at all, so we'd have to find something else to normalize by to make interpersonal comparisons with them.
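As flagged in the first footnote, here's a minimal sketch of how much the choice of scaling function matters, using commonly cited whole-brain neuron-count estimates; the code and numbers are my own illustration, nothing more:

```python
import math

# Commonly cited whole-brain neuron-count estimates.
HUMAN_NEURONS = 86e9
CHICKEN_NEURONS = 2.2e8

# Candidate scaling functions from the footnote above.
scalings = {
    "proportional": lambda n: n,
    "square root": lambda n: math.sqrt(n),
    "logarithm": lambda n: math.log(n),
    "flat (no scaling)": lambda n: 1.0,
}

for name, scale in scalings.items():
    ratio = scale(HUMAN_NEURONS) / scale(CHICKEN_NEURONS)
    print(f"{name:>17}: human:chicken weight ratio ~ {ratio:.1f}")

# proportional      : ~390.9  (the "~400 times" above)
# square root       : ~19.8
# logarithm         : ~1.3
# flat (no scaling) : 1.0
```

The takeaway is just how sensitive the human:chicken ratio is to the functional form: nearly 400x under proportionality, ~20x under a square root, and barely above 1x under a logarithm.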

I think we can put some reasonable bounds on our uncertainty and ranges, and they can tell us some useful things. Or at least I can, according to my own intuitions, and I end up prioritizing animal welfare this way.

Also, I've argued here that uncertainty about moral weights actually tends to further favour prioritizing nonhumans.
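Here's a minimal sketch of that expected-value argument, with toy probabilities and weights of my own choosing: normalize so one's own (human) experience has weight 1, be uncertain about a chicken's relative weight, and take the expectation.

```python
# Toy illustration (my own numbers): moral weights normalized so that a
# human experience = 1, with uncertainty over a chicken's relative weight.
scenarios = [
    (0.5, 0.001),  # 50%: chickens matter ~1000x less than humans
    (0.5, 0.5),    # 50%: chickens matter about half as much
]

expected = sum(p * w for p, w in scenarios)
print(f"expected chicken:human weight = {expected:.4f}")  # 0.2505

# Compare a "best guess" that splits the difference multiplicatively:
geometric_mean = (0.001 * 0.5) ** 0.5
print(f"geometric-mean point estimate  = {geometric_mean:.4f}")  # ~0.0224
```

The expectation is dominated by the high-weight scenario, so point estimates that split the difference multiplicatively understate the expected weight by an order of magnitude here; that's the sense in which uncertainty over human-normalized moral weights tends to favour nonhumans.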
