MichaelStJules

DMRA could actually favour helping animals of uncertain sentience over helping humans or animals of more probable sentience, if and because helping humans can backfire badly for other animals if other animals matter a lot (through the meat eater problem and effects on wild animals), and helping vertebrates can backfire badly for wild invertebrates if wild invertebrates matter a lot (especially via population effects from land use and fishing). Helping other animals seems less prone to backfiring as badly for humans, although it can. And helping farmed shrimp and insects seems less prone to backfiring as badly (relative to potential benefits) for other animals (vertebrates, invertebrates, farmed and wild).

I suppose you might prefer human-helping interventions with very little impact on animals. Maybe mental health? Or, you might combine human-helping interventions to try to mostly cancel out impacts on animals, like life-saving charities + family planning charities, which may have roughly opposite-sign effects on animals. And maybe also hedge with some animal-helping interventions to make up for any remaining downside risk for animals. Their combination could be better under DMRA than primarily animal-targeted interventions, or at least than interventions aimed at helping animals unlikely to matter much.

Maybe chicken welfare reforms still look good enough on their own, though, if chickens are likely enough to matter enough, as I think RP showed in the CURVE sequence.

Another motivation I think worth mentioning is just objecting to fanaticism. As Tarsney showed, respecting stochastic dominance with statistically independent background value can force a total utilitarian to be pretty fanatical, although exactly how fanatical will depend on how wide the distribution of the background value is. Someone could still find that objectionably fanatical, even to the extent of rejecting stochastic dominance as a guide. They could still respect statewise dominance.
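(For reference, here's how I'd state the two dominance notions, in standard form; nothing below is specific to Tarsney's setup.)

```latex
% Statewise dominance: option A is at least as good as option B in every
% state of the world s (and strictly better in some state, for strict dominance).
A \succeq_{\mathrm{statewise}} B \iff \forall s \in S:\; u(A, s) \ge u(B, s)

% First-order stochastic dominance: A is at least as likely as B to reach
% any given value threshold x (and strictly more likely for some x, for
% strict dominance).
A \succeq_{\mathrm{stochastic}} B \iff \forall x \in \mathbb{R}:\; \Pr[u(A) \ge x] \ge \Pr[u(B) \ge x]
```

Statewise dominance implies stochastic dominance but not conversely, which is why someone can reject stochastic dominance as a guide while still respecting statewise dominance.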

That being said, DMRA could also be "fanatical" about the risk of causing net harm, leading to paralysis: never doing anything, or always sticking with the "default". So maybe the thing to do is to give less-than-proportional weight to both net positive and net negative impacts, e.g. by applying a sigmoid function to the difference.
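A minimal sketch of what that could look like, in my own notation (the net impact Δ relative to the default and the scale parameter τ are illustrative assumptions, not taken from any particular source):

```latex
% Weight an option by a bounded sigmoid of its net impact \Delta relative
% to the default, rather than by \Delta itself:
V(\Delta) = \tanh\!\left(\frac{\Delta}{\tau}\right), \qquad \tau > 0.

% Both large net benefits and large net harms get less-than-proportional
% weight as |\Delta| grows, so the view is neither fanatical about upside
% nor paralyzed by downside risk; \tau sets where saturation kicks in.
```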

I'm sympathetic to functionalism, and the attention, urgency or priority given to something seems likely to be what defines its intensity, to me, at least for pain, and possibly generally. I don’t know what other effects would ground intensity in a way that’s not overly particular to specific physical/behavioural capacities or non-brain physiological responses (heart rate, stress hormones, etc.). (I don't think reinforcement strength is defining.)

There are some attempts at functional definitions of pain and pleasure intensities here, and they seem fairly symmetric:

https://welfarefootprint.org/technical-definitions/

and some more discussion here:

https://welfarefootprint.org/2024/03/12/positive-animal-welfare/

I'm afraid I don't know anywhere else these arguments are fleshed out in more detail than what I shared in my first comment (https://link.springer.com/article/10.1007/s13164-013-0171-2).

I'll add that our understanding of pleasure and suffering and the moral value we assign to them may be necessarily human-relative, so if those phenomena turn out to be functionally asymmetric in humans (e.g. one defined by the necessity of a certain function with no sufficiently similar/symmetric counterpart in the other), then our concepts of pleasure and suffering will also be functionally asymmetric. I make some similar/related arguments in https://forum.effectivealtruism.org/posts/L4Cv8hvuun6vNL8rm/solution-to-the-two-envelopes-problem-for-moral-weights

I lean towards functionalism and illusionism, but am quite skeptical of computationalism and computational functionalism, and I think it's important to distinguish them. Functionalism is, AFAIK, a fairly popular position among relevant experts, but computationalism much less so.

Under my favoured version of functionalism, the "functions" we should worry about are functional/causal roles with effects on things like attention and (dispositional or augmented hypothetical) externally directed behaviours, like approach, avoidance, beliefs, things we say (and how they are grounded through associations with real world states). These seem much less up to interpretation than computed mathematical "functions" like "0001, 0001 → 0010". However, you can find simple versions of these functional/causal roles in many places if you squint, hence fuzziness.

Functionalism understood this way is still compatible with digital consciousness.

And I think we can use debunking arguments to support functionalism of some kind, but it could end up being a very fine-grained view, even the kind of view you propose here, with the necessary functional/causal roles at the level of fundamental physics. I doubt we need such fine-grained roles, though, and suspect similar debunking arguments can rule out their necessity. And I think those roles would be digitally simulatable in principle anyway.

It seems unlikely a large share of our AI will be fine-grained simulations of biological brains like this, given its inefficiency and the direction of AI development, but the absolute number could still be large.

Or, we could end up with a version of functionalism where nonphysical properties or nonphysical substances actually play parts in some necessary functional/causal roles. But again, I'm skeptical, and those roles may also be digitally (and purely physically) simulatable.

It seems worth mentioning the possibility that progress can also be bottlenecked by events external to our civilization. Maybe we need to wait for some star to explode for some experiment, or for it to reach some state before we can exploit it. Or maybe we need to wait for the universe to cool to do something (like the aestivation hypothesis for aliens). Or maybe we need to wait for an alien civilization to mature or reach us before doing something.

And even if we don’t "wait" for such events, our advancement can be slowed, because we can't take advantage of them sooner or as effectively alongside our internal advancement. Cumulatively, they could mean advancement is not lasting and doesn't make it to our end point.

But I suppose there's a substantial probability that none of this makes much difference, so that uniform internal advancement really does bring everything that matters forward roughly uniformly (ex ante), too.

And maybe we miss some important/useful events if we don't advance. For example, the expansion of the universe puts some stars permanently out of reach sooner if we don’t advance.

Another possible endogenous end point that could be advanced is meeting (or being detected by) an alien (or alien AI) civilization earlier and having our civilization destroyed by them earlier as a result.

Or maybe we enter an astronomical suffering or hyperexistential catastrophe due to conflict or stable totalitarianism earlier (internally or due to aliens we encounter earlier) and it lasts longer, until an exogenous end point. So, we replace some good with bad, or otherwise replace some value with worse value.

My thought experiment was aimed at showing that direct intuitive responses to such thought experiments are irrationally sensitive to framing and how concrete the explanations are.

The asymbolic child is almost identical to a typical child and acts the same way, so you would think people would be more hesitant to dismiss their apparent pain than a robot's. But I would guess people dismiss the asymbolic child's pain more easily.

My explanation for why the asymbolic child's pain doesn't matter (much) actually shouldn't make you more sure of the fact than the explanation given in the robot case. I've explained how and why the child is asymbolic, but in the robot case, we've just said "our best science reveals to us—correctly—that they are not sentient". "correctly" means 100% certainty that they aren't sentient. Making the explanation more concrete makes it more believable, easier to entertain and easier for intuitions to reflect appropriately. But it doesn't make it more probable!

However, on reflection, these probably push the other way and undermine my claim of irrational intuitive responses:

  1. My opportunity cost framing: thinking it's better to give the painkillers to the typical child doesn't mean you would normally want to perform surgery on the asymbolic child without painkillers, if painkillers are cheap and not very supply-limited, and the asymbolic child would protest less (pretend to be in pain less) if given them.
  2. People aren't sure moral patienthood requires sentience, a still vague concept that may evolve into something they don't take to be necessary, but they're pretty sure that the pain responses in the asymbolic child don't indicate something that matters much, whatever the correct account of moral patienthood and value. It can be easier to identify and be confident in specific negative cases than to put trust in a rule separating negative and positive cases.

(You may be aware of these already, but I figured they were worth sharing if not, and for the benefit of other readers.)

Some "preference-affecting views" do much better on these counts and can still be interpreted as basically utilitarian (although perhaps not based on "axiology" per se, depending on how that's characterized). In particular:

  1. Object versions of preference views, as defended in Rabinowicz & Österberg, 1996 and van Weeldon, 2019. These views are concerned with achieving the objects of preferences/desires, essentially taking on everyone's preferences/desires like moral views weighed against one another. They are not (necessarily) concerned with having satisfied preferences/desires per se, or just having more favourable attitudes (like hedonism and other experientialist views), or even objective/stance-independent measures of "value" across outcomes.[1]
  2. The narrow and hard asymmetric view of Thomas, 2019 (for binary choices), applied to preferences/desires instead of whole persons or whole person welfare. In binary choices, if we add a group of preferences/desires and assume no other preference/desire is affected, this asymmetry is indifferent to the addition of the group if their expected total value (summing the value in favourable and disfavourable attitudes) is non-negative, but recommends against it if their expected total value is negative (see the rough formalization after this list). It is also indifferent between adding one favourable attitude and another even more favourable attitude. Wide views, which treat contingent counterparts as if they're necessary, lead to replacement.
  3. Actualism, applied to preferences instead of whole persons or whole person welfare (Hare, 2007, Bykvist, 2007, St. Jules, 2019, Cohen, 2020, Spencer, 2021, for binary choices).
  4. Dasgupta's view, or other modifications of the above views in a similar direction, for more than two options to choose from, applied to preferences instead of whole persons or whole person welfare. This can avoid repugnance and replacement in three option cases, as discussed here. (I'm working on other extensions to choices between more than two options.)
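Here's a rough formalization of the asymmetry in 2, in my own notation (v_i is the signed value of the i-th added attitude, positive for favourable and negative for disfavourable), assuming a binary choice in which no other preference/desire is affected:

```latex
% Adding attitudes with signed values v_1, \dots, v_n, all else equal:
\mathbb{E}\!\left[\sum_{i=1}^{n} v_i\right] \ge 0 \;\Longrightarrow\; \text{indifference to adding them}

\mathbb{E}\!\left[\sum_{i=1}^{n} v_i\right] < 0 \;\Longrightarrow\; \text{adding them is recommended against}
```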

 

I think, perhaps by far, the least alienating (paternalistic?) moral views are preference-affecting "consequentialist" views, without any baked-in deontological constraints/presumptions, although they can adopt some deontological presumptions from the actual preferences of people with deontological intuitions. For example, many people don't care (much) more about being killed by another human over dying by natural causes (all else equal), so it would be alienating to treat their murder as (much) worse or worth avoiding (much) more than their death by natural causes on their behalf. But some people do care a lot about such differences, so we can be proportionately sensitive to those differences on their behalf, too. That being said, many preferences can't be assigned weights or values on the same scale in a way that seems intuitively justified to me, essentially the same problem as intertheoretic comparisons across very different moral views.

 

I'm working on some pieces outlining and defending preference-affecting views in more detail.

  1. ^

    Rabinowicz & Österberg, 1996:

    To the satisfaction and the object interpretations of the preference-based conception of value correspond, we believe, two different ways of viewing utilitarianism: the spectator and the participant models. According to the former, the utilitarian attitude is embodied in an impartial benevolent spectator, who evaluates the situation objectively and from the 'outside'. An ordinary person can approximate this attitude by detaching himself from his personal engagement in the situation. (Note, however, that, unlike the well-known meta-ethical ideal observer theory, the spectator model expounds a substantive axiological view rather than a theory about the meaning of value terms.) The participant model, on the other hand, puts forward as a utilitarian ideal an attitude of emotional participation in other people's projects: the situation is to be viewed from 'within', not just from my own perspective, but also from the others' points of view. The participant model assumes that, instead of distancing myself from my particular position in the world, I identify with other subjects: what it recommends is not a detached objectivity but a universalized subjectivity.

    Object vs attitude vs satisfaction/combination versions of preference/desire views are also discussed in Bykvist, 2022 and Lin, 2022, and there's some other related discussion by Rawls (1982, p. 181) and Arneson (2006).

FWIW, I meant "How could they not be conscious?" kind of rhetorically, but I appreciate your response. Making it more concrete like this is helpful. My comment here is pretty object-level about the specific views in question, so feel free not to respond to it or any specific points here.

Global workspace theory (...)

There probably still need to be "workspaces", e.g. working memory (+ voluntary attention?), or else the robots couldn't do many sophisticated things flexibly, and whatever those workspaces are could be global workspaces. Maybe each module has its own workspace, so is "global" to itself, and that's enough. Or, if the workspaces are considered together as one combined system, then it could be a more conventional "global workspace", just distributed. The differences don't seem significant at this level of abstraction. Maybe they are, but I'd want to know why. So, my direct intuitive reaction to "GWT is true and the robots aren't conscious" could be unreliable, because it's hard to entertain.

Higher order theories suggest that consciousness depends on having representations of our own mental states. A creature could have all sorts of direct concerns that it never reflected on, and these could look a lot like ours.

I think this one is more plausible and easier to entertain, although still weird.

I think it means that if you asked the mother robot if she cares about her child, she wouldn't say 'yes' (she might say 'no' or be confused). It seems the robots would all have complete alexithymia, and not just for emotions, but for all mental states, or at least all (the components of) mental states that could matter, e.g. valence, desires, preferences. But they'd still be intelligent and articulate. The mother would have no concept of desire, preference, caring, etc., or she'd be systematically unable to apply such concepts to herself, even though she might apply them to her child, e.g. she distinguishes her child from a "mere thing", and I imagine she recognizes that her child cares about things.

Or, maybe it could depend on the particulars of what's required of a higher order representation according to the theory. The mother robot might have and apply a concept of desire, preference, caring, etc. to herself, but it's not the right kind of higher order representation.

IIT suggests that you could have a high level duplicate of a conscious system that was unconscious due to the fine grained details.

IIT is pretty panpsychist in practice, just needing recurrence, IIRC. I don't think you would have a complex society of intelligent robots without recurrence (networks of purely feedforward interactions would end up far too large, but the recurrence might be extended beyond their brains). And at any rate, IIT seems way off track to me as a theory. So, my direct intuitive reaction to "IIT is true and the robots aren't conscious" will probably be unreliable.

 

My impression was that you like theories that stress the mechanisms behind our judgments of the weirdness of consciousness as critical to conscious experiences. I could imagine a robot just like us but totally non-introspective, lacking phenomenal concepts, etc. Would you think such a thing was conscious? Could it not desire things in something like the way we do?

There are a few "lines" that seem potentially morally significant to me as an illusionist:

  1. As you mention, having and applying phenomenal concepts, or having illusions of phenomenal consciousness, e.g. finding aspects of our perceptions/information processing weird/mysterious/curious/ineffable (or unphysical, private and/or intrinsic, etc., although that's getting more specific, and there's probably more disagreement on this). I agree the robots could fail to matter in this way.
  2. Having states that would lead to illusions of phenomenal consciousness or the application of phenomenal concepts to them, finding them weird/mysterious/curious, etc., if those states were introspected on by a sufficiently sophisticated system in the right way (even if the existing system is incapable of introspection; we consider a hypothetical attaching another system to do it). This is Frankish's and I suspect Dennett's normative interpretation of illusionism, and their views of consciousness are highly graded. Maybe just cognitive impenetrability suffices, if/because the cognitive impenetrability of the things we introspect is what makes them seem weird/mysterious/curious/ineffable to us.[1] I'd guess the robots would matter in this way.
  3. The appearances of something mattering, in causal/functional terms — including desires, pleasure, unpleasantness, preferences, moral intuitions, normative beliefs, etc. — just are phenomenal illusions or (the application of) phenomenal concepts, or parts of phenomenal illusions or phenomenal concepts that matter even on their own. It's not just that consciousness seems weird (etc.), but that part of our phenomenal concepts for (morally relevant) conscious mental states is just that they seem to matter. And, in fact, it's the appearance of mattering that makes the mental states matter morally, not the apparent weirdness (etc.). We wouldn't care (much) about a person's specific experience of red unless they cared about it, too. An experience only matters morally in itself if it seems to matter to the individual, e.g. the individual takes a specific interest in it, or finds it pleasant, unpleasant, attractive, aversive, significant, etc. Furthermore, it's not important that that "seeming to matter" applies to mental states in a higher-order way rather than "directly" to the intentional objects of mental states, like in the robots' desires; that's an arbitrary line.[2] The robots seem to matter in this way.

1 implies 2, and I suspect 3 implies 2, as well.

I also suspect we can't answer which of 1, 2 or 3 is (objectively, stance-independently) correct. It seems inherently normative and subjective (and I'm not a moral realist), although I've become pretty sympathetic to 3, basically for the reasons I give in 3. We could also go for a graded account of moral status, where each of 1, 2 and 3 ground different degrees of moral status.

  1. ^

    In defense of the necessity of the cognitive impenetrability of illusions of phenomenal consciousness, see Kammerer, 2022.

  2. ^

    Humphrey, another illusionist, said "Consciousness matters because it is its function to matter". However, he's skeptical animals other than mammals and birds are conscious. He thinks consciousness requires finding your own mental states/perceptions/sensations to matter, e.g. engaging in sensation-seeking or sensory play. Such animals find their perceptions themselves interesting, not just the intentional objects of those perceptions. So it's higher order-ish.

Not sure.

We could replace the agree/disagree slider with a cost-effectiveness ratio slider.

One issue could be that animal welfare has faster-diminishing returns than GHD.
