MichaelStJules

Independent researcher
11822 karma · Joined · Working (6-15 years) · Vancouver, BC, Canada

Bio

Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty and cluelessness, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Sequences
3

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments
2509

Topic contributions
12

Thanks Erich!

Yes, it is pretty close to Korsgaard! I think I actually had Korsgaard in mind and might have checked some of her shorter pieces while working on this, although "object views" ended up being what I actually wanted here. Also, there's this piece by Jeff Sebo explaining Korsgaard's constructivism.

Fyi, the latter two of these links are broken.

Fixed! Thanks.

They aren't asserting with certainty that the whole universe, including the unreachable portion, is finite in extent. They're just saying that it's possible, and they also note that an infinite universe is possible in the sentence to which that footnote is attached.

Even if you think a universe with infinite spatial extent is very unlikely, you should still entertain the possibility. If there's a chance the universe is infinite and you can have infinite impact (before renormalizing), a risk-neutral expected value reasoner should wager on that.

FWIW, I'm sympathetic to the arguments in that section that count against expected value maximization, or at least undermine the arguments for it. I'm not totally convinced of expected value maximization myself.

However, that doesn't give a positive case for ignoring these infinities. Personally, I find infinite acausal impacts not too unlikely: that acausal influence is possible seems more likely than not, and that the universe is infinite in spatial extent (and in the right way to be influenced infinitely acausally) seems not too unlikely.

But I am optimistic about renormalization.

You could want to make acausal trades and cooperate with agents causally disconnected from you. You'd expect that those who reason (sufficiently) similarly would do the same in return, so your cooperating would be evidence of their cooperating and make it more likely.

If you're difference-making risk averse locally, e.g. you don't care about making a huge difference with a very, very tiny probability, then, according to Wilkinson, taking acausal influence into account should make you (possibly much) less difference-making risk averse.

There's a thread here about hardware between companies, with johnswentworth arguing AMD had better hardware than Nvidia.

If you maximize expected value, you should take expected values through small probabilities, including the probability that we have the physics wrong or that things could go on forever (or without hard upper bound) temporally. Unless you can be 100% sure there are no infinities, your expected values will be infinite or undefined. And there are, I think, hypotheses that can't be ruled out and that could involve infinite affectable value.
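The point about small probabilities can be shown with a toy calculation (all numbers here are made up for illustration): even a one-in-a-billion credence in an astronomically large payoff, standing in for an infinite one, swamps a risk-neutral expected value.

```python
# Toy illustration: under risk-neutral expected value maximization,
# a tiny probability of a huge (stand-in for infinite) payoff
# dominates the calculation. All numbers are hypothetical.

def expected_value(outcomes):
    """Sum of probability-weighted values over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# A mundane prospect: value 100 for sure.
mundane = [(1.0, 100.0)]

# The same prospect, minus a 1-in-a-billion sliver of probability
# reassigned to an astronomically large payoff.
with_tail = [(1.0 - 1e-9, 100.0), (1e-9, 1e15)]

print(expected_value(mundane))    # 100.0
print(expected_value(with_tail))  # ~1000100.0: the tiny-probability tail dominates
```

With a genuinely infinite payoff in the tail, the sum would be infinite (or undefined, if infinite payoffs of both signs appear), which is the problem unless that probability is exactly zero.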

In response to Carl Shulman on acausal influence, David Manheim said to renormalize. I'm sympathetic and would probably agree with doing something similar, but the devil is in the details. There may be no very uniquely principled way to do this, and some things can still break down, e.g. you get actions that are morally incomparable.

EDIT: Rereading, I'm not really disagreeing with you. I definitely agree with the sentiment here:

And so when I see comments saying things like "I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative", I'm like... really? There's no empirical facts that could possibly cause the trade-off to go the other way?

(Edited) So, it's not the mere position that all tradeoffs between humans and chickens should favour humans that I take issue with, but rather >99% confidence in that position, or otherwise treating it as if it were true.

Whatever someone thinks makes humans infinitely more important than chickens[1] could actually be present in chickens in some similarly important form with non-tiny or even modest probability (examples here), or not actually be what makes humans important at all (more general related discussion, although that piece defends a disputed position). In my view, this should in principle warrant some tradeoffs favouring chickens.

Or, if they don't think there's anything at all other than, say, the mere fact of species membership, then this is just pure speciesism and seems arbitrary.

  1. ^

    Or makes humans matter at all, but chickens lack, so chickens don't matter at all.

I only skimmed, but on the view that we should focus on the very worst experience (or the very worst life, say), consider a thought experiment where x is an extremely horrible experience and y is another extremely horrible experience, but slightly better:

  1. Alice suffers x, and 100 people live great lives without suffering.
  2. Alice suffers y, and the 100 people also each suffer y.

I would choose 1, even though the worst experience in it is worse than the worst experience in 2. I think modest tradeoffs between intensity and number should be made, at least for similar intensities. Or perhaps we should care about the differences rather than just improving the worst experience: e.g. y - x, what's at stake for Alice, is actually much smaller in magnitude than y, which is (at least) what's at stake for each of the other 100 people.
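With made-up intensity numbers (larger = worse suffering), say x = 100 and y = 95, the two options can be compared under the two decision rules:

```python
# Toy comparison of the two options. Intensities are hypothetical:
# larger numbers mean worse suffering.
x, y = 100.0, 95.0  # x is slightly worse than y

option_1 = [x]        # Alice suffers x; the 100 others don't suffer
option_2 = [y] * 101  # Alice and the 100 others each suffer y

def worst(option):
    """Intensity of the worst experience in the option."""
    return max(option)

def total(option):
    """Fully aggregated (summed) suffering in the option."""
    return sum(option)

# Minimizing the worst experience prefers option 2 (95 < 100)...
print(worst(option_1), worst(option_2))  # 100.0 95.0
# ...but aggregation strongly prefers option 1 (100 vs 9595).
print(total(option_1), total(option_2))  # 100.0 9595.0
```

A partially aggregative view would land somewhere between these two rules, allowing the numbers to matter when the intensities are sufficiently similar.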

Even if I find full aggregation and especially summation counterintuitive, at least some modest tradeoffs seem right to me. You might be interested in "partial aggregation".

More recent data for US beef cattle (APHIS USDA, 2017, p.iii):

Only 7.8 percent of calves born or expected to be born in 2017 had horns, indicating the widespread use of polled breeds. For horned calves that were dehorned, the average age at dehorning was 107.0 days.

FWIW, Molly's comment you linked to quoted and cited Welfare Footprint Project and basically addressed something like "grows to a bigger size more quickly":

The Welfare Footprint Project used the Cumulative Pain Framework to investigate how the adoption of the Better Chicken Commitment (BCC) and similar welfare certification programs affect the welfare of broilers. Specifically, they examined concerns that the use of slower-growing breeds may increase suffering by extending the life of chickens for the production of the same amount of meat. From their main findings they stated: 

'Our results strongly support the notion that adoption of BCC standards and slower-growing broiler strains have a net positive effect on the welfare of broiler chickens. Because most welfare offenses endured by broilers are strongly associated with fast growth, adoption of slower-growing breeds not only reduces the incidence of these offenses but also delays their onset. As a consequence, slower-growing birds are expected to experience a shorter, not longer, time in pain before being slaughtered.'

Ah, ya:

For the reformed scenario, represented by the use of a slower-growing strain, we assumed an average ADG of 45-46 g/day, hence that the same slaughter weight would be reached in approximately 56 days.
