I mostly do philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty and cluelessness, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
Your argument is implicitly assuming IIA.
On a person-affecting view violating IIA but not transitivity, we could have the following:
There's no issue for transitivity, because the 4 cases involve 4 distinct relations (distinguished by their subscripts), each of which is transitive. The 4 relations don't have to agree.
First of all, so long as we buy the transitivity of the better than relation, that won't work.
This isn't true. I can just deny the independence of irrelevant alternatives instead.
Second, it's highly counterintuitive that the addition of extra good options makes an action worse.
It's highly counterintuitive to you. It's intuitive to me because I'm sympathetic to the reasons that would justify it in some cases, and I outlined how this would work on my intuitions. The kinds of arguments you give probably aren't very persuasive to people with strong enough person-affecting intuitions, because those intuitions justify to them what you find counterintuitive.
I find it crazy and I think nearly all people do.
This doesn't seem like a reason that should really change anyone's mind about the issue. Or, at least not the mind of any moral antirealist like me.
I suppose a moral realist could be persuaded via epistemic modesty, but if you are epistemically modest, then this will undermine your own personal views that aren't (near-)consensus (among the informed). For example, you should give more weight to nonconsequentialist views.
By what standard are you judging it to be crazy? I don't think the view that there are no good states is crazy, and I'm pretty sympathetic to it myself. The view that it's good to create beings for their own sake is totally unintuitive to me (although I wouldn't call it or really any other view crazy).
How I would personally deal with your hypothetical under the kind of person-affecting views to which I'm sympathetic is this:
We don't have reason to press the first button if we'd expect to later undo the welfare improvement of the original person with the second button. This sequence of pressing both isn't better on person-affecting intuitions than doing nothing. When you reason about what to do, you should, in general, use backwards induction and consider what options you'll have later and what you'd do later.
If you don't use backwards induction, you will tend to do worse than otherwise and can be exploited, e.g. money pumped. This is true even for total utilitarians.
Congratulations on the publication!
FWIW, I don't find the denial of Sequential Desirability very counterintuitive, if and when it's done for certain person-affecting reasons, precisely because I am quite sympathetic to those person-affecting reasons. The discussion in the comments here seems relevant.
Also, a negative utilitarian would deny the coherence of Generative Improvement, because there's no positive utility. You could replace it with an improvement together with creating a person with exactly 0 utility, or with utility less than the improvement. But from there, Modification Improvement is not possible.
I wonder if the effect is just too small to detect through the noise and other trends and shocks in animal product consumption, but it still exists and is still big in absolute terms for animals.
1.3 million people in Great Britain going vegan for the first time in January 2019 (source1, source2) out of 65 million people living in Great Britain is only about 2% of the population of Great Britain,[1] so we'd expect at most a 2% negative demand shift on meat purchases. Those going vegan were also probably eating less meat on average, so 2% would be an overestimate. And then you also have to adjust for elasticities, so the actual effect on meat production or sales would probably be <1%. That could be hard to detect, depending on how large differences in meat sales year-over-year and December to January typically are.
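As a rough check on that arithmetic (a minimal sketch; the figures are just the ones cited above, and the rounding is mine):

```python
# Rough check of the ~2% upper bound described above, using the cited figures.
new_vegans = 1.3e6        # people in GB reportedly going vegan for the first time, Jan 2019
gb_population = 65e6      # approximate population of Great Britain
max_demand_shift = new_vegans / gb_population
print(f"{max_demand_shift:.1%}")  # -> 2.0%, before adjusting for their lower baseline
                                  #    meat consumption and for elasticities
```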
On the other hand, they report 1.31 million as 4.7% of the total GB adult population, but this seems wrong to me:
They came to the conclusion that 1.31 million people gave up animal products in Britain during January 2019 – that’s 4.7% of the total GB adult population and ten times the number of UK sign ups through the Veganuary website during the same time.
This would imply a GB adult population of 1.31 million / 0.047 = 27.9 million. But there were fewer than 16 million people under 18 in the UK in 2023 so GB should have an adult population of at least around 65 million (all GB) - 16 million (UK non-adults) = 49 million. I don't see how they got 4.7%.
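Spelling out the arithmetic behind the discrepancy (same numbers as above; the rounding is mine):

```python
# The GB adult population implied by the article's 4.7% figure, vs. a rough
# lower bound on the actual GB adult population, using the numbers quoted above.
new_vegans = 1.31e6
reported_share = 0.047
implied_gb_adults = new_vegans / reported_share
print(f"{implied_gb_adults / 1e6:.1f} million")   # -> 27.9 million implied GB adults

gb_total = 65e6        # approximate GB population
uk_under_18 = 16e6     # under-18s in the whole UK, so an overcount for GB alone
lower_bound_adults = gb_total - uk_under_18
print(f"{lower_bound_adults / 1e6:.0f} million")  # -> at least ~49 million GB adults
```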
Also, this January spiking pattern had been most clear and pronounced in the UK, where Veganuary has been most active (from the Economist):
I don't personally endorse Dennett's view on this, and I give to animal causes. I think it is a big mistake to be so sure of his view that you ignore the risk of animal suffering entirely, and I don't think we can just assume that animals can't be introspectively aware of their own experiences.
FWIW, Dennett ended up believing chickens, octopuses and bees are conscious, anyway. He was an illusionist, but I think his view, like Keith Frankish's, was not that an animal literally needs to have an illusion of phenomenal consciousness or be able to introspect to be conscious in a way that matters. The illusions and introspection just explain why we humans believe in phenomenal consciousness, but first-order consciousness still matters without them.
And he was a gradualist. He thought introspection and higher-order thoughts made for important differences and was skeptical of them in other animals (Dennett, 2018, p.168-169). I don't know how morally important he found these differences to be, though.
The brain modelling itself as having phenomenal properties would (partly) explain why people believe consciousness has phenomenal properties, i.e. that consciousness is phenomenal. In fact, you model yourself as having phenomenal properties whether or not illusionism is true, if it seems to you that you have phenomenal consciousness. That seeming, or appearance, has to have some basis in your brain, and that is a model.
Illusionism just says there aren't actually any phenomenal properties, so their appearance, i.e. their seeming to exist, is an illusion, and your model is wrong.
Illusionism dissolves the hard problem because, under illusionism, phenomenal consciousness doesn't exist: consciousness has no phenomenal properties. And we have a guide to solving the meta-problem under illusionism and verifying our dissolution of the hard problem:
On the other hand, saying consciousness just is information integration and denying phenomenal properties together would indeed also dissolve the hard problem. Saying phenomenal consciousness just is information integration would solve the hard problem.
But both information integration accounts are poorly motivated, and I don't think anyone should give much credence to either. A good (dis)solution should be accompanied by an explanation of why many people believe consciousness has phenomenal properties, and so should solve the meta-problem, or at least give us a path to solving it. I don't think this would happen with (phenomenal) consciousness as mere information integration. Why would information integration, generically, lead to beliefs in phenomenal consciousness?
There doesn't seem to be much logical connection here. Of course, beliefs in phenomenal consciousness depend on information integration, but very few instances of information integration seem to have any connection to such beliefs at all. Information integration is nowhere close to a sufficient explanation.
And this seems to me to be the case for every attempted solution to the hard problem I've seen: they never give a good explanation for the causes of our beliefs in phenomenal consciousness.
Why would consciousness (or moral patienthood) require having a self-model?
From my comment above:
But to elaborate, the answer is illusionism about phenomenal consciousness, the only (physicalist) account of consciousness that seems to me to be on track to address the hard problem (by dissolving it and saying there are no phenomenal properties) and the meta-problem of consciousness. EDIT: To have an illusion of phenomenal properties, you have to model those phenomenal properties. The illusion is just the model, aspects of it, or certain things that depend on it. That model is (probably) some kind of model of yourself, or aspects of your own internal processing, e.g. an attention schema.
To prevent any misunderstanding, illusionism doesn't deny that consciousness exists in some form; it just denies that consciousness is phenomenal, or that there are phenomenal properties. It also denies the classical account of qualia, i.e. ineffable and so on.
I'm guessing there isn't much more we can gain by discussing further, and we'll have to agree to disagree. I'll just report my own intuitions here and some pointers, reframing things I've already said in this thread and elaborating.
It's useful to separate the outcomes from the actions here. Let's label the outcomes:
Nothing: the result of pressing neither button.
A: Bob getting an extra 1 util and Todd being created with 1 util, the result of only button 1 being pressed.
B: Todd being created with 3 utils, the result of both buttons being pressed.
On my person-affecting intuitions, I'd rank the outcomes as follows (using a different betterness relation for each set of outcomes, violating the independence of irrelevant alternatives but not transitivity):
Now, I can say how I'd act, given the above.
If I already pressed button 1 and Nothing is no longer attainable, then we're in case 2, so pressing button 2 and so pressing both buttons is better than only pressing button 1, because it means choosing B over A.
If starting with all three options still available, and I expect with certainty that if I press button 1, Nothing will no longer be available and I will then press button 2 (say, because I know I will follow the rankings in the previous paragraph at that point), then the outcome of pressing button 1 is B, by backward induction. Then I would be indifferent between pressing button 1 and getting outcome B, and not pressing it and getting Nothing, because B ~ Nothing.[1]
If starting with all three options still available, and for whatever reason I think there's a chance I won't press button 2 if I press button 1, then using statewise dominance reasoning:
Similarly if I'm not 100% sure that button 2 will actually even be available after pressing button 1.
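To make the backward induction above concrete, here is a minimal sketch of the certainty case (it doesn't cover the statewise dominance reasoning). The only rankings it encodes are the ones stated in this thread (B > A once Nothing is off the table; B ~ Nothing otherwise); the numeric scores, the assumption that A is not better than Nothing, and the helper names are all illustrative.

```python
# A minimal sketch of the backward induction described above (certainty case only).
# The menu-dependent rankings are the ones stated in this thread; the numeric
# scores and the assumption that A is not better than Nothing are illustrative.

def rank(menu):
    """A hypothetical betterness ordering (higher = better) that depends on
    which outcomes are still attainable, violating IIA but not transitivity."""
    if menu == {"A", "B"}:                   # button 1 already pressed, Nothing unattainable
        return {"B": 1, "A": 0}              # B > A
    return {"Nothing": 1, "B": 1, "A": 0}    # full menu: B ~ Nothing (both assumed >= A)

def best_options(menu):
    scores = rank(menu)
    top = max(scores[o] for o in menu)
    return {o for o in menu if scores[o] == top}

# Second decision node: button 1 has been pressed, so the menu is {A, B}.
later_choice = best_options({"A", "B"})          # -> {"B"}: I'd press button 2

# First decision node, folding back: pressing button 1 effectively yields B,
# not pressing yields Nothing, and I'm indifferent between them (B ~ Nothing).
print(best_options({"Nothing"} | later_choice))  # -> {"Nothing", "B"}
```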
My intuitions are guided mostly by something like the (actualist[2]) object interpretation and participant model of Rabinowicz and Österberg (1996)[3] and backward induction.
We might say I'm in case 3 here, because I've psychologically ruled out A knowing I'd definitely pick B over A. But B ~ Nothing whether we're in case 3 or case 4.
For more on actualism as a population ethical view, see Hare (2007) and Spencer (2021). I'm developing my own actualist(-ish) view, close to weak actualism in those two papers. I'm also sympathetic to Thomas (2019) and Pummer (2024).
Rabinowicz and Österberg (1996) write:
and