I also left a comment related to neuron counts and measures like experience size in point 2 here.
My counter-assumption here is that where humans display anxiety, fear or self-protective behavior, both the behaviors themselves and the corresponding experience are likely to be more intense or bigger sized than a pig, chicken or shrimp that exhibits these behaviors[8].
What do you have in mind by "more intense or bigger sized"?
FWIW, here's how I would probably think about it:
Disabling. Pain at this level takes priority over most bids for behavioral execution and prevents most forms of enjoyment or positive welfare. Pain is continuously distressing. Individuals affected by harms in this category often change their activity levels drastically (the degree of disruption in the ability of an organism to function optimally should not be confused with the overt expression of pain behaviors, which is less likely in prey species). Inattention and unresponsiveness to milder forms of pain or other ongoing stimuli and surroundings is likely to be observed. Relief often requires higher drug dosages or more powerful drugs. The term Disabling refers to the disability caused by ‘pain’, not to any structural disability.
Excruciating. All conditions and events associated with extreme levels of pain that are not normally tolerated even if only for a few seconds. In humans, it would mark the threshold of pain under which many people choose to take their lives rather than endure the pain. This is the case, for example, of scalding and severe burning events. Behavioral patterns associated with experiences in this category may include loud screaming, involuntary shaking, extreme muscle tension, or extreme restlessness. Another criterion is the manifestation of behaviors that individuals would strongly refrain from displaying under normal circumstances, as they threaten body integrity (e.g. running into hazardous areas or exposing oneself to sources of danger, such as predators, as a result of pain or of attempts to alleviate it). The attribution of conditions to this level must therefore be done cautiously. Concealment of pain is not possible.
Future debate week topics?
I like the idea of going through cause prioritization together on the EA Forum.
I never found psychological hedonism (or motivational hedonism) very plausible, but I think it's worth pointing out that the standard version — according to which everyone is ultimately motivated only by their own pleasure and pain — is a form of psychological egoism and seems incompatible with sincerely being a hedonistic utilitarian or caring about others and their interests for their own sake.
From https://www.britannica.com/topic/psychological-hedonism :
Psychological hedonism, in philosophical psychology, the view that all human action is ultimately motivated by desires for pleasure and the avoidance of pain. It has been espoused by a variety of distinguished thinkers, including Epicurus, Jeremy Bentham, and John Stuart Mill, and important discussions of it can also be found in works by Plato, Aristotle, Joseph Butler, G.E. Moore, and Henry Sidgwick.
Because its defenders generally assume that agents are motivated only by the prospect of their own pleasures and pains, psychological hedonism is a form of psychological egoism.
More concretely, a psychological hedonist who cares about others, but only insofar as it affects how they themselves feel, would prefer never to find out that they've caused harm or are doing less good than they could, if finding out wouldn't make them (eventually) feel better overall. They don't actually want to do good; they just want to feel like they're doing good. Ignorance is bliss.
They could be more inclined to get in or stay in an experience machine, knowing they'd feel better even if it meant never actually helping anyone else.
That being said, they might feel bad about it if they know they're in, or would be in, an experience machine. So, they might refuse the experience machine by following their immediate feelings and ignoring the fact that they'd feel better overall in the long run. This kind of person seems practically indistinguishable from someone who sincerely cares about others, but does so through, and on the basis of, their feelings.
I favour animal welfare, but some (near-term future) considerations I'm most sympathetic to that could favour global health are:
People probably just have different beliefs/preferences about how much their own suffering matters, and those preferences are plausibly not interpersonally comparable at all.
Some people may find it easier to reflectively dismiss or discount their own suffering than others for various reasons, like particular beliefs or greater self-control. If interpersonal comparisons are warranted, it could just mean these people care less about their own suffering in absolute terms on average, not that they care more about other things than average. Other animals probably can't easily dismiss or discount their own suffering much, and their actions follow pretty directly from their suffering and other felt desires, so they might even care more about their own suffering in absolute terms on average.
We can also imagine moral patients with conscious preferences who can't suffer at all, so we'd have to find something else to normalize by to make interpersonal comparisons with them.
I discuss interpersonal comparisons more here.
My sequence might also be helpful. I didn't come up with too many directly useful estimates, but I looked into implications of desire-based and preference-based theories for moral weights and prioritization, and I would probably still prioritize nonhuman animals on such views. I guess most importantly:
The quantity of attention, in roughly the most extreme case in my view, could scale proportionally with the number of (relevant) neurons, so humans would have, as a first guess, ~400 times as much moral weight as chickens. OTOH, I'd actually guess there are decreasing marginal returns to additional neurons, e.g. it could scale more like with the logarithm or the square root of the number of neurons. And it might not really scale with the number of neurons at all.
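To make the scaling point concrete, here's a minimal sketch in Python (with hypothetical whole-brain neuron counts of roughly 86 billion for humans and 220 million for chickens, approximately the figures behind the ~400x ratio above; which neurons are actually "relevant" is itself uncertain), comparing the implied human:chicken weight ratios under proportional, square-root and logarithmic scaling:

```python
import math

# Hypothetical neuron counts, roughly consistent with the ~400x ratio above.
NEURONS = {"human": 86e9, "chicken": 220e6}

def weight_ratio(scaling):
    """Human-to-chicken moral weight ratio under a given scaling of neuron counts."""
    return scaling(NEURONS["human"]) / scaling(NEURONS["chicken"])

for name, fn in [("proportional", lambda n: n),
                 ("square root", math.sqrt),
                 ("logarithm", math.log)]:
    print(f"{name:>12}: humans ~{weight_ratio(fn):.1f}x chickens")

# proportional: ~390.9x, square root: ~19.8x, logarithm: ~1.3x
```

The choice of scaling does most of the work here: the ratio collapses from roughly 400x under proportional scaling to about 20x under square-root scaling, and to near parity under logarithmic scaling.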
I think we can put some reasonable bounds on our uncertainty and ranges, and they can tell us some useful things. Or, at least, I can according to my own intuitions, and I end up prioritizing animal welfare this way.
Also, I've argued here that uncertainty about moral weights actually tends to further favour prioritizing nonhumans.
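As a toy illustration of one way uncertainty can push in that direction (hypothetical numbers and a particular choice of normalization; the linked argument is more careful): if human welfare is held fixed as the unit and we take an expectation over hypotheses about chickens' moral weight, the expectation tends to be dominated by the hypotheses on which chickens matter a lot.

```python
# Hypothetical probability distribution over chickens' moral weight,
# measured in human-equivalents (human welfare fixed as the unit).
hypotheses = [
    (0.5, 0.001),  # (probability, weight): chickens barely matter
    (0.4, 0.05),   # chickens matter somewhat
    (0.1, 1.0),    # chickens matter about as much as humans
]

expected_weight = sum(p * w for p, w in hypotheses)
print(f"Expected chicken moral weight: {expected_weight:.3f} human-equivalents")
# ~0.12, well above the middle hypothesis of 0.05: the expectation is pulled up
# by the small-probability scenario in which chickens matter a lot, so
# expected-value reasoning under this normalization leans further toward nonhumans.
```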
You could also just replace everyone with beings with (much) more satisfied preferences on aggregate. Replacement, or otherwise killing everyone against their preferences, can be an issue for basically any utilitarian or consequentialist welfarist view that isn't person-affecting, including symmetric total preference utilitarianism.