Thanks! This is great context and a great way to ask for specifics. :-)
I think the situation is like this: I'm hypothetically in a position to exercise a lot of power over reproductive choices -- perhaps by backing tax plans which either reward or punish having children. I think what you're asking is "suppose you know that your plan to offer a child tax credit will result in a miserable population, should you stay with the plan because there'll be so many miserable people that it'll be better on utilitarian grounds"? The answer is no, I should not do that. I shouldn't exercise power I have to make a world which I believe will contain a lot of miserable people.
I think a better power-inversion question is: "suppose you are given dictatorial control of one million miserable and hungry people. Should you slaughter 999,000 of them so the other 1,000 can be well fed and happy?" My answer is, again, unsurprisingly, No. No, I shouldn't use dictatorial power to genocide this unhappy group. Instead I should use it to implement policies I think will lead over time to a sustainable 1,000-member happy population, perhaps via the same kind of anti-natalist policies that would be abhorrent in other, happier circumstances.
Here's a suspicion I think I share with you: consequentialism's advice is imperfect. My sense is that it is imperfect mostly not for unfamiliar galactic-scale reasons or other failures to handle odd situations involving unbelievably powerful political forces. If that's where it broke down, it would be mostly immaterial to considering alternatives to consequentialism in everyday situations (IMO).
This repeats a bit of what I replied below, but it isn't clear to me that minimally-conscious beings can't suffer (or could be made unable to suffer).
On the relatively more stable ground of the power to choose between a world optimized for insects vs. one optimized for humans, I'm happy to report I'm a humanity partisan. :-)
In theory-of-mind terms, it sounds like we differ in estimating the likelihood that insects will come to be thought of as having conscious experience as we learn more. (For other invertebrates, I think the analysis may be very different.) Given the extraordinary capabilities of really-clearly-not-conscious ML systems, my sense is that pretty sophisticated behaviors are well within reach for unconscious organisms, more so than I might have thought a few years ago.
More like the former.
In other words, the moral weight of the choice we're asked to make is about the use of power. A familiar example that works better, because the power being exercised is much clearer, is the drowning child. The power here is the ability to go into the pond and rescue the child. Should one exercise that power, or are there reasons not to?
The powers being appealed to in these population-ethics scenarios are truly staggering. The question of how they should be used is (in my opinion) usually ignored in favor of treating them as preferences over states of affairs. I suspect this is why they end up being confusing--when you instead ask whether setting up forced reproduction camps is a morally acceptable use of the power to craft an arbitrary society, there's just very little room for moral people to disagree anymore.
As for creating large numbers of beings that are unable to experience any negative utility and can experience only small amounts of positive utility, it isn't clear this power can even exist logically. (The same might be said about enforcing pan-galactic totalitarianism, but repugnant-conclusion effects IMO start being noticeable at scales where we can be quite sure the power does exist.)
If the power to create such beings exists, it implies a quite robust power to shape the minds and experiences of created beings. If it were used to prohibit the existence of beings with tremendous capacity for pleasure, I think that would be an immoral application. Another scenario, though, might be the creation of large numbers of minimally-sentient beings who (in some sense) mildly "enjoy" being useful and supportive of such high-experience people. Do toasters and dishwashers and future helpful robots qualify here? It depends on what kind of panpsychism ends up being true for hypothetical people with this kind of very advanced mind-design power. I could see it being true that such a world is possible, but I think the framing in terms of power exercise removes the repugnance from the situation as well. Is a world of leisure supported by minimally-aware robots repugnant? Nah, not really. :-)
Another approach to thinking about these difficulties is to take counsel from the Maxwell's demon problem in thermodynamics. There, it looks like you can reach a "repugnant conclusion" in which the second law is violated if you don't address the details of the demon directly and carefully.
I suspect there is a corresponding gap in analyses of situations at the edges of population ethics. Call it the "repugnant demon": in this hypothetical world full of trillions of people we're being asked to create, what powers do we have to bestow on the demon responsible for enforcing barely livable conditions? These trillions of people want better lives; otherwise, by definition, they would not be suffering. So the demon must be given the power to prevent them from having those improved lives. How?
Pretty clearly, what we're actually being asked is whether we want to create a totalitarian, autocratic, transgalactic prison state with total domination over its population. Is such a society one you wish to create, or would you prefer to use the demon-scale power it would take to produce this result in a different way?
A much smaller-scale check here is whether it is good to send altruistic donations to existing autocratic rulers. Their populations are not committing suicide, so (the argument goes) the people must have positive life utility. The dictator can force the population to increase, so the implementation here would be finding dictators who will accept altruistic donations in exchange for setting up forced-birth camps in their countries.
In other words, I suspect that when you finish defining in detail what "repugnant demon" powers have to be created to build awful conditions for even comparatively small populations, it becomes immediately clear where the "missing negative utility" is in these cases: the powers required to produce conditions of very low satisfaction are themselves very large. Using those powers for the evil act of setting up a totalitarian prison camp, instead of for a different and morally preferable society, is to be condemned.
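To make the bookkeeping concrete, here is a toy sketch of that "missing negative utility" claim. Every number in it is my own assumption for illustration (the population sizes, the utility levels, and especially the per-person suppression cost), not anything derived from the literature:

```python
# Toy numbers only; all values are illustrative assumptions.

A_POP, A_UTIL = 1e9, 100.0   # world A: modest population, good lives
Z_POP, Z_UTIL = 1e12, 0.5    # world Z: trillions at barely-livable utility

total_A = A_POP * A_UTIL               # 1e11
total_Z_naive = Z_POP * Z_UTIL         # 5e11 -- the "repugnant" win for Z

# World Z only stays at 0.5 if the demon actively blocks each person
# from improving their life. Book that suppression as even a modest
# per-person disutility (assumed value) and the ledger flips:
SUPPRESSION_COST = 1.0
total_Z_real = Z_POP * (Z_UTIL - SUPPRESSION_COST)   # -5e11

print(f"world A:                   {total_A:.2g}")
print(f"world Z (demon ignored):   {total_Z_naive:.2g}")
print(f"world Z (demon accounted): {total_Z_real:.2g}")
```

The specific numbers don't matter; the point is that the comparison only looks repugnant while the demon's enforcement costs are left off the ledger.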
A big difference between button 1 (small benefit for someone) and button 1A (small chance of a small benefit for a large number of people) is the kind of system required to produce these outcomes.
Button 1 requires basically a day's worth of investment by someone choosing to give it to another. Button 1A requires... perhaps a million times as much effort? We're talking about the equivalent of passing a national holiday act, which requires an enormous amount of coordination and investment. And the results do not scale linearly at all. That is, a person investing a day's worth of effort to try to pass a national holiday act doesn't have a 10^-8 chance of success. They have a much, much smaller chance. Many, many orders of magnitude less.
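To put rough numbers on that (all of them my own assumptions, purely for illustration):

```python
# Back-of-envelope only; every number here is an illustrative assumption.

PEOPLE = 3e8      # rough count of national-holiday beneficiaries
BENEFIT = 1.0     # one "small benefit" unit per person

# Naive framing: both buttons cost one identical press.
ev_button_1 = 1.0 * BENEFIT              # certain benefit to one person
ev_button_1a = 1e-8 * PEOPLE * BENEFIT   # 3.0 -- so "press 1A" naively wins

# Realistic framing: the buttons hide very different machinery.
# Button 1 is ~a day of effort; button 1A is ~a million person-days of
# coordination, and a single day's push toward a holiday act succeeds
# far less often than 1e-8 -- call it 1e-14 (assumed).
p_success_per_day = 1e-14
ev_per_day_1 = ev_button_1 / 1.0                      # 1.0
ev_per_day_1a = p_success_per_day * PEOPLE * BENEFIT  # 3e-6

print(ev_button_1, ev_button_1a)    # naive framing: 1A looks 3x better
print(ev_per_day_1, ev_per_day_1a)  # per day of effort: 1 wins by ~5+ OOM
```

Again, the exact figures are made up; what matters is that the naive multiplication quietly deletes the million-fold difference in machinery behind the two buttons.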
In other words, the worlds posited by a realistic interpretation of what these buttons mean are completely different, and the world where the button 1A process succeeds is to be preferred by at least six orders of magnitude. That is, the colloquial understanding of the "big" impact is closer to right than the multiplication suggests.
I'm not sure exactly how that affects the overall conclusions, but I think this same dynamic applies to several odd conclusions -- the flaw is that the button is doing much, much more work in some situations than in others described as identical, and that descriptive flaw pumps our intuitions into ignoring those differences rather than addressing them.
Completely agree it is difficult to find "uniquely human" behaviors that seem indicative of consciousness, as animals share so many of them.
For any animals that don't rear young, I'm much more inclined to believe their behaviors are largely genetically determined, and therefore operate on time scales that don't really satisfy what I think makes sense to call consciousness. I'm thinking of the famous Sphex wasp hacks, for instance, where complex behavior turns out to be pretty algorithmic and likely not indicative of anything approximating consciousness. Thanks for the pointer to the report!
WRT AI consciousness, I work on ML systems and have a lot of exposure to sophisticated models. My sense is that we are not close to that threshold, even with sophisticated systems that are obviously able to pass naive Turing tests (and have). We now have a really powerful approach to world-model-building via unsupervised noise prediction, but current techniques (including RL) are just nowhere near enough to provide the kind of interiority that would start me worrying there are conscious elements in AI systems.
IOW, I'm not a "scale is all you need" person -- I don't think current ideas on memory/long-range augmentation or current planning-style long-range state modeling are workable. I mean, maybe at 10^100x scale it is all you need? But that's just sort of another way of saying it isn't. :-) The sort of "self-talk" modularity that some LLMs are being experimented with strikes me as the most promising current direction for this (e.g. the LaMDA paper), but currently the scale and ingredients are way too small for that to emerge, IMO.
I do suspect that building conscious AI will teach us much more about non-verbal-report consciousness. We have some access to these mechanisms through neuroscience experiments, but it is difficult going. My belief is we have enough of them to be quite certain many animals share something best called conscious experience.