So here's a reply to that philosopher's scenario, which I have yet to hear any philosopher's victim give:
"You stipulate that the only possible way to save five innocent lives is to murder one innocent person, and this murder will definitely save the five lives, and that these facts are known to me with effective certainty. But since I am running on corrupted hardware, I can't occupy the epistemic state you want me to imagine. Therefore I reply that, in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree. However, I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings.
I think this is the standard reply to the repugnant conclusion; the passage above is from Yudkowsky's Ends Don't Justify Means (Among Humans) (emphasis mine).
I.e., the repugnant conclusion tells us far more about how human cognition works than about how consequentialism fundamentally works.