I'm very much aligned with the version of utilitarianism that Bostrom and Ord generally put forth, but a question came up in a conversation regarding this philosophy and its view of sustainability. As a thought experiment: what would be consistent with this philosophy if we discovered that a very clear way to minimize existential risk due to X required a genocide of half, or some other significant subset, of the population?
Hi Jose,
Bostrom and Ord do not put forth any version of utilitarianism. Bostrom isn't even a consequentialist, let alone a utilitarian. Both authors take moral uncertainty seriously. (Ord defends a version of global consequentialism, but not in the context of arguing for prioritizing existential risk reduction.) Nor does concern for existential risk reduction presuppose a particular moral theory. See the ethics of existential risk.
Separately, the dilemma you raise isn't specific to existential risk reduction. For example, one can also describe imaginary scenarios in which trillions and trillions of sentient beings exploited for human consumption could be spared lives filled with suffering only if we do something horrendous to innocent people. And all reasonable moral theories, not just utilitarianism, must grapple with these dilemmas.
Maybe it'd be good to link from here to this collection of relevant things I've written, since some are shortforms or are on LessWrong (and thus I can't just give them the The Precipice tag).
But I feel squeamish about unilaterally adding mention of my own stuff too much, so I'll let someone else decide, or maybe return and do it later if it still seems like a good idea to me then.
Just saw this—added.