I teach and research philosophy at the University of Bristol in the UK. My research areas include Bayesian epistemology, the foundations of statistics, rational choice theory, and social epistemology (and, longer ago, the philosophy and foundations of mathematics). I've written about accuracy-centred approaches to epistemology, decision-making in the face of transformative experience, and the debate between subjective and objective Bayesianism.
I'm particularly interested in the ways in which rational choice theory and its conclusions affect the arguments for longtermism, but I'm not deeply immersed in the details of those arguments, so I'd appreciate help understanding them.
Reach out if you have any questions about Bayesian epistemology or rational choice theory. I'd be delighted to share any expertise I have.
Thanks for the comments, Richard!
On (1): the standard response here is that this won't work across the board, because of cases like the Allais preferences. In that case, there is simply no way to assign utilities to the outcomes such that ordering by expected utility recovers the Allais preferences. So, while the Sheila case is a simple way to illustrate risk aversion, the phenomenon is much broader, and there are cases in which the diminishing marginal utility of pleasure won't account for our intuitive responses.
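To spell that out with the usual Allais payoffs (the textbook figures; only the structure matters): option 1A gives a prize of 1 million for certain; 1B gives 5 million with probability 0.10, 1 million with probability 0.89, and nothing with probability 0.01; 2A gives 1 million with probability 0.11 and nothing with probability 0.89; 2B gives 5 million with probability 0.10 and nothing with probability 0.90. The common pattern is to prefer 1A to 1B but 2B to 2A. For any utility function $u$ whatsoever, preferring 1A to 1B requires (after cancelling $0.89\,u(1\mathrm{M})$ from both sides)

$$0.11\,u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.01\,u(0),$$

while preferring 2B to 2A requires (after cancelling $0.89\,u(0)$)

$$0.10\,u(5\mathrm{M}) + 0.01\,u(0) > 0.11\,u(1\mathrm{M}).$$

These inequalities contradict one another, whatever $u$ is and however steeply it diminishes, so no utility assignment delivers both preferences under expected utility.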
On (2): you might be able to do something like this, but it seems a strange thing to build into the axiology. Why should benefits to Bob contribute less to the goodness of a situation just because of the risk attitudes he happens to have?
On the main objection: I think you're probably right about the response many people would have to this question, but that's equally true if you ask them: 'Should we do something that increases the probability of our billion-year existence by 1 in 10^14 rather than saving a million lives right now?' Expected utility theory comes out as pretty unintuitive when we're thinking about long-term scenarios too; it's not just a problem for Buchak. And, in any case, the standard response from ordinary people might reflect the fact that they're not total hedonist utilitarians more than the fact that they're not Buchakians.
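To see just how unintuitive, put some illustrative numbers on it (they're made up, though of the scale longtermists discuss): if the billion-year future would contain $10^{24}$ lives, then expected utility theory values the probability boost at

$$10^{-14} \times 10^{24} = 10^{10}$$

lives in expectation, ten thousand times the million lives saved for certain, and so it tells us to take the gamble.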
Thanks for the comment, Alejandro, and apologies for the delay in responding -- a vacation and then a bout of Covid.
On the first: yes, it's definitely true that there are other theories besides Buchak's that capture the Allais preferences, and a regret-averse theory that incorporates regret into the utilities would be one of them. So we'd have to take a particular regret-averse theory, find cases in which it and Buchak's theory come apart, and see what we make of them. Buchak herself does offer an explicit argument for her theory that goes beyond a mere appeal to the intuitive response to Allais, but it's quite involved and might not have the same force.
On the second: thanks for the link!
On the third: I'm not so sure about the analogy with Zero. It's true that Sheila needn't defer to Zero's axiology, since we might think that which axiology is correct is a matter of fact, so Sheila might simply think she's right and Zero is wrong. But risk attitudes aren't like that: they're not matters of fact, but something more like subjective preferences. That said, I can see that it's a consistent moral view to be a total hedonist utilitarian and hold that you have no obligation to take the risk attitudes of others into account. I just don't think it's the correct moral view, even for someone whose axiology is total hedonist utilitarian. For them, that axiology should supply the utility function for moral decisions, but the risk attitudes should be supplied in the way the Risk Principle* suggests. I'm not sure how to adjudicate this, though.
Thanks, Gavin! I look forward to your comment! I'm new to the forum, and didn't really understand the implications of the downvotes. But in any case, yes, the post is definitely meant in good faith! It's my attempt to grapple with a tension between my philanthropic commitments and my views on how risk should be incorporated in both prudential and moral decision-making.
Yes, it's reasonably sensitive to this, though as you increase the level of risk aversion, extinction also wins out for lower and lower probabilities of lm. It's really a tradeoff between those two.
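To make the tradeoff concrete, here's a minimal sketch of Buchak's risk-weighted expected utility with a power risk function. The utilities, probabilities, and choice of risk function are all illustrative stand-ins, not the numbers from my model:

```python
# Minimal sketch of Buchak's risk-weighted expected utility (REU).
# All utilities, probabilities, and the risk function are illustrative
# placeholders, not figures from the model in the post.

def reu(outcomes, r):
    """REU of a gamble. outcomes: (utility, probability) pairs;
    r: the risk function, with r(0) = 0, r(1) = 1, increasing."""
    outs = sorted(outcomes)                        # order outcomes worst to best
    total = outs[0][0]                             # start from the worst utility
    for i in range(1, len(outs)):
        p_at_least = sum(p for _, p in outs[i:])   # prob. of doing at least this well
        total += r(p_at_least) * (outs[i][0] - outs[i - 1][0])
    return total

# Continued existence: a miserable long future (utility -1) with probability p,
# a flourishing one (utility +1) otherwise. Extinction: utility 0 for sure.
# r(q) = q**k with k > 1 is risk averse; k = 1 recovers expected utility.
for k in (1, 2, 4):
    r = lambda q, k=k: q ** k
    for p in (0.05, 0.2, 0.4):
        existence = [(-1.0, p), (1.0, 1.0 - p)]
        verdict = "existence" if reu(existence, r) > 0 else "extinction"
        print(f"k={k}, p={p}: {verdict}")
```

Here existence beats extinction exactly when $r(1-p) > 1/2$, i.e. when $p < 1 - 2^{-1/k}$: the threshold is 0.5 at $k=1$, about 0.29 at $k=2$, and about 0.16 at $k=4$, so greater risk aversion lets extinction win at ever lower probabilities of the miserable future.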
On your concerns about the probability of lm: I think people very often don't commit suicide even when their life falls below the level at which it's worth living. This might be because of optimism about the future, or connection to others and the feeling of obligation towards them, or because of an instinct for survival.