Hi Karthik, thanks for writing this! I appreciate the precision; I wish I saw more content like this. But if you’ll allow me to object:
I feel there’s a bit of tension between your stating that “I don't think we should sidestep the philosophical aspect of this debate” and your later conclusion that “Worldview diversification is a useful and practical way for the EA community to make decisions.” Insofar as we are interested in a normatively satisfying foundation for diversifying donations (as a marginal funder), one would presumably need an argument in favour of something like minimax regret or risk aversion, on altruistic grounds.
Your results are microfoundations, as you write. Similarly, risk-lovingness microfounds the prediction that a person will buy lottery tickets with negative expected monetary value. But that doesn’t imply that the person should do so. Likewise with risk aversion and diversification.
I think the important question here is whether e.g. risk aversion w.r.t. value created is reasonable.
In economics we’re used to treating basically any functional form for utility as permissible, so questioning risk aversion may seem strange; but here we’re thinking about normative ethics rather than consumption choices. While it seems natural to exhibit diminishing marginal utility in consumption (and hence risk aversion), it’s considerably stranger to say that one values additional wellbeing less the more lives have already been benefited. After all, the new beneficiary values it just as much as the previous one did, and altruism is meant to be about them.
Here’s a thought experiment that brings out the counter-intuitiveness. Suppose you could pick either (i) a lottery giving a 60% chance of helping two people and a 40% chance of helping nobody, or (ii) a 100% chance of helping just one of those two people, with the lucky person chosen by a fair coin. Then sufficient risk aversion will lead you to choose (ii), even though all potential beneficiaries prefer (i): each is helped with probability 60% under (i) but only 50% under (ii).
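To make the arithmetic explicit, here's a minimal worked version; the square-root utility is just an illustrative choice of concave function, not anything from your post. Writing $u$ for the evaluator's utility in the number of people helped, with $u(0)=0$:

$$\mathbb{E}[u \mid \text{(i)}] = 0.6\,u(2), \qquad \mathbb{E}[u \mid \text{(ii)}] = u(1).$$

With $u(x)=\sqrt{x}$, option (i) yields $0.6\sqrt{2}\approx 0.85 < 1$, so the risk-averse evaluator picks (ii); yet each person's chance of being helped is $0.6$ under (i) and only $0.5$ under (ii).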
These aren’t meant as particularly good arguments for risk neutrality w.r.t. terminal value; just pointers to the kind of considerations I think are more relevant to assessing the reasonableness of altruistically motivated funding diversification. There are others too, though (example: phil version / econ version; accessible summary here).
Nodding profusely while reading; thanks for the rant.
I'm unsure if there's much disagreement left to unpack here, so I'll just note this:
This was helpful; I agree with most of the problems you raise, but I think they're objecting to something a bit different than what I have in mind.
Agreement: 1a, 1b, 2a
Differences: 2b, 3a, alternatives
The mechanism I have in mind is a bit nebulous. It's in the vein of my response to (2a): creating intellectual precedent, making odd ideas seem more normal, and so on, to create an environment (e.g., in politics) more receptive to proposals and collaboration. This doesn't have to work through widespread understanding of the topics. One (unresearched) analogue might be antibiotic resistance. People in general, including myself, know next to nothing about it, but this weird concept has become respectable enough that when a policymaker Googles it, they know it's not just some kooky fear that nobody outside strangely named research centres worries about or respectfully engages with.
Enjoyed the post but I'd like to mention a potential issue with points like these:
> I’m skeptical that we should give much weight to message testing with the “educated general public” or the reaction of people on Twitter, at least when writing for an audience including lots of potential direct work contributors.

> I think impact is heavy-tailed and we should target talented people with a scout mindset who are willing to take weird ideas seriously.
I would put nontrivial weight on this claim: the support of the general public matters a lot in TAI worlds, e.g., during 'crunch time' or when trying to handle value lock-in. If this is true and WWOTF helps achieve such support, that can justify writing a book that focuses less on people who are already prone to react in the ways we typically associate with a scout mindset. Increasing direct work in the usual sense is one thing to optimise for; another is creating an environment receptive to proposals from, and cooperation with, those who do direct work.
So although I understand that you're not making strong claims about other groups like the general public or policymakers, I think it's worth mentioning that "I'd rather recommend The Precipice to people who might do impactful work" and "WWOTF should have been written differently" are importantly distinct claims.
Reading this post reminded me of someone whose work may be interesting to look into: Rufus Pollock, a former academic economist who founded the Open Knowledge Foundation. His short book (freely available here) makes the case for replacing traditional IP, like patents and copyright, with a novel kind of remuneration. The major benefits he mentions include increasing innovation and creativity in art, science, technology, etc.
Thanks for the thoughtful reply!
I understand you don't want to debate risk attitudes, but I hope it's alright that I expand on my thought just a bit to make sure I get it across well; no need to respond.
To be clear: I think risk aversion is entirely fine. My utility in apples is concave, of course. That's not really up for 'debate'. Likewise for other consumption preferences.
But ethics seems different. Philosophers debate what's permissible, mandatory, etc. in the context of ethics (not so much in the context of consumption). The EA enterprise is partly a result of this.
And choosing between uncertain altruistic interventions is of course in part a problem of ethics. Risk preferences w.r.t. wellbeing in the world make moral recommendations independently of empirical facts. This is why I see them as more up for debate. (Here's a great overview of such debates.)
We often argue about the merits of ethical views under certainty: should our social welfare function concavify individual utilities before adding them up (prioritarianism) or not (utilitarianism)? Similarly, under uncertainty, we may ask: should our social welfare function concavify the sum of individual utilities (moral risk aversion) or not (moral risk neutrality)?
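In symbols (my notation, not from your post): with individual utilities $u_i$ and a strictly concave $f$, the certainty question is

$$W_{\text{util}} = \sum_i u_i \quad \text{vs.} \quad W_{\text{prior}} = \sum_i f(u_i),$$

and the uncertainty question is

$$V_{\text{neutral}} = \mathbb{E}\Big[\sum_i u_i\Big] \quad \text{vs.} \quad V_{\text{averse}} = \mathbb{E}\Big[f\Big(\sum_i u_i\Big)\Big].$$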
These are the sorts of questions I meant were relevant; I agree risk aversion per se is completely unproblematic.
By the way, this is irrelevant to the methodological point above, but I'll point out the interesting fact that risk aversion alone doesn't get rid of the St Petersburg paradox:
$$\sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^n \times 2^n = \infty.$$
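To spell out why concavity alone doesn't help (this is the standard 'super-St Petersburg' construction, usually attributed to Menger, not something from this thread): for any unbounded increasing $u$, replace the payoff $2^n$ with $x_n = u^{-1}(2^n)$. The modified lottery then has expected utility

$$\sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^n u(x_n) = \sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^n \times 2^n = \infty,$$

so escaping the paradox requires bounded utility or some other departure from expected-utility maximisation, not merely risk aversion.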