SamiPetersen

Comments

Thanks for the thoughtful reply!

I understand you don't want to debate risk attitudes, but I hope it's alright that I expand on my thought just a bit to make sure I get it across well; no need to respond.

To be clear: I think risk aversion is entirely fine. My utility in apples is concave, of course. That's not really up for 'debate'. Likewise for other consumption preferences.

But ethics seems different. Philosophers debate what's permissible, mandatory, etc. in the context of ethics (not so much in the context of consumption). The EA enterprise is partly a result of this.

And choosing between uncertain altruistic interventions is of course in part a problem of ethics. Risk preferences w.r.t. wellbeing in the world make moral recommendations independently of empirical facts. This is why I see them as more up for debate. (Here's a great overview of such debates.)

We often argue about the merits of ethical views under certainty: should our social welfare function concavify individual utilities before adding them up (prioritarianism) or not (utilitarianism)? Similarly, under uncertainty, we may ask: should our social welfare function concavify the sum of individual utilities (moral risk aversion) or not (moral risk neutrality)?
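To make the two contrasts concrete, here is a minimal Python sketch; it's my own formalisation rather than anything from the thread, and it uses a square root as an arbitrary stand-in for the concave transform:

```python
import math

# Under certainty: a profile of individual utilities, e.g. for two people.
def prioritarian_welfare(utils, g=math.sqrt):
    # concavify each individual's utility first, then sum
    return sum(g(u) for u in utils)

def utilitarian_welfare(utils):
    # just sum individual utilities
    return sum(utils)

# Under uncertainty: a lottery as a list of (probability, utils) outcomes.
def morally_risk_averse_value(lottery, f=math.sqrt):
    # concavify the *sum* of utilities in each outcome, then take expectations
    return sum(p * f(sum(utils)) for p, utils in lottery)

def risk_neutral_value(lottery):
    # expectation of the plain sum of utilities
    return sum(p * sum(utils) for p, utils in lottery)
```

The point of the parallel is that both families of views apply a concave transform somewhere; they differ only in whether it is applied inside the sum (to persons) or outside it (to outcomes).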

These are the sorts of questions I meant were relevant; I agree risk aversion per se is completely unproblematic.


By the way, this is irrelevant to the methodological point above, but I'll point out an interesting fact: risk aversion alone doesn't get rid of the St Petersburg paradox. Both of the following lotteries have infinite expected utility (a quick numerical check follows the list):

  • $2^{-n}$ chance of winning $£2^n$ with linear utility: $\sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \infty$.
  • $2^{-n}$ chance of winning $£2^{2^n}$ with log utility: $\sum_{n=1}^{\infty} 2^{-n} \cdot \log\!\big(2^{2^n}\big) = \log 2 \cdot \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \infty$.
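For anyone who wants to check the arithmetic, here's a small sketch (mine, not from the original comment) evaluating partial sums of the two series; both grow without bound as $N$ increases:

```python
import math

def linear_utility_partial(N):
    # prize 2^n with probability 2^-n: each term is exactly 1
    return sum(2**-n * 2**n for n in range(1, N + 1))

def log_utility_partial(N):
    # prize 2^(2^n) with probability 2^-n:
    # log(2^(2^n)) = 2^n * log(2), so each term is exactly log(2)
    return sum(2**-n * 2**n * math.log(2) for n in range(1, N + 1))

print(linear_utility_partial(100))  # 100.0
print(log_utility_partial(100))     # ~69.3 (= 100 * log 2), still diverging
```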

Hi Karthik, thanks for writing this! I appreciate the precision; I wish I saw more content like this. But if you’ll allow me to object:

I feel there’s a bit of tension between your stating that “I don't think we should sidestep the philosophical aspect of this debate” and your later conclusion that “Worldview diversification is a useful and practical way for the EA community to make decisions.” Insofar as we are interested in a normatively satisfying foundation for diversifying donations (as a marginal funder), we would presumably need an argument in favour of something like minimax regret or risk aversion, on altruistic grounds.

Your results are microfoundations, as you write. Similarly, risk lovingness microfounds the prediction that a person will buy tickets for lotteries with negative expected monetary value. But that doesn’t imply the person should do so. Likewise with risk aversion and diversification.

I think the important question here is whether e.g. risk aversion w.r.t. value created is reasonable.

In economics we’re used to treating basically any functional form for utility as permissible, so questioning one may seem strange; but here we’re thinking about normative ethics rather than consumption choices. While it seems natural to exhibit diminishing marginal utility in consumption (hence risk aversion), it’s stranger to say that one values additional wellbeing less the more lives have already been benefited. After all, the new beneficiary values it just as much as the previous one, and altruism is meant to be about them.

Here’s a thought experiment that brings out the counter-intuitiveness. Suppose there are two potential beneficiaries and you can pick either (i) a lottery giving a 60% chance of helping both of them and otherwise nobody, or (ii) a guarantee of helping exactly one of them, with the lucky person chosen by a fair coin. Under (i) each person has a 60% chance of being helped; under (ii), only 50%. So sufficient risk aversion will lead you to choose (ii) even though all potential beneficiaries prefer (i).
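To put numbers on this, here is a small sketch (again my own, with a square root as an illustrative "sufficiently risk-averse" transform over the number of people helped):

```python
import math

f = math.sqrt  # illustrative concave transform over the total number helped

# (i): 60% chance both of two people are helped, 40% chance nobody is
value_i = 0.6 * f(2) + 0.4 * f(0)   # ~0.85

# (ii): exactly one of the two is helped for sure, chosen by a fair coin
value_ii = 1.0 * f(1)               # 1.0 -> the risk-averse chooser picks (ii)

# Yet each individual's own chance of being helped:
chance_i = 0.6    # under (i)
chance_ii = 0.5   # under (ii) -> every potential beneficiary prefers (i)
```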

These aren’t meant as particularly good arguments for risk neutrality w.r.t. terminal value; just pointers to the kinds of considerations I think are more relevant when thinking about the reasonableness of altruistically-motivated funding diversification. There are others too (example: phil version / econ version; accessible summary here).

Nodding profusely while reading; thanks for the rant.

I'm unsure if there's much disagreement left to unpack here, so I'll just note this:

  • If Will was in fact not being fully honest about the implications of his own views, then I pretty strongly doubt this could be worth any potential benefit. (I also doubt there'd be much upside anyway, given what's already in the book.)
  • If the claim is purely about framing, I can see very plausible stories about costs regarding the people who enter the EA community, but I can also see stories for the benefits I mentioned before. I find it non-obvious that a lack of prioritisation/quantification in WWOTF leads to a notably lower-quality EA community, since misconceptions may be largely corrected when people try to engage with the existing community. Though I could very easily change my mind on this; e.g., it would worry me to see lots of new members with similar misconceptions enter at the same time. The magnitude of the pros and cons of the framing seems like an interestingly tough empirical question.

This was helpful; I agree with most of the problems you raise, but I think they're objecting to something a bit different than what I have in mind.

Agreement: 1a, 1b, 2a

  • I am also very sceptical that >25% of the general public satisfies (1a) or (1b). I don't think these are the main mechanisms through which the general public could matter regarding TAI. The same applies to (2a).

Differences: 2b, 3a, alternatives

  • On (2b): I'm a bit sceptical that politicians or policymakers are sufficiently nitpicky for this to be a big issue, but I'm not confident here. WWOTF might just have the effect of bringing certain issues closer to the edges of the Overton window. I find it plausible that the most effective way to make AI risk one of these issues is in the way WWOTF does it: get mainstream public figures and magazines talking about it in a very positive way. I could see how this might've been far harder with a book that allows people to brush it off as tech-bro BS more easily.

    On there being intellectual dishonesty: I worry a bit about this, but maybe Will is just providing his perspective, and that's fine. We can still have others in the longtermist community disagree on various estimates. Will, for one, has explicitly tried not to be seen as the leader of a movement of people who just follow his ideas. I'd be surprised if differences within the community became widely seen as intellectual dishonesty from the outside (though of course isolated claims like these have been made already).

    So, maybe what we want from politicians and policymakers during important moments is for them to be receptive to good ideas. The perceived prioritisation of AI within longtermist writing might just not turn out to be that crucial. I'm open to changing my mind on this, but I don't expect there to be much conflict between different longtermist priorities such that policymakers will in fact need to choose between them. That's a reason to expect that the best we can do is to make certain problems more palatable, so that when an organisation tells policymakers "we need policy X, else we raise the risk of AI catastrophe", they are more likely to listen.
     
  • On (3a): I'm also very uncertain here but conditional on some kind of intent alignment, it becomes a lot more plausible to me that coordination with the world outside top labs becomes valuable, e.g., on values, managing transitions, etc. (especially if takeoff is slow).
     
  • On alternative uses of time: those three projects seem great and might have better EV per unit of effort, but that's consistent with great writers and speakers like Will having a comparative advantage in writing WWOTF.
     

The mechanism I have in mind is a bit nebulous. It's in the vein of my response to (2a), i.e., creating intellectual precedent, making odd ideas seem more normal, etc., to create an environment (e.g., in politics) more receptive to proposals and collaboration. This doesn't have to work through widespread understanding of the topics. One (unresearched) analogue might be antibiotic resistance. People in general, including myself, know next to nothing about it, but this weird concept has become respectable enough that when a policymaker Googles it, they know it's not just some kooky fear that nobody outside strangely named research centres worries about or respectfully engages with.

Enjoyed the post but I'd like to mention a potential issue with points like these:

I’m skeptical that we should give much weight to message testing with the “educated general public” or the reaction of people on Twitter, at least when writing for an audience including lots of potential direct work contributors. 

I think impact is heavy-tailed and we should target talented people with a scout mindset who are willing to take weird ideas seriously.

I would put nontrivial weight on this claim: the support of the general public matters a lot in TAI worlds, e.g., during 'crunch time' or when trying to handle value lock-in. If this is true and WWOTF helps achieve this, it can justify writing a book focusing less on people who are already prone to react in ways we typically associate with a scout mindset. Increasing direct work in the usual sense is one thing to optimise for; another is creating an environment receptive to proposals and cooperation with those who do direct work.

So although I understand that you're not making strong claims about other groups like the general public or policymakers, I think it's worth mentioning that "I'd rather recommend The Precipice to people who might do impactful work" and "WWOTF should have been written differently" are very importantly distinct claims.

Reading this post reminded me of someone whose work may be interesting to look into: Rufus Pollock, a former academic economist who founded the Open Knowledge Foundation. His short book (freely available here) makes the case for replacing traditional IP, like patents and copyright, with a novel kind of remuneration. The major benefits he mentions include increasing innovation and creativity in art, science, technology, etc.