
Is the future good in expectation? Thoughts on Will MacAskill's most recent 80k Hours podcast

My full summary of this podcast is on my website here. Below are my thoughts on the question posed above: whether the future is good in expectation.

Why Will thinks the future looks good  

Will thinks the future trajectory looks good. He mainly relies on an asymmetry between altruism and sadism in reaching this conclusion: some altruistic agents will systematically pursue things that are good, but very, very few sadistic agents will systematically pursue things that are bad. 

Will therefore believes there’s a strong asymmetry where the very best possible futures are somewhat plausible, but the very worst possible futures are not. He accepts that it’s entirely plausible that we squander our potential and bring about a society that’s not the very best, but he finds it much, much less plausible that we bring about the truly worst society.

My thoughts

I am highly uncertain about this point and, while I have not thought about it as much as Will seems to have, I find his reasoning unpersuasive. In particular:

  • I think that it’s largely irrelevant whether a “truly worst” society is plausible. It makes more sense to focus on the respective likelihoods of all “worse than existence” and all “better than existence” societies (weighted by just how good or bad each future would be), rather than just the likelihoods of the best- and worst-case scenarios.
  • It makes sense to view altruism as positive and sadism as negative, but I don’t think indifference equates to 0 in the way Rob and Will imply. By “indifference” they really seem to mean “selfishness”, and the latter has more negative connotations than the former. Pure indifference or selfishness may well end up negative overall (though obviously not as negative as sadism) because of negative externalities. I think the vast majority of agents in the world are largely indifferent, and only a few are altruists, particularly if you include non-human beings in the calculation. The world could still be net negative in this case (the toy sketch after this list illustrates how these weightings can combine).
  • There’s also a question about the distribution of power. Even if the impact of indifferent agents really were 0, and even if altruists and indifferent agents outnumber sadists, sadists may have more power and more ability to affect the world negatively. It seems plausible that there is a positive correlation between having power and being sadistic and, relatedly, a negative correlation between having power and being altruistic. This may be because:
    • Altruistic people would be more inclined to give away money and power than sadistic or selfish people; and
    • Obtaining power sometimes requires morally dubious acts that altruistic people may not want to do.
  • (On the other hand, I think that people tend to be more altruistic once they have a certain level of wealth and power. It’s like Maslow’s hierarchy of needs – once your basic needs are accounted for, you can focus on helping others.)
  • People are also not entirely altruistic, entirely sadistic, or entirely selfish. Even someone who identifies as an altruist (effective or otherwise) will be selfish to some degree, and their selfish acts may have larger impacts than their altruistic ones (perhaps more so for non-effective altruists, who are less focused on impact). It’s therefore possible for altruistic groups to outnumber sadistic groups, but for selfish and sadistic acts to outnumber, or out-impact, altruistic acts.
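
To make the weighting point concrete, here is a toy numerical sketch (referenced in the list above). All of the numbers (population shares, per-agent impacts, and power multipliers) are invented purely for illustration, not estimates; the point is only that the sign of the expectation depends on the whole distribution and its weighting, not on whether the best or worst case is plausible.

```python
# Toy illustration only: the population shares, per-agent impacts, and power
# weights below are invented for this example; they are not estimates.

agent_types = {
    # name: (population share, impact per agent, relative power)
    "altruistic":  (0.10, +1.0, 1.0),
    "indifferent": (0.88, -0.1, 1.0),  # slightly negative via externalities
    "sadistic":    (0.02, -2.0, 3.0),  # rare, but assumed to hold more power
}

# Each type's contribution is its share of agents, times its per-agent
# impact, times its relative power; the expectation is the sum.
expected_value = sum(
    share * impact * power
    for share, impact, power in agent_types.values()
)

print(f"Expected value per agent: {expected_value:+.3f}")
# Prints -0.108: altruists outnumber sadists five to one here, yet the
# expectation is negative, because the indifferent majority and the
# power-weighted sadists pull it below zero.
```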

To the people who have disagreed with this comment: I would be interested to learn why you disagree, if you care to share. What am I missing or getting wrong?

I upvoted but disagreed. I have a rosier view of plausible future worlds where people are as selfish as they are now, just smarter. They'd coordinate better and be more wisely selfish, which means they'd benefit the world more in order to benefit from trade. I admit it could go either way, however, if they just selfishly want factory-farmed meat and the torture is just a byproduct.

I realise that this view doesn't go against what you say at all, so I retract my disagreement.

(I should mention that the best comments are always the ones that are upvoted but disagreed with, since those tend to be the most informative or most needed. ^^)

Thanks for the explanation. I agree it's possible that smarter people could coordinate better and produce better outcomes for the world. I did recognise in my original post that one factor suggesting the future could be better is that, as people get richer and have their basic needs met, it's easier to become altruistic. I find that argument very plausible; it was the asymmetry argument I found unconvincing.

FWIW, I'm fine with others disagreeing with my view. It would be great to find out I'm wrong and that there is more evidence to suggest the future is rosier in expectation than I had originally thought. I just wanted people to let me know if there was a logical error or something in my original post, so thank you for taking the time to explain your thinking (and for retracting your disagreement on further consideration).

I think it's healthy to be happy about disagreeing with other EAs about something. Either it means you can outperform them, or it means you're misunderstanding something. But if you believed the same thing, then you for sure aren't outperforming them. : )

I think the future depends to a large extent on what the people in control of extremely powerful AI end up doing with it, conditional on humanity surviving the transition to that era. We should probably speculate on what we would want those people to do, and try to prepare authoritative and legible documents that such people will be motivated to read.
