
This Friday I'm again interviewing William MacAskill, this time just about his upcoming book 'What We Owe The Future', for what may become 80,000 Hours' new audio intro to longtermism.

We've got 3 or so hours — what should I ask him?

Previous interviews:

  1. On moral uncertainty, utilitarianism & how to avoid being a moral monster
  2. On the paralysis argument, whether we're at the hinge of history, & his new priorities
  3. On balancing frugality with ambition, whether you need longtermism, & mental health under pressure

9 Answers

Previous MCE projects like abolitionism, or liberal projects like extending suffrage beyond landowning white males, were fighting against the forcible removal of voice from people who had the ability to speak for themselves. Contemporary MCE projects, like those focused on animals and future people, do not share this property; I believe that animals cannot advocate for themselves, and the best proxy for future peoples' political interests I can think of falls really short. In this light, does it make any sense at all to say that there's a continuity of MCE activism across domains/problem areas?

I think it makes sense for, say, covid-era vaccine administrators to think of themselves as carrying on the legacy of the groups who put smallpox in the ground, but it may not make the same sense for longtermists to think of themselves as carrying on the legacy of slavery abolition just because both families of projects in some sense look like MCE. 

Relatedly, does classifying abolitionism as an MCE project downplay the agency of the slaves themselves and overemphasize the actions of non-enslaved altruists/activists?

In other words, contemporary MCE/liberalism may actually be agents fighting for patients, whereas prior MCE/liberalism was agents who happened to have political recognition fighting alongside agents who happened to lack it. Does this distinction hold water with respect to your research?

Given that longtermism seems to have turned out to be a crucial consideration which a priori might have been considered counterintuitive or very absurd, should we be on the lookout for similarly important but wild & out-there options? How far should the EA community be willing to ride the train to crazy town (or, rather, how much variance should there be in the EA community on this? Normal or log-normal)?

For example, one could consider things like multiverse-wide cooperation, acausal trade, and options for creating infinite amounts of value and how to compare those (although I guess this has already been thought about in the area of infinite ethics), and try to actively search for such considerations and figure out their implications (which doesn't appear to have much prominence in EA at the moment). (Other examples are listed here.)

I remember a post by Tomasik (can't find it right now) where he argues that the expected size of a new crucial consideration should be the average of all past instances; if we apply this here, the possible value seems high.

A bit late but it might be this post

What about future crucial considerations that Andrew hasn't yet discovered? Can he make any statements about them? One way to do so would be to model unknown unknowns (UUs) as being sampled from some probability distribution P: UU_i ~ P for all i. The distribution of UUs so far was {3, -5, -2, 10, -1}. The sample mean is 1, and the standard error is 2.6. The standard error is big enough that Andrew can't have much confidence about future UUs, though the sample mean very weakly suggests future UUs are more likely on

...
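A minimal sketch of the arithmetic in the quoted excerpt, using the UU values given there ({3, -5, -2, 10, -1}) and assuming the usual standard error of the mean (sample standard deviation divided by √n):

```python
import math
import statistics

# Hypothetical unknown-unknown (UU) sizes from the quoted excerpt
uus = [3, -5, -2, 10, -1]

mean = statistics.mean(uus)                        # sample mean
sem = statistics.stdev(uus) / math.sqrt(len(uus))  # standard error of the mean

print(f"sample mean = {mean}")        # 1
print(f"standard error = {sem:.1f}")  # 2.6
```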
niplav: Thanks! Definitely not too late, I'm often looking for this particular cite.
  1. What are the odds of extinction from nuclear, AI, bio, climate change, etc.?
  2. His thoughts on the threat of "population collapse"?
  3. How work on existential risk compares to work on animal welfare and global poverty in expected value (is it 50% better? 100x better?)
  4. How does work on animal welfare and global poverty affect existential risk and the quality of the long-term future?
  5. Where do Nick Bostrom, Toby Ord, Eliezer Yudkowsky, etc. go wrong that leads them to believe in substantially higher levels of AI risk than you?
  6. What new E.A. projects would you like to see which haven't been recommended by OpenPhil, FTX Future Fund, etc.?
  7. Do you believe in perennialist philosophy (the view in philosophy and spirituality that all of the world's religious traditions share a single metaphysical truth or origin from which all esoteric and exoteric knowledge and doctrine has grown)? What would the discovery of absolute truth mean for the long-term future?
  8. What problems need to be solved before we've created the "best possible world"? Or can we just rely on AGI to solve our problems?
  9. Which values (besides MCE) are important for making sure the future goes well?
  10. How can we "improve institutions to promote development", as recommended as a potentially pressing longtermist issue by 80,000 Hours?
  11. What bad, non-extinction risks does AI pose?
  12. Does E.A. underestimate the importance of becoming a space-faring species for ensuring the survival of humanity?
  13. How can we prevent totalitarianism?
  14. Where do you differ from SBF on E.A. priorities? How would you spend $1 billion?

For some classes of meta-ethical dilemmas, Moral Uncertainty recommends using variance voting, which requires you to know the mean and variance of each theory under consideration.

How is this applied in practice? Say I give 95% weight to Total Utilitarianism and 5% weight to Average Utilitarianism, and I'm evaluating an intervention that's valued differently by each theory. Do I literally attempt to calculate values for variance? Or am I just reasoning abstractly about possible values?
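Not an official procedure from the book, but here is one common way to operationalize variance voting, sketched with made-up choiceworthiness numbers (the 95%/5% credences come from the question; the option names and values below are illustrative assumptions): each theory's choiceworthiness over the option set is rescaled to mean 0 and variance 1, then combined by credence.

```python
import statistics

# Hypothetical choiceworthiness each theory assigns to three options (made-up numbers)
options = ["intervention A", "intervention B", "do nothing"]
theories = {
    # theory name: (credence, choiceworthiness per option)
    "total utilitarianism":   (0.95, [100.0, 40.0, 0.0]),
    "average utilitarianism": (0.05, [-20.0, 30.0, 0.0]),
}

def standardize(values):
    """Rescale a theory's choiceworthiness to mean 0 and variance 1 across the option set."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)  # spread across the option set
    return [(v - mu) / sigma for v in values]

# Credence-weighted sum of the variance-normalized values
scores = [0.0] * len(options)
for credence, values in theories.values():
    for i, v in enumerate(standardize(values)):
        scores[i] += credence * v

print(dict(zip(options, (round(s, 2) for s in scores))))
best = max(zip(options, scores), key=lambda pair: pair[1])
print("recommended:", best[0])
```

Note that the variance here is the variance of a theory's choiceworthiness across the options actually under consideration, so the normalization changes with the option set; in practice one may only be reasoning about rough relative spreads rather than literal numbers, which is presumably part of what the question is getting at.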

Can longtermism succeed without creating a benevolent, stable authoritarian hegemony, given that it is unlikely that all humans will converge to the same values? Without such a hegemony or convergence of values, doesn't it seem like conflicting interests among different humans will eventually lead to a catastrophic outcome?

I have an intuition that eliminating the severe suffering of, say, 1 million people might be more important than creating hundreds of trillions of happy people who would otherwise never exist. It's not that I think there is no value in creating new happy people. It's just that I think (a) the value of creating new happy people is qualitatively different from that of reducing severe suffering, and (b) sometimes, when two things are of qualitatively different value, no amount of one can add up to a certain amount of the other.

For example, consider two "intelligence machines" with qualitatively different kinds of intelligences. One does complex abstract reasoning and the other counts. I think it would be the case that no matter how much better you made the counting machine at counting, it would never surpass the intelligence of the abstract machine. Even though the counting machine gets more intelligent with each improvement, it never matches the intelligence of the abstract machine since the latter is of a qualitatively different and superior nature. Similarly, I value both deep romantic love and eating french fries, but I wouldn't trade in a deep and fulfilling romance for any amount of french fries (even if I never got sick of fries). And I value human happiness and ant happiness, but wouldn't trade in a million happy humans for any amount of happy ants.

In the same vein, I suspect that the value of reducing the severe suffering of millions is qualitatively different from and superior to the value of creating new happy people such that the latter can never match the former. 
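One way to make this "no amount of the lesser good can ever add up to the greater good" structure precise is a lexicographic ordering over value tiers. A toy sketch, with hypothetical tiers standing in for the examples above (not anything MacAskill endorses):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    higher_tier: float  # e.g. severe suffering relieved / abstract reasoning / romance
    lower_tier: float   # e.g. new happy people created / counting / french fries

def better(a: Outcome, b: Outcome) -> bool:
    """Lexicographic comparison: the higher tier always takes priority."""
    if a.higher_tier != b.higher_tier:
        return a.higher_tier > b.higher_tier
    return a.lower_tier > b.lower_tier

# No quantity of the lower-tier good ever outweighs a single unit of the higher tier.
print(better(Outcome(1, 0), Outcome(0, 10**100)))  # True
```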

Do you think there's anything to this intuition?

  1. What are his thoughts on person-affecting views and their implications with respect to longtermism, including asymmetric ones, especially Teruji Thomas's The Asymmetry, Uncertainty, and the Long Term?
  2. How much does longtermism depend on expected value maximization, especially maximizing a utility function that's additive over moral patients?
  3. What are the best arguments for and against expected value maximization as normatively required?
    1. What does he think about the vulnerability to Dutch books and money pumps, and about violating the sure-thing principle, under expected value maximization with unbounded (including additive) utility functions? See, e.g., Paul Christiano's comment using St. Petersburg lotteries (a sketch of the diverging St. Petersburg expectation follows this list).
    2. What does he think about stochastic dominance as an alternative decision theory? Are there any other decision theories he likes?
  4. What are his thoughts about the importance and implications of the possibility of aliens with respect to existential risks, including both extinction risks and s-risks? What about grabby aliens in particular? Should we expect to be replaced (or have our descendants replaced) with aliens eventually anyway? Should we worry about conflicts with aliens leading to s-risks?
  5. If the correct normative view is impartial, is (Bayesian) expected value maximization too agent-centered, like ambiguity aversion with respect to the difference one makes (the latter is discussed in The case for strong longtermism)? Basically, a Bayesian uses their own single joint probability distribution, without good justification for choosing it over many others. One alternative would be to use something like the maximality rule, where multiple probability distributions are all checked rather than committing to a single, fairly arbitrary one (a toy sketch of this rule follows this list).
  6. What is his position on EDT vs CDT and other alternatives? What are the main practical implications?
  7. For moral uncertainty, in what (important) cases does he think intertheoretic comparisons are justified (and not arbitrary, i.e. alternative normalizations with vastly different implications aren't as justifiable)?
  8. What are his meta-ethical views? Is he a moral realist or antirealist? What kind? What are the main practical implications?
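For the St. Petersburg lottery mentioned under question 3 above, a minimal sketch of why expected value diverges with an unbounded utility function (this is the textbook setup, with a payoff of 2^n utility at probability 2^-n, rather than anything specific to Christiano's comment):

```python
# St. Petersburg lottery: with probability 2**-n you receive 2**n utility (n = 1, 2, 3, ...).
# Every term contributes 2**-n * 2**n = 1, so the partial sums grow without bound.
def partial_expected_value(terms: int) -> float:
    return sum((0.5 ** n) * (2 ** n) for n in range(1, terms + 1))

for terms in (10, 100, 1000):
    print(terms, partial_expected_value(terms))  # 10.0, 100.0, 1000.0 (no finite limit)
```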
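And for the maximality rule mentioned in question 5 above, a toy sketch with made-up expected values: rather than committing to a single probability distribution, an option is ruled out only if some alternative does at least as well under every distribution in the set and strictly better under at least one; everything not ruled out is permissible.

```python
# Hypothetical expected values of three options under three candidate probability
# distributions (made-up numbers): rows are options, columns are distributions.
expected_values = {
    "A": [10.0, 2.0, 5.0],
    "B": [4.0, 8.0, 6.0],
    "C": [3.0, 1.0, 4.0],
}

def dominates(x, y):
    """x does at least as well as y under every distribution, strictly better under one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

permissible = [
    name for name, vals in expected_values.items()
    if not any(dominates(other, vals)
               for other_name, other in expected_values.items() if other_name != name)
]
print(permissible)  # ['A', 'B']: C is dominated by both, while A and B are incomparable
```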

Bostrom's Vulnerable World Hypothesis paper seems to suggest that existential security (xsec) isn't going to happen, and that we need a dual of the Yudkowsky-Moore law of mad science that raises our vigilance every timestep to keep up with the drops in the minimal IQ it costs to destroy the world. A lifestyle of such constant vigilance seems leagues away from the goals that futurists tend to get excited about, like long reflections, spacefaring, or a comprehensive assault on suffering itself. Is xsec (in the sense of freedom from extinction being reliable and permanent enough to permit us to pursue common futurist goals) the kind of thing you would actually expect to see if you lived until the year 3000 or 30,000, or do you think the world would be in a state of constant vigilance (fear, paranoia) as the bargain for staying alive? What are the most compelling reasons to think that a strong form of xsec, one that doesn't depend on some positive rate of heightening vigilance in perpetuity, is worth thinking about at all?

My comment on your previous post should have been saved for this one. I copy the questions below:

  • What do you think is the best approach to achieving existential security and how confident are you on this?
  • Which chapter/part of "What We Owe The Future" do you think most deviates from the EA mainstream?
  • In what way(s) would you change the focus of the EA longtermist community if you could?
  • Do you think more EAs should be choosing careers focused on boosting economic growth/tech progress?
  • Would you rather see marginal EA resources go towards reducing specific existential risks or boosting economic growth/tech progress?
  • The Future Fund website highlights immigration reform, slowing down demographic decline, and innovative educational experiments to empower young people with exceptional potential as effective ways to boost economic growth. How confident are you that these are the most effective ways to boost growth?
  • Where would you donate to most improve the long-term future?
    • Would you rather give to the Long-Term Future Fund or the Patient Philanthropy Fund?
  • Do you think you differ from most longtermist EAs on the "most influential century" debate and, if so, why?
  • How important do you think Moral Circle Expansion (MCE) is and what do you think are the most promising ways to achieve it?
  • What do you think is the best objection to longtermism/strong longtermism?
    • Fanaticism? Cluelessness? Arbitrariness?
  • How do you think most human lives today compare to the zero wellbeing level?
Comments

How does it feel to have a large group of (mostly) younger people accept your word as truth and align their careers accordingly?
