
For Existential Choices Debate Week, we’re trying out a new type of event: the Existential Choices Symposium. It'll be a written discussion between invited guests and any Forum user who'd like to join in. 

How it works:

  • Any forum user can write a top-level comment that asks a question or introduces a consideration, the answer to which might affect people’s view on the debate statement[1]. For example: “Are there any interventions aimed at increasing the value of the future that are as widely morally supported as extinction-risk reduction?” You can start writing these comments now.
  • The symposium’s signed-up participants, Will MacAskill, Tyler John, Michael St Jules, Andreas Mogensen and Greg Colbourn, will respond to questions, and discuss them with each other and other forum users, in the comments.
  • To be 100% clear: you, the reader, are very welcome to join in any conversation on this post. You don't have to be a listed participant to take part.

This is an experiment. We’ll see how it goes and maybe run something similar next time. Feedback is welcome (message me here).

The symposium participants will be online between 3 and 5 pm GMT on Monday the 17th.

Brief bios for participants (mistakes mine):

  • Will MacAskill is an Associate Professor of moral philosophy at the University of Oxford and a Senior Research Fellow at Forethought. He wrote the books Doing Good Better, Moral Uncertainty, and What We Owe The Future. He is a co-founder of Giving What We Can, 80,000 Hours, the Centre for Effective Altruism, and the Global Priorities Institute.
  • Tyler John is an AI researcher, grantmaker, and philanthropic advisor. He is an incoming Visiting Scholar at the Cambridge Leverhulme Centre for the Future of Intelligence and an advisor to multiple philanthropists. He was previously the Programme Officer for emerging technology governance and Head of Research at Longview Philanthropy. Tyler holds a PhD in philosophy from Rutgers University—New Brunswick, where his dissertation focused on longtermist political philosophy and mechanism design, and the case for moral trajectory change.
  • Michael St Jules is an independent researcher, who has written on “philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals”.
  • Andreas Mogensen is a Senior Research Fellow in Philosophy at the Global Priorities Institute, part of the University of Oxford’s Faculty of Philosophy. His current research interests are primarily in normative and applied ethics. His previous publications have addressed topics in meta-ethics and moral epistemology, especially those associated with evolutionary debunking arguments.
  • Greg Colbourn is the co-founder of CEEALAR and currently an advocate for Pause AI, which promotes the idea of a global AI moratorium.

Thanks for reading! If you'd like to contribute to this discussion, write some questions below which could be discussed in the symposium. 

  1. ^

    You can find the debate statement, and all its caveats, here.

Comments

How much of the argument for working towards positive futures rather than existential security rests on conditional value, as opposed to expected value?

One could argue from conditional value: in worlds where strong AI is easy and AI safety is hard, we are doomed regardless of effort, so we should concentrate on worlds where we could plausibly achieve good outcomes.

Alternatively, one could be confident that the probability of safety is relatively high, and argue that we should spend more time focused on positive futures because safety is already likely: either because efforts towards superintelligence safety are likely to work (and if so, which ones?), or because alignment by default seems likely.

(Or, I guess, lastly, one could assume or argue that superintelligence is impossible or unlikely.)

Thank you for organizing this debate! 

Here are several questions. They relate to two hypotheses that could, if both are significantly true, make impartial longtermists update the value of Extinction-Risk reduction downward (potentially by 75% to 90%; see the rough sketch after the list below).

  • Civ-Saturation Hypothesis: Most resources will be claimed by Space-Faring Civilizations (SFCs) regardless of whether humanity creates an SFC.
  • Civ-Similarity Hypothesis: Humanity's Space-Faring Civilization would produce utility similar to other SFCs (per unit of resource controlled).
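
To make the size of that potential update concrete, here is a minimal, purely illustrative back-of-the-envelope sketch (my notation and my assumption about how the two hypotheses combine, not necessarily the author's). Let $p$ be the fraction of resources that other SFCs would claim even if humanity never creates an SFC (Civ-Saturation), and let $r$ be the utility other SFCs produce per unit of resource, expressed as a fraction of what humanity's SFC would produce (Civ-Similarity). The counterfactual value of extinction-risk reduction then scales roughly as

$$
V_{\text{extinction-risk reduction}} \;\propto\; V_{\text{human SFC}} \cdot \left(1 - p \cdot r\right).
$$

For example, if $p \cdot r$ falls between $0.75$ and $0.9$, the value of extinction-risk reduction is discounted by roughly 75% to 90%, matching the range above.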

For context, I recently introduced these hypotheses here, and I will publish a few posts with preliminary evaluations of them during the debate week.

General questions:

  • What are the best arguments against these hypotheses?
  • Is the AI Safety community already primarily working on reducing Alignment-Risks and not on reducing Extinction-Risks?
    • By Alignment-Risks, I mean "increasing the value of futures where Earth-originating intelligent life survives".
    • By Extinction-Risks, I mean "reducing the chance of extinction of Earth-originating intelligent life".
  • What relative importance is currently given to Extinction-Risks and Alignment-Risks in the EA community? E.g., what are the relative grant allocations?
  • Should the EA community do more to study the relative priorities of Extinction-Risks and Alignment-Risks, or are we already allocating significant attention to this question?

Specific questions:

  • Should we prioritize interventions given EDT (or other evidential decision theories) or CDT? How should we deal with uncertainty there?
    • I am interested in this question because the Civ-Saturation Hypothesis may be significantly true assuming EDT (and thus at least assuming that we control our exact copies and that they exist). However, the hypothesis may be largely incorrect assuming CDT.
  • We are strongly uncertain about how the characteristics of the ancestors of space-faring civilizations (e.g., Humanity) would impact the value those civilizations produce in the far future. Given this uncertainty, should we expect it to be hard to argue that Humanity's future space-faring civilization would produce significantly different value from that of other space-faring civilizations?
    • I am interested in this question because I believe we should use the Mediocrity Principle as a starting point when comparing our future potential impact with that of aliens, and because it seems very hard in practice to find arguments robust enough to update significantly away from this prior, especially given that there are many arguments reinforcing it (e.g., selection pressures and convergence arguments).
  • What are our best arguments that Humanity's space-faring civilization would produce significantly more value than other space-faring civilizations?
  • How should we aggregate beliefs over possible worlds in which we could have OOMs of difference in impact?

Will MacAskill stated in a recent 80,000 Hours podcast that he believes marginal work on trajectory change toward a best possible future, rather than a mediocre one, is likely to be significantly more valuable than marginal work on extinction risk.

Could you explain what the key crucial considerations are for this claim to be true, and give a basic argument for why you think each of them resolves in favor of the claim?

Would also love to hear if others have any other crucial considerations they think weigh in one direction or the other.

This is a cool idea! Will this be recorded for people who can't attend live? 

Edit: never mind, I think I'm confused; I take it this is all happening in writing, in the comments.

Yep, it'll all be in the comments, so if you aren't around you can read it later (and I'm sure a bunch of the conversations will continue, just potentially without the guests).
This was a good flag, btw: I've changed the first sentence to be clearer!
