
I'm pretty confused here, so any comments and feedback are much appreciated, including criticism.

Toy Model

Let $V$ be the value of the longterm future. Let $p$ be the probability that our descendants safely reach technological maturity. Let $Q$ be the expected quality of the longterm future, given that we safely reach technological maturity. Then the value of the longterm future is:

$$V = p \cdot Q$$

This ignores all the value in the longterm future that occurs when our descendants don't safely reach technological maturity. 

Assume that we can choose between doing some urgent longtermist work, say existential risk reduction ($x$), or some patient longtermist work, let's call this global priorities research ($g$). Assume that the existential risk reduction work increases the probability that our descendants safely reach technological maturity, but has no other effect on the quality of the future. Assume that the global priorities research increases the quality of the longterm future conditional on it occurring, but has no effect on existential risk.

Consider some small change in either existential risk reduction work or global priorities research. You can imagine this as $10 trillion, or 'what the EA community focuses on for the next 50 years', or something like that. Then for some small finite change in risk reduction, $\Delta x$, or in global priorities research, $\Delta g$, the change in the value of the longterm future will be:

$$\Delta V_x = Q \cdot \Delta p_x \qquad\qquad \Delta V_g = p \cdot \Delta Q_g$$

Dropping the subscripts and dividing the first equation by the second:

$$\frac{\Delta V_x}{\Delta V_g} = \frac{Q \cdot \Delta p}{p \cdot \Delta Q} = \frac{\Delta p / p}{\Delta Q / Q}$$

Rewriting in more intuitive terms:

$$\frac{\text{value of XRR}}{\text{value of GPR}} = \frac{\text{fractional increase in the probability of survival}}{\text{fractional increase in the quality of the future}}$$
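In code, the toy model is just a couple of multiplications. Here's a minimal sketch, assuming the value of the future is $V = p \cdot Q$, that XRR only raises $p$, and that GPR only raises $Q$ (the function name and example numbers are mine):

```python
def xrr_vs_gpr_ratio(p, delta_p, q, delta_q):
    """Relative value of XRR vs GPR under the toy model V = p * Q.

    p:       probability of safely reaching technological maturity
    delta_p: increase in that probability from the XRR work
    q:       expected quality of the future, conditional on maturity
    delta_q: increase in that quality from the GPR work
    """
    value_of_xrr = delta_p * q   # Delta V_x = Q * Delta p
    value_of_gpr = p * delta_q   # Delta V_g = p * Delta Q
    return value_of_xrr / value_of_gpr  # = (Delta p / p) / (Delta q / q)

# Survival probability 2/3 -> 5/6 vs a 25% quality boost: exactly on a par.
print(xrr_vs_gpr_ratio(p=2/3, delta_p=1/6, q=1.0, delta_q=0.25))  # 1.0
```

A ratio above 1 favours the XRR intervention; below 1, the GPR one. Note that the baseline quality $q$ cancels out of the ratio, which is why only the two fractional increases matter.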

Critiquing the Model

I've made the assumption that x-risk reduction work doesn't otherwise affect the quality of the future, and that patient longtermist work doesn't affect the probability of existential risk. Obviously, this isn't true. I'm not sure how much this reduces the value of the model. If one type of work was much more valuable than the other, I could see this assumption being problematic. Eg. if GPR was 10x as cost effective as XRR, then the value of XRR-focussed work might mainly be in the quality improvements, not the probability improvements.

I've made the assumption that we can ignore all value other than worlds where we safely reach technological maturity.  This seems pretty intuitive to me, given the likely quality, size, and duration of a technologically mature society, and my ethical views. 

Putting some numbers in 

Let's put some numbers in. Toby Ord thinks that with a big effort, humanity can reduce the probability of existential risk this century from 1/3 to 1/6. That would make the fractional increase in the probability of survival 1/4 (it goes from 2/3 to 5/6). Assume for simplicity that x-risk after this century is zero.

For GPR to be cost competitive with XRR given these numbers (so the ratio above equals 1), the fractional increase in the value of the future for a comparable amount of work would have to be 1/4.
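As a quick arithmetic check on that figure (assuming, per the Ord-style numbers, that risk this century falls from 1/3 to 1/6, i.e. survival rises from 2/3 to 5/6):

```python
# Fractional increase in the probability of survival when existential
# risk this century falls from 1/3 to 1/6 (Ord-style numbers).
p_without_effort = 1 - 1/3   # 2/3
p_with_effort = 1 - 1/6      # 5/6
fractional_increase = p_with_effort / p_without_effort - 1
print(round(fractional_increase, 6))  # 0.25
```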

Toby's numbers are really quite favourable to XRR, though, so putting in your own seems good.

Eg. if you think x-risk is 10%, and we could reduce it to 5% with some amount of effort, then the fractional increase in the probability of survival is about 6% (it goes from 90% to 95%). So for GPR to be cost competitive, we'd have to be able to increase the value of the future by about 6% with a similar amount of work to what the XRR would have taken.
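Wrapping this calculation up for arbitrary inputs (a sketch under the model's assumptions; the function name is mine):

```python
def required_gpr_gain(risk_before, risk_after):
    """Fractional increase in the value of the future that GPR must
    deliver, for comparable effort, to match an XRR intervention that
    cuts existential risk from risk_before to risk_after (model: V = p*Q)."""
    p_before = 1 - risk_before
    p_after = 1 - risk_after
    return p_after / p_before - 1

# Larger, easier-to-cut risks set a higher bar for GPR:
print(round(required_gpr_gain(1/3, 1/6), 4))    # 0.25
print(round(required_gpr_gain(0.10, 0.05), 4))  # 0.0556
```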


Would it take a similar amount of effort to reduce the probability of existential risk this century from 1/3 to 1/6 as to increase the fractional value of the future conditional on it occurring by 1/4? My intuition is that the latter is actually much harder than the former. Remember, you've got to make the whole future 25% better for all time. What do you think?

Some things going into this are:

  • I think it's pretty likely that there will be highly transformative events over the next two centuries. It seems really hard to make detailed plans with steps that happen after these highly transformative events.
  • I'm not sure if research about how the world works now actually helps much with understanding how the world works after these highly transformative events. If we're all digital minds, or in space, or highly genetically modified, then understanding how today's poverty, ecosystems, or governments worked might not be very helpful.
  • The minds doing research after the transition might be much more powerful than current researchers. A lower bound seems like 200+ IQ humans (and lots more of them than there are researchers now), a reasonable expectation seems like a group of superhuman narrow AIs, and an upper bound seems like a superintelligent general AI. I think these could do much better research, much faster than current humans working in our current institutions. Of course, building the field now means these future researchers have more to work with when they get started. But I guess this is negligible compared to increasing the probability that these future researchers exist, given how much faster they would be.

Having said that, I don't have a great understanding of the route to value for longtermist research that doesn't contribute to reducing or understanding existential risk (and I think such research is probably valuable, for epistemic modesty reasons).

I should also say that lots of actual 'global priorities research' does a lot to understand and reduce x-risk, and could be understood as XRR work. I wonder how useful a concept 'global priorities research' is, and whether it's too broad.


  • What's the best way to conceptualise the value of non-XRR longtermist work? Is it 'make the future go better for the rest of time'? Does it rely on a lock-in event, like transformative technologies, to make the benefits permanent?
  • What numbers do you think are appropriate to put into this model? If a given unit of XRR work increases the probability of survival by some given fraction, how much value could it have created via trajectory change? Any vague/half-baked considerations here are appreciated.
  • Do you think this model is accurate enough to be useful?
  • Do you think that the spillover of XRR on increasing the quality of the future and of GPR on increasing the probability of the future can be neglected?







I think that ignoring all the value in futures where we don't safely reach technological maturity kind of stacks the deck against GPR, which I intuitively think is better than your model suggests. This seems especially the case if we have a suffering-focused ethics (I mean by this: there is an asymmetry between suffering and happiness, such that decreasing suffering by x is better than increasing happiness by x). 

Including 'bad futures' would, I suspect, affect how easy you think it is to increase the value of the future by 1/4 (or equivalent). This is because there are lots of different ways the future could be really bad, with loads and loads of moral patients who suffer a lot, and avoiding one of these sources of suffering feels to me like it's more tractable than making the 'good future' even better (especially by some large fraction like 1/4). It would be even easier to improve the value of these 'bad futures' if we have a suffering-focused ethics rather than a symmetrical view of ethics. 

(Note: I wrote this comment with one meaning of 'technological maturity' in mind, but now I'm actually not sure if that was what you meant by it, so maybe the answer is you would be including the kind of futures I mean. In that case, we probably differ on how easy we think it would be to affect these futures.)

Hey Alex. Really interesting post! To have a go at your last question, my intuition is that the spillover effects of GPR on increasing the probability of the future cannot be neglected. I suppose my view differs in that where you define "patient longtermist work" as GPR and distinct from XRR, I don't see that it has to be. For example, I may believe that XRR is the more impactful cause in the long run, but just believe that I should wait a couple hundred years before putting my resources towards this. Or we should figure out if we are living at the hinge of history first (which I'd classify as GPR). Does that make sense?

I suppose one other observation is that working on s-risks typically falls within the scope of XRR and clearly also improves the quality of the future, but maybe this ignores your assumption of safely reaching technological maturity.

  1. I think I've conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years' time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (eg. if there was a lock-in event coming soon), and XRR could be patient.
    1. (Side note: There are so many possible longtermist strategies! Any combination of {patient, urgent} × {broad, narrow} × {trajectory change, XRR} is a distinct strategy. This is interesting, as people often conceptualise the available strategies as either patient, broad, trajectory change or urgent, narrow, XRR, but there are actually at least six other strategies.)
  2. This model completely neglects meta strategic work along the lines of 'are we at the hinge of history?' and 'should we work on XRR or something else?'. This could be a big enough shortcoming to render the model useless. But this meta work does have to cash out as either increasing the probability of technological maturity, or in improving the quality of the future. So I'm not sure how worrisome the shortcoming is. Do you agree that meta work has to cash out in one of those areas?
  3. I had s-risks in mind when I caveated it as 'safely' reaching technological maturity, and was including s-risk reduction in XRR. But I'm not sure if that's the best way to think about it, because the most worrying s-risks seem to be of the form: we do reach technological maturity, but the quality is hugely negative. So it seems that s-risks are more like 'quality increasing' than 'probability increasing'. The argument for them being 'probability increasing' is that I think the most empirically likely s-risks might primarily be risks associated with transitions to technological maturity, just like other existential risks. But again, this conflates XRR with urgency (and so trajectory change with patience).
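The strategy count in the side note above can be checked by enumerating the combinations directly (a sketch; the axis labels follow the note):

```python
from itertools import product

# Three binary axes of longtermist strategy, per the side note above.
axes = [("patient", "urgent"),
        ("broad", "narrow"),
        ("trajectory change", "XRR")]

strategies = list(product(*axes))
print(len(strategies))  # 8: the 2 usually discussed, plus at least 6 others
for strategy in strategies:
    print(", ".join(strategy))
```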

Re 1. That makes a lot of sense now. My intuition is still leaning towards trajectory change interacting with XRR for the reason that maybe the best ways to reduce x-risks that appear after 500+ years is to focus on changing the trajectory of humanity (i.e. stronger institutions, cultural shift, etc.) But I do think that your model is valuable for illustrating the intuition you mentioned, that it seems easier to create a positive future via XRR rather than trajectory change that aims to increase quality.

Re 2,3. I think that is reasonable and maybe when I mentioned the meta-work before, it was due to my confusion between GPR and trajectory change.
