Around this time of year, GiveWell traditionally spends a lot of time thinking about game-theoretic considerations – specifically, what funding recommendation it ought to make to Good Ventures so that Good Ventures allocates its resources wisely. (Here are GiveWell's game-theoretic posts from 2014 & 2015.)

The main considerations here are:

  1. How should Good Ventures act in an environment where individual donors & other foundations are also giving money?
  2. How should Good Ventures value its current giving opportunities compared to the giving opportunities it will have in the future?

I'm more interested in the second consideration, so that's what I'll engage with here. If present-day opportunities seem better than expected future opportunities, Good Ventures should fully take advantage of its current opportunities, because they are the best giving opportunities it will ever encounter. Conversely, if present-day opportunities seem worse than expected future opportunities, Good Ventures should give sparsely now, preserving its resources for the superior upcoming opportunities.

Personally, I'm bullish on present-day opportunities. Present-day opportunities seem more attractive than future ones for a couple reasons:

  1. The world is improving, so giving opportunities will get worse if current trends continue.
  2. There's a non-negligible chance that a global catastrophic risk (GCR) occurs within Good Ventures' lifetime (it's a "burn-down" foundation), thus nullifying any future giving opportunities.
  3. Strong AI might emerge sometime in the next 30 years. This could be a global catastrophe, or it could ferry humanity into a post-scarcity environment, wherein philanthropic giving opportunities are either dramatically reduced or entirely absent.

So far, my reasoning has been qualitative, and if it's worth doing, it's worth doing with made-up numbers, so let's assign some subjective probabilities to the different scenarios we could encounter (in the next 30 years):

  • P(current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs) = 30%
  • P(current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity) = 56%
  • P(strong AI leads to a post-scarcity economy) = 5%
  • P(strong AI leads to a global catastrophe) = 2%
  • P(a different GCR occurs) = 7%

To assess the expected value of these scenarios, we also have to assign a utility score to each scenario (obviously, the following is incredibly rough):

  • Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs = Baseline
  • Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity = 2x as good as baseline
  • Strong AI leads to a post-scarcity economy = 100x as good as baseline
  • Strong AI leads to a global catastrophe = 0x as good as baseline
  • A different GCR occurs = 0x as good as baseline

Before calculating the expected value of each scenario, let's unpack my assessments a bit. I'm imagining "baseline" goodness as essentially things as they are right now, with no dramatic changes to human happiness in the next 30 years. If quality of life broadly construed continues to improve over the next 30 years, I assess that as twice as good as the baseline scenario.

Achieving post-scarcity in the next 30 years is assessed as 100x as good as the baseline scenario of no improvement. (Arguably this could be nearly infinitely better than baseline, but to avoid Pascal's mugging we'll cap it at 100x.)

A global catastrophe in the next 30 years is assessed as 0x as good as baseline.

Again, this is all very rough.

Now, calculating the expected value of each outcome is straightforward:

  • Expected value of current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs = 0.3 x 1 = 0.3
  • Expected value of current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity = 0.56 x 2 = 1.12
  • Expected value of strong AI leads to a post-scarcity economy = 0.05 x 100 = 5
  • Expected value of strong AI leads to a global catastrophe = 0.02 x 0 = 0
  • Expected value of a different GCR occurs = 0.07 x 0 = 0

And each scenario maps to a now-or-later giving decision:

  • Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs –> Give later (because new opportunities may be discovered)
  • Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity –> Give now (because the best giving opportunities are the ones we're currently aware of)
  • Strong AI leads to a post-scarcity economy –> Give now (because philanthropy is obsolete in post-scarcity)
  • Strong AI leads to a global catastrophe (GCR) –> Give now (because philanthropy is nullified by a global catastrophe)
  • A different GCR occurs –> Give now (because philanthropy is nullified by a global catastrophe)

So, we can add up the expected values of all the "give now" scenarios and all the "give later" scenarios, and see which sum is higher:

  • Give now total expected value = 1.12 + 5 + 0 + 0 = 6.12
  • Give later total expected value = 0.3
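
For anyone who wants to check or tweak these figures, here's a minimal sketch of the same arithmetic (scenario names are shortened; everything else matches the numbers above):

```python
# Reproduces the arithmetic above, using the same made-up probabilities
# and utility multiples. Scenario names are shortened for readability.
scenarios = {
    # name: (probability, utility multiple of baseline, give now or later?)
    "improvement stalls or reverses":  (0.30,   1, "later"),
    "improvement continues":           (0.56,   2, "now"),
    "strong AI -> post-scarcity":      (0.05, 100, "now"),
    "strong AI -> global catastrophe": (0.02,   0, "now"),
    "a different GCR occurs":          (0.07,   0, "now"),
}

totals = {"now": 0.0, "later": 0.0}
for name, (p, utility, decision) in scenarios.items():
    ev = p * utility
    totals[decision] += ev
    print(f"{name}: {p} x {utility} = {ev:.2f}")

print(f"Give now total expected value:   {totals['now']:.2f}")   # 6.12
print(f"Give later total expected value: {totals['later']:.2f}")  # 0.30
```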

This is a little strange because GCR outcomes are given no weight, but in reality if we were faced with a substantial risk of a global catastrophe, that would strongly influence our decision-making. Maybe the proper way to do this is to assign a negative value to GCR outcomes and include them in the "give later" bucket, but that pushes even further in the direction of "give now" so I'm not going to fiddle with it here.

Comparing the sums shows that, in expectation, giving now will lead to substantially more value. Most of this is driven by the post-scarcity scenario, but even with post-scarcity outcomes excluded, I still assess "give now" scenarios to have about 4x the expected value of "give later" scenarios.

Yes, this exercise is ad-hoc and a little silly. Others could assign different probabilities & utilities, which would lead them to different conclusions. But the point the exercise illustrates is important: if you're like me in thinking that, over the next 30 years, things are most likely going to continue slowly improving with some chance of a trend reversal and a tail risk of major societal disruption, then in expectation, present-day giving opportunities are a better bet than future giving opportunities.

 ---

Disclosure: I used to work at GiveWell.

A version of this post appeared on my personal blog.

Comments

Sorry, this is going to be a "you're doing it wrong" comment. I will try to criticize constructively!

There are too many arbitrary assumptions: your chosen numbers, your categorization scheme, your assumption about whether giving now or giving later is better in each scenario, your assumption that there can't be some split between giving now and later, your failure to incorporate any interest rate into the calculations, your assumption that the now/later decision can't influence the scenarios' probabilities. Any of these could have decisive influence over your conclusion.

But there's also a problem with your calculation. Your conclusion is based on the fact that you expect higher utility to result from scenarios in which you believe giving now will be better. That's not actually an argument for deciding to give now, as it doesn't assess whether the world will be happier as a result of the giving decision. You would need to estimate the relative impact of giving now vs. giving later under each of those scenarios, and then weight the relative impacts by the probabilities of the scenarios.
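
For concreteness, here's a rough sketch of the calculation I have in mind; the per-scenario impact numbers are placeholders I've invented for illustration, not estimates:

```python
# Sketch of the comparison I'm suggesting: estimate the impact of each
# giving strategy *within* each scenario, then weight those impacts by the
# scenario probabilities. The impact numbers below are invented placeholders.
scenarios = [
    # (probability, impact if giving now, impact if giving later)
    (0.30, 1.0, 1.5),  # improvement stalls: better opportunities may appear later
    (0.56, 2.0, 1.0),  # improvement continues: current opportunities are the best known
    (0.05, 1.0, 0.0),  # post-scarcity: later giving is moot
    (0.02, 1.0, 0.0),  # AI catastrophe: later giving is moot
    (0.07, 1.0, 0.0),  # other GCR: later giving is moot
]

ev_give_now   = sum(p * now   for p, now, _later in scenarios)
ev_give_later = sum(p * later for p, _now, later in scenarios)
print(ev_give_now, ev_give_later)  # compare the two strategies directly
```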

Don't stop trying to quantify things. But remember the pitfalls. In particular, simplicity is paramount. You want to have as few "weak links" in your model as possible; i.e. moving parts that are not supported by evidence and that have significant influence on your conclusion. If it's just one or two numbers or assumptions that are arbitrary, then the model can help you understand the implications of your uncertainty about them, and you might also be able to draw some kind of conclusion after appropriate sensitivity testing. However, if it's 10 or 20, then you're probably going to be led astray by spurious results.
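
As one example of what I mean by sensitivity testing: vary a single arbitrary input, say the probability you assign to post-scarcity, hold the rest at your values, and check whether the now-vs-later comparison flips. A rough sketch, reusing your numbers:

```python
# Rough sensitivity check on one arbitrary input: the probability assigned
# to a post-scarcity outcome. Other inputs are held at the post's values
# (so probabilities no longer sum exactly to 1; this is only a rough check).
for p_post_scarcity in (0.0, 0.01, 0.05, 0.10):
    ev_now = 0.56 * 2 + p_post_scarcity * 100  # catastrophe scenarios contribute 0
    ev_later = 0.30 * 1
    verdict = "give now" if ev_now > ev_later else "give later"
    print(f"p = {p_post_scarcity:.2f}: now = {ev_now:.2f}, later = {ev_later:.2f} -> {verdict}")
```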

I basically agree with your critique, though I'd say my assumptions are more naïve than arbitrary (a mostly semantic distinction; the issues persist either way). On reflection, I don't think I've arrived at any solid conclusions here, and this exercise's main fruit is a renewed appreciation of how tangled these questions are.


I'm getting hung up on your last paragraph: "However, if it's 10 or 20, then you're probably going to be led astray by spurious results."

This is pretty unsatisfying – thinking about the future is necessarily speculative, so people are going to have to use "arbitrary" inputs in their models for want of empirical data. If they only use a few arbitrary inputs, their models will likely be too simplistic to be meaningful. But if they use many arbitrary inputs, their models will give spurious results? It sort of feels like an impossible bind for the project of modeling the future.

Or maybe I'm misunderstanding your definition of "arbitrary" inputs, and there is another class of speculative input that we should be using for model building.

Sure. When I say "arbitrary", I mean not based on evidence, or on any kind of robust reasoning. I think that's the same as your conception of it.

The "conclusion" of your model is a recommendation between giving now vs. giving later, though I acknowledge that you don't go as far as to actually make a recommendation.

To explain the problem with arbitrary inputs, when working with a model, I often try to think about how I would defend any conclusions from the model against someone who wants to argue against me. If my model contains a number that I have simply chosen because it "felt" right to me, then that person could quite reasonably suggest a different number be used. If they are able to choose some other reasonable number that produces different conclusions, then they have shown that my conclusions are not reliable. The key test for arbitrary assumptions is: will the conclusions change if I assume other values?

Otherwise, arbitrary assumptions might be helpful if you want to conduct a hypothetical "if this, then that" analysis, to help understand a particular dynamic at play, like Bayesian probability. But this is really hard if you've made lots of arbitrary assumptions (say, 10-20); it's difficult to get any helpful insights from "if this and this and this and this and........, then that".

So yes, we are in a bind when we want to make predictions about the future where there is no data. Who was it that said "prediction is difficult, especially about the future"? ;-) But models that aren't sufficiently grounded in reality have limited benefit, and might even be counterproductive. The challenge with modelling is always to find ways to draw robust and useful conclusions given what we have.

Current giving opportunities have value as training data, so that you act correctly when a potentially better opportunity comes along.


Exploration/exploitation. If you want to give away X dollars to do the most good, you'd like to spend some fraction of X on exploring the space of opportunities to do good. You can sit around and reason about the payouts people say various slot machines have, but this has a cost as well. You can also go out and start pulling handles to get a sense of how the slot machines function.
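
For concreteness, a toy sketch of that tradeoff (an epsilon-greedy bandit over three hypothetical opportunities with made-up payouts):

```python
import random

# Toy epsilon-greedy bandit: spend a small fraction of the budget exploring
# opportunities at random, and the rest exploiting the best one seen so far.
# The payout numbers are made up purely for illustration.
true_payouts = [0.8, 1.0, 1.4]  # unknown per-dollar impact of each opportunity
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon, pulls = 0.1, 1000

for _ in range(pulls):
    if random.random() < epsilon or sum(counts) == 0:
        i = random.randrange(3)                        # explore
    else:
        i = max(range(3), key=lambda j: estimates[j])  # exploit
    reward = random.gauss(true_payouts[i], 0.5)        # noisy observed impact
    counts[i] += 1
    estimates[i] += (reward - estimates[i]) / counts[i]  # running mean

print(estimates, counts)  # most pulls should end up on the best opportunity
```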

Thank you for the article! One thought that came to mind: If the current view of the researchers is that "the world is improving," then I would count that as a strong indicator for investing much more time into research first. It indicates that the causes discovered so far are of a kind where all the likely scenarios see the cause decreasing in scale and getting solved at some point. Assuming that the future is very long, the integral of that trajectory (the total suffering over time) is very limited compared to hypothetical causes whose trajectory is uncertain or even unlikely to decrease in scale.

So if the current view of researchers is that the causes they know of are in the process of getting solved (are decreasing in scale), then the search for Cause X should have priority over investing early. But I think the recent post on Cause X already contained many candidate causes whose trajectories are highly uncertain or that are on an upward trajectory, so investing later is called for, to give the researchers time to become convinced of the importance of these causes and to find or create giving opportunities to address them.
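
To put toy numbers on the trajectory point (the rates and time horizon here are invented purely for illustration): a cause whose scale is shrinking has a bounded cumulative total, while a flat or growing one does not.

```python
from math import exp

# Toy illustration: the cumulative "area under the trajectory" of a cause
# whose scale shrinks over time is bounded; a flat or growing cause is not.
# Rates and horizon are invented for illustration only.
def cumulative_scale(growth_rate, years=200):
    return sum(exp(growth_rate * t) for t in range(years))

for rate in (-0.05, 0.0, 0.05):  # shrinking, flat, growing cause
    print(f"growth rate {rate:+.2f}: cumulative scale ~ {cumulative_scale(rate):.1f}")
```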
