I am a Senior Policy Analyst at the Bipartisan Policy Center (BPC), working on US immigration and workforce policy. I previously worked as an economist and a management consultant.
I am interested in longtermism, global priorities research and animal welfare. Check out my blog The Ethical Economist.
Please get in touch if you would like to have a chat sometime, or connect with me on LinkedIn.
There seems to be a typo in this link (on my laptop I can't access your other links, not sure why).
On page 22 I think you're missing the word "in" in the sentence below (I have added it in bold):
"If the lasting benefit would have been achieved later in the default trajectory, then it is only temporary so not a true gain."
One approach would be to say the curve represents the instrumental effects of humanity on the intrinsic value of all beings at that time. This might work, though it does have some surprising implications: even after our extinction, the trajectory might not stay at zero, and different trajectories could behave differently after that point.
This seems very natural to me and I'd like us to normalise including non-human animal wellbeing, and indeed the wellbeing of any other sentience, together with human wellbeing in analyses such as these.
We should use a different term than "humanity". I'm not sure what the best choice is, perhaps "Sentientity" or "Sentientkind".
I'm not sure your answer is very helpful. You act like OP's question isn't meaningful, but I think it is. If you want, interpret it as "why are we still here?"
One can answer that we haven't been destroyed by a nuclear apocalypse because of safeguards or game-theoretic considerations, for example. Similarly, one can answer why we haven't been destroyed by power-seeking AI with explanations such as life being very rare, so such AI simply hasn't been created yet. I'm not saying those are the correct answers, just that providing a useful answer seems possible.
I'm not sure of the relative probability of misaligned AI making us go extinct versus keeping us around in a bad state, but it's worth noting that people have written about how misaligned AI could cause s-risks.
Also, I think it's plausible that considerations of artificial sentience dominate all others in a way that is robust to PAVs. We could create vast amounts of artificial sentience with experiences ranging from the very worst to the very best we can possibly imagine. Making sure we don't create suffering artificial sentience, and making sure we do create vast amounts of happy sentience, both potentially seem overwhelmingly important to me. This will require:
So I'm a bit more optimistic than you are about longtermist approaches that reduce risks other than extinction.
I recall reading others saying that total-population utilitarianism is pivotal to the case for prioritizing the mitigation of x-risk relative to near-term interventions.
Not exactly, and this is what I was trying to explain with my first comment.
Person-affecting views (PAVs) strongly count against interventions that aim to reduce the probability of human extinction (although not entirely, because extinction still means the deaths of billions of people, which PAVs do find bad).
The key point is that existential risks are broader than extinction risks. Some existential risks can lock us into bad states of the world in which humanity still continues to exist.
For example, if AI enslaves us and we have little to no hope of escaping until the universe becomes uninhabitable, that still seems very bad according to pretty much any plausible PAV. Michael may even agree with this as non-identity issues don't usually bite when we're considering future lives that are worse than neutral (as might be the case if we're literally enslaved by AI!).
More generally, any intervention that improves the wellbeing of future people (and not by ensuring these people actually exist) should be good under plausible PAVs. Michael seems to disagree with this by raising the non-identity problem but I don't find this persuasive and I don't think others would either. I list my proposed longtermist interventions that still work under PAVs again:
Hmm. Do you seriously think that philosophers have been too quick to dismiss such person-affecting views?
If you accept that impacts on the future generally don't matter because you won't really be harming anyone, as they wouldn't have existed if you hadn't done the act, then you can justify doing some things that I'd imagine pretty much everyone would agree are wrong.
For example, you could justify going around putting millions of landmines underground, set to blow up in 200 years' time, causing immense misery to future people for no other reason than that you want to cause their suffering. Provided those people will still live net-positive lives overall, your logic says this isn't a bad thing to do. Do you really think it's OK to place the mines? Do you think anyone bar a psychopath thinks it's OK to place the mines?
Of course, as you imply, there are other ways to respond to the non-identity problem. You could resort to an impersonal utilitarianism where you say no, don't place the mines, because doing so will cause immense suffering and suffering is intrinsically bad. Do you really think this is a weaker response?
Thanks Michael. My main concern is that there doesn't seem to be enough clarity on the spillovers, and spillovers are likely to be a large component of the total impact. As Joel says, there is a lack of data, and James Snowden's critique implies your current estimate is likely to be an overestimate for a number of reasons. Joel says in a comment that "a high quality RCT would be very welcome for informing our views and settling our disagreements". This implies even Joel accepts that, given the current strength of evidence, there isn't clarity on spillovers.
Therefore I would personally be more inclined to fund a study estimating spillovers than to fund StrongMinds. I find it disappointing that you essentially rule out recommending funding for research, when it is at least plausible that this is the most effective way to improve happiness, as it might enable better use of funds (it just wouldn't increase happiness immediately).
This is a good point, and it's worth pointing out that increasing $\bar{v}$ is always good whereas increasing $\tau$ is only good if the future is of positive value. So risk aversion reduces the value of increasing $\tau$ relative to increasing $\bar{v}$, provided we put some probability on a bad future.
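To make this concrete, here is a minimal sketch. It assumes (my reading, not stated explicitly above) that the value of the future is modelled as average value per unit time, $\bar{v}$, multiplied by its duration, $\tau$:

```latex
% Minimal sketch, assuming the model V = \bar{v} \tau
% (\bar{v}: average value per unit time; \tau: duration of the future).
\[
V = \bar{v}\,\tau,
\qquad
\frac{\partial V}{\partial \bar{v}} = \tau > 0,
\qquad
\frac{\partial V}{\partial \tau} = \bar{v}.
\]
% Increasing \bar{v} always raises V (since \tau > 0), whereas
% increasing \tau only raises V when \bar{v} > 0, i.e. when the
% future is of positive value.
```

Under this reading, risk aversion penalises $\tau$-increasing interventions because they amplify bad futures ($\bar{v} < 0$) as well as good ones.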
What do you mean by civilisation? Maybe I'm nitpicking, but it seems that even if there is a low upper bound on value for a civilisation, you may still be able to increase $\bar{v}$ by creating a greater number of civilisations, e.g. by spreading further in the universe or creating more "digital civilisations".
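As a rough illustration (my notation, not from the post): suppose each civilisation $i$ contributes value $v_i$ per unit time, capped at some hypothetical bound $v_{\max}$, and there are $n$ civilisations in total.

```latex
% Illustrative only: v_i, v_max and n are hypothetical notation.
% Aggregate value per unit time across n civilisations:
\[
\bar{v} = \sum_{i=1}^{n} v_i \le n\, v_{\max},
\]
% so even with a low per-civilisation cap v_max, increasing n
% (more settlements, more "digital civilisations") can keep
% raising \bar{v}.
```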