Lundgren and Kudlek (2024), in their recent article, discuss several challenges to longtermism as it currently stands. Below is a summary of these challenges. 

The Far Future is Irrelevant for Moral Decision-Making

  • Longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. In the examples given, the moral decisions remain the same even if the far future is disregarded.
    • Example: Slavery
      We didn’t need to consider the far future to recognize that abolishing slavery was morally right, as its benefits were evident in the short term.
    • Example: Existential Risk
      The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.
  • As a result, the far future has little relevance to most moral decisions. Policies that are good for the far future are often also good for the present and can be justified based on their benefits to the near future.

The Far Future Must Conflict with the Near Future to be Morally Relevant

  • For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
  • Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.

We Are Not in a Position to Predict the Best Actions for the Far Future

  • There are two main reasons for this:
    1. Unpredictability of Future Effects
      It's nearly impossible to predict how our actions today will influence the far future. For instance, antibiotics once seemed like the greatest medical discovery, yet the later emergence of antibiotic resistance showed how unforeseen their effects could be; estimating the long-term effects of medical research in 10,000 years—or even millions of years—is beyond our capacity.
    2. Unpredictability of Future Values
      Technological advancements significantly change moral values and social norms over time. For example, contraceptives contributed to shifts in values regarding sexual autonomy during the sexual revolution. We cannot reliably predict what future generations will value.

Implementing Longtermism is Practically Implausible

  • Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about the far future.
  • Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about future generations in principle, our resources are constrained.
  • Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.
  • Implementing longtermism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.

I'm interested to hear your opinions on these challenges and how they relate to understanding longtermism. 


Comments

I don't have time to look into this in full depth, but it looks like a good paper, making useful good-faith critiques, which I very much appreciate. Note that the paper is principally arguing against 'strong longtermism' and doesn't necessarily disagree with longtermism. For the record, I don't endorse strong longtermism either, and I think that the paper delineating it, which came out before any defenses of (non-strong) longtermism, has been bad for the ability to have conversations about the form of the view that is much more widely endorsed by 'longtermists'.

My main response to the points in the paper would be by analogy to cosmopolitanism (or to environmentalism or animal welfare). We are saying that something (the lives of people in future generations) matters a great deal more than most people think (at least judging by their actions). In all cases, this does mean that adding a new priority will mean a reduction in resources going to existing priorities. But that doesn't mean these expansions of the moral circle are in error. I worry that the lines of argument in this paper apply just as well to denying previous steps like cosmopolitanism (caring deeply about people's lives across national borders). e.g. here is the final set of bullets you listed with minor revisions:

  • Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about ~~the far future~~ distant countries.
  • Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about ~~future generations~~ people in distant countries in principle, our resources are constrained.
  • Focusing on ~~the far future~~ distant countries comes at a cost to addressing ~~present-day~~ local needs and crises, such as health issues and poverty.
  • Implementing ~~longtermism~~ cosmopolitanism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.

What I'm trying to show here is that these arguments apply just as well to argue against previous moral circle expansions, which most moral philosophers would think were major points of progress in moral thinking. So I think they are suspect, and that the argument would instead need to address things that are distinctive about longtermism, such as arguing positively that future people's lives don't matter morally as much as those of present people.

The "distant country" objection does not defend against the argument that "We Are Not in a Position to Predict the Best Actions for the Far Future". 

We can go to a distant country and observe what is going on there, and make reasonably informed decisions about how to help them. A more accurate analogy would be if we were trying to help a distant country that we hadn't seen, couldn't communicate with and knew next to nothing about.  

It also doesn't work as a counterargument to "The Far Future Must Conflict with the Near Future to be Morally Relevant". The authors are claiming that anything that helps the far future can also be accomplished by helping people in the present. The analogous argument that anything that helps distant countries can also be accomplished by helping people in this country is just wrong.

We can go to a distant country and observe what is going on there, and make reasonably informed decisions about how to help them.

We can make meaningful decisions about how to help people in the distant future. For example, to allow them to exist at all, to allow them to exist with a complex civilisation that hasn't collapsed, to give them more prosperity that they can use as they choose, to avoid destroying their environment, to avoid collapsing their options by other irreversible choices, etc. Basically, to aim at giving them things near the base of Maslow's Hierarchy of Needs or to give them universal goods — resources or options that can be traded for whatever it is they know they need at the time. And the same is often true for international aid.

In both cases, it isn't always easy to know that our actions will actually secure these basic needs, rather than making things worse in some way. But it is possible. One way to do it for the distant future is to avoid catastrophes that have predictable longterm effects, which is a major reason I focus on that and suggest others do too.

I don't see it as an objection to Longtermism if it recommends the same things as traditional morality — that is just as much a problem for traditional theories, by symmetry. It is especially not a problem when traditional theories might (if their adherents were careful) recommend much more focus on existential risks but in fact almost always neglect the issue substantially. If they admit that Longtermists are right that these are the biggest issues of our time and that the world should massively scale up focus and resources on them, and that they weren't saying this before we came along, then that is a big win for Longtermism. If they don't think it is all that important actually, then we disagree and the theory is quite distinctive in practice. Either way the distinctiveness objection also fails.

The authors are claiming that anything that helps the far future can also be accomplished by helping people in the present.

This is in tension with "We Are Not in a Position to Predict the Best Actions for the Far Future", isn't it?

It is rather that longtermists have not provided any examples of moral decisions that would be different if we were to consider the far future versus the near future. All current focus areas, the authors argue, can be justified by appealing to the near future. 

Yeah, perhaps I am subtly misrepresenting the argument. Trying again, I interpret it as saying:

People have justified longtermism by pointing to actions that seem sensible, such as the claim that it made sense in the past to end slavery, and it makes sense currently to prevent existential risk. But both of these examples can be justified with a lot more certainty by appealing to the short-term future. So in order to justify longtermism in particular, you have to point out proposed policies that are a lot less sensible-seeming, and rely on a lot less certainty.

It might help to clarify that in the article they are defining “long-term future” as a scale of millions of years.

So in order to justify longtermism in particular, you have to point out proposed policies that are a lot less sensible-seeming, and rely on a lot less certainty.

If you're referring to the first point I would reword this to:

In order to justify longtermism in particular, you have to point out proposed policies that can't be justified by drawing on the near future.

What is the more widely endorsed view of longtermists? 

I largely agree with your "distant countries" objection. Just because something is practically implausible does not make it morally wrong, or not worthy of attention. I also think it's not necessarily true that implementing longtermism requires radical changes to human psychology or social institutions. We need not necessarily convince every human on the planet to care about the lives of future generations, only those who might have a meaningful impact (which could be a small number).

Nevertheless, I think the other three objections that you don't mention provide some interesting and potentially serious challenges for longtermism, perhaps for weaker forms as well.

It's slightly odd this paper argues that:

The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.

But then also says:

Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.

I'm left uncertain if the authors are in favor of spending to address existential risk, which would of course lead to less money to address present-day suffering due to health issues and poverty.

  • For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
  • Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.

This seems a very strange view. If we knew the future would not last long - perhaps a black hole would swallow up humanity in 200 years - then the future would not be very vast, it would have less moral weight, and aiding it would be less demanding. Would this really leave longtermism more palatable to the critics?

In the article the authors are somewhat ambiguous about the meaning of 'near future'. They do at one point refer to the present and the next few generations as their rough time frame. But your point raises an interesting question for the longtermists: How long does the future need to be in order for future people to have moral weight?

Although we might want to qualify it slightly: the element of interest is not necessarily the number of years into the future but rather how many people (or beings) will exist in the future. The question then becomes: How many people need to be alive in the future in order for their lives to have moral weight?

If we knew a black hole would swallow humanity in 200 years, on some estimates, there could still be ~15 billion human lives to come. If we knew that the future held only 15 billion more lives, would that justify not focusing on existential risks?
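
For what it's worth, here is one rough back-of-envelope that yields a figure of that order (the birth rate and decline rate are my own illustrative assumptions, not taken from the paper):

```python
# A minimal sketch of a "lives to come over 200 years" estimate.
# Assumptions (illustrative, not from the paper): ~130 million births
# per year today, with births declining ~0.5% per year in line with
# projections of a shrinking global population.
initial_births_per_year = 130e6
annual_decline = 0.005

total_lives = sum(
    initial_births_per_year * (1 - annual_decline) ** year
    for year in range(200)
)
print(f"~{total_lives / 1e9:.0f} billion lives to come")  # ~16 billion
```

Under a flat birth rate the total would be closer to ~26 billion; either way, the point stands that even a truncated future contains billions of lives.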

I'm not sure I buy the "We are not in a position to predict the best actions for the far future" argument.

estimating the impact of current actions on medical research in 10,000 years—or even millions of years—is beyond our capacity.

I would say the following would, in expectation, boost medical research in millions of years:

  • Not going extinct or becoming disempowered: if you're extinct or completely disempowered you can't do medical research (and of course wellbeing would be zero or low!).
  • Investing in medical research now: if we invest in such research now we bring forward progress. So, in theory, in millions of years we would be ahead of where we would have been if we had not invested now. If there's a point at which medical research plateaus, then we would just reach that plateau earlier and have more time to enjoy the highest possible level of medical research.

We cannot reliably predict what future generations will value.

They will probably value:

  • Being alive: another argument for not going extinct.
  • Having the ability to do what they want: another argument for not becoming permanently disempowered. Or not to have a totalitarian regime control the world (e.g. through superintelligent AI).
  • Minimizing suffering: OK, maybe they will like suffering, who knows, but in my mind that would mean things have gone very wrong. Assuming they want to minimize suffering, we should try to, for example, ensure factory farming does not spread out to other planets and therefore persist for millennia. Or advocate for the moral status of digital minds.

Perhaps that could have been worded better in my summary. It is not that we cannot predict what could boost medical research in the far future. Rather, it is that we cannot predict the effect that medical research will have on the far future. For example, the magnitude of the effect may be so incredibly large that it might take priority over traditional existential risks, either because it leads to a good future or perhaps to a bad one. Or perhaps further investments in medical research will not lead to any significant gains in the things we care about. Either way, we don't have a means of predicting how our current actions will influence the far future.

With regard to values: being alive, having the ability to do what we want, and minimizing suffering might very well be things that people in the far future value, but they are also things that we value now. On the authors' account, therefore, these values can guide our moral decision-making by virtue of being things we value now and into the near future; noting that they will also be valued in the far future is an irrelevant extra piece of information, i.e. it does no additional work in guiding our moral decision-making.

FWIW I think it's pretty unclear that something like reducing existential risk should be prioritised just based on near-term effects (e.g. see here). So I think factoring in that future people may value being alive and that they won't want to be disempowered can shift the balance to reducing existential risk. 

If future people don't want to be alive they can in theory go extinct (this is the option value argument for reducing existential risk). The idea that future generations will want to be disempowered is pretty barmy, but again they can disempower themselves if they want to, so it seems good to at least give them the option.

Thanks for linking to that research by Laura Duffy, that's really interesting. It would have been relevant for the authors of the current article as well.

According to their analysis, spending on conservative existential risk interventions is cost-competitive (within an order of magnitude) with spending on AMF. Further, compared to plausible less conservative existential risk interventions, AMF is "probably" an order of magnitude less cost-effective. Under Rethink Priorities' welfare-range estimates, existential risk interventions are either cost-competitive with cage-free campaigns and the hypothetical shrimp welfare intervention, or an order of magnitude less cost-effective.

I think that actually gives some reasonable weight to the idea that existential risk can be justified without reference to the far future. Duffy used a timeline of <200 years, and even then a case can be made that interventions focusing on existential risk should be prioritised. At the very least it adds a level of uncertainty about the relevance of the far future in moral decision-making.

existential risk can be justified without reference to the far future

This is pretty vague. If existential risk is roughly on par with other cause areas then we would be justified in giving any amount of resources to it. If existential risk is orders of magnitude more important then we should greatly prioritize it over other areas (at least on the current margin). So factoring in the far future does seem to be very consequential here.

According to the authors of the linked article, longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. Their claim is that the burden of proof here lies with the longtermist: if the far future is important for moral decision-making, then this claim needs to be justified. A surface-level justification, such as that people in the far future would want to be alive, is equally available by reference to the near future.

You linked a quantitative attempt at answering the question of whether focus on existential risk requires priority if we consider <200 years, and the answer appears to be in the affirmative (depending on weightings). Is there a corresponding attempt at making this case using the far future as a reference point?

In order to provide a justification for preventative x-risk policies with reference to their impact on the far future, we would need to compare it with the impact of other focus areas and how they would influence the far future. That is in part where the 'We Are Not in a Position to Predict the Best Actions for the Far Future' claim fits in: how are we supposed to analyse the influence of any intervention (such as medical research, but including x-risk interventions) on people living millions of years into the future? It's possible that if we did have that kind of predictive power, many other focus areas might turn out to be orders of magnitude more important than focus on existential risks.

The analysis I linked to isn't conclusive that longtermism is the clear winner if one considers only the short term. Under certain assumptions it won't be the best. Therefore, considering only the short term, many may choose not to give to longtermist interventions. Indeed this is what we see in the EA movement, where global health still reigns supreme as the highest-priority cause area.

What most longtermist analysis does is argue that if you consider the far future, longtermism then becomes the clear winner (e.g. here). In short, significantly more value is at stake with reducing existential risk because now you care about enabling far future beings to live and thrive. If longtermism is the clear winner then we shouldn't see a movement that clearly prioritises global health, we should see a movement that clearly prioritises longtermist causes. This would be a big shift from the status quo.

As for your final point, I think I understand what you / the authors were saying now. I don't think we have no idea what the far-future effects of interventions like medical research are. We can make a general argument that it will be good in expectation, because it will help us deal with future disease, which will help us reduce future suffering. Could that be wrong - sure - but we're just talking about expected value. With longtermist interventions, the argument is that the far-future effects are significantly positive and large in expectation. The simplest explanation is that future wellbeing matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.
