A key (new-ish) proposition in EA discussions is "Strong Longtermism," that the vast majority of the value in the universe is in the far future, and that we need to focus on it. This far future is often understood to be so valuable that almost any amount of preference for the long term is justifiable.
In this brief post, I want to argue that this strong claim is unnecessary compared to a weaker argument, creates new problems that are easily avoided otherwise, and should be replaced with the weaker claim. (I am far from the first to propose this.)
The 'regular longtermism' claim, as I present it, is that we should assign approximately as much value to the long-term future as we do to the short term. This is a philosophically difficult position which nonetheless, I argue, is superior to either the status quo or strong longtermism.
Philosophical grounding
The typical presentation of longtermism is that if we do not discount future lives exponentially, then almost any weight placed on the future, whose value can almost certainly be massively larger than the present's, will overwhelm the value of the present. This is hard to justify intuitively - it implies that we should ignore near-term costs, and (taken to the extreme) could justify almost any atrocity in the pursuit of a minuscule reduction of long-term risks.
The typical alternative is naïve economic discounting, which assumes that we should exponentially discount the far future at some finite rate. This leads to claims that a candy bar today is worth more than the entire future of humanity starting in, say, 10,000 years. This is also hard to justify intuitively.
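To make the arithmetic behind the candy-bar claim concrete, here is a minimal sketch. The 3% rate and both values are illustrative assumptions, not figures from the post:

```python
def discounted_value(future_value: float, r: float, t: float) -> float:
    """Present value of a benefit delivered t years from now,
    exponentially discounted at a constant annual rate r."""
    return future_value / (1 + r) ** t

# Even an astronomically large future value vanishes under a modest rate.
humanity_forever = 1e30   # assumed stand-in for "the entire future of humanity"
candy_bar = 1.0           # assumed present value of a candy bar

pv = discounted_value(humanity_forever, r=0.03, t=10_000)
print(pv < candy_bar)  # the discounted far future is worth less than the candy bar
```

At 3% per year, the discount factor over 10,000 years is roughly 10^128, so even 10^30 units of future value round to nothing today.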
A third perspective roughly justifies the current position: we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long-term future. This preserves both the value of the long-term future of humanity, if positive, and the preference for the present. Lacking any strong justification for setting the balance, I will very tentatively claim the two should be weighted approximately equally, but this is not critical - almost any non-trivial weight on the far future would be a large shift from the status quo towards longer-term thinking. This may be non-rigorous, but it has many attractive features.
The key question, it seems, is whether the new view is different, and/or whether the exact weights for the near and long term will matter in practice.
Does 'regular longtermism' say anything?
Do the different positions lead to different conclusions in the short term? If they do not, there is clearly no reason to prefer strong longtermism. If they do, it seems that almost all of these differences are intuitively worrying. Strong longtermism implies we should engage in much larger near-term sacrifices, and justifies ignoring near-term problems like global poverty unless they have large impacts on the far future. Strong neartermism, AKA strict exponential discounting, implies that we should do approximately nothing about the long-term future.
So, does regular longtermism suggest less focus on reducing existential risks, compared to the status quo? Clearly not. In fact, it suggests overwhelmingly more effort should be spent on avoiding existential risk than is currently devoted to the task. It may suggest less effort than strong longtermism, but only to the extent that we have very strong epistemic reasons for thinking that very large short-term sacrifices are effective.
What now?
I am unsure that there is anything new in this post. At the same time, it seems that the debate has crystallized into two camps, both of which I strongly disagree with: the "anti-longtermist" camp, typified by Phil Torres, who is horrified by the potentially abusive view of longtermism, and Vaden Masrani, who wrote a criticism of the idea, versus the "strong longtermism" camp, typified by Toby Ord (Edit: see Toby's comment) and Will MacAskill (Edit: see Will's comment), who seem to imply that Effective Altruism should focus entirely on longtermism. (Edit: I should now say that it turns out that this is a weak-man argument, but also note that several commenters explicitly say they embrace this viewpoint.)
Given the putative dispute, I would be very grateful if we could start to figure out as a community whether the strong form of longtermism is a tentative attempt to work out a coherent position that doesn't have potentially worrying implications, or whether it is intended as a philosophical shibboleth. I will note that my typical-mind-fallacy view is that both sides actually endorse, or at least only slightly disagree with, my mid-point view, but I may be completely wrong.
- Note that Will has called this "very strong longtermism", but it seems unclear how a line could be drawn between the very strong and strong forms, especially because the definition-based version he proposes - that human lives in the far future are equally valuable and should not be discounted - seems to lead directly to the very strong longtermist conclusion.
- (Edited to add:) In contrast, any split of value between near-term and long-term value completely changes the burden of proof for longtermist interventions. As noted here, given strong longtermism, we would have a clear case for any positive-expectation risk reduction measure, and the only way to refute it would be to claim that the expectation in terms of reduced risk is negative. With a weaker form, we can perform cost-benefit analysis to decide whether the loss in the near term is worthwhile.
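One way to picture that burden-of-proof shift is a toy evaluation rule under an assumed 50/50 value split. All the numbers here are hypothetical, chosen only to show the mechanics:

```python
def net_value(near_term_effect: float, long_term_effect: float,
              w_near: float = 0.5, w_long: float = 0.5) -> float:
    """Toy cost-benefit score for an intervention under a weighted
    split between near-term and long-term value (weights are an
    assumption, not a settled figure from the post)."""
    return w_near * near_term_effect + w_long * long_term_effect

# Under strong longtermism (w_near = 0), any positive long-term expectation
# wins regardless of near-term cost; under the split, near-term losses count.
risky_measure = net_value(near_term_effect=-10.0, long_term_effect=4.0)
print(risky_measure)  # -3.0: rejected by cost-benefit despite positive long-term EV
```

With `w_near = 0` the same measure would score +4.0 and be accepted, which is exactly the "any positive-expectation risk reduction wins" dynamic described above.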
The reason we have a deontological taboo against “let’s commit atrocities for a brighter tomorrow” is not that people have repeatedly done this, it worked exactly like they said it would, and millions of people received better lives in exchange for thousands of people dying unpleasant deaths exactly as promised.
The reason we have this deontological taboo is that the atrocities almost never work to produce the promised benefits. Period. That’s it. That’s why we normatively should have a taboo like that.
(And as always in a case like that, we have historical exceptions that people don’t like to talk about because they worked, eg, Knut Haukelid, or the American Revolution. And these examples are distinguished among other factors by a found mood (the opposite of a missing mood) which doesn’t happily jump on the controversial wagon for controversy points, nor gain power and benefit from the atrocity; but quietly and regretfully kills the innocent night watchman who helped you, to prevent the much much larger issue of Nazis getting nuclear weapons.)
This logic applies without any obvious changes to “let’s commit atrocities in pursuit of a brighter tomorrow a million years away” just li... (read more)
I'm not sure this is the case. E.g. Steven Pinker in Better Angels makes the case that utopian movements systematically tend to commit atrocities because this all-important end goal justifies anything in the medium term. I haven't rigorously examined this argument and think it would be valuable for someone to do so, but much of longtermism in the EA community, especially of the strong variety, is based on something like utopia.
One reason why you might intuitively think there would be a relationship is that shorter-term impacts are typically somewhat more bounded; e.g. if thousands of American schoolchildren are getting suboptimal lunches, this obviously doesn't justify torturing hundreds of thousands of people. With the strong longtermist claims it's much less clear that there's any sort of upper bound, so to draw a firm line against atrocities you end up looking to somewhat more convoluted reasoning (e.g. some notion of deontological restraint that isn't completely absolute but yet can withstand astronomical consequences, or a sketchy and loose notion that atrocities have an instrumental downside).
There’s nothing convoluted about it! We just observe that historical experience shows that the supposed benefits never actually appear, leaving just the atrocity! That’s it! That’s the actual reason you know the real result would be net bad and therefore you need to find a reason to argue against it! If historically it worked great and exactly as promised every time, you would have different heuristics about it now!
The final conclusion here strikes me as just the sort of conclusion that you might arrive at as your real bottom line, if in fact you had arrived at an inner equilibrium between some inner parts of you that enjoy doing something other than longtermism, and your longtermist parts. This inner equilibrium, in my opinion, is fine; and in fact, it is so fine that we ought not to need to search desperately for a utilitarian defense of it. It is wildly unlikely that our utilitarian parts ought to arrive at the conclusion that the present weighs about 50% as much as our long-term future, or 25% or 75%; it is, on the other hand, entirely reasonable that the balance of what our inner parts vote on will end up that way. I am broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism, in that case, as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seems to be the main alternative. But you're just not going to end up with a utilitarian defense of that bottom line; if the future can matter at all, to the parts of us that care abstractly and according to numbers, it's going to end up mattering much more than th... (read more)
Are there two different proposals?
I think Eliezer is proposing (2), but David is proposing (1). Worldview diversification seems more like (2).
I have an intuition these lead different places – would be interested in thoughts.
Edit: Maybe if 'energy' is understood as 'votes from your parts' then (2) ends up the same as (1).
Have you read "Is the potential astronomical waste in our universe too small to care about?", which asks: should these two parts of you make a (mutually beneficial) deal/bet while being uncertain of the size of (the reachable part of) the universe, such that the part of you that cares about galaxies gets more votes in a bigger universe, and vice versa? I have not been able to find a philosophically satisfactory answer to this question.
If you do, then one or the other part of you will end up with almost all of the votes when you find out for sure the actual size of the universe. If you don't, that seems intuitively wrong also, analogous to a group of people who don't take advantage of all possible benefits from trade. (Maybe you can even be Dutch booked, e.g. by someone making separate deals/bets with each part of you, although I haven't thought carefully about this.)
It strikes me as a fine internal bargain for some nonhuman but human-adjacent species; I would not expect the internal parts of a human to be able to abide well by that bargain.
No-one is proposing we go 100% on strong longtermism, and ignore all other worldviews, uncertainty and moral considerations.
You say:
They wrote a paper about strong longtermism, but this paper is about clearly laying out a philosophical position, and is not intended as an all-considered assessment of what we should do. (Edit: And even the paper is only making a claim about what's best at the margin; they say in footnote 14 they're unsure whether strong longtermism would be justified if more resources were already spent on longtermism.)
In The Precipice – which is more intended that way - Toby is clear that he thinks existential risk should be seen as "a" key global priority, rather than "the only" priority.
He also suggests the rough target of spending 0.1% of GDP on reducing existential risk, which is quite a bit less than 100%.
And he's clearly supported other issues with his life.
Will is taking a similar approach in his new book about longtermism.
Even the most longtermist members of effective altruism typically think... (read more)
Not that it undermines your main point, which I agree with - but a fair minority of longtermists certainly say and believe this.
There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).
Basically, in this context the same points apply that Brian Tomasik made in his essay "Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness" (https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/)
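As a rough numerical sketch of that argument (every multiplier here is an assumption for illustration, not a figure from Tomasik or this thread): even if the long term holds vastly more value, near-term work with even a tiny flow-through effect on the long term closes most of the cost-effectiveness gap.

```python
# Illustrative only: assumed numbers, following the shape of Tomasik's argument.
LONG_TERM_VALUE_MULTIPLIER = 1e30   # claim (i): value of the long term vs near term
FLOW_THROUGH = 1e-3                 # assumed fraction of a targeted intervention's
                                    # long-term effect that near-term work achieves

# Cost-effectiveness of each intervention, in units where a purely
# near-term intervention buys 1 unit of near-term value per dollar.
targeted = LONG_TERM_VALUE_MULTIPLIER                      # all value from long-term effect
near_term = 1 + FLOW_THROUGH * LONG_TERM_VALUE_MULTIPLIER  # near-term value + spillover

print(targeted / near_term)  # ~1e3, not ~1e30: claim (ii) doesn't follow from (i)
```

The ratio is governed by the assumed flow-through fraction, not by the astronomical value multiplier, which is why (i) can be true while (ii) fails.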
I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, and it makes me wonder what I'm missing.
I don't want the EA community to stop working on all non-longtermist things. But the reason is that I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don't mean indirect effects more broadly, in the sense of 'better health in poor countries' --> 'more economic growth' --> 'more innovation')
For example non-longtermist interventions are often a good way to demonstrate EA ideas and successes (eg. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (eg. [name removed] incoming at GPI comes to mind as a great success story along those lines); work on non-longtermist causes has better feedback loops so it might improve the community's skills (eg. Charity Entrepreneurship incubatees probably are highly skilled 2-5 years after the program. Though I'm not sure that... (read more)
This may be the crux - I would not count a ~ 1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment.
Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.
All my comment was meant to say is that it seems highly implausible that something like such a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term targeted versus near-term targeted interventions.
It may cause significant confusion if the term "astronomical" is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.
Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don't we pretty much get to (ii)?
Yep, not placing extreme weight. Just medium levels of confidence that when summed over, add up to something pretty low or maybe mildly negative. I definitely am not like 90%+ confidence on the flowthrough effects being negative.
Cool. Yeah, EA funds != cause areas. Because people may think that work done by EA funds in a cause area is net positive, whereas the total of work done in that area is negative. Or they may think that work done on some cause is 1/100th as useful as another cause, but only because it might recruit talent to the other, which is the sort of hard-line view that one might want to mention.
Indeed, I took that survey one year, and the reason why I wouldn't put the difference at 10^23 or anything similarly extreme is that there are flowthrough effects of other cause areas that still help with longtermist stuff (like, GiveWell has been pretty helpful for also getting more work to happen on longtermist stuff).
I do think that as a cause area from a utilitarian perspective, interventions that affect the longterm future are astronomically more effective than things that help the short term future but are very unlikely to have any effect on the long term, or even slightly harm the longterm.
To be clear, my primary reason for why EA shouldn't entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn't the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.
To some degree my response to this situation is "let's create a separate longtermist community, so that I can indeed invest in that in a way that doesn't get diluted with all the other things that seem relatively unimportant to me". If we had a large and thriving longtermist community, it would definitely seem bad to me to suddenly start investing into all of these other things that EA does that don't really seem to check out (to me) from a utilitarian perspective, and I would be sad to see almost any marginal resources moved towards the other causes.
I'm strongly opposed to this, and think we need to be clear: EA is a movement of people with different but compatible values, dedicated to understanding how to do good. It's fine for you to discuss why you think longtermism is valuable, but it's not as though anyone gets to tell the community what values it should have.
The idea that there is a single "good" which we can objectively find and then maximize is a bit confusing to me, given that we know values differ. (And this has implications for AI alignment, obviously.) Instead, EA is a collaborative endeavor of people with compatible interests - if strong-longtermists' interests really are incompatible with most of EA, as yours seem to be, that's a huge problem - especially because many of the people who seem to embrace this viewpoint are in leadership positions. I didn't think it was the case that there was such a split, but perhaps I am wrong.
I think we don't disagree?
I agree, EA is a movement of different but compatible values, and given its existence, I don't want to force anything on it, or force anyone to change their values. It's a great collaboration of a number of people with different perspectives, and I am glad it exists. Indeed the interests of different people in the community are pretty compatible, as evidenced by the many meta interventions that seem to help many causes at the same time.
I don't think my interests are incompatible with most of EA, and am not sure why you think that? I've clearly invested a huge amount of my resources into making the broader EA community better in a wide variety of domains, and generally care a lot about seeing EA broadly get more successful and grow and attract resources, etc.
But I think it's important to be clear which of these benefits are gains from trade, vs. things I "intrinsically care about" (speaking a bit imprecisely here). If I could somehow get all of these resources and benefits without having to trade things away, and instead just build something that was more directly aligned with my values of similar scale and level of success, that seems better to me. I think historically this wasn't really possible, but with longtermist stuff finding more traction, I am now more optimistic about it. But also, I still expect EA to provide value for the broad range of perspectives under its tent, and expect that investing in it in some capacity or another will continue to be valuable.
I do think it is important to distinguish these moral uncertainty reasons from moral trade and cooperation and strategic considerations for hedging. My argument for putting some focus on near-termist causes would be of this latter kind; the putative moral uncertainty/worldview diversification arguments for hedging carry little weight with me.
As an example, Greaves and Ord argue that under the expected choiceworthiness approach, our metanormative ought is practically the same as the total utilitarian ought.
It's tricky because the paper on strong longtermism makes the theory sound like it does want to completely ignore other causes - eg 'short-term effects can be ignored'. I think it would be useful to have a source to point to that states 'the case for longtermism' without giving the impression that no other causes matter.
Just to second this because it seems to be a really common mistake- Greaves and MacAskill stress in the strong longtermism paper that the aim is to advance an argument about what someone should do with their impartial altruistic budget (of time or resources), not to tell anyone how large that budget should be in the first place.
Also- I think the author would be able to avoid what they see as a "non-rigorous" decision to weight the short-term and long-term the same by reconceptualising the uneasiness around longtermism dominating their actions as an uneasiness with their totally impartial budget taking up more space in their life. I think everyone I have talked to about this feels a pull to support present day people and problems alongside the future, so it might help to just bracket off the present day section of your commitments away from the totally impartial side, especially if the argument against the longtermist conclusion is that it precludes other things you care about. No one can live an entirely impartial life and we should recognise that, but this doesn't necessarily mean that the arguments for the rightness of doing so are wrong.
FWIW, my own views are more like 'regular longtermism' than 'strong longtermism,' and I would agree with Toby that existential risk should be a global priority, not the global priority. I've focused my career on reducing existential risk, particularly from AI, because there seems to be a substantial chance of it happening in my lifetime, the stakes are enormous, and the area is extremely neglected. I probably wouldn't have gotten into it when I did if I didn't think doing so was much more effective than GiveWell top charities at saving current human lives, and outperforming even more on metrics like cost-benefit in $.
Longtermism as such (as one of several moral views commanding weight for me) plays the largest role for things like refuges that would prevent extinction but not catastrophic disaster, or leaving seed vaults and knowledge for apocalypse survivors. And I would say longtermism provides good reason to make at least modest sacrifices for that sort of thing (much more than the ~0 current world effort), but not extreme fanatical ones.
There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration but not... (read more)
I agree with this, and the example of Astronomical Waste is particularly notable. (As I understand his views, Bostrom isn't even a consequentialist!). This is also true for me with respect to the CFSL paper, and to an even greater degree for Hilary: she really doesn't know whether she buys strong longtermism; her views are very sensitive to current facts about how much we can reduce extinction risk with a given unit of resources.
The language-game of 'writing a philosophy article' is very different than 'stating your exact views on a topic' (the former is more about making a clear and forceful argument for a particular view, or particular implication of a view someone might have, and much less about conveying eve... (read more)
I agree that it would be good to have a name for a less contentious form of longtermism similar to the one you propose, which says something like: the longterm deserves a seat at the top table with other commonly accepted near-term priorities.
I suspect one common response might be that due to normative uncertainty, we don't put all of our weight on longtermism but instead hedge across different plausible views. I haven't yet seen a defence of that view that I find compelling, so I think it would be valuable to have a less contentious version that we would be willing to stand behind in public.
I don't think I'm a proponent of strong longtermism at all — at least not on the definition given in the earlier draft of Will and Hilary's paper on the topic that got a lot of attention here a while back and which is what most people will associate with the name. I am happy to call myself a longtermist, though that also doesn't have an agreed definition at the moment.
Here is how I put it in The Precipice:
My preferred use of the term is akin to being an environmentalist: it doesn't mean that the only thing that matters is the environment, just that it is a core part of what you care about and informs a lot of your thinking.
I'm also not defending or promoting strong longtermism in my next book. I defend (non-strong) longtermism, and the definition I use is: "longtermism is the view that positively influencing the longterm future is among the key moral priorities of our time." I agree with Toby on the analogy to environmentalism.
(The definition I use of strong longtermism is that it's the view that positively influencing the longterm future is the moral priority of our time.)
It would indeed be ironic - the fact that Toby and Will are major proponents of moral uncertainty seems like more evidence in favour of the view in my top level comment.
Thank you for this post David. I'd like to add two points that emphasize how important this discussion is, and that its implications are beyond the moral stances of individuals:
1. I believe that when looking at this distinction as a movement, we should also take into account how people are put off by strong longtermism - whether we view regular longtermism as a good entry point for EA ideas, or whether we endorse it as a legitimate 'camp'. I think that the core idea of regular longtermism is very appealing when discussing the next few generations, while strong longtermism does imply disregarding current generations and thinking of "all future generations" (which obviously requires most people to think far beyond their current moral circle).
2. In practice, I think that an EA community with a welcoming space for this mid-point view would put more emphasis on interventions at a mid-point in the tradeoff between tractability (they're more likely to make a change) and importance (they're not as rewarding as preventing human extinction). We would see more emphasis than we currently have on improving institutions, interventions for improving developing economies, meta-science, and others.
I feel that EA shouldn't spend all or nearly all of its resources on the far future, but I'm uncomfortable with incorporating a moral discount rate for future humans as part of "regular longtermism" since it's very intuitive to me that future lives should matter the same amount as present ones.
I prefer objections from the epistemic c... (read more)
I wonder if a heavy dose of skepticism about longtermist-oriented interventions wouldn't result in a somewhat similar mix of near termist and longtermist prioritization in practice. Specifically, someone might reasonably start with a prior that most interventions aimed at affecting the far future (especially those that don't do so by tangibly changing something in the near term so that there could be strong feedbacks) come out as roughly a wash. This might then put a high burden of evidence on these interventions so that only a few very well founded ones w... (read more)
Should "reduction" in the quote below (my emphasis) read "increase?"
"This is hard to justify intuitively - it implies that we should ignore the near-term costs, and (taken to the extreme) could justify almost any atrocity in the pursuit of a miniscule reduction of long-term value."
Me, reading through the post: “I think I might have a minor comment to add, and for once I’m here the day of posting…”
Also me, seeing that there are already 31 comments: “Oh, well then.”
IMO, the best argument against strong longtermism ATM is moral cluelessness.