
The phrase "long-termism" is occupying an increasing share of EA community "branding". For example, the Long-Term Future Fund, the FTX Future Fund ("we support ambitious projects to improve humanity's long-term prospects"), and the impending launch of What We Owe The Future ("making the case for long-termism").

Will MacAskill describes long-termism as:

I think this is an interesting philosophy, but I worry that in practical and branding situations it rarely adds value, and might subtract it.

In The Very Short Run, We're All Dead

AI alignment is a central example of a supposedly long-termist cause.

But Ajeya Cotra's Biological Anchors report estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (eg Eliezer Yudkowsky) think it might happen even sooner.

Let me rephrase this in a deliberately inflammatory way: if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one. 

But right now, a lot of EA discussion about this goes through an argument that starts with "did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?" 

Regardless of whether these statements are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.

The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like "the hinge of history", "the most important century" and "the precipice" all point to the idea that existential risk is concentrated in the relatively near future - probably before 2100. 

The average biosecurity project being funded by Long-Term Future Fund or FTX Future Fund is aimed at preventing pandemics in the next 10 or 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know.

Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?

I think yes, but pretty rarely, in ways that rarely affect real practice.

Long-termism might be more willing to fund Progress Studies type projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries.  "Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too.

In practice I rarely see long-termists working on these except when they have shorter-term effects. I think there's a sense that in the next 100 years, we'll either get a negative technological singularity which will end civilization, or a positive technological singularity which will solve all of our problems -  or at least profoundly change the way we think about things like "GDP growth". Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes - which puts them on the same page as thoughtful short-termists planning for the next 100 years.

Long-termists might also rate x-risks differently from suffering alleviation. For example, suppose you could choose between saving 1 billion people from poverty (with certainty), or preventing a nuclear war that killed all 10 billion people (with probability 1%), and we assume that poverty is 10% as bad as death. A short-termist might be indifferent between these two causes, but a long-termist would consider the war prevention much more important, since they're thinking of all the future generations who would never be born if humanity was wiped out.
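To make the indifference explicit, here is the arithmetic under the stated assumptions (the 10% badness weight and the population figures are the stipulated numbers from the paragraph above, not real-world estimates):

```python
# Short-termist comparison, using the stipulated numbers above.
poverty_weight = 0.1                 # poverty assumed 10% as bad as death
saved_from_poverty = 1_000_000_000   # option A: certain
option_a = saved_from_poverty * poverty_weight  # 100M death-equivalents

p_war = 0.01                         # option B: 1% chance of prevention
war_deaths = 10_000_000_000
option_b = p_war * war_deaths        # 100M expected deaths averted

print(option_a, option_b)  # 100000000.0 100000000.0 - indifferent
# The long-termist breaks the tie by adding all the future generations
# lost in the extinction branch, which this short-term ledger omits.
```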

In practice, I think there's almost never an option to save 1 billion people from poverty with certainty. When I said that there was, that was a hack I had to put in there to make the math work out so that the short-termist would come to a different conclusion from the long-termist. A one-in-a-million chance of preventing apocalypse is worth 7,000 lives in expectation, which takes about $30 million with GiveWell-style charities. But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%. So I'm skeptical that problems like this are likely to come up in real life.
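For the curious, the $30 million figure can be reproduced like this (the ~$4,300-per-life GiveWell-style cost is my round assumption, and the world population is taken as 7 billion):

```python
population = 7_000_000_000
p_prevent = 1e-6          # a one-in-a-million chance of preventing apocalypse

expected_lives = population * p_prevent
print(expected_lives)     # 7000.0 lives in expectation

cost_per_life = 4_300     # assumed rough GiveWell-style cost per life saved
print(expected_lives * cost_per_life)  # ~$30 million
```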

When people allocate money to causes other than existential risk, I think it's more often as a sort of moral parliament maneuver, rather than because they calculated it out and the other cause is better in a way that would change if we considered the long-term future.

"Long-termism" vs. "existential risk"

Philosophers shouldn't be constrained by PR considerations. If they're actually long-termist, and that's what's motivating them, they should say so.

But when I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?).

I'm interested in hearing whether other people have different reasons for preferring the "long-termism" framework that I'm missing.

Comments (81)

Hey Scott - thanks for writing this, and sorry for being so slow to the party on this one!

I think you’ve raised an important question, and it’s certainly something that keeps me up at night. That said, I want to push back on the thrust of the post. Here are some responses and comments! :)

The main view I’m putting forward  in this comment is “we should promote a diversity of memes that we believe, see which ones catch on, and mould the ones that are catching on so that they are vibrant and compelling (in ways we endorse).” These memes include both “existential risk” and “longtermism”.


What is longtermism?

The quote of mine you give above comes from Spring 2020. Since then, I’ve distinguished between longtermism and strong longtermism.

My current preferred slogan definitions of each:

  • Longtermism is the view that we should do much more to protect the interests of future generations. (Alt: that protecting the interests of future generations should be a key moral priority of our time.)
  • Strong longtermism is the view that protecting the interests of future generations should be the key moral priority of our time. (That’s similar to the quote of mine you give.)

In WWOTF, I promote the weak…

Thanks for writing this! That overall seems pretty reasonable, and from a marketing perspective I am much more excited about promoting "weak" longtermism than strong longtermism.

A few points of pushback:

  • I think that to work on AI Risk, you need to buy into AI Risk arguments. I'm unconvinced that buying longtermism first really shifts the difficulty of figuring this point out. And I think that if you buy AI Risk, longtermism isn't really that cruxy. So if our goal is to get people working on AI Risk, marketing longtermism first is strictly harder (even if it may be much easier)
    • I think that very few people say "I buy the standard AI X-Risk arguments and that this is a pressing thing, but I don't care about future people so I'm going to rationally work on a more pressing problem" - if someone genuinely goes through that reasoning then more power to them!
    • I also expect that people have done much more message testing + refinement on longtermism than AI Risk, and that good framings could do much better - I basically buy the claim that it's a harder sell though
    • Caveat: This reasoning applies more to "can we get people working on AI X-Risk with their careers" than to things like broad s…

On this particular point

message testing from Rethink suggests that longtermism and existential risk have similarly-good reactions from the educated general public

I can't find info on Rethink's site - is there anything you can link to?

Of the three best-performing messages you've linked, I think the first two emphasise risk much more heavily than longtermism. The third does sound more longtermist, but I still suspect the risk-ish phrase 'ensure a good future' is a large part of what resonates.

All that said, more info on the tests they ran would obviously update my position.

So people actually quite like messages that are about unspecified, and not necessarily high-probability, threats to the (albeit nearer-term) future.

This seems correct to me, and I would be excited to see more of them. However, I wouldn't interpret this as meaning 'longtermism and existential risk have similarly-good reactions from the educated general public', I would read this as risk messaging performing better. 

Also, messages 'about unspecified, and not necessarily high-probability threats' is not how I would characterize most of the EA-related press I've seen recently (NYTimes, BBC, Time, Vox).


MaxRa
Thanks for explaining - really interesting, and glad so much careful thinking is going into communication issues!

FWIW I find the "meme" framing you use here off-putting. The framing feels kinda uncooperative, as if we're trying to trick people into believing in something, instead of making arguments to convince people who want to understand the merits of an idea. I associate memes with ideas that are selected for being easy and fun to spread, that likely affirm our biases, and that spread mostly without the constraint of whether the ideas are convincing upon reflection, true, or helpful for the brain that gets "infected" by the meme. Some support for this interpretation from the Wikipedia introduction:

I agree with Scott Alexander that when talking with most non-EA people, an X risk framework is more attention-grabbing, emotionally vivid, and urgency-inducing, partly due to negativity bias, and partly due to the familiarity of major anthropogenic X risks as portrayed in popular science fiction movies & TV series.

However, for people who already understand the huge importance of minimizing X risk, there's a risk of burnout, pessimism, fatalism, and paralysis, which can be alleviated by longtermism and more positive visions of desirable futures. This is especially important when current events seem all doom'n'gloom, when we might ask ourselves 'what about humanity is really worth saving?' or 'why should we really care about the long-term future, if it'll just be a bunch of self-replicating galaxy-colonizing AI drones that are no more similar to us than we are to late Permian proto-mammal cynodonts?'

In other words, we in EA need long-termism to stay cheerful, hopeful, and inspired about why we're so keen to minimize X risks and global catastrophic risks.

But we also need longtermism to broaden our appeal to the full range of personality types, political views, and religious views…

Based on my memory of how people thought while I was growing up in the church, I don't think increasing the number of saveable souls is something that makes sense for a Christian - or under any sort of long-termist utilitarian framework at all.

Ultimately god is in control of everything. Your actions are fundamentally about your own soul, and your own eternal future, and not about other people. Their fate is between them and God, and he who knows when each sparrow falls will not forget them.

Justin Helps
I remember my father explicitly saying that he regretted not having more children because he's since learned that God wants us to create more souls for him. Didn't make sense to me even as a Christian at the time, but the idea is out there.
pete
There are fringe movements (ex: Quiverfull) that focus on procreation as a way of living out God's will, but very few. What resonates with Christians is a "stewardship" mindset - using our God-given abilities and opportunities wisely. The Bible is full of stories of an otherwise-unspecial person being at the right time and place to make a historically impactful decision.
quinn
Eliezer's underrated fun theory sequence tackles this. 
Vasco Grilo🔸
"However, even the most fundamentalist Christians might be responsive to arguments that the total number of people we could create in the future -- who would all have save-able souls -- could vastly exceed the current number of Christians". I had thought about the above before, thanks for pointing it out!

Agree that X-risk is a better initial framing than longtermism - it matches what the community is actually doing a lot better. For this reason, I'm totally on board with "x-risk" replacing "longtermism" in outreach and intro materials. However, I don't think the idea of longtermism is totally obsolete, for a few reasons:

  • Longtermism produces a strategic focus on "the last person" that this "near-term x-risk" view doesn't. This isn't super relevant for AI, but it makes more sense in the context of biosecurity. Pandemics with the potential to wipe out everyone are way worse than pandemics which merely kill 99% of people, and the ways we prepare for them seem likely to differ. On the near-term view, bunkers and civilizational recovery plans don't make much sense.
  • S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.
  • The numbers you give for why x-risk might be the most important cause area even if we ignore the long-term future - $30 million for a 0.0001% reduction in x-risk - don't seem totally implausible. The world is b…

S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.

Why not?

An existential risk is a risk that threatens the destruction of humanity's long-term potential. But s-risks are worrisome not only because of the potential they threaten to destroy, but also because of what they threaten to replace this potential with (astronomical amounts of suffering).

MichaelStJules
I think the "short-term x-risk view" is meant to refer to everyone dying, and ignoring the lost long-term potential. Maybe s-risks could be similarly harmful in the short term, too.
Hank_B
Spreading wild animals to space isn't bad for any currently existing humans or animals, so it isn't counted under thoughtful short-termism or is discounted heavily. Same with a variety of S-risks (e.g. eventual stable totalitarian regime 100+ years out, slow space colonization, slow build up of Matrioshka brains with suffering simulations/sub-routines, etc.)
james.lucassen
Oop, thanks for the correction. To be honest I'm not sure what exactly I was thinking originally, but maybe this is true for non-AI s-risks that are slow, like spreading wild animals to space? I think this is mostly just false tho >:/

No offense to Neel's writing, but it's instructive that Scott manages to write the same thesis so much better. It:

  • is 1/3 the length
    • Caveats are naturally interspersed, e.g. "Philosophers shouldn't be constrained by PR."
    • No extraneous content about Norman Borlaug, leverage, etc
  • has a less bossy title
  • distills the core question using crisp phrasing, e.g. "Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?" (my emphasis)

...and a ton of other things. Long-live the short EA Forum post!

FWIW I would not be offended if someone said Scott's writing is better than mine. Scott's writing is better than almost everyone's.

Your comment inspired me to work harder to make my writings more Scott-like.

Thanks, I had read that but failed to internalize how much it was saying this same thing. Sorry to Neel for accidentally plagiarizing him.

I didn't mean to imply that you were plagiarising Neel. I more wanted to point out that many reasonable people (see also Carl Shulman's podcast) are pointing out that the existential risk argument can go through without the longtermism argument.

I posted the graphic below on twitter back in Nov. These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. Just because someone is in one section doesn't mean they have to be, or are, committed to others.

No worries, I'm excited to see more people saying this! (Though I did have some eerie deja vu when reading your post initially...)

I'd be curious if you have any easy-to-articulate feedback re why my post didn't feel like it was saying the same thing, or how to edit it to be better? 

(EDIT: I guess the easiest object-level fix is to edit in a link at the top to yours, and say that I consider you to be making substantially the same point...)

I'm not so sure about this. Speaking as someone who talks with new EAs semi-frequently, it seems much easier to get people to take the basic ideas behind longtermism seriously than, say, the idea that there is a significant risk that they will personally die from unaligned AI. I do think that diving deeper into each issue sometimes flips reactions - longtermism takes you to weird places on sufficient reflection, AI risk looks terrifying just from compiling expert opinions - but favoring the approach that shifts the burden from the philosophical controversy to the empirical controversy doesn't seem like an obviously winning move. The move that seems both best for hedging this, and just the most honest, is being upfront both about your views on the philosophical and the empirical questions, and assume that convincing someone of even a somewhat more moderate version of either or both views will make them take the issues much more seriously.

timunderwood
Hmmmm, that is weird in a way, but also as someone who has in the last year been talking with new EAs semi-frequently, my intuition is that they often will not think about things the way I expect them to.
Devin Kalish
Really? I didn't find their reactions very weird, how would you expect them to react?

Thanks for this post! I think I have a different intuition that there are important practical ways where longtermism and x-risk views can come apart.  I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments also, and the cases I present are probably not as thoroughly argued as they could be).
 

  • Extinction versus Global Catastrophic Risks (GCRs)
    • It seems likely that a short-termist with the high estimates of risks that Scott describes would focus on GCRs not extinction risks, and these might come apart.
    • To the extent that a short-termist framing views going from 80% to 81% population loss as equally as bad as 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might look less effective on the short-termist framing also. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.
  • Sensitivity to views of risk
    • Some people may be more sceptical of x-risk estimates this century, but might still reach the…

To the extent that a short-termist framing views going from 80% to 81% population loss as equally as bad as 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might look less effective on the short-termist framing also. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.

ALLFED-type work is likely highly cost effective from the short-term perspective; see global and country (US) specific analyses.

I don't have a strong preference. There are some aspects in which longtermism can be the better framing, at least sometimes.

I. In a "longetermist" framework, x-risk reduction is the most important thing to work on for many orders of magnitude of uncertainty about the probability of x-risk in the next e.g. 30 years. (due to the weight of the long term future). Even if AI related x-risk is only 10ˆ-3 in next 30 years, it is still an extremely important problem or the most important one. In a "short-termist" view with, say, a discount rate of 5%, it is not nearly so clear.

The short-termist urgency of x-risk ("you and everyone you know will die") depends on the x-risk probability being actually high - on the order of 1 percent, or tens of percent. Arguments for why this probability is actually so high are usually brittle pieces of mathematical philosophy (e.g. many specific individual claims by Eliezer Yudkowsky) or brittle use of proxies with a lot of variables obviously missing from the reasoning (e.g. the report by Ajeya Cotra). Actual disagreements about probabilities are often in fact grounded in black-box intuitions about esoteric mathematical concepts. It is relatively easy to come wit…

It's not clear the loss of human life dominates the welfare effects in the short term, depending on how much moral weight you assign to nonhuman animals and how their lives are affected by continued human presence and activity. It seems like human extinction would be good for farmed animals (dominated by chickens, fish and invertebrates), and would have unclear sign for wild animals (although my own best guess is that it would be bad for wild animals).

Of course, if you take a view that's totally neutral about moral patients who don't yet exist, then few of the nonhuman animals that would be affected are alive today, and what happens to the rest wouldn't matter in itself.

I think there is a key difference between longtermists and thoughtful shorttermists which is surprisingly under-discussed.

Longtermists don't just want to reduce x-risk; they want to permanently reduce x-risk to a low level, i.e. achieve existential security. Without existential security the longtermist argument just doesn't go through. A thoughtful shorttermist who is concerned about x-risk probably won't care about this existential security; they probably just want to reduce x-risk to the lowest level possible in their lifetime.

Achieving existential security may require novel approaches. Some have said AI can help us achieve it, others say we need to promote international cooperation, and others say we may need to maximise economic growth or technological progress to speed through the time of perils. These approaches may seem lacking to a thoughtful shorttermist who may prefer reducing specific risks.

timunderwood
Maybe. I mean, I've been thinking about this a lot lately in the context of Phil Torres' argument about messianic tendencies in longtermism, and I think he's basically right that it can push people towards ideas that don't have any guard rails. A total utilitarian longtermist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on earth. That, after all, is what shutting up and multiplying tells you - so the idea that longtermism makes luddite solutions to x-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing, relative to how a short-termist might feel about them, seems right to me.

Of course there is also the other direction: if there was a 1/1 trillion chance that activating this AI would kill us all, and a 999 billion/1 trillion chance it would be awesome, but if you wait a hundred years you can have an AI that has only a 1/1 quadrillion chance of killing us all, the short-termist pulls the switch, while the long-termist waits.

Also, of course, model error: any estimate where someone actually puts numbers like "1/1 trillion" on something even slightly interesting happening in the real world is a nonsense and bad calculation.

I think ASB's recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.

MacAskill (who I believe coined the term?) does not think that the present is the hinge of history. I think the majority view among self-described longtermists is that the present is the hinge of history. But the term unites everyone who cares about things that are expected to have large effects on the long-run future (including but not limited to existential risk).

I think the term's agnosticism about whether we live at the hinge of history and whether existential risk in the next few decades is high is a big reason for its popularity.

I think that the longtermist EA community mostly acts as if we're close to the hinge of history, because most influential longtermists disagree with Will on this. If Will's take was more influential, I think we'd do quite different things than we're currently doing.

I'd love to hear what you think we'd be doing differently. With JackM, I think if we thought that hinginess was pretty evenly distributed across centuries ex ante we'd be doing a lot of movement-building and saving, and then distributing some of our resources at the hingiest opportunities we come across at each time interval. And in fact that looks like what we're doing. Would you just expect a bigger focus on investment? I'm not sure I would, given how much EA is poised to grow and how comparably little we've spent so far. (Cf. Phil Trammell's disbursement tool https://www.philiptrammell.com/dpptool/)

I think if we’re at the most influential point in history “EA community building” doesn’t make much sense. As others have said it would probably make more sense to be shouting about why we’re at the most influential point in history i.e. do “x-risk community building” or of course do more direct x-risk work.

I suspect we’d also do less global priorities research (although perhaps we don’t do that much as it is). If you think we’re at the most influential time you probably have a good reason for thinking that (x-risk abnormally high) which then informs what we should do (reduce it). So you wouldn’t need much more global priorities research. You would still need more granular research into how to reduce x-risk though.

More is also being said on the possibility of investing for the future financially which isn’t a great idea if we’re at the most influential time in history.

I agree the movement is mostly “hingy” in nature but perhaps not to the same extent you do. 80,000 Hours is an influential body that promotes EA community building, global priorities research, and to some extent investing for the future.

Stefan_Schubert
I'm not sure I agree with that. It seems to me that EA community building is channelling quite a few people to direct existential risk reduction work.

My point is that you could engage in "x-risk community building" which may more effectively get people working on reducing x-risk than "EA community building" would.

Stefan_Schubert
There are a bunch of considerations affecting that, including that we already do EA community building and that big switches tend to be costly. However that pans out in aggregate, I think "doesn't make much sense" is an overstatement.

I never actually said we should switch, but if we knew from the start “oh wow we live at the most influential time ever because x-risk is so high” we probably would have created an x-risk community not an EA one.

And to be clear I’m not sure where I personally come out on the hinginess debate. In fact I would say I’m probably more sympathetic to Will’s view that we currently aren’t at the most influential time than most others are.

timunderwood
My feeling is that it went a bit like this: people who wanted to attack global poverty efficiently decided to call themselves effective altruists, and then a bunch of LessWrongers came over and convinced (a lot of) them that 'hey, going extinct is an even bigger deal', but the name still stuck, because names are sticky things.
Jay Bailey
That also depends on how wide you consider a "point". A lot of longtermists talk of this as the "most important century", not the most important year, or even decade. Considering EA as a whole is less than twenty years old, investing in EA and global priorities research might still make sense, even under a simplified model where 100% of the impact EA will ever have occurs by 2100, and then we don't care any more. Given a standard explore/exploit  algorithm, we should spend around 37% of the space exploring, so if we assume EA started around 2005, we should still be exploring until 2040 or so before pivoting and going all-in on the best things we've found.
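The ~2040 figure comes from the standard 1/e optimal-stopping rule the comment invokes; assuming a 2005 start and a 2100 deadline, the cutoff works out as:

```python
import math

start, deadline = 2005, 2100
explore_fraction = 1 / math.e   # ~37%, the classic secretary-problem rule
pivot = start + explore_fraction * (deadline - start)
print(round(pivot))             # ~2040: explore until then, then exploit
```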

Some loose data on this: 

Of the ~900 people who filled my Twitter poll about whether we lived in the most important century, about 1/3 said "yes," about 1/3 said "no," and about 1/3 said "maybe."

As Nathan Young mentioned in his comment, this argument is also similar to Carl Shulman's view expressed in this podcast: https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/

Speaking about AI risk particularly, I haven't bought into the idea that there's a "cognitively substantial" chance AI could kill us all by 2050. And even if I had, many of my interlocutors haven't either. There are two key points to get across to bring the average interlocutor on the street or at a party to an Eliezer Yudkowsky level of worrying:

  • Transformative AI will likely happen within 10 years, or 30
  • There's a significant chance it will kill us all, or at least a catastrophic number of people (e.g. >100m)

It's not trivial to convince people of either of these points without sounding a little nuts. So I understand why some people prefer to take the longtermist framing. Then it doesn't matter whether transformative AI will happen in 10 years or 30 or 100, and you only have to make the argument about why you should care about the magnitude of this problem.

If I think AI has a maybe 1% chance of being a catastrophic disaster, rather than, say, the 1/10 that Toby Ord gives it over the next 100 years or the higher risk that Yud gives it (>50%? I haven't seen him put a number to it)...then I have to go through the additional step of explaining to someone why they should care a…

Jay Bailey
The way I like to describe it to my Intro to EA cohorts in the Existential Risk week is to ask "How many people, probabilistically, would die each year from this?" So, if I think there's a 10% chance AI kills us in the next 100 years, that's 1 in 1,000 people "killed" by AI each year, or 7 million per year - roughly 17x more than malaria. If I think there's a 1% chance, AI risk kills 700,000 - still just as important as malaria prevention, and much more neglected. If I think there's an 0.1% chance, AI kills 70,000 - a non-trivial problem, but not worth spending as many resources on as more likely concerns.

That said, this only covers part of the inferential distance - people in Week 5 of the Intro to EA cohort are already used to reasoning quantitatively about things and analysing cost-effectiveness.
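The per-year numbers in that comment can be reproduced directly (the 7 billion population and ~400,000 annual malaria deaths are rough round figures):

```python
population = 7_000_000_000
malaria_per_year = 400_000        # rough annual malaria deaths

for p_century in (0.10, 0.01, 0.001):
    p_year = p_century / 100      # naive uniform spread over the century
    expected = population * p_year  # probabilistic deaths per year
    print(f"{p_century:.1%}: {expected:,.0f}/yr "
          f"({expected / malaria_per_year:.1f}x malaria)")
```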

Thank you for writing this! This helped me understand my negative feelings towards long-termist arguments so much better. 

In talking to many EA university students and organizers, I've found that many of them have serious reservations about long-termism as a philosophy, but not as a practical project, because long-termism as a practical project usually means "don't die in the next 100 years" - something we can pretty clearly make progress on (which matters, since the usual objection is that maybe we can't influence the long-term future).

I've been frustrated that in the intro fellowship and in EA conversations we must take such a strange path to something so intuitive: let's try to avoid billions of people dying this century. 

Scott, thanks so much for this post.  It's been years coming in my opinion.  FWIW, my reason for making ARCHES (AI Research Considerations for Human Existential Safety) explicitly about existential risk, and not about "AI safety" or some other glomarization, is that I think x-risk and x-safety are not long-term/far-off concerns that can be procrastinated away.  

https://forum.effectivealtruism.org/posts/aYg2ceChLMRbwqkyQ/ai-research-considerations-for-human-existential-safety  (with David Krueger)

Ideally, we need to engage as many researchers as possible, thinking about as many aspects of a functioning civilization as possible, to assess how A(G)I can creep into those corners of civilization and pose an x-risk, with cybersecurity / internet infrastructure and social media being extremely vulnerable fronts that are easily salient today.  

As I say this, I worry that other EAs will get worried that talking to folks working on cybersecurity or recommender systems necessarily means abandoning existential risk as a priority, because those fields have not historically taken x-risk seriously.   

However, for better or for worse, it's becoming increasingly e... (read more)

I think this post is mistaken. If I remember correctly (I'm not an expert), AI experts and attendees at an x-risk conference put the chance that AI kills us all at only around 5-10%, per a paper from Katja Grace. Only AI safety researchers think AI doom is a highly likely default (presumably due to selection effects). So from a near-termist perspective, AI deserves relatively less attention.

Bio-risk and climate change, and maybe nuclear war, on the other hand, I think are all highly concerning from a near-termist perspective, but unlikely to kill EVERYONE, and so relatively low priority for long-termists.

3
Linch
"only" 5-10% of ~8 billion people dying this century is still 400-800 million deaths! Certainly higher than e.g. estimates of malarial deaths within this century!  What's the case for climate change being highly concerning from a near-termist perspective? It seems unlikely to me that marginal $s in fighting climate change are a better investment in global health than marginal $s spent directly on global health. And also particularly unlikely to be killing >400 million people.  I agree some biosecurity spending may be more cost-effective on neartermist grounds. 
3
Jordan Arel
Hmm… I’d have to think more carefully about it; that was very much off-the-cuff. I mostly agree with your criticism. I think I was mainly thinking that bio-risk makes the most sense as a near-termist priority and so would get most x-risk funding until solved, since it is much more tractable than AI risk. Maybe that's the main point I’m trying to make, so the spirit of the post seems off: near-termist x-risky stuff would mostly fund bio-risk, and long-termist x-risky stuff would mostly go to AI.

Imagine it's 2022. You wake up and check the EA forum to see that Scott Alexander has a post knocking the premise of longtermism, and it's sitting at 200 karma. On top of that, Holden Karnofsky has a post saying he may be only 20% convinced that x-risk itself is overwhelmingly important. Also, Joey Savoie is hanging in there.

Obviously, I’ll write in to support longtermism.

Below is one long story about how some people might change their views; in this story, x-risk alone wouldn't work.

TL;DR: Some people think the future is really bad and don't value it. You need something besides x-risk to engage them, like a competent and coordinated movement to improve the future. Without this, x-risk and other EA work might be meaningless too. The explanation below has an intuitive or experiential quality, not a numerical one. I don't know if this is actually longtermism.

Many people don't consider future generations valuable because they have a pessimistic view of human society. I think this is justifiable. 

Then, if you think society will remain in its current state, it's reasonable that you might not want to preserve it. If you only ever think about one or two generations into the future, like I think most people do, it's hard to see the possibility of change. So I think this "negative" mentality is self-reinforcing; they're stuck.

To these people, the idea of x-risk doesn't make sense, not because these dangers aren't real but because there isn't anything to preserve. To these people, giant numbers like 10^30 are especially unconvincing, because they seem silly and, if anything, we owe the future a small society.

I think the above is an incredibly ma... (read more)

Are there actually any short-termists? E.g., people who have nonzero pure time preference?

4
Vanessa
IMO everyone has pure time preference (descriptively, as a revealed preference). To me it just seems commonsensical, but it is also very hard to make mathematical sense of rationality without pure time preference, because of issues with divergent/unbounded/discontinuous utility functions. My speculative first-approximation theory of pure time preference for humans is: choose a policy according to minimax regret over all exponential time-discount constants, starting from around the scale of a natural human lifetime and going to infinity. For a better approximation, you also need to account for hyperbolic time discounting.

Can't you get the integral to converge with discounting for exogenous extinction risk and diminishing marginal utility? You can have pure time preference = 0 but still have a positive discount rate.

The question is, what is your prior about extinction risk? If your prior is sufficiently uninformative, you get divergence. If you dogmatically believe in extinction risk, you can get convergence, but then it's pretty close to having intrinsic time discount. To the extent it is not the same, the difference comes from privileging hypotheses that are harmonious with your dogma about extinction risk, which seems questionable.
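A minimal version of the integral being discussed (my sketch, assuming a constant utility flow $u$, pure time preference rate $\rho$, and a constant exogenous extinction hazard $\delta$, so survival to time $t$ has probability $e^{-\delta t}$):

```latex
V = \int_0^\infty e^{-\rho t}\, e^{-\delta t}\, u \,\mathrm{d}t = \frac{u}{\rho + \delta}
```

This converges whenever $\rho + \delta > 0$: zero pure time preference ($\rho = 0$) still yields a finite value so long as the hazard rate $\delta$ is known to be bounded away from zero. But if your prior puts weight on arbitrarily small $\delta$, the expectation over $\delta$ can diverge again, which is the worry about uninformative priors above.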

7
Michael_Wiebe
Yes, if the extinction rate is high (and precise) enough, then it converges, but otherwise not. Regarding your first comment, I'm focusing on the normative question, not the descriptive one (i.e. what should a social planner do?). So I'm asking if there are EAs who think a social planner should have nonzero pure time preference.
0
Vanessa
I dunno if I count as "EA", but I think that a social planner should have nonzero pure time preference, yes.
2
Michael_Wiebe
Why?
2
Vanessa
Because, ceteris paribus, I care about things that happen sooner more than about things that happen later. And, like I said, not having pure time preference seems incoherent. As a meta-sidenote, I find that arguments about ethics are rarely constructive, since there is too little in the way of agreed-upon objective criteria and too much in the way of social incentives to voice / not voice certain positions. In particular, when someone asks why I have a particular preference, I have no idea what kind of justification they expect (from some ethical principle they presuppose? evolutionary psychology? social contract / game theory?)
7
JackM
This is separate from the normative question of whether people should have zero pure time preference when evaluating the ethics of policies that will affect future generations. Surely the fact that I'd rather have some cake today rather than tomorrow cannot be relevant when I'm considering whether I should abate carbon emissions so my great-grandchildren can live in a nice world - these simply seem like separate considerations with no obvious link to each other. If we're talking about policies whose effects don't (predictably) span generations, I can perhaps see the relevance of my personal impatience, but otherwise I don't. Also, having a non-zero pure time preference has counterintuitive implications. From here: So if hypothetically we were alive around King Tut's time and were given the mandatory choice to either torture him or, with certainty, cause the torture of all 7 billion humans today, we would easily choose the latter with a 1% rate of pure time preference (which seems obviously wrong to me). If you do want a non-zero rate of pure time preference, you will probably need it to decline quickly over time to make much ethical sense (see here and my explanation here).
3
Vanessa
I am a moral anti-realist. I don't believe in ethics the way utilitarians (for example) use the word. I believe there are certain things I want and certain things other people want, and we can coordinate on that. And coordinating on that requires establishing social norms, including what we colloquially refer to as "ethics". Hypothetically, if I have time preference and other people don't, then I would agree to coordinate on a compromise. In practice, I suspect that everyone has time preference. You can avoid this kind of conclusion if you accept my decision rule of minimax regret over all discount timescales from some finite value to infinity.
3
JackM
Most people do indeed have pure time preference in the sense that they are impatient and want things earlier rather than later. However, this says nothing about their attitude to future generations. Being impatient means you place more importance on your present self than your future self, but it doesn't mean you care more about the wellbeing of some random dude alive now than another random dude alive in 100 years. That simply isn't what "impatience" means. For example - I am impatient. I personally want things sooner rather than later in my life. I don't however think that the wellbeing of a random person now is more important than the wellbeing of a random person alive in 100 years. That's an entirely separate consideration to my personal impatience.
1
Guy Raveh
I mean, physics solves the divergence/unboundedness problem, with the universe achieving heat death eventually. So one can assume some distribution on the time bound, at the very least. Whether that makes having no time discount reasonable in practice, I highly doubt.
4
MichaelDickens
I don't know of any EAs or philosophers with a nonzero pure time preference, but it's pretty common to believe that creating new lives is morally neutral. Someone who believes this might plausibly be a short-termist. I have a few friends who are short-termist for that reason.
1
Michael_Wiebe
Hmm, is it consistent to have zero pure time preference and be indifferent to creating new lives?
2
MichaelDickens
Yeah, the two things are orthogonal as far as I can see. The person-affecting view is perfectly consistent with either a zero or a nonzero pure time preference.
1
Michael_Wiebe
Okay, so you could hold the person-affecting view and be indifferent to creating new lives, but also have zero pure time preference in that you don't value future lives any less because they're in the future. So this is really getting at creating new lives vs how to treat them given that they already exist.

Yes! Thanks for this Scott. X-risk prevention is a cause that both neartermists and longtermists can get behind. I think it should be reinstated as a top-level EA cause area in its own right, distinct from longtermism (as I've said here).

if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know.

It's a sobering thought. See also: AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.

Longtermism ≠ existential risk, though it seems the community has more or less decided they mean similar things (at least at our current point in history).

Here is an argument to the contrary, "the civilization dice roll": current human society becoming grabby will be worse for the future of our lightcone than the counterfactual society that will (or might) exist and end up becoming grabby if we die out / our civilization collapses.

Now, to directly answer your point on x-risk vs longtermism, yes you are correct. Fear mongering will always trump empathy mo... (read more)

But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%.

Agreed. Linch's .01% Fund post proposes a research/funding entity that identifies projects that can reduce existential risk by 0.01% for $100M-$1B. This is 3x-30x as cost-effective as the quoted text and targets a reduction 100x the size.
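As a sanity check on the 3x-30x figure (a toy comparison of cost per 0.0001% of risk reduction, using only the numbers quoted above):

```python
# Cost per unit of x-risk reduction, where 1 unit = 0.0001%.
quoted_cost_per_unit = 30e6 / 1    # quoted text: $30M buys 0.0001% (1 unit)
linch_low = 100e6 / 100            # .01% Fund, low end:  $100M buys 0.01% (100 units)
linch_high = 1e9 / 100             # .01% Fund, high end: $1B   buys 0.01% (100 units)

print(quoted_cost_per_unit / linch_high)  # -> 3.0  (worst case: 3x as cost-effective)
print(quoted_cost_per_unit / linch_low)   # -> 30.0 (best case: 30x as cost-effective)
```

This treats risk reduction as linear in dollars, which is certainly not true at scale; it only checks the multiples in the comment.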

A key difference also surrounds which risks to care about more:

  • all global catastrophic risks or only likely existential ones

and what to do about them

  • focus on preventing them/reducing their suffering and deaths… or make them survivable by at least a small contingent to repopulate the world/universe.

If I don’t have a total population utilitarian view (which seems to me like the main crux belief of longtermism), I may not care as much about the extinction part of the risks.

I have been working on a tweet length version of this argument for a while. I encourage someone to beat me to it. I agree with Neel and Scott (and Carl Shulman) that this argument is much more succinct and emotive and I think I should get better at making it.

Something like:

[quote tweeting a poll on survival to 2100] 38% of my followers think there is a > 5% chance all humans are dead by 2100. Let's assume they are way wrong and it's only 0.5%.

[how does this compare to other things that might kill you]

[how does this compare in terms of spending to how much ought to be spent to how much is]

3
Nathan Young
Here is v1.0. Can you do better? https://twitter.com/NathanpmYoung/status/1512000005254664194?s=20&t=LnIr0K87oWgFlqP6qKH4IQ

In practice, I think there's almost never an option to save 1 billion people from poverty with certainty. When I said that there was, that was a hack I had to put in there to make the math work out so that the short-termist would come to a different conclusion from the long-termist.

GiveDirectly could get pretty high probabilities (or close for a smaller number of people at lower cost), although it's not the favoured intervention of those focused on global health and poverty.

Another notable remaining difference is that extinction is all or nothing, so your ... (read more)

projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries

Michael Wiebe comments: "Can we please stop talking about GDP growth like this? There's no growth dial that you can turn up by 0.01, and then the economy grows at that rate forever. In practice, policy changes have one-off effects on the level of GDP, and at best can increase the growth rate for a short time before fading out. We don't have the ability to increase the growth rate for many centuries."

"Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too.

This is the first reference I have seen to norm-changing in EA. Is there other writing on this idea?

Hello

At a lecture I attended, a leading banker said "long-term thinking should not be used as an excuse for short-term failure". At the time, he was defending short-term profit-making as against long-term investment, but when applied to discussions of longtermism, the point is similar. Our policies and actions can only be implemented in the present and must succeed in the short term as well as the long term. This means careful risk assessment/management, but as the future can never be predicted with absolute certainty, the long-term ef... (read more)

Let me rephrase this in a deliberately inflammatory way: if you're under ~50, unaligned AI might kill you and everyone you know.

I'm not sure how we can expect the public, or even experts, to meaningfully engage with a threat as abstract, speculative, and undefined as unaligned AI when very close to the entire culture, including experts of all kinds, relentlessly ignores the very easily understood nuclear weapons, which literally could kill us all right now, today, before we sit down to lunch.

What I learned from studying nuclear weapons as an average citizen is th... (read more)
