I want to draw a distinction between the longtermist thesis, and the longtermist frame.
- A longtermist thesis is an explicit statement that provides direct support for investing in canonically 'longtermist' cause areas. Think "Most of the moral value is in the far-future", or "X-risk this century is high, and we should do something about it". These are the kinds of crisp propositions which might be defended in a philosophy paper.
- The longtermist frame refers to a specific collection of “concepts, beliefs, heuristics, instincts, habits, mental models, and values”, which tend to activate as a bundle. Like other frames, the longtermist frame does not decompose neatly into a set of propositions that can easily be assigned credences or evaluated as true or false.
I view the longtermist frame as the generator for many of the more specific propositions endorsed by longtermists. I see it as a set of archetypes and dispositions—motivating ideals we use to guide our informal reasoning, even if we don’t always live up to them perfectly. I outline these dispositions in this post.
If you’re wondering why this might matter, you can read the next section.
Why Care?
- Historically, the longtermist community has been a large driver of research into topics like AI safety and biosecurity, which have since garnered public attention. I think this prescience stems from applying certain epistemic ideals, which makes the longtermist frame worth analyzing.
- More specifically, I expect the future to throw up a host of surprises. I’d like morally motivated actors to have similar foresight going forward. Outlining the longtermist frame helps serve that end.
- Clearly conveying the longtermist frame promotes transparency. As we’ve been influential in generating concerns about certain causes, other people may naturally wonder why we’ve raised such concerns.
- Although some of the more adversarial explanations trying to describe longtermists are likely to seem confused (or even conspiratorial) to longtermists themselves, there’s a reason these takes have emerged. I think that articulating the longtermist frame is a way of identifying ourselves. If we don't try to identify ourselves, then others will, to the likely detriment of causes we care about.
With that in mind, here are some archetypes which help form the longtermist frame.
Epistemic Chutzpah
Epistemic Chutzpah is about pursuing a values-first, rather than a methods-first approach to reasoning.
We can contrast Epistemic Chutzpah with scientism, standardly used to refer to an overreach of scientific methods into places where they’re less suited. For my purposes, ‘scientism’ will be a view claiming either: (i) most important questions can be answered by the methodologies prevalent in the academic sciences, or (ii) if questions can’t be answered using standard academic methodologies, it’s an epistemic free-for-all.
Epistemic Chutzpah rejects scientism. More concretely, Epistemic Chutzpah is exemplified here, by Rob Wiblin:
“I think [there’s a tendency which is] common across many areas of science and engineering — where people learn that the best way to do their job … [involves trying] to be super empirical and ignore theoretical arguments even when they kind of sound good … I think [this is] actually an incredibly dangerous tendency when you’re dealing with technologies [that] could be dangerous … where you can’t be confident that the trend that you’re currently working with is simply going to continue.
The only way … to [predict] the fog of where your research might lead you is by being somewhat more theoretical, [and] more conceptual — because you can’t do the empirics yet, [as] you’re trying to estimate where you’ll be in 5, 10, or 15 years, several steps down the road.”
For me, epistemic chutzpah evokes the image of a concerned parent, whose child is suffering from a puzzling ailment. I imagine them increasingly frustrated with inconclusive hospital visits, and looking to Google to find out what they can. They find no peer-reviewed studies, with only a handful of doctors informally speculating about similar cases on personal blogs. I imagine them wondering, on optimistic days, whether their child is fine, and asking themselves what to do. I imagine them considering whether they should declare an absence of rigorous evidence, and simply hope for the best.
And I imagine them saying No. The parent stays true to what they value, and they try. They do what they can, squeezing out every ounce of information from all available material, and finding a way to act, somehow — even if they’re uncertain, and even if it means making speculative bets on treatment. In the absence of clear empirics, their task is harder. But they try. They know – rationally, when they can bear to think about it – that they’re more likely to fail than in a domain with clearer empirics. It doesn’t matter. They try anyway.
Epistemic Chutzpah is about acknowledging that you're faced with tricky problems, and doing what you can anyway — even if it requires theory, speculation, and a tolerance for less rigorous argumentation. You realize wild-animal suffering could be a big deal, so you read a bunch of reports, and put together estimates for the number of wild animals. You hear about early arguments for AI x-risk, and try to find ways ML research would be helpful for a problem many seem to be ignoring. When evidence is sparse, our worries could turn out to be unfounded. But that’s where the discourse starts, not where it ends.
Scout Mindset
Of course, people are drawn to scientism for a reason. When you lack unambiguous empirical data, it’s easy to fool yourself with clever theories. While Epistemic Chutzpah opens the door to reasoned speculation when evidence is lacking, Scout Mindset describes a disposition that allows us to productively engage in such speculation without fooling ourselves.
Specifically, Scout Mindset involves reasoning about speculative topics while holding our theories lightly, and searching for flaws.[1] Longtermists and their critics alike may wish for more rigorous evidence. However, part of Scout Mindset is acknowledging that rigorous evidence will be hard to find, and searching for other ways we might be tricking ourselves: perhaps by funding relevant prediction markets, making public bets on our beliefs, or trying to improve our calibration. This stress-testing is part of Scout Mindset.
Although we sometimes use theoretical reasoning to reach unusual conclusions, I see Scout Mindset embodied in the acknowledgement that we’re unaware “of anyone who successfully took complex deliberate *non-obvious* action many years or decades ahead of time on the basis of speculation about how some technology would change the future”, and treating that as a sign of caution — not hopelessness. Epistemic Chutzpah gives you the confidence to engage in speculative reasoning at all. Scout Mindset prevents you from fully trusting your speculative theories — at least not immediately, and not without connecting them to concrete mechanisms.
In practice, most longtermists (correctly) don’t put all of their weight behind speculative reasoning. You can see this in some of our metaphors. We don’t actually “step off the train to Crazy Town”, because there’s not actually a train – it’s our own colorful idiom for claims like “I don’t accept this argument; if the theory has that conclusion, I guess I don’t believe the theory”.
Heartfelt Abstraction
One common route into EA is noticing a disconnect between what feels real (say, a child drowning in front of you), and what you intellectually judge to be real (say, the 1.5 million children dying of vaccine-preventable diseases every year). You might decide that you want your actions to be driven by your judgments of what is likely to be real, and not just what feels real. I'll call this cognitive move Heartfelt Abstraction: an attempt to minimize the gap between knowing and internalizing.
While Heartfelt Abstraction is clearly related to scope-sensitivity, I think it's broader. This is because large numbers are not the only thing that's hard to internalize. Even if there's a part of your mind which believes the arguments for AI risk, or assigns a considerable chance to AGI by 2030, these claims, too, might not feel fully real. Prior to COVID, you might have had similar thoughts about a global pandemic. To be driven by Heartfelt Abstraction is to be driven by an attempt to live up to your more abstract judgments, rather than by what feels most viscerally real.
Exercising Heartfelt Abstraction can be especially unfun for longtermists. If you treat claims like “AGI is plausibly coming within the next 10 years” as actually real, it might suggest that you should make unusual decisions, in ways that may be alien to others. Even in more mundane cases, Heartfelt Abstraction can sound bad, or silly.
Daily life demonstrates this. My family like to hear about what I'm interested in, and I've mentioned, off-hand, that some of my friends research the welfare of digital minds. Upon seeing their faces, I'll realize that I've failed to communicate effectively. I've failed to make the topic feel actually real to my interlocutors.
In these situations, one could (correctly) point out that discussions of digital minds play a limited role in actual longtermist discussions, and shift the topic to something more respectable. If weird topics have already been brought up, I tend to think we shouldn't do this. I think the initial shock is reasonable, but it conceals a question worth meeting on its own terms: why do (even a minority of) longtermists consider the issue at all, given all of the world's obviously real contemporary problems?
The answer, for me at least, would make reference to Heartfelt Abstraction. It's Heartfelt Abstraction which engenders a level of distrust towards claims like “let's wait for abstractions to feel real; once reality hits us over the head, we can intervene then.” If we (for example) create digital minds with the capacity for suffering and joy, then we might not treat them well, and Reality might not wait for abstractions to feel real to us before it inflicts tragedy. Atomic bombs seemed real to the victims of Hiroshima, who are no more alive because the cause of their deaths was judged speculative ten years earlier. Catastrophic harms can emerge via many routes. They won’t always hit you over the head before they happen.
Conclusion
I think the dispositions outlined above capture distinctive aspects of the longtermist frame.
We can see this by examining some of longtermism's critics, like Thorstad. He objects to longtermist epistemics, citing different attitudes toward standard scientific practices, like anonymized pre-publication peer review. Compared to longtermists, Thorstad is more skeptical of the epistemic progress that can be achieved outside standard institutions. Longtermist concerns are often speculative. Still, even once we grant that many longtermist concerns are ill-suited to publication in reputable academic journals,[2] a separate question remains: how much value is provided by trying to carefully reason about speculative questions, and is this worthwhile even when the results are ill-suited for academic journals?
Differing answers to that question appear downstream of judgments about Epistemic Chutzpah. I see critics like Thorstad as more pessimistic about research outside the standard academic system, with longtermists (to varying degrees) on the more optimistic end.
Longtermism also seems unusual in explicitly recognizing Scout Mindset as a virtue. I view the unusual prominence of prediction markets and funding contests as an attempt to inculcate a culture of Scout Mindset, by promoting institutions that stress-test some of longtermists' more speculative arguments. Open Phil has supported Metaculus. So has the Infrastructure Fund, with the aim of encouraging a “culture of accountable and accurate predictions and scenario planning”.
Finally, I think claims that long-term risks are “distractions” from more immediate concerns arise from a failure to heartfully abstract. A failure to properly inhabit the world where issues of AI risk could actually occur — before we're hit over the head with obvious arguments, and obvious tragedies. I think Heartfelt Abstraction requires at least considering (even if ultimately rejecting) the possibility that we live in a world where longtermists are like Leo Szilárd — a Hungarian physicist who presciently planned to rescue fleeing academics from Nazi Germany before he was forced to flee himself,[3] and who forecast that the Germans would create an atomic bomb to be used in war.[4] I doubt Szilárd could have published his concerns in a peer-reviewed journal, and his views were not universally endorsed. But he genuinely seemed to get something right, and took his arguments seriously enough to take action.
Some have criticized the historic promotion of the longtermist thesis, and they might be right. But I think there's something valuable in the longtermist frame, and we should promote it more.
- ^
I should note that I'm attempting to broadly characterize the longtermist frame. I'm not providing an extensive account of how individual longtermists actually reason in practice. Longtermists often fail to live up to Scout Mindset. I view the longtermist frame as a set of informal dispositions which longtermists would explicitly recognize as valuable, and strive towards, with varying degrees of success.
- ^
Though not all of our concerns. I do think that we could’ve done better on this front, historically.
- ^
He left Berlin a day before the trains became full and the borders became guarded.
- ^
Szilárd filed a patent on the chain reaction in 1934, and advised the British War Office to keep it secret.
