This article expresses a concern that, despite its appeal, the Long Reflection could go quite badly, because its restrictions on physical progress could undermine our rationality and capacity for moral progress. There have been a variety of pieces discussing the Long Reflection (see e.g. those linked here); it is possible that a steelman version of what the authors really meant (or should have meant) would not be subject to this critique. Please consider this a traditional red-team exercise: thinking about what could go wrong if we attempted to implement the plan, including plausible ways it might be mis-implemented, contrary to the intentions of the original authors.
Unless otherwise noted, all quotes are from Will MacAskill’s book What We Owe the Future, pp. 98-102.
What is the Long Reflection?
The Long Reflection is a plan for an extended period of time, after we have successfully pushed existential risk down to very low levels, during which humanity will avoid making any irreversible decisions and will instead try to figure out what we should be doing, before then moving on to execute that plan.
“I call it the long reflection, which is you get to a state where existential risks or extinction risks have been reduced to basically zero. It’s also a position of far greater technological power than we have now, such that we have basically vast intelligence compared to what we have now, amazing empirical understanding of the world, and secondly tens of thousands of years to not really do anything with respect to moving to the stars or really trying to actually build civilization in one particular way, but instead just to engage in this research project of what actually is of value. What actually is the meaning of life? And have, maybe it’s 10 billion people, debating and working on these issues for 10,000 years because the importance is just so great. Humanity, or post-humanity, may be around for billions of years. In which case spending a mere 10,000 [years] is actually absolutely nothing.” (source)
I think this ‘figuring out’ falls into basically two categories. Partly it will involve thinking, arguing, debating and so on, in the traditional Enlightenment/Academic/Rational style, as we attempt to marshal more evidence and arguments, and evaluate them correctly. This seems like a prima facie reasonable approach to abstract philosophical issues like Wei Dai’s questions. As a philosophy major, I appreciate the idea on an instinctive level!
“[A] stable state of the world in which we are safe from calamity and we can reflect on and debate the nature of the good life working out what the most flourishing society would be.”
There will also be less abstract elements, including social experimentation between different groups and migration, to determine how people most like living according to their revealed preferences rather than mere cheap talk.
“...increasing cultural and intellectual diversity if possible… we would want to structure things such that, globally, cultural evolution guides us towards morally better views and societies … Fairly free migration would also be helpful. If people emigrate from one society to another, that gives us at least some evidence that the latter society is better for those who migrated there.”
However, some forms of experimentation and contest will not be favoured. In particular, fecundity, economic growth and military prowess seem not to be valued. Space colonisation is vetoed; man must remain in his cradle until he has reached moral maturity:
“That one society has greater fertility than another or exhibits faster economic growth does not imply that that society is morally superior ...
“It would therefore be worth spending many centuries … before spreading across the stars”
It is not entirely clear that economic growth is strictly prohibited during the Long Reflection, though the prohibition on interstellar colonisation presumably imposes some ultimately binding resource constraints. However, I think fairly harsh anti-growth attitudes are a natural interpretation of the Long Reflection, and one that might be chosen by future generations absent pushback, so that scenario is what I am red-teaming.
Truth-seeking requires Grounding in Reality
My concern is that these constraints will significantly separate our deliberations from reality. Historically, progress has often been driven by necessity. Primitive tribes can become only so self-destructive before they lose the ability to hunt effectively. Later, the pressures of war rewarded groups that could understand the world, selecting against those who turned inwards. In peacetime, commerce rewards firms and individuals who understand how the world works, and how best to satisfy people’s desires with the resources available. Artists often produce better work when solving for some constraints, rather than being given a totally free remit and a blank canvas of arbitrary pixel manipulation. The world provides information, incentivises using that information rationally, and selects against those who do not.
Removing these constraints seems like it could significantly damage our ability to seek the truth. Sakoku-era Japan may have shut out the world, but to my knowledge it did not take advantage of its isolation to produce any great advance.
The Long Reflection’s primary concern is with the discovery of moral truths, so you could argue that these sorts of processes are not helpful. Competition and challenge result in striving for physical mastery, but not moral truths, because of the is-ought gap. Perhaps the orthogonality thesis means we can achieve arbitrary moral progress in an arbitrarily backward physical environment.
A response open to some metaethical views is that at least some moral truths are closely entangled with empirical facts - that things like the value of autonomy, or of love, are inextricably related to what we know about the consequences of giving people freedom to make choices or of partaking in loving relationships.
More compelling, I think, is the response that reasoning about the physical world trains and rewards beneficial habits of careful thought - open-mindedness to new arguments, the ability to follow chains of logic, and so on - that can then be usefully deployed in moral philosophy. Even if the engineers aren’t the ones doing philosophy, engineering raises the status of logical thinking and ensures the required background is readily available.
One way of thinking about ideas is Dawkins’s meme theory, according to which ideas, via their human hosts, undergo reproduction and mutation, and hence natural selection for memetically ‘fitter’ ideas. Memetic fitness can have many components; for example, memes that make their adherents more likely to survive and procreate will be fitter, all else equal, assuming they are at least somewhat heritable. But this effect would be significantly reduced in the Long Reflection, with little (no?) war, population caps, and no space colonisation. Deliberate selection by humans attempting to rationally choose memes (the objective of the Long Reflection) would remain, which is a positive. But irrational components of memetic spread would also remain - memes that were better at ‘hacking’ human psychology and sociology to spread themselves in a viral way. These methods include being fun to believe, signalling some desirable property, coordinating demands for resources for fellow believers, and stigmatising non-believers. My concern is that, absent the clear eye and sharpened power of war, or the animating contest of freedom and progress, these arational drivers of memetic spread will dominate over the rational.
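To make this concrete, here is a minimal replicator-dynamics sketch. Everything in it - the two-component fitness model, the `survival_weight` knob, and the numbers - is my own illustrative assumption, not anything from Dawkins or the book; it simply shows how removing reality-based selection changes which meme wins.

```python
# Toy replicator dynamics for memes (illustrative assumptions only).
# Each meme has a survival-linked fitness component (how well its
# adherents fare in reality) and a virality component (how well it
# hacks human psychology). survival_weight models how strongly
# reality-based selection operates; the worry is that the Long
# Reflection pushes it towards zero.

def evolve(freqs, survival, virality, survival_weight, steps=200):
    """Iterate discrete-time replicator dynamics on meme frequencies."""
    for _ in range(steps):
        fitness = [survival_weight * s + v
                   for s, v in zip(survival, virality)]
        mean_fitness = sum(f * x for f, x in zip(fitness, freqs))
        freqs = [x * f / mean_fitness for x, f in zip(freqs, fitness)]
    return freqs

# Meme A: accurate but dull. Meme B: wrong but fun to believe.
survival = [1.0, 0.5]
virality = [0.5, 1.0]
start = [0.5, 0.5]

print(evolve(start, survival, virality, survival_weight=1.0))
# -> meme A dominates: [~1.0, ~0.0]
print(evolve(start, survival, virality, survival_weight=0.0))
# -> meme B dominates: [~0.0, ~1.0]
```

With the survival-linked component switched off, the psychologically catchier meme takes over regardless of how its adherents fare in reality - which is exactly the dynamic I worry the Long Reflection would leave unchecked.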
A telling example of this, I think, is the rise of largely non-truth-orientated ‘woke’ memes in recent years. These seem to have become much more entrenched in universities than in private business. Even though the former are ostensibly dedicated to the impartial pursuit of truth, the difficulty of objectively determining success - and the lack of the inherent negative feedback of P&L - has left the academy much more vulnerable. Business is not perfect, but most investors and employers care dramatically more about the productivity of their employees than about their beliefs, and few firms are willing to pay significantly higher prices to patronise right-thinking suppliers. In academia, by contrast, the opinion of your peers is all-important, and those peers will suffer few if any negative consequences for bias against you in hiring, publication, promotion or dismissal.
Additionally, I think it is a lot more difficult than many people realise to just hit ‘pause’ on economic growth. Growth allows the majority of people to experience progression and advancement over their lives; in its absence, society becomes essentially zero-sum. In societies where collaboration for mutual profit is impossible, and output is divorced from reward, effort is reallocated away from socially beneficial production into political manoeuvring for more resources. Moral appeals are often a valuable tool in such conflicts, but this form of competition does not promote honest truth-seeking moral reflection: it selects for moral propaganda, whose conclusion - that the author is morally deserving - was written in advance.
There are some protections against this in the Long Reflection. Some of the dark arts will be prohibited as unconducive to truth-seeking.
"It seems that techniques for duping people - lying, bullshitting, and brainwashing - should be discouraged, and should be especially off limits for people in positions of power, such as those in political office."
Here I think a great deal depends on how exhaustive this index artium prohibitorum is intended to be. When I think about the issues afflicting universities, explicit lying seems less of a problem than softer failures like p-hacking and social desirability bias. An insidious memeplex doesn’t need its adherents to explicitly lie if it can make people mentally flinch before pursuing some thought or line of research that they know might result in social sanction, or keep re-running their analysis until they get the results they ‘know’ are correct. When we think about the epistemic standards we expect in EA or on LW, merely not actively lying is a very low bar: beyond this, we expect people to exhibit a scout mindset, to use the principle of charity, to welcome dissent, and to undergo pre-mortems and red-teaming. If the entirety of humanity is going to be dedicated to philosophical inquiry for hundreds of years, I would expect at least as high a standard.
And if moral progress turns out to be impossible? I think I’d prefer we at least made physical progress, while the anchor of competition and survival prevents too much arbitrary value drift over time (though perhaps not!).
Does immigration address this?
Immigration, and especially emigration, could potentially provide such a check during the Long Reflection, albeit one constrained by the high costs involved in moving. Historically, people’s ability to flee to the hills has been a constraint on the ability of empires to tyrannise their populations; it is no surprise that communist regimes have had to keep their people in at gunpoint. It is certainly true that people’s decisions about where to move are a highly credible signal of which societies seem better to them.
However, I think this is unsatisfactory, for two reasons.
Firstly, it does not deal with parasitism. If one society is very effective at begging or extorting resources from others, it could appear to be a quite pleasant place to live. One example of this is the issue of anti-natalism. Consider two societies: one has a high birth rate, and its people emigrate to the other, which is very slightly happier but produces no children of its own. A net-immigration-flow metric will judge the latter to be better, even though it could not exist without the former.
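A toy calculation makes the failure mode explicit; the numbers below are invented purely for illustration:

```python
# Hypothetical numbers, purely for illustration.
# Society A has a high birth rate; society B produces no children but
# is very slightly happier, so everyone raised in A moves to B.

births = {"A": 100, "B": 0}          # people produced per generation
happiness = {"A": 0.90, "B": 0.91}   # B is only marginally nicer
assert happiness["B"] > happiness["A"]  # migrating is individually rational

moves_a_to_b = births["A"]           # all of A's children emigrate

net_flow = {"A": -moves_a_to_b, "B": +moves_a_to_b}
print(net_flow)  # {'A': -100, 'B': 100}: the metric crowns B
# Yet B's population is entirely supplied by A's births: remove A and
# B collapses. The net-flow metric rewards the parasitic society.
```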
Secondly, it is not entirely clear how much immigration there will be during the Long Reflection, because we also have the constraint that no one country (or, presumably, closely allied group of countries) will be allowed to become too powerful. Since immigration is a method for increasing manpower and hence military power, under the Long Reflection we may apparently need to prevent any one country from acquiring too many people:
“At the same time, we would want to prevent any one culture from becoming so powerful that it could conquer all other cultures through economic or military domination. Potentially, this could require international norms or laws preventing any single country from becoming too populous, just as antitrust regulations prevent any single company from dominating a market and exerting monopoly power.”
The US, especially in concert with close allies like the UK and Canada, is already extremely powerful, and it seems not entirely implausible to me that they could conquer the entire world today, if there were the will. So if we were to enter the Long Reflection tomorrow, it seems quite possible that the US might have to impose an immediate moratorium on further immigration, and perhaps pursue deportations.
If this is the case, however, immigration ceases to act as a ‘reality check’ that shows us people’s revealed preferences about where to live, because no matter how much more desirable the US (and similar countries) is than elsewhere, no one will be immigrating.
There are many other perverse consequences of maximum-population rules - e.g. the potential for a ‘bank run’ in which everyone races to immigrate as fast as possible if they think the cap will be reached, and the odious question of how to enforce a population maximum without atrocities if the ‘problem’ is reproduction rather than immigration - but here we are primarily concerned with whether the Long Reflection will produce the promised omelette, not how many eggs get cracked along the way.
What’s especially strange about this is that the motivating example - that this is “just as” antitrust laws deal with monopolies - is incorrect. Antitrust law does not make it illegal to be a monopoly (US, EU). What it does do is prohibit some ‘unfair’ methods of attempting to become a monopoly, and possibly some other types of conduct if you happen to become one. However, if you become a monopolist through legitimate means, like having much better technology or simply operating much more efficiently (or, sadly, by getting the government to grant you a monopoly), this is perfectly legal. Applying the analogy to the Long Reflection would suggest that we should prohibit societies from gaining population through illegitimate means, like slavery, but accept it if it occurred through legitimate methods, like immigration or natural procreation.
Conclusion
Investing substantial effort into making sure we are not making grave moral mistakes makes a lot of sense. However, while slowing growth and imposing stasis might give us more time to think, they could also make us worse at thinking. It seems likely to me that the Long Reflection should be contemporaneous with economic advance and intergalactic colonisation, not sequentially prior to them.
Comments
Great post! The section "Truth-seeking requires grounding in reality" describes some points I've previously wanted to make but didn't have good examples for.
I discuss a few similar issues in my post The Moral Uncertainty Rabbit Hole, Fully Excavated. Instead of discussing "the Long Reflection" as MacAskill described it, my post there discusses the more general class of "reflection procedures" (could be society-wide or just for a given individual) where we hit pause and think about values for a long time. The post points out how reflection procedures change the way we reflect and how this requires us to make judgment calls about which of these changes are intended or okay. I also discuss "pitfalls" of reflection procedures (things that are unwanted and avoidable at least in theory, but might make reflection somewhat risky in practice).
One consideration I discovered seems particularly underappreciated among EAs, in the sense that I haven't seen it discussed anywhere. I've called it "lack of morally urgent causes." In short, I think high levels of altruistic dedication, and people forming self-identities as altruists dedicated to a particular cause, often come from a kind of desperation about the state of the world (see Nate Soares' "On Caring"). During the Long Reflection (or other "reflection procedures" more generally), the state of the world is assumed to be okay/good/taken care of. So, any serious problems are assumed to be mostly taken care of or put on hold. What results is a "lack of morally urgent causes" – which will likely affect the values and self-identities that people who are reflecting might form. That is, compared to someone who forms their values prior to the moral reflection, people in the moral reflection may be less likely to adopt identities strongly shaped by ongoing "morally urgent causes," for better or worse. This is neither good nor bad per se – it just seems like something to be aware of.
Here's a longer excerpt from the post where I provide a non-exhaustive list of factors to consider for setting up reflection environments and choosing reflection strategies:
I have a stronger version of the same concerns, fwiw. I can't imagine a 'Long Reflection' that didn't involve an extremely repressive government clamping down on private industry every time a company tried to do anything too ambitious, and that didn't effectively promote some caste of philosopher kings above all others to the resentment of the populace. It's hard to believe this could lead to anything other than substantially worse social values.
I also don't see any a priori reason to think 'reflecting' gravitates people towards moral truth or better values. Philosophers have been reflecting for centuries, and there's still very little consensus among them or any particular sign that they're approaching one.