
I take the utilitarian longtermist position to be that we ought to prioritize maximizing the probability that intelligent life is able to take advantage of the cosmic endowment.

I phrase it that way in order to be species-agnostic. Given our position of ignorance about intelligent life in the universe, and the significant existential risks we face over the next couple of centuries, it seems to me that we can right now increase the chance of intelligent life taking advantage of the cosmic endowment by increasing the chance that life exists beyond Earth.

We can do this through directed panspermia, calculating that with enough seeds emitted, evolution elsewhere has some probability of eventually producing intelligent life, and that this counteracts the probability that we destroy ourselves.
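
As a rough sketch of the kind of calculation I have in mind (the per-seed success probability and seed counts below are made-up placeholders, not estimates, and treating seeds as independent is itself a strong assumption):

```python
import math

# Toy illustration: P(at least one of n independent seeds eventually
# yields intelligent life) = 1 - (1 - p)^n.
# p_seed is a made-up placeholder, not an estimate.
p_seed = 1e-9
for n_seeds in (10**6, 10**9, 10**12):
    p_at_least_one = 1 - math.exp(n_seeds * math.log1p(-p_seed))
    print(f"{n_seeds:.0e} seeds -> P(at least one success) ~ {p_at_least_one:.4f}")
```

The point is just that, for any nonzero per-seed probability, enough seeds push the chance of at least one success toward certainty.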

I think the decision is a difficult one, much more difficult than it’s been given credit for, with the default being to protect the sterility or potential biome of planets during our exploration efforts. However, if our long-term plan is to become interplanetary, then we already plan on directed panspermia. Why not buy down the objective risk of a universe devoid of intelligent life through panspermia now? Call it a biotic hedge.

2 Answers

(I think Denis Drescher makes a lot of good points, and some of this answer overlaps with points made in that thread.)

My answer would be: "Utilitarian longtermism does not necessarily or directly imply we should put resources towards directed panspermia, nor even that directed panspermia would be good (i.e., even if we could have it for free)."

Utilitarianism is about maximising net wellbeing (or something like that), and doesn't intrinsically value things like the amount or survival of life forms or intelligence. The latter things are very likely very instrumentally valuable, but whether and how valuable they are doesn't fall directly out of utilitarianism, and instead relies on some other assumptions or details.

Here are some further considerations that I think come into play:

  • As noted by edcon, it seems likely that it would take a lot of resources to actually implement directed panspermia, or even develop the ability to "switch it on" if needed. So even if that would be good to do, it may not be worth utilitarian longtermists prioritising that.
    • Though maybe having one person write a paper analysing the idea could be worthwhile. Though it's also possible that such a paper already exists, and I'm pretty sure there's at least been tangential discussion in various places, such as discussion of the potential downsides by suffering-focused EAs.
  • "Existential risks" is not the same as "extinction risks". Instead, they're the destruction of humanity's long-term potential (or that of humanity's "descendants", so to speak). (I'm not saying you don't know this, but it seems worth emphasising here.) So directed panspermia could perhaps itself be an existential catastrophe, or increase existential risks. This would be the case if it had irreversible consequences that prevent us from reaching something close to the best future possible, or if it increases the chances of such consequences occurring. Here are three speculative sketches of how that might happen:
    • There's a proliferation of other civilizations, which are on average less aligned with "good" values than we are (perhaps because we're in a slightly unlikely good equilibrium; some somewhat relevant discussion here). Perhaps this makes it harder for us to expand and use more resources in a "really good" way. Or perhaps it raises the chances that those civilizations wipe us out.
    • There's a proliferation of net-negative lives, which we lack the will or ability to improve or "euthanise".
    • There's a proliferation of net-positive lives, but we engage in conflicts with them to seize more resources, perhaps based on beliefs or rationalisations that one of the above two scenarios is happening. And this ends up causing a lot of damage.
  • Directed panspermia might not reduce the biggest current x-risks much in any case. Ord has a box "Security among the stars?" in Chapter 7 that discusses the idea that humanity can reduce x-risk by spreading to other planets (which is different to directed panspermia, but similar in some respects). He notes that this only helps with risks that are statistically independent between planets, and that many risks (e.g., unaligned AGI) are likely to be quite correlated, such that, if catastrophe strikes somewhere, it's likely to spread to other planets too. (Though spreading to other planets would still help with some risks; a toy illustration of the independence point follows after this list.)
  • I'd guess we could capture much of the value of directed panspermia, with far fewer downsides, by accelerating space colonisation. Though even then, I think I'd favour us having some portion of a "Long Reflection" before going very far with that, essentially for the reason Ord gives in the passage Denis Drescher quotes.
  • Another option that might capture some of the benefits, with fewer risks, is "leav[ing] a helpful message for future civilizations, just in case humanity dies out" (discussed in this 80k episode with Paul Christiano).
  • This article has some good discussion on things like the possibility of intelligent alien life or future evolution on Earth, and the implications of that. That seems relevant here in some ways.
  • I think metaethics is also important here. In particular, I'd guess that directed panspermia looks worse from various types of subjectivist perspectives than from various types of (robust) moral realist perspectives, because that'll influence how happy we'll be with the value systems other civilizations might somewhat "randomly" land on, compared to our own, or influence how "random" we think their value systems will be. (This is a quick take, and somewhat unclearly phrased.)
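
Here's the sort of toy calculation I have in mind for the point about statistical independence between planets (the per-planet catastrophe probability is a made-up placeholder, not an estimate):

```python
# Toy comparison of independent vs. perfectly correlated per-planet risk.
# p is a made-up per-century catastrophe probability, purely illustrative.
p = 0.1
n_planets = 2

# If risks are independent, catastrophe must strike each planet separately:
p_all_lost_independent = p ** n_planets
# If risks are perfectly correlated (e.g., unaligned AGI that spreads),
# one catastrophe is enough to reach every planet:
p_all_lost_correlated = p

print(f"Independent risks: P(everything lost) = {p_all_lost_independent:.3f}")  # 0.010
print(f"Correlated risks:  P(everything lost) = {p_all_lost_correlated:.3f}")   # 0.100
```

So spreading out only buys a large reduction for risks near the independent end of that spectrum.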

We are probably gaining the ability to spread life via directed panspermia (as a feasible option to eliminate correlated risks and build a safety net) decades before we gain the ability to bring civilisation to other solar systems. 

The "long reflection" could lead to an increase in biotic ethics, favoring further investments in directed panspermia.

Comments (15)

Such an effort would likely be irreversible, or at least very slow and costly to reverse. It comes at an immense cost in option value.

Directed panspermia also bears greater average-case risks than a controlled expansion of our civilization because we’ll have less control over the functioning and the welfare standards of the civilization (if any) and thus the welfare of the individuals at the destination.

Toby Ord’s Precipice more or less touches on this:

Other bold actions could pose similar risks, for instance spreading out beyond our Solar System into a federation of independent worlds, each drifting in its own cultural direction.

This is not to reject such changes to the human condition—they may well be essential to realizing humanity’s full potential. What I am saying is that these are the kind of bold changes that would need to come after the Long Reflection. Or at least after enough reflection to fully understand the consequences of that particular change. We need to take our time, and choose our path with great care. For once we have existential security we are almost assured success if we take things slowly and carefully: the game is ours to lose; there are only unforced errors.

Absent the breakthroughs the Long Reflection will hopefully bring, we can’t even be sure that the moral value of a species spread out across many solar systems will be positive even if its expected aggregate welfare is positive. They may not be willing to trade suffering and happiness 1:1.

I could imagine benefits in scenarios where Earth gets locked into some small-scale, stable, but undesirable state. Then there’d still be a chance that another civilization emerges elsewhere and expands to reclaim the space around our solar system. (If they become causally disconnected from us before they reach that level of capability, they’re probably not so different from any independently evolved life elsewhere in the universe.) But that would come at a great cost.

The approach seems similar to that of r-strategist species that have hundreds of offspring of which on average only two survive. These are thought to be among the major sources of disvalue in nature. In the case of directed panspermia we could also be sure of the high degree of phenomenal consciousness of the quasi-offspring so that the expected disvalue would be even greater than in the case of the r-strategist species where many of the offspring die while they’re still eggs.

In most other scenarios, risks either operate on a planetary scale or remain a threat so long as there aren’t any clusters that are so far apart as to be causally isolated. So in those scenarios, an expansion beyond our solar system would buy minimal risk reduction. That minimal risk reduction can be achieved at a much lesser cost in terms of expected disvalue.

So I’d be more comfortable deferring such grave and near-irreversible decisions to future generations that have deliberated all their aspects and implications thoroughly and even-handedly for a long time and have reached a widely shared consensus.

Thanks for the thoughtful response! I think you do a good job identifying the downsides of directed panspermia. However, in my description of the problem, I want to draw your attention to two claims drawn from Ord’s broader argument.

First, the premise that there is roughly a 1/6 probability that humanity does not successfully navigate through The Precipice and reach the Long Reflection. Second, the fact that, for all we know, we might be the universe’s only chance at intelligent flourishing.

My question is whether there is an implication here that directed panspermia is a warranted biotic hedge during The Precipice phase, perhaps prepared now and only acted on if existential catastrophe odds increase. If we make it to The Long Reflection, I’m in total agreement that we do not rapidly engage in directed panspermia. However, for the sake of increasing the universe’s chance of having some intelligent flourishing, perhaps a biotic hedge should at least be prepared now, to be executed when things look especially dire. But at what point would it be justified?

I think this reasoning is exactly the same as the utilitarian longtermist argument that we should invest more resources now in addressing x-risk, especially Parfit’s argument for the value of potential future persons.

Assume three cases: A. All life in the universe is ended because weapon X is deployed on Earth. B. All life on Earth is ended by weapon X, but life is preserved in the universe because of Earth’s directed panspermia. C. Earth-originating life makes it through the Precipice and flourishes in the cosmic endowment for billions of years.

It seems C > B > A, with the difference between A and B greater than the difference between B and C.

A neglected case above is where weapon X destroys life on earth, earth engages in directed panspermia, but there was already life in the universe unbeknownst to earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is does the difference between this case and C surpass that between A and B. Call it D. Is D so much worse than C that a preferred loss is from B to A? I don’t think so.

So I guess the implied position would be that we should prepare a biotic hedge in case things get especially dire, and invest more in SETI type searches. If we know that life exists elsewhere in the universe, we do not need to deploy the biotic hedge?

I think that, empirically, the effort to prepare the biotic hedge is likely to be expensive in terms of resources and influence, as I suspect a lot of people would be strongly averse to directed panspermia, since it would likely be negative under some forms of negative utilitarianism and other value systems. So it would be better for the long-term future to reduce existential risk specifically.

I think SETI-type searches are different, as you have to consider the negative effects that contact could have on our current civilisation. Nice piece from Paul Christiano: https://sideways-view.com/2018/03/23/on-seti/

I think I’m not well placed to answer that at this point and would rather defer it to someone who has thought about this more than I have from the vantage points of many ethical theories rather than just from my (or their) own. (I try, but this issue has never been a priority for me.) Then again, this is a good exercise for me in moral perspective-taking, or whatever it’s called. ^^

It seems C > B > A, with the difference between A and B greater than the difference between B and C.

In the previous reply I tried to give broadly applicable reasons to be careful about it, but those were mostly just from Precipice. The reason is that if I ask myself, e.g., how long I would be willing to endure extreme torture to gain ten years of ultimate bliss (apparently a popular thought experiment), I might be ready to invest a few seconds, if any, for a tradeoff ratio of 1e7 or 1e8 to 1. So from my vantage point, the r-strategist style “procreation” is very disvaluable. It seems like it may well be disvaluable in expectation, but either way, it seems like an enormous cost to bear for a highly uncertain payoff. I’m much more comfortable with careful, K-strategist “procreation” on a species level. (Magnus Vinding has a great book coming out soon that covers this problem in detail.)

But assuming the agnostic position again, for practice, I suppose A and C are clear cut: C is overwhelmingly good (assuming the Long Reflection works out well and we successfully maximize what we really terminally care about, but I suppose that’s your assumption) and A is sort of clear because we know roughly (though not very viscerally) how much disvalue our ancestors have paid forward over the past millions of years so that we can hopefully eventually create a utopia.

But B is wide open. It may go much more negative than A even considering all our past generations – suffering risks, dystopian-totalitarian lock-ins, permanent prehistoric lock-ins, etc. The less certain it is, the more of this disvalue we’d have to pay forward to get one utopia out of it. And it may also go positive of course, almost like C, just with lower probability and a delay.

People have probably thought about how to spread self-replicating probes to other planets so that they produce everything a species will need at the destination to rebuild a flourishing civilization. Maybe there’ll be some DNA but also computers with all sorts of knowledge, and child-rearing robots, etc. ^^ But a civilization needs so many interlocking parts to function well – all sorts of government-like institutions, trust, trade, resources, … – that it seems to me like the vast majority of these civilizations either won’t get off the ground in the first place and remain locked in a probably disvaluable Stone Age type of state, or will permanently fall short of the utopia we’re hoping for eventually.

I suppose a way forward may be to consider the greatest uncertainties about the project – probabilities and magnitudes at the places where things can go most badly net negative or most awesomely net positive.

Maybe one could look into Great Filters (they may be less necessary than I had previously thought), because if we are now past the (or a) Great Filter, and the Great Filter is something about civilization rather than something about evolution, we should probably assign a very low probability to a civilization like ours emerging under very different conditions through the probably very narrow panspermia bottleneck. I suppose this could be tested on some remote islands? (Ethics committees may object to that, but then these objections also and even more apply to untested panspermia, so they should be taken very seriously. Then again they may not have read Bostrom or Ord. Or Pearce, Gloor, Tomasik, or Vinding for that matter.)

Oh, here’s an idea: The Drake Equation has parameters f_i for the probability that existing life develops (probably roughly human-level?) intelligence, f_c for the probability that intelligent life becomes detectable, and L for the longevity of the civilization. The probability that intelligent life creates a civilization with similar values and potential is probably a bit less than f_c (these civilizations could have any moral values) but more than the product of the two fs. The paper above has a table that says “f_i: log-uniform from 0.001 to 1” and “f_c: log-uniform from 0.01 to 1.” So I suppose we have some 2–5 orders of magnitude uncertainty from this source.

The longevity of a civilization is “L: log-uniform from 100 to 10,000,000,000” in the paper. An advanced civilization that exists for 10–100k years may be likely to have passed the Precipice… Not sure at all about this because of the risk of lock-ins. And I’d have to put this distribution into Guesstimate to get a range of probabilities out of this. But it seems like a major source of uncertainty too.
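
(For what it’s worth, here’s a minimal Monte Carlo sketch in Python of what I’d otherwise put into Guesstimate. The log-uniform ranges are the ones quoted from the paper; the sample size and the choice to report f_i·f_c are my own simplifications.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def log_uniform(low, high, size):
    """Sample log-uniformly between low and high."""
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

# Ranges quoted from the paper mentioned above:
f_i = log_uniform(1e-3, 1.0, n)   # existing life develops intelligence
f_c = log_uniform(1e-2, 1.0, n)   # intelligent life becomes detectable
L   = log_uniform(1e2, 1e10, n)   # longevity of the civilization in years

# f_i * f_c as a rough lower bound; on my guess above, the probability of a
# civilization with similar values lies somewhere between f_i * f_c and f_c.
product = f_i * f_c
for q in (0.05, 0.5, 0.95):
    print(f"{q:.0%} quantile of f_i*f_c: {np.quantile(product, q):.1e}")
print(f"Share of samples with L > 10,000 years: {(L > 1e4).mean():.2f}")
```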

The ethical tradeoff question above feels almost okay to me with a 1e8 to 1 tradeoff but others are okay with a 1e3 or 1e4 to 1 tradeoff. Others again refuse it on deontological or lexical grounds that I also empathize with. It feels like there are easily five orders of magnitude uncertainty here, so maybe this is the bigger question. (I’m thinking more in terms of an optimal compromise utility function than in moral realist terms, but I suppose that doesn’t change much in this case.)

In the best case within B, there’s also the question whether it’ll be a delay compared to C of thousands or of tens of thousands of years, and how much that would shrink the cosmic endowment.

I don’t trust myself to be properly morally impartial about this after such a cursory investigation, but that said, I would suppose that most moral systems would put a great burden of proof on the intervention because it can be so extremely good and so extremely bad. But tackling these three to four sources of uncertainty and maybe others can perhaps shed more light on how desirable it really is.

I empathize with the notion that some things can’t wait until the Long Reflection, at least as part in a greater portfolio, because it seems to me that suffering risks (s-risks) are a great risk (in expectation) even or especially now in the span until the Long Reflection. They can perhaps be addressed through different and more tractable avenues than other longterm risks and by researchers with different comparative advantages.

A neglected case above is where weapon X destroys life on earth, earth engages in directed panspermia, but there was already life in the universe unbeknownst to earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is does the difference between this case and C surpass that between A and B. Call it D. Is D so much worse than C that a preferred loss is from B to A? I don’t think so.

Hmm, I don’t quite follow… Does the above change the relative order of preference for you, and if so, to which order?

So I guess the implied position would be that we should prepare a biotic hedge in case things get especially dire, and invest more in SETI type searches. If we know that life exists elsewhere in the universe, we do not need to deploy the biotic hedge?

There are all these risks from drawing the attention of hostile civilizations. I haven’t thought about what the risk and benefits are there. It feels like that came up in Precipice too, but I could be mixing something up.

There are all these risks from drawing the attention of hostile civilizations. I haven’t thought about what the risk and benefits are there. It feels like that came up in Precipice too, but I could be mixing something up.

Yes, Ord discusses that in Chapter 5. Here's one relevant passage that I happened to have in my notes:

The extra-terrestrial risk that looms largest in popular culture is conflict with a spacefaring alien civilization. [...] perhaps more public discussion should be had before we engage in active SETI (sending powerful signals to attract the attention of distant aliens). And even passive SETI (listening for their messages) could hold dangers, as the message could be designed to entrap us. These dangers are small, but poorly understood and not yet well managed.

(Note that, perhaps contrary to what "before we engage in active SETI" might imply, I believe humanity is already engaging in some active SETI.)

Great job identifying some relevant uncertainties to investigate. I will think about that some more.

My goal here is not so much to resolve the question of “should we prepare a biotic hedge?” but rather “Does utilitarian Longtermism imply that we should prepare it now, and if faced with a certain threshold of confidence that existential catastrophe is imminent, deploy it?” So I am comfortable not addressing the moral uncertainty arguments against the idea for now. If I become confident that utilitarian Longtermism does imply that we should, I would examine how other normative theories might come down on the question.

Me: “A neglected case above is where weapon X destroys life on earth, earth engages in directed panspermia, but there was already life in the universe unbeknownst to earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is does the difference between this case and C surpass that between A and B. Call it D. Is D so much worse than C that a preferred loss is from B to A? I don’t think so.”

You: “Hmm, I don’t quite follow… Does the above change the relative order of preference for you, and if so, to which order?”

No, it would not change the relative order of A, B, C. The total order (including D) for me would be C > B > D > A, where |v(B) - v(A)| > |v(C) - v(D)|.
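
To make that concrete, here is a toy assignment of entirely made-up values that satisfies the ordering and the inequality:

```python
# Toy, entirely made-up values consistent with the claimed ordering
# C > B > D > A and |v(B) - v(A)| > |v(C) - v(D)|.
v = {
    "A": 0,    # weapon X ends all life in the universe
    "B": 70,   # Earth dies, but seeded life persists elsewhere
    "C": 100,  # Earth-originating life flourishes in the cosmic endowment
    "D": 60,   # Earth dies after seeding, but life already existed elsewhere anyway
}
assert v["C"] > v["B"] > v["D"] > v["A"]
assert abs(v["B"] - v["A"]) > abs(v["C"] - v["D"])  # 70 > 40
```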

I was trying to make a Parfit style argument that A is so very bad that spending significant resources now to hedge against it is justified. Given that we fail to reach the Long Reflection, it is vastly preferable that we engage in a biotic hedge. I did a bad job of laying it out, and it seems that reasonable people think the outcome of B might actually be worse than A, based on your response.

Oh yeah, I was also talking about it only from utilitarian perspectives. (Except for one aside, “Others again refuse it on deontological or lexical grounds that I also empathize with.”) It’s just that utilitarianism doesn’t make a prescription as to the exchange rate (in intensity, energy expenditure, …) of individual positive experiences to individual negative experiences.

It seems that reasonable people think the outcome of B might actually be worse than A, based on your response.

Yes, I hope they do. :-)

Sorry for responding so briefly! I’m falling behind on some reading.

Yes, I think I messed up the Parfit-style argument here. Perhaps the only relevant cases are A, B, and D, because I’m supposing we fail to reach the Long Reflection and asking what the best history line is on utilitarian longtermist grounds.

If we conclude from this that a biotic hedge is justified on those grounds, then the question would be what its priority is relative to directly preventing x-risks, as edcon said.

My question is whether there is an implication here that directed panspermia is a warranted biotic hedge during The Precipice phase, perhaps prepared now and only acted on if existential catastrophe odds increase. If we make it to The Long Reflection, I’m in total agreement that we do not rapidly engage in directed panspermia. However, for the sake of increasing the universe’s chance of having some intelligent flourishing, perhaps a biotic hedge should at least be prepared now, to be executed when things look especially dire. [emphasis added]

I'd definitely much prefer that approach to just aiming for actually implementing directed panspermia ASAP. Though I'm still very unsure whether directed panspermia would even be good in expectation, and doubt it should be near the top of a longtermist's list of priorities, for reasons given in my main answer.

I just wanted to highlight that passage because I think that this relates to a general category of (or approach to) x-risk intervention which I think we might call "Developing, but not deploying, drastic backup plans", or just "Drastic Plan Bs". (Or, to be nerdier, "Preparing saving throws".)

I noticed that as a general category of intervention when reading endnote 92 in Chapter 4 of the Precipice:

Using geoengineering as a last resort could lower overall existential risk even if the technique is more risky than climate change itself. This is because we could adopt the strategy of only deploying it in the unlikely case where climate change is much worse than currently expected, giving us a second roll of the dice.
[Ord gives a simple numerical example]
The key is waiting for a situation when the risk of using geoengineering is appreciably lower than the risk of not using it. A similar strategy may be applicable for other kinds of existential risk too.

I'd be interested in someone naming this general approach, exploring the general pros and cons of this approach, and exploring examples of this approach.

Relevant to this question and discussion, here are a few recent papers that discuss the ethics of directed panspermia:

  • Oskari Sivula (2022) examines planetary seeding from a longtermist perspective in his article The Cosmic Significance of Directed Panspermia: Should Humanity Spread Life to Other Solar Systems? published in Utilitas:  https://doi.org/10.1017/S095382082100042X
  • Gary O'Brien (2022) looks more carefully into wild animal suffering and planetary seeding in his paper Directed Panspermia, Wild Animal Suffering, and the Ethics of World-Creation published in Journal of Applied Philosophy: https://onlinelibrary.wiley.com/doi/abs/10.1111/japp.12538

Thanks. I might put together a response. Time to quadruple down on Parfit and argue from the vast multitudes of potential sentient evolved beings denied existence through inaction.

Panspermia (from Ancient Greek πᾶν (pan), meaning 'all', and σπέρμα (sperma), meaning 'seed') is the hypothesis that life exists throughout the Universe, distributed by space dust,[1] meteoroids,[2] asteroids, comets,[3] planetoids,[4] and also by spacecraft carrying unintended contamination by microorganisms.[5][6][7] Distribution may have occurred spanning galaxies, and so may not be restricted to the limited scale of solar systems.[8][9]

From Wikipedia. 

Sounds interesting! The article on directed panspermia has an ethical objection from a welfarist perspective:

A third argument against engaging in directed panspermia derives from the view that wild animals do not —on the average— have lives worth living, and thus spreading life would be morally wrong. Ng supports this view,[36] and other authors agree or disagree, because it is not possible to measure animal pleasure or pain. In any case, directed panspermia will send microbes that will continue life but cannot enjoy it or suffer. They may evolve in eons into conscious species whose nature we cannot predict. Therefore, these arguments are premature in relation to directed panspermia.

I have not read about the subject deeply. Is panspermia close to being plausible?

Based on NASA’s extensive planetary protection efforts to prevent interplanetary contamination of the explored worlds, I think it is plausible now. https://en.m.wikipedia.org/wiki/Planetary_protection

Take the scenario where there was a directed panspermia mission towards Europa containing a range of organisms up to the complexity of a simple fish, along with a range of species picked to form a self-sustaining ecosystem adapted to the environment they are going to, and they successfully colonise it. You would have to consider probabilities of where the great filter is. If the great filter is before this level of complexity, then panspermia would be good, provided you think that on balance the whole space of possible civilisations is net positive. However, in the case that the great filter is after this, for example if going from great-ape-level intelligence to humans requires very specific evolutionary incentives and is unlikely to happen, then you could have a very high chance of something similar in value to the 'wild' animal population and only a low probability of a human-level civilisation. If you place the value of human-level civilisations as many orders of magnitude better than the (possible) negative welfare, then the argument could go through as positive EV even with a low probability of going from a small fish ecosystem to a human-level civilisation.
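
As a sketch of the expected value shape of that argument (all the numbers below are made-up placeholders, not estimates):

```python
# Made-up placeholder numbers sketching the expected-value argument above.
p_civ  = 1e-4   # probability the seeded ecosystem eventually reaches a
                # human-level civilisation (the great filter isn't in the way)
v_civ  = 1e6    # value of a human-level civilisation, assumed many orders of
                # magnitude larger than the possible wild-animal disvalue
v_wild = -1.0   # (possibly negative) value if the ecosystem never gets further

expected_value = p_civ * v_civ + (1 - p_civ) * v_wild
print(f"EV ~ {expected_value:.1f}")  # positive here only because v_civ >> |v_wild|
```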

I'm not a utilitarian, but if I were, I would emphasize quality over quantity. There are two ways in which quantity can harm quality. The first is when there's a trade-off and spending resources on quantity causes you to spend fewer resources on quality. So if you spend money and attention on implementing panspermia, you can't spend the same money and attention on improving the quality of life of sentient systems. The second is even worse: On those margins where quality is negative, quantity actively hurts the total. So you had better be really sure that the quality is positive before you spend resources on quantity. In the context of panspermia, I'd worry about the suffering and preference-frustration that will be caused by the project.

I'm not a utilitarian, however, so I wouldn't donate to such a project either way.
