
I am working on a project about estimating alien density and expected utility, and arguing for the strategic implications. I am optimistic that the project will produce valuable content to inform AI safety strategy, but I want to probe what the community thinks beforehand.

Main question:

Assumptions:

  • Alien and Earth space-faring civilizations produce similar expected utilities.[1] The utility is computed using our CEV.
  • Alien space-faring civilizations are frequent enough that there are at least several of them per affectable lightcone, and very few resources are left unused.

Question:

  • Given the assumptions, what would the strategic implications be for the AI safety community?

Secondary question:

  • Without any assumptions: what are your arguments for expecting alien space-faring civilizations to have similar, lower (e.g. 0), or higher expected utility than a future Earth-originating space-faring civilization?

Note: Please reach out to me by private message if you want to contribute to or provide feedback on the project.

  1. ^

    I am talking about the expected utility per unit of controlled resources, after considering everything. This is the “final” expected utility created per unit of resource. As an illustration, it accounts for the impact of trades and conflicts, causal or acausal, happening in the far future.
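
    One way to write this down (the notation here is illustrative): if a civilization ends up controlling an amount $R$ of resources, and $\mathbb{E}[U]$ is the all-things-considered expected utility, under our CEV, of what is ultimately done with them, then the quantity being compared across civilizations is $\bar{u} = \mathbb{E}[U]/R$.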

Answers

What are your arguments for expecting alien space-faring civilizations to have similar, lower (e.g. 0), or higher expected utility than a future Earth-originating space-faring civilization?

For me, the most important factor is whether the aliens are altruistic. If they're altruistic, they'll do acausal trade to reduce suffering in other lightcones.

How common altruism is in evolved life seems like a complex question to answer. If you do research that question in particular, I'd be interested to see your conclusions, although it probably wouldn't be decision-relevant in my view (I'd probably still think the best thing I can do is work on alignment).

There is a way in which the (relative) frequency of altruistic superintelligences can be decision-relevant, though, when certain other conditions are met. Consider what we would want to do to reduce s-risks in each of these two circumstances:

  • Toy model premise: Earth has a 40% chance of resulting in an aligned ASI and a 1% chance of resulting in an indefinite s-event causer of a kind that does not accept acausal trades.
    1. In the broader universe, we think there are probably more altruistically-used lightcones than lightcones controlled by s-event causers who are willing to engage in acausal trade.[1] That is, we think that probably all the s-risks that are preventable through trade will be prevented.
    2. In the broader universe, we think the situation described in (1) is probably not true; we think there are probably fewer altruists.

In case (2), where altruism is relatively uncommon, increasing the number of altruist lightcones is impactful. Making sure we're not risking creating more theoretically-preventable s-risks would also be more impactful, because in (2) they won't actually be prevented.

In case (1), reducing the probability that we somehow create an s-event causer that can't be averted by acausal trade would be more important, while adding another altruist lightcone would be less likely to prevent s-events (because the preventable ones would be prevented anyway).
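
To make the asymmetry concrete, here is a minimal sketch: a toy formalization with made-up numbers, assuming each altruistic lightcone can buy out at most one trade-willing s-event causer, so whichever side is scarcer is the bottleneck.

```python
# Toy model (illustrative only): acausal trade prevents an s-event only when
# there is both a trade-willing causer and an altruistic lightcone willing to
# pay for it, so the scarcer side is the bottleneck.

def prevented_s_events(altruists: int, tradeable_causers: int) -> int:
    """S-events prevented through acausal trade, under the toy assumption
    that each altruistic lightcone can buy out at most one causer."""
    return min(altruists, tradeable_causers)

def marginal_value_of_extra_altruist(altruists: int, tradeable_causers: int) -> int:
    """Extra s-events prevented if one more lightcone ends up altruistic."""
    return (prevented_s_events(altruists + 1, tradeable_causers)
            - prevented_s_events(altruists, tradeable_causers))

# Case (1): altruists outnumber trade-willing s-event causers.
print(marginal_value_of_extra_altruist(altruists=100, tradeable_causers=60))  # 0
# Case (2): altruists are the scarce side.
print(marginal_value_of_extra_altruist(altruists=40, tradeable_causers=60))   # 1
```

With these made-up numbers, an extra altruist lightcone prevents an additional s-event only in case (2); in case (1) the trade-preventable s-events would have been prevented anyway, so the marginal value there comes from not creating an un-tradeable s-event causer in the first place.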

I don't know how important, in absolute rather than relative terms, considerations like this actually are, though.

Edit: Another thing that seems worth saying explicitly, in response to "I am working on a project about estimating alien density", is that in my model, density per se is not relevant.

  1. ^

    (Premise: some s-risks are preventable through acausal trade, namely those caused by entities which also value non-suffering things and which would refrain from arranging matter into suffering in return for a sufficient amount of those other things being arranged in other lightcones. I don't expect all values to be neatly maximizer-y to begin with; this is just a simplified model.)
