This is a new paper in the Global Priorities Institute working paper series by Teruji Thomas.
Abstract
I present a new argument for the claim that I’m much more likely to be a person living in a computer simulation than a person living in the ground-level of reality. I consider whether this argument can be blocked by an externalist view of what my evidence supports, and I urge caution against the easy assumption that actually finding lots of simulations would increase the odds that I myself am in one.
Introduction
Here’s a way the world might be. At some point there exist conscious beings whose experience of the world is much like ours—let’s just call them people. And at some point in their history, these people run computer simulations of whole worlds, so powerful that these worlds are inhabited by other such conscious beings—let’s call them simulant people. And these simulant people might even run further simulations on their (simulant) computers, containing other simulant people, and so on. Only if we live in the non-simulant, ground-level of reality (if there even is such a thing!) are we ourselves non-simulant people. [1]
I will present an argument for
sim. It is much more likely that I am a simulant person than a non-simulant person.
Bostrom (2003) presents a closely related argument (with a correction in Bostrom and Kulczycki (2011)), known as the Simulation Argument. It has inspired a great deal of philosophical and popular discussion. However, the Simulation Argument is not an argument for sim. The relevant part of Bostrom’s argument, slightly reconstructed, simply claims
conditional sim. Conditional on the ratio of simulant people to non-simulant people being high, it is much more likely that I am a simulant person than a non-simulant person.
The ratio here involves people who exist at any point in time, not just at present. One way to argue for sim would be to add to the Simulation Argument an argument for the condition in conditional sim. We could, in other words, argue for
high ratio. The ratio of simulant to non-simulant people is high.
Bostrom did not argue for high ratio, and he appears to end up with roughly a 1/3 credence in it. For all the Simulation Argument says, this is compatible with a 1/3 credence that I am a simulant person and a 2/3 credence that I am a non-simulant person. More generally, for all the Simulation Argument says, it could be much less likely that I’m a simulant person than a non-simulant person, as long as high ratio is unlikely.
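To see why the Simulation Argument alone leaves this open, here is an illustrative calculation by the law of total probability. The credences used are made-up numbers for illustration only, not figures from the paper:

```python
# Illustrative only: made-up credences showing how a roughly 1/3 credence
# in HIGH RATIO is compatible with only a roughly 1/3 credence in SIM.
p_high_ratio = 1 / 3       # assumed credence in HIGH RATIO
p_sim_given_high = 0.999   # CONDITIONAL SIM: simulant almost certain given a high ratio
p_sim_given_low = 0.001    # assumed: simulant very unlikely otherwise

# Law of total probability:
p_sim = (p_high_ratio * p_sim_given_high
         + (1 - p_high_ratio) * p_sim_given_low)

print(round(p_sim, 3))  # → 0.334: SIM does not follow
```

With these numbers, conditional sim holds, yet the overall credence that I am a simulant person stays near 1/3.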
Thus, one problem for arguing for sim from high ratio is that we don’t have much reason to be confident in high ratio. Perhaps more interestingly, it is hard to see how we could reasonably be confident in high ratio in a way that is compatible with sim. For high ratio is, in part, a claim about the number of people in the ground-level of reality, and, if we ourselves are in a simulation, that’s not the sort of thing about which we could have much evidence. [2]
So I will not argue for high ratio. Instead, the analogous premiss in my argument is
high expectation. Conditional on my being a non-simulant person, the expected ratio of simulant to non-simulant people in my reference class is high.
The restriction to ‘my reference class’ is a delicate one, to be discussed later, but the rough idea is to consider only people to whom the world appears in broad strokes like our own: they live on minor variants of 21st century Earth. [3]
Although I will not argue for high expectation in any detail, it seems fairly plausible, if we grant Bostrom’s claims about feasible computing power. Here’s the idea. Suppose I’m a non-simulant person. It may be quite unlikely that our descendants will run simulations of their ancestral 21st century. But they could in principle run enormously many, at negligible cost to themselves, and even on a whim. [4] Those simulated 21st century people would be in my reference class. So there’s at least a small probability that the ratio of simulant to non-simulant people in my reference class is enormous. As long as the probability is not too small, high expectation is true. In contrast, this line of reasoning does not particularly support high ratio.
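The expected-value point can be sketched numerically. The probability and ratio below are assumptions chosen purely for illustration:

```python
# Illustrative only: even a tiny probability of an enormous ratio makes
# the *expected* ratio of simulant to non-simulant people high.
p_sims_run = 1e-6      # assumed (small) chance our descendants run ancestor simulations
ratio_if_run = 1e12    # assumed ratio of simulant to non-simulant people if they do
ratio_if_not = 0.0     # no simulants in my reference class otherwise

expected_ratio = p_sims_run * ratio_if_run + (1 - p_sims_run) * ratio_if_not
print(expected_ratio)  # → 1000000.0: HIGH EXPECTATION can hold even though HIGH RATIO is unlikely
```

The same numbers show why this reasoning does not support high ratio itself: on these assumptions, the ratio is almost certainly zero; only its expectation is large.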
The main advance in this paper is to replace high ratio with high expectation, thus giving us more reason to take seriously the possibility that we are simulant people. I develop the argument in sections 2 to 5. This move does not, however, resolve some other issues, which I will discuss in section 6. In particular, I will consider, and tentatively respond to, a worry raised by Weatherson (2003) about the nature of my evidence if I am in fact a non-simulant person. And I will urge caution against the easy assumption that actually finding lots of simulations being run would increase the odds that I am in a simulation.
Read the rest of the paper
Without claiming that it’s a settled question, I’ll just assume that there might be simulant people with mental lives relevantly like our own. In general, I’ll leave it loose what counts as a ‘person’. But I won’t assume that it’s certain on my evidence that I am a person at all. That allows us to exclude ‘freak observers’ like Boltzmann brains, and to set aside the question of whether I am most likely overall to be a non-simulant freak observer, for discussion of which see Crawford (2013). Relatedly, I will assume that, if I am a non-simulant person, then my experiences are generally veridical. ↩︎
This is the main idea of Birch (2013), especially his section 3, and related to the second objection of Crawford (2013). One can thus read this paper as a response to theirs: I give an argument for sim on roughly the same grounds but immune to this problem. Brueckner (2008) objects more simply that the probability of high ratio is inscrutable; this does not affect my argument either. For further skeptical worries, e.g. associated with the possibility that I’m a Boltzmann brain, see footnote 1. ↩︎
conditional sim may also involve a reference class restriction of some sort, e.g. to beings with what Bostrom calls ‘human-type experiences’; I’ve just assumed that this is baked into my vague characterisation of ‘people’. ↩︎
According to Bostrom (2003, 247–8), ‘A single [planetary-mass] computer could simulate the entire mental history of humankind…by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers.’ ↩︎