This paper was published as a GPI working paper in March 2024.
Abstract
The Fading Qualia Argument is perhaps the strongest argument supporting the view that in order for a system to be conscious, it does not need to be made of anything in particular, so long as its internal parts have the right causal relations to each other and to the system’s inputs and outputs. I show how the argument can be resisted given two key assumptions: that consciousness is associated with vagueness at its boundaries and that conscious neural activity has a particular kind of holistic structure. I take this to show that what is arguably our strongest argument supporting the view that consciousness is substrate independent has important weaknesses, as a result of which we should decrease our confidence that consciousness can be realized in systems whose physical composition is very different from our own.
Introduction
Many believe that in order for a system to be conscious, it does not need to be made of anything in particular, so long as its internal parts have the right causal relations to each other and to the system’s inputs and outputs. As a result, many also believe that the right software could in principle allow there to be something it is like to inhabit a digital computer, controlled by an integrated circuit etched in silicon. A recent expert report concludes that if consciousness requires only the right causal relations among a system’s inputs, internal states, and outputs, then “conscious AI systems could realistically be built in the near term.” (Butlin et al. 2023: 6) If that were to happen, it could be of enormous moral importance, since digital minds could have superhuman capacities for well-being and ill-being (Shulman and Bostrom 2021).
But is it really plausible that any system with the right functional organization will be conscious - even if it is made of beer cans and string (Searle 1980) or consists of a large assembly of people with walkie-talkies (Block 1978)? My goal in this paper is to raise doubts about what I take to be our strongest argument supporting the view that consciousness is substrate independent in something like this sense.[1] The argument I have in mind is Chalmers’ Fading Qualia Argument (Chalmers 1996: 253–263). I show how it is possible to resist the argument by appeal to two key assumptions: that consciousness is associated with vagueness at its boundaries and that conscious neural activity has a particular kind of holistic structure. Since these assumptions are controversial, I claim only to have exposed important weaknesses in the Fading Qualia Argument.
I’ll begin in section 2 by explaining what the Fading Qualia Argument is supposed to show and the broader dialectical context it inhabits. In section 3, I give a detailed presentation of the argument. In section 4, I show how the argument can be answered given the right assumptions about vagueness and the structure of conscious neural activity. At this point, I rely on the assumption that vagueness gives rise to truth-value gaps. In section 5, I explain how the argument can be answered even if we reject that assumption. In section 6, I say more about the particular assumption about the holistic structure of conscious neural activity needed to resist the Fading Qualia Argument in the way I outline. I take the need to rely on this assumption to be the greatest weakness of the proposed response.
Read the rest of the paper
[1] See the third paragraph in section 2 for discussion of two ways in which the conclusion supported by this argument is weaker than some may expect a principle of substrate independence to be.
(EDIT: Split this up into two comments, the other here.)
I think that there's probably a minimum level of substrate independence we should accept, e.g. that it doesn't matter exactly what matter a "brain" is made out of, as long as the causal structure is similar enough on a fine enough level. The mere fact that neurons are largely made out of carbon doesn't seem essential. Furthermore, human and (apparently) conscious animal brains are noisy and vary substantially from one another, so exact duplication of the causal structure doesn't seem necessary, as long as the errors don't accumulate to the point that the result no longer resembles a plausible state for a plausible conscious biological brain.[1] So, I'm inclined to say that we could replace biological neurons with artificial neurons and retain consciousness, at least in principle, but it could depend on the artificial neurons.
It's worth pointing out that the China brain[2] and a digital mind (or digital simulation of a mind, on computers like today's) aren't really causally isomorphic to biological brains even if you ignore a lot of the details of biological brains. Obviously, you also have to ignore a lot of the details of the China brain and digital minds. But I could imagine that the extra details in the China brain and digital minds make a difference.
These extra details make me less sure that we should attribute consciousness to the China brain and digital minds, but they don’t seem decisive.
From footnote 4 of Godfrey-Smith, 2023 (based on the talk he gave):
From the Wikipedia page:
(China's population, at 1.4 billion, isn't large enough for each person to only simulate one neuron and so simulate a whole human brain with >80 billion neurons, but we could imagine a larger population, or a smaller animal brain being simulated, e.g. various mammals or birds.)
Some other arguments that push in favour of functionalism, the consciousness of simulated brains, including the China brain and digital minds, and brains with other artificial neurons:
This is essentially the coincidence argument for illusionism in Chalmers, 2018.
It seems weird to meaningfully update in favour of some concrete view on the basis that something might be true but that
I agree there is something a bit weird about it, but I'm not sure I endorse that reaction. This doesn’t seem so different from p-zombies, and probably some moral thought experiments.
I don't think it's true that everything we know about the universe would be equally undermined. Most things wouldn't be undermined at all or at worst would need to be slightly reinterpreted. Our understanding of physics in our universe could still be about as reliable (depending on the simulation), and so would anything that follows from it. There's just more stuff outside our universe.
I guess you can imagine short simulations where all our understanding of physics is actually just implanted memories and fabricated records. But in doing so, you're throwing away too much of the causal structure that apparently explains our beliefs and makes them reliable. Longer simulations can preserve that causal structure.
I'm not sure what you mean here. That the simulation argument doesn't seem different from those? Or that the argument that 'we have no evidence of their existence and therefore shouldn't update on speculation about them' is comparable to what I'm saying about the simulation hypothesis?
If the latter, fwiw, I feel the same way about p-zombies and (other) thought experiments. They are a terrible methodology for reasoning about anything - very occasionally the only option we can think of, but one philosophers don't feel nearly enough urgency to move away from by finding alternatives.
I don't see how this would allow us to update on anything based on speculation about the 'more stuff'. Yeah, we might choose to presume our pocket simulation will continue to behave as it has, but we don't get to then say 'there's some class of matter other than our own simulated matter which generates consciousness, therefore consciousness is substrate independent'.
As you say in your other comment, there's probably some minimal level of substrate independence that non-solipsists have to accept, but that turns it into an empirical question (as it should be) - so an imagined metaverse gives us no reason to change our view on how substrate independent consciousness is.
This seems like an argument from sadness. What we would lose by imagining some outcomes shouldn't affect our overall epistemics.
Is there an online version of the case for the fading qualia argument? This feels a bit abstract without it...
The best argument for functionalism* in my opinion is that there aren't really any good alternatives. If mental state kinds aren't functional kinds, they'd presumably have to be neuroscientific kinds. But if that's right, then we could already know now that aliens without neurons aren't conscious. Which seems wild to me: how can we possibly know whether aliens are conscious until we meet them, and observe their behavior and how it depends on what goes on inside them? And surely once we do meet them, no one is going to say "oh, consciousness is this sort of neurobiological property; we looked in their heads with our scanner and they have no neurons, problem solved, we know they aren't conscious." People seem to want there to be some intermediate view that says "oh, of course there might be conscious aliens with different biology, we just mean to rule out weird functional duplicates of humans, like a robot controlled by radio signals running the same program as a human**", but it's really unclear how to do that in a principled way. (And I suspect the root of the desire to do so is a sort of primitive sense that living matter can have feelings but dead matter can't, which I think people would consciously disavow if they understood it was driving their views.)
*There's an incredibly technical complication here about the fact that "functionalism" is usually defined in opposition to mind-body dualism, but in the current context it makes more sense to classify certain forms of dualism as functionalist, since they agree with functionalism about what guarantees something is conscious in the actual world. But I'm going to ignore it because I don't think I can explain it to non-philosophers quickly and easily.
**https://en.wikipedia.org/wiki/China_brain
Related recent talk and notes.