
We just published an interview: Jonathan Birch on the edge cases of sentience and why they matter. Listen on Spotify, watch on YouTube, or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.

Episode summary

In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed.

People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous.

— Jonathan Birch

In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)

They cover:

  • Candidates for sentience — such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
  • Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified.
  • Chilling tales about overconfident policies that probably caused significant suffering for decades.
  • How policymakers can act ethically given real uncertainty.
  • Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds as sentient as their biological counterparts.
  • How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
  • Why Jonathan is so excited about citizens’ assemblies.
  • Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Highlights

The history of neonatal surgery without anaesthetic

Jonathan Birch: It’s another case I found unbelievable: in the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. There was a public campaign led by someone called Jill Lawson, whose baby son had been operated on in this way and had died.

And at the same time, evidence was being gathered to bear on the questions by some pretty courageous scientists, I would say. They got very heavily attacked for doing this work, but they knew evidence was needed to change clinical practice. And they showed that, if this protocol is done, there were massive stress responses in the baby, massive stress responses that reduce the chances of survival and lead to long-term developmental damage. So as soon as they looked for evidence, the evidence showed that this practice was completely indefensible and then the clinical practice was changed.

So, in a way, people don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous.

Luisa Rodriguez: Yeah, it really does. I’m sure that had I lived in a different time, I’d at least have been much more susceptible to this particular mistake. But from where I’m standing now, it’s impossible for me to imagine thinking that newborns don’t feel pain, and therefore you can do massively invasive surgery on them without anaesthetic.

Jonathan Birch: It’s a hard one to believe, isn’t it? Of course, the consideration was sometimes made that anaesthesia has risks — and of course it does, but operating without anaesthesia also has risks. So there was real naivete about how the surgeons here were thinking about risk. And it’s what philosophers of science sometimes call the “epistemology of ignorance”: they were worried about the risks of anaesthesia, which it’s their job to worry about, so they just neglected the risks on the other side. That’s the truly unbelievable part.

Overconfidence around disorders of consciousness

Jonathan Birch: The book talks about a major taskforce report from the 1990s that was very influential in shaping clinical practice, that just very overconfidently states that pain depends on cortical mechanisms that are clearly inactive in these patients, so they can’t experience any pain.

You know, it shocked me, actually. It shocked me to think that in 1994, when there was barely a science of consciousness — and you could argue that 30 years later, maybe the science hasn’t progressed as much as we hoped it would, but in the mid-’90s, it barely existed — that didn’t stop a taskforce of experts assembled to rule on this question from extremely confidently proclaiming that these patients were not conscious.

And one has to think about why this is, and about the issue of inductive risk, as philosophers of science call it: where you’re moving from uncertain evidence to a pronouncement — that is, an action where implicitly you’re valuing possible consequences in certain ways. Presumably, the people making that statement feared the consequences of it becoming accepted that the vegetative patients might be feeling things. To me, that’s the wrong way to think about inductive risk in this setting. There are strong reasons to err on the side of caution, and hopefully that is what we’re now starting to see from clinicians in this area.

Luisa Rodriguez: Yeah, I’m interested in understanding how the field has changed, but I have the sense that the impetus for the field changing has actually been concrete cases where people who have experienced some kind of disorder of consciousness have recovered and revealed that they were experiencing things, sometimes suffering. Can you talk about a case like that?

Jonathan Birch: That’s right. I tend to think that is the best evidence that we can get, that they were indeed experiencing something.

The case of Kate Bainbridge that I discuss in the book is one where Kate fell into what was perceived by her doctors to be a vegetative state, and sadly was treated in a way that presumed no need for pain relief — when in fact she was experiencing what was happening to her, did require pain relief, and did want the things that were going on to be explained to her. That didn’t happen. When she later recovered, she was able to write this quite harrowing testimony of what it had actually been like for her. So in these cases, there’s not much room for doubt. They were indeed experiencing what they report having experienced.

In other cases, you get a little more room for doubt. There are these celebrated cases from Adrian Owen’s group, where patients presumed vegetative have been put into fMRI scanners, and they’ve come up with this protocol where they ask them yes/no questions, and they say, “If the answer is yes, imagine playing tennis. If the answer is no, imagine finding a way around your house.” These generate very different fMRI signatures in healthy subjects, and they found in some of these patients the same signatures that they found in the healthy subjects, giving clear yes/no answers to their questions.

That’s not as clear-cut as someone actually telling you after recovering, but it’s pretty clear-cut. So I think this has got the attention of the medical community, and it is starting to filter through to changes in clinical practice.
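
To make the logic of that protocol concrete, here is a minimal sketch in Python using entirely synthetic data: it builds “tennis” and “navigation” templates from simulated healthy-subject activation patterns, then classifies a patient’s trial by which template it correlates with more. The patterns, noise levels, and correlation-based classifier are illustrative assumptions on our part, not the actual pipeline used by Owen’s group.

```python
# Minimal sketch: decode yes/no answers by comparing a patient's activation
# pattern to group templates from healthy volunteers. All data is synthetic;
# real analyses use preprocessed fMRI volumes and far more careful statistics.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 500

# Synthetic "ground truth" activation patterns for the two imagery tasks.
tennis_pattern = rng.normal(size=n_voxels)       # motor imagery ("yes")
navigation_pattern = rng.normal(size=n_voxels)   # spatial imagery ("no")

def simulate_trial(pattern, noise=1.0):
    """One noisy activation map for a single imagery trial."""
    return pattern + rng.normal(scale=noise, size=n_voxels)

# Templates: average over many healthy-subject trials for each task.
tennis_template = np.mean([simulate_trial(tennis_pattern) for _ in range(30)], axis=0)
navigation_template = np.mean([simulate_trial(navigation_pattern) for _ in range(30)], axis=0)

def decode_answer(trial):
    """Label a trial 'yes' or 'no' by which template it correlates with more."""
    r_yes = np.corrcoef(trial, tennis_template)[0, 1]
    r_no = np.corrcoef(trial, navigation_template)[0, 1]
    return "yes" if r_yes > r_no else "no"

# A patient imagining tennis (i.e. answering "yes") on a few trials.
patient_trials = [simulate_trial(tennis_pattern, noise=2.0) for _ in range(5)]
print([decode_answer(t) for t in patient_trials])
```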

Separating abortion from the issue of foetal sentience

Luisa Rodriguez: You argue that, presumably, if newborns can feel pain, it’s not like newborns suddenly start to feel pain as soon as they are born and exit the womb. Before we get into evidence about foetal sentience, my first reaction when you raise this issue in the book was anxiety about what this was going to mean for arguments about abortion. But you argue that we should really separate these two issues: the question of foetal sentience and the question of whether abortion should be legal and acceptable. Why is that?

Jonathan Birch: Yes, my view is that these issues should be separated. I recognise that opinions might vary on that. I think that abortion generates these extremely polarised debates, famously polarised in America.

I think there’s much less polarisation on this issue in the UK, as far as I can see — and I think it’s because there’s something close to consensus in the UK on what we’re trying to do here with this right to access abortion, and why it’s such an important right. It’s because there’s something very seriously bad about the idea of a forced pregnancy. It’s just one of the worst kinds of coercion you could imagine, and a very serious violation of the woman’s bodily autonomy. In the philosophical literature, Judith Jarvis Thomson made that argument a long time ago.

I think in the UK it’s really got traction, and I think it’s probably the right view. Why is the right to access abortion important? Not because the foetus is not sentient. It’d be kind of strange to assume that foetuses were not sentient, I think. And I don’t think that’s what is at the basis of the claim to the right to access abortion, and I don’t think it should be the basis of that claim.

What I fear is a situation where people who want to argue for this very important right end up tying it to the question of sentience, and then get ambushed by the evidence — because the evidence may, in the future, drag them to ever-earlier time points when it becomes plausible that the foetus is sentient, and then the argument has the rug pulled out from under it. So I don’t think that’s the kind of argument that we should be making when defending this right.

Luisa Rodriguez: So concretely, it might be me saying, “I don’t think a three-and-a-half-month-old foetus is sentient, and so I think that women should be able to abort them before that date.” And if it turns out that the evidence points toward something like sentience emerging at earlier and earlier dates, then this right to abortion will be really seriously undermined.

Jonathan Birch: It’s quite a mistake, yeah. It’s a comparable mistake to thinking that the time limit should be tied to viability. This doesn’t work, because medical technology is improving all the time, so the point at which a foetus becomes viable is getting earlier and earlier. So if that’s your moral case for why this right is important, that case is going to get eroded. Similarly, if the case is based on sentience, and on this claim that a foetus is not sentient, there’s every possibility that evidence will erode that case as well.

So I think it’s important to recognise that that’s not the case. The real basis of this right is bodily autonomy. So I have my own views, and the book doesn’t hide those views.

The cases for and against neural organoids

Luisa Rodriguez: To start us off, what exactly is a neural organoid?

Jonathan Birch: This is another very fast-moving area of emerging technology. Basically, it uses human stem cells that are induced to form neural tissue. The aim is to produce a 3D model of some brain region, or in some cases a whole developing brain.

Luisa Rodriguez: And what’s the case for creating them?

Jonathan Birch: I think it’s a very exciting area of research. You can make organoids for any organ, really. In a way, it’s a potential replacement for animal research. If you ask what we do now, usually people do research on whole animals, which are undeniably sentient. And here we have a potential way to gain insight into the human version of the organ. It could be a better model, and it’s much less likely to be sentient if it’s something like a kidney organoid or a stomach organoid. It’s really only when we’re looking at the case of the brain and neural organoids that the possibility of sentience starts to reemerge.

Luisa Rodriguez: Yeah. And intuitively, the case for sentience does feel like it immediately lands for me. If you are trying to make an organoid that is enough like a brain that we can learn about brains, it doesn’t seem totally outrageous that it would be a sentience candidate. What is the evidence that we have so far?

Jonathan Birch: It’s a complicated picture. I think there are reasons to be quite sceptical about organoids as they are now, but the technology is moving so fast, there’s always a risk of being ambushed by some new development. At present, it really doesn’t seem like there’s clear sleep/wake cycles; it doesn’t seem like those brainstem structures or midbrain structures that regulate sleep/wake cycles and that are so important on the Merker/Panksepp view are in place.

But there are reasons to be worried. For me, the main reason to be worried was a study from 2019 that allowed organoids to grow for about a year, I think, and compared them to the brains of preterm infants using EEG. So they used EEG data from the preterm infants to train a model, and then they used that model to try and guess the age of the organoid from its EEG data, and the model performed better than chance.

Luisa Rodriguez: Wow.

Jonathan Birch: So it’s hard to interpret this kind of study, because some people, I suppose, read it superficially as saying these organoids are like the brains of preterm infants. And that’s an exaggeration, because they’re very different and much smaller. But still, there’s enough resemblance in the EEG to allow estimates of the age that are better than chance.

Luisa Rodriguez: It’s definitely something. I find it unsettling, for sure.

Jonathan Birch: It is, yeah. I think a lot of people had that reaction as well, and I think that’s why we’re now seeing quite a lively debate in bioethics about how to regulate this emerging area of research. It’s currently pretty unregulated, and it raises this worrying prospect of scientists taking things too far — where they will often say, “These systems are only a million neurons; we want to go up to 10 million, but that’s so tiny compared to a human brain.” And it is tiny compared to a human brain. But if you compare it to the number of neurons in a bee brain, for example, that’s about a million. So these near-future organoids will be about the size of 10 bee brains in terms of neuron counts.
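
To make the design of the 2019 study Jonathan describes concrete, here is a minimal Python sketch with synthetic data: a regression model is trained to predict age from hypothetical EEG-derived features of preterm infants, then applied to organoid recordings, and its error is compared against a guess-the-mean baseline (“chance”). The features, model choice, and numbers are illustrative assumptions, not the published analysis.

```python
# Minimal sketch of the study design: fit a model that predicts age from EEG
# features of preterm infants, apply it to organoid EEG features, and check
# whether predictions beat a trivial baseline. All data is a synthetic stand-in.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

n_recordings, n_features = 200, 12   # e.g. burst rates, band power (hypothetical features)
infant_age_weeks = rng.uniform(25, 38, size=n_recordings)

# Pretend the EEG features carry a noisy signal about age.
weights = rng.normal(size=n_features)
infant_eeg = np.outer(infant_age_weeks, weights) + rng.normal(scale=5.0, size=(n_recordings, n_features))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(infant_eeg, infant_age_weeks)

# "Organoid" recordings at known culture ages, run through the same feature pipeline.
organoid_age_weeks = rng.uniform(25, 38, size=20)
organoid_eeg = np.outer(organoid_age_weeks, weights) + rng.normal(scale=8.0, size=(20, n_features))

predicted = model.predict(organoid_eeg)
baseline = np.full_like(organoid_age_weeks, infant_age_weeks.mean())  # "chance": always guess the mean

print("model MAE:   ", mean_absolute_error(organoid_age_weeks, predicted))
print("baseline MAE:", mean_absolute_error(organoid_age_weeks, baseline))
```

If the model’s error is reliably lower than the baseline’s, that is the “better than chance” result Jonathan refers to — evidence of some EEG resemblance, not evidence that the organoid is equivalent to an infant brain.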

Artificial sentience arising from whole brain emulations of roundworms and fruit flies

Luisa Rodriguez: You emphasise that AI sentience could arise in a number of ways. I think I intuitively imagine it arising either intentionally or unintentionally as a result of work on LLMs. But one of these other ways is whole brain emulation. And one case I hadn’t heard that much about is OpenWorm. Can you talk a bit about the goals of OpenWorm and how that project has gone?

Jonathan Birch: This was a project that caught my eye around 2014, I think, because the goal was to emulate the entire nervous system of the nematode worm C. elegans in software. And they had some striking initial results, where they put their emulation in charge of a robot, and the robot did some kind of worm-like things in terms of navigating the environment, turning round when it hit an obstacle, that kind of thing. And it generated some initial hype. …

Luisa Rodriguez: It feels naive now, but it was eye-opening to me when you pointed out that we actually just wouldn’t need whole brain emulation in humans or of human brains to start thinking about the risks from AI sentience. We just need to go from OpenWorm to OpenZebrafish or OpenMouse, or maybe even OpenDrosophila — which sounds like not an insane step from just where we are now. How likely is it, do you think, that researchers would try to create something like OpenMouse?

Jonathan Birch: Oh, it’s very likely. If they knew how, of course they would. I think one of the main themes of that part of the book is that once we see the decoupling of sentience and intelligence — which is very important, to think of these as distinct ideas — we realise that artificial sentience might not be the sort of thing that goes along with the most intelligent systems. It might actually be more likely to be created by attempts to emulate the brain of an insect, for example — where the intelligence would not be outperforming ChatGPT on any benchmarks, but perhaps more of the relevant brain dynamics might be recreated.

Luisa Rodriguez: Yeah, it was a jarring observation for me. I think part of it is that it hadn’t occurred to me that people would be as motivated as they are to create something like OpenMouse. Can you say more about what the motivation is? Does it have scientific value beyond being cool? Or is the fact that it’s just a cool thing to do enough?

Jonathan Birch: I think it would have an immense scientific value. It would appear to be a long way in the future still, as things stand. But of course, we’re talking here about understanding the brain. I think when you emulate the functions of the C. elegans nervous system, you can really say you understand what is going on — and that just isn’t true for human brains, currently. We have no idea. At quite a fundamental level, our understanding of C. elegans is in some ways far better.

And it would be another step again if you could not just understand how lesioning bits of the nervous system affects function, but also recreate the whole system in computer software — that would be a tremendous step.

And it holds the promise over the long term of giving us a way to replace animal research. Because once you’ve got a functioning emulation of a brain, you can step past that very crude method of just taking the living brain and injuring it, which is what a lot of research involves, or modifying it through genome editing. You can instead go straight to the system itself and do incredibly precise manipulations.

So I feel like, if anything, it hasn’t been hyped enough. I want more of this kind of thing, to be honest, than has been the case so far.

Luisa Rodriguez: Intuitively, it seems plausible — and maybe even likely — that if you were able to emulate a mind that we thought was sentient, the emulation would also be sentient. But is there a reason to think those come apart? Maybe we just don’t know.

Jonathan Birch: It’s another space where you get reasonable disagreement, because I think we have to take seriously the view that philosophers call “computational functionalism”: a view on which, if you recreate the computations, you also recreate the subjective experience. And that leads to further questions about the grain at which one has to recreate the computations. Is it enough to recreate the general type of computation? Or does every algorithm at every level, including the within-neuron level, have to be recreated? And there too, there’s disagreement.

I think we have to take seriously the possibility that recreating the general types of computations might be enough. I call this view “large-scale computational functionalism”: that it might be a matter of simply creating a global workspace or something like that, even if the details are quite different from how the global workspace is implemented in the human brain.

And if we take that view seriously, as we should, it really does suggest a kind of parity. I wouldn’t want to overstate it, because I’d say that the probability of sentience is higher in the biological system than in its software emulation. But still, that software emulation is potentially a candidate.

Using citizens' assemblies to do policymaking

Jonathan Birch: That’s how I propose doing it in the book: that you need to have scientific experts who can convey the zone of reasonable disagreement — experts who are not scientifically partisan, who aren’t just going to bang a drum for their favourite theory of consciousness, but will try to give a sense of the different views that exist in the scientific community on the question. And of course, experts have to do that, and the public should not then be asked to referee that dispute, because that would go horribly wrong.

What they need to be asked is evaluative questions about what would be proportionate to specific identified risks. And I give this “pragmatic analysis,” as I call it, of proportionality in terms of four tests: it’s about looking for responses that are permissible in principle, that are adequate, that are reasonably necessary, and that are consistent. These questions are asking people about their values, and people can answer questions about their values. I think it’s not unrealistic to think that an assembly could come to a judgement of whether this proposed policy is proportionate to this identified risk.

Luisa Rodriguez: Yeah, I’m interested in hearing about the case where you participated. Did you say it was genomics?

Jonathan Birch: It was about genome editing of farm animals. It was run by the Nuffield Council on Bioethics, and it was quite a positive experience for me. I was participating as an expert, obviously, and had a worry going in that these panels are just exercises in expertise laundering: that the experts state their views, and then those views come back freshly washed as the will of the public.

And that wasn’t what happened at all in this case. The public was in some sense better than the experts at challenging assumptions and breaking out of groupthink. Groups of experts are very susceptible to forms of groupthink, and groups of randomly selected citizens seem to suffer from it rather less.

In the case of genome editing — it’s a separate issue, of course — there are very big issues around which corporations are going to benefit from a liberalisation of the law in this area, and how much we should trust the narrative they’re presenting about how it will help animal welfare rather than making it worse. A lot of that narrative the experts were accepting, and the public just wasn’t buying it — and I found that quite reassuring.

Luisa Rodriguez: Wow. So something like, there was a narrative about how genome editing might be used to improve the welfare of farmed animals?

Jonathan Birch: There very much is, yeah.

Luisa Rodriguez: And experts had kind of accepted that. Then in practice, the public participating in the panel was like, “It’s not clear that’s the main use. Maybe this is actually going to be used for things we don’t endorse, like making chickens fatter.” Is that the kind of thing?

Jonathan Birch: Yeah.

Luisa Rodriguez: That’s really impressive.
