I wrote this post to get feedback on my plans, to describe a neglected research path to others, and to seek funding to continue research in this area. I don’t have a strong academic background in the area (I just have a philosophy BA), but I’ve been especially interested in the question for around six years now. I’ve now finished a contract with Rethink Priorities on invertebrate sentience, and I’ve spent roughly the last nine months studying the topic. If you’re interested in funding me or collaborating, please get in touch! Throughout, I use the term "sentience" synonymously with "phenomenal consciousness."

Research on invertebrate sentience would also tend to involve some research on vertebrate sentience, because much of the literature on which criteria to use to assess sentience is written in the context of vertebrates. Invertebrate sentience research would therefore also contribute to research on vertebrate sentience, though that would not be the priority.

I’m generally not talking about cephalopod sentience, because the case for cephalopod sentience is already much stronger, cephalopods are much less numerous than smaller invertebrates, and more people are already concerned for their welfare. My position on sentience is relatively similar to Luke Muehlhauser’s, as presented in his Report on Consciousness and Moral Patienthood, and my approach is inspired by his.

In addition to the agent-neutral reasons I will present in this post for why we might want to focus more resources on invertebrate sentience, I think I have a strong comparative advantage in invertebrate sentience research compared to other things we have strong reasons to focus on, like AI safety research.

Importance:

The two main ways I see this research having an impact are by better informing decisions about improving well-being and by functioning as a form of indirect advocacy. I think it can help us make better decisions about current interventions (such as humane insecticides), and help us understand consciousness well enough to create happy minds and avoid creating suffering ones.

Though much research remains to be done on the question, humane insecticides seem to me to have a lot of potential as a robust way of reducing suffering now. The idea would be to get farmers to adopt insecticides that kill more quickly than current insecticides, kill through a less painful mechanism, or perhaps do not kill nontarget species of insects.

Humane insecticides as an intervention seems to both have higher expected value and to depend on fewer ecological assumptions than other methods of helping wild animals. The sign of the intervention also does not depend on whether insects live net positive or negative lives. Research on invertebrate sentience is probably complementary to work on getting humane insecticides implemented.

The other main way research may be able to help is as a form of indirect advocacy. I believe that society should be thinking a lot more about invertebrate sentience, and research can raise awareness of this issue. Research might be less directly persuasive than more direct forms of advocacy (because it is not optimized for that purpose), but I think that there is also less worry about backlash from it. It also seems to me that advocacy needs to be paired with research that supports it, or else the advocacy will not be well received. One reason for this is that advocacy is more zero-sum than research: from the perspective of society, advocacy shifts the distribution of the pie, whereas research grows it. This more cooperative nature is another reason why research is important. In practice there is probably a fine line between pure research and direct advocacy, but it is possible to strike different balances between them.

I also believe that invertebrate sentience research may be promising if you accept the overwhelming importance of the far future. This is because invertebrate sentience research may be applicable to the question of digital sentience. There may be many simple, potentially conscious, digital minds in the future, and understanding if they are conscious seems important so that we can avoid creating a lot of suffering and create a lot of happiness.

When I bring this up with EAs who are focused on AI safety, many of them suggest that we only need to get AI safety right and then the AI can solve the question of what consciousness is. This seems like a plausible response to me. However, there are some possible future scenarios where this might not be true. If we have to directly specify our values to a superintelligent AI, rather than it learning the values more indirectly, we might have to specify a definition of consciousness for it. It might also be good to have a failsafe mechanism that would switch an AI off before it implemented any scenario that involved a lot of suffering, and to do this we might have to roughly understand in advance which beings are and are not conscious.

While some of these scenarios seem plausible to me, they are also somewhat specific and depend on certain assumptions about the importance of AI in the future and how the process of AI alignment might go. I think understanding digital sentience may be more robustly good than AI alignment research because understanding digital sentience will be important in a greater variety of future worlds.

In my research I intend to focus somewhat on the question of digital sentience, but still focus mainly on invertebrate sentience even though I view the question of digital sentience as more important. This is because I view invertebrate sentience research as more robustly good, more tractable, less weird looking, and as also contributing significantly to our understanding of digital sentience. If we were closer to achieving digital sentience then I would focus more directly on that question.

I believe research on invertebrate sentience contributes to our understanding of digital sentience and vice versa. Indeed, all research on sentience is helpful for doing research on any other kind of sentience. For understanding the possibility of sentience in beings who are very different from us it is helpful to understand cases of sentience that are clearer to us, so that we can understand what we are looking for. Researching digital sentience (and the possible sentience of other entities such as plants or bacteria) can also give us perspective that helps with our understanding of invertebrate sentience, but for now I think researching invertebrate sentience is more promising.

Neglectedness:

Invertebrate sentience research may be very neglected. With rare exceptions, most people do not care at all about invertebrate sentience. This lack of caring may be largely due to factors that are not morally defensible, such as the fact that invertebrates look very different from us or are much smaller than us.

Invertebrate sentience research is somewhat less neglected than general efforts to help invertebrates. Some relevant research gets done out of intellectual curiosity, and quite a bit more research than that gets done because it helps humans in some way. There is a much smaller amount of research done directly on the question of invertebrate sentience. I tend to think that invertebrate sentience research is more important than current efforts to help invertebrates because invertebrate sentience research may shape the far future more than interventions to help invertebrates now.

Invertebrate sentience research lies at the intersection of biology, philosophy, neuroscience, and, to some extent, computer science. This tends to mean that there are fewer experts in the area than one might expect. Many invertebrate biologists who might otherwise have a lot to contribute in the area are not philosophically inclined, and have not thought about the ethical implications of their knowledge, and so become confused about the question of insect sentience.

Tractability:

I don’t believe that research on invertebrate sentience is very tractable, because consciousness is a thorny issue. Some authors, such as Daniel Dennett, have claimed to explain consciousness, but they haven’t offered plausible criteria for determining whether an entity is or is not conscious, and that is what we need to find. However, I do think that, due to its neglectedness, there is some room for making more progress on the question than might be expected given its intractability.

Luke Muehlhauser claims (and I agree) that we would be hard-pressed to assign a very high or very low probability to invertebrate sentience. This would suggest that there is a ceiling on how useful further research on this question may be. Muehlhauser also mentions (and I agree) that adjusting the probabilities you assign to the sentience of different entities does not affect the expected value dramatically. For example, the difference between assigning a 5% probability and a 50% probability is epistemically vast but arguably practically insignificant. It merely affects the amount of expected value represented by invertebrates by one order of magnitude. There are very roughly 10^18 insects in the world, and this number is still multiple orders of magnitude higher than the number of vertebrate animals.
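As a minimal sketch of this arithmetic (using only the figures above, and assuming for simplicity that each insect gets the same per-individual weight, so that only the ratio matters):

```python
# Expected number of morally relevant insects under two credences in insect sentience.
# The 1e18 insect count is the rough figure from the paragraph above.
insects = 1e18

ev_at_5_percent = 0.05 * insects   # 5e16
ev_at_50_percent = 0.50 * insects  # 5e17

print(ev_at_50_percent / ev_at_5_percent)  # 10.0 -- exactly one order of magnitude
```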

This line of reasoning would suggest that working on invertebrate sentience is not important. However, there are some good arguments that talking about degrees of sentience makes sense, and our best assessment of an entity's degree of sentience is something that can more plausibly be shifted by multiple orders of magnitude through research into the question.

It also seems plausible that many of the arguments for why an invertebrate species is more likely to be sentient also serve as arguments for why that species might have a higher degree of sentience (if it is sentient at all). This is because most of the best arguments that an invertebrate species is more likely to be sentient point to a similarity between their minds and ours that is related to sentience in us, and similarities between our minds and theirs are also evidence of similar degrees of sentience. This means we may not have to prioritize much between these two approaches. This has been my impression so far, but I imagine future research might reveal more ways of providing evidence for one without providing evidence for the other.

Most people do not make an expected value calculation when it comes to these questions, so doing research on the likelihood of invertebrate sentience can still be useful for updating and persuading them. In my experience, most people roughly treat the question of invertebrate sentience as either a yes (100%) or a no (0%).

What my next steps would be:

I particularly want to do research into degree of sentience and write about that. I plan to write shorter blog posts as I go because this is easy enough to do, helps me consolidate knowledge, and gives me a faster feedback loop. I will also try to work on some longer, more polished documents. I will in all likelihood continue to update the table of potential consciousness-indicating features that will be published with the report on invertebrate consciousness I worked on with Rethink Priorities.

Comments:

I remain skeptical of how much this type of research will influence EA-minded decisions, e.g. how many people would switch donations from farmed animal welfare campaigns to humane insecticide campaigns if they increased their estimate of insect sentience by 50%? But I still think the EA community should be allocating substantially more resources to it than they are now, and you seem to be approaching it in a smart way, so I hope you get funding!

I'm especially excited about the impact of this research on general concern for invertebrate sentience (e.g. establishing norms that at least some smart humans are actively working on insect welfare policy) and on helping humans better consider artificial sentience when important tech policy decisions are made (e.g. on AI ethics).

My prior here is brain size weighting for suffering, which means insects are currently of similar importance to humans. But I would guess they would be less tractable than humans (though obviously far more neglected). So I think that if there were compelling evidence that we should weight insects 5% as much as humans, that would be an enormous update and would make invertebrates the dominant consideration in the near future.

Based on Georgia Ray's estimates, it looks like there are > 100x more neurons in soil arthropods than in humans.

Soil arthropods:

Using this, we get 1E22-1E23 neurons from large arthropods and 6E22 neurons from smaller arthropods, for a total of 6E22-2E23 neurons in soil arthropods.

Humans:

[...] we get 6.29E20 neurons in humans [...]
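Putting the two quoted estimates together (a quick back-of-the-envelope check; both numbers are exactly as quoted above):

```python
# Ratio of total soil-arthropod neurons to total human neurons, using the quoted estimates.
soil_arthropod_neurons_low, soil_arthropod_neurons_high = 6e22, 2e23
human_neurons = 6.29e20

print(soil_arthropod_neurons_low / human_neurons)   # ~95x
print(soil_arthropod_neurons_high / human_neurons)  # ~318x
```

So the ratio comes out at roughly 100-300x on these figures.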

Shouldn't we weight neurons by their level of graph/central complexity (e.g., by how "central" they are to the system)? Many neurons simply don't factor into evaluations of hedons (even motor and sensory neurons).

I agree, but I'm not sure how available this info has been, at least until recently. This might be a useful approximation:

https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#Sensory-associative_structure

Number of synapses could also be relevant, but I'd assume this data is even harder to find.

Yeah, even the information for total number of neurons is absent for many invertebrates. More specific information like that would rarely be available.

Thanks! However, neurons in smaller organisms tend to be smaller, so I think the actual brain mass of humans would be similar to that of the land arthropods and the nematodes. Fish are larger organisms, so it does look like the brain mass of fish would be significantly larger than that of humans. There is the question of whether a larger neuron could provide more value or disvalue than a smaller neuron. If it is the same, then neuron count would be the relevant number.

Nice post Max. I found this by backtracking from the recent posts from Rethink Priorities on invertebrate sentience and am glad that this is starting to gain research traction. A few comments:

Research might be less directly persuasive than more direct forms of advocacy (because it is not optimized for that purpose), but I think that there is also less worry about backlash from it.

Research on invertebrate sentience is controversial among researchers, and I expect it will be hard for vertebrate-focused researchers to accept. For instance, Andy Barron's PNAS article received three rebuttal letters. It has also received a lot of citations, and while I have not looked through them in detail, I suspect it would not be referred to favourably in the vertebrate literature (looking at these citations could in itself be an interesting subproject to see how this high-profile paper was received in different fields). Academic research can be quite political, and professors often maintain their stance on controversial topics longer than the evidence suggests they should. It's hard to predict how this will influence public opinion, but as the media often likes to get both sides of a story, any press describing an invertebrate sentience study is likely to note the controversy with an unfavourable quote from a vertebrate researcher. Perhaps a form of research advocacy could involve synthesising the arguments for invertebrate sentience in a non-confrontational and comparative (to vertebrates) way for publication in a vertebrate-focused specialist journal.

Many invertebrate biologists who might otherwise have a lot to contribute in the area are not philosophically inclined, and have not thought about the ethical implications of their knowledge, and so become confused about the question of insect sentience.

Agreed. I have a background in robotics and computational-behavioral-invertebrate-sensorimotor-neuroscience (OK, that's a bit of a smash-together of fields), although I am now doing more work in computational physics and 3D imaging. When doing neuroscience studies on invertebrates, explaining a behaviour as conscious would be completely unacceptable by publication stage (although invertebrate researchers do tend to anthropomorphize the actions of their study animals while in the lab). Even behaviour that seems quite intelligent (like learning) becomes practically reflexive as soon as you can pin down the underlying neural circuit in an invertebrate. This is partly a result of my group's approach, which was always mechanistic and reductionist. However, I suspect that research on similar topics in humans doesn't result in the 'magic' of intelligence being lost when, say, learning can be described by a circuit. Since becoming involved with EA I have become more aware of the philosophical discussion around invertebrate neuroscience, but I suspect there are not many others.


When I bring this up with EAs who are focused on AI safety, many of them suggest that we only need to get AI safety right and then the AI can solve the question of what consciousness is. This seems like a plausible response to me. However, there are some possible future scenarios where this might not be true. If we have to directly specify our values to a superintelligent AI, rather than it learning the values more indirectly, we might have to specify a definition of consciousness for it. It might also be good to have a failsafe mechanism that would switch an AI off before it implemented any scenario that involved a lot of suffering, and to do this we might have to roughly understand in advance which beings are and are not conscious.

It seems like there is some asymmetry here, as is common with extinction risk arguments: if we think that we will eventually figure out what consciousness is, then, as long as we don't go extinct, we will eventually create positive AGI. Whereas if we focus on consciousness and then AGI kills everyone, we never get to a positive outcome.

I think the original argument works if our values get "locked in" once we create AGI, which is not an unreasonable thing to assume, but also doesn't seem guaranteed. Am I thinking through this correctly?

There's some related discussion here.

Lock-in can also apply to "value-precursors" that determine how one goes about moral reflection, or which types of appeals one ends up finding convincing. I think these would get locked in to some degree (because something has to be fixed for it to be meaningful to talk about goalposts at all), and by affecting the precursors, moral or meta-philosophical reflection before aligned AGI can plausibly affect the outcomes post-AGI. It's not very clear, however, whether that's important, and from whose perspective it is important, because some of the things that mask as moral uncertainty might be humans having underdetermined values.

Kudos for taking on something important & outside the canonical EA paths!

---

My initial thought on invertebrate sentience is similar to my thinking about animal welfare interventions generally: in the long run, the main effect of this kind of work will probably be how much it impacts humanity's moral circle.

In optimistic scenarios, humans will steer the fates of all other species for the foreseeable future (we will at least give a lot of input to our successors, in the transhumanist case).

So the main thing to get right is ensuring that humans steer in a good direction. More on this here.

(Only applies if you hold a longtermist view.)

Thanks Max - More research in this space feels important. For me, degrees of sentience should determine how much moral consideration we should grant to things (animals, humans, maybe even aliens and AGIs).

I wrote this re: sentientism - may be of interest https://secularhumanism.org/2019/04/humanism-needs-an-upgrade-is-sentientism-the-philosophy-that-could-save-the-world/ .

Thanks Jamie!

Nice article. Thanks for the link.

I don't think I agree with your claim in the article that degrees of sentience have been scientifically demonstrated. Is there a source you have in mind for that? I've been looking at the literature on the topic, and it seems like the arguments that there do exist degrees of sentience are based in philosophy and none are that strong.

I guess the reason you are using sentientism rather than hedonistic utilitarianism is because you think the term sounds better/has a better framing?

Thanks Max.

I'm an amateur here so my confidence level isn't necessarily that high. I am taking "degrees of sentience" from the research (as summarised in Luke's paper) that shows varying levels of complexity in the nervous systems that generate sentience and the behaviours that demonstrate it. Given sentience is a subjective experience it's hard to judge its quality or intensity directly. However, from examining behaviour and hardware / biology, it does appear that some types of sentience are likely to be richer than others (insect vs. human for example). Arguably, that could warrant different degrees of moral consideration. I suspect that, while we will want to define a lower boundary of sentience for ethical consideration reasons, we may never find a clear binary edge. Sentience is likely to be just a particular class of advanced information processing.

I'm using the term sentientism partly because it helps focus on sentience as the primary determinant of which beings deserve moral consideration. We can use it to take decisions about whether to have compassion for humans, non-human animals and potentially even sentient AGIs or aliens. Hedonistic Utilitarianism implies sentience (given it focuses on the experiences of pleasure / suffering) - but has traditionally (despite Bentham) focused only on human experience.

Sentientism, like Humanism, also has an explicit commitment to evidence and reason - rejecting supernatural rationales for morality. As I understand hedonistic utilitarianism it is neutral on that perspective.

For anyone interested in refining these ideas, we run a friendly, global group re: Sentientism here: https://www.facebook.com/groups/sentientism/ . All welcome whether or not the term fits personally. Philosophers, writers, activists, policy people + interested lay people (like me) from 43 countries so far.

Yeah, fair enough. I wish you good luck with your group and project :)

"For example, the difference between assigning a 5% probability and a 50% probability is epistemically vast but arguably practically insignificant. It merely affects the amount of expected value represented by invertebrates by one order of magnitude. There are very roughly 10^18 insects in the world, and this number is still multiple orders of magnitude higher than the number of vertebrate animals."

Given this point, and the implications of Jacy's comment, perhaps it would be preferable to conceptualise the impact of this research/career plan in this area as a form of advocacy, rather than as a form of enhancing our knowledge and affecting cause prioritisation?

In some ways, your rough career trajectory might look similar, but it might affect some decisions e.g. how to split your time between focusing on further research and focusing on giving talks to EA groups, academic settings etc.

I think you may be right that I should pivot more in that direction.

Research on degrees of sentience (including whether that idea makes sense) and what degree of sentience different invertebrates have might still be relevant despite the argument that you're quoting.

When I bring this up with EAs who are focused on AI safety, many of them suggest that we only need to get AI safety right and then the AI can solve the question of what consciousness is.

I find this somewhat frustrating. Obviously there's a range of views in the EA community on this issue, but I think the most plausible arguments for focusing on AI safety are that there is a low but non-negligible chance of a huge impact. If that's true, then "getting AI safety right" leaves a lot of things unaddressed, because in most scenarios "getting AI safety right" is only a small portion of the picture. In general I think we need to find ways to hold two thoughts at the same time: that AI safety is critical, and that there's a very significant chance of other things mattering too.

[anonymous]
If that's true, then "getting AI safety right" leaves a lot of things unaddressed, because in most scenarios "getting AI safety right" is only a small portion of the picture.

I didn't understand this. Could you explain more?

I guess I'm confused about the relationship between digital sentience & invertebrate sentience.

Indeed, all research on sentience is helpful for doing research on any other kind of sentience.

Could you expand on this more?

Seems like you're saying something similar to "doing work on one philosophical question is helpful to all other philosophical questions", which I probably disagree with though haven't thought about closely.

Work on digital sentience probably has to think a lot about e.g. the Chinese room, whereas I imagine invertebrate sentience work as thinking more about the border between animals that seem clearly sentient and animals that we're unsure about.

I think it's somewhat stronger than "doing work on one philosophical question is relevant to all other philosophical questions."

I guess if you were particularly sceptical about the possibility of digital sentience then you might focus on things like the Chinese room thought experiment, and that wouldn't have that much overlap with invertebrate sentience research. I'm relatively confident that digital sentience is possible so I wasn't really thinking about that when I made the claim that there is substantial overlap in all sentience research.

One way in which I think there is overlap is that looking at different potential cases of sentience can give us insight into which features give the best evidence of sentience. For example, many people think that mirror self-recognition is somehow important to sentience, but reflecting on the fact that you can specifically design a robot to pass something like a mirror test can give you perspective on which aspects of such a test, if any, are actually suggestive of sentience.

Getting a better idea of what sentience is and which theories of it are most plausible is also useful for assessing sentience in any entity. One way of getting a better idea of what it is is to research cases of it that we are more confident in, such as humans and, to a lesser extent, other vertebrates.

Reflecting on the mirror test - nice pun!

I see – I was imagining more skepticism about the possibility of digital sentience.

This recent book review about octopus consciousness on LessWrong might be helpful.

Thanks for the link! I'm a pretty big fan of that book.
