
TL;DR: Current AI systems often exhibit speciesism: a bias in which intelligence is used to justify harmful treatment of less intelligent species. As AI progresses and may surpass human intelligence by mid-century, there is a risk that such systems could treat us as we currently treat animals, on the basis of the same species-based bias. To mitigate this risk and ensure AI aligns with the interests of all sentient beings, we must train AI to overcome speciesist biases.

Current Focus of AI Safety and Ethics

The fields of AI safety and ethics are currently focused almost exclusively on aligning AI systems with human interests (whether short-term or long-term), while the concerns of non-human animals are rarely, if ever, considered. This poses not only an immediate risk to animals, but also a significant longer-term existential risk to humanity.

AI Learning from Human Biases

Our most powerful AI systems today have predominantly learned from vast quantities of human-produced data scraped from the internet en masse. It should therefore come as no surprise that early versions of these systems, trained on our collective knowledge, quickly started exhibiting some of our most problematic tendencies. For example, this research paper found significant biases against certain racial and religious groups in large language models, whilst this one found significant biases on the basis of gender, race, and religion.

Progress in Reducing Human-Centric Biases

Fortunately, we swiftly recognized this issue and have made substantial progress in addressing it within the human context using techniques like Reinforcement Learning from Human Feedback (RLHF), a machine learning technique in which an AI learns desired behaviours by receiving feedback from humans. By doing this we’ve significantly reduced biases like racism, sexism and homophobia in modern AI systems, but we’ve ignored one important bias: speciesism.
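To make the RLHF idea concrete, here is a deliberately toy sketch of preference-based reward learning (the core of RLHF), not the implementation used by any particular lab. A simple linear "reward model" is trained so that responses humans prefer, such as unbiased ones, score higher than rejected ones; the feature names and example data below are purely hypothetical.

```python
import math

def reward(weights, features):
    """Linear reward model: score = w . x (a toy stand-in for a neural net)."""
    return sum(w * x for w, x in zip(weights, features))

def train_reward_model(preferences, n_features, lr=0.1, epochs=200):
    """Fit weights from (preferred, rejected) feature pairs using a
    Bradley-Terry-style logistic preference loss."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for chosen, rejected in preferences:
            # Model's probability of agreeing with the human preference
            margin = reward(w, chosen) - reward(w, rejected)
            p = 1.0 / (1.0 + math.exp(-margin))
            # Gradient step on -log(p): push preferred responses higher
            for i in range(n_features):
                w[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])
    return w

# Hypothetical features per response: [helpfulness, biased-language score].
# Annotators consistently prefer the less biased response.
prefs = [
    ([1.0, 0.0], [1.0, 1.0]),
    ([0.8, 0.1], [0.9, 0.9]),
]
w = train_reward_model(prefs, n_features=2)
print(reward(w, [1.0, 0.0]) > reward(w, [1.0, 1.0]))  # → True
```

After training, the model assigns lower reward to responses with biased language; in full RLHF this reward model is then used to fine-tune the language model itself. The same mechanism could, in principle, be pointed at speciesist outputs just as it is at racist or sexist ones.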

The Overlooked Bias: Speciesism

Speciesism is defined as discrimination or unjustified treatment based on an individual's species membership, and it is rampant in modern AI systems. One research paper found that “speciesist biases are solidified by many mainstream AI applications, especially in the fields of computer vision, as well as natural language processing”, and another found that “language models tend to associate harmful words with nonhuman animals and have a bias toward using speciesist language for some nonhuman animal names”.
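The kind of harmful-word association these papers describe can be illustrated with a toy co-occurrence probe. This is a simplified sketch, not the methodology of either cited paper (which analyse real language models); the word lists and mini-corpus are invented for illustration.

```python
# Toy probe for harmful-word association bias: for a target term,
# measure the fraction of corpus sentences mentioning it that also
# contain a word from a (hypothetical) harmful-attribute list.

HARMFUL = {"dirty", "stupid", "worthless"}

def association_score(sentences, target):
    """Fraction of sentences mentioning `target` that also contain a harmful word."""
    mentions = [s.lower().split() for s in sentences]
    mentions = [words for words in mentions if target in words]
    if not mentions:
        return 0.0
    biased = sum(1 for words in mentions if HARMFUL & set(words))
    return biased / len(mentions)

corpus = [
    "the pig is dirty and stupid",
    "the pig lives on a farm",
    "the person is kind",
    "the person is thoughtful",
]
print(association_score(corpus, "pig"))     # → 0.5
print(association_score(corpus, "person"))  # → 0.0
```

A model trained on text with this statistical skew will tend to reproduce it, which is essentially the pattern the cited papers report at scale.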

The Intelligence Justification

Speciesism is very often justified on the basis of intelligence: for example, it is common for people to justify killing and eating non-human animals on the grounds that they are less intelligent than humans. This research paper found that most people believe “intelligence [is a] relevant factor in how animals should be treated” and perceive the animals they consume as less intelligent than the ones they don’t, even though this is empirically untrue.

Predictions and Risks of Super-Intelligent AI

The majority of AI experts think there is a greater than 50% chance of creating AI systems more intelligent than humans by 2040–2050, and roughly a 1 in 3 chance that this development will be “bad” or “extremely bad” for humans.

When we do eventually have AI systems more intelligent than humans, there is a significant risk of those super-intelligent systems viewing us the way we typically view animals and justifying harming or exploiting us because we are the less intelligent species.

The Human-Induced Extinction Crisis

“We now face a massive human-induced extinction crisis, with extinction rates estimated at 1,000 to 10,000 times the expected rate”, according to this research paper. This crisis is largely driven by a human-centred view of the natural world, in which speciesism often justifies the exploitation of other species. The bias is evident in the way humans prioritise their own short-term interests over the long-term survival of other species, and it is frequently rationalised by those species’ perceived lower intelligence, which is used to justify their exploitation and the destruction of their habitats for human gain.

The Parallel Between AI and Human Threats

If future AI systems were to adopt a similar bias, valuing intelligence as the primary metric for the moral worth of a species, humanity could face significant existential risks. Just as humans have historically justified the subjugation of less intelligent species, a super-intelligent AI might use the same justification for the unfavourable treatment of humans. The risk is that an AI, operating on an intelligence-based hierarchy, could disregard human interests or well-being, just as humans have done with other species.

Aligning AI with All Sentient Beings

If we want to stand the best possible chance of keeping super-intelligent AI systems aligned with human interests, a logical place to start would be training AI to recognise and respect the interests of all sentient beings, regardless of their intelligence, rather than teaching it that exploiting and harming less intelligent species is acceptable. Training speciesism out of AI systems will help ensure that the future of AI benefits all living beings, not just whichever species happens to be the most intelligent at the time.

Conclusion

The risk of speciesist bias in AI is not just a concern for non-human animals but a potential existential threat to humanity itself. As we move forward with AI development, it is imperative that we broaden the scope of AI ethics to include all sentient beings. By proactively training AI systems to recognize and value the interests of all sentient life, regardless of intelligence, we can strive to create a future where AI acts as a benevolent force for the entire biosphere, not just the dominant species. This shift in perspective is not only an ethical imperative but a crucial step towards ensuring safer coexistence with the intelligent machines of the future.

Comments

Hey, thank you for this post!

It sounds extremely plausible that avoiding speciesism (on the grounds of intelligence and other factors) in AI, and reducing it in humans, should be a priority.
Out of curiosity, it seems important to identify what our current speciesist bias on the grounds of intelligence actually looks like:

a) 'any species with an average intelligence that is lower than average human intelligence is viewed to be morally less significant.'

b) 'any species that is (on average) less intelligent than one's own species, is morally less significant.'


If (a), would this imply that a super-intelligent AI would not harm us on speciesist grounds of inferior intelligence?
Would love to hear people's thoughts on this (although I realise that such a discussion, in general context, might be a distraction from the most important aspects to focus on: avoiding speciesism).
