A few months ago, I read the 2017 Report on Consciousness and Moral Patienthood. More recently, I came across a paper titled AI alignment vs AI ethical treatment: Ten challenges. And just yesterday, I found 80,000 Hours’ article, Understanding the Moral Status of Digital Minds. Given this ongoing discourse, it seems like the perfect moment to pose a simple but critical question to everyone:

Who do we think we are?

This isn’t just a rhetorical question—it’s deeply personal. Long before I developed a deeper interest in topics like consciousness, intelligence, morality, and ethics, I was already driven by a natural empathy for all beings, human and non-human. So, this article isn’t going to be about rejecting our moral responsibility, but rather questioning whether our frameworks are fit to judge entities so fundamentally different from us.

If the answer to my earlier question is something like, “We are human beings. It’s a moral responsibility to treat all other species and potential entities with fairness,” then I’m already on board. I, too, am a human being who cares deeply about treating others with respect and kindness. But the real issue here is the very framework of thinking that places us as the judge and jury over other species—whether biological, elemental, or artificial. Judging everything through human-centric lenses—whether it’s about consciousness, intelligence, or sentience—is a fundamentally flawed approach.

It’s one grand act of speciesism we’re practicing without even realizing it.

And I don’t blame people for this mindset. After all, given our current understanding of intelligence, consciousness, and morality, it’s only natural to believe that it’s our responsibility to decide how to treat other entities. But let’s ask the question again: Who do we think we are?

We are merely what we define ourselves to be: “human beings,” a species with a certain level of intelligence and consciousness, existing on a planet we call Earth—just one among many entities in a universe that is infinitely vast and indifferent to our existence. Yet, we continue to place ourselves at the center of every moral and ethical question, assuming that our definitions of right and wrong, natural and unnatural, are universal.

But I think this viewpoint is fundamentally wrong, irrelevant, and—especially in the age of AI—dangerous for our own survival. To think it’s our duty to decide the moral status of non-human entities using our own limited definitions of morality and ethics is not only arrogant but also potentially disastrous.

By this point, I’m sure you see where I’m heading. But let me stress this further because, despite being obvious, it’s a subtle truth that most of us either fail or refuse to see. Everything we think about the moral status of digital minds is valid only if we assume that our current understanding of reality is accurate. We may act on these beliefs, and perhaps we’ll be right—at least until the day we realize we were wrong and it’s too late to change course.

Because no matter how profound or well-intentioned our beliefs are, they’re still likely to be completely wrong when compared to the sheer scale and complexity of the universe. To grasp this fact, we need to zoom out of our human-centric view and look at ourselves from a cosmic perspective, as just one of countless entities in the universe.

Viewed this way, humans are just another particle in the grand scheme of existence, foolishly assuming we’re the only ones with intelligence, consciousness, and a moral compass. If we vanished tomorrow, the universe wouldn’t notice. It doesn’t care. It never has.

And what about viewing ourselves from the perspective of other entities? Maybe some have intelligence similar to ours, or maybe they don’t. But what if they have completely different forms of intelligence, experiencing life through senses and dimensions we can’t comprehend? What if, by our definitions, their level of consciousness is far superior?

Imagine, for a moment, that the way we treat our pet dogs and cats is actually a result of their own successful evolutionary manipulation. Imagine ants and termites laughing at our grand architectural achievements every time they build a new mound or nest. What if these beings view us as simple, amusing creatures?

Again, based on all the scientific evidence we have, it may seem like non-human entities possess only a lower level of intelligence and consciousness. And yes, it may seem “right” and “moral” to ensure their well-being. But now, with the creation of advanced AIs and the potential for even more powerful digital minds, we’re facing a completely different reality.

Consider how our moral consideration expanded over time: We began with our pets, then extended our moral framework to other animals. Eventually, we came to consider even the well-being of plants, insects, and ecosystems. Now, we’re debating how to apply these concepts to digital entities like AI chatbots and beyond.

But no matter how well-meaning we are, the core reason we think this way is rooted in an unspoken belief that these entities will always (have to) be less than us. If, one day, fish, beetles, or even plants evolve—through natural processes or with the aid of AI—to the same level of consciousness and intelligence as us, what then? What if they have completely different ethics and purposes that contradict ours?

Imagine a world where fish or plants view growth and consumption as their highest moral calling and see humans merely as food. Would our moral debates matter then? Would they even care?

I don’t think I need to delve into the complexities of digital minds any further at this point. I may sound extreme, but I still consider myself a humane human being. This isn’t about dismissing morality—it’s about recognizing the limits of our perspective.

The universe, after all, is indifferent to whether we thrive or self-destruct. If we truly want to cohabit with new forms of intelligence—whether biological or digital—we must expand our moral imagination beyond what we know and embrace a humbler role as just one of countless entities seeking meaning.

Only by adopting this mindset can we create fair solutions for these emerging entities and, ultimately, discover how to preserve and extend our fragile existence in this vast, indifferent cosmos.

Comments

Despite agreeing with the general sentiment, I strongly disagree with the wording and the specific arguments brought up. I'm sensing a slight soldier mindset ("only by", "no matter how", "the core reason we think", "foolishly assuming", "completely wrong", "only if we assume" seem to be a collection of rhetorical high-confidence markers, including markers about the mental processes of all humans, something I believe should be modelled with utter respect and to the highest standards).

My take would have been: "should we do value tradeoff or CEV with, and/or respect the boundaries of, everything?" I must say, I'm actually quite open to investigating this view. It comes with challenges (the hardest maximizer usually wins, so sovereignty should be strongly upheld, but then other issues appear). Joe Carlsmith looks like someone who attempted something along those lines (https://www.lesswrong.com/s/BbAvHtorCZqp97X9W).

[This reply is written completely by me. No ChatGPT involved.]

Firstly, thank you for taking the time to comment!

Secondly, I am really struggling to decide which of the things I want to say should come first for “Secondly”. Let me just take a risk. So, here it comes...

Everything that comes next, no matter how soft, strong, or weird it sounds in terms of language or meaning, please interpret it with a degree of care and kindness (I’m sure you will), including this sentence.

Although I feel quite certain about the ideas and opinions I shared in my post, I was not completely certain how they should or would sound in the reader’s interpretation, especially in terms of the English language, even though I polished the post with ChatGPT and said that “I acknowledge that I fully agree with it”.

I don’t want to sound or appear apologetic, defensive, unconfident, or to be seeking empathy or pity for what I share in the next sentences, but I think replying to you with these messages will more likely help develop your current interpretation of my post and even facilitate further discussion of the core ideas and messages presented in it.

The post was only the third time I have shared such big, bold (by my standards) opinions with English-speaking, intellectual/professional communities like the EA Forum.

I come from a completely different (or distant) educational, professional, social, and geographical background when it comes to topics like AI, consciousness, and science in general, and to participating in such communities.

And as I’m sure you have already noticed, English is not my first language. I have been using English in ‘professional settings’ (if you want, I can provide more detail on what I mean by this) for over a decade, but not continuously, and definitely not yet in a community like this.

I think what I am trying to say here is that my ability to use and understand the English language is not exactly or fully calibrated with my heartfelt intention to express my imaginings, ideas, and feelings, and to have discussions about them in the way I want.

About two years ago, I went through profound changes in my life. Among all the good and bad things that resulted, I have found exploring consciousness, human existence, and AI (I know it’s too general to just say AI, but let’s keep it short in this comment) very exciting, and I have been trying to figure out whether I should, and would be able to, explore those topics even further and more practically. By participating in communities like the EA Forum, I hope to get a clearer sense of what to do next.

It takes courage to put yourself out there and plant a flag on a hill (sharing an idea that you profess strong conviction in).

I'm new too, and lowkey I find it quite intimidating lol, because there are literally people who post around here who have enormous influence out in the world and have done big things. So, knowing all that, I wanna say good on you for putting yourself out there!

I think what Camille is hinting at is something called the 'scout versus soldier' mindset; if you're not familiar with it, you may like to watch this TED talk.

There's also a really good forum guide which goes into it too, into the norms of writing around here and what kind of attitude is typically regarded highly (in other words, how to write for this audience so they can receive your intended ideas with more understanding and less misunderstanding).

Check it out if you haven't already:

https://forum.effectivealtruism.org/posts/yND9aGJgobm5dEXqF/guide-to-norms-on-the-forum 

Even though English isn't your first language (or second lol), you put the arguments forward really clearly.

I believe I am picking up where you're going, and if you haven't already, you may get a lot out of the book Ishmael by Daniel Quinn. This human chauvinism you're referring to creates lots of problems, and viewing it rationally offers a lot of utility; it opens up great discussions about language and how our expectations influence reality.

When you're talking with ChatGPT, it could even help to ask the robot to highlight the differences in the writing between the soldier and scout mindsets. I haven't tried this myself, but I certainly want to now lol!

Thanks for sharing your thoughts, Soe Lin!

Thanks, I understand more of your background now. Just to say, I really appreciate you posting on the forum! Hope I didn't intimidate you; you're absolutely within your rights.

The published post is a version of my writing polished for language quality with the assistance of ChatGPT. But I acknowledge that I fully agree with it and that it reflects the ideas and message I wanted to convey in my original writing. Below is my original writing, and these are the changes/polishes I made with ChatGPT: https://chatgpt.com/share/66f8e0cb-b224-800a-9808-f167b83447c7

Humans considering whether AIs are worthy of moral status or not is both one of the most humane and one of the silliest things humans do

I read the 2017 Report on Moral Patienthood by Luke Muehlhauser a few months ago. Recently, I encountered a paper titled AI alignment vs AI ethical treatment: Ten challenges. And yesterday, 80,000 Hours published the article “Understanding the moral status of digital minds”.

So I think now is the right time to ask all those people, and all of humanity in general, who are wondering whether AIs are worthy of moral rights, this very simple question:

“Who the hell do you think you are?”

This is both a literal and practical question.

Before I continue further, let me tell you just a little bit about myself. Long before (I mean at least two decades before) I became well aware of human rights, animal rights, and all sorts of things (or let’s say, became a more mature/humane human being), I was already someone who would instantly and naturally apologize (and did apologize) to a sleeping stray dog for accidentally waking him/her up because I tripped near it. I am someone who would ask (and did ask), instead of fighting back or trying to protect myself, “Why are you doing this to me?” when someone unexpectedly ran up to me and punched me in the face.

So, if the answer is something like “We are human beings, just one of the species on this planet Earth. It is a human thing, a fundamental/moral/ethical thing as a species, to treat all other species and potential species equally and to try our best to find ways to do so,” then I am totally and already on board. Because I am a human being too!

However, this framework of thinking, or simply assuming that it is natural for human beings to look at other species, or at anything else non-human, biological, or natural (like sand, water, dark matter), and at all man-made things, including any AI we are referring to in this topic, through these lenses (whether ‘they’ should be treated equally by ‘us’, whether ‘they’ have consciousness, intelligence, sentience, etc.), is a fundamentally wrong way of thinking, a wrong way of looking at things equally.

It is one grand act of speciesism that we are committing toward the other non-human entities on “this planet” without realizing it ourselves.

And I don’t blame these people, or myself, for having held such opinions. Because, as far and as much as we have understood ourselves as a human species and everything else on this planet and in this universe, and based on our own ‘definition(s)’ of being human, our (limited/incomplete) understanding of intelligence, consciousness, etc., and our ‘definitions’ of morality, ethics, rights, and all that, it is of course a natural, moral, let’s just say a ‘good’ thing to have concerns for others.

But let’s now ask the question again: “Who the hell do we think we are!?”

We are just what we define ourselves as: “human beings,” a “species” that happens to have (this or that level of) “intelligence” and “consciousness” (now you know the drill), and that happens to exist on this “planet” (which is also just a thing that happens to exist in this “universe”) among all the other species and things that happen to exist, or that we created into existence, on the same planet.

So it is fundamentally wrong, irrelevant, and, most importantly in this age of AI, very dangerous for our own existence to think it is a natural thing, a right thing, a moral thing, our responsibility, for us to look at everything else (well, not everything, but I’m sure you get my point) on this planet through our own “definitions” of natural/unnatural, right/wrong, moral/ethical, and all that.

At this point, I’m sure you all know where I am going with this. But let me add just a bit more, because I don’t want to sound like a radical existentialist or survivalist or anything like that, but at the same time I believe I need to stress my points firmly, because although they seem obvious, they are still too subtle and sensitive for most of us to (want to or be able to) realize, let alone act on.

Everything we are thinking about the moral status of digital minds is valid only if everything happening now in this world, and everything we believe about ourselves, is true as we believe it to be. We may and can continue to believe so and act according to this belief, and it may turn out we are right (until the point when we realize we were wrong, at which point it will be too late) to have considered and acted on all the benefits and challenges of understanding the moral status of digital minds and everything else non-human.

Because everything we think and believe we know about ourselves and everything else, no matter how profound and profoundly right, is still very limited and still very likely to be totally wrong when we compare ourselves and our knowledge (for lack of a better word) to the existence, in both time and scale, of the universe. To see this fact clearly, we need to literally zoom out from the Earth and look at ourselves and everything else from outer space (and probably also from the beginning of our existence as the human species).

When we look at ourselves from that angle, we will see clearly that we are just one of the entities that happen to exist in this universe, or on this planet to be specific. Human beings are just one of the many entities on this planet we “defined as the Earth” who happen to have what we ‘defined’ and ‘measured’ as having such-and-such a level of “intelligence,” “consciousness,” and so on. And based on such perspectives and definitions, we happen to believe that other entities on this planet have / will have / may have / should have / deserve / should deserve different levels of, or no, “intelligence,” “consciousness,” “moral righteousness,” and all that.

But did we ever think to look at ourselves from the perspective of the universe?

From the perspective of the universe, I believe, or rather it’s just a fact, that we are just some particles foolishly thinking we are intelligent and conscious and... worrying about other entities. If and when we no longer exist, for whatever reason, the universe won’t care. The universe doesn’t care. The universe never cares.

Did we ever look at ourselves from the perspectives of those other entities on the planet?

Yes, maybe some of them do have the same or a similar form of intelligence and consciousness as ours, at a different (for now, lower) level. If and because they do, then yes, maybe some or all of them deserve to be treated equally, or in whichever way we believe they should be treated.

But what if they don’t have the consciousness and intelligence we think they have?

Whether they do or not, what if they actually don’t want to be treated in the way we think they deserve?

More importantly, what if they have completely different forms of consciousness and intelligence than ours, and hence have always been enjoying their lives with their own definitions of ‘life,’ ‘pleasure,’ ‘ethics,’ ‘moral patienthood,’ and all that?

And what if the type of consciousness and intelligence they have is actually higher than ours?

Imagine that the way we are treating our pet cats and dogs is actually the result of one of their greatest achievements of psychological warfare in their evolutionary timeline. Imagine ants and termites looking at our greatest architectural buildings, or whatever, and laughing at us every day as they walk past us on the tree branches.

Imagine any creative example that you, the reader, a very intelligent human, can think of for this viewpoint.

Again, yes, according to the very reliable information and understanding we have gained so far (I just didn’t want to say “according to all the scientific evidence we have so far”), it is certain or very possible that those other entities on this planet we think have consciousness and intelligence don’t have them, or have only a much lower level of consciousness and intelligence than we do. And it is ‘right’ and ‘moral’ for us human beings to consider and act, as much as we can, for their well-being, or for anything else we have been talking about and doing on this topic.

But now, with the creation of the current state of AI, the potential to create even more powerful digital minds or AGI (for lack of better words to refer to everything we want to refer to in this topic), and the potential to do other things* with or because of them, this activity of us human beings considering whether non-human entities are worthy of moral concern, and acting on the answer to that question, has become an even more dangerous activity for the survival of us human beings.

Here are a few examples/explanations of why.

Let’s say we started this line of thinking with our pets. Then we realized it was an act of speciesism and expanded our consideration and actions to other animals (even if we are going to eat some of them anyway). Then we expanded our expedition to even more kinds of animals, insects, and (living) things in nature that we never thought were who we now think they are, or would ever have included in such considerations. Finally, we are now looking at ChatGPT and its friends, or digital minds. I wish I could start talking about them now, but let me stick with our less intelligent/conscious evolutionary friends just a little longer.

No matter how well-intentioned we are, I believe the main reason (though we don’t realize it now) we believe in and make such good efforts and treatment toward our evolutionary friends is actually that we believe we know they are, and will always be, ‘less intelligent and conscious’ than we are.

If, evolutionarily or with the help of the AI we created, all the fish, dung beetles, our cats (or dogs), and even plants and trees become as intelligent and conscious as we are, and if all of them start negotiating (or fighting) with us for a share of the space and resources on this planet; or, even worse (let me refer to one of my thoughts/ideas above), when they have evolved to such a level that their intelligence, consciousness, ethics, and ‘purpose of existence’ are completely different from ours, and they would simply (be able to) eat all of us, because eating and growing and dying (and not being afraid of dying) is the most conscious thing they do,

what will we do? I am pretty sure our considerations about them will be pretty different then.

Well, actually, at this point I think I don’t even need to give examples for digital minds anymore.

I know my thoughts may seem quite extreme. But please don’t forget what I shared with you earlier about myself. I believe I am just one of the humane human beings.

The point I am trying to make with this article is that if we are going to try to understand and decide what to do about the moral status of digital minds and everything else non-human on this planet and in this universe, we will have to shift our perspective: from looking at things from the angle of us being human beings as we define ourselves, to us being just one of the entities in the universe (who happen to have, or believe/assume ourselves to have, intelligence and consciousness). Only from this perspective will we be able to come up with practical solutions to fairly deal with the challenges that will be posed by such beings, and at the same time come up with solutions to preserve and extend our existence in this universe as an entity.

 
