
nathanhb

15 karma

Comments (38)

Ok, I just read this post and the discussion on it (again, great insights from MichaelStJules): https://forum.effectivealtruism.org/posts/AvubGwD2xkCD4tGtd/only-mammals-and-birds-are-sentient-according-to

"Ipsundrum" gives me a word for a concept I hadn't had one for: the self-modeling feedback loops in the brain.

So now I can say that my viewpoint is roughly that of a gradualist about the quantity/quality of ipsundrum across species.

Also, I have an intuition that qualitative distinctions emerge from different quantities/qualities/interpretations of experiences: a stubbed toe and a lifetime of torture seem like qualitatively different things, even if their component pieces are the same.

Okay, this is rough and incomplete, but better to answer sooner than keep trying to find better words.

Not just contractualism. I think the cluster of (contractualism, justice, fairness, governance-design) is important, especially for arguing against majority-vs-minority situations, but it's only part of the picture. 

It's also important to consider the entity in question: its preferences, its appreciation of life, and its potential for suffering. So in part I do agree with some of the pro-pleasure/anti-suffering ideas, but with important differences that I'll try to explain.

Alongside this, also the values I mentioned in my other comment. 

I would argue that there should be some weighting on something which does somewhat correlate with brain complexity, in the context of self and world modeling.

For an entity to experience what I would call suffering, I think it can be argued that there must be a sufficiently complex computation (potentially, but not necessarily, running on biological neurons) associated with a process which can plausibly be tied to this self model.

There must be something which is running this suffering calculation. 

This is not distributed evenly throughout the brain; it's a calculation performed by specific areas within the brain. I would not expect someone with a lesion in their visual cortex to be any less capable of suffering. I would expect someone with lesions in their prefrontal cortex, basal ganglia, or prefrontal-cortex-associated area of the cerebellum to have deficits in suffering capacity. But even then, not all of the prefrontal cortex is involved, only specific parts.

I don't think suffering happens in sensory neurons receptive to aversive stimuli. I don't think an agent choosing to avoid aversive stimuli or act towards self-preservation is sufficient for suffering.

I think I need a different word than suffering to describe a human's experience. I want to say that an insect doesn't suffer, a dog does, and a human undergoes yet another, more important kind of suffering than a dog does. It is this emergent qualitative difference, due to the expansion and complexification of the relevant brain areas, which I think leads to humans having a wider, richer set of internal mental experiences than other animals.

Imagine a nociceptive neuron alone in a petri dish. A chemical is added to the liquid medium that causes the neuron to fire action potentials. Is this neuron suffering? Clearly not. It is fulfilling its duty, transmitting a message. The programs instantiated within it by its phenotype and proteome do not suffer. Those programs aren't complex enough for a concept such as suffering. Even if they were, this isn't what suffering would be like for them. The nociceptive neuron thrives on the opportunity to do the job it has evolved for.

So what would be a minimum circuit for aversion? It takes quite a few neurons wired into a specific network pattern within a central nervous system to interpret an incoming sensory signal and assign it a positive or negative valence. It takes far more central nervous system neurons to create a worldview and predictive self-model that can produce the pattern of computation necessary for an entity who perceives themself to suffer. As we can see in humans, a particular pain-related sensory neuron firing isn't enough to induce suffering: many people deliberately stimulate some of their pain-related sensory neurons in the course of pleasure-seeking activities. To contribute to suffering, the sensory information needs to be interpreted as such by a central processing network, which creates a suffering-signal pattern in response to the aversive-stimulus signal pattern.

 

Consider a simpler circuit in the human body: the spinal reflex circuit. The spinal reflex enables us to react to aversive stimuli (e.g. heat) faster than our brains can perceive them. The loop goes from the sensory neuron into the spinal cord, through a few interneurons, and then directly to output motor neurons. Before the signal has even reached the brain, the muscles are moving in response to the spinal reflex, withdrawing the limb. I argue that even though this is a behavioral output in reaction to aversive sensory stimuli, there is no suffering in that loop. It is too simple. It's just a simple program, like a thermostat. The suffering only happens once the brain perceives the sensory information and interprets it as a pattern that it associates with suffering.
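To make the "simple program like a thermostat" point concrete, here is a minimal sketch of the kind of fixed stimulus-response mapping I have in mind. It's a toy illustration with a made-up threshold, not a model of actual spinal physiology:

```python
# Toy sketch: a spinal reflex arc as a thermostat-like program.
# A fixed stimulus -> threshold -> motor-output mapping, with no self-model,
# no interpretation, and no internal state that "evaluates" the event.

HEAT_THRESHOLD = 45.0  # hypothetical skin temperature (deg C) triggering withdrawal

def spinal_reflex(skin_temperature: float) -> str:
    """Sensory neuron -> interneurons -> motor neurons, as a single conditional."""
    if skin_temperature > HEAT_THRESHOLD:
        return "contract withdrawal muscles"  # motor output, before the brain is involved
    return "no action"

# The brain receives a copy of the signal later; only there (on this view) could
# the pattern be interpreted as something like suffering.
print(spinal_reflex(50.0))  # -> "contract withdrawal muscles"
print(spinal_reflex(30.0))  # -> "no action"
```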

 I think that the reactions of creatures as simple as shrimp and fruit flies are much closer to a spinal reflex than to a predictive self with a concept of suffering. I think that imagining a fruit fly to be suffering is imagining that there is more 'perceiver' there, more 'self' there than is in fact the case. The fruit fly is in fact closer to being a simple machine than it is to being a tiny person.

 

The strategic landscape as I see it

I believe we are at a hinge in history, where everything we do matters primarily insofar as it channels through AI risk and development trajectories. In five to ten years, I expect the world to be radically transformed. Either we triumph, and it becomes easy to afford 'luxury charity' like taking care of animals alongside eliminating poverty, disease, and the rest of humanity's material woes, or we fail and the AI destroys the world. There's no in-between; I don't expect any half-wins.

 

Some of my moral intuitions

I think we have to each depend on our moral intuitions to at least some extent as well. I feel like any theory taken to an extreme without that grounding goes to bad places quickly. I also think my point of view is easier to understand perhaps if I'm trying to honestly lay out on the table what I feel to be true alongside my reasoning.

(assuming a healthy young person with many years ahead of them)

Torturing a million puppies for a hundred years to prevent one person from stubbing their toe: bad.

Torturing a million puppies for a hundred years to prevent one person from dying: maybe bad?

Torturing 100 puppies for a year to prevent one young person from dying: good.

Torturing a million shrimp for a hundred years to prevent one person from stubbing their toe: maybe bad?

Torturing a million shrimp for a hundred years to prevent one person from dying: great!

Torturing a million chickens for a hundred years to prevent one person from stubbing their toe: bad.

Torturing a million chickens for a hundred years to prevent one person from dying: good.

Torturing a million chickens for a hundred years to prevent one puppy from dying: bad.

Torturing a million chickens for a hundred years to prevent dogs from going extinct: great!
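Purely to illustrate the rough shape of these intuitions, here is a small sketch checking whether a simple linear weighting can reproduce the person-harm entries above. The weights and harm values are made up for the illustration, not considered estimates, and the dog-regarding entries would need additional values this toy doesn't assign:

```python
# Illustrative only: invented weights, not considered estimates.
# "Moral cost" of torture = weight(species) * count * years, in human-harm units.

weights = {"puppy": 10.0, "chicken": 1e-3, "shrimp": 1e-9}   # per animal-year (made up)
harms_averted = {"stubbed toe": 1e-6, "human death": 1e6}    # human-harm units (made up)

def verdict(species, count, years, harm):
    cost = weights[species] * count * years
    return "bad trade" if cost > harms_averted[harm] else "acceptable trade"

print(verdict("puppy",   1_000_000, 100, "stubbed toe"))   # bad trade
print(verdict("puppy",   1_000_000, 100, "human death"))   # bad trade (~"maybe bad?")
print(verdict("puppy",         100,   1, "human death"))   # acceptable trade ("good")
print(verdict("shrimp",  1_000_000, 100, "stubbed toe"))   # bad trade (~"maybe bad?")
print(verdict("shrimp",  1_000_000, 100, "human death"))   # acceptable trade ("great!")
print(verdict("chicken", 1_000_000, 100, "stubbed toe"))   # bad trade
print(verdict("chicken", 1_000_000, 100, "human death"))   # acceptable trade ("good")
# The chicken-vs-dog entries would need values for a puppy's death and for
# dog extinction, which this sketch doesn't assign.
```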

Are there specific sources or arguments which you recall as being the key influences in you changing your mind?

I agree that "activist" comments don't imply that someone isn't truthseeking. I think that whether an activist mindset or a philosophical mindset should be brought to bear on a given problem is highly context dependent.

I was trying to make the point that I was disappointed that the responses to this question of cause prioritization (human welfare vs animal welfare) seemed to be predominantly activist mindset oriented. To me, it seems this question is a context that, at the very least, requires a balance of philosophy and activism, if not predominantly philosophy. This interpretation is, I think, supported by this question being asked in the context of a "debate week", where the implied goal is for us to explain our viewpoints and attempt to resolve our differences in worldviews.

An example of a question where I would be disappointed to see predominantly philosophical debate instead of activist planning would be: "Given the assumption that there is a 1:1e6 moral value tradeoff for cows to shrimp, how best should we allocate a budget of 1 million dollars among this set of existing charities: (list of charities)?" To respond to a question like that with philosophical debate about the premise would seem off-topic to me. The question specifies a premise, and if you want to fight the hypothesis you ought to initiate an entirely separate conversation.

In your specific case, Ariel, I'd like to thank you for your above comment explaining your philosophical journey and giving links to sources you found influential. This is exactly the sort of comment I would like to see in a conversation like this. I will take the time to read what you have linked, and think carefully about it, then get back to you on where your info has changed my mind and where I might still disagree.

I am delighted by Michael's comments and intend to reply to them all once I've had the chance to carefully examine and consider his linked materials.

Overall, I feel quite disappointed in this comment thread for being in what I would call an "activist" mindset, where the correctness of one's view is taken for granted, and the focus is on practical details of bringing about change in the world in accordance with this view.

I think the question of prioritization of human welfare versus animal welfare should be approached from a "philosopher" mindset. We must determine the meaning and moral weight of suffering in humans and non-humans before we can know how to weigh the causes relative to each other.

Michael StJules is one of the few animal welfare advocates I've encountered who is willing to engage on this philosophical level.

Here are some quotes from elsewhere in this comment section that I think exemplify what I mean by an activist mindset rather than a philosopher mindset. (Single line separators indicate the comments were in a thread responding to each other.)



emre kaplan

Disclaimer: I'm funded by EA for animal welfare work.

Some thoughts:

a. So much of the debate feels like a debate on identities and values. I'd really love to see people nitpicking into technical details of cost-effectiveness estimates instead.

... (Truncated)



Ariel Simnegar

So it is more important to convince someone to give to e.g. the EA animal welfare fund if they were previously giving to AMF than to convince a non-donor to give that same amount of money to AMF.

I've run into a similar dilemma before, where I'm trying to convince non-EAs to direct some of their charity to AMF rather than their favorite local charity. I believe animal welfare charities are orders of magnitude more cost-effective than AMF, so it's probably higher EV to try to convince them to direct that charity to e.g. THL rather than AMF. But that request is much less likely to succeed, and could also alienate them (because animal welfare is "weird") from making more effective donations in the future. Curious about your thoughts about the best way to approach that.


CB

Another option, if they're sensitive to environmental issues, is to redirect them to charities that are also impactful for sustainability, such as The Good Food Institute. According to the best guess by Giving Green, they can avoid 17 tons of CO2eq for $50.

This way, they can make a positive contribution to the environment (not to mention the positive impact on human health and pandemics).

I've done this for a charity that does similar work in my country, and at the very least people didn't give any pushback and seemed understanding. You can mention concrete things about the progress of alternative proteins, like them being the default choice at Burger King.


Jason

I have a sense that there could be a mutually beneficial trade between cause areas lurking in this kind of situation, but it would be tricky to pull off as a practical matter.

One could envision animal-welfare EAs nudging non-EA donors toward GiveWell-style charities when they feel that is the highest-EV option with a reasonable probability of success, and EA global-health donors paying them a "commission" of sorts by counterfactually switching some smaller sum of their own donations from GH to AW.

In addition to challenges with implementation, there would be a potential concern that not as much net money is going to GH as the non-EA donor thinks. On the other hand, funging seems to be almost an inevitable part of the charitable landscape whether it is being done deliberately or not.


Ben Millwood

Yeah, this seems a little... sneaky, for want of a better word. It might be useful to imagine how you think the non-EA donors would feel if the "commission" were proactively disclosed. (Not necessarily terribly! After all, fundraising is often a paid job. Just seems like a useful intuition prompt.)



Stijn

"So it is more important to convince someone to give to e.g. the EA animal welfare fund if they were previously giving to AMF than to convince a non-donor to give that same amount of money to AMF." More generally, I think it is more important to convince an EA human health and development supporter to diversify and donate say 50% of the donation budget to the most effective animal welfare causes, than to convince a non-EA human charity supporter to diversify and donate say 50% of the donation budget to AMF or similar high-impact human-focused charities.

I agree that there are difficult unresolved philosophical questions in regards to hypothetical not-yet-extant people who are varyingly likely to exist depending on the actions of currently extant people (which may be a group that includes blastocysts, for instance).

In regards to non-human animals, and digital entities, I think we need to lean more heavily into computational functionalism (as the video you shared discussed). This point too, is up for debate, but I personally feel much more confident about supporting computational functionalism than biological chauvinism.

In the case of complex-brained animals (e.g. parrots), I do think that there is something importantly distinct about them as compared to simple-brained animals (e.g. invertebrates).

Some invertebrates do tend to their young, even potentially sacrificing their own lives on behalf of their brood. See: https://entomologytoday.org/2018/05/11/research-confirms-insect-moms-are-the-best/

I think that in order to differentiate the underlying qualia associated with this behavior in insects versus the qualia experienced by the parrots defending their young, we must turn to neuroscience.

In a bird or mammal, neuroscience can offer evidence of specific sets of neurons carrying out computations such as self-modeling and other-modeling, and things like fondness or dislike of specific other modelled agents. In insects (and shrimp, jellyfish, etc.), neuroscience can show us that the brains consistently lack sets of neurons which could plausibly be carrying out such complex self/other social modeling. Insect brains have various sets of neurons for sensory processing, motor control, and other such basic functions. Recently, we have made a comprehensive map of every neuron and nearly all of their associated synapses in the preserved brain of an individual fruit fly. We can analyze this entire connectome and label the specific functions of every neuron. I recently attended a talk by a neuroscientist who built a computational model of a portion of this fruit fly connectome, and showed that a specific set of simulated inputs (presentation of sugar to taste sensors on the legs) resulted in the expected stereotyped reaction of the simulated body (extending the proboscis).
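As a rough illustration of what that kind of connectome-based simulation involves, here is a toy sketch of my own. The neuron labels, weights, and dynamics are invented for the illustration; the real model would use the measured fruit-fly connectivity rather than random weights:

```python
import numpy as np

# Toy connectome simulation: leaky rate neurons driven through a fixed
# connectivity matrix. Labels and weights are invented for illustration only.

rng = np.random.default_rng(0)
n_neurons = 50
W = rng.normal(0.0, 0.2, size=(n_neurons, n_neurons))  # synaptic weights (made up)

sugar_sensors = [0, 1, 2]        # hypothetical "taste sensors on the legs"
proboscis_motor = [47, 48, 49]   # hypothetical proboscis motor neurons

def simulate(stimulated, steps=100, dt=0.1, tau=1.0):
    """Run leaky dynamics: tau * dr/dt = -r + W @ f(r) + external input."""
    r = np.zeros(n_neurons)
    ext = np.zeros(n_neurons)
    ext[stimulated] = 1.0                      # constant drive to the stimulated sensors
    for _ in range(steps):
        drive = W @ np.tanh(r) + ext
        r += dt / tau * (-r + drive)
    return r

activity = simulate(sugar_sensors)
print("mean proboscis motor activity:", activity[proboscis_motor].mean())
# If the real, measured wiring is doing what we think it is, stimulating the
# sugar sensors should drive the proboscis motor neurons: the simulated fly
# "extends its proboscis" in response to sugar.
```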

That, to me, is a good start on compelling evidence that our model of the functions of these neurons is correct.

Thus, I would argue that parrots are in a fundamentally different moral category from fruit flies.

For the case of comparing complex-brained non-human animals to humans, the neuroscientific evidence is less clear cut and more complex. I believe there is a case to be made, but it is beyond the scope of this comment.

Thanks for your thoughtful engagement on this matter.

Well, if AI goes well, the things on my short list to focus on next, with the incredible power unlocked by this unprecedentedly large acceleration in technological development, are: alleviating all material poverty, curing all diseases, extending human life, and (as a lower priority) ending cruel factory farming practices. This critical juncture isn't just about preventing a harm; it's a fork in the road that leads either to catastrophe or to huge wins on every current challenge. Of course, new challenges then arise, such as questions of offense-defense balance in technological advancements, rights of digital beings, government surveillance, etc.

Edit: for additional details on the changes I expect in the world if AI goes well, please see: https://darioamodei.com/machines-of-loving-grace

Thank you, Michael, for your insightful comment and very interesting source material! If you are willing, I'd love to hear your take on this comment thread on the same subject: https://www.lesswrong.com/posts/RaS97GGeBXZDnFi2L/llms-are-likely-not-conscious?commentId=KHJgAQs4wRSb289NN

I have upvoted your use of an LLM because this comment is more thoughtful, balanced, and relevant than your average comment, and much more so than the average commenter's comment in this particular thread. I normally don't post LLM outputs directly, but this comment thread is so full of unconsidered and unelaborated-upon opinions that I figured this would be a rare place in which LLM mediocrity would be a convenient way to raise the average quality of the content. My hope was to stimulate thought and debate; to initiate a conversation, not to provide a conclusion to a debate.
