Before participating in the Arete Fellowship program, I had already engaged with the philosophy of effective altruism (EA). My first introduction to EA was through a podcast hosted by economist Steven Levitt, featuring Peter Singer, a leading philosopher in the EA community. Many aspects of Singer’s approach to effective altruism appealed to me. Its focus on impartiality, cause prioritization, and cost-effectiveness provided a framework to quantify and rationalize my innate drive to do good. These principles sparked my interest and motivated me to join the fellowship, where I hoped to gain a more comprehensive understanding of both the philosophical underpinnings and practical applications of EA.

While Singer’s principles resonated with me, I have grappled with certain implications and extensions of these moral premises and found them difficult to reconcile with my intuitions. In particular:

1. “Pain is bad, and similar amounts of pain are equally bad, no matter whose pain it might be.” and its tension with patriotic and familial values when doing good

2. “We are responsible both for what we do and for what we fail to do/could have done.” and its implication that people who fail to do good should be considered immoral

3. “The seriousness of taking a life depends not on the race, sex, or species of the life killed, but on its individual characteristics, such as its desire to continue living or the kind of life it is capable of living.” and its implications for animal welfare and issues like the rights of an anencephalic infant 

These concerns have lingered in the back of my mind for some time. Following the intro fellowship, where we discussed practical applications of EA, I want to revisit these foundational philosophical questions with fresh insights. As a novice in the EA community, I offer not answers, but further questions and reflections on these principles. 

First, concerning moral premise 1 and its implications, I have the following assorted concerns:

● Living consistently with this principle would seem to require abandoning patriotic tendencies and, as Peter Singer argues, even setting aside our emotional preferences for friends and family to achieve truly ethical thinking. Singer often uses the example of a "Bengali child vs. a child in your neighborhood" to illustrate the equal pain principle, and that example makes intuitive sense. However, consider a different scenario: an effective altruist’s child is gravely ill, and treatment in the U.S. costs $10,000. Intuitively, one might choose to spend that $10,000 to save one’s own child. However, the effective altruist knows that the same $10,000 could save the lives of 10 Bengali children through donations to a charity addressing infectious diseases in a developing nation, where medical costs are significantly lower.

● By the equal pain principle, the pain of a single Bengali child dying is no less significant than the pain of the altruist’s own child dying. Moreover, saving 10 lives has a value 10 times greater than saving one. Thus, by effective altruism's logic, the altruist should prioritize the Bengali children and direct the $10,000 to the charity. 

● If the effective altruist has additional resources, they are morally obligated to continue allocating those funds to save hundreds more Bengali children rather than addressing any needs closer to home. Consequently, under this framework, the effective altruist's own child may never receive the treatment necessary to survive their illness. 

This feels intuitively wrong to me. Addressing potential concerns with this scenario: 

● Some will note that this scenario seems to strawman Singer’s position. In practice, Singer only advocates for people to donate about 10% of income, or what people generally consider “excess income”, and the money spent on treating one’s own child’s illness should not be considered excess. However, given that Singer has explicitly named all identities and relationships as unimportant when it comes to measuring pain (he argues that the pain of a dictator is intrinsically as bad as the pain of an ordinary person), I don’t believe spending on one’s own child’s treatment would count as exempt personal spending under his framework.

My reflections on this 

● Having read Singer’s replies to similar questions, I think Singer would admit that most people (even members of the EA community) would probably opt to save their own child. Inconsistency in implementation is normal. But he would also note that what most people do is not the same as what is truly moral. In the end, he would still consider the lives of ten Bengali children the greater moral priority. This feels counterintuitive, but I’ve noticed that Singer seems willing to give counterintuitive, sometimes controversial replies to these edge cases when he thinks they are consistent with his principles.

● Personally, because I’m so emotionally opposed to upholding the principle in this edge case, I question whether I would want to pursue what Singer considers a “truly ethical” state at all. I don’t know if I’ll ever develop my thinking to the point where I’d find choosing the death of my own child morally agreeable.

Concerning moral premise 2 and its implications

● Rationally, I fully accept this principle. The difficulty of internalizing it seems analogous to loss aversion: people perceive losses of what they already possess as more significant than equivalent gains. Similarly, individuals tend to attribute greater significance to their actions than to their inactions, even when the moral weight of the two is comparable. This tendency, rooted in cognitive bias, makes the principle inherently challenging for many to internalize and act upon.

● I aim to hold myself accountable to this standard, but my concerns arise when considering its application to others. For instance, suppose a billionaire, whose wealth is derived from completely legal and ethical sources, has ample excess income but chooses not to donate to charity. Can we justify labeling them as immoral? Furthermore, do we bear a responsibility to nudge them toward fulfilling their moral obligation? 

● Answering "yes" to these questions introduces a potential tension: does taking such a stance imply entitlement or hubris on our part? By what right do we position ourselves as arbiters of morality, especially when others’ values or priorities may differ? 

My reflection on this 

● I think this is an important question that could help me navigate the norms of the EA community. Specifically, within the EA community, beyond working on their own projects, do people tend to remind and suggest to others “what they could have done but didn’t”?

● If, in most cases, such tendencies take the form of friendly nudges, at what point do friendly nudges become “self-entitled” judgments?

Concerning moral premise 3 and its implications, I have the following concerns 

● Singer makes a very controversial argument based on this moral premise. He first states that it is an individual’s characteristics, not generalizing factors like gender, race, or species, that determine the seriousness of taking that individual’s life. This makes rational sense to me: we certainly should not apply stereotypes about an entire group to every individual in that group when judging their value.

● Even when extended to the topic of animal welfare, I still find this principle mostly agreeable. Indeed, animals also have a desire to live and are capable of living a life. Their pleasure and pain ought to be accounted for as well. 

● The principle becomes more challenging to accept when Singer extends it to a particular edge case. In one example, he compares an anencephalic infant (an infant born without most of the brain, and thus without consciousness) to an animal like a baboon. According to the principle, we determine an individual’s right to life by their individual characteristics, specifically by their “desire to continue living or the kind of life it is capable of living.” Singer notes that the infant is unconscious and thus has no desire to live and is not capable of living any kind of life. The baboon, though generally less conscious and less capable of living a good life than the average human, is probably more conscious and capable than the anencephalic infant. Thus it would be more morally serious to take the life of the baboon than to take the life of that human infant.

● An implication may be that it would be morally justified to take the organs of a surely dying anencephalic person, with consent, and transplant them to sustain the life of a baboon in need.

My reflection 

● My intuition against these implications made me realize that I do hold a bias: I think that humans are inherently more worthy of life than other animals, regardless of their individual characteristics. Emotionally, I’d still find the death of the anencephalic infant more serious than the death of the baboon. Rationally, I have no justification for this intuition.

● Is there anything that makes a less conscious human more worthy of life than a more conscious baboon? There are certainly legal differences: harming an anencephalic infant would be considered a serious crime, perhaps perceived by the public as even worse than harming a typical human; harming a baboon may draw condemnation as well, but certainly to a lesser degree.

● Again, this is an instance where Singer’s principle runs against the majority’s common intuition. I do wonder why most people, including myself, hold this speciesist bias. If racist ideologies are shaped by educational norms, should I be able to identify similar components in my own education that promote speciesist thinking?

As evident from these concerns and reflections, Singer's moral principles often lead to conclusions that feel deeply counterintuitive, particularly in edge cases. Whether it involves prioritizing distant lives over familial bonds, passing moral judgment on those who fail to act, or assessing the comparative value of human and non-human lives, I find myself wrestling with intuition and emotion. This stands in stark contrast to the practical aspects of EA I encountered in the Arete program, where most proposals and solutions felt both enlightening and intuitively sensible. However, these practical and intuitive methods are ultimately grounded in Singer’s deeply counterintuitive moral premises. 

While the EA community naturally prioritizes maximizing good through practical research and impactful projects, I believe it is equally important for members to devote some attention to the movement's philosophical foundations. Doing so would not only refine these principles but also serve as a critical reminder of why we chose to work within the EA framework in the first place.

Comments

I used to think that the exact philosophical axiologies and the handling of corner cases were really important to guide altruistic action, but I now think that many good things are robustly good under most reasonable moral frameworks.

 

“these practical and intuitive methods are ultimately grounded in Singer’s deeply counterintuitive moral premises.”

I don't think this is necessarily true. Many (I would argue most) other moral premises can lead you to value preventing child deaths or stunting, limiting the suffering of animals in factory farms, or ensuring future generations live positive, meaningful lives.

@WillieG mentioned Christianity, and indeed, EA for Christians has many Christians who care deeply about helping others and come from a very different moral background. (I think sometimes they mention this parable)

 

“within the EA community, beyond working on their own projects, do people tend to remind and suggest to others ‘what they could have done but didn’t’?”

I don't have an answer to this question, but you might like these posts: Invisible impact loss (and why we can be too error-averse) and Uncertain Optimizing and Opportunity Costs.

I think people regularly do encourage themselves and others to consider opportunity costs and counterfactuals, but I don't think it's specific to the EA community.

 

“The principle becomes more challenging to accept when Singer extends it to a particular edge case.”

I think this is the nature of edge cases. I don't think you need to agree with Singer on edge cases to value helping others. This vaguely reminded me of this Q&A answer from Derek Parfit where he very briefly talks about borderline cases and normative truths.

 

I do think things get trickier for e.g. shrimp welfare and digital sentience, and in those cases philosophical considerations are really important. But in my opinion the majority of EA work is not particularly sensitive to one's stance on utilitarianism.

One of my favorite tongue-in-cheek reviews of the rationalist community is "STEM nerds discovering philosophy." I'm the other way around: a philosophy and theology nerd discovering STEM. My priors suggest there are no easy answers, and you will struggle with big questions like those in your post throughout your life.

In Christian theology, there are sins of commission and sins of omission ("We confess that we have sinned against you in thought, word, and deed, by what we have done, and by what we have left undone.") Singer's idea that you can be morally at fault for doing nothing is quite an old idea.

Yet... the same book that introduces the Parable of the Good Samaritan and answers "Who is my neighbor?" in the most expansive way possible also says this:

"But if anyone does not provide for his relatives, and especially for members of his household, he has denied the faith and is worse than an unbeliever." 1 Timothy 5:8

IMO, EA is somewhat of a "luxury belief," in the sense that it's something one should engage in after the basic necessities of life are met. Maslow's hierarchy applies at all times. 

There's a common criticism made of utilitarianism: Utilitarianism requires that you calculate the probabilities of every outcome for every action, which is impossible to do.

And the standard response to this is that, no, spending your entire life calculating probabilities is unlikely to lead to the greatest happiness, so it's fine to follow some other procedure for making decisions. I think a similar sort of response applies to some of the points in your post.

For example, are you really going to do the most good if you completely "set aside your emotional preferences for friends and family"? Probably not. You might get a reputation as someone who's callous, manipulative, or traitorous. Without emotional attachments to friends and family, your mental health might suffer. You might not have people to support you when you're at your low points. You might not have people willing to cooperate with you to achieve ambitious projects. Etc. In other words, there are many reasons why our emotional attachments make sense even under a utilitarian perspective.

And what if we're forced to make a decision between the life of our own child and the lives of many others? Does utilitarianism say that our own child's death is "morally agreeable"? No! The death of our child will be a tragedy, since presumably they could have otherwise lived a long and happy life if not for our decision. The point of utilitarianism is not to minimize this tragedy. Rather, a utilitarian will point out that the death of someone else's child is just as much a tragedy. And 10 deaths will be 10 times as much a tragedy, even if those people's lives aren't personally related to you. This seems correct to me.

Thanks for the post; you bring up some interesting points. I think one of the key things that's missing from Singer's approach is just how important personal responsibility is to well-being. Unfortunately, I don't have my alternative framework all figured out yet, but here's a start towards it. One example is that we have the most responsibility for our own children, since we brought them into existence and they generally can't fend for themselves, so, under many circumstances, giving them priority is the most overall well-being-promoting thing to do.

I'm glad to see you are questioning some of the philosophy behind EA, and I hope that more people will do so. I believe a shift to protecting rights (e.g., fighting corruption) and promoting responsibility (of which mental health is a big subset since it involves taking responsibility for your emotions) could potentially help make EA as a movement much more effective.

Executive summary: The post explores the author's grappling with Peter Singer's moral premises foundational to effective altruism, highlighting personal struggles with counterintuitive implications of those principles and their impact on familial and patriotic values. 

Key points: 

  1. The author appreciates the framework of effective altruism for its emphasis on impartiality, cause prioritization, and cost-effectiveness, which motivated their participation in the Arete Fellowship.
  2. Singer's principle that "pain is bad and equal regardless of who experiences it" challenges the author's patriotic and familial instincts, particularly when considering the ethical choice between saving one's own child or multiple children abroad with the same amount of resources.
  3. The principle stating we are responsible for our actions and inactions causes discomfort for the author when considering its application to others, raising ethical questions about judgment and moral obligations.
  4. Singer's view on the moral equivalence in taking lives, based on individual characteristics rather than race, sex, or species, extends to controversial comparisons, such as between an anencephalic infant and a baboon, challenging the author's intuitions about human and animal lives.
  5. The author is conflicted by Singer’s insistence on ethical consistency even in edge cases, which contradicts their emotional responses and leads to a broader reflection on the nature of moral judgments and biases.
  6. While the practical applications of effective altruism resonate with the author, they find it crucial for the EA community to also engage deeply with its philosophical underpinnings to ensure a comprehensive understanding of its principles.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
