[repost from amor et licentia]

Lately, I have felt a lot of uneasiness about the utilitarian foundations of EA. I think it is somewhat relevant to how to reform EA after the scandals of 2022.

One attempt to express this uneasiness was my post "EA might systematically generate a scarcity mindset that produces low-integrity actors". Here is another attempt.

Over the last few years of learning to be a good utilitarian, I learned that utilitarianism is very much not human-brain-shaped, and that it can make you go insane if you try to take it at face value. Sometimes, utilitarianism turns people into cranky modafinil zombies who don't have time to take out the trash. Sometimes, it makes them try to self-improve through dangerous psychotechnologies until they break. Sometimes, it makes people try to optimize even the last bit of their free time in ways that are out of line with their actual needs, until they burn out completely.

I had a hunch about this for a while. Already a year ago, when I was still more utilitarian-insane than I am now, I described myself to a friend as a "9-to-5 utilitarian" who uses utilitarian reasoning when planning his day, week, year, or decade, but navigates by gut feeling pretty much all the rest of the time. Since utilitarians like simple rules, and that is not a simple rule, he stared at me in terror and confusion.

In my understanding, living morally is mostly a matter of practical know-how. That means it is more similar to knowing how to ride a bike than to knowing the capital city of Zimbabwe.

Accordingly, I think that moral philosophy and moral psychology can sort of describe the mechanics of ethical living, just like Newtonian physics can sort of describe the mechanics of riding a bike. But actually moving through the world as a cyclist/moral agent relies on a TON of tacit knowledge, or what some rationalists like to call "System 1 thinking". This means that if you try to be a good person through explicit reasoning alone and ignore what your gut endorses, you throw out a LOT of the information processing your brain does in the background, all of which would be useful evidence.

Just like a cyclist who had no sense of balance but tried to calculate every tiny balancing motion in verbalized thought would fall very rapidly, a person who tries to do all the utilitarian calculations explicitly and ignores their commonsense/gut morality will fall, and maybe destroy the world by pulling an SBF.

I think the virtue ethicists were closer to the truth: being a good person is something you have to cultivate continually throughout your life, and will never stop failing at. And if you listen, the world gives you loads of feedback, in rapid cycles, on how to do better.

In my current understanding, becoming a better moral agent consists mostly of three things:

  1. Resolving internal disagreement, so we don't want to do all sorts of contradictory things or bully ourselves and others into going against our actual needs.
  2. Expanding our (intuitive) circle of moral concern.
  3. Using first-hand experience and thought experiments to clarify our moral intuitions.

Of course, our moral intuitions are not always right. Humans left to their own devices tend to develop the most peculiar philosophical convictions and sometimes do horrible things.

Plus, our brains evolved for a stone-age world, and that's what our moral intuitions are calibrated for by default. So we have to recalibrate them a bit, for example by grokking that they can't calculate (they are notoriously insensitive to scope). But throwing out our intuitions completely not only removes some cognitive biases, it also removes much of our motivation and much genuinely useful unconscious processing. As the virtue ethicist Aristotle claimed is true of most things, the thing to aim for is the mean between being a head-only person and a heart-only person, rather than either extreme: combining head and heart, so to speak.

In academic philosophical jargon, this is called finding a reflective equilibrium.

Sometimes, building large, beautiful, and coherent towers of ideas can help with this pursuit. Sometimes, getting all tangled up in them makes you do galaxy-brained things like tarnishing EA and crypto alike by unskillfully defrauding people from a Bahamas villa.

Comments (12)

I think it is widely acknowledged that virtue ethics is perhaps easier to live by / more motivating / produces better incentives / etc, on an individual level, than trying to be a hardcore utilitarian in all your daily-life actions.  And I agree with Stefan Schubert's linked posts.

But when people look at morality from the perspective of what works best on an individual level, they miss some of the most advantageous things about utilitarianism as it pertains to EA:

  • Utilitarianism is a more legible framework that makes it easier for many people to debate, research, and learn together under a common set of assumptions.  It would be much harder to compare cause areas and debate intervention effectiveness if we didn't have a roughly utilitarian framework.  Having a roughly utilitarian mindset thus allows the existence of EA as a community/movement.
  • Utilitarianism often seems awkward and unnatural on the scale of individual / interpersonal moral decisions -- am I really going to crank through the ethical calculus before deciding what to eat for dinner, or how I should behave towards a friend?  But on societal-level "policy" questions, utilitarianism starts to feel much more natural.  "Should preventative treatment of heart disease with statins be recommended to demographic group X?" -- with medical questions like this, it seems kind of crazy to do anything other than weigh the costs and benefits of each option, and pick the one with the highest expected value in life-years (a toy sketch of this kind of calculation follows this list).  "What is the optimal system of taxation?" will involve some fundamental value judgements (who "deserves" to have more vs less, which activities should be encouraged vs discouraged), but also a lot of utilitarian-style economic arguments about what will maximize growth and avoid creating weird distortions.
  • With both the EA movement and all kinds of other institutions (like governments making policy decisions), utilitarianism is of outsized importance because it improves "reasoning transparency" and makes organizations more flexible and persuadable.  It is often easier to argue and build broad agreement about the consequences of some course of action, than it is to establish which course of action is more virtuous.  Adopting a utilitarian framework encourages institutions to publicly explain their decisionmaking, and thereby showcase the cruxes where, if their minds were changed on issue X, they would switch from working on cause A to cause B.
  • Other moral systems, like virtue ethics, seem like they might be kind of blind to one of the core ideas of EA -- that some well-targeted actions and cause areas might be 100x the impact of other, similar-seeming ones, and therefore we should make a big effort to search for those impacts with a "hits-based" approach.
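To make the policy-level point concrete, here is a minimal sketch (in Python) of the kind of expected-value comparison the statin example gestures at. Every number and option name below is invented purely for illustration; nothing here comes from real medical data.

    # Toy expected-value comparison for a policy question like
    # "should statins be recommended to demographic group X?".
    # All probabilities and life-year figures below are made up.

    options = {
        "recommend statins": {
            "p_benefit": 0.04,         # chance a person avoids a cardiac event (hypothetical)
            "life_years_gained": 8.0,  # life-years gained if the event is avoided (hypothetical)
            "p_side_effect": 0.01,     # chance of a harmful side effect (hypothetical)
            "life_years_lost": 0.5,    # life-years lost to that side effect (hypothetical)
        },
        "no recommendation": {
            "p_benefit": 0.0,
            "life_years_gained": 0.0,
            "p_side_effect": 0.0,
            "life_years_lost": 0.0,
        },
    }

    def expected_life_years(option):
        """Expected life-years per person: expected benefit minus expected harm."""
        return (option["p_benefit"] * option["life_years_gained"]
                - option["p_side_effect"] * option["life_years_lost"])

    for name, option in options.items():
        print(f"{name}: {expected_life_years(option):+.3f} expected life-years per person")

    # Pick the option with the highest expected value.
    print("Decision:", max(options, key=lambda name: expected_life_years(options[name])))

The numbers themselves don't matter; the point is that once the options and their outcomes are laid out, the decision procedure is explicit and debatable, which is exactly the "reasoning transparency" benefit mentioned above.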

For a variety of reasons (in part because I feel I am too selfish), I don't personally identify as a utilitarian.  (Although I am definitely more utilitarian-adjacent than most people!)  But I think that utilitarianism is often underrated because people only consider it on the individual level, without appreciating the societal level where utilitarianism often seems most relevant.

Nice comment, you make several good points. Fwiw, I don't think our paper conflicts with anything you say here.

On this theme: @Lucius Caviola and I have written a paper on virtues for real-world utilitarians. See also Lucius's talk Against naive effective altruism.

awesome, looks good!

Don't let perfect be the enemy of good.

Just as you say we shouldn't ignore System 1 completely merely because it's uninformed and unadjusted, we shouldn't ignore utilitarianism either.

I feel utilitarianism works quite well when we stop and apply it at the moments when what we're doing feels off.

Relevant quote by Eliezer (https://twitter.com/esyudkowsky/status/1497157447219232768):

Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god.

Yes. I think of this as "do things that don't scale" applied to acts of kindness. 

  1. The good argument for an EA investing nonzero time in layman altruism (scrubbing oil off baby ducklings, cancer research, soup kitchens) emphasizes a kind of integrity cultivation, not the signaling/marketing value. (To be fair, mutual aid and harm-reduction projects like soup kitchens are the activities I would actually endorse, more than the baby ducklings or cancer research.)
  2. "Street kindness" or interpersonal ethics is probably an underrated lever, because you have more refined control over how you wiggle it and the speed at which you consume feedback/measurements and update your strategy. My oldest friend with whom I logged over a hundred hours of workshopping theories of change to free the world from capitalism or patriarchy or whatever (in my sordid past) landed on something like what we call "lifestyle anarchism", in other words when I asked her recently why she quit activism and decultivated her youthful ambition she said "because every interaction is an opportunity to make the world better" (with respect to her anti-coercion worldview, a kinda NVC vibe) 

Ambitiously impartial, massive levers/wins are still the right thing to want, but the daily path to them might be more intricate than, say, your behavior during a fast-forward in the Click universe.

Back to the PG analogy: I think EAs rather too often do the equivalent of saying "I will ascend from scrappy garage band to Lex Luthor in like a year, by doing things similar to what Lex Luthor is doing now", when in reality you can't start a startup by acting like a 2023 FAANG acts. The playbooks actually have nothing in common, even if the FAANGs were all garage bands at one point in time. I'm glad EA cultivates ambition and everything, but YC probably cultivates ambition more effectively than EA does.

Agree with everything.

Your friend sounds delightful! Actually, I think what I'm trying to point towards here is closer to "lifestyle anarchism" than to classic virtue ethics. Coincidentally, I found myself defaulting back to explaining my values in anarchist terms when I announced my career transition from active EA community builder to baby influencer in my first blog post.

I guess it's no coincidence that Rocky's "on living without idols" is my all-time favorite on the EA forum.

Someone on Discord asked about local volunteering opportunities. I said "idk, for meatspace stuff impact isn't really the point", reiterated my comment on this post, mentioned fuzzies budgeting, and then wrote the following:

I generally endorse any way of sampling from the population that disproportionately puts me in the room with sincere nonnihilistic noncynical nondefeatist people who have their heart in the right place, because it's at least plausible that the "hey, I have an action space here!" mental circuit is more important than how much they'd like to measure/multiply/maximize. The people you meet (at say food not bombs) will vary in their sympathies to the three Ms, some of them just haven't gotten the right invitation or haven't dedicated enough scrutiny to it yet and others will never in any circumstance get into it. But almost all of them will have observed something broken and decided not to seethe and cope about it because they were too busy rolling up their sleeves. That makes them precious to me.

Much of the interesting and difficult stuff about morality happens when different conflicting concerns are at stake. On these matters, virtue ethicists seem much more handwavy to me than utilitarians or deontologists. Take, for example, the dilemma of what you should do if your host prepared you food with animal products without knowing that you are a vegan. Should you eat it? For utilitarians, it takes plenty of work to calculate the consequences, but at least there is a rigorous process to base your policy upon. On page 142, paragraph 3 of this article, you may find a virtue-ethics treatment of the same dilemma, which seems very handwavy to me.

Of course, one article doesn't prove a trend, but what I have often seen is something like: "There is this virtue, and there is this other virtue. Sometimes they might come into conflict, and virtue ethics is about finding the moderation between them. You will accomplish this through practical wisdom and getting more life experience." That is not informative at all. I don't think this process is better than trying to calculate the benefits and costs of your important actions.

I think the point of the virtue ethicist in this context would be that appropriate behavior is very much dependent on the situation. You cannot necessarily calculate the "right" way in advance. You have to participate in the situation and "feel", "live", or "balance" your way through it. There are too many nuances that cannot necessarily all be captured by language or explicit reasoning.

Thanks! I'm still grappling with putting the intuitions behind this post into words, so this is valuable feedback.

Personally, my heuristic in the example you describe is rolling with what I feel like. Considerations that go into that are:

1. Will it kill me? (I'm allergic to red meat)
2. Would I be actively disgusted eating it? (The case for most if not all non-vegetarian stuff.)
3. Do I lack the spoons to have a debate about this, given the amount of pushback/disappointment I expect from the host?

...and when all of them get a "no":
4. Do I feel like my nutrient profile is sufficiently covered atm? Will eating this, or asking for a vegan alternative, make me feel more alert and healthy? (I all too often default to lacto-vegetarianism in stressful times. Low-effort vegan foods tend to give me deficiencies (probably protein) that cause massive cheese cravings. From utilitarianism I learned to prioritize not feeling like crap over always causing minimum harm.)

While studying philosophy at uni, I also hated virtue ethics for years due to its intrinsic fuzziness.

Things that changed since then and turned it into a very attractive default:
1. Picking up a meditation habit that made my gut feeling more salient and coherent, and my verbal reasoning less loud/coercive.
2. Learning Focusing to a reasonable level of fluency. The mental motion of checking with my System 1 what the best course forward would be is pretty much the same as pausing to tune in to my felt sense and gauging where it draws me.
