One of the most basic ideas in the philosophy of effective altruism is that we should do more to help others. Putting this idea into practice can be, to say the least, hard.
Different people practice altruism in different ways and to different extents. Some donate to an effective charity and move on. Others use altruism as a guide in making major decisions, like which career to pursue. It’s ultimately a personal choice, and it’s not what this post is about. I can’t tell you how to live your life.
EA is not just one person’s life philosophy though; it’s a community too. So here’s something I can say: whatever values a community chooses to espouse, and however it chooses to interpret them, its practices ought to support those values.
Suppose a large community wants to support altruism. Actually, let’s simplify things: suppose it doesn’t care to support anything else. This isn’t necessarily what EA is or should be, but it will highlight some points that apply more generally. What might a community of maximally dedicated altruists look like? A natural place to start is that it should encourage altruism and discourage selfishness. But that’s worth a second thought.
Failure modes
Some people at first interpret altruism as forgoing all disposable income to fund highly rated charities. If the culture places too much emphasis on altruistic behavior, it’s no surprise that some of those people are going to end up in a “misery trap.” Michael Nielsen’s notes summarize the phenomenon well. Here’s a small example, taken from The Most Good You Can Do:
Julia admits to making mistakes. When shopping, she would constantly ask herself, “Do I need this ice cream as much as a woman living in poverty elsewhere in the world needs to get her child vaccinated?” That made grocery shopping a maddening experience…
Even if you’re a maximally dedicated altruist, a misery trap would destroy your productivity and dissuade others from joining your cause. So from an impact-maximizing perspective it’s acceptable, even obligatory, to invest in your own happiness. Similar reasoning advises against excessive frugality in other areas, like movement building. Yes, conferences are expensive, but some of them are worth it. Networking pays off in hard-to-quantify ways.
Those who stop short of excessive frugality, still spending money on ice cream and conference tickets, might appear selfish to someone who is forgetting about second-order consequences. Sometimes it’s actually pretty hard to confirm that they aren’t being selfish. But scrutinizing every expense would be a terrible waste of time, so it’s not worth making a big deal out of it.
That attitude has its own failure mode. A group of morally motivated people funding things labeled “altruism” might attract… differently motivated people who want to label their thing “altruism.” There’s outright grift to worry about, and then there are more subtle forms of motivated reasoning. For example, someone might genuinely convince themselves that leasing a fancy office will pay for itself in employee morale. It’s hard to prove that it won’t, but such arguments shouldn’t be taken at face value. Even very ethical people are prone to deceiving themselves.
Both sides of the coin, motivated reasoning and grift, have been discussed at length in the context of “free-spending EA.” While the problem is certainly worse in that context, it doesn’t entirely go away when money is tight. There are still plenty of ways to trade off altruism for personal benefit. Even in the complete absence of money, status alone can be a powerful enough incentive to distort one’s reasoning. (Like Scott Alexander, I’m including “a sense of belonging, being appreciated, and connecting with people” in my definition of status.)
In an article on the FTX fallout, Gideon Lewis-Kraus brought up one way to approach the issue:
In Jewish law, there is a concept called “mar’it ayin” designed to address this kind of ambiguity: you don’t eat fake bacon, for example, because a passerby might see you and conclude you’re eating real bacon. The reason for this law isn’t primarily to protect the reputation of the fake-bacon-eater; it’s to sustain the norms of the whole community. The passerby might decide that, if it was O.K. for you to eat bacon, it’s O.K. for him to do it, too. When important norms—of frugality, and the honesty with which it was discussed—are seen as violated, the survival of the culture is imperilled.
I think this perspective is very appropriate for certain things, like buying luxury homes as part of an ostensibly charitable project. If you apply it to everything though—“hm, partying after EA Global would look kind of selfish, so maybe I shouldn’t”—you risk sliding back into a misery trap.
Norms
Let’s return to our group of maximally dedicated altruists. What they’d like to do is encourage the ambiguous actions that serve the greater good, and discourage the ones that are actually just selfish. The problem is that they can’t tell which are which.
One approach is to start with their ideal, translate it into rules, and then just enforce those rules as best they can without being too overbearing. I think this is a mistake. The rules are going to sound like “fly business class if it’s worth being better rested before an important meeting” and “spend more on food if it will make your event more impactful.” There’s just no way to enforce that kind of decision-making. Anyone who expects such rules to be followed is going to be in trouble.
It would be better to frame altruism as only an ideal. Since an ideal can’t be depended upon, important decisions would call for transparent communication, conflict of interest statements, approval from multiple parties, and the like. This approach is pretty reasonable. It’s definitely less susceptible to egregious fraud. But it doesn’t help much with small-scale motivated reasoning: if perfectly balanced cost-benefit analysis is widely aspired to, people will want to believe they’re doing it. They may be tempted to persuade others of their rationalizations. Those other people, meanwhile, may be reluctant to “accuse” someone of imperfection. This seems like a real danger to a community’s epistemic health.
So let’s take another step back. It doesn’t actually matter how well norms work in a perfect world. It matters that they work in the face of ambiguity. What if we start over and prioritize that?
Perhaps the first thing to settle on is what counts as altruism. This will factor into decisions about which projects to fund and which ones to hold up as exemplary. The stakes are high: an error not only wastes charitable resources but encourages further waste. That’s a strong reason to be conservative with the altruism label. If there’s any real ambiguity, it’s better to treat an action as selfish.
Crucially, that doesn’t mean rejecting ambiguous actions outright. Whether an action counts against anyone’s status—even a little bit—is a second degree of freedom. Here it’s better to give people the benefit of the doubt and reject only clear-cut extravagance. No one wants to promote misery-trap thinking. Besides, in some cases the benefits of accepting the altruistic actions will outweigh the costs of accepting the selfish ones.
Flying business class to a high-stakes meeting? Premium catering for an event? Sounds kind of selfish, but I’d accept it. In fact, I might embrace it as much as I would if someone convinced me that the decision was morally justified. That way, they’d have no reason to even try.
Implications
This seems rather counterintuitive. Many things are going to fall into the gray area between “almost certainly selfish” and “almost certainly altruistic.” Most of them will be regarded as selfish, yet completely accepted. This is not just about investing in happiness, which is an indirect way to help others. Nor is it about altruism-life balance—remember, we’ve been imagining people who are maximally dedicated to altruism! Even if we leave room for other values and only try to promote altruism within a certain boundary, the argument suggests that we should sometimes accept selfishness within that boundary.
Counterintuitive indeed, but these norms are perfectly consistent. More than that, they’re complementary. Regarding your friends as selfish would be uncomfortable if selfish were not an acceptable thing to be; accepting selfishness would invite fraud if it weren’t always labeled as such.
Is this effective altruism though? I don’t see why not. This particular way of handling ambiguity does nothing to change the ultimate goal of solving the world’s biggest problems. It doesn’t mean distrusting other members of the community; if anything, realistic expectations make trust easier. And it highlights, rather than diminishes, the significance of legible signals like a commitment to donate.
Embracing selfishness does require two things. The first is institutions that are built for it, meaning they don’t depend on impartial cost-benefit analysis by people who have a stake in the outcome. (Funding selfishness is something we often wish to avoid.) The second is a culture that doesn’t try too hard to reject it. Again, we have better things to do with our time than scrutinize every expense.
Since I’m most familiar with university community building, I’ll use free meals as an example. Buying dinner for an EA group could conceivably be a good investment: it keeps people engaged and might eventually nudge a few of them towards high-impact careers. But there are strong incentives to believe this and no practical way to verify it.
So, part one: if I were a prospective funder, I would regard expensive EA dinners as a selfish activity. That doesn’t mean it’s always wrong to fund them, but impact-oriented donors would probably want to be conservative about it. Not only does this ensure that the funds in question are used wisely; it also sets an important example for everyone else. (CEA recently scaled things back in this category; I think it was a good call regardless of what the stock market is doing.)
Then part two: it’s not worth debating all the ways eating dinner together might or might not help spread EA ideas. In fact, it’s probably better if you don’t. Sometimes building a community means spending time with friends doing “low-impact” things, and that is really, really okay. Embracing selfishness lets you stop sweating the small stuff and focus on what’s actually important.
Something I’ve been reminded of in recent months is how delicate EA culture is. Growth will naturally make certain norms harder to maintain—the important question is how they can evolve while preserving what makes EA special. We can afford neither to compromise our standards with ineffective spending, nor to erode trust by constantly regarding one another with suspicion. Though embracing selfishness is far from a complete answer, it may help to take a few steps in that direction.
This is something I actually agree with, not just in terms of movement building but as a wider moral philosophy. There is reason to think that utilitarianism is too demanding: it asks everyone to make every decision impartially (e.g., giving benefits and harms to family and friends the same priority as benefits and harms to strangers), or, at the extreme, to calculate every action in terms of how much good or harm it does to others. Both demands are impractical and ultimately lead to misery, because they ignore what makes human lives worth living (e.g., committed relationships with a select few people whom one values more than strangers, or the occasional indulgence in frivolities that may keep one from being maximally altruistic). People often equate utilitarianism with consequentialism as a whole, which may be counterproductive. Sprinkling in some egoist practices here and there may be what ultimately leads to the most happiness and least harm in the long run: diminishing the quality of one’s own life in the name of helping others, if universalised, would lead to an unhappy world. (In this way, I think Kant’s Categorical Imperative may be useful here.)