[My views are my own, not my employer's. Thanks to Michael Dickens for reviewing this post prior to publication.]
[More discussion here]
Summary
Spreading ethical offsetting is antithetical to EA values because it encourages people to focus on negating harm they personally cause rather than doing as much good as possible. Also, the most favored reference class for the offsets is rather vague and arbitrary.
There are a few positive aspects of using ethical offsets, and situations in which advocating ethical offsets may be effective.
Definition
Ethical offsetting is the practice of undoing harms caused by one's activities through donations or other acts of altruism. Examples of ethical offsetting include purchasing carbon offsets to make up for one’s carbon emissions and donating to animal charities to offset animal product consumption. More explanation and examples are available in this article.
Against offsetting
I think ethical offsetting is antithetical to EA values, and have three main objections to it.
1) In practice, people doing ethical offsetting use vague and arbitrary reference classes.
2) It's not the most effectively altruistic thing to do.
3) It spreads suboptimal and non-consequentialist memes/norms about doing good.
1) The reference class people pick for ethical offsets is arbitrary.
For example, let's say I cause some harm by buying milk that came from a cow that was treated poorly, and I want to negate the harm. I have a bunch of options.
I cannot undo the exact harm done by my purchase once it's happened, but I could (try to) seek out that specific cow and try to do something nice for her, negating the harm I caused for that specific cow's utility calculus. I could donate some money to a charity that helps cows, negating my harmful effect on the total utility of cow-kind. I could donate some money to a charity that helps all farmed animals, negating my harmful effect on farmed animal-kind. Or I could donate to whatever charity I thought did the most good per dollar, negating my negative impact on the universe most cost-effectively but less directly.
People seem to settle on a sort of broad cause-area-level offsetting preference (e.g. donating to help farmed animals). While this reference class seems intuitive, it's ultimately arbitrary*.
2) Ethical offsetting isn't the most effectively altruistic thing.
You should do the things you think are most effectively altruistic, and you should donate to the charities you think are most effective. If you eat dead animals and don't believe animal charities are the most effective charities, I don't think you should donate to them.
Like everything else, ethical offsetting has opportunity costs; you could use that money to donate to the best charity, which is often different from the charity you’re using for ethical offsetting. It causes a harm relative to the world where you donate only to the most effective charity.
Even if you think the charity you donate your offsetting money to is the most effective, I don’t think it’s helpful to do ethical offsetting. Much of the suffering in the world isn’t directly caused by anyone, so an offsetting mindset increases the probability that you’ll miss big sources of suffering down the line. It causes a bias towards addressing anthropogenic harms, rather than harms from nature.
3) Ethical offsetting spreads anti-EA memes and norms
Ethical offsetting reinforces a preoccupation with not doing harmful things (instead of not allowing harmful things to happen, and taking action when they do). But EAs should (and usually do) focus on the sufferers, not themselves.
By encouraging others to offset, we set norms oriented around people’s personal behavior. We encourage an inefficient model of charity that involves donating based on one’s activities, not one’s abilities or the needs of charities that help neutralize various harms. We miss the chance to communicate about core EA ideas like cause prioritization and room for more funding by establishing a framework that has little room for them.
There are some other dangers involved in ethical offsetting, although I haven’t seen much evidence they actually occur: Offsetting may also encourage unhealthy scrupulosity about the harms we inevitably contribute to in order to function (although it could also help alleviate anxiety about them). And as Scott Alexander points out, offsetting could lead people to think it’s acceptable to do big harmful things as long as they offset them. This could contribute to careless and destructive norms about personal behavior.
Caveats
Offsetting is better than nothing. There may be situations in which ethical offsetting is the biggest plausible ask one can make. In such situations, I think bringing up the idea of ethical offsetting may be appropriate. And it may be an interesting conversation starter about sources of suffering and ways of alleviating them.
I've previously discussed my concerns about the obstacles to changing one's mind about cause prioritization, and I can imagine ethical offsetting at the cause area level being used to remind oneself about various causes of suffering in the world and the organizations working to stop them. This could make it easier to change one’s mind about what’s most effective. It seems somewhat plausible that offsetting would help make the community better at updating and better informed.
It may be really psychologically beneficial for some people, similar to the way donations for the dubiously-named fuzzies (donations for causes that are especially personally meaningful to the donor rather than maximally effective) sometimes are.
I think the argument that we should focus on doing lots of good rather than fixing harms we cause could drive destructive thoughtlessness about personal behavior, so I’m wary about making it too frequently. I’m most worried about this concern.
*The reference-class Schelling point is stronger with carbon offsets, where the harmful thing is adding some carbon dioxide to the atmosphere. Carbon dioxide molecules are pretty interchangeable. If you remove as many as you added, you neutralize the harm from your emissions-causing action very directly, which is intuitively appealing.
All suffering may be equally important, but not all forms of harm are the same, or even similar. How similar the harm you offset is to the harm you cause can vary a lot. Few other types of offsetting I’ve heard of allow the opportunity to create a future so similar to the one where the harmful activity had never been done.
I don't think ethical offsetting is antithetical to EA. I think it's orthogonal to EA.
We face questions in our lives of whether we should do things that harm others. Two examples are taking a long plane flight (which may take us somewhere we really want to go, but also releases a lot of carbon and contributes to global warming) and eating meat (which might taste good but also contributes to animal suffering). EA and the principles of EA don't give us a good guide on whether we should do these things or not. Yes, the EA ethos is to do good, but there's also an understanding that none of us are perfect. A friend of a friend used to take cold showers, because the energy that would have heated her shower would have come from a polluting coal plant. I think that's taking ethical behavior in your personal life too far. But I also think that it's possible to take ethical behavior in your personal life not far enough, and counterproductively shrug it off with "Well, I'm an EA, who cares?" But nobody knows exactly how far is too far vs. not far enough, and EA doesn't help us figure that out.
Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, ... (read more)
One thing I like about offsetting is that it creates a more cooperative and inclusive EA community. E.g., animal advocates might be put off less by meat-eating EAs if they learn they offset their consumption, or poverty reducers might be less concerned about long-termists making policy recommendations that (perhaps as a side effect) slow down AI progress (and thereby the escape from global poverty) if they also support some poverty interventions (especially when doing so is particularly cheap for them). In general, there seem to be significant gains from cooperation, and given repeated interaction, it's fairly easy to actually move towards such outcomes, including by starting to cooperate unilaterally.
Of course, this is best achieved not through offsetting, but by thinking about who we will want to cooperate with and trying to help their values as cost-effectively as possible.
This is a bit of a side point, but to what extent do EAs actually promote ethical offsetting? It seems to me like it normally gets raised in the following ways:
A dominance argument to show that ethical consumption isn't the most important thing to focus on. Hypothetical example: if I think AMF is the best donation opportunity, and donating to The Humane League is better than going vegetarian (because it would be very cheap to "offset" my diet), then donating to AMF is very much better than going vegetarian. This shows that going vegetarian makes only a small contribution to my potential social impact, so I shouldn't do it unless it involves negligible sacrifice.
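The dominance argument above can be sketched numerically. All figures below are illustrative assumptions I've made up for the sketch, not numbers from the comment:

```python
# Dominance-argument sketch with hypothetical numbers.
# If an effective animal charity can "offset" a year of meat-eating cheaply,
# that offset price caps the altruistic value of going vegetarian for a year.
offset_cost_per_year = 50.0       # assumed $ to offset a typical diet for a year
veg_value_upper_bound = offset_cost_per_year
top_charity_donation = 2000.0     # assumed annual donation to one's top charity

# The donation dwarfs the (offset-capped) value of the dietary change.
ratio = top_charity_donation / veg_value_upper_bound
print(f"The donation is worth at most {ratio:.0f}x as much as going vegetarian.")
```

Under these assumed numbers, the dietary change contributes at most a fortieth of the donation's impact, which is the point of the dominance argument.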
As an option for non-consequentialist minded people who don't just want to focus on the best activities, because they have special obligations to avoid doing certain types of harm.
It doesn't seem like EAs promote ethical offsetting as a generally good thing to do. Rather, EAs suggest identifying the highest-leverage ways for you to make a difference in the world, focusing your attention on those, and not worrying about other ways to have more impact that involve more sacrifice.
I just discovered this related and entertaining passage from Tim Harford's The Undercover Economist (2005).
... (read more)

I think offsetting makes sense when seen as a form of moral trade with other people (or possibly even other factions within your own brain's moral parliament).
Regarding objection #1 about reference classes, the answer can be that you can choose a reference class that's acceptable to your trading partner. For example, suppose you do something that makes the global poor slightly worse off. Suppose that a large faction of society doesn't care much about non-human animals but does care about the global poor. Then donating to an animal charity wouldn't offset this harm in their eyes, but donating to a developing-world charity would.
Regarding objection #2, trade by its nature involves spending resources on things that you think are suboptimal because someone else wants you to.
An objection to this perspective can be that in most offsetting situations, the trading partner isn't paying enough attention or caring enough to actually reciprocate with you in ways that make the trade positive-sum for both sides. (For trade within your own brain, reciprocation seems more likely.)
I sympathise with the point you make with this post.
However, isn't it antithetical to consequentialism, rather than EA? EAs can have prohibitions against causing harms to groups of people.
How does this speak to people who use rule-based ethics that oblige them to investigate the benefit of their charitable gifts?
I strongly agree. It's like trying to avoid a trade deficit with every country you interact with. The currency of value is better if it's not region-locked.
Often EAs propose offsetting as a counterargument to "if something harms others you must not do it". So you show that offsetting is better than strict harm avoidance, and then you give reasons why you should instead focus on the most important things.
Offsetting isn't antithetical to EA; to my mind it's a step towards EA.
Notice that the narrowest possible offset is avoiding an action. This perfectly undoes the harm one would have done by taking the action. Every time I stop myself from doing harm I can think of myself as buying an offset of the harm I would have done for the price it cost me to avoid it.
I think your arguments against offsetting apply to all actions. The conclusion would be to never avoid doing harm unless it's the cheapest way to help.
I am not aware of EA-associated people using ethical offsets beyond a small amount they don't consider part of their charity budget. Is there an "Ethical Offsetting is Great for EA" position you are arguing against?
Thanks for this post.
It's like we came full circle from people donating minimal amounts of money to charity to relieve their guilt over their perpetuation of global injustice, to people working very hard and doing everything they can to fight global injustice, to people donating minimal amounts of money to relieve their guilt over their perpetuation of global injustice.
Just accept it. Some of your actions will harm others no matter what you do. The only way to make it worthwhile is to go out there and achieve lots of valuable things. Be confident and proud of what you accomplish, and you can accept the harm that you will inevitably cause.
Thank you for this thought-provoking article! We want to make it the topic of our next meetup, so I’ve tried to clarify what my new position should be.
Your first two points are easily conceded: in my view, everyone should, when offsetting, direct their donations to the charity they consider most effective. Your third point is the most interesting.
Nino already married your and Scott’s positions, but I find it more useful to structure my thoughts in a list of pros and cons anyway.
On the pro side I see the following arguments:
Edit: I posted before reading the other comments. Others have already made this and similar points.
Here is a story of how ethical offsetting can be effective.
I was trying to decide if I should fly or go by train. Flying is much faster and slightly cheaper, but the train is much more environmentally friendly. Without the option of an environmental offset, I have no idea how to compare these values, i.e. [my time and money] vs. [the direct environmental effect of flying].
What I did was to calculate what offsetting would cost, and it turned out to be around... (read more)
While I agree that offsetting isn't the best thing to spend resources on, I don't like the framing of it being 'antithetical to EA'. Whether offsetting is a good idea or not is a good, object-level discussion to have. Whether it is aligned with or antithetical to EA brings in a lot more connotations, with little to gain:
Offsetting can also be viewed as deciding to co-operate in a tragedy-of-the-commons-like situation. If a large enough proportion of the population and businesses decided to offset their emissions, then presumably global warming would cease to be an issue. This would cost everyone a small amount individually, but the collective gain would be large. Perhaps the money could do more good elsewhere, but defecting simply encourages more people to defect as well, and possibly causes the whole deal to collapse.
Not that I offset my carbon, just an interesting thought.
"I've previously discussed my concerns about the obstacles to changing one's mind about cause prioritization, and I can imagine ethical offsetting at the cause area level being used to remind oneself about various causes of suffering in the world and the organizations working to stop them. This could make it easier to change one’s mind about what’s most effective. It seems somewhat plausible that offsetting would help make the community better at updating and better informed."
This has roughly been my reasoning for considering donating small sums ... (read more)
I'm not sure that offsetting is better than nothing - it may actually be harmful:
1. Offsetting fools people into thinking that their emissions from (e.g.) flying can be "made harmless" in some way, whereas the bald physical reality is that flight emissions are the most dangerous emissions, released into the most fragile part of the atmosphere (apart from ESAS methane release and the long-term impact of HCFCs and HFCs).
2. It's harmful to help persuade people it's fine to pollute and pay, rather than actually reduce actual emissions, espec... (read more)
"And as Scott Alexander points out, offsetting could lead people to think it’s acceptable to do big harmful things as long as they offset them."
I think it would be helpful to distinguish between the claims (1) "given that one has imposed some harm, one is obligated to offset it" and (2) "any imposition of harm is justified if it is offset." This article argues against the first claim, while Scott argues that the second one seems false. It seems pretty easy to imagine someone accepting (1) and rejecting (2), and I'd be pretty s... (read more)
Good points, but I would go further, having worked in this field both with meteorologists and politicals.
Individual offsets are easier to do than behaviour change, so they are a handy sop to the guilty consciences of middle-class people who want to keep driving and flying; perfect for self-deception.
More here: www.rationalreflection.net/can-we-offset-immorality
Thus offsets at the individual and local level amount to advanced greenwash, wrapped up as an environmental project.
In fact, most offsets are deeply flawed and many, particularly renewable energy projects (which may he... (read more)