
I'd like to thank Parker Whitfill, Andrew Kao, Stefan Schubert, and Phil Trammell for very helpful comments. Errors are my own.


Many people have argued that those involved in effective altruism should “be nice”, meaning that they should cooperate when facing prisoner’s-dilemma-type situations ([1] [2] [3]). While I find some of these arguments convincing, it seems to be underappreciated just how often someone attempting to do good will face prisoner’s dilemmas. Previous authors mostly highlight zero-sum conflict between opposing value systems [3] [4] or common-sense social norms like not lying [1]. However, the problem faced by a group of people trying to do good is effectively a public goods problem [10]; this means that, except in rare cases (such as when people agree completely on moral values), someone looking to do good will be playing a prisoner’s dilemma against others looking to do good.

In this post I first give some simple examples to illustrate how collective action problems almost surely arise among a group of people looking to do good. I then argue that the standard cause-prioritization methodology used within EA recommends defecting (“free-riding”) in these prisoner’s dilemma settings. Finally, I discuss some potential implications, including that popularizing EA thinking may cause harm and that there may be large gains from improving cooperation.


Main Points:

1. A group of people trying to do good is playing a form of public goods game. Except in rare circumstances, this will lead to inefficiencies due to free-riding (defecting), and thus to gains from cooperation.

2. Free-riding comes from individuals putting resources toward causes which they personally view as neglected (being under-valued by other people’s value systems) at the expense of causes for which there is more consensus.

3. Standard EA cause prioritization recommends that people free-ride on others' efforts to do good (at least when interacting with people not in the EA community).

4. If existing societal norms are to cooperate when trying to do good, EA may cause harm by encouraging people to free-ride.

5. There may be large gains from improving cooperation.


Collective Action Problems Among People Trying to do Good

Note that the main argument in this section is not original to me. Others within EA have written about this, some in more general settings than what I look at here [10].

The standard collective action problem is in a setting where people are selfish (each individual cares about their own consumption) but there’s some public good, say clean air, that they all value. The main issue is that when deciding whether to pollute the air or not, an individual doesn’t consider the negative impacts that pollution will have on everyone else. This creates a prisoner’s dilemma, where they would all be better off if they didn’t pollute, but any individual is better off by polluting (defecting). These problems are often solved through governments or through informal norms of cooperation.

Here I argue that this collective action problem is almost surely present among a group of people trying to do good, even if every member of the group is completely unselfish. All that is needed is that people’s value systems place some weight on how good the world is (they are not simply warm-glow givers) and that they have some disagreement about what counts as good (there’s some difference in values). The key intuition is that in an uncooperative setting each altruist will donate to causes based on their own value system without considering how much other altruists value those causes. This leads to underinvestment in causes which many different value systems place positive weight on (causes with positive externalities for other value systems) and overinvestment in causes which many value systems view negatively (causes with negative externalities). Except in a few unlikely circumstances, an allocation can be found that every value system prefers (a Pareto improvement) over the non-cooperative equilibrium, just as in any other public goods game.

For most readers, I expect that the examples below will get the main point across. If anyone is especially interested, here is a more general model of altruistic coordination that I used to check the intuition.


Examples

A. Two funders, positive externalities

Take a situation with two funders: a total utilitarian and an environmentalist (taken to mean someone who intrinsically values environmental preservation). Each has a total of $1000 to donate. The total utilitarian thinks that climate change mitigation is a very important cause, but they would prefer that funding instead goes toward AI safety research, which they think is about 50% more important than climate change. The environmentalist also thinks climate change mitigation is important, but they would prefer to spend money on near-term conservation efforts, which they view as being 50% more important than climate change. The environmentalist places almost no value on AI safety research and the total utilitarian places almost no value on near-term conservation efforts. If they don’t cooperate, the unique Nash equilibrium has them both spending their money on their own preferred causes, so $1000 goes to AI safety, $1000 to conservation, and $0 to climate change. If they could cooperatively allocate donations, they would choose to give all of the money ($2000) to climate change, which gives each of them a payoff 33% higher than in the non-cooperative case.
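To make the arithmetic concrete, here is a minimal sketch of this example (my own illustration, not code from the post), assuming linear per-dollar payoffs with climate change normalized to 1 for both funders:

```python
# Per-dollar values each funder assigns to each cause (from example A;
# "almost no value" is rounded to 0 for simplicity).
values = {
    "utilitarian":      {"ai_safety": 1.5, "climate": 1.0, "conservation": 0.0},
    "environmentalist": {"ai_safety": 0.0, "climate": 1.0, "conservation": 1.5},
}
BUDGET = 1000  # dollars per funder

def payoff(funder, allocations):
    """A funder's payoff is the value-weighted sum of ALL donations."""
    return sum(values[funder].get(cause, 0.0) * amount
               for alloc in allocations.values()
               for cause, amount in alloc.items())

# Non-cooperative (Nash) play: with linear payoffs, each funder's best
# response is to put their whole budget on their own top-valued cause.
nash = {"utilitarian": {"ai_safety": BUDGET},
        "environmentalist": {"conservation": BUDGET}}

# Cooperative allocation: everything goes to the mutually valued cause.
coop = {"utilitarian": {"climate": BUDGET},
        "environmentalist": {"climate": BUDGET}}

for funder in values:
    p_nash, p_coop = payoff(funder, nash), payoff(funder, coop)
    print(f"{funder}: {p_nash:.0f} -> {p_coop:.0f} ({p_coop / p_nash - 1:.0%} gain)")
```

Both funders go from a payoff of 1500 to 2000, the 33% gain described above.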


B. Two funders, negative externalities

The gains from cooperation would be even larger if each funder placed negative value on the other funder’s preferred cause. For example, if one funder’s preferred cause was pro-choice advocacy and the other’s was pro-life advocacy, then their payoffs in the non-cooperative setting may be nearly zero (their donations cancel each other out), which means the cooperative setting will have nearly infinitely higher payoffs in percentage terms. This idea has been noted before in writings on moral trade [4].
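A stylized sketch of this cancellation (the numbers, and the assumption that both funders place some small per-dollar value EPS on a third, mutually acceptable cause, are hypothetical illustrations, not from the post):

```python
# Each funder values their own advocacy at +1 per dollar and the opposing
# advocacy at -1 per dollar, so the two donations offset each other.
BUDGET = 1000
EPS = 0.1  # assumed small per-dollar value both place on a compromise cause

# Non-cooperative: $1000 of pro-choice advocacy vs. $1000 of pro-life
# advocacy. From either funder's perspective the donations cancel out.
noncoop_payoff = (+1 * BUDGET) + (-1 * BUDGET)

# Cooperative "moral trade": both budgets go to the compromise cause.
coop_payoff = EPS * (2 * BUDGET)

print(noncoop_payoff, coop_payoff)  # 0 200.0
```

Moving from a payoff of zero to any positive payoff is an unbounded improvement in percentage terms, which is why the gains here dwarf those in example A.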

Importantly, even if funders’ preferences for direct work lead to no negative externalities, there could be negative externalities in their preferences for advocacy. For example, in the situation in example A, neither funder places negative value on the other funder’s preferred cause. However, if we allow the utilitarian to fund advocacy which persuades people to donate to AI safety rather than climate change or conservation, this advocacy would be negatively valued by the environmentalist. Thus, even small differences in preferences for direct work can lead to zero-sum conflict on the advocacy front (for further discussion see [3] and [12]).


C. Multiple funders, positive externalities

Now notice that we could add a third funder to example A who was in a symmetric situation (say they valued anti-aging research, which the other two funders hardly value at all, 50% more than climate change, but place no value on AI safety or conservation). In this case the gains from cooperating (putting all the money into climate change research) increase to 50% for each person. In general, adding funders with their own “weird” cause will increase the gains from cooperating on causes for which there is more consensus.


D. No externalities

One case where cooperation does not lead to any gains is where people’s value systems are perfectly perpendicular to each other, so that there are no externalities. The most famous example of this is an economy of selfish individuals (everyone cares only about their own consumption and places no value, positive or negative, on the consumption of others). The non-cooperative equilibrium in this setting will be efficient, meaning that there can be no gains from cooperation (footnote: this is similar to the first welfare theorem). This could also occur (although I think it’s very unlikely) in a setting with altruistic individuals. In the setting from example A, if we change preferences so that both the environmentalist and the utilitarian place no value on climate change, then the non-cooperative equilibrium of the game cannot be improved upon. However, as noted above, the possibility of advocacy can create negative externalities between funders, and thus significant opportunities for cooperation. Also, I think in reality we see significant overlap in values, leading to large positive externalities from donations to certain causes.


E. Identical Value Systems

Another case in which the non-cooperative equilibrium is efficient is when there is no value disagreement among funders. Imagine two total utilitarians in the setting from example A. They would both choose to fund AI safety research in the non-cooperative setting, which is also the cooperative choice.

However, notice that this conclusion depends on the assumption that people are perfectly moral. If we add that they are partially selfish, but still agree on what is morally right, then in the non-cooperative setting they will overinvest in their own personal consumption. This leads to gains from cooperating by spending more on the public good (AI safety), like in the classical collective action problem.
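This classical case can be sketched with a toy contribution game (the functional forms and parameters are my illustrative assumptions): each of two identical, partially selfish utilitarians splits a $1000 endowment between private consumption (with diminishing returns) and a public good that both value.

```python
import math

ENDOWMENT = 1000  # dollars per person
BETA = 0.05       # assumed linear per-dollar value of the public good to EACH person

def utility(own_g, other_g):
    """Selfish part (sqrt of own consumption) plus moral part (total public good)."""
    return math.sqrt(ENDOWMENT - own_g) + BETA * (own_g + other_g)

def best_response(other_g):
    """Best integer contribution, ignoring the benefit to the other person."""
    return max(range(ENDOWMENT + 1), key=lambda g: utility(g, other_g))

# Nash equilibrium: iterate best responses until they are stable.
g_nash = 0
while best_response(g_nash) != g_nash:
    g_nash = best_response(g_nash)

# Cooperative optimum: the symmetric contribution maximizing TOTAL utility.
g_coop = max(range(ENDOWMENT + 1), key=lambda g: 2 * utility(g, g))

print(f"contributions: {g_nash} (Nash) vs {g_coop} (cooperative)")
print(f"payoffs: {utility(g_nash, g_nash):.1f} -> {utility(g_coop, g_coop):.1f}")
```

Even though both players agree entirely on what is good, each underinvests in the public good in equilibrium (900 vs 975 here), so both end up better off under the cooperative allocation.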

Perhaps the current EA community is close to having identical moral value systems (and is mostly unselfish) to the point where the gains from cooperation are low. I expect that this isn’t true. There seems to be a lot of heterogeneity in value systems within EA, and even small value differences can lead to a lot of inefficiency through the advocacy channel mentioned above [12]. Also, even if people’s moral values are identical, there seems to be a lot of disagreement about difficult-to-answer empirical questions within EA (such as whether we are living in the most important century [13]). These disagreements, as long as they persist, also lead to collective action problems.


EA Cause-Prioritization and Free-Riding

Having established that people attempting to do the most good are typically playing a prisoner’s dilemma, I now want to look at what EA organizations (mainly 80,000 Hours) have suggested people do. Here I would like to distinguish between cooperation with people involved in EA and cooperation with people outside of it. Within EA it seems commonly accepted that people should cooperate with others who have different values [2]. People often speak of maximizing “our” impact rather than my impact. And, importantly, people seem to disapprove of choices which benefit your own value system at the expense of others’ values.

With prisoner’s dilemmas against people outside of EA, it seems that the standard advice is to defect. In 80,000 Hours’ cause prioritization framework, the goal is to estimate the marginal benefit (measured by your value system, presumably) of an extra unit of resources being invested in a cause area [5]. No mention is given to how others value a cause, except to say that cause areas which you value a lot relative to others are likely to have the highest returns. This is exactly the logic of free-riding which led to coordination failures in the above examples: every individual makes decisions irrespective of the benefits or harms to other value systems, which leads to underinvestment in causes which many people value positively and overinvestment in causes which many value negatively.

The cause areas in example A were chosen because I think climate change is one area where EA is probably free-riding off other people’s efforts to do good. Given its wide range of negative consequences (harm to GDP, the global poor, animals, the environment, and extinction risk), a variety of moral systems place positive weight on mitigating climate change. Perhaps for this reason, governments and other groups are putting a large amount of resources towards the problem. This large amount of resources, along with the assumption of diminishing returns, has led many EAs to not put resources toward climate change (because it is not neglected), and instead focus on other cause areas. In effect, this is a decision to free-ride on the climate change mitigation work being done by those with different value systems. I expect this is also the case for many other causes which EAs regard as “important but not neglected”.


What Should We Do About This?

Although I believe that the EA community frequently defects in prisoner’s dilemmas, I am much less certain about whether this is a bad thing. If everyone else is defecting, and it’s very costly to improve cooperation, then the best that we can do is to defect ourselves. However, if there currently is some cooperation going on, following EA advice could reduce that cooperation, and thus be sub-optimal. Furthermore, even if there isn’t much cooperation currently, working to improve cooperation could be more valuable than simply not cooperating, depending on how costly it is to do so.


Working to Not Destroy Cooperation

There are a few reasons why I think it’s possible that there’s currently some cooperation between people with different value systems. First is that a large literature in behavioral economics finds that people frequently cooperate when playing prisoner’s dilemmas, at least when they expect their opponent to also cooperate [6]. There is also a fair amount of research showing that studying economics causes people to defect more often in prisoner’s dilemmas [14]. Hopefully learning about effective altruism doesn’t lead to a similar behavior change among moral actors. However, it should be noted that in behavioral research the outcomes are typically monetary payoffs to participants. I’m not aware of any research showing that people tend to cooperate when the outcomes of the game are moral objectives (like in the examples I listed above). For all I know, people don’t cooperate much in such situations, and thus it would not be possible for EA to cause more defection.

Next, some criticisms of effective altruism seem to be in line with the concern that it will reduce cooperation among those who wish to do good. Daron Acemoglu’s criticism of effective altruism from 2015 is one example [7] (note that Acemoglu is one of the most influential economists in the world). Although much of his critique is on earning to give, I think the substance of the critique applies more generally. He claims that effective altruism often advocates for doing good in ways that have negative externalities for others (like earning to give through high frequency trading), and thus it may be harmful if it became normal to view earning to give as an ethical life. He thinks many existing norms are more beneficial, such as the view that things like civil service or community activism are ethical activities.

More generally, there is a lot of criticism of private philanthropy for being “undemocratic” [8]. Free-riding issues among those looking to do good are one basis for this criticism. The government is the main institution we have for cooperating to solve collective action problems, which includes collective action problems between those looking to do good. Although any individual could do more good by donating their time and money to private philanthropy (defecting), we all may be better off if we all worked through the government or through some other cooperative channel. The large amount of criticism of private philanthropy may be evidence that cooperative norms around doing good are somewhat common in society.

If the above stories are true, and there actually is a degree of cooperative behavior happening, then spreading the methodology currently used within EA could be harmful, as it could lead to a decrease in cooperation. One may think we can still use this methodology without advocating that others do it, which may avoid any negative consequences. This is basically the idea of defecting in secret. As Brian Tomasik discusses [1], this seems unlikely to succeed; if EA has any major successes, then even without any advocacy other people are likely to notice and to imitate our methodology.

Another implication of this is that further investments in EA cause prioritization could be harmful. One of the main differences between the cause prioritization work done by EA organizations and work more commonly done in economics is that EA cause prioritization takes the perspective of a benevolent individual rather than a government. Perhaps, as EA cause prioritization continues to improve, more people will choose to use their advice and act unilaterally rather than cooperatively.

I should also note that even if the above stories are true, the other benefits of EA (mainly, encouraging people to do good effectively) may outweigh any negative effects from reducing cooperation.


Working to Improve Cooperation

Even if there isn’t much cooperation currently happening, there could be large gains to working to build such cooperation. For example, if cooperative norms aren’t widespread, then we could work to build those norms. If the government is currently very dysfunctional and non-cooperative, then we can work to improve it. A number of EA initiatives already involve increasing cooperation, including:

1. Work on improving institutional decision-making [9] and international cooperation

2. Work on mechanism design for altruistic coordination [10]

3. CLR’s research initiative on cooperation [11]

The arguments given here only strengthen the case for working on those causes. There are also a number of academic literatures that could be valuable, including those on the private provision of public goods and group conflict.

There are some other important considerations here. One is that methods for building cooperation between a more like-minded group of people may not work for building cooperation among more diverse groups. For example, increasing the warm glow from fighting for a common cause may help solve collective action problems within a political party, but it may make it more difficult to get party members to support compromise with an opposing party (because compromise prevents them from getting warm-glow from fighting).

Also, there may be reasons to prioritize building mechanisms for cooperation within effective altruism before expanding to a more value-diverse group of people. Let’s assume that people of significantly different value systems to the average EA tend to mostly be inefficient in their efforts to do good. If they are introduced to EA, they will be able to more effectively achieve their goals, which may actually have negative externalities on those currently involved in EA (through the advocacy channels mentioned above, for example). Thus, it may be better to first develop good mechanisms for cooperation, so that once these other people are introduced to EA ideas it will be rational for them to cooperate as well.

Finally, and more speculatively, I expect that many ways to improve cooperation involve increasing returns to scale, at least in a narrow sense. For example, improving institutions at the national or international level may only succeed if a very large number of people participate, which may be very difficult to achieve if the current norm is that altruists don’t cooperate much (you have to convince everyone to coordinate on another equilibrium). More appealing would be to pursue methods of cooperating which provide benefits even if smaller numbers of people participate. This could include reforming local governments, one at a time, then taking the reforms to state and national governments. Or it could include building a mechanism for cooperating within effective altruism and then adding more people into that mechanism incrementally.


Conclusion

There is no general reason to believe that good outcomes will arise when every individual aims to do the most good with respect to their own value system. In fact, in standard settings (like a group of people independently choosing where to donate money), the outcome when individuals aim to maximize their own impact will almost surely be inefficient. This means that there can be large gains to cooperation between altruistic individuals. It also means that the effective altruism movement, which encourages individuals to maximize their impact, could have negative consequences.


References

[1] https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/

[2] https://80000hours.org/articles/coordination/

[3] https://rationalaltruist.com/2013/06/13/against-moral-advocacy/

[4] https://www.fhi.ox.ac.uk/wp-content/uploads/moral-trade-1.pdf

[5] https://80000hours.org/articles/problem-framework/

[6] https://www.sciencedirect.com/science/article/pii/S1574071406010086

[7] http://bostonreview.net/forum/logic-effective-altruism/daron-acemoglu-response-effective-altruism

[8] https://www.vox.com/future-perfect/2019/5/27/18635923/philanthropy-change-the-world-charity-phil-buchanan

[9] https://80000hours.org/problem-profiles/improving-institutional-decision-making/

[10] https://drive.google.com/file/d/1_Tob-zKBVBrnuQ0kWEBFFuuo_4A6WIRj/view

[11] https://longtermrisk.org/topic/cooperation/

[12] https://www.philiptrammell.com/blog/43

[13] https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1

[14] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5584942/

Comments

I wonder if EA as it currently exists can be reframed into more cooperative terms, which could make it safer to promote. I'm speculating here, but I'd be interested in thoughts.

One approach to cause prioritisation is to ask "what would be the ideal allocation of effort by the whole world?" (taking account of everyone's values & all the possible gains from trade), and then to focus on whichever opportunities are most underinvested in vs. that ideal, and where you have the most comparative advantage compared to other actors. I've heard researchers in EA saying they sometimes think in these terms already. I think something like this is where a 'cooperation first' approach to cause selection would lead you.

My guess is that there's a good chance this approach would lead EA to support similar areas to what we do currently. For instance, existential risks are often pitched as a global public goods problem: I think that, on balance, people would prefer there was more effort going into mitigation (since most people prefer not to die, and have some concern for future generations). But our existing institutions are not delivering this, and so EAs might aim to fill the gap, so long as we think we have a comparative advantage in addressing these issues (and until institutions improve to the point where this is no longer needed).

I expect we could also see work on global poverty in these terms. On balance, people would prefer global poverty to disappear (especially if we consider the interests of the poor themselves), but the division into nation states makes it hard for the world to achieve that.

This becomes even more likely if we think that the values of future generations & animals should also be considered when we construct the 'world portfolio' of effort. If these values were taken into account, then currently the world would, for instance, spend heavily on existential risk reduction & other investments that benefit the future, but we don't. It seems a bit like the present generation is failing to cooperate with future generations. EA's cause priorities aim to redress this failure.

In short, the current priorities seem cooperative to me, but the justification is often framed in marginal terms, and maybe that style of justification subtly encourages an uncooperative mindset.

I agree with your intuition about what a "cooperative" cause prioritization might look like, although I do think a lot more work would need to be done to formalize this. I also think it may not make sense to use cooperative cause prioritization: if everyone else always acts non-cooperatively, you should too.

I'm actually pretty skeptical of the idea that EA tends to fund causes which are widely valued by people as a whole. It could be true, but it seems like it would be a very convenient coincidence. EA seems to be made up of people with pretty unusual value systems (this, I'd expect, is partly what leads EAs to view some causes as being orders of magnitude more important than the causes that other people choose to fund). It would be surprising if optimizing independently for the average EA value system leads to the same funding choices as would optimizing for some combination of the value systems in the general population. While I agree that global poverty work seems to be pretty broadly valued (many governments and international organizations are devoted to it), I'm unsure about things like x-risk reduction. Have you seen any evidence that that is broadly popular? Does the UN have an initiative on x-risk?

I would imagine that work which improves institutions is one cause area which would look significantly more important in the cooperative framework. As I mention in the post, governments are one of the main ways that groups of people solve collective action problems, so improving their functioning would probably benefit most value systems. This would involve improving both formal institutions (e.g. constitutions) and informal institutions (e.g. civic social norms). In the cooperative equilibrium, we could all be made better off because people of all different value systems would put a significant amount of resources towards building and maintaining strong institutions.

A (tentative) response to your second to last paragraph: the preferences of animals and future generations would probably not be directly considered when constructing the cooperative world portfolio. Gains from cooperation come from people who have control over resources working together so that they're better off than in the case where they independently spend their resources. Animals do not control any resources, so there are no gains from cooperating with them. Just like in the non-cooperative case, the preferences of animals will only be reflected indirectly due to people who care about animals (just to be clear: I do think that we should care about animals and future people). I expect this is mostly true of future generations as well, but maybe there is some room for inter-temporal cooperation.

Interesting. My personal view is that the neglect of future generations is likely 'where the action is' in cause prioritisation, so if you exclude their interests from the cooperative portfolio, then I'm less interested in the project.


I'd still agree that we should factor in cooperation, but my intuition is then that it's going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation. I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?

The point about putting more emphasis on international coordination and improving institutions seems reasonable, though again, I'd wonder if it's enough to trump the lower neglectedness.

Either way, it seems a bit odd to describe longtermist EAs who are trying to help future generations as 'uncooperative'. It's more like they're trying to 'cooperate' with future people, even if direct trade isn't possible.


On the point about whether the present generation values x-risk, one way to illustrate it is that the value of a statistical life in the US is about $5m. This means that US citizens alone would be willing to pay, I think, 1.5 trillion dollars to avoid 0.1ppt of existential risk.
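A quick back-of-the-envelope reconstruction of that figure (the ~300M population number is my assumption, chosen because it reproduces the quoted total; the $5m VSL is from the comment):

```python
VSL = 5e6               # value of a statistical life in the US, dollars
POPULATION = 300e6      # assumed US population, to order of magnitude
RISK_REDUCTION = 0.001  # 0.1 percentage points of existential risk

expected_lives_saved = POPULATION * RISK_REDUCTION  # 300,000 statistical lives
willingness_to_pay = expected_lives_saved * VSL

print(f"${willingness_to_pay / 1e12:.1f} trillion")  # $1.5 trillion
```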

Will MacAskill used this as an argument that the returns on x-risk reduction must be lower than they seem (e.g. perhaps the risks are actually much lower), which may be right, but still illustrates the idea that present people significantly value existential risk reduction.

I'd still agree that we should factor in cooperation, but my intuition is then that it's going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation. I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?

I think one point worth emphasizing is that if the cooperative portfolio is a pareto improvement, then theoretically no altruist, including longtermist EAs, can be made worse off by switching to the cooperative portfolio.

Therefore, even if future generations are heavily neglected, the cooperative portfolio is better according to longtermist EAs (and thus for future generations) than the competitive equilibrium. It may still be too costly to move away from the competitive equilibrium, and it is non-obvious to me how the neglect of future generations changes the cost of trying to move society towards the cooperative portfolio or the gain from defecting. But if the cost of moving society to the cooperative portfolio is very low then we should probably cooperate even if future generations are very neglected.

I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?

The more general model captured the idea that there are almost always gains from cooperation between those looking to do good. It doesn't show, however, that those gains are necessarily large relative to the costs of building cooperation (including opportunity costs). I'm not sure what the answer is to that.

Here's one line of reasoning which makes me think the net gains from cooperation may be large. Setting aside the possibility that everyone has near identical valuations of causes, I think we're left with two likely scenarios:

1. There's enough overlap in valuations of direct work to create significant gains from compromise on direct work (maybe on the order of doubling each person's impact). This is like example A in the post.

2. Valuations of direct work are so far apart (everyone thinks that their cause area is 100x more valuable than others) that we're nearly in the situation from example D, and there will be relatively small gains from building cooperation on direct work. However, this creates opportunities for huge externalities due to advocacy, which means that the actual setting is closer to example B. Intuition: If you think x-risk mitigation is orders of magnitude more important than global poverty, then an intervention which persuades someone to switch from working on global poverty to x-risk will also have massive gains (and have massively negative impact from the perspective of the person who strongly prefers global poverty). I don't think this is a minor concern. It seems like a lot of resources get wasted in politics due to people with nearly perpendicular value systems fighting each other through persuasion and other means.

So, in either case, it seems like the gains from cooperation are large.

I'd still agree that we should factor in cooperation, but my intuition is then that it's going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation.

For now, I don't think any major changes in decisions should be made based on this. We don't know enough about how difficult it would be to build cooperation and what the gains to cooperation would be. I guess the only concrete recommendation may be to more strongly emphasize the "not being a jerk" part of effective altruism (especially because that can often be in major conflict with the "maximize impact" part). Also I would argue that there's a chance that cooperation could be very important and so it's worth researching more.

I also wanted to attempt to clarify 80k's position a little.

With prisoner’s dilemmas against people outside of EA, it seems that the standard advice is to defect. In 80,000 Hours’ cause prioritization framework, the goal is to estimate the marginal benefit (measured by your value system, presumably) of an extra unit of resources being invested in a cause area [5]. No mention is given to how others value a cause, except to say that cause areas which you value a lot relative to others are likely to have the highest returns.

I agree this is the thrust of the article. However, also note that in the introduction we say:

However, if you’re coordinating with others in aiming to have an impact, then you also need to consider how their actions will change in response to what you do, which adds additional elements to the framework, which we cover here.

Within the section on scale we say:

It can also be useful to group instrumental sources of value within scale, such as gaining information about which issues are most important, or building a movement around a set of issues. Ideally, one would also capture the spillover benefits of progress on this problem on other problems. Coordination considerations, as briefly covered later, can also change how to assess scale.

And then at the end, we have this section:

https://80000hours.org/articles/problem-framework/#how-to-factor-in-coordination


On the key ideas page, we also have a short section on coordination and link to:

https://80000hours.org/articles/coordination/

Which advocates compromising with other value systems.

And, there's the section where we advocate not causing harm:

https://80000hours.org/key-ideas/#moral-uncertainty-and-moderation


Unfortunately, we haven't yet done a great job of tying all these considerations together – coordination gets wedged in as an 'advanced' consideration; whereas maybe you need to start from a cooperative perspective, and totally reframe everything in those terms.

I'm still really unsure of all of these issues. How common are prisoner's dilemma style situations for altruists? When we try to factor in greater cooperation, how will that change the practical rules of thumb? And how might that change how we explain EA? I'm very curious for more input and thinking on these questions.

Thanks for the clarification. I apologize for making it sound as if 80k specifically endorsed not cooperating.

I don't buy your example on 80k's advice re: climate change. You want to cooperate in prisoner's dilemmas if you think that it will cause the agent you are cooperating with to cooperate more with you in the future. So there needs to a) be another coherent agent, which b) notices your actions, c) takes actions in response to yours, and d) might plausibly cooperate with you in the future. In the climate change case, what is the agent you'd be cooperating with here and does it meet these criteria?

Is it the climate change movement? It doesn't seem to me that "the climate change movement" is enough of a coherent agent to do things like decide "let's help EA with their goals."

Or is it individual people who care about climate change? Are they able to help you with your goals? What is it you want from them?

First, the only strong claim that I'm trying to make in the post is that the standard EA advice in this setting is to free-ride. Free-riding is not necessarily irrational or immoral. In the section "Working to not Destroy Cooperation" I argue that it's possible that this sort of free-riding will make the world worse, but that is more speculative.

As far as who the other players are in the climate change example, I was thinking of it as basically everyone else in the world who has some interest in preventing climate change, but the most important players are those who are or could potentially have a large impact on climate change and other important problems. This takes the form of a many-player public goods game, which is similar conceptually to a prisoner's dilemma. While I do think it's unlikely that everyone who has contributed to fighting climate change will collectively decide "let's not help EA with their goals", I think it's possible that if EA has success with their current strategy, some people will choose to use the methodology of EA. This could lead them to contribute to causes which are neglected by their value systems but which most people currently in EA find less important than climate change (causes like philanthropy in their local communities, or near term conservation work, or spreading their religion, or some bizarre thing that they think is important but no one else does). So, in that way, free-riding by EA could lead others to free-ride, which could make us all worse off.
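As a minimal sketch of how this many-player public goods structure reduces to a prisoner's dilemma, here is a hypothetical two-funder payoff table (all numbers made up for illustration): each funder's pet cause is worth 10 by their own values and nothing to the other funder, while a shared cause like climate change is worth 6 to each funder.

```python
# Toy two-funder version of the many-player public goods game described
# above. Payoff numbers are hypothetical, chosen only to illustrate the
# free-riding structure.

def payoffs(choice_a, choice_b):
    """Each choice is 'pet' or 'shared'; returns (A's value, B's value)."""
    a = (10 if choice_a == "pet" else 6) + (6 if choice_b == "shared" else 0)
    b = (10 if choice_b == "pet" else 6) + (6 if choice_a == "shared" else 0)
    return a, b

print(payoffs("pet", "pet"))        # (10, 10): mutual free-riding
print(payoffs("shared", "shared"))  # (12, 12): mutual cooperation does better
print(payoffs("pet", "shared"))     # (16, 6): unilateral free-riding pays
```

Funding the pet cause is a dominant strategy for each funder, yet both end up worse off than if they had both funded the shared cause; this is exactly the prisoner's dilemma pattern.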

Gotcha. So your main concern is not that EA defecting will make us miss out on good stuff that we could have gotten via the climate change movement deciding to help us on our goals, but rather that it might be bad if EA-type thinking became very popular?

Interesting, thanks for writing this up!

In practice, and for the EA community in particular, I think there are some reasons why the collective action problem isn't quite as bad as it may seem. For instance, with diminishing marginal returns on causes, the most efficient allocation will be a portfolio of interventions with weights roughly proportional to how much people care on average. But something quite similar can also happen in the non-cooperative equilibrium for some diversity of actors who all support the cause they're most excited about. (Maybe this is similar to case D in your analysis.)
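The first half of this claim is exact under log returns: maximizing a total value of the form sum_j w_j * log(x_j) over a fixed budget puts shares exactly proportional to the average weights w_j. A quick numeric spot-check (the weights below are made up):

```python
import math
import random

w = [4.0, 2.0, 1.0, 1.0]                  # hypothetical "average caring" per cause

def total_value(x):
    """Total value under log returns for portfolio shares x."""
    return sum(wj * math.log(xj) for wj, xj in zip(w, x))

# Portfolio with shares proportional to average caring:
x_star = [wj / sum(w) for wj in w]

# Spot-check optimality: random feasible portfolios never do better.
random.seed(1)
for _ in range(1000):
    draws = [random.expovariate(1.0) for _ in w]
    x = [d / sum(draws) for d in draws]   # random point on the simplex
    assert total_value(x) <= total_value(x_star) + 1e-9

print(x_star)  # [0.5, 0.25, 0.125, 0.125]
```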

Can you point to examples of concrete EA causes that you think get too much or too little resources due to these collective action problems?

Thanks for the comment. First, I'd like to point out that I think there's a good chance that the collective action problem within EA isn't so bad because, as I mentioned in the post, there has been a fairly large emphasis on cooperating with others within EA. It's when interacting with people outside of EA that I think we're acting non-cooperatively.


However, it's still worth discussing whether there are major unsolved collective action problems within EA. I'll give some possible examples here, though I'm very unsure about many of them. First, here are some causes which I think benefit EAs of many different value systems and would thus be underfunded if people were acting non-cooperatively:

1. General infrastructure including the EA forum, EA funds or EA global. This also would include the mechanisms for cooperation which I mentioned in the post. All of these things are like public goods in that they probably benefit nearly every value system within EA. If true, this also means that the "EA meta fund" may be the most public-good-like of the four EA funds.

2. The development of informal norms within the community (like being nice, not overstating or making misleading arguments, and cooperating with others). The development and maintenance of these norms also seems to be a public good which benefits all value systems.

3. (This is the most speculative one.) More long-term-oriented approaches to near-term EA cause areas. An example is approaches to global development which involve building better and lasting political institutions (see this forum post). This may represent a kind of compromise between some long-termist EAs (who may normally donate to AI safety) and global development EAs (who would normally donate to short-term development initiatives like AMF).


And here are some causes which I think are viewed as harmful by some value systems and thus would be overfunded if people acted non-cooperatively:

1. Advocacy efforts to convince people to convert from other EA cause areas to your own. As I mentioned in the post, these can be valued negatively by other value systems.

2. Causes which increase (or decrease) the population. People disagree on whether creating more lives is on average good or bad (for example, some suffering-focused EAs may think that creating more human lives is bad; conversely, some people may think that creating more farm animal lives is on average good). This means that causes which increase (decrease) the population will be viewed as harmful by those who view population increases (decreases) as bad. Brian Tomasik's example at the end of this post is along those lines.


So, in general, I don't think I agree that the EA community would naturally avoid major collective action problems. It seems more likely that EA has solved most of its internal collective action problems by emphasizing cooperation.

One more example to add here of a cause which may be like a "public good" within the EA community: promoting international cooperation. Many important causes are global public goods (that is, causes which benefit the whole world and thus any one nation has an incentive to free-ride on other nations' contributions), including global poverty, climate change, x-risk reduction, and animal welfare. I know that FHI already has some research on building international cooperation. I would guess that some EAs who primarily give to global poverty would be willing to shift funding towards building international cooperation if some EAs who normally give to AI safety do the same.

I feel like the motivating example here (of the prisoner's dilemma between a utilitarian and an environmentalist) is relying a lot on the specific numbers going into the example. In particular, it's relying on the assumption that cause areas don't differ dramatically in impact.

If you believe that (a) the best idiosyncratic giving opportunities are very much better than the best consensus giving opportunities, and (b) other people's best idiosyncratic giving opportunities are much less bad (by your values) than your own best idiosyncratic giving opportunities are good, then the "non-co-operative" altruistic equilibrium will be better by your values overall than the "co-operative" equilibrium.
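That condition can be made concrete with a toy payoff function (all numbers hypothetical): write g for the value, by your lights, of your best idiosyncratic option, s for the consensus option's value to each of you, and h for the value to you of the other funder's idiosyncratic option.

```python
# Toy payoff function for the condition above; g, s, h are hypothetical
# parameters, not figures from the post.
def my_value(i_cooperate, other_cooperates, g=100.0, s=1.0, h=-0.5):
    """Your total value given each funder's choice to cooperate (fund the
    consensus cause) or defect (fund their own idiosyncratic cause)."""
    mine = s if i_cooperate else g
    theirs = s if other_cooperates else h
    return mine + theirs

# When g >> s and h is only mildly negative, mutual defection beats
# mutual cooperation by your own values:
print(my_value(False, False))  # 99.5
print(my_value(True, True))    # 2.0
```

Shrinking g toward s, or making h strongly negative (the powerful-optimiser case below), reverses the comparison and makes cooperation attractive again.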

This seems to be true in the example given. Near-term conservation work might be bad from a total utilitarian perspective (I'm not sure if this is true, but it's plausible to me), but it seems much less bad than, say, AI safety work is good. If so, the cost of having the environmentalist work on conservation instead of climate change is well worth paying in exchange for being able to work on AI safety.

Ditto for most major "mainstream" charitable causes: from a longtermist perspective I'd say that, whatever their sign, their magnitude tends to be drastically smaller than that of the most promising EA causes. So foregoing large gains from cause prioritisation to work better with groups advocating these causes might simply not be worth it.

Conversely, if you're dealing with a fairly powerful optimiser with a very different value set from yours (e.g. a committed total utilitarian negotiating with a committed negative utilitarian), their best options might be very negative from your perspective, so co-operation is more important for both of you.

Whoops, sorry, I wrote this yesterday and then forgot to post it until today, and in the meantime Ben Todd made the same point in one of his comments.

Interesting. Reminds me of this post by Paul Christiano on moral public goods.

Thanks for that reference! I hadn't come across that before. I think the main difference is that for most of my post I'm considering public goods problems among people who are completely unselfish but have different moral values. But problems also exist when people have identical moral values and some level of selfishness. Paul Christiano's post does a nice job of explaining that case. Milton Friedman also wrote about that problem (specifically, he talked about how poverty alleviation is a public good).

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

This post describes issues that could apply to nearly every kind of EA work, with clear negative consequences for everyone involved. I especially liked the problem statement in this passage:

The key intuition is that in an uncooperative setting each altruist will donate to causes based on their own value system without considering how much other altruists value those causes. This leads to underinvestment in causes which many different value systems place positive weight on (causes with positive externalities for other value systems) and overinvestment in causes which many value systems view negatively (causes with negative externalities).

The post supports this point with a well-structured argument. Elements I especially liked:

  • The use of tables to demonstrate a simple example of the problem
  • References to criticism of EA from people outside the movement (showing that “free-riding” isn’t just a potential issue, but may be influencing how people perceive EA right now)
  • References to relevant work already happening within the movement (so that readers have a sense for existing work they could support, rather than feeling like they’d have to start from scratch in order to address the problem)
  • The author starting their “What should we do about this?” section by noting that they weren’t sure whether “defecting in prisoner’s dilemmas” was actually a bad thing for the EA community to do. It’s really good to distinguish between “behavior that might look bad” and “behavior that is actually so harmful that we should stop it.”

Your post paints a picture of differences in values where I only see differences in careful thinking. The general public supports local charities and animal shelters not because they have different values, but because they have not spent much time thinking carefully about their altruistic aspirations. I think most people would find causes like poverty in developing countries and global catastrophic risks very much within their altruistic priorities if they used tools like prioritization and cost-efficiency. Those are not EA-specific tools; those are tools that people already use in their personal lives.

Others have alluded to it, I just wanted to make this point into its own comment because this part of your essay seems so off to me.

Thanks for the comment. If differences in careful thinking are the main sources of differences in people's altruistic behavior and those differences can be easily eliminated through informing people about the benefits of thinking carefully, then I agree that the ideas in this post are not very important.

The reason that the second part is relevant is that as long as these differences in careful thinking persist, it's as if people have differences in values (this is the same as what I said in the essay about how there are a lot of differences in beliefs within the EA community which lead to different valuations of causes, even when people's moral values are identical). If these differences in careful thinking were easy to eliminate, then we should be prioritizing informing the entire world about their mistakes ASAP, so that any differences in altruistic priorities would be eliminated. Unfortunately, I don't think these differences are easy to eliminate (I think that's partially why the EA community has moved away from advocacy).

I also would disagree that differences in careful thinking are the main source of disagreements in people's altruistic behavior. Even within the EA community, where I think most people think very carefully, there are large differences in people's valuations of causes, as I mentioned in the post. I expect the situation would be similar if the entire world started "thinking more carefully".

This is a tangent, but if you're looking for an external critic making a point along these lines, the LRB review of DGB might be better. You could see systemic change as a public goods problem, and the review claims that EAs neglect it due to their individualist focus. More speculation at the end of this:

https://forum.effectivealtruism.org/posts/7DfaX75zGehPZWJTx/thread-for-discussing-critical-review-of-doing-good-better

Thank you for the post – very interesting and thought provoking ideas. I have a couple of points to explore further that I'll break into different replies.

I'd be curious for more thoughts on how common these situations are.

In the climate change / AI safety / conservation example, it occurred to me that if each individual thinks their top option is 10 times more effective than the second option, it becomes clearly better again (from their pov) to support their top option. The numbers seem to work only because AI safety is just marginally better than climate change.

You point out that the problem becomes more severe as the number of funders increases. It seems like there are roughly 4 'schools' of EA donors, so if we consider a coordination problem between these four schools, it'll roughly make the issue 2x bigger, but it seems like that still wouldn't outweigh 10x differences in effectiveness.

The point about advocacy making it worse seems good, and a point against advocacy efforts in general. Paul Christiano also made a similar point here: https://rationalaltruist.com/2013/06/13/against-moral-advocacy/

I'd be interested in more thoughts on how commonly we're in the prisoner's dilemma situation you note, and what the key variables are (e.g. differences in cause effectiveness, number of funders etc.).

Thanks a lot for the comment. Here are a few points:

1. You're right that in the simple climate change example it won't always be a prisoner's dilemma. However, I think that's largely because I assumed constant returns to scale for all three causes. At the bottom of this write-up I have an example with three causes that all have log returns. As long as both funders value the causes positively and don't have identical valuations, a Pareto improvement is possible through cooperation (unless I'm making a mistake in the proof, which is possible). So I think the existence of collective action problems is more general than the climate change example would make it seem.

2. It's a very nice point that the gains from cooperation may be small in magnitude, even if they're positive. That is definitely possible. But I'm a little skeptical that large valuation differences between the 4 'schools' of EA donors mean that the gains from cooperation are likely to be small. I think even within those schools there are significant disagreements among causes. For example, within the long-termist school, disagreements on whether we're living in an extremely influential time or on how to value population increases can lead to very large disagreements in valuation of causes. Also, when people have very large differences in valuations of direct causes, the opportunity for conflict on the advocacy front seems to increase (see Phil Trammell's post here).


I agree that it would be useful to get more of an idea of when the prisoner's dilemma is likely to be severe. Right now I don't think I have much more to add on that.
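The log-returns claim in point 1 can be illustrated with a small numeric sketch (the two funders' valuations below are made up, and this is not the proof from the write-up): each funder puts most of their weight on a different pet cause and a little on a shared third cause, the non-cooperative equilibrium under-funds the shared cause, and reallocating in proportion to summed valuations makes both funders better off by their own lights.

```python
import math

def best_response(w, other, budget=1.0):
    """Split `budget` across causes to maximize sum_j w[j]*log(other[j] + a[j]).
    Uses the water-filling solution a_j = max(0, w_j/lam - other_j), with the
    multiplier lam found by geometric bisection."""
    def spend(lam):  # total spent at multiplier lam (decreasing in lam)
        return sum(max(0.0, wj / lam - cj) for wj, cj in zip(w, other))
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if spend(mid) > budget:
            lo = mid
        else:
            hi = mid
    lam = math.sqrt(lo * hi)
    return [max(0.0, wj / lam - cj) for wj, cj in zip(w, other)]

def value(w, x):
    """A funder's total value of aggregate funding x under log returns."""
    return sum(wj * math.log(xj) for wj, xj in zip(w, x))

# Two funders, three causes; each mostly values their own pet cause but
# places a little weight on a shared third cause (made-up numbers):
vA, vB = [10.0, 1.0, 1.0], [1.0, 10.0, 1.0]

# Approximate the non-cooperative equilibrium by iterating best responses:
a, b = [1 / 3] * 3, [1 / 3] * 3
for _ in range(100):
    a = best_response(vA, b)
    b = best_response(vB, a)
x_nash = [ai + bi for ai, bi in zip(a, b)]

# Cooperative benchmark: split the combined budget of 2 in proportion to
# the funders' summed valuations:
w_sum = [va + vb for va, vb in zip(vA, vB)]
x_coop = [2 * wj / sum(w_sum) for wj in w_sum]

# Both funders do better, by their own values, under cooperation, and the
# shared third cause receives more funding:
print(value(vA, x_nash), value(vA, x_coop))
print(value(vB, x_nash), value(vB, x_coop))
```

This is only a two-funder, three-cause instance, not the general proof, but it shows the mechanism: in equilibrium each funder free-rides on the other's small contribution to the shared cause, so it ends up underfunded relative to the cooperative split.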

At the bottom of this write-up I have an example with three causes that all have log returns. As long as both funders value the causes positively and don't have identical valuations, a Pareto improvement is possible through cooperation.

Very interesting, thank you.

This was one of the most important posts I've read on the Forum all year. I'll definitely be thinking about it for a while. Thank you for posting it!

Thanks for this post, I found it quite interesting.

You write:

There is also a fair amount of research showing that studying economics causes people to defect more often in prisoner’s dilemmas [14]. Hopefully learning about effective altruism doesn’t lead to a similar behavior change among moral actors.

The abstract of the cited source reads:

Do economics students behave more selfishly than other students? [...] The three mechanisms [that might lead them to do so] were tested by inviting students from various disciplines to participate in a relatively novel experimental game and asking all participants to give reasons for their choices. Compared with students of other disciplines, economics students [...] [emphasis added]

So it sounds like this study showed that economics students defect more often than other students, but not that studying economics causes that. It might be that different kinds of people decide to study economics vs other subjects, and that their pre-existing differences play the causal role here.

Do you know if there's research more directly supporting your claim? E.g., studies which measure the same students' behaviours before vs after studying economics? (I only read the abstract of that study, and didn't look into other studies.)

This may be a relatively minor/tangential point. But it also feels like it might make a decent difference to how much I should predict defection to increase as a result of learning about EA cause-prioritisation ideas. 

Thanks, this is a very good comment. I mostly cited that article for the literature review, which includes a few papers that argue for a causal connection between learning economics and free-riding. However, I looked into it more today, and it seems like the entire body of work is inconclusive on this question. Here's a more recent literature review on that.

I'll edit that part of the post to be more accurate.

Some of your statements about advocacy and politics reminded me of the following two posts by Robin Hanson, which I enjoyed and would recommend to people who haven't read them yet:

Thanks so much for writing this. I've had similar worries regarding local charity and things like fair trade for a while.

Would you elaborate on what you mean?

Even though fair trade is ineffective on an individual level, it may be effective on a collective level, because enough people find it appealing for broad adoption. Deciding to ignore it weakens any attempt to establish buying fair trade as a societal norm.

EAs don't arise out of a vacuum, but out of society. If society is doing well, then EAs are more likely to do well too and hence to have more impact. So by not donating to a local charity, you are refusing to invest in the society that provided you the chance to have an impact in the first place.

Not saying you should donate locally or buy fair trade, just pointing out one worry with ignoring them.

Thank you for creating this! It's important and extremely well-written.

Feels apropos to drop a pointer to Gabay et al. 2019 – "MDMA Increases Cooperation and Recruitment of Social Brain Areas When Playing Trustworthy Players in an Iterated Prisoner's Dilemma"

I don't read this Forum post as suggesting individuals within EA are uncooperative, but rather that EA institutions/teachings are uncooperative.

Institutional decision-making is the aggregate of individual decision-making, right?

I would disagree - although the 'rules of the game' are created by individuals, they are larger than any one individual and difficult to change by any one individual despite their best efforts.

But that's a bit of a tangent - I can see why you thought it was relevant.

Right – when we're looking for ways to improve coordination, we should consider interventions at both the systemic level and the individual level.

It seems obvious that there's a close relationship between the two levels. If the causal relationships between levels are murky, that implies casting a wide net when surveying potential interventions. (If we can't see the causal relationships clearly at the start, we can't confidently rule out interventions on either level.)

That article has a good reference list, but the references are mostly of historical interest. I do recommend reading the article and looking at the references. (It might take 80,000 hours to read them all in full, so I don't recommend that; just read the abstracts, and if you want, look at the intros and conclusions and scan the main text, especially the equations. One can say more in one equation than in 1000 words, but the equation has 1000 words, and references, behind it.)

Michael Taylor's 1976 book 'The Possibility of Cooperation' discussed this theme, but he used a somewhat outmoded game-theoretic approach which has mostly been replaced by a different formalism (one deriving from physics and from theoretical and evolutionary biology; behavioral economics and Rawlsian utilitarianism, or other variants, don't come close to that formalism).

The article does come to the correct conclusion. The EA movement could easily just do the greatest good for the smallest number. In Europe during the bubonic plague, the educated and better-off classes barricaded themselves into forts, castles, and mansions. But in the long term this, in a sense, saved many others in the future: it helped create the 'Enlightenment'.

I think with current modern knowledge that approach is unnecessary. I cannot wholeheartedly recommend this group, because they also have a limited view of cooperation, but they do have more current thought in this area: https://www.santafe.edu
