Cross-posted from my blog.

You might hear stories of someone who influenced someone else to go vegan or to donate $100, and who then claimed to have caused X animal lives to be saved or $100 to be donated, which are very good things indeed. But the person who donated that $100 can also claim responsibility for the donation, because they were an integral step in the outcome, without which the money wouldn’t have been donated.

But if both parties are claiming full responsibility for causing $100 to be donated, shouldn’t that imply that $200 was donated? So who can claim responsibility here? Are they both equally responsible? Is it reasonable to say that they were both fully responsible after all? Or is it, as many things are in the real world, much more complicated than that? This is important if we, as individuals and organisations interested in maximising impact, are going to be rigorous about measuring the impact of individuals.

A friend once told me a story that poses an ethical riddle. It goes like this:

A married woman had been growing bored. Her husband wasn’t paying her attention anymore and had stopped treating her well. She started sneaking away at night to sleep with other men across the river from her house. There was a bridge, but she took the ferry to reduce the risk of being seen. One night she crossed the river, but the man she had arranged to sleep with didn’t show. She went back to the ferry, but the ferryman had heard from a friend what the woman was doing and refused to take her any more. The woman, desperate, crossed the bridge, where a drunken man killed her in a fit of rage. Whose fault was it that the woman died?

Another, more complicated riddle is presented:

There were four men in a military camp in the middle of the desert. Three of them hated the fourth, John, and wanted to kill him, but they wanted it to look like an accident. One day, when it was John’s turn to go on patrol, one of the others took his chance and put poison in John’s water flask. A second soldier, not knowing what the first had done, poured out John’s water and replaced it with sand. The third then came and poked small holes in the flask so its contents would slowly leak out. When John was halfway through his patrol and reached for a drink, he realised his flask was empty, and he died of thirst. Who killed John?

In safety engineering, there is a concept known as the ‘root cause’. Take, for example, Air France Flight 4590, a Concorde that crashed shortly after take-off from Charles de Gaulle Airport in 2000, killing everyone on board and four people on the ground. Was it the crew’s fault? No, because one of the plane’s engines had caught fire during take-off. So was it the fault of the engine manufacturers?

No: the investigation revealed that a tyre had ruptured during the take-off roll, throwing debris into a fuel tank, and the leaking fuel had ignited. The rupture was in turn caused by a strip of metal lying on the runway, which had fallen off another aircraft that had departed shortly before. That led back to the maintenance operator who had replaced that particular strip and installed it incorrectly. This was identified as the root, and primary, cause of the accident.

But even so, we can go back further. Someone must have trained this operator – did they do a bad job? Is it the fault of the company’s management, for not putting practices in place to prevent such mistakes? Maybe someone had just upset the operator and he wasn’t thinking straight.

If we go back to our first example and apply the root-cause logic, it suggests that the woman died because of her husband. But this is an uncomfortable result, as the person most at fault is surely the man who actually killed her. Some might argue that the root cause really is just the drunken man, but it has to be said that every individual in that story played an integral part in the woman’s death.

It might even be argued that the man was not thinking straight. What if he had been drugged through no fault of his own? To be clear, I don’t mean to imply that each player in this chain of events should be held responsible, or indeed be ‘guilty’, but each did play an unwitting role.

Bringing this all back to the original question, I confess I don’t have an answer. But I’m convinced that the answer isn’t as simple as we think, and if we want to be rigorous about measuring the impact that individuals have through an action or over their life, we should consider this further. At the very least, we should define very clearly what we mean when we say “I/we caused $100 to be donated.”

Looking forward to hearing comments.

Comments

If the reason we want to track impact is to guide/assess behavior, then I think counting foreseeable/intended counterfactual impact is the right approach. I'm not bothered by the fact that we can't add up everyone's impact. Is there any reason that would be important to do?

In the off-chance it's helpful, here's some legal jargon that deals with this issue: If a result would not have occurred without Person X's action, then Person X is the "but for" cause of the result. That is so even if the result also would not have occurred without Person Y's action. Under these circumstances, either Person X or Person Y can (usually) be sued and held liable for the full amount of damages (although that person might be able later to sue the other and force them to share in the costs).

Because "but for" causation chains can be traced out indefinitely, we generally hold someone accountable only for the reasonably foreseeable results of their actions. This limitation on liability is called "proximate cause." So Person X proximately caused the result if their actions were the but-for cause of the result, and the result was reasonably foreseeable.
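For concreteness, here is the but-for test expressed as a counterfactual check in Python (a toy model with names of my own invention, and a deliberately crude simplification of the John riddle above). Notably, applied to the riddle, the test comes up empty for every saboteur, since the other sabotages would have emptied the flask anyway: a classic overdetermination case where but-for causation breaks down.

```python
from typing import Callable, Dict

def is_but_for_cause(actions: Dict[str, bool],
                     outcome: Callable[[Dict[str, bool]], bool],
                     actor: str) -> bool:
    """X is a 'but for' cause if the result would not have
    occurred without X's action."""
    if not outcome(actions):
        return False                        # the result never occurred at all
    counterfactual = dict(actions, **{actor: False})
    return not outcome(counterfactual)      # does removing X's action undo it?

# Crude model of the John riddle: he dies of thirst if any one of the
# three sabotages leaves him without drinkable water.
def john_dies(a: Dict[str, bool]) -> bool:
    return a["poison"] or a["sand"] or a["holes"]

actions = {"poison": True, "sand": True, "holes": True}
for actor in actions:
    # Prints False for all three: no single saboteur is a
    # but-for cause, and yet John is dead.
    print(actor, is_but_for_cause(actions, john_dies, actor))
```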

I think the policy reasons underlying this approach (to guide behavior) probably apply here as well.

I can't find the exact term, but my casual understanding of game theory/mechanism design suggests that @mhpage's approach is the right one in economics too, not just law.

Someone should do an academic study on this. It's a tricky, important, and academically interesting problem. You could probably write a B.A., M.A., or even Ph.D. dissertation on it, or an academic paper.

In fact, I think we should encourage more research in academia on topics that would be valuable to EA. I've been thinking a bit on this recently.

I think that if you claim credit for doing good, you are not a pure altruist. Doing good is always a collaborative effort. I have seen good programs fail when people started to argue over who did the most "good". There is no personal good, and if you expect a reward for your good deeds, it is just a deal on the "goods market". Of course, thinking about how much good I have helped to create raises my self-esteem and may help me navigate future decisions.

There is certainly an important difference here between cause and blameworthiness. In law, as in many cases in philosophy when one wants to make a moral appraisal, we are interested in more than mere causation. Culpability is often an additional requirement, and that can make things murky. Further, even more murkiness is introduced by the presence of moral luck, which some have argued might be highly intractable. However, for the purposes of EA assessments, I think basic counterfactual causation is sufficient. In precise terms, I think it is enough for a cause to be merely necessary, if not sufficient, for us to evaluate it as being useful. Let's say that I convince Smith to donate a million dollars to an effective charity. It is certainly true that such a donation wouldn't have been possible if Smith hadn't earned that million dollars, but it is also true that the donation wouldn't have occurred had I not made my pitch to Smith. We can say both factors (me pitching Smith and Smith earning the money) are necessary but not sufficient, assuming it is in fact true that Smith wouldn't have made a similar donation without me. This does open up the possibility that both Smith and I can say that we "caused" a million dollars to be donated to an effective charity, but I'm not sure that's actually problematic. Without either one of our actions occurring, the donation wouldn't have happened.

When extrapolating this concept over the course of multiple cause/effect cycles, however, I believe there may be an epistemic problem. Using Singer's vegetarian in the cafeteria example, it is very hard to know how many of the subsequent vegetarians would have come to accept vegetarianism through other channels. We might not even be able to attribute all of Singer's vegetarianism to this one individual, as it seems like Singer might be the sort of person who would have at some point accepted vegetarianism anyway. In other words, even playing the counterfactual game, it isn't clear what the otherwise outcome might have been. This seems to be a problem that we would face in any large set of cause/effect cycles.

Thanks for the post, Michael. I have been musing on this when considering my own effectiveness. I ended up deciding that I don't actually have a problem with both the donor and the influencer claiming they caused $100 to be donated. (But I reckon part of this is because it makes me feel more effective.)

I was amused by Peter Singer suggesting that the vegetarian he sat next to in a university dining hall once upon a time might be able to claim all the good from all the people Singer influenced to be vegetarian, and all the people those people influenced to be vegetarian.... so where might it stop!

“But if both parties are claiming full responsibility for causing $100 to be donated, shouldn’t that imply that $200 was donated?”

I guess it's useful to distinguish moral praiseworthiness from counterfactual effect. Both the donor and the person influencing the donor ("the influencer") did something morally praiseworthy in this case. I also think that the fact that the donor was influenced by the influencer doesn't reduce the praiseworthiness of the donor's action. This means that the total amount of praiseworthiness is greater than in a case where the donor decides to donate without having been influenced by anyone.

I don't see anything strange about that. If ten people together kill a person, they aren't each merely 10% as blameworthy as a single murderer. One could even argue that each of them is as blameworthy as a single murderer would have been - meaning that the total amount of blameworthiness is 10x as high in the ten-murderers case as in the single-murderer case. There isn't a fixed amount of praiseworthiness or blameworthiness to be distributed among actors for each action with a given positive or negative outcome.

If you're trying to do the most good you can, however, you should - at least from a consequentialist perspective - think not of who's praiseworthy and to what degree, but of how your actions maximise positive outcomes in the world. And it seems to me that this means you should act on your best guesses of how others would act, given your actions.

Suppose that you choose between the following two actions:

a) Spend one hour to (in expectation) influence one person (who otherwise would not have donated) to donate $100.

b) Spend one hour working, earning $50, and donate the $50.

Since your actions lead to $100 being donated in case a) (compared to just $50 in case b)), you clearly should choose a), even though in a) you are not the only person who has performed a morally praiseworthy action, whereas in b) you are.

Things might become more complicated when you're comparing different EA organisations, each of which may have had an impact on donors' decisions to donate, but I'll stop here for now.

So I guess there's a distinction between cause and praiseworthiness or blameworthiness.

I agree with your point about acting on the margin to maximise good done, but this is more targeted at the measurement of effectiveness. For example, when an EA organisation claims to have caused $1,000 to be donated to effective charities for each $100 of operational costs, does that mean what we think it does? I suppose in that sense it comes down purely to counterfactuality, which is a different application of causation than the one used in law.

But even so: if an EA org spends $1,000 to cause person X to start another EA org, which in turn spends $1,000 of further funding to create 10 more EAs, who each donate $1,000, do we say the return on investment is 10:1 or 5:1? How would we split that between the two orgs for the sake of reporting and measuring effectiveness?
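To make the ambiguity concrete, here is the arithmetic spelled out (a minimal sketch using the hypothetical figures above):

```python
spend_org_a = 1_000      # org A's spend to get org B started
spend_org_b = 1_000      # org B's further funding to create 10 EAs
donations = 10 * 1_000   # the 10 EAs each donate $1,000

# If each org claims the full downstream result as its own impact:
roi_claimed_by_each = donations / spend_org_a         # 10:1 for A (and likewise for B)

# If we instead pool the costs of the whole causal chain:
roi_pooled = donations / (spend_org_a + spend_org_b)  # 5:1

print(roi_claimed_by_each, roi_pooled)  # 10.0 5.0
```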

I think we have to consider two things here:

a) What are the relevant units to which we attribute impact?

b) Why is it relevant to measure the impact of past performance in the first place?

To clarify these questions, consider the following example. Suppose that country C has 349 members of parliament (MPs), elected in a UK-style first-past-the-post system. Now compare two scenarios:

1) One party, A, gets 175 MPs, whereas the other, B, gets 174 MPs.

2) A gets 176 MPs and B gets 173 MPs.

Now suppose that a win for A is worth 1 trillion dollars to C. Then in scenario 1), each A MP can claim to have caused a 1 trillion dollar gain for C, in the sense that if they hadn't won their race against their specific B opponent, C would have lost 1 trillion dollars. In scenario 2), however, none of them can claim to have had any impact at all in that sense, because even if they had lost their race, A would still have won.

Note, firstly, that this analysis depends heavily on the unit of impact attribution being the individual MP. Suppose instead that all but one of the constituencies had two MPs rather than one. In that case, the appropriate unit of impact attribution becomes these pairs of MPs, and in scenario 2) each pair of A MPs actually did cause a 1 trillion dollar gain (since without them, A would have lost 174-175).
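Here is a minimal sketch of that pivotality test (the seat counts are from the scenarios above; the value is normalised to 1, standing in for the trillion dollars):

```python
def counterfactual_impact(seats_a, seats_b, unit_size, value=1.0):
    """Impact attributed to one unit of A's MPs: the swing in value
    if that unit's races had gone to B instead."""
    def a_wins(a, b):
        return a > b
    actual = value if a_wins(seats_a, seats_b) else 0.0
    counterfactual = value if a_wins(seats_a - unit_size,
                                     seats_b + unit_size) else 0.0
    return actual - counterfactual

print(counterfactual_impact(175, 174, unit_size=1))  # 1.0: every MP is pivotal
print(counterfactual_impact(176, 173, unit_size=1))  # 0.0: no single MP is pivotal
print(counterfactual_impact(176, 173, unit_size=2))  # 1.0: every pair is pivotal
```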

I think it's not always obvious what the appropriate unit of impact attribution is (even though it might have been in the scenarios involving MPs). Suppose, e.g., that an EA org, according to a certain analysis, has caused $100,000 to be moved to cost-effective charities, but that this is all due to a certain individual. Why are we then to attribute this impact to the EA org rather than to the individual? (This question obviously becomes all the more important if the individual has subsequently left the organisation.) Or conversely, why are we to attribute it to the EA org rather than to the EA movement as a whole? (It might be that some organisations grow more thanks to the general momentum of the EA movement than thanks to any effort of their own.) Why is the EA org the appropriate level of analysis? (I'm not saying it isn't, but it is something that needs to be explicitly argued for.)

Let us turn to question b), and grant, for the sake of the argument, that the individual MP is the appropriate unit of impact attribution. This means that in 1), each individual MP had an enormous impact, whereas in 2), they had no impact whatsoever. Seemingly, we have very strong reasons to donate to each individual MP in 1), but very weak reasons in 2). Can this be right, given how similar the cases are?

No, it can't, the reason being that 2) gives you approximately as strong reasons as 1) to believe that the next election will be a close race as well. For, to answer question b), the ultimate point of this whole impact exercise is not to evaluate past performance, but to learn how to maximise the expected impact of future donations. In a sense, the MPs were "morally lucky" in 1), and we shouldn't take luck into account when thinking about where to donate (since donations should be forward-looking, and there is by definition no reason to believe that we will continue to be lucky).

I think that at least part of the answer to the first question lies in the answer to this second question. To the extent that we want to assess past impact at all, we should choose a level of impact attribution that allows us to assess the expected impact of future donations accurately.

You would say that the value of each action is the difference between what happened, and what would have happened had the action not been taken. I'm not sure I follow the chain of events you've outlined, but the concept should be straightforward.
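In code, that definition is just a difference (a trivial sketch with made-up numbers):

```python
def counterfactual_value(outcome_with_action, outcome_without_action):
    # The value of an action: what happened minus what would
    # have happened had the action not been taken.
    return outcome_with_action - outcome_without_action

# e.g. a pitch that moves a donor from giving $0 to giving $100:
print(counterfactual_value(100, 0))  # 100
```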

wow...awesome post

This feels like a "if a tree falls in the forest and no one is nearby, does it make a sound?" type debate. "Blame" and "credit" are social constructions, not objective features of reality discoverable through experiment, and in principle we could define them however we wanted.

I think the right perspective here is a behavioral psychology one. Blame & credit are useful constructions insofar as they reinforce (and, counterfactually, motivate) particular behaviors. For example, if Mary receives credit for donating $100, she will feel better about the donation and more motivated to donate in the future--to society's benefit. If Joe makes a good bet in a poker game, but ends up losing the round anyway, and his poker teammates blame him for the loss, he will feel punished for making what was fundamentally a good bet and not make bets like that one in the future--to his team's harm.

So ultimately the question of where to assign credit or blame is highly situation-dependent, and the most important input might be how others will see & learn from how the behavior is regarded. I might blame John's three comrades equally for his death, because they all made an effort to kill him and I want to discourage efforts to kill people equally regardless of whether they happen to work or not. I may even assign all 3 comrades the "full blame" for John's death, because blame, being a social construct, is not a conserved quantity.

Let's take the donating-$100 example again. Let's say I can cause an additional $100 worth of donations to Givewell by donating $x to Giving What We Can, and say the EA community assigns me $100 worth of credit for achieving this. If I receive $100 worth of credit for either making or encouraging a $100 donation, then I will be motivated to encourage donations whenever x < 100, and to make donations directly whenever x > 100.

This approach produces an efficient outcome for EA. Suppose x = $80; that is, donating $80 to Giving What We Can results in an additional $100 for Givewell. The net effect of my $80 donation is then that $100 gets donated to Givewell. But if x = $120, the movement would be better off had I donated the $120 to Givewell directly instead of using it to purchase $100 worth of donations.
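For what it's worth, here is that decision rule as a sketch (the $100-per-$x leverage of Giving What We Can is the assumption made above, not a real figure):

```python
def best_action(x, leverage=100.0):
    """Compare dollars reaching Givewell per dollar I spend: $x to
    Giving What We Can is assumed to produce an extra $100 for
    Givewell, while donating directly is 1:1."""
    encourage_ratio = leverage / x   # e.g. 100 / 80 = 1.25
    if encourage_ratio > 1.0:
        return "encourage via Giving What We Can"
    return "donate directly"

print(best_action(80))   # encourage: $80 buys $100 of donations
print(best_action(120))  # donate directly: $120 would buy only $100
```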

But there are complicated second-order effects. Suppose the person who donates $100 as a result of my $x donation to Giving What We Can notices that since x < 100, they too are best off donating their $100 to Giving What We Can. Done on a wide scale, this has the potential to change the value of x in complicated ways--you could probably figure out the new value of x using some calculus, but it's getting late. There's also the effect of increasing the speed of movement growth, which might be a bad thing, or maybe the person I encourage to donate $100 later learns that I was purchasing credit more efficiently than they were and feels like a sucker. Or maybe people outside the movement notice this "credit inflation" aspect of EA and discount the movement because of it. (Similar to how we discount trophies from sports competitions if every player gets their own "participation trophy".) There's also the time value of money--if my $80 donation to GWWC takes 20 years to manifest as $100 more for Givewell, then depending on the rate of return I'd get through investing the $80, I might be better off investing it and donating the resulting capital in 20 years. To decide between this option and direct donation I'd need to know Givewell's discount rate. Etc.

Some interesting points, John, and I agree that blame can be defined to mean what we want it to mean for a given purpose. But this was more directed at the measurement of impact by EA meta-orgs and individuals. If an EA org claims to have directed $200,000 of donations to effective charities for a spend of $100,000, the cost-benefit ratio would be 1:2. But I'm not convinced that this is the whole picture, and if we're not measuring this kind of thing correctly, we could be spending $100,000 to raise only $99,999 counterfactually without realising it.

One example: I rarely see a cost-benefit analysis of where the money might have gone otherwise, even when the estimate is counterfactual. Maybe it would have gone to a pretty good charity instead of a great one, in which case we shouldn't claim the full value. And maybe that $1,000 donation to AMF would have happened anyway. And all sorts of other complicating events.

I'm just making the point that things are, I believe, more complicated than we generally make them out to be.

Thanks for starting a discussion on this topic. I’ve been worrying about it too, and summarized my worries in this comment last month.

My worry is that as soon as we try to attribute impact to individual agents – something that strikes me as at least somewhat artificial, as you, Michael, and others in this thread have laid out – this will make it very hard to come up with an attribution system that does not create perverse incentives.

This is aggravated by many people’s inclination toward competitiveness, EA’s focus on prioritization, dependence on donors, and maybe interest in selling one’s impact to fund further operation (at least in cases of scarcity of funding).

“Prioritization is centrally about comparison, so especially charities that are dependent on funding from donors who donate to the best charity are highly incentivized to think in terms of comparison. If a single charity thinks in terms of comparison (defects), then it would be self-destructive of the other charities not to defect as well. This does not hold for repeated prisoner’s dilemmas, but I can’t see any way to ever get out of such a situation in order to repeat it. Here prioritization and attribution in combination produce perverse incentives if flow-through effects of cooperation are not made an explicit part of the prioritization (I wrote about this here).”

Nonprofit with Balls has written about one manifestation of this problem and warns of the donor hoarding and creation of shadow missions that it leads to. He also points out that evolutionary pressures among charities favor those that succumb to these perverse incentives.

Does anyone have an idea of how to estimate how bad this problem is?
