This is a special post for quick takes by PabloAMC 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
My donation strategy:

It seems that we have some great donation opportunities, at least in some cases such as AI Safety. This has made me wonder which donation strategies I prefer. Here are some thoughts, also influenced by Zvi Mowshowitz's writing:
Attracting non-EA funding to EA causes: I prefer donating to opportunities that may bring external, non-EA funding to causes that EA deems relevant.
Expanding EA funding and widening career paths: Similarly, if possible, fund opportunities that could increase the funds or skills available to the community in the future. For this reason, I am highly supportive of Ambitious Impact's project to create on-ramps for impactful earning-to-give careers, for instance. This is in contrast to incubating new charities (Charity Entrepreneurship), which is slightly harder to justify unless you have strong reasons to believe the new charity will be more cost-effective than typical charities. I am a bit wary that the uncertainty may be too large to clearly distinguish between charities at the frontier.
Fill the gap left by others: Aim to fund medium-sized charities in roughly their 2nd to 5th years of life: they are no longer small and young enough to rely on Charity Entrepreneurship seed funding, but they are also not yet large enough to get funding from large funders. One could similarly argue that you should fund causes that non-EAs are less likely to fund (e.g. animal welfare), though I find this argument more compelling if non-EA funding were close to fully covering the other causes (e.g. global health), or if support for the former (animal welfare) depended entirely on the EA community.
Value stability for people running charities: By default, and unless there are clearly better opportunities, keep donating to the same charities you have supported before, and do so with unrestricted funds. This gives charities some stability, which they very much welcome. Also, do not push too hard on the marginal cost-effectiveness of donations, because that creates poor incentives.
Favour hits-based strategies and local knowledge: Favour hits-based strategies, particularly those in which you benefit from local knowledge of opportunities that may not be visible to others in the community.
One example of a charity I will support is ARMoR, which fits well with points 1 and 3. I am also excited about local-knowledge opportunities in the AI Safety ecosystem. Otherwise, I am particularly optimistic about the work of Apollo Research on evaluations and Redwood Research on AI control, as I believe both are key enablers of more robust AI governance.
Contra Vasco Grilo on "GiveWell may have made 1 billion dollars of harmful grants, and Ambitious Impact incubated 8 harmful organisations via increasing factory-farming?"

The post above explores how, under a hedonistic utilitarian moral framework, the meat-eater problem may make GiveWell grants or AIM charities net-negative. The post seems to argue that, on expected-value grounds, one should let children die of malaria because they could end up eating chicken, for example.
I find this argument morally repugnant and want to highlight it. Using some of the words from my reply, let me quote William MacAskill's comments on "What We Owe the Future" and his reflections on FTX (https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx):
A clear-thinking EA should strongly oppose “ends justify the means” reasoning.
First, naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct.
Second, plausibly it is wrong to do harm even when doing so will bring about the best outcome.
Finally, let me say the post itself seems to pit animal welfare against global poverty causes, which I find divisive and probably counterproductive.
I downvoted this post because it is not representative of the values I believe EA should strive for. It may have been sufficient to show disagreement, but if someone visits the forum for the first time and sees this post with many upvotes, their impression will be negative and they may not become engaged with the community. If a reporter reads this on the forum, they will cover both EA and animal welfare negatively. And if someone considering taking the 10% pledge or changing their career to support either animal welfare or global health reads this, they will be less likely to do so.
I am sorry, but I will strongly oppose the "ends justify the means" argument put forward by this post.
Vasco has come to a certain conclusion on what the best action is, given a potential trade-off between the impact of global health initiatives and animal welfare.
I think it is reasonable to disagree, but I think it is bad for the norms of the forum and unnecessarily combative for us to describe moral views we disagree with as "morally repugnant". I think this is particularly unfair if we do not elaborate on why we either:
a) think this trade-off does not exist, or is very small.
or
b) disagree.
For example, global health advocates could similarly argue that EA pits direct cash transfers against interventions like anti-malaria bednets, which is divisive and counterproductive, and that EA forum posts doing this will create a negative impression of EA on reporters and potential 10% pledgers.
In my view, discussing difficult, morally uncomfortable trade-offs between prioritising different, important causes is a key role of the EA forum - whether within cause areas (should we let children die of cancer to prioritise tackling malaria / should we let cows be abused to prioritise reducing battery cage farming of hens), or across cause areas. We should discuss these questions openly rather than avoiding them to help us make better moral decisions.
I think it would also be bad if we stopped discussing these questions openly for fear of criticism from reporters - this would bias EA towards preserving the world's moral status quo enforced by the media.
Also, traditionally, criticism of "ends justifies the means" reasoning tends to object to arguments which encourage us to actively break deontological rules (like laws) to pursue some aggregate increase in utility, rather than arguments to prioritise one approach to improving utility over the other (which causes harm by omission rather than active harm), eg - prioritising animal welfare over global health, or vice-versa. With a more expansive use of the term, critics could reject GiveWell style charity comparison as "ends justifies the means reasoning" which argues one should let some children die of tetanus to save other children from malaria.
For example, global health advocates could similarly argue that EA pits direct cash transfers against interventions like anti-malaria bednets, which is divisive and counterproductive, and that EA forum posts doing this will create a negative impression of EA on reporters and potential 10% pledgers.
Hi there, let me try to explain myself a bit. There is a difference between what the post does and what you mention. The post is not saying that you should prioritize animal welfare over global health (which I would find quite reasonable and totally acceptable); I would find that useful and constructive. Instead, the post claims that you should simply not donate the money if you were considering antimalarial nets. Or, in other words, that you should let children die because of the chicken they might eat.
Also, traditionally, criticism of "ends justifies the means" reasoning tends to object to arguments which encourage us to actively break deontological rules (like laws) to pursue some aggregate increase in utility, rather than arguments to prioritise one approach to improving utility over the other (which causes harm by omission rather than active harm), eg - prioritising animal welfare over global health, or vice-versa.
In fact, the deontological rule he is breaking seems clear to me: do not let innocent children die because of what their statistical reference class says they will do. And yes, they are still innocent. To me, any moral theory that dictates that innocent children should die is probably breaking apart at that point. Instead, he bites the bullet and assumes that the ends (preventing suffering) justify the means (letting innocent children die). I am sorry to say that I find that morally repugnant.
Also, let me say: I have no issue with discussing the implications of a given moral theory, even if they look terrible. But I think this should be a means to test and set limits on your moral theory, not a way to justify this sort of opinion. Let me re-emphasize that my quarrel has nothing to do with cause prioritization or cost-effectiveness. Instead, I have a strong sense that innocent children should not be left to die. If my moral theory disagrees with that strong ethical sense, it is the strong ethical sense that should guide the moral theory, and not the other way around.
To me, any moral theory that dictates that innocent children should die is probably breaking apart at that point. Instead, he bites the bullet and assumes that the ends (preventing suffering) justify the means (letting innocent children die). I am sorry to say that I find that morally repugnant. [...] Instead, I have a strong sense that innocent children should not be left to die. If my moral theory disagrees with that strong ethical sense, it is the strong ethical sense that should guide the moral theory, and not the other way around.
Hmm, but we are all letting children die all the time from not donating. I am donating just 15% of my income; I could certainly donate 20-30% and save additional lives that way. I think my failing to donate 20-30% is morally imperfect, but I wouldn't call it repugnant. What is it that makes "I won't donate to save lives because I think it creates a lot of animal suffering" repugnant but "I won't donate to save lives because I prefer to have more income for myself" not?
What is it that makes "I won't donate to save lives because I think it creates a lot of animal suffering" repugnant but "I won't donate to save lives because I prefer to have more income for myself" not?
I think actively advocating for others to not save children's lives is a step beyond a mere decision not to donate. I read it this way:
Action: Write EA Forum post criticizing lifesaving as net-negative activity.
Implied Theory of Impact: Readers decide not to donate to GiveWell et al. --> Fewer lives get saved --> Less meat gets eaten --> Fewer animals suffer.
If I'm reading the theory of impact correctly, innocent children dying is a key part of the intended mechanism of action (MoA) -- not a side effect (as it is with "prefer to have more income for myself").
There are obviously some cruxes here -- including whether there is a moral difference between actively advocating for others not to hand out bednets vs. passively choosing to donate elsewhere / spend on oneself, and whether there is a moral difference between a bad thing being part of the intended MoA vs. a side effect. I would answer yes to both, but I have lower consequentialist representation in my moral parliament than many people here.
Even if one would answer no to both cruxes, I submit that "no endorsing MoAs that involve the death of innocent people" is an important set of side rails for the EA movement. I think advocacy that saving the lives of children is net-negative is outside of those rails. For those who might not agree, I'm curious where they would put the rails (or whether they disagree with the idea that there should be rails).
Thanks, that is a useful distinction, although I would guess Vasco would prefer to frame the theory of impact as "find out whether donating to GiveWell is net positive -> help people make donation choices that promote welfare better" or something like that. I buy @Richard Y Chappell🔸's take that it is really bad to discourage others from effective giving (at least when it's done carelessly/negligently), but imo Vasco was not setting out to discourage effective giving, or it doesn't seem like that to me. He is -- I'm guessing -- cooperatively seeking to help effective givers and others make choices that better promote welfare, which they are presumably interested in doing.
There are obviously some cruxes here -- including whether there is a moral difference between actively advocating for others not to hand out bednets vs. passively choosing to donate elsewhere / spend on oneself, and whether there is a moral difference between a bad thing being part of the intended MoA vs. a side effect. I would answer yes to both, but I have lower consequentialist representation in my moral parliament than many people here.
Yes, I personally lean towards thinking the act-omission difference doesn't matter (except maybe as a useful heuristic sometimes).
As for whether the harm to humans is incidental-but-necessary or part-of-the-mechanism-and-necessary, I'm not sure what difference it makes if the outcomes are identical? Maybe the difference is that, when the harm to humans is part-of-the-mechanism-and-necessary, you may suspect that it's indicative of a bad moral attitude. But I think the attitude behind "I won't donate to save lives because I think it creates a lot of animal suffering" is clearly better (since it is concerned with promoting welfare) than the attitude behind "I won't donate to save lives because I prefer to have more income for myself" (which is not).
Even if one would answer no to both cruxes, I submit that "no endorsing MoAs that involve the death of innocent people" is an important set of side rails for the EA movement. I think advocacy that saving the lives of children is net-negative is outside of those rails. For those who might not agree, I'm curious where they would put the rails (or whether they disagree with the idea that there should be rails).
I do not think it is good to create taboos around this question. Like, does that mean we shouldn't post anything that can be construed as concluding that it's net harmful to donate to GiveWell charities? If so, that would make it much harder to criticise GiveWell and find out what the truth is. What if donating to GiveWell charities really is harmful? Shouldn't we want to know and find out?
I do not think it is good to create taboos around this question. Like, does that mean we shouldn't post anything that can be construed as concluding that it's net harmful to donate to GiveWell charities? If so, that would make it much harder to criticise GiveWell and find out what the truth is. What if donating to GiveWell charities really is harmful? Shouldn't we want to know and find out?
The taboo would be around advocacy of the view that "it is better for the world for innocent group X of people not to exist." Here, innocent group X would be under-5s in developing countries who are/would be saved by GiveWell interventions. That certain criticisms of GiveWell couldn't be made without breaking the taboo would be a collateral effect rather than the intent, but it's very hard to avoid over-inclusiveness in a taboo.
There have been social movements that assert that "it is better for the world for innocent group X of people not to exist" and encourage people to make legal, non-violent decisions premised on that belief. But I think the base rate of those social movements going well is low (and it may be ~zero). Based on that history and experience, I would need to see a very compelling argument to convince me that going down that path was a good idea here. I don't see that here; in particular, I think advocacy of the reader donating a share of their charitable budget to animal-welfare orgs to offset any potential negative AW effects of the lifesaving work they fund is considerably less problematic.
Relatedly, I also don't see things going well for EA if it is seen as acceptable for each of us to post our list of group X and encourage others to not pull members of group X out of a drowning pond even if we could do so costlessly or nearly so. Out of respect for Forum norms, I'm not going to speculate on who other readers' Group Xs might include, but I can think of several off the top of my head for whom one could make a plausible net-negative argument, all of whom would be less morally objectionable to include on the list than toddlers....
To clarify, I think I'm ok with having a taboo on advocacy of "it is better for the world for innocent group X of people not to exist", since that seems like the kind of naive utilitarianism we should definitely avoid. I'm just against a taboo on asking or trying to better understand whether "it is better for the world for innocent group X of people not to exist" is true or not. I don't think Vasco was engaging in advocacy; my impression was that he was trying to do the latter, while expressing a lot of uncertainty.
I'd say that it's a (putative) instance of adversarial ethics rather than "ends justify the means" reasoning (in the usual sense of violating deontic constraints).
Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?
OTOH, I think the meat-eater problem is misguided anyway, so another possibility is just that mistakenly urging against saving innocent people's lives is especially bad. I guess I do think the moral risk here is sufficient to be extra wary about how one expresses concerns like the meat-eater problem. Like Jason, I think it's much better to encourage AW offsets than to discourage GHD life-saving.
(Offsetting the potential downsides from helping others seems like a nice general solution to the problem of adversarial ethics, even if it isn't strictly optimal.)
So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?
Trolley problems are sufficiently abstract -- and presented in the context of an extraordinary set of circumstances -- that they are less likely to trigger some of the concerns (psychological or otherwise) triggered by the present case. In contrast, lifesaving activity is pretty common -- it's hard to estimate how many times the median person would have died if most people did not engage in lifesaving action, but I imagine it is relatively significant.
If I am in mortal danger, I want other people to save my life (and the lives of my wife and child). I do not want other people deciding whether I get medical assistance against a deadly infectious disease based on their personal assessment of whether saving my life would be net-positive for the world. That's true whether the assessment would be based on assumptions about people like me at a population level, or about my personal value-add / value-subtract in the decider's eyes. If I have that expectation of other people, but don't honor the resulting implied social contract in return, that would seem rather hypocritical of me. And if I'm going to honor the deal with fellow Americans (mostly white), and not honor it with young children in Africa, that makes me rather uncomfortable too for presumably obvious reasons.
We sometimes talk about demandingness in EA -- a theory under which I would need to encourage people not to save myself, my wife, and my son if they concluded our reference class (upper-middle class Americans, likely) was net negative for the world is simply too demanding for me and likely for 99.9% of the population too.
Finally, I'm skeptical that human civilization could meaningfully thrive if everyone applied this kind of logic when analyzing whether to engage in lifesaving activities throughout their lives. (I don't see how it makes sense if limited to charitable endeavors.) Especially if the group whose existence was calculated as negative is as large as people who eat meat! In contrast, I don't have any concerns about societies and cultures functioning adequately depending on how people answer trolley-like problems.
So I think those kinds of considerations might well explain why the reaction is different here than the reaction to an academic problem.
I agree with most except perhaps the framing of the following paragraph.
Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?
In my opinion, the key difference is that here the bad outcome (e.g. animal suffering, but it could be any other) may happen because of decisions taken by the people you are saving. So, in a sense, it is not an externally imposed mechanism. The key insight to me is that the children always have the chance to prevent the suffering that follows: people can reason and become convinced, as I was, that this suffering is important and should be prevented. Consequently, I feel strongly against letting innocent people die in these situations. So, overall, I do not think this has to do with speciesism or cause prioritisation.
Incidentally, this echoes many cultural themes in films and books: that people can change their minds, and that they should be given the chance to. Similarly, it is a common theme that you should not kill innocent people to prevent some bad thing from happening (think of Thanos and overpopulation, or Jesus being condemned to die to prevent greater wrongdoings…). Clearly these are not strong ethical arguments, but I think they contain a grain of truth; and one should probably have a very strong bias, at the level of a taboo, against endorsing (not merely discussing) conclusions that justify letting innocent people die.
You may be interested to read some of MacAskill's older writing on the subject: https://www.lesswrong.com/posts/FCiMtrsM8mcmBtfTR/?commentId=9abk4EJXMtj72pcQu

Just wanted to copy MacAskill's comment here so people don't have to click through:
Though I was deeply troubled by the poor meater problem for some time, I've come to the conclusion that it isn't that bad (for utilitarians - I think it's much worse for non-consequentialists, though I'm not sure).
The basic idea is as follows. If I save the life of someone in the developing world, almost all the benefit I produce is through compounding effects: I speed up technological progress by a tiny margin, giving us a little bit more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I've saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I've saved consumes meat, and I speed up development of the country, which means that the country starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn't compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends only after a few centuries (when we move beyond farming).
So let's say the benefit to the person from having their life saved is N. The magnitude of the harm from increasing factory farming might be a bit more than N: maybe -10N. But the benefit from speeding up technological progress is vastly greater than that: 1000N, or something. So it's still a good thing to save someone's life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation).
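To make the back-of-the-envelope comparison above concrete, here is a minimal sketch in Python using the purely illustrative figures from MacAskill's comment (N, -10N and 1000N are placeholders, not real estimates):

```python
# Toy expected-value sketch of the argument quoted above.
# All numbers are illustrative placeholders taken from the comment, not real estimates.

N = 1.0                      # direct benefit to the person whose life is saved
farming_harm = -10 * N       # extra animal suffering (stops once factory farming ends)
long_run_benefit = 1000 * N  # compounding benefit from slightly faster progress

net_value = N + farming_harm + long_run_benefit
print(f"Net value of saving a life: {net_value:g} (in units of N)")  # 991 > 0, so still positive
```

On these toy numbers the harm term is outweighed by the long-run benefit, which is the crux of the comment; different assumptions (e.g. about how long factory farming persists) could change the sign.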
Thanks MHR! This is informative; I strongly upvoted. A few comments, though:
I find it OK to entertain the question of what the expected value of doing X or Y is as a function of their consequences, whether for longtermism or animal welfare.
I would find it very morally unappealing to refuse to save lives on the grounds of convicting people of actions they have not yet committed. E.g., if a child is drowning before you, I think it would be wrong to let her drown because she might cause animal suffering. A person can make her own decisions, and I would find it wrong to let her die because of what her statistical group does.
As I commented there: I don't think this is the kind of "ends justify the means" reasoning that MacAskill is objecting to. Vasco isn’t arguing that we should break the law. He’s just doing a fairly standard EA cause prioritization analysis. Arguing that people should not donate to global health doesn't even contradict common-sense morality because as we see from the world around us, common-sense morality holds that it's perfectly permissible to let hundreds or thousands of children die of preventable diseases. Utilitarians and other consequentialists are the ones who hold "weird" views here, because we reject the act/omission distinction in the first place.
(For my part, I try to donate in such a way that I'm net-positive from the perspective of someone like Vasco as well as global health advocates.)
Hi @Jbentham, thanks for the answer. See https://forum.effectivealtruism.org/posts/K8GJWQDZ9xYBbypD4/pabloamc-s-quick-takes?commentId=XCtGWDyNANvHDMbPj for some of the points. Specifically, the problem I have with the post is not about cause prioritization or cost-effectiveness.

Arguing that people should not donate to global health doesn't even contradict common-sense morality because as we see from the world around us, common-sense morality holds that it's perfectly permissible to let hundreds or thousands of children die of preventable diseases.
I think I disagree with this. Instead, I think most people find it hard to act on what they believe because of social norms. But I think it would be hard to find a significant percentage of people who believe that innocent children should be left to die because of what they could do.
Utilitarians and other consequentialists are the ones who hold "weird" views here, because we reject the act/omission distinction in the first place.
Probably you are somewhat right here, but I believe "letting innocent children die" is an even weirder opinion to hold.