Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).
He reframes EA concepts in a more accessible way, such as replacing "counterfactuals" with the sports acronym "VORP" (Value Over Replacement Player).
And here I was thinking hardly a soul had read my suggestion of this framing ...
Thanks for writing this, it's very interesting.
Instead, I might describe myself as a preferentialist or subjectivist about what matters, so that what's better is just what's preferred, or what would be better according to our preferences, attitudes or ways of caring, in general.
This sounds similar to Christine Korsgaard's (Kantian) view on value, where things only matter because they matter to sentient beings (people, to Kant). I think I was primed to notice this because I remember you had some great comments on my interview with her from four years ago.
Quoting her:
Utilitarians think that the value of people and animals derives from the value of the states they are capable of: pleasure and pain, satisfaction and frustration. In fact, in a way it is worse: In utilitarianism, people and animals don't really matter at all; they are just the place where the valuable things happen. That's why the boundaries between them do not matter. Kantians think that the value of the states derives from the value of the people and animals. In a Kantian theory, your pleasures and pains matter because you matter, you are an "end in yourself" and your pains and pleasures matter to you.
I guess "utilitarianism" above could be replaced with "hedonism" etc. and it would sort of match your writing that hedonism etc. is "guilty [...] of valuing things in ways that don't match how we care about things". Anyway, she discusses this view in much greater detail in Fellow Creatures.
See also St. Jules, 2024 and Roelofs, 2022 (pdf) for more on ways of caring and moral patienthood, using different terminology.
Fyi, the latter two of these links are broken.
Thanks!
The correct "moral fix" isn't "don't get mail," it's "don't kick dogs." Do you share this intuition of non-responsibility?
I'm also not a philosopher, but I guess it depends on what your options are. If your only way of influencing the situation is by choosing whether or not to get mail, and the dog-kicking is entirely predictable, you have to factor the dog-kicking into the decision. Of course the mailman is ultimately much more responsible for the dog kicking than you are, in the sense that your action is one you typically wouldn't expect to cause any harm, whereas his action will always predictably cause harm. (In the real world, obviously there are likely many ways of getting the mailman to stop kicking dogs that are better than giving up mail.)
I'm not sure whether it makes sense to think of blameworthy actions as wrong by definition. It probably makes more sense to tie blameworthiness to intentions, and in that case an action could be blameworthy even though it has good consequences, and even though endorsing it leads to good consequences. Anyway, if so, obviously the mailman is also much more blameworthy than you, given that he presumably had ill intentions when kicking the dog, whereas you had no ill intentions when getting your mail delivered.
To clarify, I think I'm ok with having a taboo on advocacy against "it is better for the world for innocent group X of people not to exist", since that seems like the kind of naive utilitarianism we should definitely avoid. I'm just against a taboo on asking or trying to better understand whether "it is better for the world for innocent group X of people not to exist" is true or not. I don't think Vasco was engaging in advocacy, my impression was that he was trying to do the latter, while expressing a lot of uncertainty.
Thanks, that is a useful distinction. Although I would guess Vasco would prefer to frame the theory of impact as "find out whether donating to GiveWell is net positive -> help people make donation choices that promote welfare better" or something like that. I buy @Richard Y Chappell's take that it is really bad to discourage others from effective giving (at least when it's done carelessly/negligently), but imo Vasco was not setting out to discourage effective giving, or it doesn't seem like that to me. He is -- I'm guessing -- cooperatively seeking to help effective givers and others make choices that better promote welfare, which they are presumably interested in doing.
There are obviously some cruxes here -- including whether there is a moral difference between actively advocating for others not to hand out bednets vs. passively choosing to donate elsewhere / spend on oneself, and whether there is a moral difference between a bad thing being part of the intended MoA vs. a side effect. I would answer yes to both, but I have lower consequentialist representation in my moral parliament than many people here.
Yes, I personally lean towards thinking the act-omission difference doesn't matter (except maybe as a useful heuristic sometimes).
As for whether the harm to humans is incidental-but-necessary or part-of-the-mechanism-and-necessary, I'm not sure what difference it makes if the outcomes are identical? Maybe the difference is that, when the harm to humans is part-of-the-mechanism-and-necessary, you may suspect that it's indicative of a bad moral attitude. But I think the attitude behind "I won't donate to save lives because I think it creates a lot of animal suffering" is clearly better (since it is concerned with promoting welfare) than the attitude behind "I won't donate to save lives because I prefer to have more income for myself" (which is not).
Even if one would answer no to both cruxes, I submit that "no endorsing MoAs that involve the death of innocent people" is an important set of side rails for the EA movement. I think advocacy that saving the lives of children is net-negative is outside of those rails. For those who might not agree, I'm curious where they would put the rails (or whether they disagree with the idea that there should be rails).
I do not think it is good to create taboos around this question. Like, does that mean we shouldn't post anything that can be construed as concluding that it's net harmful to donate to GiveWell charities? If so, that would make it much harder to criticise GiveWell and find out what the truth is. What if donating to GiveWell charities really is harmful? Shouldn't we want to know and find out?
To me, any moral theory that dictates that innocent children should die is probably breaking apart at that point. Instead he bites the bullet and assumes that the means (preventing suffering) justifies the ends (letting innocent children die). I am sorry to say that I find that morally repugnant. [...] Instead, I have a strong sense that innocent children should not be let die. If my moral theory disagrees with the strong ethical sense, it is the strong ethical sense that should guide the moral theory, and not the other way around.
Hmm, but we are all letting children die all the time from not donating. I am donating just 15% of my income; I could certainly donate 20-30% and save additional lives that way. I think my failing to donate 20-30% is morally imperfect, but I wouldn't call it repugnant. What is it that makes "I won't donate to save lives because I think it creates a lot of animal suffering" repugnant but "I won't donate to save lives because I prefer to have more income for myself" not?
Thanks, that's encouraging! To clarify, my understanding is that beef cattle are naturally polled much more frequently than dairy cattle, since selectively breeding dairy cattle to be hornless affects dairy production negatively. If I understand correctly, that's because the horn-growing gene is close to genes important for dairy production. And that (the hornless dairy cow problem) seems to be what people are trying to solve with gene editing.
Thanks. I take you to say roughly that you have certain core beliefs that you're unwilling to compromise on, even if you can't justify those beliefs philosophically. And also that you think it's better to be upfront about that than invent justifications that aren't really load-bearing for you. (Let me know if that's a misrepresentation.)
I think it's virtuous that you're honest about why you disagree ("I place much lower weight on animals") and I think that's valuable for discourse in that it shows where the disagreement lies. I don't have any objection to that. But I also think that saying you just believe that and can't/won't justify it ("I cannot give a tight philosophical defence of that view, but I am more committed to it than I am to giving tight philosophical defences of views") is not particularly valuable for discourse. It doesn't create any opening for productive engagement or movement toward consensus. I don't think it's harmful exactly, I just think more openness to examining whether the intuition withstands scrutiny would be more valuable.
(That is a question about discourse. I think there's also a separate question about the soundness of the decision procedure you described in your original comment. I think it's unsound, and therefore instrumentally irrational, but I'm not the rationality police so I won't get into that.)
I'm registering a forecast: Within a few months we'll see a new Vasco Grilo post BOTECing that insecticide-treated bednets are net-negative expected value due to mosquito welfare. Looking forward to it. :)