Richard Y Chappell šŸ”ø

Associate Professor of Philosophy @ University of Miami
5804 karma · Joined
www.goodthoughts.blog/
Interests:
Bioethics

Bio

Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog

šŸ”ø10% Pledge #54 with GivingWhatWeCan.org

Comments (345)

I'm happy for folks to read the article and judge for themselves. The author briefly references some reasonable ideas in the course of building up a fundamentally unreasonable thesis: that "The problem [with effective altruism] is that we've stretched optimization beyond its optimal limits," and that sometimes donating to the local homeless over EA charities will better serve "the real value you hold dear [that is, helping people]."

They most clearly exhibit the fallacy I warn against ("some tradeoffs are unclear, therefore you might as well be an ineffective altruist") in this passage criticizing attempted optimization:

In your case, you're trying to optimize how much you help others, and you believe that means focusing on the neediest. But "neediest" according to what definition of needy? You could assume that financial need is the only type that counts, so you should focus first on lifting everyone out of extreme poverty, and only then help people in less dire straits. But are you sure that only the brute poverty level matters?

... if you want to optimize, you need to be able to run an apples-to-apples comparison -- to calculate how much good different things do in a single currency, so you can pick the best option. But because helping people isn't reducible to one thing -- it's lots of incommensurable things, and how to rank them depends on each person's subjective philosophical assumptions -- trying to optimize in this domain will mean you have to artificially simplify the problem. You have to pretend there's no such thing as oranges, only apples.

I also think their discussion of integrity is fundamentally confused:

It sounds like that's what you're feeling when you pass a person experiencing homelessness and ignore them. Ignoring them makes you feel bad because it alienates you from the part of you that is moved by this person's suffering -- that sees the orange but is being told there are only apples. That core part of you is no less valuable than the optimizing part, which you liken to your "brain." It's not dumber or more irrational. It's the part that cares deeply about helping people, and without it, the optimizing part would have nothing to optimize!

It's not apples and oranges. It's just helping people you can see vs. helping people who are out of sight, and so less emotionally engaging. Those shouldn't be different values: as the author themselves says at the start, there's just the one value of helping people, and different strategies for how to achieve it. What they don't acknowledge is that the strategy of prioritizing more salient, emotionally engaging people is less effective at helping, even if it's more effective at indulging your emotional needs. Calling the emotional bias "integrity" is not philosophically helpful or illuminating. It's muddled thinking, running cover for blatant bias.

Fair question! I don't know the answer. But I'd be surprised if the two came apart too sharply in this case (even though, as you rightly note, they can drastically diverge in principle). My sense is that GiveWell aims to recommend relatively "safe" bets, rather than a "hits-based" EV-maximizing approach. (I think it's important to be transparent when recommending the latter, just because I take it many people are not in fact so comfortable with pursuing that strategy, even if I think they ought to be.)

I actually think that's fine. You can always look it up if you're interested in the details, but for the casual consumer of charity-evaluation information, the bottom-line best estimate is the info that's decision-relevant, not the uncertainty range. I think it's completely fine for people to share core info like this without simultaneously sharing all the fine print. Just like it's OK for public health experts to promote simple pro-vax messaging that doesn't include all the fine print.

(See moral misdirection for my principled account of when it is or isn't OK to leave out information.)

Absent these ranges, I see these claims repeated all over the place as if $5000 really is an objectively correct answer and not a rough estimate.

Here you just seem to be repeating the mistake of assuming that presenting a best estimate without also presenting the uncertainty range is thereby to present it as certain. I disagree with that interpretative norm. There is no "as if" being presented. That's on you.

Not sure why this got tagged as 'Community'. It's not about the community, but about applying EA principles, substantive issues in applied decision theory, and associated mistakes in the reasoning of many critics of effective altruism. (Maybe an overzealous bot didn't like the joking footnote reference to Kamala Harris's "coconut tree" line, and it got mischaracterized as political?)

Edit - fixed now, thanks mods!

My central objection to Thorstad's work on this is the failure to properly account for uncertainty. Modeling only the single most plausible scenario, and drawing dismissive conclusions about longtermist interventions on that basis alone, fails to reflect best practices for reasoning under uncertainty. (I've also raised this criticism against Schwitzgebel's negligibility argument.) You need to consider the full range of possible models and scenarios!

It's essentially fallacious to think that "plausibly incorrect modeling assumptions" undermine expected value reasoning. High expected value can still result from regions of probability space that are epistemically unlikely (or reflect "plausibly incorrect" conditions or assumptions). If there's even a 1% chance that the relevant assumptions hold, just discount the output value accordingly. Astronomical stakes are not going to be undermined by lopping off the last two zeros.
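To make the arithmetic concrete, here's a minimal worked illustration (the numbers are placeholders of my own, not figures from Thorstad or anyone else). Let $p$ be your credence that the longtermist-friendly assumptions hold, and $V$ the value at stake if they do. Assuming the remaining scenarios don't contribute large negative value, orthodox expected value reasoning gives

$$\mathbb{E}[\text{value}] \;\geq\; p \cdot V$$

So with $p = 0.01$ and $V = 10^{15}$ (in whatever units of value you like), the expected value is still at least $10^{13}$: discounting by two orders of magnitude leaves the stakes astronomical.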

Tarsney's Epistemic Challenge to Longtermism is so much better at this. As he aptly notes, as long as you're on board with orthodox decision theory (and so don't disproportionately discount or neglect low-probability possibilities), and not completely dogmatic in refusing to give any credence at all to the longtermist-friendly assumptions (robust existential security after time of perils, etc.), reasonable epistemic worries ultimately aren't capable of undermining the expected value argument for longtermism.

(These details can still be helpful for getting better-refined EV estimates, of course. But that's very different from presenting them as an objection to the whole endeavor.)

Just to expand on the above, I've written a new blog post - It's OK to Read Anyone - that explains (i) why I won't personally engage in intellectual boycotts [obviously the situation is different for organizations, and I'm happy for them to make their own decisions!], and (ii) what it is in Hanania's substack writing that I personally find valuable and worth recommending to other intellectuals.

fyi, the recording is now available, and (upon reviewing it) I've expanded upon my other comments in a new post at Good Thoughts. (I'd be curious to hear from anyone who has a strikingly different impression of the debate than I had.)

Right, you'd also have to oppose healthcare expansion, vaccines (against lethal illnesses), pandemic mitigation efforts, etc. I guess if you really believed it, you would take the results (more early death) to have positive expected value. It's a deeply misanthropic thesis. So it's probably worth getting clearer on why it isn't ultimately credible, despite initial appearances.

If you can stipulate (e.g. in a thought experiment) that the consequences of coercion are overall for the best, then I favor it in that case. I just have a very strong practical presumption (see: principled proceduralism) that liberal options tend to have higher expected value in real life, once all our uncertainty (and fallibility) is fully taken into account.

Maybe also worth noting (per my other comment in this thread) that I'm optimistic about the long-term value of humanity and human innovation. So, putting autonomy considerations aside, if I could either encourage people to have more kids or fewer, I think more is better (despite the short-term costs to animal welfare).
