Richard Y Chappell 🔸

Associate Professor of Philosophy @ University of Miami
5680 karma
www.goodthoughts.blog/
Interests:
Bioethics

Bio

Academic philosopher, co-editor of utilitarianism.net, writes at goodthoughts.blog

🔸10% Pledge #54 with GivingWhatWeCan.org

Comments (340)

My central objection to Thorstad's work on this is the failure to properly account for uncertainty. Attempting to exclusively model a most-plausible scenario, and draw dismissive conclusions about longtermist interventions based solely on that, fails to reflect best practices about how to reason under conditions of uncertainty. (I've also raised this criticism against Schwitzgebel's negligibility argument.) You need to consider the full range of possible models / scenarios!

It's essentially fallacious to think that "plausibly incorrect modeling assumptions" undermine expected value reasoning. High expected value can still result from regions of probability space that are epistemically unlikely (or reflect "plausibly incorrect" conditions or assumptions). If there's even a 1% chance that the relevant assumptions hold, just discount the output value accordingly. Astronomical stakes are not going to be undermined by lopping off the last two zeros.
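The discounting move described above can be sketched numerically. This is a stylized illustration with hypothetical figures (the probability, payoff, and units are all invented for the example), not an estimate of any real intervention:

```python
# Illustrative sketch of discounting expected value by credence in the
# modeling assumptions. All numbers are hypothetical, chosen only to
# show orders of magnitude.

p_assumptions_hold = 0.01   # even just a 1% credence that the assumptions hold
value_if_hold = 1e30        # stylized "astronomical" payoff (arbitrary units)
value_otherwise = 0.0       # assume zero value if the assumptions fail

expected_value = (p_assumptions_hold * value_if_hold
                  + (1 - p_assumptions_hold) * value_otherwise)

print(expected_value)  # ~1e28: two zeros lopped off, still astronomical
```

The point of the sketch: multiplying the astronomical payoff by a small credence shrinks it by a couple of orders of magnitude, which leaves the expected value astronomically large all the same.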

Tarsney's Epistemic Challenge to Longtermism is so much better at this. As he aptly notes, as long as you're on board with orthodox decision theory (and so don't disproportionately discount or neglect low-probability possibilities), and not completely dogmatic in refusing to give any credence at all to the longtermist-friendly assumptions (robust existential security after time of perils, etc.), reasonable epistemic worries ultimately aren't capable of undermining the expected value argument for longtermism.

(These details can still be helpful for getting better-refined EV estimates, of course. But that's very different from presenting them as an objection to the whole endeavor.)

Just to expand on the above, I've written a new blog post - It's OK to Read Anyone - that explains (i) why I won't personally engage in intellectual boycotts [obviously the situation is different for organizations, and I'm happy for them to make their own decisions!], and (ii) what it is in Hanania's substack writing that I personally find valuable and worth recommending to other intellectuals.

fyi, the recording is now available, and (upon reviewing it) I've expanded upon my other comments in a new post at Good Thoughts. (I'd be curious to hear from anyone who has a strikingly different impression of the debate than I had.)

Right, you'd also have to oppose healthcare expansion, vaccines (against lethal illnesses), pandemic mitigation efforts, etc. I guess if you really believed it, you would take the results (more early death) to have positive expected value. It's a deeply misanthropic thesis. So it's probably worth getting clearer on why it isn't ultimately credible, despite initial appearances.

If you can stipulate (e.g. in a thought experiment) that the consequences of coercion are overall for the best, then I favor it in that case. I just have a very strong practical presumption (see: principled proceduralism) that liberal options tend to have higher expected value in real life, once all our uncertainty (and fallibility) is fully taken into account.

Maybe also worth noting (per my other comment in this thread) that I'm optimistic about the long-term value of humanity and human innovation. So, putting autonomy considerations aside, if I could either encourage people to have more kids or fewer, I think more is better (despite the short-term costs to animal welfare).

My thoughts:

(1) If building human capacity has positive long-term ripple effects (e.g. on economic growth), these could be expected to swamp any temporary negative externalities.

(2) It's also not clear that increasing population increases meat-eating in equilibrium. Presumably at some point in our technological development, the harms of factory farming will be alleviated (e.g. by the development of affordable clean meat). Adding more people to the current generation moves forward both meat-eating and economic & technological development. It doesn't necessarily change the total number of meat-eaters who exist before our civilization develops beyond factory farming.

But also: people (including those saved via GHD interventions) plausibly still ought to offset the harms caused by their diets. (Investing resources to speed up the development of clean meat, for example, seems very good.)

I think the idea is to reduce the future population of meat-eaters by encouraging contraceptive use, so kind of the opposite (in terms of total population) of saving lives.

(I have to say, the idea that we should positively prefer future people to not exist sounds pretty uncomfortable to me, and certainly less appealing than supporting people in making whatever reproductive decisions they personally prefer, which would include both contraceptive and fertility/child support.)

Interesting, thanks for the link! I agree that being a useful social ally and doing what's morally best can come apart, and that people are often (lamentably) more interested in the former.

Yeah, that seems right as a potential 'failure mode' for explicit ethics taken to extremes. But of course it needs to be weighed against the potential failures of implicit ethics, like providing cover for not actually doing any good.

Everyone has the right to life. That implies that everyone who wants to live has a guarantee from society that they can, even if the cause of death would be natural (for example, dying of old age).

That's not what is ordinarily meant by "the right to life". (See Judy Thomson's famous paper, 'A Defense of Abortion', which argues that the right to life is really just the right not to be killed unjustly. It is not violated by, e.g., unplugging yourself from someone who depends upon your organs to live.)

I think we should want society to offer just those rights that would best promote overall flourishing. A guarantee against premature death obviously doesn't meet that criterion. (Suppose we could save one person's life at the cost of trillions of dollars, leaving nothing for education or other important "quality of life" improvements.)

More generally, you seem to be thinking of death as an absolutely bad thing: something to be avoided at all costs. That seems mistaken to me. Death is better understood as a merely comparative harm: a shorter happy life is not as good as a longer happy life would be (all else equal). But that's no reason at all to prefer that the short happy life never exist at all.
