Oxford philosopher William MacAskill’s new book, What We Owe the Future, caused quite a stir this month. It’s the latest salvo from effective altruism (EA), a social movement whose adherents aim to have the greatest positive impact on the world through the use of strategy, data, and evidence. MacAskill’s new tome makes the case for a growing flank of EA thought called “longtermism.” Longtermists argue that our actions today can improve the lives of humans way, way, way down the line (we’re talking billions, even trillions, of years), and that in fact it’s our moral responsibility to do so.
In many ways, longtermism is a straightforward, uncontroversially good idea. Humankind has long been concerned with providing for future generations: not just our children or grandchildren, but even those we will never have the chance to meet. Longtermism recalls the Seventh Generation Principle held by the indigenous Haudenosaunee (a.k.a. Iroquois) people, which urges people alive today to consider the impact of their actions seven generations ahead. MacAskill echoes the defining problem of intergenerational morality: people in the distant future are currently “voiceless,” unable to advocate for themselves, which is why we must act with them in mind. But MacAskill’s optimism could be disastrous for non-human animals, members of the millions of species who, for better or worse, share this planet with us.
Read the rest on Forbes.
The Center for Reducing Suffering is longtermist but focuses on the issues this article is concerned with. Suffering-focused views are not very popular, though, and I agree that most longtermist organizations and individuals seem to focus more on future humans than on future non-human beings; at least that's my impression, and I could be wrong. The Center on Long-Term Risk is also longtermist, and it focuses on reducing suffering among all future beings.
Thank you for the insights!
Before I read this, I took it mostly as a given that most people's mainline scenario for astronomical numbers of people involved predominantly digital people. If that is your mainline scenario, the arguments for astronomical amounts of animal suffering seem much weaker (I think).
Excuse me for repeating some of the things Brian said in reply to Calebp; I want to give a complete formulation of my arguments.
I think there are a few potential pushbacks to the "digital beings dominate" argument.
Well articulated. Thanks for adding this.
I think we should have a lot of uncertainty about the future. For example:
- There could be a high percentage of digital people but still some non-digital people, so animals would still matter.
- Digital people might cause suffering to digital animals.
- We could treat digital people as terribly as we do animals.
Others have written about these ideas here: https://forum.effectivealtruism.org/topics/non-humans-and-the-long-term-future.
Thanks for your comment!