This is a linkpost for https://www.carnegiecouncil.org/media/article/long-termism-ethical-trojan-horse
I came across this article from the Carnegie Council's Artificial Intelligence and Equality Initiative, and I can't help but feel that it misunderstands longtermism and EA. The article mentions the popularity of William MacAskill's new book "What We Owe the Future" and its case for considering future generations and civilization. I would recommend reading the article before my take below, but the Carnegie Council makes several common fallacious arguments against longtermism:
- They make it seem as though, in order to take longtermism seriously, you have to completely ignore the present. I have never heard an EA argue for disregarding contemporary issues.
- They convey that longtermism requires you to "put all your eggs in one basket," the basket being the long-term future rather than today's problems.
- They point out that regulating AI will slow its development. That is true, but letting an accelerating, hard-to-control, and risky technology like AI run unchecked could end in mass suffering or the end of humanity, so the trade-off is worthwhile, much as it is for regulating carbon emissions.
I don't think the arguments are fallacious if you look at how strong longtermism is defined:
Positively influencing the future is not just a moral priority but the moral priority of our time.
See the general discussion here and an in-depth discussion here.
Perhaps they should have made that distinction, since not all EAs take the strong longtermist view, including MacAskill himself, who doesn't seem certain of it.
The article was about MacAskill's book, which argues for longtermism, not strong longtermism.
However, I think the acknowledgement and critique of strong longtermism is necessary.
To me it seems they understood longtermism just fine and simply disagree with strong longtermism's conclusions. We have limited resources, and if you are a longtermist you think some or all of those resources should be spent ensuring the far future goes well. That means not spending them on pressing neartermist issues.
If EAs, or in this case the UN, push for more government spending on the future, the question everyone should ask is where that spending will come from. If it comes from development aid budgets, that potentially means removing funding for humanitarian projects that benefit the world's poorest.
This might be the correct call, but I think it's a reasonable thing to disagree with.
They understand the case for longtermism but not the proposals or solutions meant to realize longtermist aspirations.
One of the UN's main goals is sustainable development. You can still address today's issues while designing solutions with the future in mind.
Therefore, you don't have to spend most funds solely on the long-term future; you can tackle both sets of problems simultaneously.
You can only spend your resources once. Unless you argue that there is a free lunch somewhere, any money and time spent by the UN inevitably has to come from somewhere else. Arguing that longtermist concerns should be prioritized necessarily means arguing that other concerns should be de-prioritized.
If EAs or the UN argue that longtermism should be a priority, it's reasonable for the authors to question where those resources will come from.
For what it's worth, I think it's a no-brainer that the UN should spend more energy on ensuring the future goes well, but we shouldn't pretend that doing so isn't at the expense of those who currently exist.
In the early 2000s, when climate change started seriously getting onto the multilateral agenda, economists like Bjørn Lomborg argued that we should instead spend our resources on cost-effective poverty alleviation.
He was widely criticized for this, for example by Michael Grubb, an economist and lead author for several IPCC reports, who argued:
Yet today, much (if not most) multilateral climate mitigation is funded by countries' foreign aid budgets. The authors of this article, like Lomborg almost two decades ago, are reasonable to worry that when multilateral organizations adopt new priorities, it comes at the expense of existing ones.
I believe we should spend much more time and money ensuring the future goes well, but we shouldn't pretend that this isn't at the expense of other priorities.
"If the basic idea of long-termism—giving future generations the same moral weight as our own—seems superficially uncontroversial, it needs to be seen in a longer-term philosophical context. Long-termism is a form of utilitarianism or consequentialism, the school of thought originally developed by Jeremy Bentham and John Stuart Mill.
The utilitarian premise that we should do whatever does the most good for the most people also sounds like common sense on the surface, but it has many well-understood problems. These have been pointed out over hundreds of years by philosophers from the opposing schools of deontological ethics, who believe that moral rules and duties can take precedence over consequentialist considerations, and virtue theorists, who assert that ethics is primarily about developing character. In other words, long-termism can be viewed as a particular position in the time-honored debate about inter-generational ethics.
The push to popularize long-termism is not an attempt to solve these long-standing intellectual debates, but to make an end run around it. Through attractive sloganeering, it attempts to establish consequentialist moral decision-making that prioritizes the welfare of future generations as the dominant ethical theory for our times."
This strikes me as a very common class of confusion. I have seen many EAs say that what they hope for from "What We Owe the Future" is that it will act as a sort of "Animal Liberation for future people". You don't see a ton of people saying something like "caring about animals seems nice and all, but you have to view this book in context. Secretly, being pro-animal-liberation is about being a utilitarian sentientist with an equal-consideration-of-interests welfarist approach that awards secondary rights like life based on personhood". That would seem like either a blatant failure of reading comprehension or a sort of ethical paranoia that can't picture any reason someone would argue for an ethical position without their entire fundamental moral theory tacked on.
On the one hand, I think pieces like this are making a more forgivable mistake, because the basic version of the premise just doesn't look controversial enough to be what MacAskill is actually hoping for. Indeed, I personally think the comparison isn't fantastic, in that MacAskill probably hopes the book will have more influence by inspiring further action and discussion than by changing minds about the fundamental issue (which, again, is less controversial, and which he spends less time on in the book).
On the other hand, he has been at special pains to emphasize in his book, interviews, and secondary writings that he is highly uncertain about first-order moral views, and that he is arguing for longtermism only as a coalition around these broad issues and around ways of making moral decisions on the margins. Someone like MacAskill, who specifically argues for holding off on irreversible changes for as long as possible in order to get these moral discussions right, really doesn't fit the bill of someone trying to "make an end run around" these issues.
Will is promoting longtermism as a key moral priority - merely one of our priorities, not the sole priority. He'll say things like (heavily paraphrased from my memory) "we spend so little on existential risk reduction - I don't know how much we should spend, but maybe once we're spending 1% of GDP we can come back and revisit the question".
It's therefore disappointing to me when people write responses like this, responding to the not-widely-promoted idea that longtermism should be the only priority.
What about this quote:
The authors are suggesting that AI safety research and its sponsorship are akin to greenwashing: a facade to hide the short-term goals of AI technology developers.