Toby_Ord

We can go to a distant country and observe what is going on there, and make reasonably informed decisions about how to help them.

We can make meaningful decisions about how to help people in the distant future. For example, to allow them to exist at all, to allow them to exist with a complex civilisation that hasn't collapsed, to give them more prosperity that they can use as they choose, to avoid destroying their environment, to avoid collapsing their options by other irreversible choices, etc. Basically, to aim at giving them things near the base of Maslow's Hierarchy of Needs or to give them universal goods — resources or options that can be traded for whatever it is they find they need at the time. And the same is often true for international aid.

In both cases, it isn't always easy to know that our actions will actually secure these basic needs, rather than making things worse in some way. But it is possible. One way to do it for the distant future is to avoid catastrophes that have predictable longterm effects, which is a major reason I focus on that and suggest others do too.

I don't see it as an objection to Longtermism if it recommends the same things as traditional morality — that is just as much a problem for traditional theories, by symmetry. It is especially not a problem when traditional theories might (if their adherents were careful) recommend much more focus on existential risks but in fact almost always neglect the issue substantially. If they admit that Longtermists are right that these are the biggest issues of our time and that the world should massively scale up focus and resources on them, and that they weren't saying this before we came along, then that is a big win for Longtermism. If they don't think it is all that important actually, then we disagree and the theory is quite distinctive in practice. Either way the distinctiveness objection also fails.

I don't have time to look into this in full depth, but it looks like a good paper, making useful good-faith critiques, which I very much appreciate. Note that the paper is principally arguing against 'strong longtermism' and doesn't necessarily disagree with longtermism. For the record, I don't endorse strong longtermism either, and I think that the paper delineating it which came out before any defenses of (non-strong) longtermism has been bad for the ability to have conversations about the form of the view that is much more widely endorsed by 'longtermists'.

My main response to the points in the paper would be by analogy to cosmopolitanism (or to environmentalism or animal welfare). We are saying that something (the lives of people in future generations) matters a great deal more than most people think (at least judging by their actions). In all cases, this does mean that adding a new priority will mean a reduction in resources going to existing priorities. But that doesn't mean these expansions of the moral circle are in error. I worry that the lines of argument in this paper apply just as well to denying previous steps like cosmopolitanism (caring deeply about people's lives across national borders). e.g. here is the final set of bullets you listed with minor revisions:

  • Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about ~~the far future~~ distant countries.
  • Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about ~~future generations~~ people in distant countries in principle, our resources are constrained.
  • Focusing on ~~the far future~~ distant countries comes at a cost to addressing present-day local needs and crises, such as health issues and poverty.
  • Implementing ~~longtermism~~ cosmopolitanism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.

What I'm trying to show here is that these arguments apply just as well to argue against previous moral circle expansions which most moral philosophers would think were major points of progress in moral thinking. So I think they are suspect, and that the argument would instead need to address things that are distinctive about longtermism, such as arguing positively that future peoples' lives don't matter morally as much as present people.

Thank you so much for everything you've done. You brought such renewed vigour and vision to Giving What We Can that you ushered it into a new era. The amazing team you've assembled and culture you've fostered will stand it in such good stead for the future.

I'd strongly encourage people reading this to think about whether they might be a good choice to lead Giving What We Can forward from here. Luke has put it in a great position, and you'd be working with an awesome team to help take important and powerful ideas even further, helping so many people and animals, now and across the future. Do check that job description and consider applying!

Great idea Thomas.

I've just sent a letter and encourage others to do so too!

A small correction:

Infamously there was a period where some scientists on the project were concerned that a nuclear bomb would ignite the upper atmosphere and end all life on Earth; fortunately they were able to do some calculations that showed beyond reasonable doubt that this would not happen before the Trinity test occurred.

The calculations suggesting the atmosphere couldn't ignite were good, but were definitely not beyond reasonable doubt. Fermi and others kept working to re-check the calculations in case they'd missed something all the way up to the day of the test, and wouldn't have done so if they were satisfied by the report.

The report (published after Trinity) does say:

One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely.

That is often quoted by people who want to suggest the case was closed, but the next (and final) sentence of the report says:

However, the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desireable.

Great piece William — thanks for sharing it here.

I liked your strategy for creating robust principles that would have worked across a broad range of cases, and it would be good to add others to the Manhattan Project example. 

I particularly liked your third principle:

Principle 3: When racing, have an exit strategy 

In the case of the Manhattan project, a key moment was the death of Hitler and surrender of Germany. Given that this was the guiding reason — the greater good with which the scientists justified their creation of a terrible weapon — it is very poor how little changed at that point. Applying your principles, one could require a very special meeting if/when any of the race-justifying conditions disappear, to force reconsideration at that point.

This paragraph was intended to speak to the relevance of this argument given that (as you say) we can't easily advance all progress uniformly:

And it may have some uncomfortable consequences. If advancing all progress would turn out to be bad, but advancing some parts of it would be good, then it is likely that advancing the remaining parts would be even more bad. Since some kinds of progress are more plausibly linked to bringing about an earlier demise (e.g. nuclear weapons, climate change, and large-scale resource depletion only became possible because of technological, economic, and scientific progress) these parts may not fare so well in such an analysis. So it may really be an argument for differentially boosting other kinds of progress, such as moral progress or institutional progress, and perhaps even for delaying technological, economic, and scientific progress.

Thanks Mike — a very useful correction. I'm genuinely puzzled as to why this didn't lead to a more severe early response given China's history with SARS. That said, I can't tell from the article how soon the sample from this patient was sequenced/analysed.

Yes, that's right about the track-skipping condition for the exogenous case, and I agree that there is a strong case the end of factory farming will be endogenous. I think it is a good sign that the structure of my model represents some/all of the key considerations in your take on progress too — but with the different assumption about the current value changing the ultimate conclusion. 

I delivered this talk before the Rootclaim debate, though I haven't followed that debate since, so can't speak to how much it has changed views. I was thinking of the US intelligence community's assessments and the diversity of opinions among credible people who've looked into it in detail. 
