Tristan D

40 karma · Working (0-5 years) · Seeking work · Australia


It isn't a clear winner, but neither were any of the other options, and it was cost-competitive.

>What most longtermist analysis does is argue that if you consider the far future, longtermism then becomes the clear winner (e.g. here).

In this thread Toby Ord has said that he and most longtermists don't support 'strong determinism', although he hasn't elucidated what the mainstream view of longtermism is.

We can make a general argument that it will be good in expectation: it will help us deal with future disease, which will help us reduce future suffering.

With longtermist interventions, the argument is that the far-future effects are significantly positive and large in expectation.

If all the argument amounts to is that it will be good in expectation, we can say that about a lot of cause areas. What we need is an argument for why it would be better in expectation than all these other cause areas.

>The simplest explanation is that future wellbeing matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.

Future well-being does matter, but focusing on existential risk doesn't necessarily lead to greater future well-being. It leads to humans being alive. If the future is filled with human suffering, then a focus on existential risk could be one of the worst focus areas.

According to the authors of the linked article, longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. Their claim is that the burden of proof here lies with the longtermist: if the far future is important for moral decision-making, then this claim needs to be justified. A surface-level justification that people in the far future would want to be alive is equally justified by reference to the near future.

You linked a quantitative attempt at answering the question of whether a focus on existential risk should be prioritised if we consider <200 years, and the answer appears to be in the affirmative (depending on weightings). Is there a corresponding attempt at making this case using the far future as a reference point?

In order to justify preventative x-risk policies with reference to their impact on the far future, we would need to compare that impact with the impact other focus areas would have on the far future. That is in part where the 'We Are Not in a Position to Predict the Best Actions for the Far Future' claim fits in: how are we supposed to analyse the influence of any intervention (such as medical research, but including x-risk interventions) on people living millions of years into the future? It's possible that if we did have that kind of predictive power, many other focus areas might turn out to be orders of magnitude more important than a focus on existential risks.

Thanks for linking to that research by Laura Duffy; that's really interesting. It would have been relevant for the authors of the current article as well.

According to her analysis, spending on conservative existential risk interventions is cost-competitive (within an order of magnitude) with spending on AMF. Further, compared to plausible, less conservative existential risk interventions, AMF is "probably" an order of magnitude less cost-effective. Under Rethink Priorities' estimates for welfare ranges, existential risk interventions are either cost-competitive with, or an order of magnitude less cost-effective than, cage-free campaigns and the hypothetical shrimp welfare intervention.

I think that actually gives some reasonable weight to the idea that a focus on existential risk can be justified without reference to the far future. Duffy used a timeline of <200 years, and even then a case can be made that interventions focusing on existential risk should be prioritised. At the very least it adds a level of uncertainty about the relevance of the far future to moral decision-making.

Perhaps that could have been worded better in my summary. It is not that we cannot predict what could boost medical research in the far future; rather, it is that we cannot predict the effect that medical research will have on the far future. For example, the magnitude of the effect may be so incredibly large that it takes priority over traditional existential risks, either because it leads to a good future or perhaps to a bad one. Or perhaps further investment in medical research will not lead to any significant gains in the things we care about. Either way, we don't have a means of predicting how our current actions will influence the far future.

With regards to value: being alive, having the ability to do what we want, and minimizing suffering might very well be things that people in the far future value, but they are also things that we value now. On the authors' account, therefore, these values can guide our moral decision-making by virtue of being things we value now and into the near future; referencing that they will also be valued in the far future is an irrelevant extra piece of information, i.e. it does no additional work in guiding our moral decision-making.

What is the more widely endorsed view among longtermists?

I largely agree with your "distant countries" objection. Just because something is practically implausible does not make it morally wrong, or not worthy of attention. I also think it's not necessarily true that implementing longtermism requires radical changes to human psychology or social institutions. We need not necessarily convince every human on the planet to care about the lives of future generations, only those who might have a meaningful impact (which could be a small number).

Nevertheless, I think the other three objections that you don't mention provide some interesting and potentially serious challenges for longtermism, perhaps for weaker forms as well.

In the article the authors are somewhat ambiguous about the meaning of 'near future'. They do at one point refer to the present and a few generations as their potential time frame. But your point raises an interesting question for longtermists: how long does the future need to be in order for future people to have moral weight?

Although we might want to qualify it slightly, in that the element of interest is not necessarily the number of years into the future but rather how many people (or beings) will exist in the future. The question then becomes: how many people need to be alive in the future in order for their lives to have moral weight?

If we knew a black hole would swallow humanity in 200 years, then on some estimates there could still be ~15 billion human lives to come. If we knew that the future held only 15 billion more lives, would that justify not focusing on existential risks?
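(As a rough sketch of where a figure like that could come from, this is my own back-of-envelope rather than a number from any source: if average annual births over the next 200 years were around 75 million, a bit over half today's roughly 130 million per year, that gives $75 \times 10^6 \times 200 = 1.5 \times 10^{10}$, i.e. about 15 billion lives.)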

>So in order to justify longtermism in particular, you have to point out proposed policies that are a lot less sensible seeming, and rely on a lot less certainty.

If you're referring to the first point, I would reword this to:

In order to justify longtermism in particular, you have to point out proposed policies that can't be justified by drawing on the near future.

It is rather that longtermists have not provided any examples of moral decisions that would be different if we were to consider the far future versus the near future. All current focus areas, the authors argue, can be justified by appealing to the near future. 

Hey everyone. Tristan here from Tasmania. I first heard about EA from a Peter Singer lecture at Melbourne in 2014 on his book TLYCS. I have since completed a Master of Public Health. I've been working as a bushwalking guide for the last year but am currently looking for EA-related work, applying for jobs on the 80,000 Hours job board at organizations like GiveWell, GiveDirectly, and AMF.

I'm also looking to engage with people about EA and related topics in Tasmania or online, and am considering starting a university or other EA group in Tasmania.

Let me know if you have any advice re looking for EA work or starting an EA group.