
JackM

4094 karma · Joined

Bio

Feel free to message me on here.

Comments (696)

It is commonly assumed that many interventions fall prey to the "washing-out hypothesis": the impact of an intervention becomes less significant as time goes on, so the near-term effects of actions matter more than their long-term consequences. In other words, over time, the differences between the outcomes of various actions tend to fade, or "wash out." So in practice, most people would assume the long-term impact of something like medical research is, in expectation, zero.

Longtermists aim to avoid "washing out". One way is to find interventions that steer the world between "attractor states". For example, extinction is an attractor state in that, once humans go extinct, they will stay that way forever (assuming humans don't re-evolve). Non-extinction is also an attractor state, although to a lesser extent. Increasing the probability of reaching the better attractor state (probably non-extinction, by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future. This is because the persistence of the attractor states means the expected value of reducing extinction risk doesn't "wash out" over time.
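To make the expected-value logic concrete, here is a toy calculation. The numbers are purely illustrative assumptions of mine, not estimates from any paper; the point is only that a persistent state lets a tiny probability shift apply to the entire future value:

```python
# Toy illustration of why persistent attractor states avoid "washing out".
# All numbers below are illustrative assumptions, not real estimates.

future_value = 1e15  # assumed total welfare of a long, flourishing future
delta_p = 1e-7       # assumed reduction in extinction probability from one intervention

# Because extinction and non-extinction are persistent (attractor) states,
# the probability shift applies to the *entire* future value rather than
# decaying over time:
expected_value = delta_p * future_value

print(expected_value)  # ~1e8 "welfare units" in expectation
```

Even with a minuscule probability shift, the persistence of the state means nothing washes out, so the expected value remains large.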

This is all explained better in the paper The Case for Strong Longtermism, which I would recommend you read.

At the risk of getting off topic from the core question, which interventions do you think are most effective in ensuring we thrive in the future with better cooperative norms? I don't think it's clear that this would be EA global health interventions. I would think boosting innovation and improving institutions are more effective.

Also boosting economic growth would probably be better than so-called randomista interventions from a long-term perspective.

What do you think about people who do go through with suicide? These people clearly thought their suffering outweighed any happiness they experienced.

I'm not sure how I have stigmatised any particular response.

Thank you for justifying your vote for global health!

One counterargument to your position is that, with the same amount of money, one can help significantly more non-human animals than humans. Check out this post. An estimated 1.1 billion chickens are helped by broiler and cage-free campaigns in a given year. Each dollar can help an estimated 64 chickens, for a total of an estimated 41 chicken-years of life.

This contrasts with the roughly $5,000 needed to save a human life through top-ranked GiveWell charities.
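Putting the two figures quoted above side by side makes the scale of the difference explicit (these are rough estimates from the linked post and GiveWell, not precise measurements):

```python
# Rough cost-effectiveness comparison using the figures quoted above.
# All numbers are estimates taken from the cited post and GiveWell.

chickens_per_dollar = 64       # chickens helped per $1 (broiler/cage-free campaigns)
chicken_years_per_dollar = 41  # chicken-years of improved life per $1
cost_per_human_life = 5_000    # approx. cost to save a life via top GiveWell charities

budget = 5_000  # dollars: enough to save roughly one human life

chickens_helped = budget * chickens_per_dollar        # 320,000 chickens
chicken_years = budget * chicken_years_per_dollar     # 205,000 chicken-years
human_lives_saved = budget / cost_per_human_life      # 1 life

print(f"${budget:,} helps ~{chickens_helped:,} chickens (~{chicken_years:,} chicken-years)")
print(f"${budget:,} saves ~{human_lives_saved:.0f} human life")
```

So the same $5,000 that saves roughly one human life helps, on these estimates, hundreds of thousands of chickens. Whether that comparison favours animal welfare then depends on how one weighs chicken-years against human lives.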

Personally I would gain more value from knowing why people would prefer $100m to go to global health over animal welfare (or vice versa) than from knowing whether people would prefer this. This is partly because it already seems clear that the forum (which isn't even a representative sample of EAs) leans towards animal welfare over global health.

So if my comment incentivises people to comment more but vote less, then that is fine by me. Of course my comment may not incentivise people to comment more, in which case I apologise.

Interesting to note that, as it stands, there isn't a single comment on the debate week banner in favor of global health. There are votes for global health (13 in total at the time of writing), but no comments backing up the votes. I'm sure this will change, but I still find it interesting.

One possible reason is that the arguments for global health > animal welfare are often speciesist and people don't really want to admit that they are speciesist - but I'm admittedly not certain of this.

In a nutshell - there is more suffering to address in non-human animals, and it is a more neglected area.

What we need is an argument for why it would be good in expectation, compared to all these other cause areas.

Yeah, the strong longtermism paper elucidates this argument. I also provide a short sketch of the argument here. At its core is the expected vastness of the future, which allows longtermism to beat other cause areas. The argument for "normal" longtermism (i.e. not "strong") has pretty much the same structure.

Future well-being does matter, but focusing on existential risk doesn't necessarily lead to greater future well-being. It leads to humans being alive. If the future is filled with human suffering, then a focus on existential risk could be one of the worst focus areas.

Yes that's true. Again we're dealing with expectations and most people expect the future to be good if we manage not to go extinct. But it's also worth noting that reducing extinction risk is just one class of reducing existential risk. If you think the future will be bad, you can work to improve the future conditional on us being alive or, in theory, you can work to make us go extinct (but this is of course a bit out there). Improving the future conditional on us being alive might involve tackling climate change, improving institutions, or aligning AI.

And, to reiterate, while we focus on these areas to some extent now, I don't think we focus on them as much as we would in a world where society at large accepts longtermism.

The analysis I linked to isn't conclusive on longtermism being the clear winner if one only considers the short term. Under certain assumptions it won't be the best, so if one only considers the short term, many may choose not to give to longtermist interventions. Indeed, this is what we see in the EA movement, where global health still reigns supreme as the highest-priority cause area.

What most longtermist analysis does is argue that if you consider the far future, longtermism then becomes the clear winner (e.g. here). In short, significantly more value is at stake with reducing existential risk, because now you care about enabling far-future beings to live and thrive. If longtermism is the clear winner, then we shouldn't see a movement that clearly prioritises global health; we should see a movement that clearly prioritises longtermist causes. This would be a big shift from the status quo.

As for your final point, I think I understand what you / the authors were saying now. I don't think we have no idea what the far-future effects of interventions like medical research are. We can make a general argument that it will be good in expectation, because it will help us deal with future disease, which will help us reduce future suffering. Could that be wrong? Sure, but we're just talking about expected value. With longtermist interventions, the argument is that the far-future effects are significantly positive and large in expectation. The simplest version is that future well-being matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.
