[I expect other people to have more valuable reflections. These are mine, and I'm not very confident in many of them.]
Some reflections on parts of a recent Philosophy Tube video by Abigail Thorn on effective altruism and longtermism. Jessica Wen has a summary of the video.
What I liked
The video is very witty and articulate, and it has a pleasant tone that doesn't take itself too seriously. Abigail Thorn's criticism is generally well-researched and, I think, offered in good faith. For example, she gives a surprisingly accurate portrayal of earning to give, a concept often poorly portrayed in the media.
Where I agree
I found myself agreeing with her on quite a few points, such as that the treatment of AI risk in What We Owe the Future (WWOTF) is pretty thin.[1] Or that effective altruism may have undervalued the importance of ensuring that decisions (or even just discussions) about the shape of the long-run future are not made by a small group of people who can't even begin to represent the whole of humanity.
Where I'm Confused/Disagree
I'm not surprised that I disagree with or am confused by parts of the criticism. The video is ambitious and covers a lot of ground – FTX, Measurability Bias, Longtermism, Pascal's Mugging, MrBeast, EA as suspiciously well aligned with business interests, The Precipice versus WWOTF, etc. – within 40 entertaining minutes.
Measurability Bias and Favoring the Short-Term
“EA tends to favor short-term, small-scale interventions that don't tackle the root of the problems. Some of those short-term interventions don't last or have long-term negative effects. This is all to say, that it is by no means settled what 'effective' actually means. And in fairness to them, some EAs are aware of this and they do talk about it, but none of the EA philosophers I've read quite seem to understand the depth of this issue.” – Abigail Thorn
While this may have been an accurate description of early effective altruism, in 2019 – out of all of the most engaged EAs – only 28–32% were clearly working on short-term, 'easily' measured research/interventions. That said, around 70% of funding was still directed towards near-term research/interventions.[5] And a lot of the global health funding goes to large-scale interventions that work at a policy level – such as the Lead Exposure Elimination Project or the RESET Alcohol Initiative.[6]
Deciding on the Future on Our Own
“MacAskill and Ord write a lot about progress and humanity's potential, but they say almost nothing about who gets to define those concepts. Who gets seen as an expert, who decides what counts as evidence, whose vision of the future gets listened to?” – Abigail Thorn
I agree that we should be very wary of a small, weird, elite group of people having an incredibly outsized ability to shape the future.[2]
But many (most?) EAs in longtermism work on preventing existential risks rather than on designing detailed plans for an ideal future. This doesn't mean their research is free from ethical judgments, but the work is focused on ensuring that there is a future anyone can shape at all. It's therefore plausible that individuals with diverging visions for the future will end up working together to address existential risks.[3]
I'd also note that longtermist philosophers are not making sweeping claims about the optimal shape of the future. In The Precipice there is a longer section on the “Long Reflection”, about handing off the decision about the shape of the future to others. And MacAskill acknowledges in WWOTF that even though “it seems unlikely to me that anything like the long reflection will occur. [... W]e can see it as an ideal to try to approximate.”
Conflating longtermism and existential risk reduction
Philosophy Tube is never explicit about who is part of longtermism; it remains a vague group of people doing 'stuff'. But in reality, most of the people you'd likely include in the longtermism community are working on existential risk. Many of them[4] are not motivated by longtermist ideas, or are motivated only by a common-sense version of (weak) longtermism. Wanting to protect the current world population or their grandchildren's children is enough motivation for many to work on reducing existential risks.
Pascal’s Mugging
The thought experiment is beautifully presented, but it is unclear to me how Philosophy Tube relates it to longtermism. The argument only works if the positive expected value comes from a huge upside despite an infinitesimal likelihood. Most effective altruists consider existential risk to be much higher than typically envisioned in Pascal's Mugging examples. Toby Ord estimates a 1 in 6 chance of an existential catastrophe in the next century – slightly higher than the "1 in a trillion" chance mentioned by Abigail Thorn in her version of Pascal's Mugging.
Possible Nitpick on Reproductive Rights
The part about WWOTF's handling of reproductive rights issues struck me as potentially misleading.
“Turning now to what I didn't like so much about the book, you can kind of tell it was written by a man because there is almost zero discussion of reproductive rights. If I was bringing out a book in current year about the moral duties that we have to unborn people, the first thing I would've put in it, page 1, 72 point font, 'Do not use this book to criminalize abortion!' Maybe he'll discuss that in a future edition.” – Abigail Thorn
This left me with the impression that MacAskill was simply blasé about abortion. But going back to the book, he does write:
“Of course, whether to have children is a deeply personal choice. I don't think that we should scold those who choose not to, and I certainly don’t think that the government should restrict people’s reproductive rights by, for example, limiting access to contraception or banning abortion.”
I understand that this might not be forceful enough because it's in the middle of the book and not literally on “page 1, 72 point font” – but what it says is almost exactly what Philosophy Tube wants it to say. Criticising MacAskill for 'doing the right thing, but just slightly off' feels needlessly antagonistic.
A Personal Takeaway
- In public writing, if you believe a point is important (like not wanting to advocate taking away reproductive rights in any way), it's not enough to just state the point – you should be very aware of how forcefully it comes across.
Again, I really enjoyed the video in general. And I'm glad that the first in-depth criticism of EA with a wide reach was made by a channel that spends so much time researching its topics and strives to present tricky issues in an even-handed manner.
1. ^ Especially compared to how important it is to many in the existential risk community.
2. ^ I would have personally enjoyed a chapter in WWOTF about this problem.
3. ^ I'm pretty uncertain about this. There might be ways of smuggling in assumptions about morality in work on existential risk reduction that I'm currently not thinking of.
4. ^ My guess is most, but I don't know of any polls.
5. ^ Also, I'm confused about which of the interventions EAs favor have had “long-term negative effects”. I would not be surprised if that were the case, but I've not seen any concrete examples. The source for this claim doesn't point to any cases where EAs have caused negative effects; it discusses a pathway through which EAs could have a long-term negative effect.
6. ^ And this is not a new thing: Effective altruists love systemic change (from 2015).
On your response to the Pascal's mugging objection: I've seen this argument made before about Pascal's mugging and strong longtermism (that existential risk is actually very high, so we're not in a Pascal's mugging situation at all), but I think that reply misses the point a bit.
When people worry about the strong longtermist argument taking the form of a Pascal's mugging, the small probability they are thinking about is not the probability of extinction; it is the probability that the future is enormous.
The controversial question here is: how bad would extinction be?
The strong-longtermist answer to this question is: there is a very small chance that the future contains an astronomical amount of value, so extinction would be astronomically bad in terms of expected value.
Under the strong longtermist point of view, existential risk then sort of automatically dominates all other considerations, because even a tiny shift in existential risk carries enormous expected value.
It is this argument that is said to resemble a Pascal mugging, in which we are threatened/tempted with a small probability of enormous harm/reward. And I think this is a very valid objection to strong longtermism. The 'small probability' involved here is not the probability of extinction, but the small probability of us colonizing the galaxy and filling it with 10^(something big) digital minds.
Pointing out that existential risk is quite high does not undermine this objection to strong longtermism. If anything it makes it stronger, because it reduces the chance that the future is going to be as big as it needs to be for the strong longtermist argument to go through.
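To make the shape of that argument concrete, here's a toy calculation (the numbers are purely illustrative – they aren't taken from the video, the post, or any longtermist author):

```python
# Toy numbers, purely illustrative -- not anyone's actual estimates.
p_huge_future = 1e-6           # small probability that the future is astronomically large
value_huge_future = 1e30       # value of that future if it happens (e.g. number of future lives)
delta_extinction_risk = 1e-4   # tiny reduction in extinction risk from some intervention

# Under the strong-longtermist framing, the intervention's expected value is the product:
expected_value = delta_extinction_risk * p_huge_future * value_huge_future
print(f"{expected_value:.0e}")  # 1e+20 -- swamps anything valued at ordinary human scales
```

Even with the 'small probability' of a huge future doing the discounting, the astronomical payoff keeps the product enormous – which is exactly the Pascal-mugging-like structure I'm pointing at.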
Thank you for your response – I think you make a great case! :)
I very much agree that Pascal's Mugging is relevant to longtermist philosophy,[1] for similar reasons to what you've stated – like that there is a trade-off between high existential risk and a high expected value of the future.[2]
I'm just pretty confused about whether this is the point being made by Philosophy Tube. In the video's version of Pascal's mugging, the astronomical upside is that "Super Hitler" is not born – because his birth would mean that "the future is doomed". She doesn't really address whether a big future is plausible or not. For me, her argument derives a lot of its force from the implausibly small chance of achieving the upside by preventing "Super Hitler" from being born.
And maybe I watched the video too much with an eye to the relevance of Pascal's Mugging to longtermist work on existential risk. I don't think your version is very relevant unless existential risk work relies on astronomically large futures, which I don't think much of it does. I think it's quite a common-sense position that a big future is at least plausible – perhaps not Bostromian 10^42 future lives, but the 'more than a trillion future lives' that Abigail Thorn uses. If we assume a long-run population of around 10 billion and an average lifespan of about 80 years, then 1 trillion people would have lived after about 100 generations of 80 years, i.e. roughly 8,000 years.[3] That doesn't seem an absurd timeframe for humanity to reach. I think most longtermist-inspired existential risk research/efforts still work with futures that have a median outcome of 'only' a trillion future lives.
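As a rough check of that back-of-the-envelope arithmetic (both inputs are just the round numbers assumed above – a steady-state population of 10 billion and an average lifespan of about 80 years):

```python
# Back-of-the-envelope check; both inputs are the round numbers assumed above.
population = 10e9          # people alive at any given time (steady state)
lifespan_years = 80        # average lifespan

births_per_year = population / lifespan_years   # ~125 million new people per year
target_lives = 1e12                              # one trillion future lives

years_needed = target_lives / births_per_year
print(f"{years_needed:,.0f} years")              # ~8,000 years
```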
1. ^ I omitted this from an earlier draft of the post, which in retrospect maybe wasn't a good idea.
2. ^ I'm personally confused about this trade-off. If I had a higher p(doom), then I'd want to have more clarity about this.
3. ^ I'm unsure if that's a sensible calculation.
I should admit at this point that I didn't actually watch the Philosophy Tube video, so can't comment on how this argument was portrayed there! And your response to that specific portrayal of it might be spot on.
I also agree with you that most existential risk work probably doesn't need to rely on the possibility of 'Bostromian' futures (I like that term!) to justify itself. You only need extinction to be very bad (which I think it is), you don't need it to be very very very bad.
But I think there must be some prioritisation decisions where it becomes relevant whether you are a weak longtermist (existential risk would be very bad and is currently neglected) or a strong longtermist (reducing existential risk by a tiny amount has astronomical expected value).
This is also a line of attack that EA is facing more and more, and the reply "well, yeah, but you don't have to be on board with these sci-fi-sounding concepts to support work on existential risk" is one that people are understandably more suspicious of if they think the person making it is themselves on board with those sci-fi-like arguments. It's like when a vegan tries to make the case that a particular form of farming is unnecessarily cruel to someone who is otherwise OK with eating meat: it's very natural to be suspicious of their true motivations. (I say this as a vegan who takes part in welfare campaigns.)
I would like to add to this that there is also just the question of how strong a lot of these claims can be.
Maybe the future is super enormous. And maybe me eating sushi tomorrow night at 6pm instead of on Wednesday could have massive repercussions. But it could also have massive repercussions for me to eat sushi on Friday, or something.
A lot of things "could" have massive repercussions. Maybe if I hadn't missed the bus last week, Super Hitler wouldn't have been born.
There is some obvious low-hanging fruit that would reduce the risk of catastrophe (say, nuclear disarmament, or the Seed Vault, or something). But there are also a lot of things whose mechanisms seem less obvious, and which could go very differently from how the people who outline them seem to think. Interventions to increase the number of liberal democracies on the planet and the amount of education could lead to more political polarization and social instability, for example. I'm not saying they would, but they could. Places that have been on the receiving end of "democratizing" interventions often wind up more politically unstable or dangerous for a variety of reasons, and the upward trend in education and longevity over the past few decades has been accompanied by an upward trend in polarization, depression, anxiety, social isolation...
Sure, maybe there's some existential risk to humanity, and maybe the future is massive, but what reason do I have to believe that my eating sushi, or taking public transit, or donating to one charity over another, or reading some book, is actually going to have specific effects? Why wouldn't the unintended consequences outweigh the intended ones?
It's not just skepticism about the potential size of the future; it's skepticism about the cause-effect relationship being offered by the potential "mugger". Maybe we're 100% doomed and nothing we do will matter, because an asteroid that we won't be able to detect, thanks to some unlucky astronomical coincidence, is going to hit us in 50 years, and all of it is pointless. Maybe some omnipotent deity is watching and will make sure we colonize the galaxy. Maybe research into AI risk will bring about an evil AI. Maybe research into AI is pointless because AIs will necessarily be hyperbenevolent due to some law of the universe we have not yet discovered. Maybe a lot of things.
Even with the dedication and careful thought that I have seen many people put into these probabilities, it always looks to me like there aren't enough variables to be comfortable with any of it. And there are people who don't think about this in quantitative terms at all, who would find even a hypothetical, more comprehensive model of mine inadequate.
Off the top of my head, not trying very hard, here are some in-movement confrontations of that problem:
It's not her fault that she has finite time and wanted to keep the running length of the video down; maybe this stuff is more in the weeds and not accessible at a surface glance. Maybe it is Ord's and MacAskill's fault for not emphasizing that we've been struggling with this debate a lot (I accuse Ord of being too bullish on positive longtermism and not bullish enough on negative longtermism in my above-linked shortform).
What a fantastic video lol. Never heard of Philosophy Tube before. Yeah, I agree with most of your comments.
Another one is that the comment she quoted – "if you randomly asked one of the people who themselves live in abject poverty there is no chance they will mention one of EA's supported 'effective charities'" – isn't very important, but it's also hilariously wrong in the case of GiveDirectly....
Thanks for these reflections – I think I generally agree with the points where you've expanded on Thorn's lack of nuance. However, I think that where MacAskill mentions reproductive rights in WWOTF, it not only lacks force but also seems to come completely out of the blue: it's not backed up by the reasoning he's been using up until that point, and it's never expanded on afterwards. This means it comes across as lacking conviction and reads as a throwaway statement.
My assumption here is that the book, written for a broadly left-wing audience, was basically aiming to reassure the reader that the author was on their side and then move on as quickly as possible. The contemporary progressive view on the subject – that conception/contraception/abortion is entirely a personal choice, and that it is inappropriate to apply any moral pressure to women about it – is one that assigns essentially zero value to future lives. So expanding further on the subject could only raise the question of whether, even if you don't think abortion should be literally illegal, people should have to explicitly take the welfare of the child (and other future people) into account when deciding. The longtermist's support for abortion is inherently going to be contingent and fact-specific – and as that would not be a comforting thought for the target audience, the book instead moves on swiftly.
I agree that it's not well embedded into the book. However, I'm not sure it has to be.
In most of Western Europe, abortion is not a significant political issue. For example, polling consistently finds around 86% of people in the UK think that "Women should have the right to an abortion" and only around 5% of people think that they shouldn't. Given that the readers of WWOTF likely hold even more progressive views, it may be sufficient to make a brief mention of the topic and move on.
It is possible to interpret the book's emphasis on the value of future people as implying that abortion is morally wrong. But this line of reasoning could also be applied to other issues, such as assisted suicide, which is not mentioned in the book. Should we then criticise MacAskill for potentially emboldening people who oppose assisted suicide (in such situations)? Even though the right to assisted suicide is lacking in much of Europe, it seems fine to set it aside as it's not central to the book. Similarly, it seems fine not to spend too much time on the topic of abortion rights.
It's possible that the book has a political blind spot and fails to anticipate how it would be read by some people (although I haven't seen any evidence of this outside of Philosophy Tube). I encourage pointing this out, but I dislike being borderline hostile towards someone over it.