So, as some of you might have noticed, there’s been a little bit of media attention about effective altruism / longtermism / me recently. This was all in the run-up to my new book, What We Owe The Future, which is out today!

I think I’ve worked harder on this book than I’ve worked on any other single project in my life. I personally put something like three and a half times as much work into it as into Doing Good Better, and I got enormous help from my team, who contributed more work in total than I did. At different times, that team included (in alphabetical order): Frankie Andersen-Wood, Leopold Aschenbrenner, Stephen Clare, Max Daniel, Eirin Evjen, John Halstead, Laura Pomarius, Luisa Rodriguez, and Aron Vallinder. Many more people helped immensely, such as Joao Fabiano with fact checking and the bibliography, Taylor Jones with graphic design, AJ Jacobs with storytelling, Joe Carlsmith with strategy and style, and Fin Moorhouse and Ketan Ramakrishnan with writing around the launch. I also benefited from the in-depth advice of dozens of academic consultants and advisors, and dozens more expert reviewers. I want to give a particular thank-you and shout-out to Abie Rohrig, who joined after the book was written to run the publicity campaign. I’m immensely grateful to everyone who contributed; the book would have been a total turd without them.

The book is not perfect — reading the audiobook made vivid to me how many things I’d already like to change — but I’m overall happy with how it turned out. The primary aim is to introduce the idea of longtermism to a broader audience, but there are hopefully some things that’ll be of interest to engaged EAs, too: there are deep dives on moral contingency, value lock-in, civilisation collapse and recovery, stagnation, population ethics, and the value of the future. It also tries to bring a historical perspective to bear on these issues more often than is usual in standard discussions.

The book is about longtermism (in its “weak” form) — the idea that we should be doing much more to protect the interests of future generations. (Alt: that protecting the interests of future generations should be a key moral priority of our time.) Some of you have worried (very reasonably!) that we should instead simplify the message to “holy shit, x-risk!”. I respond to that worry here: I think that line of argument is a good one, but I don't see promoting concern for future generations as inconsistent with also talking about how grave the catastrophic risks we face in the next few decades are.

In the comments, please AMA - questions don’t just have to be about the book; they can be about EA, philosophy, fire raves, or whatever you like! (At worst, I’ll choose not to reply.) Things are pretty busy at the moment, but I’ll carve out a couple of hours next week to respond to as many questions as I can.

If you want to buy the book, here’s the link I recommend: https://www.barnesandnoble.com/w/what-we-owe-the-future-william-macaskill/1140658116. (I’m using different links in different media because bookseller diversity helps with bestseller lists.) 

If you’d like to help with the launch, please also consider leaving an honest review on Amazon or Goodreads!

Comments

I enjoyed the book and recommend it to others!

In case it's of interest to EA Forum folks, I wrote a long tweet thread with more substance on what I learned from it and remaining questions I have here: https://twitter.com/albrgr/status/1559570635390562305

Thanks so much, Alexander — it’s a good thread!

Highlighting one aspect of it: I agree that being generally silent on prioritization across recommended actions is a way in which WWOTF lacks EA-helpfulness that it could have had. This is just a matter of time and space constraints. For chapters 2-7, my main aim was to respond to someone who says, “You’re saying we can improve the long-term future?!? That’s crazy!”, where my response is “Agree it seems crazy, but actually we can improve the long-term future in lots of ways!”

I wasn’t aiming to respond to someone who says “Ok, I buy that we can improve the long-term future. But what’s the top priority?” That would take another few books to do (e.g. one book alone on the magnitude of AI x-risk), and would also be less “timeless”, as our priorities might well change over the coming years.

On the “how much do AI and pandemics need longtermism?” question - I respond to that line of thinking a bit here (also linked to in the OP).

Good Twitter thread; thanks for sharing it.

Please try to get this book translated into as many languages as possible! I think it's a great chance to draw attention to longtermism in non-English-speaking countries too. Happy to assist with organizing a German translation!

I will!

It’s already coming out in Swedish, Dutch and Korean, and we're in discussion about a German translation. Given the success of the launch, I suspect we’ll get more interest in the coming months. 

The bottleneck tends not to be translators, but reputable publishers who want to publish it.

I'm willing to volunteer to help with some of the workload on the Chinese version!

I'd assume the bottleneck regarding translations is not finding people who might be able to translate it or organise the translation, but finding a publisher in each country.

That's also my understanding. However, Will probably has some influence over it: he can talk to his literary agent about actively approaching publishers, and could even offer money to foreign publishers to translate the book.


Currently at #52 on Amazon's Best Sellers list!

I imagine it's particularly good to get it to #50 so that it appears on the first page of results?

We should at least strive to get it above The Very Hungry Caterpillar (#21).


Alright Henry, don't get carried away. The Very Hungry Caterpillar was the best thing to happen to What We Owe The Future.

I haven't received my copy yet, so how do we know that they are not, in fact, the same book?

Humanity enters a consumerist phase (the industrial revolution), becomes bloated, enters a cocoon (the long reflection) and emerges as a beautiful butterfly (a flourishing future for humanity).

[Epistemic status: I started this comment thinking it was a joke, now I don't even know!]

How much do you view your role as being a representative of the longtermist ideas/movement, vs as an independent scholar/writer with your own perspective on the relevant questions?

Such a good question, and it’s something that I’ve really struggled with. 

Personally, I don’t see myself as a representative of anyone but myself (unless I’m explicitly talking about others’ ideas), and my identity as an academic makes me resistant to the “representative of EA” framing. I’m also worried about entrenching the idea that there is “an EA view” that one is able to represent, rather than a large collection of independent thinkers who agree on some things and disagree on others. 

But at the same time, some people do see me as a representative of EA and longtermism, and I’m aware that they will take what I say as representing EA and longtermism. Given the recent New Yorker and TIME profiles, and the surprising success of the book launch, that dynamic will probably only get stronger.

So what should I do? Honestly, I don’t know, and I’d actually really value advice. So far I’ve just been feeling it out, making decisions on a case-by-case basis, weighing both “saying what I think” and “representing EA / longtermism” as considerations.

Huge congratulations on the book!

My question isn't really related – it was triggered by the New Yorker/Time pieces and hearing your interview with Rob on the 80,000 Hours podcast (which I thought was really charming; the chemistry between you two comes across clearly). Disregard if it's not relevant or too personal or if you've already answered elsewhere online.

How did you get so dang happy?

Like, in the podcast you mention being one of the happiest people you know. But you also talk about your struggles with depression and mental ill-health, so you've had some challenges to overcome.

Is the answer really as simple as making mental health your top priority, or is there more to it? Becoming 5–10x happier doesn't strike me as typical (or even feasible) for most depressives; do you think you're a hyper-responder in some regard? Or is it just that people tend to underindex on how important mental health is and how much time they should spend working at it (e.g. finding meds that are kinda okay and then stopping the search there instead of persisting)?

I think it’s a combination of multiplicative factors. Very, very roughly:

  • Prescribed medication and supplements: 2x improvement
  • Understanding my own mind and adapting my life around that (including meditation, CBT, etc): 1.5x improvement 
  • Work and personal life improvements (not stressed about getting an academic job, doing rewarding work, having great friends and a great relationship): 2x improvement 

To illustrate quantitatively (on a +10 to -10 scale for weekly wellbeing, with pretty made-up numbers), it feels like an average week used to be: 1 day: +4; 4 days: +1; 1 day: -1; 1 day: -6.

Now it feels like I’m much more stable, around +2 to +7. Negative days are pretty rare; removing them from my life makes a huge difference to my wellbeing.  

I agree this isn’t the typical outcome for someone with depressive symptoms. I was lucky that I would continue to have high “self-efficacy” even when my mood was low, so I was able to put in effort to make my mood better. I’ve also been very lucky in other ways: I’ve been responsive to medication, and my personal and work life have both gone very well.

Relevant excerpt from his prior 80k interview:

Rob Wiblin: ...How have you ended up five or 10 times happier? It sounds like a large multiple.

Will MacAskill: One part of it is being still positive, but somewhat close to zero back then...There’s the classics, like learning to sleep well and meditate and get the right medication and exercise. There’s also been an awful lot of just understanding your own mind and having good responses. For me, the thing that often happens is I start to beat myself up for not being productive enough or not being smart enough or just otherwise failing or something. And having a trigger action plan where, when that starts happening, I’m like, “OK, suddenly the top priority on my to-do list again is looking after my mental health.” Often that just means taking some time off, working out, meditating, and perhaps also journaling as well to recognize that I’m being a little bit crazy.

Aside from starting from a low baseline and adopting good mental health habits, I'd be interested to know how much of the 5–10x happiness multiplier Will would attribute to his professional success and the growth of the EA movement. Is that stuff all counteracted by the hedonic treadmill?

(I ask not just for selfish reasons as a fellow depressive, but also because making EAs happier probably has instrumental benefits.)

Hi Will, 

From listening to your podcast with Ali Abdaal, it seems that you're relatively optimistic about humanity being able to create aligned AI systems. Could you explain the main reasons behind your thinking here?

Thanks!

Huge question, which I’ll absolutely fail to do proper justice to in this reply! Very briefly, however:  

  • I think that AI itself (e.g. language models) will help a lot with AI safety.
  • In general, my perception of society is that it’s very risk-averse about new technologies, has very high safety standards, and governments are happy to slow down the introduction of new tech. 
  • I’m comparatively sceptical of ultra-fast takeoff scenarios, and of very near-term AGI (though I think both of these are possible, and that’s where much of the risk lies), which means that in combination with society’s risk-aversion, I expect a major endogenous societal response as we get closer to AGI. 
  • I haven’t been convinced of the arguments for thinking that AI alignment is extremely hard. I thought that Ben Garfinkel’s review of Joe Carlsmith’s report was good.

 That’s not to say that “it’s all fine”. But I’m certainly not on the “death with dignity” train.

Hello Will, I'm really enjoying the book so far (it hit shelves early since it wasn't strict on-sale, so I got it a few days ago)! I have noticed that there's been a big push from your team for large-scale media attention aimed at positively voicing your views on longtermism. I was wondering if the team has a strategy for negative publicity as well? This has been something I've been worried about for a while: I think our movement is small enough that much of what people think about us will come from what outsiders decide to say about us, and my impression of EA's media strategy recently has been that it rarely publishes response pieces to negative attention outside of Twitter or the Forum. I'm worried that this strategy is a mistake, and that it will be especially problematic in the wake of the massive media attention EA and longtermism are getting now. I'm wondering if there is any worked-out strategy for this issue so far, and if so, roughly what it is?

Some sorts of critical commentary are well worth engaging with (e.g. Kieran Setiya’s review of WWOTF); in other cases, where criticism is clearly misrepresentative or strawmanning, I think it’s often best not to engage.

In a sense I agree, but clearly to whom? If it is only clear to us, this might be too convenient an excuse for ignoring critics for a movement to allow itself to have, and at any rate leaving criticism unaddressed will allow misconceptions to spread.

To make a more specific elaboration in light of recent developments, it looks like Emile Torres has written another hit piece just recently. I think it would be a really good idea to try to submit a response to Salon, either by you or some other prominent longtermist, or if not Salon, maybe Aeon (which I suspect is more likely to take it, although I think it would be the less valuable outlet to get a piece into, since Torres' piece there is now pretty old, and Aeon isn't that widely read). Torres seems interested in writing these over and over again for as mainstream an audience as possible, and so far there have been no responses from EAs in mainstream outlets. I think having one, especially written for the outlet this piece came from, would be a really good idea. I suspect this piece will get shared a whole lot more than Torres' other pieces, in light of the increased media attention longtermism is getting, and to many, silence will be seen as damning.

Hello Will, very excited to read the book! My question, however, is about fire raves. How do you run them, and what are your tips for making them the best experience they can be?

I’m one of the organising members of EA Dunedin in New Zealand and I’m planning on organising one for our group. I have a fair bit of experience organising bonfires/campfires, but these tend to be the rather chill sort, maybe with a Bluetooth speaker for some people to dance.

A bunch of questions - feel free to just reply to whichever ones you want and ignore the others :)

What are some key things to bring these to the next level? What’s an underrated aspect that people tend to forget about? What are the highlights of your night?

For example, I’ve learned from organising house parties that it’s good to have the dancing area out of direct line of sight of the table with the food on it, so people dancing don’t get self-conscious about people just standing and eating and watching them. Any insight would be awesome (also welcome comments from others with fire-rave experience)!

To be clear - these are a part of my non-EA life, not my EA life! I’m not sure whether something similar would be a good idea as part of EA events - either way, I don’t think I can advise on that!

Related to fire raves:
Would you join community-organized (fire) raves, say after-parties from EAG/EAGx or burner-style events? (Winking at the amazing EA Berlin community ;) )

(Or do you see a potential PR risk? Or would you not enjoy it as much with the attention you are getting? Would you join a masked (fire) rave?)

Congratulations to the team that did the media outreach work for the book - looks like you guys did an incredible job!  

Congratulations on the book launch! I am listening to the audiobook and enjoying it. 

One thing that has struck me: it sounds like longtermism aligns neatly with a strongly pro-natalist outlook.

The book mentions that increasing secularisation isn't necessarily a one-way trend. Certain religious groups have high fertility rates, which helps those religions spread.

Is having 3+ children a good strategy for propagating longtermist goals?  Should we all be trying to have big happy families with children who strongly share our values? It seems like a clear path for effective multi-generational community building! Maybe even more impactful than what we do with our careers...

This would be a significant shift in thinking for me -- in my darkest hours I have wondered if having children is a moral crime (given the world we're leaving them). It's also slightly off-putting, as it sounds like it's out of the playbook of fundamentalist religions.

But if I buy the longtermist argument, and if I assume that I will be able to give my kids happy lives and that I will be able to influence their values, it seems like I should give more weight to the idea of having children than I currently do.

I see that the UK's total fertility rate has been below replacement level since 1973 and has been decreasing year on year since 2012. I imagine that EAs / longtermists are also following a similar trend.

Should we shut up and multiply?!

I've also been thinking a lot about longtermism and its implications for fertility. dotsam has taken longtermism's pro-natalist bent in a relatively happy direction, but it also has some very dark implications. Doesn't longtermism imply that a forced birth would be a great outcome (think of those millions of future generations created!)? Doesn't it imply conservatives are right and abortion is a horrendous crime? There are real moral problems with valuing a potential life with the same weight as an actual life.

I'm really surprised by how common it is for people's thoughts to turn in this direction!  (cf. this recent twitter thread)  A few points I'd stress in reply:

(1) Pro-natalism just means being pro-fertility in general; it doesn't mean requiring reproduction every single moment, or no matter the costs.

(2) Assuming standard liberal views about the (zero) moral status of the non-conscious embryo, there's nothing special about abortion from a pro-natalist perspective. It's just like any other form of family planning--any other moment when you refrain from having a child but could have done otherwise.

(3) Violating people's bodily autonomy is a big deal; even granting that it's good to have more kids all else equal, it's hard to imagine a realistic scenario in which "forced birth" would be for the best, all things considered.  (For example, it's obviously better for people to time their reproductive choices to better fit with when they're in a position to provide well for their kids. Not to mention the Freakonomics stuff about how unwanted pregnancies, if forced to term, result in higher crime rates in subsequent decades.)

In general, we should just be really, really wary about sliding from "X is good, all else equal" to "Force everyone to do X, no matter what!"  Remember your J.S. Mill, everyone!  Utilitarians should be liberal.

Only if you're a strict total utilitarian. But won't all these things drop us into a situation like the repugnant conclusion, where we would just get more people (especially women) living in worse conditions, with fewer choices?

Women in fact are already having fewer children than they want. I and a lot of women around me would want to have children earlier than we are planning to, but we couldn't do it without dropping three levels down the socioeconomic ladder and having to give up on goals we've been investing in since elementary school. We wouldn't only be quashing our own potential but also that of the children we would raise once we do have the resources to invest in them. Is that really a better future?

If EA really wants to increase fertility at a global level, I think some hard thought needs to be given to how to change social structures and incentives so that women can have children without also having to carry such a disproportionately large burden through pregnancy, birth, and childcare.

This probably isn't the sort of thing you're thinking of, but I'm really hoping we can figure out artificial wombs for this reason.

I actually have given artificial wombs a little thought. I do think they'd be great: they could eliminate a very common form of suffering, give more options to LGBTQ people, aid in civilizational resilience, and definitely increase the number of wanted children people have in practice. They make sense within many different ethical frameworks.

I also think we're very, very far from them. I'm a systems biologist in a lab that also ventures into reproductive health, and we still know very little about the process of pregnancy. My lab is using the most cutting-edge methods to prove very specific and fundamental things. So, at the same time, I am skeptical we will see artificial wombs in our lifetimes, if ever.

I'm a pronatalist family-values dad with multiple kids, and I'm an EA who believes in a fairly strong version of long-termism, but I'm still struggling to figure out how these value systems are connected (if at all).

Two possible points of connection are (1) PR reasons: having kids gives EAs more credibility with the general public, especially with family-values conservatives, religious people, and parents in general; and (2) personal growth reasons: having kids gives EAs a direct, visceral appreciation of humanity as a multi-generational project, and it unlocks evolved parental-investment values, emotions, and motivations that can be difficult to access in other ways, and that can reinforce long-term personal commitments to long-termism as a value system.

I have a lot of conservative parents who follow me on Twitter, and a common criticism of EA long-termism from them is that EAs are a bunch of young, childless philosophers running around giving moral advice about the future while being unwilling to put any 'skin in the game' with respect to actually creating the human future through personal reproduction. They see a disconnect between EA's abstract valuation of humanity's magnificent potential and EAs concretely deciding to delay or reject parenthood to devote all their time and energy to EA causes and movement-building.

Personally, I understand the serious trade-offs between parental effort (time, energy, money) and EA effort. Those trade-offs are real, and severe, and hard to escape. However, in the long run, I think that EA movement-building will require more prominent EAs actually having kids, partly for the PR reasons, and partly for the personal growth reasons. (I can write more on this at some point if anybody's interested.)

The connection is probably that for many people, the most counter-intuitive aspect of EA-style longtermism is the obligation to bring additional people into existence, which x-risk mitigation and having children both contribute to.

There is no theoretical or historical evidence of Homo sapiens natal investment being independent of environment/population.

For the average EA, I'd guess having children yourself is far less cost-effective than doing EA outreach. Maybe if you see yourself as having highly valuable abilities far beyond the average EA or otherwise very neglected within EA, then having children might look closer to competitive?

This is what Will says in the book: “I think the risk of technological stagnation alone suffices to make the net longterm effect of having more children positive. On top of that, if you bring them up well, then they can be change makers who help create a better future. Ultimately, having children is a deeply personal decision that I won’t be able to do full justice to here—but among the many considerations that may play a role, I think that an impartial concern for our future counts in favour, not against.”

Still, this doesn't make the case for it being competitive with alternatives. EA outreach probably brings in far more people for the same time and resources. Children are a huge investment.

If you're specifically targeting technological stagnation, then outreach and policy work are probably far more cost-effective than having children, because they're much higher leverage. That being said, temporary technological stagnation might buy us more time to prepare for x-risks like AGI.

Of course, Will is doing outreach with this book, and maybe it makes sense to promote people having children, since it's an easier sell than career change into outreach or policy, because people already want to have kids. It's like only asking people to donate 10% of their income in the GWWC pledge, although the GWWC pledge probably serves as a better hook into further EA involvement, and having children could instead be a barrier.

Maybe at some point the marginal returns to further EA outreach will be low enough for having children to look cost-effective, but I don't think we're there yet.

Spoiler alert - I've now got to the end of the book, and "consider having children" is indeed a recommended high impact action. This feels like a big deal and is a big update for me, even though it is consistent with the longtermist arguments I was already familiar with.

Hi Will!

In your book you acknowledge the risks of raising awareness of bioweapons, since it can make malicious actors aware of them and they might start trying to develop them, but you decided to raise awareness of bioweapons anyway.

Personally, given that the book is aimed at a general audience, I think this was a bad decision and will have a net effect of making GCBRs more likely.

My question is, was there much discussion with biosecurity experts regarding the risks of discussing pandemic bioweapons in your book?

Yes, we got extensive advice on infohazards from experts in this and other areas, including from people who have domain expertise and have thought a lot about how to communicate key ideas publicly given infohazard concerns. We were careful not to mention anything that isn’t already in the public discourse.

Good to know, thanks!

Besides Will himself, congrats to the people who coordinated the media campaign around this book! On top of the many articles, such as the ones in Time, the New Yorker, and the New York Times, a ridiculous number of YouTube channels that I follow uploaded a WWOTF-related video recently.

The bottleneck for longtermism becoming mainstream seems to be conveying these inherently unintuitive ideas in an intuitive and high-fidelity way. From the first half I've read so far, I think this book can help a lot in alleviating this bottleneck. Excited for more people to become familiar with these ideas and get in touch with EA! I think we community builders are going to be busy for a while.

Since no one else seems to have asked... what in the dickens is a fire-rave? Is it a rave that's very good and thus attributed "fire" status? If so, PLUR.

Some tweets you could like and share...

Will's launch tweet:

https://mobile.twitter.com/willmacaskill/status/1559517270673657856

My compilation of favourite Will interviews:

https://mobile.twitter.com/peterhartree/status/1559568673920016384

How would What We Owe the Future be different if it weren't aimed at a general audience? Imagine, for example, that the target audience was purely EAs. What would you put in or take out? Would you be bolder in your conclusions?

It would be a very different book if the audience had been EAs. There would have been a lot more on prioritisation (see my response to the Berger thread above), a lot more numbers and back-of-the-envelope calculations, a lot more on AI, a lot more deep philosophical argument, and generally more of a willingness to engage in more speculative arguments. I’d have had more of the philosophy-essay “In this chapter I argue that...” style, and I’d have put less effort into “bringing the ideas to life” via metaphors and case studies. Chapters 8 and 9, on population ethics and on the value of the future, are the chapters that are most similar to how I’d have written the book if it were written for EAs - but even so, they’d still have been pretty different.

Congratulations on the launch! This is huge. I have to ask, though: why is the ebook version not free? I would assume that if you wanted to promote longtermism to a broad audience, you would make the book as accessible as possible. Maybe charging for a copy actually increases the number of people who end up reading it? For example, it would rank higher on bestseller lists, attracting more eyes. Or perhaps the reason is simply to raise funds for EA?

I assume it's not free because the publisher wouldn't allow it as they want to earn money.

It’s because we don’t get to control the price - that’s down to the publisher.

I’d love us to set up a non-profit publishing house or imprint, which would mean we’d have control over the price.

In your recent 80k podcast, you touch on economic growth. You say:

"there is one line of argument you could make — which I think doesn’t ultimately cash out — which is: we’re going really fast, technologically. Our societal wisdom is not going fast enough. If technological growth was just in general going slower, there would be more time for moral reasoning and political development to kind of catch up. So actually, we just want a slower growth rate over the coming centuries. I think that’s not true, but that’s the one argument you could make against it."

Can you say more about why think that's not true?

Hey Will, 

Before I dive into your new book, I have one question:

What impact do you hope WWOTF will have on philanthropic work?

Since hearing about your book in May, my reading and giving have shifted toward long-term matters (I went from tossing cash like the world’s on fire to vetting business managers on their approaches to Patient Philanthropy).

The “holy shit, x-risk” worry is that approximately 25 people die if I read WWOTF and reallocate the $111,100 I had earmarked for AMF via my EA-inspired giving fund.

I know you are pushing LEEP. That seems to align with near-termist methodology. But if the shift in my focus is an example of how your work impacts giving, what is the best-case impact WWOTF could have on philanthropy?

 

PS- Edited. See comments. Thanks Pablo!

This isn't an answer to your question, but I'm curious why you consider this to be a question worth raising for longtermism specifically, given that the same question could be raised for any claim of the form 'x  has priority over y' and the potential to influence how EA resources are allocated. For example, you could ask someone claiming that farmed animal welfare has priority over global health and development whether they worry about the human beings who will predictably die—or be "killed", as you put it—if donors reallocate funds in accordance with this ranking.

(Separately, note that the form of longtermism defended in WWOTF does not in fact imply that benefitting future beings has priority over helping people now.)

I raise the question because this thread is an AMA with Will on his current book. Will’s writing impacts my philanthropy, so I am curious what type of impact he expects it to have on philanthropic work before I dive in. I've edited the question to speak more to that point.

As far as 'x has priority over y’ is concerned, I agree that that type of calculation can be applied to any cause. My EA-inspired philanthropy is largely allocated to organizations that can save more lives more quickly than others. (Grants to folks who can’t afford drugs for multiple myeloma are a current favorite of mine.)

Re: “Killing”. Point taken.

If you haven't already seen it, you might be interested in this comment from Rohin on the attitude of longtermists.

One of the things I like about your book is that it focuses on the least controversial arguments for longtermism, not the standard ones. This is good insofar as the average person is at least willing to consider the argument for longtermism.

In fact, more philosophers should use the least controversial arguments for a philosophical position.

I have relished 'The Will MacAskill Festival' - this month's blizzard of podcasts and articles promoting the book. You and your team should be congratulated on the consistently high quality of this extensive material, which has always been professional and informative and has often been inspiring. Up to 17 August I have found ten podcast appearances and eleven articles, which I have listed with links and my brief comments here. Well done and thank you!

What do you think the effects of ending biological ageing will be on fertility and predictions of trillions of descendants?

Congrats, Will!

How did you and your team decide which EA ideas were most important to try to spread widely among a more general audience?

Haven’t read the book yet, but from reviews I understand that one action implied by longtermism is ensuring that we avoid value lock-in to allow moral progress to continue.

Does your book discuss the downsides of avoiding value lock-in, e.g. the risk of moral regress, or the risk of worse values being locked in in the future if we don’t lock in our current set of values?

Hi Will,

I wrote a post about my concerns with longtermism (an 8-minute read). I had several concerns, but I suppose the most important was about the difference between:

  • avoiding harm to potential future beings
  • helping potential future beings

I will read your books, when I can, to get a deeper understanding of your point of view. Broadly speaking, I think that longtermists should self-efface.

In my earlier post, I wrote:

Longtermists should self-efface

But I have a final concern, and a hope that longtermism will self-efface to address it. In particular, I hope that longtermists will presume that creation of some utility in the experience of a being with moral status, when accomplished through control of that being in context, will contain errors, one or more of:

  1. errors in longtermist accounts of the experience caused for the being.
  2. errors in longtermist beliefs about the control achieved over the being.
  3. errors in longtermist recognition of the moral status of the being.

Total control, as a goal, will suffer at least those three types of error; the least obvious error is the last.

In general, I think that as efforts toward ongoing control over another being increases, treatment of them as if they have moral status decreases, as their presence (or imagination) in your life shifts from one of intrinsic value to one of instrumental value only. 

However, more common in experience is that the errors are of types 1 or 2, and that errors of type 3 occur not because increasing control is actually established, but because frustration mounts as type 1 and 2 errors are acknowledged.

I have doubts over whether you find these concerns relevant, but I haven't read your book yet! :)

I wrote:

I doubt whether longtermists will self-efface in this way, but hopefully they will acknowledge that the life of a person that will never be conceived has no moral status. That acknowledgement will let them avoid some obvious errors in their moral calculations.

In your summaries of the utter basics of longtermism:

  1. future people have moral status
  2. there can be a lot of future people
  3. we can make their lives better

you mention existential risk. So by future people, you must mean possible future people only.  I will read your book to learn more about your views of the moral status of:

  • the human species
  • the event (or act) of conception
  • a person never conceived (in the past or future)

Actually, you can find ideas of the moral status of a person never conceived in:

  • religious views about the spirit
  • a monotonic model of human existence inside a solipsistic model of human conception

Grief over lost opportunities to create future people seems to be an alternative to solipsistic models of human conception. I will defend imagination and its role in creating goals, but solipsism about never-existent people seems less preferable than grief.

Emotions do play a role in deciding values, and their unconscious nature makes them stabilizing influences in the ongoing presence of those values, but they can be aversive. Therefore, emotions are not a feature of all plans for future experiences. In particular, the emphasis within EA culture on:

  • virtual people
  • aligned (enslaved) AI superbeings
  • indefinitely long life extension
  • rationality
  • removal of all suffering from life
  • futures of trillions of people

suggests that a combination of intellectual dishonesty and technological determinism feeds a current of EA ideas. That current runs contrary to lessons evident in our current overshoot and to the more general understandings that:

  • aversive emotions and experience are present 
  • grief, disgust, or boredom (or other feelings) about our fellow humans is inevitable but not a source of truth
  • there are limits to our capabilities as rational, perceptive, and wise moral agents 

One clear path forward is a far future of fewer people, with lower birth rates and a declining global population intentionally accomplished over a few hundred years. A virtue of a smaller future for humanity is an assurance that it keeps resources available for its successors. Another virtue is that it reduces the temptation to try to control humanity. A smaller population has less potential to harm itself or others.

As I wrote, I want longtermists to self-efface. To seek control over others is inevitable in a longtermist scheme. The delay between action and intended consequence in a longtermist scheme leaves opportunities for denial of worthwhile goals in favor of intellectually dishonest or even solipsistic versions of them.


Great work on the book, Will! What do you think the impact of longtermism, and to a greater extent the effective altruism community, will be by the end of this century? Examples of things I'm looking for: How much do you think longtermism and EA will have grown by the end of this century? How much will EA-funded/supported organizations have reduced existential risk and suffering in the world? How many new cause areas do you think will have been identified? (Some confidence intervals would be nice, along with a decade-by-decade breakdown of what you think the progression towards those goals will look like, though I realize you're a busy fellow and may not have the capacity to produce such a detailed breakdown.) I'm curious what concrete goals you think EA and longtermism will have achieved by the end of this century and how you plan on keeping track of how close you are to achieving them.
