Thanks for all the questions, all - I’m going to wrap up here! Maybe I'll do this again in the future, hopefully others will too!
Hi,
I thought that it would be interesting to experiment with an Ask Me Anything format on the Forum, and I’ll lead by example. (If it goes well, hopefully others will try it out too.)
Below I’ve written out what I’m currently working on. Please ask any questions you like, about anything: I’ll then either respond on the Forum (probably over the weekend) or on the 80k podcast, which I’m hopefully recording soon (and maybe as early as Friday). Apologies in advance if there are any questions which, for any of many possible reasons, I’m not able to respond to.
If you don't want to post your question publicly or non-anonymously (e.g. if you're asking a “Why are you such a jerk?” sort of question), or if you don’t have a Forum account, you can use this Google form.
What I’m up to
Book
My main project is a general-audience book on longtermism. It’s coming out with Basic Books in the US, Oneworld in the UK, Volante in Sweden and Gimm-Young in South Korea. The working title I’m currently using is What We Owe The Future.
It’ll hopefully complement Toby Ord’s forthcoming book. His is focused on the nature and likelihood of existential risks, and especially extinction risks, arguing that reducing them should be a global priority of our time. He describes the longtermist arguments that support that view, but without relying heavily on them.
In contrast, mine is focused on the philosophy of longtermism. On the current plan, the book will make the core case for longtermism, and will go into issues like discounting, population ethics, the value of the future, political representation for future people, and trajectory change versus extinction risk mitigation. My goal is to make an argument for the importance and neglectedness of future generations in the same way Animal Liberation did for animal welfare.
Roughly, I’m dedicating 2019 to background research and thinking (including posting on the Forum as a way of forcing me to actually get thoughts into the open), and then 2020 to actually writing the book. I’ve given the publishers a deadline of March 2021 for submission; if I meet that, it would come out in late 2021 or early 2022. I’m planning to speak at a small number of universities in the US and UK in late September of this year to get feedback on the core content of the book.
My academic book, Moral Uncertainty, (co-authored with Toby Ord and Krister Bykvist) should come out early next year: it’s been submitted, but OUP have been exceptionally slow in processing it. It’s not radically different from my dissertation.
Global Priorities Institute
I continue to work with Hilary and others on the strategy for GPI. I also have some papers on the go:
- The case for longtermism, with Hilary Greaves. It’s making the core case for strong longtermism, arguing that it’s entailed by a wide variety of moral and decision-theoretic views.
- The Evidentialist’s Wager, with Aron Vallinder, Carl Shulman, Caspar Oesterheld and Johannes Treutlein, arguing that if one aims to hedge under decision-theoretic uncertainty, one should generally go with evidential decision theory over causal decision theory.
- A paper, with Tyler John, exploring the political philosophy of age-weighted voting.
I have various other draft papers, but have put them on the back burner for the time being while I work on the book.
Forethought Foundation
Forethought is a sister organisation to GPI, which I take responsibility for: it’s legally part of CEA and independent of the University. We had our first class of Global Priorities Fellows this year, and will continue the programme into future years.
Utilitarianism.net
Darius Meissner and I (with help from others, including Aron Vallinder, Pablo Stafforini and James Aung) are creating an introduction to classical utilitarianism at utilitarianism.net. Even though ‘utilitarianism’ gets several times the search traffic of terms like ‘effective altruism,’ ‘givewell,’ or ‘peter singer’, there’s currently no good online introduction to utilitarianism. This seems like a missed opportunity. We aim to put the website online in early October.
Centre for Effective Altruism
We’re down to two very promising candidates in our CEO search; this continues to take up a significant chunk of my time.
80,000 Hours
I meet regularly with Ben and others at 80,000 Hours, but I’m currently considerably less involved with 80k strategy and decision-making than I am with CEA.
Other
I still take on select media, especially podcasts, and select speaking engagements, such as for the Giving Pledge a few months ago.
I’ve been taking more vacation time than I used to (planning six weeks in total this year), and I’ve been dealing on and off with chronic migraines. I’m not sure if the additional vacation time has decreased or increased my overall productivity, but the migraines have decreased it by quite a bit.
I am continuing to try (and often fail) to become more focused in what work projects I take on. My long-run career aim is to straddle the gap between research communities and the wider world, representing the ideas of effective altruism and longtermism. This pushes me in the direction of prioritising research, writing, and select media, and I’ve made progress in that direction, but my time is still more split than I'd like.
Are you happy with where EA as a movement has ended up? If you could go back and nudge its course, what would you change?
Relative to the base rate of how wannabe social movements go, I’m very happy with how EA is going. In particular: it doesn’t spend much of its time on internal fighting; the different groups in EA feel pretty well-coordinated; it hasn’t had any massive PR crises; it’s done a huge amount in a comparatively small amount of time, especially with respect to moving money to great organisations; it’s in a state of what seems like steady, sustainable growth. There’s a lot still to work on, but things are going pretty well.
What I could change historically: I wish we’d been a lot more thoughtful and proactive about EA’s culture in the early days. In a sense the ‘product’ of EA (as a community) is a particular culture and way of life. Then the culture and way of life we want is whatever will have the best long-run consequences. Ideally I’d want a culture where (i) 10% or so of people who interact with the EA community are like ‘oh wow these are my people, sign me up’; (ii) 90% of people are like ‘these are nice, pretty nerdy people; it’s just not for me’; and (iii) almost no-one is like, ‘wow, these people are jerks’. (On (ii) and (iii): I feel like the Quakers is the sort of thing I’m think... (read more)
Do you think EA has the problem of "hero worship"? (I.e. where opinions of certain people, you included, automatically get much more support instead of people thinking for themselves) If yes, what can the "worshipped" people do about it?
Yeah, I do think there’s an issue of too much deference, and of subsequent information cascades. It’s tough, because intellectual division of labour and deference is often great, as it means not everyone has to reinvent the wheel for themselves. But I do think in the current state of play there’s too much deference, especially on matters that involve a lot of big-picture worldview judgments, or rely on priors a lot. I feel that was true in my own case - about a year ago I switched from deferring to others on a number of important issues to assessing them myself, and changed my views on a number of things (see my answer to ‘what have you changed your mind about recently’).
I wish more researchers wrote up their views, even if in brief form, so that others could see how much diversity there is, and where, and so we avoid a bias where the more meme-y views get more representation than more boring views simply by being more likely to be passed along communication channels. (Maybe more AMAs could help with this!) I also feel we could do more to champion less well-known people with good arguments, especially if their views are in some ways counter to the EA mainstream. (Two people I’d highlight here are Phil Trammell and Ben Garfinkel.)
Thank you, I'm flattered! But remember, all: Will MacAskill saying we have good arguments doesn't necessarily mean we have good arguments :)
I enjoy reading Phil's blog here: https://philiptrammell.com/blog/
Anon asks: "Do you think climate change is neglected within EA?"
I think there’s a weird vibe where EA can feel ‘anti’ climate change work, and I think that’s an issue. I think the etiology of that sentiment is (i) some people raising climate change work as a proposal to benefit the global poor, and I think it’s very fair to argue that bednets do better than the best climate change actions with respect to that specific goal; (ii) climate change gets a lot of media time, including some claims that aren’t scientifically grounded (e.g. that climate change will literally directly kill everyone on the planet), and some people (fairly) respond negatively to those claims.
But climate change is a huge problem, and working on clean tech, nuclear power, carbon policy etc are great things to do. And I think the upsurge of concern about the rights of future generations that we’ve seen from the wider public over the last couple of decades is really awesome, and I think that longtermists could do more to harness that concern and show how concern for future generations generalises to other issues too. So I want to be like, ‘Yes! And….’ with respect to climate change.
Then is climate chang... (read more)
What do you think the typical EA Forum reader is most likely wrong about?
I don’t know about ‘most likely’, but here’s one thing that I feel gets neglected: the value of concrete, short-run wins and symbolic actions. I think a lot about Henry Spira, the animal rights activist that Peter Singer wrote about in Ethics into Action. He led the first successful campaign to limit the use of animals in medical testing, and he was able to have that first win by focusing on experiments at New York’s American Museum of Natural History, which involved mutilating cats in order to test their sexual performance afterwards. From a narrow EA perspective, the campaign didn’t make any sense: the benefit was something like a dozen cats. But, at least as Singer describes it, it was the first real win in the animal liberation movement, and thereby created a massive amount of momentum for the movement.
I worry that in current EA culture people feel like every activity has to be justified on the basis of marginal cost-effectiveness, and that the fact that an action would constitute some definite and symbolic, even if very small, step towards progress — and be the sort of thing that could provide fuel for a further movement — isn’t ‘allowable’ as a reason f... (read more)
Yes, some symbolic activities will turn out to be high-impact, but we have to beware survivorship bias (ie, think of all the symbolic activities that went nowhere).
I think we need to figure out how to better collectively manage the fact that political affiliation is a shortcut to power (and hence impact), yet politicisation is a great recipe for blowing up the movement. It would be a shame if avoiding politics altogether is the best we can do.
What have you changed your mind on recently?
Lots! Treat all of the following as ‘things Will casually said in conversation’ rather than ‘Will is dying on this hill’ (I'm worried about how messages travel and transmogrify, and I wouldn't be surprised if I changed lots of these views again in the near future!). But some things include:
- I think existential risk this century is much lower than I used to think — I used to put total risk this century at something like 20%; now I’d put it at less than 1%.
- I find ‘takeoff’ scenarios from AI over the next century much less likely than I used to. (Fast takeoff in particular, but even the idea of any sort of ‘takeoff’, understood in terms of moving to a higher growth mode, rather than progress in AI just continuing existing two-century-long trends in automation.) I’m not sure what numbers I’d have put on this previously, but I’d now put medium and fast takeoff (e.g. that in the next century we have a doubling of global GDP in a 6 month period because of progress in AI) at less than 10%.
- In general, I think it’s much less likely that we’re at a super-influential time in history; my next blog post will be about this idea
- I’m much more worried about a great power war in my lifeti... (read more)

This is just a first impression, but I'm curious about what seems a crucial point - that your beliefs seem to imply extremely high confidence of either general AI not happening this century, or that AGI will go 'well' by default. I'm very curious to see what guides your intuition there, or if there's some other way that first-pass impression is wrong.
I'm curious about similar arguments that apply to bio & other plausible x-risks too, given what's implied by low x-risk credence
The general background worldview that motivates this credence is that predicting the future is very hard, and we have almost no evidence that we can do it well. (Caveat: I don’t think we have great evidence that we can’t do it, either.) When it comes to short-term forecasting, the best strategy is to use reference-class forecasting (‘outside view’ reasoning; often continuing whatever trend has occurred in the past), and make relatively small adjustments based on inside-view reasoning. In the absence of anything better, I think we should do the same for long-term forecasts too. (Zach Groff is working on a paper making this case in more depth.)
So when I look to predict the next hundred years, say, I think about how the past 100 years has gone (as well as giving consideration to how the last 1000 years and 10,000 years (etc) have gone). When you ask me about how AI will go, as a best guess I continue the centuries-long trend of automation of both physical and intellectual labour; in the particular context of AI I continue the trend where within a task, or task-category, the jump from significantly sub-human to vastly-greater-than-human level performance is rapid (on the order o... (read more)
The argument you give in this paragraph only makes sense if "safe" is defined as "not killing everyone" or "avoids risks that most people care about". But what about "safe" as in "not causing differential intellectual progress in a wrong direction, which can lead to increased x-risks in the long run" or "protecting against or at least not causing value drift so that civilization will optimize for the 'right' values in the long run, whatever the appropriate meaning of that is"?
If short-term extinction risk (and in general risks that most people care about) is small compared to other kinds of existential risks, it would seem to make sense for longtermists to focus their efforts more on the latter.
If you believe "<1% X", that implies ">99% ¬X", so you should believe that too. But if you think >99% ¬X seems too confident, then you should apply modus tollens and moderate your <1% X belief. When other people give e.g. 30% X, that only implies 70% ¬X, which seems more justifiable to me.
I use AGI as an example just because if it happens, it seems more obviously transformative & existential than biorisk, where it's harder to reason about whether people survive. And because Will's views seem to diverge quite strongly from average or median predictions in the ML community, not that I'd read all too much into that. Perhaps further, many people in the EA community believe there's good reason to think those predictions are too conservative if anything, and have arguments for significant probability of AGI in the next couple decades, let alone century.
Since Will's implied belief is >99% no xrisk this century, this either means AGI won't happen, or that it has a very high probability of going well (getting or preserving most of the possible value in the future, which seems the most useful definition of existential for EA purposes). That's at first glance of course, so not wanting the whole book, just want an intuition for how you seem to get such high confidence ¬X, especially when it seems to me there's some plausible evidence for X.
I disagree with your implicit claim that Will's views (which I mostly agree with) constitute an extreme degree of confidence. I think it's a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for "events that are at least as transformative as the industrial revolution".
That base rate seems pretty low. And that's not actually what we're talking about - we're talking about AGI, a specific future technology. In the absence of further evidence, a prior of <10% on "AGI takeoff this century" seems not unreasonable to me. (You could, of course, believe that there is concrete evidence on AGI to justify different credences.)
On a different note, I sometimes find the terminology of "no x-risk", "going well" etc. unhelpful. It seems more useful to me to talk about concrete outcomes and separate this from normative judgments. For instance, I believe that extinction through AI misalignment is very unlikely. However, I'm quite uncertain about whether people in 2019, if you handed them a crystal ball that shows what will happen (regarding AI), would generally think that things ar... (read more)
Maybe one source of confusion here is that the word "extreme" can be used either to say that someone's credence is above (or below) a certain level/number (without any value judgement concerning whether that's sensible) or to say that it's implausibly high/low.
One possible conclusion would be to just taboo the word "extreme" in this context.
Agree on "going well" being under-defined. I was mostly using that for brevity, but it probably caused more confusion than it was worth. A definition I might use is "preserves the probability of getting to the best possible futures", or even better if it increases that probability. Mainly because from an EA perspective (even if people are around) if we've locked in a substantially suboptimal moral situation, we've effectively lost most possible value - which I'd call x-risk.
The main point was fairly object-level - Will's beliefs imply it's near 1% likelihood of AGI in 100 years, or near 99% likelihood of it "not reducing the probability of the best possible futures", or some combination like <10% likelihood of AGI in 100 years AND even if we get it, >90% likelihood of it not negatively influencing the probability of the best possible futures. Any of these sound somewhat implausible to me, so I'm curious for the intuition behind whichever one Will believes.
... (read more)
Very interesting points! I largely agree with your (new) views. Some thoughts:
- If you think that extinction risk this century is less than 1%, then in particular, you think that extinction risk from transformative AI is less than 1%. So, for this to be consistent, you have to believe either
- a) that it's unlikely that transformative AI will be developed at all this century,
- b) that transformative AI is unlikely to lead to extinction when it is developed, e.g. because it will very likely be aligned in at least a narrow sense. (I wrote up some arguments for this a while ago.)
- Which of the two do you believe to what extent? For instance, if you put 10% on transformative AI this century – which is significantly more conservative than "median EA beliefs" – then you’d have to believe that the conditional probability of extinction is less than 10%. (I’m not saying I disagree – in fact, I believe something along these lines myself.)
- What do you think about the possibility of a growth mode change (i.e. much faster pace of economic growth and probably also social change, comparable to the industrial revolution) for reasons other than AI? I feel that this is somewhat neglected in EA
... (read more)

Thanks! I’ve read and enjoyed a number of your blog posts, and often found myself in agreement.
See my comment to nonn. I want to avoid putting numbers on those beliefs to avoid anchoring myself; but I find them both very likely - it’s not that one is much more likely than the other. (Where ‘transformative AI not developed this... (read more)
I'd be super interested in hearing you elaborate more on most of the points! Especially the first two.
I’d like to vote for more detail on:
Unless the change in importance is fully explained by the relative reprioritization after updating downward on existential risks.
Do I understand you correctly that you’re relatively less worried about existential risks because you think they are less likely to be existential (that civilization will rebound) and not because you think that the typical global catastrophes that we imagine are less likely?
It depends on who we point to as the experts, which I think there could be disagreement about. If we’re talking about, say, FHI folks, then I’m very clearly in the optimistic tail - others would put much higher x-risk, takeoff scenario, and chance of being superinfluential. But note I think there’s a strong selection effect with respect to who becomes an FHI person, so I don’t simply peer-update to their views. I’d expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view. If I were wrong about that I’d change my view. One relevant piece of evidence is that the Metaculus (a community prediction site) algorithm puts the chance of 95%+ of people dead by 2100 at 0.5%, which is in the same ballpark as me.
I think there's some evidence that Metaculus, while a group of fairly smart and well-informed people, are nowhere near as knowledgeable as a fairly informed EA (perhaps including a typical user of this forum?) for the specific questions around existential and global catastrophic risks.
One example I can point to is that for this question on climate change and GCR before 2100 (that has been around since October 2018), a single not-very-informative comment from me was enough to change the community median from 24% to 10%. This suggests to me that Metaculus users did not previously have strong evidence or careful reasoning on this question, or perhaps GCR-related thinking in general.
Now you might think that actual superforecasters are better, but based on the comments given so far for COVID-19, I'm unimpressed. In particular, the selected comments point to use of reference classes that EAs and avid Metaculus users have known to be flawed for over a week before the report came out (e.g. using China's low deaths as evidence that its containment can be easily replicated in other countries as the default scenario).
Now COVID-19 is not an existential risk or GCR, but it is an "out of distribution" problem showing clear and fast exponential growth that seems unusual for most questions superforecasters are known to excel at.
Hey, thanks so much for all the responses! I’m impressed by how much take-up this has had! My migraines have been worse over the past week, so I’m sorry if my responses are slow and erratic (and the 80k podcast has been bumped back to early October), but they will come! And if I don’t respond to you yet, it might be just because the question is good and deserves thought!
What's one piece of research / writing that you think is missing from the public internet, but you think a Forum writer could create?
If I could pick just one, it would be an assessment of existential risk conditional on some really major global catastrophe (e.g. something that kills 90% / 99% / 99.9% of the world’s population). I think this is really crucial because: (i) for many of the proposed extinction risks (nuclear, asteroids, supervolcanoes, even bio), I find it really hard to see how they could directly kill literally everyone, but I find it much easier to see how they could kill some very large proportion of the population; (ii) there’s been very little work done on evaluating how likely (or not) civilisation would be to rebound from a really major global catastrophe. (This is the main thing I know of.)
Ideally, I’d want the piece of research to be directed at a sceptic. Someone who said: “Even if 99.9% of the world’s population were killed, there would still be 7 million people left, approximately the number of hunter-gatherers prior to the Neolithic revolution. It didn’t take very long — given Earth-level timescales — for hunter-gatherers to develop agriculture and then industry. And the catastrophe survivors would have huge benefits compared to them: inherited knowledge, leftover technology, low-lyin... (read more)
I’m also just really pro Forum users trying to independently verify arguments made by others in EA (or endorsed by others in EA), or check data that’s being widely used. E.g. I thought Jeff Kaufman’s series on AI risk was excellent. And recently Ben Garfinkel has been trying to locate the sources of the global population numbers that underlie the ‘hyperbolic growth’ idea and I’ve found that work important and helpful.
(In general, I think we can sometimes have a double standard where we will happily tear apart careful, widely-cited research done by people outside the community, but then place a lot of weight on ideas or arguments that have come from within the community, even if they haven’t gone through the equivalent of rigorous peer-review.)
Do you have any thoughts on why there is not much engagement/participation in technical AI safety/alignment research by professional philosophers or people with philosophy PhDs? (I don't know anyone except one philosophy PhD student who is directly active in this field, and Nick Bostrom who occasionally publishes something relevant.) Is it just that the few philosophers who are concerned about AI risk have more valuable things to do, like working on macrostrategy, AI policy, or trying to get more people to take ideas like existential risk and longtermism seriously? Have you ever thought about at what point it would start to make sense for the marginal philosopher (or the marginal philosopher-hour) to go into technical AI safety? Do you have a sense of why "philosophers concerned about AI risk" as a class hasn't grown as quickly as one might have expected?
On a related note, I feel like encouraging EA people with philosophy background to go into journalism or tech policy (as you did in the recent 80,000 Hours career review) is a big waste, since an advanced education in philosophy does not seem to create an obvious advantage in those fields, whereas there are important philosophical questions in AI alignment for which such a background would be more obviously helpful. Curious what your thinking is here.
It occurs to me that another reason for the lack of engagement by people with philosophy backgrounds may be that philosophers aren't aware of the many philosophical problems in AI alignment that they could potentially contribute to. So here's a list of philosophical problems that have come up just in my own thinking about AI alignment.
EDIT: Since the actual list is perhaps only of tangential interest here (and is taking up a lot of screen space that people have to scroll through), I've moved it to the AI Alignment Forum.
Hey Wei_Dai, thanks for this feedback! I agree that philosophers can be useful in alignment research by way of working on some of the philosophical questions you list in the linked post. Insofar as you're talking about working on questions like those within academia, I think of that as covered by the suggestion to work on global priorities research. For instance, I know that working on some of those questions would be welcome at the Global Priorities Institute, and I think FHI would probably also welcome philosophers working on AI questions. But I agree that that isn’t clear from the article, and I’ve added a bit to clarify it.
But maybe the suggestion is working on those questions outside academia. We mention DeepMind and OpenAI as having ethics divisions, but likely only some of the philosophical questions relevant to AI safety are pursued in those kinds of centers, and it could be worth listing more non-academic settings in which philosophers might be able to pursue alignment-relevant questions. There are, for instance, lots of AI ethics organizations, though most are only focused on short-term issues, and are more concerned with 'implications' than with the philosophical questions that arise
... (read more)

Thanks for making the changes. I think they address most of my concerns. However, I think splitting the AI safety organizations mentioned between academic and non-academic is suboptimal, because what seems most important is that someone who can contribute to AI safety go to an organization that can use them, whether that organization belongs to a university or not. On a pragmatic level, I'm worried that someone will see a list of organizations where they can contribute to AI safety, and not realize that there's another list in a distant part of the article.
Individual grants from various EA sources seem worth mentioning. I would also suggest mentioning FHI for AI safety research, not just global priorities research.
Ok, tha
... (read more)

Sure. To clarify, I think it would be helpful for philosophers to think about those problems specifically in the context of AI alignment. For example, many mainstream decision theorists seem to think mostly in terms of what kind of decision theory best fits with our intuitions about how humans should make decisions, whereas for AI alignment it's likely more productive to think about what would actually happen if an AI were to follow a certain decision theory, and whether we would prefer that to what would happen if it were to follow a different one. Another thing that would be really helpful is to act as a bridge from mainstream philosophy research to AI alignment research, e.g., pointing out relevant results from mainstream philosophy when appropriate.
Ah ok. Any chance you could discuss this issue with h
... (read more)

What are your top 3 "existential risks" to EA? (I.e. risks that would permanently destroy or curtail the potential of Effective Altruism - both the community and the ideas.)
What has been the biggest benefit to your well-being since getting into EA? What would you advise the many EAs who struggle with being happy/not burning out? (Our community seems to have a higher than average rate of mental illness.)
Honestly, the biggest benefit to my wellbeing was taking action about depression, including seeing a doctor, going on antidepressants, and generally treating it like a problem that needed to be solved. I really think I might not have done that, or might have done it much later, were it not for EA - EA made me think about things in an outcome-oriented way, and gave me an extra reason to ensure I was healthy and able to work well.
For others: I think that Scott Alexander's posts on anxiety and depression are really excellent and hard to beat in terms of advice. Other things I'd add: I'd generally recommend that your top goal should be ensuring that you're in a healthy state before worrying too much about how to go about helping others; if you're seriously unhappy or burnt out, fixing that first is almost certainly the best altruistic thing you can do. I also recommend maintaining and cultivating a non-EA life: having a multi-faceted identity means that if one aspect of your life isn't going so well, then you can take solace in other aspects.
A significant amount of your effort and the focus of the EA movement as a whole is on longtermism. Can you steelman arguments for why this might be a bad idea?
No need to steelman - there are good arguments against this, and it’s highly non-obvious what % of EA effort should be on longtermism, even from the perspective of longtermism. Some arguments:
I think all these considerations are significant, and are part of why I’m in favour of EA having a diversity of causes and worldviews. (Though not necessarily on the ‘three cause area’ breakdown which we currently have, which I think is a bit narrow).
What mistake do you most commonly see EAs making?
Pretty hard to say, but the ‘hero worship’ comment (in the sense of ‘where opinions of certain people automatically get much more support instead of people thinking for themselves’) seems pretty accurate.
Insofar as this is a thing, it has a few bad effects: (i) means that more meme-y ideas get overrepresented relative to boring ideas; (ii) EA ideas don’t get stress-tested enough, or properly ‘voted’ on by crowds; (iii) there’s a problem of over-updating (“80k thinks everyone should earn to give!”; “80k thinks no-one should earn to give!” etc), especially on messages (like career advice) that are by their nature very person- and context-relative.
Very few of these questions are about you as a person. That seems worth noting. On the one hand I'd be interested in what your favourite novel is. On the other hand that seems an inappropriate question to ask - "Will isn't here to answer questions about his personality, he's here to maximise wellbeing". Should we want to humanise key figures within the EA ideological space (like you)?
If yes, what made you laugh recently?
I think asking more personal questions in AMAs is a good idea!
Favourite novel: I normally say Crime and Punishment by Dostoevsky, but it’s been a long time since I’ve read it so I’m not sure I can still claim that. I just finished The Dark Forest by Liu Cixin and thought it was excellent.
Laugh: my partner is a very funny person. Last thing that made me laugh was our attempt at making cookies, but it’s hard to convey by text.
This reminds me of the most important AMA question of all:
MacAskill, would you rather fight 1 horse-sized chicken, or 100 chicken-sized horses?
I'm pretty terrified of chickens, so I'd go for the horses.
I remember going to a 'fireside chat' at EAGxOxford a few years ago - the first such conference I'd been to. The topic was general wellbeing amongst EAs. Hearing Will and the other participants talk candidly about difficulties they'd faced was very humbling and humanising.
I don't think we should necessarily shy away from such questions.
What piece of advice would you give to you 20 year old self?
Because my life has been a string of lucky breaks, ex post I wouldn’t change anything. (If I’d gotten good advice at age 20, my life would have gone worse than it in fact has.) But assuming I don’t know how my life would turn out:
Then more concretely (again, this is assuming I don’t know how things actually turn out):
(I’m assuming that “Buy Apple stock” is not in the spirit of the question!)
What do you think the best argument is against strong longtermism?
> From the perspective of longtermism, for any particular action, there are thousands of considerations/ scenarios that point in the direction of the action being good, and thousands of considerations/ scenarios that point in the direction of the action being bad.
I worry that this type of problem is often exaggerated, e.g. with the suggestion that 'proposed x-risk A has some arguments going for it, but one could make arguments for thousands of other things' when the thousands of other candidates are never produced and could not be produced and appear to be in the same ballpark. When one makes a serious effort to catalog serious candidates at reasonable granularity the scope of considerations is vastly more manageable than initially suggested, but cluelessness is invoked in lieu of actually doing the search, or a representative subset of the search.
I think you might be misunderstanding what I was referring to. An example of what I mean: Suppose Jane is deciding whether to work for Deepmind on the AI safety team. She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad. Because there’s some precisification of her credences on which taking the job is good, and some on which taking the job is bad, then if she uses a Liberal decision rule (= it is permissible for you to perform any action that is permissible according to at least one of the credence functions in your set), it’s permissible for her to take the job or not take the job.
The issue is that, if you have imprecise credences and a Liberal decision rule, and are a longtermist, then almost all serious contenders for actions are permissible.
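As a toy sketch of the Jane example above (all the payoff numbers here are made up purely for illustration, not part of the original example):

```python
# Toy sketch of the Liberal decision rule under imprecise credences.
# Payoff values are hypothetical; only the [0.4, 0.6] interval is from the example.

def expected_value(p_speed_up, value_speed_up=-1.0, value_slow_down=1.0):
    """EV of Jane taking the job, given credence p that it speeds up AI."""
    return p_speed_up * value_speed_up + (1 - p_speed_up) * value_slow_down

# Jane's imprecise credence: every p in [0.4, 0.6] is a precisification.
precisifications = [0.4, 0.45, 0.5, 0.55, 0.6]

# Liberal rule: an act is permissible if at least one credence function
# in the set rates it at least as good as the alternative (EV >= 0 here).
take_permissible = any(expected_value(p) >= 0 for p in precisifications)
refuse_permissible = any(expected_value(p) <= 0 for p in precisifications)

# Both acts come out permissible: some precisification favours each.
print(take_permissible, refuse_permissible)
```

Since EV is positive at p = 0.4 and negative at p = 0.6, the Liberal rule permits both taking and refusing the job, which is the problem being pointed at.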
So the neartermist would need to have some way of saying (i) we can carve out the definitely-good part of the action, which is better than not-doing the action on all precisifications of the credence; (ii) we can ignore the other parts of the action (e.g. the flow-through effects) that are good on ... (read more)
That's an awfully (in)convenient interval to have! That is the unique position for an interval of that length with no distinguishing views about any parts of the interval, such that integrating over it gives you a probability of 0.5 and expected impact of 0.
If the argument from cluelessness depends on giving that kind of special status to imprecise credences, then I just reject them for the general reason that coarsening credences leads to worse decisions and predictions (particularly if one has done basic calibration training and has some numeracy and skill at prediction). There is signal to be lost in coarsening on individual questions. And for compound questions with various premises or contributing factors making use of the signal on each of those means y... (read more)
What has been your biggest success? What has been your biggest mistake?
I guess simply getting the ball rolling on GWWC should probably win, but the thing I feel proudest of is probably DGB — I don’t think it’s perfect, but I think it came together well, and it’s something where I followed my gut even though others weren’t as convinced that writing a book was a good idea, and I’m glad I did.
On mistakes: A huge number in the early days, of which poor communication with GiveWell was the biggest and really could have led to EA as a genuine unified community never forming; the controversial early 80k campaign around earning to give was myopic, too. More recently, I think I really messed up in 2016 with respect to coming on as CEA CEO. I think for being CEO you should be either in or out, where being ‘in’ means 100% committed for 5+ years. Whereas for me it was always planned as a transitional thing (and this was understood internally but I think not communicated properly externally), and when I started I had just begun a tutorial fellowship at Oxford, which other tutorial fellows normally describe as ‘their busiest ever year’, and was also still dealing with the follow-on PR from DGB, so it was like I already had one and a half other full-time jobs. And there wa... (read more)
To what extent, if any, have online sources (such as Less Wrong) influenced your thinking, as compared to "traditional" philosophy?
If you had the option of making a small change to EA by pressing a button, would you do it? If so, what would it be? What about a big change?
Is there a question you want to answer that hasn't been asked yet? What's your answer to it?
What topics do you wish were more discussed within EA?
What do you think are the things or ideas that most casual EAs don't know much about or appreciate enough, but are (deservedly or undeservedly) very influential in EA hubs or organizations like CEA, 80K, GPI, etc? Some candidates I have in mind for this are things like cluelessness, longtermism, the possibility of short AI timelines, etc.
What similar gaps in easily-accessible EA topics do you think exist?
(I think Rob Wiblin's now-archived effective altruism FAQ was the best intro to EA around - much better than anything similar offered 'officially'. I've also toyed with writing up some of David Pearce's work in a more accessible format.)
I'm surprised by how much low-hanging fruit there is still left to edit Wikipedia in order to make more people aware of (and provide them with a more sophisticated understanding of) important ideas that are relevant to EA. I've been adding and improving Wikipedia content on the side for two years now, with a clear focus on articles that are related to altruism.
In my experience, editing Wikipedia is really i) easy, ii) fun, iii) there are many content gaps left to fill, and iv) it exposes the content you write to a much larger audience (sometimes several orders of magnitude larger) than if you wrote instead for a private blog or the EA Forum. Against this background, I'm surprised that not more knowledgeable EAs contribute to Wikipedia (feel free to reach out to me if you would potentially like to do just that).
A word of caution: the quality control on Wikipedia is fairly strong and it is generally disliked if people make edits that come across as ideologically-motivated marketing rather than as useful information. For this reason, I aspire to genuinely improve the quality of the article with all the edits I make, though my choice of articles to edit is informed by ... (read more)
Population ethics; moral uncertainty.
I wonder if someone could go through Conceptually and make sure that all the Wikipedia entries on those topics are really good?
Rob's FAQ is also my favorite introduction to EA, and I'll be spending some time over the next month thinking about whether there's a good way to blend the style of that introduction with the current EA.org introduction (which is due for an update).
Anon asks: "1. Population ethics: what view do you put most credence in? What are the best objections to it?"
Total view: just add up total wellbeing.
Best objection: the very repugnant conclusion. Take any population Pi with N people in unadulterated bliss, for any N. Then there is some number M such that a population Pj consisting of 10^100 × N people living in utter hell, plus M people with lives barely worth living, is better than Pi.
"2. Population ethics: do you think questions about better/worse worlds are sensibly addressed from a "fully impartial" perspective? (I'm unsure what that would even mean... maybe... the perspective of all possible minds?). Or do you prefer to anchor reflection on population ethics in the values of currently existing minds (e.g. human values)?"
Yeah, I think we should try to answer this ‘from the point of view of the universe’.
"3. Given your work on moral uncertainty, how do you think about claims associated with conservative world views? In particular, things like (a) the idea that revolutionary individual reasoning is rather error prone, and requires the refining discipline of tradition as a guide... (read more)
I'm interested in hearing more about your thoughts on the Long Reflection. How likely is it to happen by default? How likely is it to produce a good outcome by default? What kind of things do you see as useful for making it more likely to happen and more likely to produce a good outcome? Anything else you want to say about it? Will you be writing it up somewhere in the near future (in which case I could just wait for that)?
The GPI Research Agenda references "Greg Lewis, The not-so-Long Reflection?, 2018" but I'm unable to find it anywhere.
ETA: I've been told that Greg's article is currently in draft form and not publicly available, and both Toby Ord and Will MacAskill's upcoming books will have some discussions of the Long Reflection.
If you could persuade people of any professional background to dedicate their careers to working for the current core EA orgs, what kinds of backgrounds/skill sets/career histories would be represented which aren't currently?
Have you considered doing the mainstream intellectual podcasts as a means of repping 80k? Eg David Pakman, Dave Rubin, whatever you might get onto? If you don't think that's a good idea, why not?
Do you worry that your involvement in utilitarianism.net could exacerbate the existing confusion and lead people to think that EA and utilitarianism are the same thing?
I am reminded of the story where Victor Hugo, who was away from Paris when Les misérables was first published, wrote his editor a letter inquiring about the sales of his much anticipated novel. The letter contained only one character: ?
A few days later, the reply arrived. It was equally brief: !
Les misérables was an immediate best-seller.
(Unfortunately, the story is likely apocryphal.)
What do you think is the biggest professional mistake you made? (of the ones you can share) What is the biggest single professional 'right choice' you made? [Side-note: interesting we don't have a word for the opposite of mistake, just like we don't have one for catastrophe..]
Do you think economic growth is key to popular acceptance of longtermism, as increased wealth leads people to adopt post-materialist values?
What do you see as the best longterm path for EA? Should we try to stay small and weird, or try to get buy-in from the masses? How important is academic influence for the long term success of EA?
Will there be anything in the book new for people already on board with longtermism?
What is your opinion on Extinction Rebellion? (asking this question because they seem concerned about future generations, able to draw attention, and (somewhat) open to changing their mind.)
How would effective altruism be different if we're living in a simulation?
How do you decide your own cause prioritization? Relatedly, how do you decide where to donate to?
Do you have a coach? Why, or why not? (I feel they really help with stuff like "stay focused on a few topics" and keeping one accountable to those goals)
I note your main project is writing a book on longtermism. Would you like to see the EA movement going in a direction where it focuses exclusively, or almost exclusively, on longtermist issues? If not, why not?
To explain the second question, it would seem answering 'no' to the first question would be in tension with advocating (strong) longtermism.
I'm pro there being a diversity of worldviews and causes in EA - I'm not certain in longtermism, and think such diversity is a good thing even on longtermist grounds. I mention reasons in the 'steel manning arguments against EA's focus on longtermism' question. And I talked a little bit about this in my recent EAG London talk. Other considerations are helping to avoid groupthink (which I think is very important), positive externalities (a success in one area transfers to others) and the mundane benefit of economies of scale.
I do think that the traditional poverty/animals/x-risk breakdown feels a bit path-dependent though, and we could have more people pursuing cause areas outside of that. I think that your work fleshing out your worldview and figuring out what follows from it is the sort of thing I'd like to see more of.
Do you think that the empirical finding that pain and suffering are distributed along a lognormal distribution (cf. Logarithmic Scales of Pleasure and Pain) has implications for how to prioritize causes? In particular, what do you say about these tentative implications:
[Meta note: this post doesn't appear on the front page and it probably should! I only found it through the RSS feed.]
Hi William! Great idea.
Hope it's still possible to submit these!:
- I love the EA movement - the community, values and work that goes on is just very aligned with me personally. One thing that stands out though is that every organization either recommends, was born out of, or has sent staff members to work at, seemingly every other organization within the movement - the OpenPhil/GiveWell/Good Ventures group, the 80k/CEA group, some others like CFAR and so on. Do you see this as a risk, or a positive in terms of maintaining some unity around the overall m
Who do you think it would be most valuable for you to be put in touch with?
Can you speak to the expected value/impact (either marginal or total) of writing a book?
I've been trying to evaluate career decisions about studying psychology and neuroscience. Do you think that studying motivation from a neuroscientific perspective is an effective way to contribute to AI alignment work? Do you think that, considering the scale of mental illnesses such as anxiety and depression, doing work on better understanding them is also highly effective?
Personally, I would be leery of doing an AMA currently because I don't feel I have that much that the whole community ought to spend time reading.
Hmm, that's a shame. I hereby promise to ask some questions to whoever does the next AMA!
PlayPumps: overrated or underrated?
LessWrong has a kind of AMA open thread where a bunch of people, including some EAs, have been doing AMAs. I'm not sure if others are still monitoring it and answering questions, but I am at least.
Anon asks: “When you gave evidence to the UK government about the impacts of artificial intelligence, why didn't you talk about AI safety (beyond surveillance)?
https://www.parliament.uk/ai-committee”
I think you’re mistaking me for someone else!
I recently finished the last season of Vox's Future Perfect podcast. One of the focus areas was questions about democracy and charitable giving, like how Bill Gates has helped lots of folks, but his projects are determined ultimately by his personal decision matrix. There are many more examples: trust funds based on poorly conceived goals, private donations to public schools, social engineering by large companies. Do questions like this trouble you? Do you feel that democracy and the effective altruism movement are at odds?
If there were an election, how do you think you would decide who to vote for? Would you produce any content on that decision making process?
What are your thoughts on the rise of left-wing politics in the US (e.g. the Sanders campaign, the election of AOC and the rest of the squad, the victories and near-victories at the local levels)? Related: how do you think EAs should think about the 2020 US presidential race?
Hello Will,
I have a question about longtermism and its use within the EA movement. While I find your (strong) longtermism hypothesis quite plausible and convincing, I do consider some "short-termist" cause areas to be quite important, even in the long term. (I always go back to hearing that "you have to be a short-termist to care about wild animal suffering", which struck me as odd.)
Because of that, I liked that the classic longterm cause area was called x/s-risk prevention, because that was one way to create value in the longterm. I t... (read more)