Note: Aaron Gertler, a Forum moderator, is posting this with Toby's account. (That's why the post is written in the third person.)
This is a Virtual EA Global AMA: several people will be posting AMAs on the Forum, then recording their answers in videos that will be broadcast at the Virtual EA Global event this weekend.
Please post your questions by 10:00 am PDT on March 18th (Wednesday) if you can. That's when Toby plans to record his video.
About Toby
Toby Ord is a moral philosopher focusing on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?
His earlier work explored the ethics of global health and global poverty, which led him to create Giving What We Can, whose members have pledged hundreds of millions of pounds to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement.
His current research is on avoiding the threat of human extinction, which he considers to be among the most pressing and neglected issues we face. He has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science. His work has been featured more than a hundred times in the national and international media.
Toby's new book, The Precipice, is now available for purchase in the UK and pre-order in other countries. You can learn more about the book here.
How likely do you think we would be to recover from a catastrophe killing 50%/90%/99% of the world population respectively?
Does it worry you that there are very few published peer reviewed treatments of why AGI risk should be taken seriously that are relevant to current machine learning technology?
What would convince you that preventing s-risks is a bigger priority than preventing x-risks?
Suppose that humanity unified to pursue a common goal, and you faced a gamble where that goal would be the most morally valuable goal with probability p, and the most morally disvaluable goal with probability 1-p. Given your current beliefs about those goals, at what value of p would you prefer this gamble over extinction?
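For concreteness, one minimal way to frame that threshold (this framing is mine, not part of the original question; it assumes the best and worst goals are equal in magnitude of value and that extinction is treated as a zero-value baseline):

\[
p \cdot V_{\text{best}} + (1 - p) \cdot V_{\text{worst}} > V_{\text{extinction}}
\]

Under those assumptions (\(V_{\text{worst}} = -V_{\text{best}}\) and \(V_{\text{extinction}} = 0\)), indifference falls at \(p = 1/2\), so any answer above one half would reflect an asymmetry between value and disvalue, or a non-neutral valuation of extinction.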
The timing of this AMA is pretty awkward, since many people will presumably not yet have access to the book, or will not have finished reading it. For comparison, Stuart Russell's new book was published in October and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don't want to waste Toby's time by asking questions that will be answered in it. Is there any way to delay the AMA or hold a second one at a later date?
Thanks for the comment! Toby is going to do a written AMA on the Forum later in the year too. This one is timed so that we can have video answers during Virtual EA Global.
What is your solution to Pascal's Mugging?
What's a regular disagreement that you have with other researchers at FHI? What's your take on it and why do you think the other people are wrong? ;-)
We're currently in a time of global crisis, as the number of people infected by the coronavirus continues to grow exponentially in many countries. This is a bit of a hard question, but a crisis is often when governments substantially refactor things, because it's finally transparent that they're not working. So: can you name a feasible, concrete change in the UK government (or a broader policy for any developed government) that you think would put us in a far better position for future situations like this, especially future pandemics that have a much more serious chance of being an existential catastrophe?
In an 80,000 Hours interview, Tyler Cowen states:
How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead's conclusion in this piece? Do you think Cowen's...
What do you think is the biggest mistake that the EA community is currently making?
Is your view that:
(i) the main thing that matters for the long-term is whether we get to the stars
(ii) this could plausibly happen in the next few centuries
(iii) therefore the main long-termist relevance of our actions is whether we survive the next few centuries and can make it to the stars?
Or do you put some weight on the view that long-term human and post-human flourishing on Earth could also account for >1% of the total plausible potential of our actions?
Do you think that "a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill's] view [about the level of risk this century] than to the median FHI view"? If so, should we defer to such a panel out of epistemic modesty?
What have you changed your mind on recently?
There are many ways that technological development and economic growth could potentially affect the long-term future, including:
What do you think is the overall sign of economic growth? Is it different for developing and developed countries?
Note: The fifth bullet point was added after Toby recorded his answers.
If you could only convey one idea from your new book to people who are already heavily involved in longtermism, what would it be?
Can you tell us a specific insight about AI that has made you positively update on the likelihood that we can align superintelligence? And a negative one?
What are the three most interesting ideas you've heard in the last three years? (They don't have to be the most important, just the most surprising/brilliant/unexpected/etc.)
Do you think we will ever have a unified and satisfying theory of how to respond to moral uncertainty, given the huge structural and substantive differences between apparently plausible moral theories? Will MacAskill's thesis is one of the best treatments of this problem, and it seems like it would be hard to build an account of how one ought to respond to e.g. Rawlsianism, totalism, libertarianism, person-affecting views, absolutist rights-based theories, and so on, across most choice situations.
What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?
Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of misaligned artificial general intelligence?
Should non-suffering focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?
Do you think there are any actions that would obviously decrease existential risk? (I took this question from here.) If not, does this significantly reduce the expected value of work to reduce existential risk or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, unilateralist's curse etc.)?
If you could convince a dozen of the world's best philosophers (who aren't already doing EA-aligned research) to work on topics of your choice, which questions would you ask them to investigate?
Are there any specific natural existential risks that are significant enough that more than 1% of EA resources should be devoted to them? 0.1%? 0.01%?
Can you tell us something funny that Nick Bostrom once said that made you laugh? We know he used to do standup in London...
On balance, what do you think is the probability that we are at or close to a hinge of history (either right now, this decade, or this century)?
What are the most important new ideas in your book for someone who's already been in the EA movement for quite a while?
You break down a "grand strategy for humanity" into reaching existential security, the long reflection, and then actually achieving our potential. I like this, and think it would be a good strategy for most risks.
But do you worry that we might not get a chance for a long reflection before having to "lock in" certain things to reach existential security?
For example, perhaps to reach existential security given a vulnerable world, we put in place "greatly amplified capacities for preventive policing and global governance" (Bostrom)...
What are your top three productivity tips?
Do you think that climate change has been neglected in the EA movement? What are some options that currently seem great to you for having a very large impact in steering us in a better direction on climate change?
We have a lot of philosophers and philosophically-minded people in EA, but only a tiny number of them are working on philosophical issues related to AI safety. Yet from my perspective as an AI safety researcher, it feels like there are some crucial questions which we need good philosophy to answer (many listed here; I'm particularly thinking about philosophy of mind and agency as applied to AI, a la Dennett). How do you think this funnel could be improved?
What's a book you've read that has impacted how you think or who you are, and that you expect most people here won't have read?
Can you describe a typical day in your life with sufficient granularity that readers can have a sense of what "being a researcher at a place like FHI" is like?
What's up with Pascal's Mugging? Why hasn't this pesky problem just been authoritatively solved? (And if it has, what's the solution?) What is your preferred answer? Which bullets do you bite (e.g., a bounded utility function, assigning probability 0 to some events, a decision-theoretic cop-out, etc.)?
Which ethical views do you have non-negligible credence in and, if true, would substantially change what you think ought to be prioritized, and how? How much credence do you have in these views?
Suppose your life's work ended up having negative impact. What is the most likely scenario under which this could happen?
As a sharp mind, respected scholar, and prominent member of the EA community, you have a certain degree of agency, an ability to start new projects and make things happen, and no small amount of oomph and mojo. How are you planning to use this agency in the coming decades?
What's one book that you think most EAs have not yet read and you think that they should (other than your own, of course)?
What are some of your current challenges? (maybe someone in the audience can help!)
What are you looking for in a research / operations colleague?
How robust do you think the case is for any specific longtermist intervention? E.g. do new considerations constantly affect your belief in their cost-effectiveness, and by how much?
In your book, you define an existential catastrophe as "the destruction of humanity's longterm potential". Would defining it instead as "the destruction of the vast majority of the longterm potential for value in the universe" capture the concept you wish to refer to? Would it perhaps capture it slightly more accurately and explicitly, just in a less accessible or emotionally resonant way?
I wonder this partly because you write:
...

Do you think the problems of infinite ethics give us reason to reject totalism or longtermism? If so, what is the alternative?
What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on high uncertainty arguments? (See here and here at 34:38 for pushback.)
How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you're pretty confident about both of these, do you think additional research on infinite ethics is relatively low priority?
How much uncertainty is there in your case for existential risk? What would you put as the probability that, in 2100, the expected value of a substantial reduction in existential risk over the course of this century will be viewed by EA-minded people as highly positive? Do you think we can predict what direction future crucial considerations will point based on what direction past crucial considerations have pointed?
What do you think of applying Open Phil's outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations, an early career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world over a small chance of making a major contribution to reducing GCBRs?
Is the cause area of reducing great power conflict still entirely in the research stage, or is there anything that people can concretely do? (Brian Tse's EA Global talk seemed to mostly call for more research.) What do you think of greater transparency about military capabilities (click here and go to 24:13 for context) or promoting a more positive view of China (same link at 25:38 for context)? Do you think EAs should refrain from criticizing China on human rights issues (click here and search the transcript for "I noticed that over the last few...")?
What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?
...

What are your views on the prioritization of extinction risks vs other longtermist interventions/causes?
Which interventions/causes do you think are best to support/work on according to views in which extra people with good or great lives not being born is not at all bad (or far outweighed by other considerations)? E.g. different person-affecting views, or the procreation asymmetry.
You seem fairly confident that we are at "the precipice", or "a uniquely important time in our story". This seems very plausible to me. But how long of a period are you imagining for the precipice?
The claim is much stronger if you mean something like a century than something like a few millennia. But even if the "hingey" period is a few millennia, then I imagine that us being somewhere in it could still be quite an important fact.
(This might be answered past chapter 1 of the book.)
Do you lean more towards a preferential account of value, a hedonistic one, or something else?
How do you think tradeoffs between pleasure and suffering are best grounded according to a hedonistic view? It seems like there's no objective one-size-fits-all trade-off rate, since different people could have different preferences about the same quantities of pleasure and suffering in themselves.
What new evidence would cause the biggest shifts in your priorities?
What are the three least interesting ideas you've heard in the last three years? (They don't have to be the least important, just the least surprising/brilliant/unexpected/etc.)
Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of engineered pandemics?
What do you like to do during your free time?
Global society will have a lot to learn from the current pandemic. Which lesson would be most useful to "push" from EA's side?
I assume this question sits in between "best lesson to learn" and "lesson most likely to be learned": we probably want to push a lesson that is both useful to learn and one that our push actually helps bring into policy.
What are your thoughts on how to evaluate or predict the impact of longtermist/x-risk interventions, or specifically of efforts to generate and spread insights on these matters? E.g., how do you think about decisions like which medium to write in, and whether to focus on generating ideas vs publicising ideas vs fundraising?
How would your views change (if at all) if you thought it was likely that there are intelligent beings elsewhere in the universe that "are responsive to moral reasons and moral argument" (quote from your book)? Or if you thought it's likely that, if humans suffer an existential catastrophe, other such beings would evolve on Earth later, with enough time to potentially colonise the stars?
Do your thoughts on these matters depend somewhat on your thoughts on moral realism vs antirealism/subjectivism?
What are some of your favourite theorems, proofs, algorithms, and data structures?
What are some directions you'd like the EA movement or some parts of the EA movement to take?
If you've read the book 'So Good They Can't Ignore You', what do you think are the most important skills to master to be a writer/philosopher like yourself?
Hi Toby! Thanks for being such a great source of inspiration for philosophy and EA. You're a great role model to me!
Some questions, feel free to pick:
1) What philosophers are your sources of inspiration and why?
(I'll put my other questions in separate comments.)