Hello Effective Altruism Forum, I am Nate Soares, and I will be here to answer your questions tomorrow, Thursday the 11th of June, 15:00-18:00 US Pacific time. You can post questions here in the interim.
On Monday of last week, I took the reins as executive director of the Machine Intelligence Research Institute. MIRI focuses on studying technical problems of long-term AI safety. I'm happy to chat about what that means, why it's important, why we think we can make a difference now, what the open technical problems are, how we approach them, and some of my plans for the future.
I'm also happy to answer questions about my personal history and how I got here, or about personal growth and mindhacking (a subject I touch upon frequently in my blog, Minding Our Way), or about whatever else piques your curiosity. This is an AMA, after all!
EDIT (15:00): All right, I'm here. Dang there are a lot of questions! Let's get this started :-)
EDIT (18:00): Ok, that's a wrap. Thanks, everyone! Those were great questions.
Short version: FAI. (You said "hope", not "expect" :-p)
Longer version: Hard question, both because (a) I don't know how you want me to trade off between how nice the advance would be and how likely we are to get it, and (b) my expectations for the next five years are very volatile. In the year since Nick Bostrom released Superintelligence, there has been a huge wave of interest in the future of AI (due in no small part to the efforts of FLI and their wonderful Puerto Rico conference!), and my expectations of where I'll be in five years range all the way from "well that was a nice fad while it lasted" to "oh wow there are billions of dollars flowing into the field".
But I'll do my best to answer. The most obvious Schelling point I'd like to hit in 5 years is "fully naturalized AIXI," that is, a solid theoretical understanding of how we would "brute force" an FAI if we had ungodly amounts of computing power. (AIXI is an equation that Marcus Hutter uses to define an optimal general intelligence under certain simplifying assumptions that don't hold in the real world: AIXI is sufficiently powerful that you could use it to destroy the world while demonstrating something that would surely look like "intelligence" from the outside, but it's not yet clear how you could use it to build a generally intelligent system that maximizes something in the world -- for example, even if you gave me unlimited computing power, I wouldn't yet know how to write the program that stably and reliably pursues the goal of turning as much of the universe as possible into diamond.)
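For readers who haven't seen it: the AIXI equation (roughly, in the form Hutter gives it; notation varies across presentations) picks each action by maximizing expected future reward under a length-weighted mixture over all computable environments:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \max_{a_{k+1}} \sum_{o_{k+1} r_{k+1}} \cdots \max_{a_m} \sum_{o_m r_m} \left(r_k + \cdots + r_m\right) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here $U$ is a universal Turing machine and the inner sum ranges over every environment-program $q$ (of length $\ell(q)$) consistent with the interaction history so far, so shorter environments get exponentially more weight. Note the simplifying assumption doing the work: the agent and the environment are separate systems exchanging actions, observations, and rewards, which is exactly the assumption that fails for an agent embedded in the physical world.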
Formalizing "fully naturalized AIXI" would require a better understanding of decision theory (How do we want advanced systems to reason about counterfactuals? Preferences alone are not enough to determine what counts as a "good action"; that notion also depends on how you evaluate the counterfactual consequences of taking various actions, and we lack a theory of idealized counterfactual reasoning.), logical uncertainty (What does it even mean for a reasoner to reason reliably about something larger than the reasoner? Solomonoff induction basically works by having the reasoner be just friggin' bigger than the environment, and I'd be thrilled if we could get a working theoretical model of "good reasoning" in cases where the reasoner is smaller than the environment), and a whole host of other problems (many of them covered in our technical agenda).
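To unpack the "bigger than the environment" point: Solomonoff induction predicts by mixing over every program that could have generated the data so far, weighted by program length. Schematically (one standard way of writing the universal semimeasure):

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}$$

where $U$ is a universal prefix machine and the sum ranges over programs $p$ whose output begins with the observed string $x$. Each hypothesis $p$ is a complete simulation of the entire environment, so the reasoner is implicitly assumed to have enough resources to run (a mixture over) whole copies of the world it's predicting -- which is precisely what we can't assume for a real agent that is itself a small part of that world.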
5 years is a pretty wildly optimistic timeline for developing fully naturalized AIXI, though, and I'd be thrilled if we could make concrete progress in any one of the topic areas listed in the technical agenda.