I lead the Existential Security team (previously known as the General Longtermism team) at Rethink Priorities. We are currently focused on helping launch entrepreneurial projects that reduce existential risk. See here for a blog post explaining our team strategy for the year.
My previous work has included nanotechnology strategy research and co-founding EA Pathfinder, which I co-led from April to September 2022. Before joining Rethink Priorities in early 2022, I was a Senior Research Scholar at the Future of Humanity Institute, and before that I completed a PhD in DNA nanotechnology at Oxford University and spent 5 years working in finance as a quantitative analyst.
If you're interested in learning more about nanotechnology strategy research, you could check out this database of resources I made.
Feel free to send me a private message here, or to email me at hello [at] bensnodin dot com.
You can also give me anonymous feedback with this form!
Thanks for these!
I think my general feeling on these is that it's hard for me to tell whether they actually reduced existential risk. Maybe this is just because I don't understand the mechanisms for a global catastrophe from AI well enough. (Because of this, linking to Neel's longlist of theories of impact was helpful, so thank you for that!)
For example, my impression is that some people with relevant knowledge think that technical safety work currently can't achieve very much.
(Hopefully this response isn't too annoying -- I could put in the work to understand the mechanisms for a global catastrophe from AI better, and maybe I will get round to this someday)
I think my motivation comes down to a few things: sustaining my personal motivation for work on existential risk, helping me form accurate beliefs about the general tractability of work on existential risk, and helping me advocate to other people about the importance of work on existential risk.
Thinking about it, maybe it would be pretty great to have someone assemble and maintain a good public list of answers to this question! (Or maybe someone did already and I don't know about it.)
Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?
(Not saying this is an original suggestion -- you basically mention it in your thoughts on what you could have done differently.)
I (with lots of help from my colleague Marie Davidsen Buhl) made a database of resources relevant to nanotechnology strategy research, with articles sorted by relevance for people new to the area. I hope it will be useful for people who want to look into doing research in this area.
Thanks, would be interested to discuss more! I'll give some reactions here for the time being
This sounds astonishingly high to me (as does 1-2% without TAI)
(For context / slight warning on the quality of the below: I haven't thought about this for a while, and in order to write the below I'm mostly relying on old notes + my current sense of whether I still agree with them.)
Maybe we don't want to get into an AGI/TAI timelines discussion here (and I don't have great insights to offer there anyway) so I'll focus on the pre-TAI number.
I definitely agree that it seems like we're not at all on track to get to advanced nanotechnology in 20 years, and I'm not sure I disagree with anything you said about what needs to happen to get there etc.
I'll try to say some things that might make it clearer why we are currently giving different numbers here (though to be clear, as is hopefully apparent in the post, I'm not especially convinced about the number I gave)
Scientists convince themselves that Drexler's sketch is infeasible more often than one might think. But to someone at that point there's little reason to pursue the subject further, let alone publish on it. It's of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley's participation in the debate certainly didn't redound to his reputation.
So there's not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that's at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn't go through in generality or can't be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn't put much weight on the apparent lack of rebuttals.
I definitely agree with the points about incentives for people to rebut Drexler's sketch, but I still think the lack of great rebuttals is some evidence here. (I don't think that represents a shift in my view -- I guess I just didn't go into enough detail in the post to get to this kind of nuance; it's possible that was a mistake.)
Kind of reacting to both of the points you made / bits I quoted above: I think convincing me (or someone more relevant than me, like major EA funders, etc.) that the chance that advanced nanotechnology arrives by 2040 is less than 1 in 10,000 would be pretty valuable. I don't know if you'd be interested in working to try to do that, but if you were I'd potentially be very keen to support that. (Similarly for ~showing something like "near-infeasibility" for Drexler's sketch.)
Thanks for writing this Joey, very interesting!
Since the top 20% of founders who enter your programme generate most of the impact, and it's fairly predictable who these founders will be, it seems like getting more applicants in that top 20% bracket could be pretty huge for the impact you're able to have. Curious if you have any reaction to that? I don't know whether expanding the applicant pool at the top end is a top priority for the organisation currently.