I am the co-founder of, and researcher at, the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last decade I have worked with MIRI, CFAR, EA Global, and Founders Fund, and have done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.
In July, we published the following research posts:
Additionally, our Researcher/Writer Michael Aird published:
Thanks for writing the post! I think we need a lot more strategy research, with cause prioritization being one of the most important types, and that is why we founded Convergence Analysis (theory of change and strategy, our site, and our publications). Within our focus on x-risk reduction, we do cause prioritization, describe how to do strategy research, and have been working to fill the EA information hazard policy gap. We are mostly focused on strategy research as a whole, which lays the groundwork for cause prioritization. Here are some of our articles:
We’re a small and relatively new group, and we’d like to see more people and groups do this type of research, and for this field to get more support and grow. There is a vast amount to do and immense opportunity to do good with this type of research.
Nice post!
Here are a couple of additional posts by Gwern that I think are worth checking out:
https://www.lesswrong.com/posts/ktr39MFWpTqmzuKxQ/notes-on-psychopathy
https://www.lesswrong.com/posts/Ft2Cm9tWtcLNFLrMw/notes-on-the-psychology-of-power
Following Sean here, I'll also describe my motivation for taking the bet.
After Sean suggested the bet, I felt as if I had to take him up on it for the group epistemic benefit; my hand was forced. Firstly, I wanted to get people to take nCoV seriously and to think thoroughly about it (both for the present case and for modelling possible future pandemics) - from an inside-view model perspective, the numbers I was getting were quite worrisome. I felt that if I didn't take him up on the bet, people wouldn't take the issue as seriously, nor take explicitly modeling things themselves as seriously either. I was trying to socially counter what sometimes feels like a learned helplessness people have with respect to analyzing things or solving problems. Also, the EA community is especially clear-thinking, and I think a place like the EA Forum is a good medium for problem solving around things like nCoV.
Secondly, I generally think that holding people in some sense accountable for their belief statements is a good thing (with some caveats); it improves the collective epistemic process. In general I prefer exchanging detailed models in discussion rather than vague intuitions mediated by a bet, but exchanging intuitions is still useful. I also would generally rather make bets about things that are less grim, and wouldn't have suggested this bet myself, but I do think it is important that we make predictions about things that matter, and some of those things are rather grim. In grim bets, though, we should pay close attention to how the bet might appear to parts of the community and make clearer what the intent and motivation behind it is.
Thirdly, I wished to bring more attention and support to the issue, in the hope that it leads people to take sensible personal precautions and that perhaps some of them can influence how things progress. I do not know exactly who reads this, and some readers may have influence, expertise, or cleverness they can contribute.
Hmm... I will take you up on a bet at those odds and with those resolution criteria. Let's make it 50 GBP of mine vs 250 GBP of yours. Agreed?
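(For anyone following along, here is a rough sanity check of what those stakes imply, assuming a simple winner-takes-all payout between the two sides. The side staking 50 GBP breaks even at

$$p = \frac{50}{50 + 250} = \frac{1}{6} \approx 16.7\%,$$

so taking that side is only positive in expectation if you put the probability of winning above roughly one in six, and conversely for the side staking 250 GBP.)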
I hope you win the bet!
(Note: I generally think it is good for the group epistemic process for people to take bets on their beliefs, but I am not entirely certain about that.)
Other perspectives that are arguably missing, or extensions that could be done, are:
Here is an additional post analyzing the ITN framework: https://forum.effectivealtruism.org/posts/fR55cjoph2wwiSk8R/formalizing-the-cause-prioritization-framework