Metaculus currently has over 1,000 open forecasting questions, many of which are longtermist or EA-focused.
These include several EA-focused categories, e.g. the EA Survey 2025 series, an Alt-Protein Tournament, Animal Welfare, the "Ragnarök" global catastrophic risks series, and other questions on the distant future.
I am volunteering at Rethink Priorities doing forecasting research, and I am looking to see if there are EA-related questions with long time horizons (>5 years) that people are interested in seeing predictions on. If there are, I am willing to put some time into operationalising them and submitting them to Metaculus.
I think this would be directly useful to those who have these questions (and to others who find them interesting), and would also expand our database of such questions for the purpose of improving long-term forecasting.
This question is part of a project of Rethink Priorities.
It was written by Charles Dillon, a volunteer for Rethink Priorities. Thanks to Linch Zhang for advising on the question. If you like our work, please consider subscribing to our newsletter. You can see all our work to date here.
For this, would you prefer to condition on something like there being no transformative AI, or not? Sometimes these questions end up dominated by such considerations, and it is plausible you only care about the answer conditional on that not happening.
Thanks for these!
Just to be clear, you specifically mean to exclude not-yet-EAs who set up DAFs (donor-advised funds) in, say, 2025?