Nice! Thanks Saul. The Prize Pool is now removed from the header (though the commenting prize continues). In case you're curious, the reason I don't put the commenting prize in the Prize Pool field is that it would then automatically generate "takes" for participants based on forecasting performance, when the prize is for comments.
This short take is a linkpost for this Discussion Post by Metaculus's Technical Product Manager
Did a comment change your mind? Give Metaculus's new 'Changed my mind' button a click!
And for binary questions, clicking the button lets you update your prediction directly from the comment.
The China and Global Cooperation Tournament has ended. Thank you to everyone who forecasted! Here are the top three finishers:
The China and Global Cooperation Tournament featured questions related to the future of China and its role on the world stage. Participants forecasted on questions such as the global uptake of China's currency, the number of Japanese Air Force responses to Chinese aircraft, and the combined market cap of large Chinese companies.
The Global Pulse Tournament has concluded. Forecasters contributed to two academic studies conducted by Dr. Philipp Schoenegger, Associate Professor Barbara Fasolo, and Associate Professor Matteo Galizzi, all at the Behavioural Lab at the London School of Economics.
Congratulations to the top three winners:
Dr. Schoenegger shared the following about the research the tournament supported:
I wanted to thank all of you for your hard work in this tournament! It was really amazing to follow the forecasts and discussions over the past few months (even though I, at times, really wanted to jump in and forecast myself)!
We used the forecasts you generated in two academic studies, where we presented a report made up of your forecasts to two samples of decision-makers who were forecasting on questions that were directly connected to, but distinct from, those asked in the tournament. While I am not yet able to share results as we are still in the analysis phase, I can tell you that we wouldn't have been able to study this in such a thorough way without the actual tournament data you provided, so thank you!
And last but not least, congratulations to the winners!
You can find more details about the research shared by Dr. Schoenegger here. You can also see more details and the full leaderboard for the Global Pulse Tournament here.
Metaculus forecaster impact in our 3rd annual FluSight Challenge:
• Strengthened CDC flu forecasts
• Helped train mechanistic models
• 1 successful abstract submission by our partner, Dr. Thomas McAndrew, who leads Lehigh University's Computational Uncertainty Lab
• Sped progress on 2 manuscripts
The flu burden has subsided for the season, and with it the third annual FluSight Challenge has concluded. The top three prize-winners, each earning a share of the $5,000 prize pool, are:
Congratulations to the top three winners – who also earned the gold, silver, and bronze medals in the tournament – and congratulations to all the prizewinners!
Our returning partner for this project, Dr. Thomas McAndrew, who leads Lehigh University's Computational Uncertainty Lab, shared the following:
We wish to thank everyone who submitted forecasts of the peak timing and intensity of influenza hospitalizations throughout this 2023/24 season. Your forecasts supported training a mechanistic model to generate forecasts for all 50 states in the country. Your contributions have led to a successful abstract submission (below), and this work has spurred two manuscripts: one focused on a computational technique to extend forecasts of peak timing/intensity from 10 states to all 50 states; and a second focused on how to train a mechanistic model on human judgment ensemble forecasts of peak timing/intensity. Your work here builds upon previous human judgment-supported mechanistic modeling on a synthetic outbreak. Thank you again for all the hard work and dedication during the influenza season.
Abstract: For the 2022/2023 season, influenza accounted for 31m cases, 360k hospitalizations, and 21k deaths, costing US healthcare ~$4.6 billion. Forecasts of influenza hospitalizations provide health officials with advanced warning. Prior research has revealed that forecasts generated by human judgment are similar in accuracy to mathematical models. However, little work has compared an equally weighted vs. performance-weighted human judgment ensemble. We collected weekly human judgment forecasts of the peak number of incident hospitalizations (peak intensity) and the epidemic week where this peak occurs (peak time) for 10 US states from Oct 2023 to March 2024. The median, 25th, and 75th percentiles were extracted for each week. We found that, for the performance-weighted ensemble, forecast uncertainty decreased just before and after peak intensity and peak timing. However, uncertainty for the equally weighted ensemble did not decrease. For both ensembles, the median prediction of peak intensity was smaller than the truth at the beginning of the season and approached the truth monotonically. For peak time, the performance-weighted ensemble's median prediction was later than the truth and approached the truth only after the true peak time was observed. We observed that the performance-weighted ensemble tends to produce more accurate forecasts of peak timing and intensity compared to an equally weighted ensemble. One potential mechanism for this boost in accuracy was that the performance-weighted ensemble correctly assigned large weights to experienced forecasters and small weights to inexperienced forecasters. Experienced forecasters may be needed when asked to predict peak timing/intensity of an ongoing outbreak.
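For readers curious how an equally weighted ensemble differs from a performance-weighted one in practice, here is a minimal sketch. The forecaster predictions, past-error values, and the inverse-error weighting scheme are all illustrative assumptions for this example, not the study's actual data or method; the study's weighting approach may differ.

```python
import numpy as np

def weighted_quantiles(values, weights, qs):
    """Quantiles of a sample in which each value carries a weight.

    Sorts the values, accumulates weights with a midpoint rule, and
    linearly interpolates to the requested quantile levels qs.
    """
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = (np.cumsum(w) - 0.5 * w) / np.sum(w)
    return np.interp(qs, cum, v)

# Hypothetical point predictions of peak hospitalizations from four
# forecasters, plus a hypothetical measure of each one's historical error.
preds = np.array([120.0, 150.0, 200.0, 90.0])
past_error = np.array([10.0, 5.0, 40.0, 8.0])

equal_w = np.ones_like(preds)   # equally weighted ensemble
perf_w = 1.0 / past_error       # assumed scheme: weight by inverse past error

qs = [0.25, 0.50, 0.75]
print(weighted_quantiles(preds, equal_w, qs))  # equally weighted percentiles
print(weighted_quantiles(preds, perf_w, qs))   # performance-weighted percentiles
```

With inverse-error weights, the forecaster with the worst track record (the 200 prediction, past error 40) contributes little, so the performance-weighted median sits closer to the historically accurate forecasters than the equally weighted median does.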
You can see more details and the full leaderboard for the FluSight Challenge 2023/24 here.
We will be in touch with prize winners to arrange payments. We are also happy to facilitate donations to the organizations listed here.
Congratulations again to SpottedBear, skmmcj, and datscilly!
@Austin & @Saul Munn — all approved, and we've recalculated the new $2,250 prize pool. Thank you both for your generosity!