
christian @ Metaculus
943 karma
Interests: Forecasting

Comments (37)

@Austin & @Saul Munn — all approved, and we've recalculated the new $2,250 prize pool. Thank you both for your generosity!

Nice! Thanks, Saul. The Prize Pool is now removed from the header (though the commenting prize continues). In case you're curious, the reason I don't put the commenting prize in the Prize Pool field is that it would then automatically generate "takes" for participants based on forecasting performance, when the prize is for comments.

Hey Austin, that's so generous and really appreciated. Let me check with our Programs team, and I'll get back to you.

Hey Ozzie, I'll add that it's also a brand-new post. But yes, your feedback is definitely appreciated.

Metaculus introduces 'Changed my mind' button

This short take is a linkpost for this Discussion Post by Metaculus's Technical Product Manager.

  • Do you sometimes read a comment so good that you revise your whole world model and start predicting the opposite of what you believed before?
  • Do you ever read a comment and think “Huh. Hadn’t thought of that.” and then tweak your prediction by a few percentage points?
  • Do you ever read a comment so clearly wrong that you update in the opposite direction?
  • Do you ever wish you could easily tell other forecasters that what they share is valuable to you?
  • Do you ever want to update your prediction right after reading a comment, without getting RSI in your scrolling finger?

Did a comment change your mind? Give Metaculus's new 'Changed my mind' button a click! 

And for binary questions, clicking the button lets you update your prediction directly from the comment. 

Winners of the China and Global Cooperation Tournament

The China and Global Cooperation Tournament has ended. Thank you to everyone who forecasted! Here are the top three finishers:

  1. twsummer – $687
  2. Ab5A8bd20V – $194
  3. Inertia – $80

The China and Global Cooperation Tournament featured questions related to the future of China and its role on the world stage. Participants forecasted on questions such as the global uptake of China’s currency, the number of Japanese Air Force responses to Chinese aircraft, and the combined market cap of large Chinese companies.

Metaculus: Winners of the Global Pulse Tournament + Forecaster Impact

The Global Pulse Tournament has concluded. Forecasters contributed to two academic studies conducted by Dr. Philipp Schoenegger, Associate Professor Barbara Fasolo, and Associate Professor Matteo Galizzi, all at the Behavioural Lab at the London School of Economics.

Congratulations to the top three winners:

  1. skmmcj – $233
  2. datscilly – $231
  3. SpottedBear – $110

Dr. Schoenegger shared the following about the research the tournament supported:

I wanted to thank all of you for your hard work in this tournament! It was really amazing to follow the forecasts and discussions over the past few months (even though I, at times, really wanted to jump in and forecast myself)!

We used the forecasts you generated in two academic studies, where we presented a report made up of your forecasts to two samples of decision-makers who were forecasting on questions that were directly connected to, but distinct from, those asked in the tournament. While I am not yet able to share results, as we are still in the analysis phase, I can tell you that we wouldn't have been able to study this in such a thorough way without the actual tournament data you provided, so thank you!

And last but not least, congratulations to the winners!

You can find more details about the research shared by Dr. Schoenegger here. You can also see more details and the full leaderboard for the Global Pulse Tournament here.

Metaculus FluSight Challenge 2023/24 Winners + Forecasting Impact


Metaculus forecaster impact in our third annual FluSight Challenge:

• Strengthened CDC flu forecasts
• Helped train mechanistic models
• 1 successful abstract submission by our partner, Dr. Thomas McAndrew, who leads Lehigh University's Computational Uncertainty Lab
• Sped progress on 2 manuscripts 

The flu burden has subsided for the season, and with it the third annual FluSight Challenge has concluded. The top three prize-winners earning a share of the $5,000 prize pool are:

  1. SpottedBear – $3,432
  2. skmmcj – $914
  3. datscilly – $297

Congratulations to the top three winners – who also earned the gold, silver, and bronze medals in the tournament – and congratulations to all the prizewinners!

Our returning partner for this project, Dr. Thomas McAndrew, who leads Lehigh University's Computational Uncertainty Lab, shared the following:

We wish to thank everyone who submitted forecasts of the peak timing and intensity of influenza hospitalizations throughout this 2023/24 season. Forecasts supported training a mechanistic model to generate forecasts for all 50 states in the country. Your contributions have led to a successful abstract submission (below), and this work has spurred two manuscripts: one focused on a computational technique to extend forecasts of peak timing/intensity from 10 states to all 50 states; and a second focused on how to train a mechanistic model on human judgment ensemble forecasts of peak timing/intensity. Your work here builds upon previous human judgment-supported mechanistic modeling on a synthetic outbreak. Thank you again for all the hard work and dedication during the influenza season.

Abstract: For the 2022/2023 season, influenza accounted for 31m cases, 360k hospitalizations, and 21k deaths, costing US healthcare ~$4.6 billion. Forecasts of influenza hospitalizations provide health officials with advance warning. Prior research has revealed that forecasts generated by human judgment are similar in accuracy to mathematical models. However, little work has compared equally weighted and performance-weighted human judgment ensembles. We collected weekly human judgment forecasts of the peak number of incident hospitalizations (peak intensity) and the epidemic week in which this peak occurs (peak time) for 10 US states from Oct 2023 to March 2024. The median, 25th, and 75th percentiles were extracted for each week. We found that, for the performance-weighted ensemble, forecast uncertainty decreased just before and after peak intensity and peak timing. However, uncertainty for the equally weighted ensemble did not decrease. For both ensembles, the median prediction of peak intensity was smaller than the truth at the beginning of the season and approached the truth monotonically. For peak time, the performance-weighted ensemble's median prediction was later than the truth and approached the truth only after the true peak time was observed. We observed that the performance-weighted ensemble tended to produce more accurate forecasts of peak timing and intensity compared to the equally weighted ensemble. One potential mechanism for this boost in accuracy was that the performance-weighted ensemble correctly assigned large weights to experienced forecasters and small weights to inexperienced forecasters. Experienced forecasters may be needed when asked to predict peak timing/intensity of an ongoing outbreak.
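For readers curious about the distinction the abstract draws, here is a minimal illustrative sketch of an equally weighted versus a performance-weighted ensemble. Everything in it is assumed for the example: the forecaster scores, the inverse-error weighting scheme, and the use of a weighted mean of point forecasts. The study itself worked with full forecast distributions and extracted median/quartile percentiles, which this sketch does not attempt to reproduce.

```python
import numpy as np

# Hypothetical point forecasts of peak hospitalizations from five forecasters.
forecasts = np.array([1200.0, 950.0, 1100.0, 1400.0, 1050.0])

# Hypothetical past accuracy scores (lower error = more experienced/accurate).
past_errors = np.array([80.0, 300.0, 120.0, 500.0, 150.0])

# Equally weighted ensemble: every forecaster counts the same.
equal_weights = np.full(len(forecasts), 1.0 / len(forecasts))
equal_ensemble = np.dot(equal_weights, forecasts)

# Performance-weighted ensemble: weight each forecaster by inverse past error,
# normalized to sum to 1, so accurate forecasters contribute more.
raw_weights = 1.0 / past_errors
perf_weights = raw_weights / raw_weights.sum()
perf_ensemble = np.dot(perf_weights, forecasts)

print(f"Equally weighted ensemble:     {equal_ensemble:.0f}")
print(f"Performance-weighted ensemble: {perf_ensemble:.0f}")
```

In this toy example the performance-weighted combination pulls the ensemble toward the historically accurate forecasters, which is the mechanism the abstract proposes for the accuracy boost.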

You can see more details and the full leaderboard for the FluSight Challenge 2023/24 here.

We will be in touch with prize winners to arrange payments. We are also happy to facilitate donations to the organizations listed here.

Congratulations again to SpottedBear, skmmcj, and datscilly!

Thanks! I've created a new link, which shouldn't expire, and I've updated the post.

Great to see this! Absolutely, we're looking forward to sharing more Metaculus collaborations with interesting public thinkers in the near future.
