I found your post interesting and your proposal compelling. Noam Bardin (founder of Waze) recently founded Post.News, a platform that aims to implement a version of specific parts of your proposal in a mixed news and social media site. His interview on the Pivot Podcast gave few details about the product (mainly his "vision"), although they are still early in development and seem to have rushed their launch given the happenings at Twitter. Nonetheless, the parts of that podcast episode that I found most interesting and potentially relevant to your proposal are:
42:30 - Waze established a hierarchical system of map editors who have built reputation over time and have jurisdiction over specific geographic areas. A version of this that may be implemented in Post.News sounds like having dedicated content moderators for specific geographic regions or topics.
This sounded similar to the fact-checkers in your proposal, who list the topics they are experts in. A key difference between the Waze and Post.News examples and your proposal is that your proposal selects fact-checkers at random for each article to prevent collusion.
The key similarity between Waze, Post.News and your proposal is that these actors' objectives are to optimise for some specific score - whether that is increasing map accuracy, minimising user reports of community violations, or closeness to the average score of the other fact-checkers for a given article - with the purpose being that optimising for these scores improves the quality of the service.
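To make the last of those concrete, here is a minimal sketch of how "closeness to the average score of the other fact-checkers" could be computed per article. The function name, the 0-10 rating scale and the leave-one-out averaging are my own assumptions for illustration, not something specified in your proposal or by Post.News.

```python
from statistics import mean

def checker_agreement_scores(ratings):
    """
    Hypothetical sketch: score each fact-checker on one article by how
    close their rating is to the average of the *other* checkers'
    ratings (leave-one-out), so nobody's own rating inflates their score.
    `ratings` maps checker_id -> rating on an assumed 0-10 scale.
    Returns checker_id -> agreement score in [0, 1] (1 = exact agreement).
    """
    scale = 10.0  # assumed width of the rating scale
    scores = {}
    for checker, rating in ratings.items():
        peer_mean = mean(v for c, v in ratings.items() if c != checker)
        scores[checker] = 1.0 - abs(rating - peer_mean) / scale
    return scores

# Example: three randomly assigned checkers rate one article's accuracy.
print(checker_agreement_scores({"a": 8, "b": 7, "c": 2}))
```

An outlier checker (here "c") gets a noticeably lower agreement score, which is the property that makes this usable as an input to a reputation system.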
53:57 - Establishing a two-tier user system in which:
Content on Post.News by verified users (real identity is verified using a paid third-party service) is distributed amongst and beyond their followers, using a reputation system to determine the degree of amplification. Verified users who cross some threshold of 'bad content', as reported by content moderators or other users, will be removed from the reputation system and their content will only be distributed to their followers. Content on Post.News by anonymous users (real identity is not verified) is distributed only to their followers.
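As I understood it, that distribution rule reduces to something like the sketch below. The threshold, the amplification formula and the parameter names are my own guesses for illustration, not how Post.News actually implements it.

```python
def audience_size(verified, followers, reputation, bad_reports,
                  report_threshold=5):
    """
    Hypothetical sketch of the two-tier distribution rule described above.

    - Unverified (anonymous) users: content reaches only their followers.
    - Verified users over the bad-content threshold: dropped from the
      reputation system, so their content also reaches only followers.
    - Other verified users: reach scales with reputation (assumed in [0, 1]).
    """
    if not verified or bad_reports >= report_threshold:
        return followers
    amplification = 1.0 + reputation
    return int(followers * amplification)

# Example: a verified user with strong reputation and no reports.
print(audience_size(verified=True, followers=1000, reputation=0.8, bad_reports=0))  # 1800
```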
I am curious about what happens to content in your proposal that is never chosen for post-editorial review, and what happens to a post while it is in review. Do you think a two-tier user system for content is necessary to attract content, writers, fact-checkers and readers (i.e. there needs to be a certain volume of content for you to attract users and build the news site)?
Question: In your proposal, how would the system onboard fact-checkers, and what factors would influence the selection of new fact-checkers whose reputation scores are 'fresh', either across the platform or for a specific topic? Would bad fact-checkers be weeded out by poor scores from their first few fact-check assignments?
Question: In your proposal, would you enable fact-checkers to list topics of expertise, and if so, how? Again, would topics be removed from fact-checkers' profiles by poor scores from their first few fact-check assignments?
An observation I have about your proposal is the high volume of fact-checkers you would need to onboard to keep up with article content, particularly given that your proposal needs multiple fact-checkers per topic for a single article.
My prediction is that future AI systems will emerge that perform fact-checking for a significant volume of content (I am uncertain how much). These systems would have their own 'knowledge graph' of facts, extracted from a variety of sources and weighted by source reputations that the systems track, so humans don't have to. Rudimentary versions of this already exist (e.g. Google places an answer in a callout box at the top of search results, and Microsoft's implementation of OpenAI's GPT-3.5 presents web sources when it produces prompt responses). @quinn's comment about Bayesian Truth Serum is similar to what I envision for mechanistic fact-checking.
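For anyone unfamiliar, Bayesian Truth Serum (Prelec, 2004) rewards answers that are "surprisingly common" relative to what respondents predicted others would say, without needing ground truth. A simplified sketch of the scoring rule for a single multiple-choice question, with variable names of my own choosing:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """
    Simplified Bayesian Truth Serum for one multiple-choice question.

    answers:     length-n array of answer indices (0..K-1), one per respondent
    predictions: n x K array; each row is that respondent's predicted
                 distribution of answers across the population
    Returns a length-n array of scores; higher scores mean the answer was
    more common than predicted and the respondent's predictions were
    well calibrated.
    """
    answers = np.asarray(answers)
    predictions = np.asarray(predictions, dtype=float)
    n, k = predictions.shape
    eps = 1e-9  # avoid log(0)

    x_bar = np.bincount(answers, minlength=k) / n          # empirical answer frequencies
    y_bar = np.exp(np.log(predictions + eps).mean(axis=0))  # geometric mean of predictions

    scores = np.empty(n)
    for r in range(n):
        info = np.log((x_bar[answers[r]] + eps) / (y_bar[answers[r]] + eps))
        pred = np.sum(x_bar * np.log((predictions[r] + eps) / (x_bar + eps)))
        scores[r] = info + alpha * pred
    return scores
```

The relevance to fact-checking is that the same "surprisingly common" signal could reward fact-checkers for honest judgements even when no authoritative answer exists, which is the part of your proposal I think machines could eventually take over.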
It's worth noting that I've just started a self-directed research project on this very topic and would be open to chatting further if you are invested in this topic too.
Thanks for posting!