You'll have to trust someone, unless you want to be alone.
If you want to share ideas with folks who have their own;
if you want to work together on a job too big for one,
you'll have to trust someone.
If anyone betrays you, the theory of the game
says you should practice "Tit for Tat" ideally; all the same,
you'll have to trust someone.
So give the default gift of trust to everyone you meet:
the Senator in Washington, the beggar on the street.
Some will be untrustworthy, you know that in advance,
but hold your "Tat" back in reserve; give everyone a chance.
The soldier in his foxhole, the general in his jet:
although we fight together, we die alone; and yet,
you'll have to trust someone.
*
Whom (or what) do you trust? Why?
Theists trust in God. Patriots trust in the Constitution. Children [initially] trust their parents. Trump supporters trust Fox & Friends. Democrats trust CNN. Mulder trusts no one. Good scientists also claim to trust no one, but they are selective in their degree of distrust: they tend to trust the Bureau of Standards, the top peer-reviewed journals and their most respected colleagues. Atheists and libertarians trust their own judgment. Idiots trust everything they see on the Internet.
Some people claim we live in an Information Economy. This is naive. Information is cheap. Information is so readily available that we are all drowning in a sea of information. What is really precious is knowledge of which information is reliable and worth knowing. How can we access that knowledge? Only by trusting some authority. I wish I could say, "Trust only your own judgment!" but in order to do so (and not be a fool) you have to make sure your judgment is informed; and that returns you to the original question: which information can you trust?
So, whether you are a scientist or a voter or a consumer, you have the same problem: who is worthy of your trust? And how do you decide?
Most people today allow someone else to decide for them, thereby placing all their trust in that person. Perhaps they recognize that different people are qualified in different arenas, so they trust Chris Wallace or Anderson Cooper to accurately report the news, Suzy Menkes or Kim Kardashian to judge fashion, Nature or Popular Science to cover the latest science, and so on. Confirmation bias plays a huge role in these choices, even for physicists. Most people are wise enough to recognize this, but what choice do they have? None of us has the time or energy to dig down to the original data and analyze it ourselves, or to learn enough about fashion, music or art to make judgments more refined than, "I like that one!"
Let's unpack that question, "What choice do they have?" Maybe we can do better....
If we wish to avoid falling back into the original trap of picking an Authority to trust based on our own uninformed judgment, we need to be able to consult many authorities whose judgment has been appraised by "juries of their peers". This scheme is implemented in the scientific community by means of peer review, in which each new paper is reviewed by respected scientists with established expertise in the subject area. It works fairly well, up to a point, but is still beset by politics and confirmation bias: peer review is orchestrated by Editors who choose the reviewers. Moreover, peer review leaves no room for "disinterested third parties" to weigh in on the validity or importance of new research.
Social media are more democratic: anyone can "Like" or "Upvote" or (sometimes) "Downvote" a post; but few platforms offer any opportunity (other than in a Reply) to specify what the reviewer likes or dislikes about the item, nor is there any "weighting" of the reviewer's opinion according to their own credibility. Such a system could never produce a semi-objective (trustworthy) evaluation of anything. It is pure politics.
So the first criterion for a democratic, self-organizing system of evaluation is that it knows who is doing the evaluation and the extent to which they know what they are talking about.
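To make that criterion concrete, here is a minimal sketch (in Python, with hypothetical names, scores and credibility scales -- none of this is a spec) of the difference between today's flat vote-counting and credibility-weighted evaluation:

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    score: float        # the evaluator's verdict, -1.0 (pan) to +1.0 (rave)
    credibility: float  # the evaluator's credibility index in this topic, 0.0 to 1.0

def flat_score(evals):
    """What "Like"/"Upvote" systems do today: every vote counts the same."""
    return sum(e.score for e in evals) / len(evals)

def weighted_score(evals):
    """The first criterion: weight each vote by who is doing the evaluating."""
    total_weight = sum(e.credibility for e in evals)
    if total_weight == 0:
        return 0.0
    return sum(e.score * e.credibility for e in evals) / total_weight

# Three low-credibility upvotes against one expert downvote:
evals = [Evaluation(+1.0, 0.1), Evaluation(+1.0, 0.1),
         Evaluation(+1.0, 0.1), Evaluation(-1.0, 0.9)]
print(flat_score(evals))      # 0.5    -- the mob wins
print(weighted_score(evals))  # ~ -0.5 -- informed judgment dominates
```

The numbers are toys, but the contrast is the whole point: the same four clicks yield opposite verdicts once the system knows who clicked.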
Can Technology Help?
Google (among other entities) now uses every bit of accessible information about your browsing habits, your buying habits and your political habits to build a sophisticated model of you as a person, both to better serve your Googling needs and to better serve the advertisers competing for your attention. Like all innovations, this is both good news ("...better serve...") and bad news (advertisers...). But the "deep learning" technology exists, and has been successfully applied to the interpretation of "big data". Perhaps it can also be applied to important data, like which theory of Dark Matter is more plausible, or whether the Global Climate Crisis will really kill us all.
At o'Peer I have outlined a strategy for removing some of the politics from open peer review and making it more democratic, more responsive and more trustworthy. Unsurprisingly, it has not "taken off" so far, partly because (let's face it) I am just an amateur at the art of constructing software that can learn. Also, active physicists can't afford to champion revolution against the Editors who now decide their future prominence. And there are vulnerabilities still to be worked out... I will address these below, but first let me paint the big canvas:
- Everyone gets a unique ID that computers can recognize and confirm. This is already true for most physicists (see ORCID.org; a checksum sketch follows this list) and for all citizens of the People's Republic of China -- which illustrates the range of applications. But it is necessary for this system to know who you are, for reasons that will become obvious. Note that while the system must know who you are, you can still remain anonymous to everyone else, if you so desire.
- Your contributions to various human enterprises (e.g. Science or Art or Music or Prosperity or Altruism) will have been constantly evaluated by others, resulting in an accumulated credibility index in all those areas and more. The more refined and numerous the evaluations become, the more accurately your credibility will be established.
- When you are moved to evaluate someone else's contribution, you can express your evaluation in as much detail as you choose. A refined Machine Learning algorithm will eventually be able to interpret your comments in quantitative terms. It will then weight your evaluation by your credibility index in that particular aspect of that particular topic, fold it into a weighted running average of the global evaluation of said contribution, and thereby enhance or diminish the author's credibility index in that arena. (A bookkeeping sketch also follows this list.)
- Insincere, petty or malicious negative evaluations will not go unnoticed, but the resulting damage will be primarily to the evaluator's credibility index.
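An aside on IDs that "computers can recognize and confirm": an ORCID iD carries a check digit computed with the ISO 7064 MOD 11-2 algorithm, so software can at least confirm that an identifier is well formed (though not, of course, that the person presenting it owns it; that is the spoofing problem below). A minimal sketch:

```python
def orcid_check_digit(base_digits: str) -> str:
    """ISO 7064 MOD 11-2 check digit over the first 15 digits of an ORCID iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_well_formed_orcid(orcid: str) -> bool:
    """Validate an iD like "0000-0002-1825-0097" (a sample iD published by ORCID.org)."""
    digits = orcid.replace("-", "")
    return len(digits) == 16 and orcid_check_digit(digits[:15]) == digits[15]

print(is_well_formed_orcid("0000-0002-1825-0097"))  # True
print(is_well_formed_orcid("0000-0002-1825-0096"))  # False (corrupted check digit)
```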
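And here is a toy version of the bookkeeping in the third bullet, assuming the Machine Learning step has already reduced a free-text review to a number in [-1, +1]. The neutral prior of 0.5 and the 0.1 "nudge" factor are illustrative assumptions, not a design:

```python
class CredibilityLedger:
    """Weight each evaluation by the evaluator's credibility in the topic,
    fold it into a running average for the contribution, and nudge the
    author's own index accordingly."""

    NEUTRAL = 0.5  # prior credibility for an unknown evaluator
    NUDGE = 0.1    # how strongly one contribution moves its author's index

    def __init__(self):
        self.credibility = {}  # (person, topic) -> index in [0.0, 1.0]
        self.scores = {}       # contribution -> (weighted_sum, weight_sum)

    def index(self, person, topic):
        return self.credibility.get((person, topic), self.NEUTRAL)

    def evaluate(self, contribution, author, evaluator, topic, score):
        """score in [-1, +1], e.g. an ML reading of a free-text review."""
        w = self.index(evaluator, topic)      # weight by the evaluator
        s, ws = self.scores.get(contribution, (0.0, 0.0))
        s, ws = s + score * w, ws + w
        self.scores[contribution] = (s, ws)   # weighted running average
        nudged = self.index(author, topic) + self.NUDGE * (s / ws if ws else 0.0)
        self.credibility[(author, topic)] = min(1.0, max(0.0, nudged))

ledger = CredibilityLedger()
ledger.evaluate("paper-42", author="alice", evaluator="bob",
                topic="dark matter", score=+0.8)
print(ledger.index("alice", "dark matter"))  # ~0.58: nudged up from the 0.5 prior
```

The fourth bullet could ride on the same machinery: a reviewer whose verdicts consistently diverge from the weighted consensus can have their own index docked by an analogous update (not implemented here).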
What Could Possibly Go Wrong?
Obviously, the list is endless. People will try to "game the system". They will succeed. The system will have to be refined and adjusted constantly to make it more resistant to tampering and bias. This will be an enormous undertaking by an army of experts. But think of the possible "payoff": a way to gradually, democratically and (eventually) fairly offer advice on whom to trust about what. But let's list some of the pitfalls, in order to get started on refining the system before it even exists....
- Spoofing: lots of jerks will try to pretend to be the person everyone trusts. Biometric ID may help... or not. Of course, this problem arises in other realms as well.
- Hacking: the entire database and its maintenance will have to be protected by (for instance) blockchain elements that perform the storage as well as the learning. (A minimal hash-chain sketch follows this list.)
- Conspiracy: groups of people will collude to raise each other's credibility and/or the perceived trustworthiness of a chosen leader. I'm hoping that such people will not usually be able to garner much credibility themselves, which will hamper their efforts. This problem already haunts peer review in physics, but the steadfast skepticism of most physicists tends to dampen its negative effects.
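"Blockchain elements" covers a lot of ground; the minimal ingredient is a tamper-evident hash chain over the evaluation log, sketched below with Python's standard hashlib (the record fields are hypothetical). It shows what "protected" means at minimum: altering any stored evaluation after the fact breaks every subsequent hash. Real protection against a hostile database operator needs more (distribution, consensus), which is exactly the part still to be worked out:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "hash" before the first record

def chain(records):
    """Link each record to the hash of its predecessor, so editing any
    earlier record invalidates every hash that follows it."""
    blocks, prev = [], GENESIS
    for rec in records:
        payload = json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        blocks.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return blocks

def verify(blocks):
    """Recompute every link; any silent edit to a stored record shows up here."""
    prev = GENESIS
    for b in blocks:
        payload = json.dumps(b["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != expected:
            return False
        prev = b["hash"]
    return True

log = chain([{"evaluator": "anon-17", "item": "paper-42", "score": 0.8},
             {"evaluator": "carol",   "item": "paper-42", "score": -0.3}])
print(verify(log))               # True
log[0]["record"]["score"] = 1.0  # tamper with a stored evaluation
print(verify(log))               # False: the chain exposes the edit
```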
Let's get started!