In 2017, I did my Honours research project on whether, and how much, fact-checking politicians’ statements influenced people’s attitudes towards those politicians, and their intentions to vote for them. (At my Australian university, “Honours” meant a research-focused, optional, selective 4th year of an undergrad degree.) With some help, I later adapted my thesis into a peer-reviewed paper: Does truth matter to voters? The effects of correcting political misinformation in an Australian sample. This was all within the domains of political psychology and cognitive science.
During that year, and in a unit I completed earlier, I learned a lot about:
- how misinformation forms
- how it can be "sticky"
- how it can continue to influence beliefs, attitudes, and behaviours even after being corrected/retracted, and even if people do remember the corrections/retractions
- ways of counteracting, or attempting to counteract, these issues
- E.g., fact-checking, or warning people that they may be about to receive misinformation
- various related topics in the broad buckets of political psychology and how people process information, such as impacts of “falsely balanced” reporting
The research that’s been done in these areas has provided many insights that I think might be useful for various EA-aligned efforts. For some examples of such insights and how they might be relevant, see my comment on this post. These insights also seemed relevant in a small way in this comment thread, and in relation to the case for building more and better epistemic institutions in the effective altruism community.
I’ve considered writing something up about this (beyond those brief comments), but my knowledge of these topics is too rusty for that to be something I could smash out quickly and to a high standard. So I’d like to instead just publicly say I’m happy to answer questions related to those topics.
I think it’d be ideal for questions to be asked publicly, so others might benefit, but I’m also open to discussing this stuff via messages or video calls. The questions could be about anything from a super specific worry you have about your super specific project, to general thoughts on how the EA community should communicate (or whatever).
Disclaimers:
- In 2017, I probably wasn’t adequately concerned by the replication crisis, and many of the papers I was reading were from before psychology’s attention was drawn to that. So we should assume some of my “knowledge” is based on papers that wouldn’t replicate.
- I was never a “proper expert” in those topics, and I haven’t focused on them since 2017. (I ended up with First Class Honours, meaning that I could do a fully funded PhD, but decided against it at that time.) So it might be that most of what I can provide is pointing out key terms, papers, and authors relevant to what you’re interested in.
- If your question is really important, you may want to just skip to contacting an active researcher in this area or checking the literature yourself. You could perhaps use the links in my comment on this post as a starting point.
- If you think you have more or more recent expertise in these or related topics, please do make that known, and perhaps just commandeer this AMA outright!
(Due to my current task list, I might respond to things mostly from 14 May onwards. But you can obviously comment & ask things before then anyway.)
I'd like to have read this before having our discussion:
But their recommendations sound scary:
Interesting article - thanks for sharing it.
Why do you say their recommendations sound scary? Is it because you think they're intractable or hard to build support for?
Sorry, I should have been more clear: I think "treating attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners" is hard to build support for, and may imply some risk of abuse.
I've seen some serious stuff on epistemic and memetic warfare. Do you think misinformation on the web has recently been, or is currently being, used as an effective weapon against countries or peoples? Is it qualitatively different from good old conspiracies and smear campaigns? Do you have some examples? Can standard ways of counteracting it (e.g., fact-checking) work effectively in the case of an intentional attack? (My guess: probably not; an attacker can spread misinformation more effectively than we can spread fact-checking, and warning about it will increase mistrust and polarization, which might be the goal of the campaign.) What would be your credences on your answers?
Good questions!
Unfortunately, I think these specific questions are mostly about stuff that people started talking about a lot more after 2017. (Or at least, I didn't pick up on much writing and discussion about these points.) So it's a bit beyond my area.
But I can offer some speculations and related thoughts, informed in a general sense by the things I did learn:
One thing that you didn't raise, but which seems related and important, is how advancements in certain AI capabilities could affect the impacts of misinformation. I find this concerning, especially in connection with the point you make with this statement:
Early last year, shortly after learning about EA, I wrote a brief research proposal related to the combination of these points. I never pursued the research project, and have now learned of other problems I see as likely more important, but I still do think it'd be good for someone to pursue this sort of research. Here it is:
References:
Thanks a lot!
I think "offense-deffense balance" is a very accurate term here. I wonder if you have any personal opinion on how to improve our situation on that. I guess when it comes to AI-powered misinformation through media, it's particularly concerning how easily it can overrun our defenses - so that, even if we succeed by fact-checking every inaccurate statement, it'll require a lot of resources and probably lead to a situation of widespread uncertainty or mistrust, where people, incapable of screening reliable info, will succumb to confirmatory bias or peer pressure (I feel tempted to draw an analogy with DDoS attacks, or even with the lemons problem).
So, despite everything I've read about the subject (though not very systematically), I haven't seen feasible, well-written strategies to address this asymmetry - except for some papers on moderation in social networks and forums (and even that is quite time-consuming, unless moderators draw up clear guidelines, like on this forum). I wonder why societies (through authorities or self-regulation) can't agree to impose even minimal reliability requirements, like demanding captcha tests before messages can be spread (making it harder to use bots) or, my favorite, holding people liable for spreading misinformation unless they explicitly reference a source - something even newspapers refuse to do (my guess is that they are afraid this norm would compromise source confidentiality and their protections against lawsuits). If people had this as an established practice, one could easily screen out (at least grossly) unreliable messages by checking their source (or pointing out its absence), besides deterring them.
I think I've got similar concerns and thoughts on this. I'm vaguely aware of various ideas for dealing with these issues, but I haven't kept up with that, and I'm not sure how effective they are or will be in future.
The idea of making captcha requirements widespread for things like commenting is one I hadn't heard before, and it seems like it could plausibly cut off part of the problem at relatively low cost.
I would also quite like it if there were much better epistemic norms widespread across society, such as people feeling embarrassed if others point out that they stated something non-obvious as a fact without referencing sources. (Whereas it could still be fine to state very obvious things as facts without sharing sources all the time, or to state non-obvious things as fairly confident conjectures rather than as facts.)
But some issues also come to mind (note: these are basically speculation, rather than drawing on research I've read):
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation. I'd guess that those more general approaches may better avoid the issue of difficulty drawing lines in the appropriate places and being circumventable by active efforts, but may suffer more strongly from being quite intractable or crowded. (But this is just a quick guess.)
Agreed. But I don't think we could do that without changing the environment a little bit. My point is that rationality isn't just about avoiding false beliefs (maximal skepticism), but about forming them adequately, and it's way more costly to do that in some environments. Think about the different degrees of caution one needs when reading something in a peer-reviewed meta-analysis, in a Wikipedia entry, in a newspaper, or in a WhatsApp message...
The core issue isn't really “statements that are false”, or people who are actually fooled by them. The problem is that, if I’m convinced I’m surrounded by lies and nonsense, I’ll keep following the same path I was before (because I have a high credence my beliefs are OK); it will just fuel my confirmatory bias. Thus, the real problem with fake news is an externality. I haven’t found any paper testing this hypothesis, though. If it is right, then most articles I’ve seen on “fake news didn’t affect political outcomes” might be wrong.
You can fool someone even without telling any sort of lies. To steal an example I once saw on LessWrong (still trying to find the source): imagine a random sequence of 0s and 1s; now, an Agent feeds a Principal information about the sequence, like "there is a 1 in position n". To make the Principal believe the sequence is mostly made of 1s, all the Agent has to do is select which information to share, like "there are 1s in positions n, m, and o".
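To make that selection effect concrete, here's a minimal Python sketch (my own illustration, not from the post I'm half-remembering; all names are made up): every individual report the Agent makes is true, but because only positions holding a 1 get reported, a Principal who treats the reports as a random sample ends up badly miscalibrated.

```python
import random

random.seed(0)
# A random sequence that is in fact roughly half 0s and half 1s.
sequence = [random.randint(0, 1) for _ in range(1000)]

# The Agent truthfully reports digits, but only from positions holding a 1.
reported = [(i, d) for i, d in enumerate(sequence) if d == 1][:50]

# A naive Principal treats the reports as a random sample and estimates
# the overall proportion of 1s from them.
naive_estimate = sum(d for _, d in reported) / len(reported)

print(f"True proportion of 1s:      {sum(sequence) / len(sequence):.2f}")  # ~0.50
print(f"Principal's naive estimate: {naive_estimate:.2f}")                 # 1.00
```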
But why would someone hire such an agent? Well, maybe the Principal is convinced most other accessible agents are liars; it's even worse if the Agent already knows some of the Principal's biases, and easier still if Principals with similar biases are clustered in groups with similar interests and jobs - like social activists, churches, military staff, and financial investors. Even denouncing this scenario does not necessarily improve things; I think, at least in some countries, political outcomes were affected by common knowledge of statements like "military personnel support this; financial investors would never accept that". If you can convince voters that they'll face an economic crisis or political instability by voting for candidate A, they will avoid voting for candidate A.
My personal anecdote on how this process may work even for someone smart and scientifically educated: I remember having a conversation with a childhood friend, who surprised me by being a climate change denier. I tried my "rationality skills" in arguing with him; to summarize, he replied that greenhouses work by convection, which wouldn't extrapolate to the atmosphere. I was astonished that I had overlooked this so far (well, maybe it was mentioned en passant in a science class), and that he hadn't taken 2 minutes to google it (and find out that, yes, "greenhouse" is an analogy; the issue is that CO2 absorbs and re-emits radiation back toward Earth); but maybe I wouldn't have done that myself if I hadn't already known that CO2 is pivotal in keeping Earth warm. However, after days of this, there was no happy ending: our discussion basically ended with me pointing out that (a) he couldn't provide any scientific paper backing his overall thesis (even though I would have been happy to pay him if he could); and (b) he would raise objections against "anthropogenic global warming" without even caring to put a consistent credence on them - like first pointing to alternative causes for the warming, and then denying the warming itself. He didn't really believe (i.e., assign a high posterior credence to the claim) that there was no warming, nor that it was a random anomaly, because those positions would be ungrounded, and so a target in a discussion. Since then, we've barely spoken.
P.S.: I wonder if fact-checking agencies could evolve into some sort of "rating agencies"; I mean, they shouldn't only screen for false statements, but actually provide information about who is accurate - so mitigating what I've been calling the "lemons problem in news". But who rates the raters? Besides the risk of capture, I don't know how to make people actually trust the agencies in the first place.
Your paragraph on climate change denial by a smart, scientifically educated person reminded me of some very interesting work by a researcher called Dan Kahan.
An abstract from one paper:
Two other relevant papers:
Parts of your comment reminded me of something that's perhaps unrelated, but seems interesting to bring up, which is Stefan Schubert's prior work on "argument-checking", as discussed on an 80k episode:
I think you raise interesting points. A few thoughts (which are again more like my views rather than "what the research says"):
Not sure if I understand the suggestion, or rather how you envision it adding value compared to the current system.
Fact-checkers already do say both that some statements are false and that others are accurate.
Also, at least some of them already have ways to see what proportion of a certain person's claims that the fact-checker evaluated turned out to be true vs false. Although that's obviously not the same as what proportion of all a source's claims (or all of a source's important claims, or whatever) are true.
But it seems like trying to objectively assess various sources' overall accuracy would be very hard and controversial. One way we could view the current situation is that most info that's spread is roughly accurate (though often out of context, not highly important, etc.), some is not, and fact-checkers pick up claims that seem like they might be inaccurate and then say whether they are. So we can perhaps see ourselves as already having something like an overall screening for general inaccuracy of quite prominent sources, in that, if fact-checking agencies haven't pointed out false statements of theirs, they're probably generally roughly accurate.
That's obviously not a very fine-grained assessment, but I guess what I'm saying is that it's something, and that adding value beyond that might be very hard.
Meta comment
I felt unsure how many people this AMA would be useful to, if anyone, and whether it would be worth posting.
But I’d guess it’s probably a good norm for EAs who might have relatively high levels of expertise in a relatively niche area to just make themselves known, and then let others decide whether it seems worthwhile to use them as a bridge between that niche area and EA. The potential upside (the creation of such bridges) seems notably larger than the downside (a little time wasted writing and reading the post before people ultimately decide it's not valuable and scroll on by).
I’d be interested in other people’s thoughts on that idea, and whether it’d be worth more people doing “tentative AMAs”, if they’re “sort-of” experts in some particular area that isn’t known to already be quite well represented in EA (e.g., probably not computer science or population ethics). E.g., maybe someone who did a Masters project on medieval Europe could do an AMA, without really knowing why any EAs would care, and then just see if anyone takes them up on it.
It's now occurred to me that a natural option to compare this against is having something like a directory listing EAs who are open to 1-on-1s on various topics, where their areas of expertise or interest are noted. Like this or this.
Here are some quick thoughts on how these options compare. But I'd be interested in others' thoughts too.
Relative disadvantages of this "tentative AMA" approach:
Relative advantage of this "tentative AMA" approach:
To get the ball rolling, and give examples of some insights from these areas of research and how they might be relevant to EA, here’s an adapted version of a shortform comment I wrote a while ago:
Potential downsides of EA's epistemic norms (which overall seem great to me)
This is a quick attempt to summarise some insights from psychological findings on the continued influence effect (CIE) of misinformation, and related areas, which might suggest downsides to some of EA's epistemic norms. Examples of the norms I'm talking about include just honestly contributing your views/data points to the general pool and trusting people will update on them only to the appropriate degree, or clearly acknowledging counterarguments even when you believe your position is strong.
From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.
From this paper's abstract:
This seems to me to suggest some value in including "epistemic status" messages up front, but that this doesn't make it totally "safe" to make posts before having familiarised oneself with the literature and checked one's claims. (This may suggest potential downsides to both this comment and this whole AMA, so please consider yourself both warned and warned that the warning might not be sufficient!)
Similar findings also make me a bit concerned about the “better wrong than vague” norm/slogan that crops up sometimes, and make me hesitant to optimise too much for brevity at the expense of nuance. I see value in the “better wrong than vague” idea, and in being brief at the cost of some nuance, but it seems a good idea to make tradeoffs like these with these psychological findings in mind as one factor.
Here are a couple other seemingly relevant quotes from papers I read back then (and haven’t vetted since then):
Two more examples of how these sorts of findings can be applied to matters of interest to EAs: