Tomorrow night, up to 100 million people — those are Super Bowl numbers — will watch Hillary Clinton and Donald Trump debate on Long Island, during what will be the most pivotal moment of the 2016 presidential election campaign so far. Viewers will be watching for all the usual stuff — body language, witty rejoinders, whether or not the candidates seem genuine, and so on.
But perhaps most of all, people will be watching for minor fibs, half-truths, and howlers. Because we live in the age of the online fact-checker, every single factual statement uttered by Clinton or Trump will be put through the wringer — someone, somewhere, will check the veracity of each and publish the results online. The quickest fact-checks will occur nearly in real time, tweeted out by journalists and researchers while Trump and Clinton are still onstage. The more complicated or research-heavy fact-checks will go up late tomorrow night or early the next morning. By Tuesday, a great chunk of the conversation will inevitably center on the question of who lied more, and about what. FactCheck.org, PolitiFact, and the handful of other major websites that make fact-checking a regular part of what they do will post their results. (So far, it should be said, Trump has been far more dishonest than Clinton, by PolitiFact’s count.)
Those of us who are connoisseurs of fact-checking websites have a tortured relationship with them. On the one hand, they’re providing a vital civic service — what could be more important than holding politicians accountable for the claims they make during a presidential campaign? On the other hand, there’s a certain level of baseline futility at work here. It’s very unlikely that, once you zoom out to the level of the country as a whole, fact-checking efforts make people all that much smarter or better informed about politics.
That’s what political scientists think, at least. When they subject study participants to fact-checking interventions in labs, mixed results pop out: Sometimes fact-checks seem to make people more politically informed, on average; sometimes they don’t. Sometimes, it appears, there’s even a so-called backfire effect, in which exposure to a debunking of a false claim causes people to cling even more tightly to the misperception being corrected.
In the real world, most political scientists think fact-checking doesn’t really have a chance. The average citizen is buffeted endlessly by conflicting claims, many of them politicized or otherwise warped, and is often more motivated to seek out information that confirms their worldview than information that will correct their misperceptions. The voters who believe Donald Trump when he insists that hordes of unvetted Syrian immigrants are threatening American citizens — fact-check: the U.S. just accepted its 10,000th, and there’s a careful vetting process — are not the voters who go to FactCheck.org with an open mind. Hence that feeling of futility.
Some new research, though, complicates things a bit — and offers at least a limited case for optimism. Last week, the political scientists Brendan Nyhan of Dartmouth College and Jason Reifler of the University of Exeter published a draft version of a paper in which they report some important new findings about the effectiveness of fact-checking.
Nyhan and Reifler built their experiment specifically to overcome the limitations of some past fact-checking research — among other things, that research has often tested only the immediate impact of fact-checking rather than its potentially lingering effects. Their study is the first to test the effects of fact-checking over time on a representative sample of Americans — 1,000 people from the survey company YouGov’s panel, in this case.
The study took place in the fall of 2014. Participants were surveyed twice, once in September and once in December, and were asked a bunch of questions about their own political beliefs, their feelings about fact-checking, and so on. Between the two survey waves, half the pool was put in a placebo group that was asked to read some random press releases. The other half was presented with fact-checking in the form of popular-on-social-media articles from PolitiFact dealing with “U.S. senate or gubernatorial candidates in the 2014 election or a current elected official who is a national political figure.” During the second wave, participants were also asked some factual questions pertaining to the content of either the press releases (for the placebo group) or the PolitiFact articles (for the treatment group).
“Treatment” with fact-checking, it turned out, “increas[ed] the proportion of accurate answers by approximately fourteen percentage points.” The researchers describe the effect as “relatively large given that the mean proportion of accurate answers in the control condition was only 33%[.]” They also found something really interesting about who was affected by the fact-checking: It didn’t really matter, in terms of the size of the effect, whether the individual respondent was an informed or uninformed political consumer, as determined by the questions they answered for the study. The high- and low-information voters benefited equally.
“It’s surprising,” said Nyhan. “For reasons we described in the theory section, we expected politically knowledgeable respondents to be better able to process and subsequently recall fact-checks, which often concern complicated issues and can be difficult to understand.” Also surprising was that it didn’t really matter whether a given fact-check went “for” or “against” a given respondent’s biases — that is, Democrats weren’t any less likely than Republicans to believe fact-checks that reflected poorly on a Democrat.
So can this study be taken as evidence that fact-checking “works”? Nyhan said that he thinks it depends a great deal on context. “It depends how you define ‘works’!” he wrote in an email. “We were encouraged to find lasting improvements in belief accuracy and to find that they held for both belief-consistent and belief-inconsistent claims.” As he and Reifler point out in the paper, 2014 was a midterm election, and the issues they fact-checked in their study weren’t particularly salient or exciting. One of them involved electricity rates, for example. There’s a lot of research and theory suggesting that it’s when people feel emotionally riled up, or feel like some aspect of their identity is at stake, that they are the most resistant to fact-checking. No one is going to get riled up over some random debunking of electricity rates.
So there’s a chance that if Nyhan and Reifler ran a similar experiment during this race, the effects they elicited wouldn’t be as positive. It’s almost certainly the case that staunch Clinton or Trump partisans would be less receptive to fact-checks targeting their favored candidates. As for other voters who are more in the middle, the only way to know is to check — Nyhan said he couldn’t reveal any details, but that he could confirm that he was currently working on a fact-checking study pegged to the presidential race.
As for tomorrow night and the broader role of fact-checking in general, a picture, slightly fuzzy but getting clearer, is slowly emerging about which sorts of facts might be most susceptible to checking: Generally speaking, the more emotionally resonant a given claim is, and/or the more tightly associated it is with one party or the other, the more resistant it will be to fact-checking. So on an issue like global warming or gun control, where people’s views are tied closely to their partisan identities, it will probably be more difficult for fact-checking to sink in. It’s on the less politicized, less emotional issues that people will likely be a bit more open-minded — though even there, it will be difficult to get through to hardened partisans.
While the latest and most rigorous experiments are under way, there are some best practices Nyhan and Reifler released in 2013 that can help maximize the impact of fact-checking and debunking. Whenever possible, for example, fact-checkers should cite sources from the same side as the politician who made the false claim: If you’re debunking a Democratic fib, have a Democrat explain why it’s false. That isn’t always practical, of course, so Nyhan and Reifler also have simpler suggestions, like watching the phrasing you use when you debunk a false claim. Don’t say “X is false” — that can sometimes reinforce the falsehood in the eyes or ears of news consumers. Instead, say “Y is true.” It seems like a subtle distinction, but research suggests that it — along with many other small framing choices — matters a great deal.
Fighting misinformation can seem like an endless uphill battle, in light of the internet’s penchant for amplifying nonsense. But papers like Nyhan and Reifler’s show that researchers are slowly but surely making progress in figuring out how to fight back. There isn’t reason to lose hope just yet.