There’s (More) Hope for Political Fact-Checking

Do you even FactCheck.org? Photo: Eitan Abramovich/AFP/Getty Images

No modern American presidential election has featured as much lying and misinformation as this one. Donald Trump has quickly established himself as one of the most dishonest figures in modern political history, lying with astounding frequency and brazenness, even by the standards of politicians. Just 15 percent of the Donald Trump statements analyzed by PolitiFact, for example, have been rated “true” or “mostly true” — the number for Hillary Clinton is 51 percent.

And yet fact-checking has appeared distressingly impotent this campaign season. There has been no shortage of focus on Trump’s lies, with countless outlets debunking them. It doesn’t stick, though — some national surveys have even found Clinton to be perceived as the less trustworthy of the two candidates, despite what appears to be a vast dishonesty gap between them.

Because of all this, it’s been a fascinating, disturbing time for anyone interested in the question of how political misinformation spreads, and why it seems so hard to get the truth to sink in. One common explanation for Trump’s seeming immunity to a year and a half of feverish debunking is the so-called backfire effect. First described in a 2010 paper by the political scientists Brendan Nyhan and Jason Reifler, the idea is simple: If someone believes something that’s false and you present them with a correction, in many situations, rather than update their belief, they will double down, holding even tighter to that initial belief. In this view, even if Trump’s followers see all the coverage debunking and correcting their candidate’s false statements, it doesn’t matter — many of them will just cling tighter to the idea that Clinton is about to be indicted, or that President Obama screamed at a protester over the weekend.

Since that initial paper, this effect has been observed repeatedly, by Nyhan and Reifler and by other researchers. Now, none of this research showed that the backfire effect is inevitable — there seemed to be some intricacies as to when and how strongly it showed up. (To take one example, it appeared to afflict conservatives more than liberals.) But it’s still one of those satisfying ideas that seem to explain the world really nicely. After all, this isn’t a problem restricted to the current election: Every year, the internet’s misinformation situation seems to get a little bit worse, and we’re constantly presented with evidence that fact-checking doesn’t seem to really work. The backfire effect seemed to capture this dynamic.

Except: Two forthcoming studies of the backfire effect call into question its very existence. These studies collected far more subjects than the original backfire study, and both find effectively no backfire effect at all. And unlike the original study, the subjects in these new ones weren’t just college students — they were thousands of people, of all ages, from all around the country.

If this new finding holds up, this is a very important, well, correction: It suggests that overall, fact-checking may be more likely to cause people, even partisans, to update their beliefs rather than to cling more tightly to them. And part of the reason we now know this is that Nyhan and Reifler put their money where their mouths were: When a team of two young researchers approached them suggesting a collaboration to test the backfire effect in a big, robust, public way, they accepted the challenge. So this is partly a story about a potentially important new finding in political science and psychology — but the story within the story is about science being done right.

It started a year and a half ago or so, when Ethan Porter, then a grad student at the University of Chicago and now an assistant professor at George Washington University, teamed up with Thomas Wood, a political scientist at Ohio State University, to run a big experiment on this subject. “The backfire paper has always struck Tom and I as a very important paper,” said Porter. “You care about democracy and accountability, you care about people’s abilities to incorporate and respond to factual information.” (Full disclosure: I am buddies with Porter from when we both lived in D.C.)

For the Wood and Porter paper, which is currently being reviewed by journals for publication but which can be read in draft form here, the co-authors tested 36 issues on 8,100 subjects. They presented the respondents, recruited on Amazon Mechanical Turk, with various real-world false claims, with half also being exposed to a correction of the false claim. Wood and Porter then compared belief in each item, measured on a 1-to-5 scale, between the correction and no-correction groups. The respondents were also asked about their political beliefs, allowing the authors to compare the effects of a given correction on people from different parts of the political spectrum.
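To make that design concrete, here is a minimal sketch, in Python, of the kind of between-groups comparison the paper describes. Everything in it is hypothetical: the field names, numbers, and ideology buckets are invented for illustration, not drawn from Wood and Porter’s data.

```python
# Hypothetical sketch of the between-groups comparison described above.
# All field names and numbers are illustrative, not from the actual study.
from statistics import mean

# Each record: a respondent's ideology bucket, whether they saw a
# correction, and their belief in the false claim on a 1-to-5 scale.
responses = [
    {"ideology": "conservative", "corrected": False, "belief": 3.6},
    {"ideology": "conservative", "corrected": True,  "belief": 3.1},
    {"ideology": "liberal",      "corrected": False, "belief": 2.4},
    {"ideology": "liberal",      "corrected": True,  "belief": 2.0},
    # ... thousands more respondents in the real experiment
]

def avg_belief(ideology: str, corrected: bool) -> float:
    """Mean belief in the false claim for one ideology/condition cell."""
    cell = [r["belief"] for r in responses
            if r["ideology"] == ideology and r["corrected"] == corrected]
    return mean(cell)

for ideology in ("conservative", "liberal"):
    effect = avg_belief(ideology, True) - avg_belief(ideology, False)
    # A negative effect means the correction lowered belief (it "worked");
    # a positive effect would mean corrected respondents believed the false
    # claim MORE strongly, i.e., a backfire.
    print(f"{ideology}: correction effect = {effect:+.2f}")
```

The backfire question then reduces to the sign of that difference within each ideological group: corrections “work” where it is negative, and backfire would show up as a positive value.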

As the paper notes, the experiments were set up in ways designed to maximize the chances of a backfire effect being observed. Many of the issues the respondents were asked about are extremely politically charged — abortion and gun violence and illegal immigration — and the experiment was conducted during one of the most heated and unusual presidential elections in modern American history. The idea was something like, Well, if we can’t find the backfire effect here, with a big sample size under these sorts of conditions, then we can safely question whether it exists.

And that’s what happened. “Across all experiments,” the researchers write, “we found only one issue capable of triggering backfire: whether WMD were found in Iraq in 2003.” Even there, changing the wording of the item in question eliminated the backfire effect.

Here is a nice visual rundown of some of Wood and Porter’s findings, followed by an explanation of how to read the chart (you can view the full-size version here):

Each mini-graph represents one false statement actually made by a politician. Then, within an individual graph, the bars show the average level of belief in that statement at a given point on the left-right political spectrum among the study’s respondents. The top bar reflects the beliefs of participants who were exposed to the false statement but not a correction, while the bottom bar reflects the beliefs of participants who were exposed to both the false statement and a correction. The bigger the gap between the bars, the more effective the correction was.

What these graphs clearly show — and Wood and Porter found this pattern more or less throughout their experiments — is that even for politically charged issues, correcting false beliefs seemed to “work,” in the sense of at least leading to a lower level of belief in the item in question. If the backfire effect were real, the researchers would have observed the opposite: There should have been various points, particularly at the edges of the political spectrum, where those in the “corrected” group believed the statement in question more strongly.

There’s still a pretty big political element to the question of who believes what, of course: Conservatives were much more likely than liberals to believe that the U.S. has the highest tax rates in the world, and liberals much more likely to believe that the majority of U.S. prisoners are locked up on drug charges. But the encouraging thing was that corrections seemed to work, on many issues at least, for just about everyone, whether they were a partisan or a moderate.

This was not what Wood and Porter expected to find. “Tom and I really thought we’d have a paper where we said, Here are the three issues that cause liberals to backfire, and here are three that cause conservatives to,” said Porter. Instead, the numbers offered effectively no evidence for the backfire effect at all. They decided to reach out to Nyhan and Reifler to let them know.

If you read a lot of Science of Us, you might know that this is the point where things sometimes go south. Researchers don’t always react well to their work being criticized or poked or prodded. In the worst cases, defensive researchers attempt to actively stymie the research of those they see as “attacking” their findings. Nyhan and Reifler didn’t do that — instead, they effectively said Huh, cool finding — maybe our research overstated the backfire effect. Then Porter watched Trump campaign manager Paul Manafort denigrate F.B.I. crime statistics during the Republican National Convention in an effort to boost his boss’s false portrayal of an America drowning in horrific violence. “The lightbulb went off,” Porter said, “and we said, What if we could test backfire now, in the heat of the presidential campaign, with Brendan and Jason?”

Again, the two more senior researchers were cooperative. “Jason and I decided to co-author with Tom and Ethan because they are doing important, careful work,” said Nyhan in an email. “We thought a collaboration would be a great way to test our ideas and theirs jointly. I think we’re all surprised and saddened that so many academic debates devolve into name-calling and vitriol. In science, it’s fine to be wrong or to provide an incomplete or partial answer as long as we are moving collectively toward the truth. That is everyone’s goal here. (Also, Jason and I would be terrible hypocrites if we doubled down on our factual beliefs after being corrected!)”

So the four researchers got to work on a similar study, one that differed in two important ways: First, it was conducted during what was close to the absolute peak of election season, meaning backfire effects should have been even more likely than they were during Wood and Porter’s experiment. Second, the researchers used both Mechanical Turk and tools offered by the polling firm Morning Consult to make sure that Wood and Porter’s result wasn’t an artifact of Mechanical Turk.

While the Wood and Porter research focused on a bunch of different areas, the collaboration between the four researchers focused on just one: Trump’s claims about out-of-control crime. And when the numbers came in, again, there was little to no sign of backfire — a result the authors plan on submitting for publication.

As Nyhan explained in the Upshot over the weekend:

[W]e found that correcting Mr. Trump’s message reduced the prevalence of false beliefs about long-term increases in crime. When respondents read a news article about Mr. Trump’s speech that included F.B.I. statistics indicating that crime had “fallen dramatically and consistently over time,” their misperceptions about crime declined compared with those who saw a version of the article that omitted corrective information (though misperceptions persisted among a sizable minority). Specifically, beliefs that crime had increased over the last 10 years declined among both Trump supporters (from 77 percent to 45 percent) and Clinton supporters (from 43 percent to 32 percent).

“Taken together,” said Porter, “these findings suggest that anxieties about the absence or minimal role of truth in politics may be overstated. People are willing to acknowledge factual information, even when that information challenges their most cherished political beliefs. It doesn’t mean they change those beliefs — but it does mean that there is a role for factual information in politics.”

Still, one shouldn’t necessarily feel optimistic about the possibility of some sort of massive fix to the currently dire internet landscape of false information. For one thing, by virtue of both studies’ designs, the researchers were effectively able to force people to read fact-checking information. That isn’t how it works in the wild — people seem to self-select into information sources that will confirm rather than challenge their preexisting beliefs and biases. Plus, the real-world implications of some of these effects are unclear, as is the question of how long they’ll last: If fact-checking can reduce belief in a false item from approximately 3.5 to 3 on a 1-to-5 scale, as it appeared to do for staunch conservatives on the top-left “disproportionately criminal” item, what does that mean in a practical sense? And will that effect hold after the person in question watches six more Trump speeches in which he repeats that lie? Finally, there may well be some issues, in some contexts, that still generate backfire effects, though Porter said that he’s skeptical about backfire “as a generalizable phenomenon.”

But those caveats aside, these are still important findings, especially in light of another recent Reifler and Nyhan finding that also suggests fact-checking can be effective. Overall, the tide is turning a bit in fact-checking’s favor, even setting aside the difficulties of applying this work to the real world.

Just as importantly, the Nyhan-Reifler-Wood-Porter collaboration is an inspiring model of how social-scientific collaboration should work. Again, Nyhan and Reifler could have aggressively nitpicked Wood and Porter’s findings, but they didn’t; instead, they acknowledged solid research when they saw it, despite the fact that it challenged one of their own better-known results.

In fact, Porter explained that part of the reason his and Wood’s paper has gotten the media attention it has is that Nyhan has promoted it on social media. “The reason we got the Poynter interview, and the reason we’ve gotten press about this, is because Brendan tweeted out our article,” said Porter (referring to this article). “Brendan himself has brought attention to a piece that challenges his own work. Brendan is a guy with a large platform, and Brendan is using that platform to publicize work that is critical and challenges his own work. That’s incredibly laudable.”
