It’s a question that has been asked endlessly, and with increasing frequency since the worst conspiracy-addled depths of the presidential campaign: Why do so many people believe ridiculous internet rumors? How could anyone think, in light of the thinner-than-thin evidence available, that Comet Ping Pong, for example, is the site of an evil child-sex dungeon?
All too often this propensity is chalked up to low intelligence or ignorance, but the reality is that it’s a lot more complicated — people believe in fake news as a result of psychological tendencies we all have and occasionally indulge in. That’s a fact on clear display in a new paper in Advances in Political Psychology (there’s what looks like a late draft version readable in unlocked form here), in which D.J. Flynn and Brendan Nyhan of Dartmouth College, and Jason Reifler of the University of Exeter, offer a useful and informative rundown of — as the paper’s title puts it — “The Nature and Origins of Misperceptions” about politics.
What most stands out from the paper is that it relates, in detail, a really useful, simple framework for understanding why people believe and spread fake news and other sorts of unsupported or debunked political claims: “directional versus accuracy motivations.” As the researchers explain, “accuracy and directional motivations affect how people search for, evaluate, and incorporate information into their beliefs.” Sometimes we are more motivated to get at the truth of a given matter, and sometimes we are more motivated to seek out support for a position we already hold.
This is related to the fairly well-known idea of motivated reasoning, of course, but what’s particularly useful is Flynn and his colleagues’ conception of a “tug-of-war” between the two types of reasoning. All of us are capable of both, and different cues and situations are more likely to fuel one or the other. Researchers are still in the early stages of fully understanding how all this works, but the paper offers some really interesting nuggets. Among them:
Context greatly affects whether we are more likely to engage in cognition motivated by accuracy or directional concerns. Sometimes, you really do just want accurate information. Flynn and his colleagues use the example of someone seeking to buy a new washing machine. Generally speaking, they’re going to want as much hard, objective information about the qualities of different models as possible. Their accuracy motivations will likely win out over their directional motivations, because they are mostly concerned with finding a washing machine that will do a good and affordable job cleaning their clothes.
More specifically, affect — that is, emotion — seems like a major trigger of directionally motivated reasoning. Buying a washing machine is not, generally, an emotional experience. The prospective buyer is in a position to calmly evaluate their options. But let’s say they have fond childhood memories of watching their mother do the laundry using one particular brand. Or, on the other hand, let’s say their washing machine blew up when they were a kid due to a freak accident, injuring their father. These are emotionally charged memories, and are likely to induce a heightened role for directionally motivated reasoning — in the first case, the prospective buyer may be more likely to discount information which casts the brand in question in a bad light; in the second they are more likely to discount information which casts the brand in question in a good light. So even situations where “rational” thinking would seem to prevail can be freighted with emotional cues, causing directionally motivated reasoning to kick in.
Politicization matters a lot. Some issues are more politicized than others. For most people, their choice of washing machine brand isn’t. For most people, their beliefs about climate change are. The more politicized an issue is — the more it sparks thoughts and emotions about who we are and what we value and who our friends and foes are — the more likely it is to induce directionally motivated reasoning. As the authors write, “Studies of beliefs about nonpoliticized facts will thus tend to show greater responsiveness to new or corrective information (especially on matters that are difficult to counterargue, such as the true value of relatively obscure statistics) than those that focus on the much smaller set of high-profile misperceptions.” All else being equal, it will be easier for me to present you with facts that change your opinion on washing machines than your opinion on global warming. Again, this is probably related to the central role of affect in human cognition and decision-making. If you mix together the politicization point and the affect point, an answer to how someone could possibly believe the Pizzagate rumors starts to emerge: If you’ve become convinced that Hillary Clinton is a deeply evil, horrific villain who represents everything you hate in the world, and you’re deep down a highly emotional, directionally-motivated-reasoning rabbit hole, maybe it isn’t such a stretch to embrace gonzo conspiracy theories about her and her allies.
Researchers should construct a scale measuring “individual-level differences in the strength of underlying accuracy and/or directional motivations.” This is one of the more provocative ideas in the paper — that there might be a way to better understand who is, all else being equal, more likely to engage in directionally motivated reasoning. Maybe there isn’t even a stable way to measure these differences, but if there were, it could go a long way toward explaining why some people are more susceptible to fake news, whether it comes from the left or the right.
Prejudice could be a powerful factor in pushing people toward directionally motivated reasoning. Flynn and his colleagues cite evidence that “reminding people of difference in racial identity or age from the presidential candidates in the 2008 election” increased the odds they would accept false “smears” about the candidates. This could be tied to the point above about emotion: For many people, these sorts of differences trigger strong affective responses, which could in turn trigger directionally motivated reasoning (to be clear, that’s me theorizing, not an explicit claim from the paper). By this logic, if you want to get people to believe false things about Obama, harp on just how “different” or “foreign-seeming” he is. That’s definitely a playbook we’ve seen before.
This stuff is immensely complicated to study. The paper is full of evidence from lab experiments in which researchers evaluate how different sorts of primes affect people’s propensity to engage in directionally versus accuracy-motivated reasoning. As the authors would be the first to tell you, it isn’t necessarily the case that these results translate directly to real life, though in the aggregate they offer some valuable hints. But people’s real-life political behaviors and beliefs are likely overdetermined — the result of a host of factors ranging from their social ties to their personality to their news-consumption habits, many of them reinforcing one another. So when it comes to the long-term goal of getting people to act more in accordance with reality-based understandings of the world, there’s still a huge amount of work to be done.