Since Donald Trump was elected president, journalists and political scientists and others have scrambled to figure out what the hell happened. After all, there was rather strong agreement among the most respected forecasters that Hillary Clinton would win — FiveThirtyEight was the outlier among the big-time ones, attracting scorn for claiming Trump had even a one-in-three chance of winning.
One of the more interesting attempts to unpack what happened was published earlier this week in Politico magazine. Katelyn Fossett and Steven Shepard, both reporters there, convened a forum of the few forecasters and pollsters who did appear to see a Trump win coming, or at least a very tight race, and asked them how they did it. Now, it’s important to point out that these forecasters, too, got various things wrong, and because of how probability works it’s hard to make concrete statements about an individual forecast succeeding or failing. Was Nate Silver “wrong” to forecast that Trump would lose? Well, his model predicted that in one out of every three elections like this one, Trump would win. So on paper, Silver’s model missed, but things are a bit more complicated than right and wrong: we can’t travel to parallel universes and see what happened there.
But on the other hand, as Fossett and Shepard note, the pollsters and forecasters they spoke with “all picked up on different hints that something bigger and more unpredictable was brewing this election, whether it was that fishy zero percent figure for Trump’s approval among black voters or the creeping understanding that women were particularly uncomfortable admitting who they were voting for.” What makes the conversation so interesting is how it runs through the various techniques they used to try to dig down into the minds of voters during what turned out to be a very unusual election.
According to one of the participants, Robert Cahaly, a senior strategist at the Republican polling firm the Trafalgar Group, an absolutely key part of the general polling misfire during this election cycle was the apparent prevalence of “shy voters” who supported Trump but felt socially awkward about saying so. Here’s what he said about these voters:
I saw a lot of commentators refer to this and say that they believe the “shy voter” worked both ways [shy Trump and shy Hillary voters]. That is not what we experienced. In fact, what we experienced was a pattern that was so unnatural we knew there had to be something to it.
I grew up in the South and everybody is very polite down here, and if you want to find out the truth on a hot topic, you can’t just ask the question directly. So, the neighbor is part of the mechanism to get that real answer. In the 11 battleground states and 3 non-battleground states, there was a significant drop-off between the ballot test question [which candidate you support] and the neighbors question [which candidate you believe most of your neighbors support]. The neighbors question showed a similar result in each state: Hillary dropped [relative to the ballot test question] and Trump came up across every demographic, every geography. Hillary’s drop was between 3 and 11 percent, while Trump’s increase was between 3 and 7 percent. This pattern existed everywhere from Pennsylvania to Nevada to Utah to Georgia, and it was a constant.
This is a version of social desirability bias: people answering survey questions in a way they think will make them look good, rather than accurately (it comes up a lot in questions about racial attitudes). It’s fascinating that you can get around it, at least partially, simply by asking someone about their neighbors rather than about themselves. And if Cahaly is accurately recounting just how widespread this pattern was — if it really showed up everywhere and was as large as he says — that could explain a sizable chunk of the gap between the most highly touted election forecasts and the shocking outcome.
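To make the mechanic concrete, here is a minimal sketch of the comparison Cahaly describes: take each candidate’s support on the direct ballot-test question and on the indirect neighbors question, and look at the shift between the two. All the poll numbers below are hypothetical, invented purely for illustration; they are not Trafalgar Group data.

```python
# Illustrative sketch of the "neighbors question" gap. The percentages
# here are hypothetical, not actual Trafalgar Group polling data.

def candidate_shift(direct_pct, neighbors_pct):
    """Shift, in percentage points, from the direct ballot-test
    question to the indirect neighbors question."""
    return neighbors_pct - direct_pct

# Hypothetical state poll results for each question type.
direct    = {"Clinton": 48.0, "Trump": 44.0}  # "Which candidate do you support?"
neighbors = {"Clinton": 41.0, "Trump": 49.0}  # "Which candidate do most of your neighbors support?"

shifts = {name: candidate_shift(direct[name], neighbors[name]) for name in direct}

for name, shift in sorted(shifts.items()):
    print(f"{name}: {shift:+.1f} points")
```

With these made-up numbers, Clinton shows a 7-point drop and Trump a 5-point gain on the neighbors question, which falls inside the ranges Cahaly reports (a 3-to-11-point Clinton drop and a 3-to-7-point Trump gain). The interesting part of his claim is not any single gap but its consistency across states and demographics.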