Hurricane Joaquin was expected to hit New York this past weekend; it didn’t. Likewise, interest rates were expected by most economists to rise last year; they didn’t. On the surface, predicting the future can sound like the stuff of horoscopes and storefront psychics, and yet there are clearly many instances in which we need to have at least a somewhat-good idea of what is likely to happen down the road. It’s crucial for decision-making and planning.
Luckily, some people out there are incredibly skilled at this, and Philip Tetlock, a psychologist at the Wharton School of the University of Pennsylvania, has just written a book — along with journalist Dan Gardner — revealing their secrets. It’s called Superforecasting: The Art and Science of Prediction, and much of the book is based on a widely reported, 20-year study Tetlock led, in which superforecasters took a stab at estimating the probability of some kind of global future event. (For example: “Will there be a significant attack on Israeli territory before May 10, 2014?”) Their answers were compared against the answers of intelligence officers, who had access to classified information concerning the questions posed. The superforecasters, it’s important to note, were pretty much your everyday kind of folk, including housewives and factory workers; they had no specialized knowledge and no access to classified information. Even so, their answers were, on average, about 30 percent more accurate than the experts’ answers.
How can this be? In an interview with Science of Us, Tetlock said that superforecasters’ skill comes down to one thing. Surprisingly, it isn’t the attributes you might guess, like numeracy or a high level of general intelligence. “One of the discoveries is how much hinges on a person’s attitude,” he said. Most of us — experts included — make decisions too quickly, and change our minds too slowly. Superforecasters, on the other hand, keep an open mind when forming opinions, seeking information from a wide variety of sources. (They’re the wide-ranging fox to the expert’s super-specific hedgehog, Tetlock said, using the old analogy.)
But they also are okay with being wrong, and are able to revisit and revise their predictions when new information comes to light. As the website for the Good Judgment Project (that’s what Tetlock called the overarching research project, which has involved more than 20,000 participants) explains, “belief updating” is a key component of the superforecaster’s skill. Participants had three months to ponder a forecasting question, and were told they could change their prediction at will. Tetlock and his colleagues kept careful track of how often the participants updated their predictions:
We measured the frequency of belief updates and their magnitude. For example, we could compare a forecaster who places 1.5 predictions per question and whose average update is 20 percentage points with another one who makes 2.3 predictions and updates by 11 percentage points, on average. Forecasters who updated their beliefs more often, and in smaller increments, tended to be more accurate than those who made fewer, or larger updates. Frequency and magnitude independently predicted accuracy. We verified the robustness of these relationships in and out of sample.
In plainer language, this means that those who made “frequent, small belief updates” were most accurate. And a big part of this mind-set, Tetlock said, is being comfortable with a little cognitive dissonance — that is, they’re able to hold two contradictory ideas in mind at once. “They can deal with it,” he said. “The classic formulation of cognitive dissonance theory is, people hate dissonance and they move automatically to try to reduce it.” Superforecasters, on the other hand, are more tolerant of this mental state than most of us, he said.
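The bookkeeping the researchers describe — predictions per question, and the average size of each revision — is simple to sketch. Here is a minimal, hypothetical version, assuming each forecaster’s record is just a list of questions, each holding the probability estimates (in percentage points) in the order they were made; the function name and data layout are illustrative, not from the study:

```python
def update_stats(predictions_per_question):
    """For one forecaster: (predictions per question,
    average update size in percentage points)."""
    n_questions = len(predictions_per_question)
    total_predictions = 0
    n_updates = 0
    total_magnitude = 0.0
    for estimates in predictions_per_question:
        total_predictions += len(estimates)
        # Each consecutive pair of estimates is one belief update.
        for prev, curr in zip(estimates, estimates[1:]):
            n_updates += 1
            total_magnitude += abs(curr - prev)
    freq = total_predictions / n_questions
    avg_update = total_magnitude / n_updates if n_updates else 0.0
    return freq, avg_update

# Two hypothetical forecasters answering the same three questions:
steady = [[60, 55, 57], [30, 35], [80, 78, 75, 74]]  # many small revisions
jumpy = [[60, 20], [30], [80, 40]]                   # few large swings
print(update_stats(steady))
print(update_stats(jumpy))
```

On these made-up numbers, the first forecaster averages three predictions per question with small revisions, the second fewer predictions with 40-point swings — the pattern the Good Judgment Project found to separate the more accurate forecasters from the less accurate ones.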
And this leads them to more-nuanced estimates than the black-and-white of yes or no. Superforecasters understand that the best possible estimate requires more varied shades of gray than even a simple maybe can provide. “We have a quote in the book from the world champion poker player, who says, I can tell the difference between a great player and a talented amateur: A great player knows the difference between a 55/45 bet and a 45/55 bet,” Tetlock said. Being as exact as possible with the degree of doubt around the prediction matters, in other words.
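The poker quote can be made concrete: at even money, a bet you win 55 percent of the time and one you win only 45 percent of the time sit a full 20 points of expected return apart, even though both round to “maybe.” A small sketch (the stakes and function name are illustrative):

```python
def expected_value(p_win, stake=1.0, payout=1.0):
    """Expected profit of a bet: win `payout` with probability
    p_win, otherwise lose `stake` (even money when payout == stake)."""
    return p_win * payout - (1 - p_win) * stake

print(expected_value(0.55))  # roughly +0.10 per unit staked: worth taking
print(expected_value(0.45))  # roughly -0.10: the same-looking bet, a loser
```

Telling 0.55 from 0.45 is exactly the kind of fine-grained calibration the superforecasters practice; a forecaster who can only say “about even” leaves that edge on the table.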
And this, incidentally, is likely a big reason why there aren’t really a lot of famous superforecasters, Tetlock said. Think of the people who are paid to talk or write about what could potentially happen down the road: pundits and columnists and others who proclaim their predictions confidently. Superforecasters’ ability to see many sides of an issue means that they “tend to speak in ways that make them less interesting to the media,” Tetlock said. “Media prefer people who use the word moreover rather than however. I can give you ten reasons why Saudi Arabia is going to experience a jihadist fundamentalist coup in the next six months as opposed to someone who comes on and says, Well, it’s a delicate balance … It’s not hard to imagine how a producer, who is very ratings-conscious, is going to gravitate toward the more interesting, opinionated hedgehogs more than people who would be seen as the equivocating or even cowardly fox.” In comparison, the science of forecasting is a little quieter, but no less exciting.