How to Train Yourself to Be a More Rational Thinker


By now, nearly everyone — or at least everyone who’s taken a psychology or business course — is familiar with human foibles like the better-than-average effect (at many tasks, most of us think we’re better than most people) or illusory correlation (we easily read relationships into randomness). The psychologists Daniel Kahneman and Amos Tversky popularized the ideas of biases and heuristics in the 1970s; more recently, psychologist Dan Ariely put the spotlight back on the concept of human irrationality with his 2008 book, Predictably Irrational. I myself have gainfully contributed to the cottage industry of looking smart by saying we’re dumb.

And yet somehow, despite such faulty brains, we are a species that has landed on the moon, and that sometimes even manages to get along. Apparently, under the right circumstances, we can pay attention to facts, straighten our slanted beliefs, and make prudent decisions. Just what are these elusive circumstances, and where can I get some?

Consider this something like a tool chest for rationality. It’s far from comprehensive, and it focuses more on epistemic matters like avoiding bias than instrumental ones like avoiding procrastination, but if you can master even one implement — and use it regularly — you’ll be ahead of most people.

* * *

When our judgment misses the mark, it often means we’ve aimed too high. In realms from dating to business, we’re overconfident and overly optimistic. We believe what we want to believe. Discussions with other people can sometimes bring us back to earth, but there are also ways to tap multiple perspectives inside ourselves. Whenever you have a surefire idea that you know will work, try this: Think of reasons it won’t. (Or, alternatively, if you’re sure it won’t, think of reasons it will.) For any belief, argue against it.

For example, in one study, managers were asked to guess whether the liabilities of a particular company were greater than $1.9 billion, and to rate their confidence. About 54 percent were correct, but the average confidence was 72 percent. Other managers were asked to give an answer, then think of a reason they might be wrong, and guess again. This time, 62 percent were correct, while their average confidence stayed about the same, so the gap between confidence and accuracy (their overconfidence) shrank.
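
To make that arithmetic concrete, here is a minimal sketch using only the rounded percentages quoted above (not the study's raw data) to show how the calibration gap shrinks:

```python
# Overconfidence measured as the gap between average confidence and accuracy.
# The figures are the rounded percentages quoted above, not the study's raw data.

def overconfidence(accuracy_pct: float, avg_confidence_pct: float) -> float:
    """Calibration gap in percentage points (positive = overconfident)."""
    return avg_confidence_pct - accuracy_pct

baseline = overconfidence(accuracy_pct=54, avg_confidence_pct=72)               # 18 points
after_counterarguing = overconfidence(accuracy_pct=62, avg_confidence_pct=72)   # 10 points

print(f"Gap before: {baseline:.0f} points; after arguing against yourself: {after_counterarguing:.0f} points")
```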

Another way to use multiple perspectives is to imagine yourself not as yourself, but as an onlooker. In 1985, when Andy Grove was the president of Intel, he faced a choice: The company made its money off memory, but Japanese companies were gobbling up market share. Should Intel persist in memory, or focus more energy on processors, another area they’d been dabbling in? In his memoir, Grove recounts a conversation he had with Intel’s CEO, Gordon Moore:

I looked out the window at the Ferris Wheel of the Great America amusement park revolving in the distance, then I turned back to Gordon and I asked, “If we got kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answered without hesitation, “He would get us out of memories.” I stared at him, numb, then said, “Why shouldn’t you and I walk out the door, come back in, and do it ourselves?”

Grove called it the revolving-door test. And if you know Intel, you know the rest.

Supporting the revolving-door test, research shows that if we step outside of ourselves and look at our situation from a distance, we can avoid some of our biases. In one set of studies published in 2012, people made more accurate estimates of how long it would take to complete certain tasks, like writing a letter or painting a room, if they pictured themselves doing it as an onlooker would. You could also call the revolving-door or third-person test the advice test: What would you tell someone in your situation?

Or, another strategy: What would a whole group of people tell you? When you tap the “wisdom of the crowd,” most people will be wrong — but, critically, they’ll likely be wrong in different ways. If you average their responses, you’ll get something closer to the truth than most of the individual guesses.
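
To see why averaging works, here is a toy sketch; the guesses and the true value are invented for illustration, but the pattern (errors in different directions canceling out) is the whole trick:

```python
# Wisdom of the crowd: individual guesses err in different directions,
# so their average tends to land closer to the truth than most guesses do.
# The guesses and the true value below are invented for illustration.

true_value = 1200                      # e.g., jellybeans in a jar
guesses = [450, 900, 1100, 1500, 2100, 800, 1600]

crowd_estimate = sum(guesses) / len(guesses)

errors = [abs(g - true_value) for g in guesses]
crowd_error = abs(crowd_estimate - true_value)

print(f"Crowd average: {crowd_estimate:.0f}, off by {crowd_error:.0f}")
print(f"Individual guesses were off by {min(errors)} to {max(errors)}")
```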

And if you’re on your own with no group to turn to, you can tap the “wisdom of the inner crowd.” “People don’t use everything they know every time they make a decision or form a judgment,” says Jack Soll, a management professor at Duke University. In one study from 2008, participants were asked to guess at figures, like the percentage of the world’s airports in the U.S.; when they were later asked to guess again, the average of their two answers bested either on its own. Performance improved even more when the second guess came three weeks later.

A 2009 study, meanwhile, combines self-arguing with the internal crowd, for even better results. Some people estimated historical dates, then were asked to assume they were wrong, offer reasons why, and give a different estimate, which was averaged with the first. Others just gave two estimates, which were averaged. Members of the first group ended up with more accurate answers than members of the second (though neither strategy was as effective as averaging two people’s guesses).
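
A minimal sketch of the inner-crowd trick described in the two paragraphs above, with made-up estimates for a single question; the point is only that the average of your first and second guesses tends to beat either one:

```python
# "Wisdom of the inner crowd": average your own first and second guesses.
# The estimates and true value are made up for illustration.

true_value = 1492                      # e.g., a historical date you half-remember

first_guess = 1440                     # initial estimate
second_guess = 1520                    # after assuming the first was wrong and asking why

inner_crowd = (first_guess + second_guess) / 2

for label, estimate in [("first", first_guess), ("second", second_guess), ("average", inner_crowd)]:
    print(f"{label:>7}: {estimate:.0f} (off by {abs(estimate - true_value):.0f})")
```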

Sometimes, you want to prepare for a range of possible scenarios, but overconfidence in your predictions narrows the range you actually consider. One study looked at 13,300 stock-market estimates over a decade, and found that the market’s real performance fell within executives’ 80 percent confidence intervals (the range they felt 80 percent certain the returns would fall within) only 36 percent of the time. In a book chapter and Harvard Business Review article on “debiasing,” Soll and his co-authors suggested conjuring three separate estimates instead of a range: your most likely estimate, plus high and low estimates you think are unlikely, but not unrealistic. This technique tends to widen the outcomes people consider, allowing them to prepare for both the best and the worst.
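
A sketch of the bookkeeping Soll and his co-authors suggest, with invented numbers; the structure (a low, a likely, and a high figure, rather than one fuzzy range) is the point:

```python
# Three-point estimation: instead of one fuzzy range, commit to three figures:
# a low you'd be surprised to fall below, a most-likely value, and a high
# you'd be surprised to exceed. All numbers here are invented for illustration.

from dataclasses import dataclass

@dataclass
class ThreePointEstimate:
    low: float          # unlikely but not unrealistic downside
    likely: float       # your single best guess
    high: float         # unlikely but not unrealistic upside

    def spread(self) -> float:
        """Width of the outcomes you're actually preparing for."""
        return self.high - self.low

next_year_return = ThreePointEstimate(low=-0.15, likely=0.06, high=0.25)
print(f"Plan for anything from {next_year_return.low:.0%} to {next_year_return.high:.0%} "
      f"(spread of {next_year_return.spread():.0%}), not just {next_year_return.likely:.0%}")
```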

If you’re deciding among a set of concrete options, such as which restaurant to go to or which candidate to hire, compare them side by side, rather than one at a time, Soll says. This gives you a reference point on variables that may be hard to evaluate on their own — an employee who brought in a million dollars in sales, for example, might seem like a high performer until you see that someone else on the team has $10 million in sales. And when making comparisons, forming a gestalt of each is not always best, as irrelevant factors might seep in without your permission. For example, if you’re interviewing multiple people, it’s better to conduct structured interviews, in which you ask everyone the same questions and score each answer, instead of conducting freewheeling conversations and forming an overall impression at the end. It’s too easy to be swayed by factors that we don’t think we should be swayed by, like shared hobbies. In fact, a recent study found that unstructured interviews didn’t just provide useless information — they also diluted good information, making them worse than no interview at all.
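
Here is a toy sketch of what structured, side-by-side scoring might look like; the questions, candidates, and scores are all invented:

```python
# Structured interviewing: ask every candidate the same questions, score each
# answer on its own, and compare totals side by side, rather than forming a
# holistic impression at the end. Candidates, questions, and scores are invented.

QUESTIONS = ["past sales results", "handling a difficult client", "planning a territory"]

# Score per question on a 1-5 scale, recorded immediately after each answer.
candidates = {
    "Candidate A": [4, 3, 5],
    "Candidate B": [5, 2, 3],
    "Candidate C": [3, 4, 4],
}

for name, scores in sorted(candidates.items(),
                           key=lambda kv: sum(kv[1]), reverse=True):
    per_question = ", ".join(f"{q}: {s}" for q, s in zip(QUESTIONS, scores))
    print(f"{name}: total {sum(scores)}  ({per_question})")
```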

Another way to avoid being swayed by factors we don’t want to sway us is to consider how options are framed. “It’s well-known that people don’t make decisions about outcomes; they make decisions about descriptions of outcomes,” says Spencer Greenberg, an applied mathematician who studies rationality. A raft of studies show that judgment can be swayed by incidental variables, like how hungry we are or how a question is phrased. In a classic example of framing, people are more willing to flip a switch that diverts a trolley from five people toward one when it’s described as saving five lives versus killing one. The point isn’t that phrasing it one way or the other leads to a less rational choice — there isn’t one objectively right answer. The irrationality lies in the fact that people’s choices depend on things that, upon reflection, they would tell you shouldn’t matter.

One way to reduce framing effects is to consider two versions of the same option side by side. In the trolley problem, when you realize that killing one and saving five are the same thing, language might play a smaller role. In your daily life, perhaps you’re completing an unexciting project simply to avoid the pain of “wasting” the effort you’ve already put in, a mistake known as the sunk-cost fallacy. But what if you try what we might call mirror-framing? By quitting, you’d “waste” the time you’ve put in (killing one), but you’d also “free” that time for other projects (saving five). Same act, two mirrored perspectives.

Or maybe you haven’t even imagined all the possible options. Maybe instead of killing one or killing five, there’s a third track covered in shrubbery. Try imagining that the options you’re deciding between are no longer available, and you might come up with something even better. “When it comes to decisions, we need to expand our horizon,” Ariely, who’s also a professor at Duke, tells me. “We need to think about what other things don’t come to mind naturally.” Greenberg has created a site called ClearerThinking.org that offers tools for — you guessed it — clearer thinking. One tool helps users learn to avoid “explanation freeze,” or the lazy tendency to stick with the first explanation we come up with: The site provides examples and downsides (unnecessary catastrophizing, dangerous complacence), then offers practice by asking readers to list not one but three plausible explanations for a scenario. It takes effort, but it’s a good habit.

* * *

Step one to knowing the truth is wanting to know the truth. We often don’t — motivated reasoning leads us to see the world in the way most amenable to our current aims, and many researchers see this fun-house reality as a feature, not a bug. For example, the cognitive scientist Dan Sperber has put forth the argumentative theory of reasoning, which holds that reasoning did not evolve to refine beliefs, but to advance them and to defend against others’. That’s because we’re highly social, and it often pays more to convince others of a reality that benefits us — I’m the best candidate; I deserve the last cookie — than it does to know who really is the best candidate, or who really does deserve the last cookie. Similarly, we use reasoning to defend against others’ arguments by picking them apart.

This is why reasoning actually works pretty well collectively, as the strongest argument emerges in battle-hardened form from group discussion, but not so well individually, when we have no whetstone to hone our own assertions. “When people talk with each other, and argue with each other, they reach better beliefs and better decisions,” says Hugo Mercier, a cognitive scientist who has worked with Sperber on the theory of argumentative reasoning. “We suck quite a bit at doing that on our own, because that’s not what we evolved for.”

Arguing, then, is a great way to reach the truth — much better than huddling with like-minded teammates, which tends to lead to polarization. And there may be ways to amplify what you gain from arguing. The psychologist Anatol Rapoport devised a set of rules for steering critics away from straw-man arguments, for everyone’s good. Daniel Dennett summarized Rapoport’s advice in his book Intuition Pumps and Other Tools for Thinking:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
  3. You should mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

Not only will you conscript a more willing accomplice in your search for truth, but the exercise in itself will help you extract valuable material from the other side’s beliefs. Julia Galef, a writer who co-founded the nonprofit Center for Applied Rationality, calls this the steel-man argument: Be generous and argue against the best version of your opponent’s beliefs that you can forge. Galef also tries to shift her motives during an argument from winning at all costs to wresting the most value. She tells herself that if she “loses,” there’s a consolation prize: She gets to take home a copy of her opponent’s weapon and use it to win the next round against someone else.

Stopping yourself in the heat of debate and redefining your aims does not come naturally. So, Galef says she recommends developing “mindfulness, the ability to detect the subtle emotional texture of your thinking — for example, that feeling of vindication when you read an article arguing for something you already believe. Or that feeling of scorn when you read something that contradicts your views.” Awareness, in turn, might lead to action: “Once you start noticing the emotional drives shaping your reasoning,” she says, “it’s much easier to accept that you’re not being totally objective most of the time. But that isn’t automatic. It’s something you cultivate.”

Greenberg also notes “how important being able to deal with negative emotions is when it comes to being more rational. If you’re trying to figure out the truth, that means when someone points out a flaw in your reasoning, you need to be able to admit that.” Short-term loss, long-term win.

* * *

We often prefer to retain our biases, even when they’re called out. No one I talked to had much hope for increasing rationality in political debate, because we have little incentive to find political truth. “Many of the beliefs we have about politics have absolutely no practical importance for us,” Mercier says. “It’s not going to affect our lives one way or the other if we believe that global warming is real, if we believe that Hillary should go to jail.” But while a single vote rarely matters, vocal support for one side buys you important allegiances. “There are plenty of incentives to believe something flattering to your own views, something that means your political ‘tribe’ was right and virtuous all along,” Galef says. “But what incentives do we have to figure out the truth? Figuring out the truth is effortful, it requires self-control, and it gets in the way of your ability to cheer for your ‘side.’”

Arguably, in such cases, rationality would be detrimental to our well-being (if not to the health of the democracy). And there’s the paradox: If irrationality helps us, is it not, then, rational? The argumentative theory of reasoning holds that our ancestors benefited from bias, or else we wouldn’t be so biased today. And in 1976, the evolutionary biologist Robert Trivers suggested that self-deception evolved for the sake of other-deception: The better you convince yourself you deserve that cookie, the more believably you can convince others. (Recent studies have supported his idea.) Research has shown that overconfidence also enhances social status — even when it’s revealed as overconfidence. And Ariely tells me that “it’s important to realize that we don’t always want to be more rational. Think about something like emotions. Yes, there’s some emotions we don’t want, but there are other emotions — like love and compassion and caring about other people — that we certainly don’t want to eliminate.” Even unpleasant emotions serve a purpose.

When I noted the cottage industry of “looking smart by saying we’re dumb,” I was half-kidding about the dumb part. Kahneman and Tversky described our flaws as the result of mental shortcuts, ones taken by a cleverly efficient brain. And my own book on magical thinking is subtitled “How Irrational Beliefs Keep Us Happy, Healthy, and Sane.” So do we really want to be more rational, in the sense of knowing the truth about things?

Yes. Sometimes. Epistemic rationality (clear thinking) can get in the way of instrumental rationality (efficacious thinking), but usually it helps it. Seeing the real lay of the land often — not always, but often — gets you to your destination, whether it’s a job or a spouse or a cookie. So knowing neat tricks for clarifying thought is essential. But first, you have to know when to use them, and to have the guts to do so.
