
The question of why people cooperate with one another has always baffled and fascinated researchers. In cases where there’s a clear benefit to cooperation, the behavior makes sense — I’m helping you out today because I expect to benefit from it tomorrow. But there are plenty of examples in which people help strangers they may well never meet again, or even put their lives on the line for them. In these cases, it’s “irrational,” from a certain point of view, to cooperate. And yet people do so all the time.
A new paper in Psychological Science attempts to explain the roots of human cooperation. Written by David G. Rand, an associate professor of psychology, management, and economics at Yale, the paper deals with the question of whether and to what extent cooperation is intuitive — that is, something we more or less do automatically, without thinking about it.
As Rand explains in the paper’s intro, in recent years researchers have attempted to apply a so-called dual-process theory framework to the question of cooperation. Dual-process theory, popularized in Daniel Kahneman’s wonderful book Thinking, Fast and Slow, says that human thinking can be (roughly) divided into two types: System 1 thinking, which encompasses quick, gut-level thoughts that don’t require much conscious deliberation (intuition, more or less); and System 2 thinking, which does involve more careful deliberation. Hence the title of the paper: “Cooperation, Fast and Slow.”
The reason researchers study cooperation from this angle is that it can shed light on just how naturally cooperation comes to humans. Is our natural instinct to be helpful toward others, even when it won’t benefit us, or do we need to override that instinct in order to cooperate?
Rand has proposed a theory called the social heuristics hypothesis, or SHH, to help answer these questions. In short, it posits that we learn our intuitive responses to cooperation based on past experiences. If in the past being cooperative has proven the wise move — perhaps because we’ve seen that when we help someone on Monday, oftentimes they’ll turn around and help us on Wednesday — it solidifies, in a sense, and becomes our default social response. It becomes intuitive. For most people, though not all, day-to-day interactions and the way society is structured favor a generally cooperative set of social intuitions.
But we don’t always act according to those intuitions. In certain situations, we consciously deliberate — we switch to System 2, in other words — to come up with a more context-sensitive strategy. So maybe you’re normally a helpful cooperator, but then a stranger shows up at your door at 4 a.m. asking you to come help him push his car out of a huge muddy puddle on a dark road five miles away. In this case, you might take a few moments to think about it and realize that helping him out yourself probably isn’t the best bet (maybe you’ll offer to call someone for him).
Summing all this up, Rand’s theory predicts that “intuition favors typically advantageous behavior and deliberation favors behavior that is payoff maximizing in the current situation.” In effect, SHH claims that sometimes we realize our normal approach to cooperation isn’t going to lead to the optimal result in a given situation, and we adjust our behavior accordingly.
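To make that prediction a bit more concrete, here’s a rough sketch of the idea as a decision rule (my own illustration, not a model from the paper): intuition simply applies whatever default our past experience has trained into us, while deliberation looks at the payoffs actually on the table.

```python
# A toy illustration of the social heuristics hypothesis, not taken from Rand's paper.
# Intuition applies a learned default; deliberation maximizes payoff in the current situation.

def choose_action(intuitive_default, current_payoffs, deliberating):
    """intuitive_default: the habitual response shaped by past interactions,
    e.g. "cooperate" if cooperating has typically paid off.
    current_payoffs: dict mapping each action to its expected payoff in *this* situation.
    deliberating: whether the person stops to think it through."""
    if not deliberating:
        # Fast, System 1: fall back on the learned default.
        return intuitive_default
    # Slow, System 2: pick whatever pays best here and now.
    return max(current_payoffs, key=current_payoffs.get)

# In a one-shot, anonymous interaction, cooperating won't be repaid later,
# so deliberation suppresses it while intuition sticks with the cooperative default.
print(choose_action("cooperate", {"cooperate": 3, "defect": 5}, deliberating=False))  # cooperate
print(choose_action("cooperate", {"cooperate": 3, "defect": 5}, deliberating=True))   # defect
```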
To test whether past research supports this theory, Rand conducted a meta-analysis in which he looked at a bunch of previously published studies involving (non)cooperation between study participants in games that involve divvying up a pot of resources. Specifically, he was curious about the results of experiments in which participants were induced to take a more intuitive or more deliberative approach to the decision of whether to cooperate with someone else. SHH predicts that when participants in these sorts of games are nudged to rely more on intuition, it will increase the likelihood that they will engage in so-called “pure” cooperation — that is, cooperation that is unlikely to offer any sort of tangible payoff — but not “strategic” cooperation, in which cooperating offers a clear payoff. That, in fact, is what he found in his survey of 67 studies covering 17,647 participants: “the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted over deliberation, but no significant difference in strategic cooperation between more intuitive and more deliberative conditions.”
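For a sense of what these games actually look like, here’s a minimal sketch of a one-shot public-goods game, a standard setup in this literature; the parameters are made up for illustration and aren’t taken from any study in the meta-analysis. It shows why contributing in a single anonymous round counts as “pure” cooperation: the contributors always come out behind the free riders, even though everyone earns more when everyone chips in.

```python
# A minimal one-shot public-goods game, with made-up parameters for illustration.
# Each player keeps whatever they don't contribute; contributions are pooled,
# multiplied, and split evenly among all players.

def public_goods_payoffs(contributions, endowment=10, multiplier=1.6):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# One anonymous round, so contributing can't pay you back later ("pure" cooperation).
print(public_goods_payoffs([10, 10, 0, 0]))    # contributors earn 8.0, free riders earn 18.0
print(public_goods_payoffs([10, 10, 10, 10]))  # if everyone contributes, everyone earns 16.0
```

Strategic cooperation, by contrast, would be something like the same game played over repeated rounds with the same partners, where chipping in now can plausibly pay off later.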
The takeaway, then, is that acting from a System 1, gut-level place makes people more likely to commit acts of altruism that won’t offer tangible benefits in the long run — acts that could be interpreted as “good for the sake of good.”
Maybe it’s a stretch, but reading this study reminded me of the moments after the Boston Marathon bombing in 2013. In the confusing, bloody moments after the blast, the first impulse of many marathoners, spectators, and law enforcement officers was to bolt immediately in the direction of the carnage, to help anyone they could. In a moment of emotion and confusion and very little careful, deliberative thought, that was what came naturally to a lot of people. It’s human nature at its best, and Rand’s paper and theory offer a valuable account of how this sort of behavior becomes ingrained and manifests itself in the real world.