Many researchers who study human behavior love the so-called “ultimatum game” and its variants. The basic rules of this behavioral-economics exercise are simple: Person A is given a bit of money and told to divide it between themselves and person B. Person B can either accept the split, in which case each party walks away with their share, or refuse it, in which case no one gets anything. These sorts of games have formed the underpinnings of a great deal of research on how humans handle questions of fairness and resource allocation.
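The payoff rules described above are simple enough to sketch in a few lines of code. This is purely an illustration, not anything from the research itself; the function name and the dollar amounts are invented for the example:

```python
def ultimatum_round(endowment, offer_to_b, b_accepts):
    """One round of the ultimatum game.

    Person A proposes giving `offer_to_b` out of `endowment` to person B.
    If B accepts, the split stands; if B refuses, neither player gets anything.
    Returns (payoff_a, payoff_b).
    """
    if not 0 <= offer_to_b <= endowment:
        raise ValueError("offer must be between 0 and the endowment")
    if b_accepts:
        return endowment - offer_to_b, offer_to_b
    return 0, 0

# B accepts a 7/3 split of $10: A keeps $7, B gets $3.
print(ultimatum_round(10, 3, True))   # (7, 3)
# B refuses the same split, so no one gets anything.
print(ultimatum_round(10, 3, False))  # (0, 0)
```

The whole point of the game lies in that second case: B can burn both players' payoffs to punish a split they consider unfair.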
But as Pacific Standard explained in a must-read article from a couple of years ago, some scientists have noted that far too many of these experiments have been conducted on so-called WEIRD subjects, meaning people from Western, Educated, Industrialized, Rich, and Democratic societies. This mostly isn’t due to any nefariousness on the part of researchers — college kids just make for easy research subjects and are a pretty WEIRD group — but it does mean that these results can’t really be extrapolated to humanity as a whole, which is often what happens.
The solution is obvious: Conduct the game in a wide variety of different settings so as to start learning about how different cultures handle these questions of who gets what, and there’s been a fair bit of that going on. But a recent article in the Journal of Economic Behavior & Organization complicates things even further, suggesting subtle cues could be affecting the outcomes of these experiments — without the experimenters even realizing it.
The researchers, led by Jacobus Cilliers of the McCourt School of Public Policy at Georgetown University, sent teams of five experimenters out to 60 villages across Sierra Leone. In all cases, four of the researchers were black Sierra Leoneans, but the fifth member — whose role was only to hand out money and take notes, and who was explicitly instructed not to interact with study participants in any other way — was white in half the cases and black in the other half.
In each village, the researchers gathered some of the locals and had them play an ultimatum-game variant called the “dictator game.” Each participant was given the equivalent of about $1, which in poverty-stricken Sierra Leone “is an amount slightly higher than the average daily income,” and asked how much they wanted to keep and how much to distribute to another villager (chosen by the experimenters according to a few different types of rules). Unlike in the ultimatum game, the recipient had to accept whatever was given to them; they couldn’t scuttle the deal by refusing.
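The dictator-game variant strips out the recipient's veto entirely, which reduces the round to a single decision. A minimal sketch, again with invented names, and with amounts in cents standing in for the study's local-currency endowment:

```python
def dictator_round(endowment, amount_given):
    """One round of the dictator game.

    The dictator keeps `endowment - amount_given`; the recipient must
    accept `amount_given` and has no option to refuse the split.
    Returns (payoff_dictator, payoff_recipient).
    """
    if not 0 <= amount_given <= endowment:
        raise ValueError("amount given must be between 0 and the endowment")
    return endowment - amount_given, amount_given

# Amounts in cents: a dictator with a 100-cent endowment gives away 40.
print(dictator_round(100, 40))  # (60, 40)
```

Because a purely self-interested dictator would give zero, any positive amount given is taken as a measure of generosity, which is exactly what makes the game sensitive to who the players think is watching.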
All else being equal, the researchers found that the presence of a white experimenter increased the amount the villagers gave away by about 19 percent — which the authors interpret as an attempt to impress someone who is quite visibly a foreigner. There’s an important caveat, though: In villages known to have received a lot of foreign aid, and in which one of the experimenters was white, the villagers gave less. The researchers interpret this to mean that these villagers, seeing a white person, assumed the point of the experiment was to gauge whether they deserved more aid — a source of funds they generally associate with the presence of white people. Overall, the researchers write, “These results suggest that players act based on their perceptions of the experimenter as a white foreigner.”
In a sense, it doesn’t matter whether these explanations are exactly right. What matters is that tweaking the racial composition of the team administering these experiments yielded a noticeable difference in how players played, which naturally raises all sorts of questions about prior research conducted in these sorts of settings. It’s tricky enough to make the leap from game results to real-world behaviors, in other words, and trickier still when it turns out that players’ actions can be swayed this easily.