Imagine two moms with 1-year-old kids. One leaves her baby alone in a car for ten minutes to run into a grocery store and pick up a prescription for her sick husband. The other, in the exact same safe, suburban neighborhood, leaves her kid alone in a car for ten minutes to chat with a man she’s having an affair with.
Which kid was exposed to more risk?
The obvious answer is that the kids were exposed to the same amount of risk. Maybe one mom was more justified in leaving her kid alone, but that doesn’t change the inherent danger of a baby sitting unattended in a car for ten minutes.
Except that’s not how the human mind works. In a newly published study in the journal Collabra, researchers Ashley Thomas, Kyle Stanford, and Barbara Sarnecka reveal some really interesting things about how humans make moral judgments on the fraught question of parental neglect.
As Tania Lombrozo, a psychologist at UC Berkeley, explained on NPR’s website, the researchers conducted “six experiments with over 1,300 online participants” in which they had the participants read a “series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period.” The reason why the parent left the kid alone varied across the different vignettes: Sometimes it was for reasons that seem morally justifiable, sometimes less so.
Not surprisingly, the parent’s reason for leaving a child unattended affected participants’ judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.
The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same — that is, the age of the child, the duration and location of the unattended period, and so on — participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants’ factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent’s reason for leaving.
These fascinating results shine a spotlight on a powerful feature of human judgment: We rarely, if ever, take a “Just the facts, ma’am” approach. We have trouble filtering out certain forms of extraneous information. This shows up just about everywhere: in so-called “anchoring effects,” in which quantitative judgments get thrown off by irrelevant numbers; in judgments about the quality of food and wine; in the connection between physical cues like hunger and decisions that have nothing to do with how hungry we are. In the case of the hypothetical parents in the Collabra experiments, the authors write, “People don’t only think that leaving children alone is dangerous and therefore immoral. They also think it is immoral and therefore dangerous.”
There are clear societal implications here. In legal or moral judgments ostensibly based on the question of how much risk a parent exposed a child to, people will, by default, experience a ramped-up or toned-down sense of risk based on factors that aren’t actually relevant. Prejudice is an obvious example: If a black woman and a white woman were both charged with the same form of child neglect, to whom would a jury be more sympathetic? It’s context dependent, of course, but there’s good reason to think that in many settings a black mom would be judged as having exposed her kid to far more harm than a white one — even if the two of them did the exact same thing.
That’s one of the reasons these sorts of studies are so important: They can offer us useful ways to account for human bias and make society fairer. They’re also just really, really interesting.