As you have likely heard, psychology is in the midst of a replication crisis: A lot of fun, sexy findings that garnered excited headlines may be false. Heck, even ideas that are so widely believed they have long been comfortably ensconced in psychology textbooks — social priming, for example — appear to be at risk of tipping over.
An important part of this story has to do with incentives. Many psychological researchers are pressured to churn out low-quality papers at a rapid-fire clip — papers that report results that are technically “significant,” statistically speaking, but based on shoddily constructed studies, data-fishing, or both. If you have enough data and you play with it in enough ways, it just isn’t hard to find correlations and to excitedly expound upon what they might mean.
In Slate, Daniel Engber tells the story of a lab run by Nick Neave of Northumbria University. Maybe you’ve seen some of the research it’s produced: Several times in the last few years, Neave’s studies about which dance moves are “scientifically” rated as sexiest have gone viral. The most recent one even got a full write-up in the New York Times.
As Engber convincingly argues, this is bad research. Take a study published by Neave’s lab a few years ago, which “ended up on ABC, NPR, the BBC, and many other places.” That study purported to show that one of the keys for a man to dance sexily is the movement of his right (but not left!) knee. Engber writes:
Most coverage focused on the knee, since that was the surprise. Could one knee (and not the other) really work such magic? Well, no. Neave’s team had looked at only 19 dancing subjects (with 37 women rating them), while measuring dozens of variables that ranged from the variability in flexion of the left shoulder to the external rotation speed of the right ankle. With such a smorgasbord of data, drawn from such a paltry set of subjects, even random blips might appear to be significant. Indeed, when the same group produced a follow-up in 2013, this one involving 30 dancing men, it failed to replicate the first experiment’s results. Now it seemed as though the speed, variability, and amplitude of a man’s arm movements—not those of his neck or knee—mattered most.
As Engber points out, the failed replication didn’t stop Neave from repeatedly citing the original result as a valid finding. “His paper on dancing women, for example, cites his previous research in saying ‘that male dance quality can be predicted by … the speed of movement of the right knee,’” writes Engber. “Another of his recent publications, from 2015, contends that ‘“good” dancers displayed … faster bending and twisting movements of their right knee.’” And yet when Engber called Neave to ask him about this, Neave openly said, “My guess is that this was some kind of artifact in the first study.” So it’s a true finding when you’re touting your lab’s past work, but an “artifact” when a smart journalist who has been following your work closely calls you up with questions — got it.
The final part of Engber’s story is particularly telling:
Again, I asked if it wouldn’t be a wise idea to wait, at least until he’d confirmed his findings with a fresh, hypothesis-driven experiment [to extend this line of inquiry].
“As I said, we’re psychologists,” he said. “We haven’t wrongly reported a cure for cancer or anything like that. You know, it’s not that important, is it really?”
I was forced to agree.
“We’ve reported some interesting stuff,” he continued. “We don’t fully understand everything that we’ve found. … We’ve never, ever stood up and said, ‘We have proven this.’ You can’t say that in psychology. It’s not physics. So we have these situations where you report something, and you hope that it’s true, but it might not be true. Then you change your mind; you tweak things around. That’s the nature of the game.”
This is a weak “game.” Researchers shouldn’t be able to publish extremely shaky, misleading results, act like those results are solid, garner media attention and acclaim for them, and then backtrack to “We haven’t wrongly reported a cure for cancer or anything like that” when journalists or critics come knocking with tough questions. And yet this sort of thing happens all the time.
Which is why Engber is doing really valuable journalism here. This exposure of the Neave lab’s techniques will, in effect, shame the lab and its researchers a bit. And a little shame is good in situations like this! That’s how you change social norms. After all, at the moment many researchers think they can get away with — and in fact benefit from — publishing these types of viral-sharing-friendly results. The more they can be swayed from this idea, the better.
Obviously no one should be paraded in the streets naked or anything like that. But the labs and researchers who continue to produce this sort of shoddy research should know that the cost of doing so is increasing. More and more, people realize that psychology is in trouble, and that that trouble stems in large part from some researchers’ proclivity for chasing headlines at the expense of rigor and caution. Hopefully that will change soon — in certain ways, it already is — and the more we know about how these sorts of labs and researchers operate, the faster that change will come.