
Michael LaCour Probably Fabricated a Document About Research Integrity

Back in January, staffers at Experiments in Governance and Politics, an organization that supports empirical public-policy research, had a strange email exchange with Michael LaCour. At the time, LaCour was a bright grad student in political science at UCLA basking in the publicity and interest generated by a seemingly groundbreaking paper he and Donald Green, a highly respected political science professor at Columbia — and a founder of EGAP — had published in December. The paper appeared to show that a short conversation with gay canvassers about gay rights and marriage led to a large, durable shift in California voters’ views on these issues.

Five months later, we know that the data for the paper was fabricated by LaCour. The paper has been retracted by Science, and LaCour is waiting to find out what disciplinary action he faces from UCLA and from Princeton, where he was slated to begin working in the fall. But Macartan Humphreys, a Columbia University professor of political science and director of EGAP, told Science of Us via email that “it seems obvious now” that the email exchange he and his organization had with LaCour “was an early warning of bigger issues.”

Understanding why requires a very brief detour into the world of experiment pre-registration. This, in short, is a practice in which researchers “register” their experiments before conducting them, explaining what sorts of questions they’ll be attempting to answer, which outcomes they’ll be measuring, and so on.

Jon Krosnick, a social psychologist at Stanford who also works on issues of scientific transparency and integrity, explained that by creating a prior record of an experiment’s existence and structure, the practice accomplishes (in theory, at least) two main things: It prevents researchers from testing for a million outcomes, finding one that’s statistically significant, and claiming that that outcome had been the point of the research all along. It also acts as a counter to the so-called “file drawer problem” of only publishing successful studies and burying those that don’t pan out.

These problems have historically been most tightly associated with pharmaceutical research, Krosnick explained. But social science is starting to embrace pre-registration as well, and EGAP houses a database of pre-registered experiments. This is where the LaCour and Green paper comes in.

In the acknowledgements section of their retracted Science paper, LaCour and Green write that “The experiments reported here were registered at the Experiments in Governance and Politics site before the launch of each study.” As it turns out, this isn’t true. While the second of the two experiments reported in the study is pre-registered (though Humphreys noted that it’s “missing a lot of detail one might like to see in a registered design”), the first one isn’t. That’s why LaCour emailed EGAP: He wanted the organization to update its listings to note that both experiments had, in fact, been pre-registered.

LaCour sent EGAP a document he said was proof that the first experiment had been pre-registered — according to Humphreys, it was a PDF that LaCour claimed EGAP’s website had automatically produced after he successfully pre-registered the first experiment.

The problem is that EGAP has no such automatic document-generating system. Humphreys explained what happened next: “We checked our files and had no record of the document and we convinced ourselves it was not something we produced; we told him that we could not accept the document and that we could not recognize the first study as registered with us.”

At this point, LaCour changed his tack, as an excerpt from one of his emails shows (Humphreys wouldn’t share the rest of the emails or the document LaCour had sent EGAP, but said he’d passed all those materials along to UCLA and had notified Science):

This suggests to me that I never actually pre-registered the study on the EGAP site, but rather, submitted the proposal with EGAP guidelines as my final project for a course. … I, of course, do not expect this to qualify as validation of registration — my hope was just to clarify that I made an honest error and there was no foul play involved.

In other words, LaCour shifted to claiming that the document he had sent along hadn’t been an automatically generated PDF, but had rather been something he’d completed for a class. “The whole business had a lot of weirdness around it,” said Humphreys. “I have no doubt that he did not register with us and that the document was his invention.”

It should be said that this isn’t smoking-gun evidence of fabrication (although there is already plenty of that). But believing LaCour’s version of events requires believing a rather unlikely sequence: LaCour doesn’t pre-register the experiment with EGAP, then confuses himself into thinking he did, then confuses himself into thinking that EGAP’s computer system sent him a confirmation document it couldn’t have produced; then, only after EGAP points this out to him, he remembers, Oh, it was actually for a class.

Two facets of this story bear striking similarities to previously revealed aspects of the LaCour scandal: First, LaCour apparently lied about something that likely offered very little gain — Science wasn’t going to refuse to publish an otherwise important paper simply because only one of its two experiments was pre-registered. It certainly seems like the risk of claiming both experiments had been pre-registered, and then reaching out to EGAP to “prove” it, would have outweighed any potential benefits. One possible explanation is that Green or someone else noted the discrepancy and asked LaCour to look into it, at which point LaCour decided to cover his tracks. Neither LaCour nor Green returned emailed requests for comment, and Ginger Pinholster, a spokesperson for Science, said in an email that prior to Humphreys, no one had brought the EGAP discrepancy to the publication’s attention.

The second similarity is that Donald Green’s name again shielded LaCour from closer scrutiny: “The statement of pre-registration with EGAP was welcome and not surprising to the editor handling this paper because of Donald Green’s involvement in launching E-GAP in 2009,” said Pinholster. Much as Green’s name reassured Krosnick when he first heard about the study and felt it sounded too good to be true — “I see Don Green is an author. I trust him completely, so I’m no longer doubtful,” Krosnick told me he said to the “This American Life” producer who first informed him of LaCour and Green’s unlikely-seeming results — it also served to short-circuit concern here.

This is as good a place as any to tie up two other loose ends:

Late Friday, LaCour released his response to the allegations that have been leveled against him by David Broockman, Joshua Kalla, and Peter Aronow (or BKA, as they’re increasingly referred to in discussions of the controversy). One of his arguments is a total non sequitur, but his most substantive criticism is that Broockman et al. were wrong to say that the data LaCour claimed to have collected from his surveys actually came from the 2012 Cooperative Campaign Analysis Project, or CCAP, a separate data set — that BKA had unfairly manipulated data to come to that conclusion.

Because of the timing of the release of LaCour’s response, it was unlikely people would dig into its claims right away. A few academics on Twitter did, though, and they seemed unimpressed. One of them, Patrick Perry of the Stern School of Business at NYU, posted a stats-y rebuttal on Saturday. I emailed him to ask whether he was saying that he thought LaCour’s claim was bogus. His response:

Transparently bogus. The only meaningful difference between LaCour’s data and the CCAP data is that there are more 50s in LaCour’s. BKA showed that you can account for this discrepancy if you recode the missing CCAP values as 50s. The evidence that LaCour copied data from CCAP is overwhelming. There is no other plausible explanation for the agreement between these datasets.

In other words: The CCAP data contains the same feelings-thermometer data as LaCour and Green’s paper — data produced by asking people to note how “warmly,” on a 101-point scale, they feel about gay people. And if, as BKA did, you take all the missing feelings-thermometer readings in the CCAP data — it’s common, especially in large data sets, to have some missing values — and simply replace them with 50s, the midpoint of the scale, boom — suddenly the CCAP data is identical to the data LaCour reported (in fact, the CCAP data itself includes a variable in which this switch is made).
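To make that recoding concrete, here is a minimal sketch of the check in Python with pandas. This is not BKA’s actual code; the variable names and example values are hypothetical, and it compares the distributions of responses rather than individual rows.

```python
# A sketch of the recoding check described above (not BKA's actual code).
# Assumes two hypothetical pandas Series of 0-100 feeling-thermometer
# readings: `lacour` (as reported in the paper) and `ccap` (from CCAP,
# with some values missing).
import pandas as pd

def matches_after_recode(lacour: pd.Series, ccap: pd.Series) -> bool:
    """Recode CCAP's missing readings to the scale midpoint (50), then
    test whether the two distributions of responses are identical."""
    recoded = ccap.fillna(50)
    # Compare counts of each response value rather than row by row,
    # since the two data sets aren't aligned by respondent.
    return recoded.value_counts().sort_index().equals(
        lacour.value_counts().sort_index()
    )

# Toy example: the missing CCAP value becomes a 50, reproducing the
# surplus of 50s in the reported data.
lacour = pd.Series([10.0, 50.0, 50.0, 90.0])
ccap = pd.Series([10.0, 50.0, None, 90.0])
print(matches_after_recode(lacour, ccap))  # True
```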

So it’s hard, if not impossible, to imagine how this could be the case if the BKA accusation were false. If you want more details, Neuroskeptic, a well-known science blogger at Discover, has a great rundown here. “As far as I can see,” Neuroskeptic writes, “LaCour has failed to refute this central criticism of Broockman et al.”

There’s also a rumor going around, if my inbox and some Twitter conversations I’ve seen are any indication, that an item on LaCour’s current, updated-to-remove-other-false-stuff curriculum vitae is false: the part where he says he was the “Hook ’Em” mascot for the Texas Longhorns from 2007–’09.

He was in fact the mascot. I spoke with Katie Sowa, a 29-year-old UT grad who had also played the role — she’s mentioned in this article, which multiple people sent me, and which was mentioned in a since-deleted Political Science Rumors post that falsely accused LaCour of fabricating that item.

Sowa immediately said that yes, she remembered him. “He was a really good mascot,” she said. “We all enjoyed being around him. I knew he was a very smart guy.” It’s funny, she told me — she’d recently gotten a random email from a Texas A&M professor asking her the exact same question. LaCour, she said, had a great work ethic and never missed any events where Hook ‘Em was supposed to perform. She was very surprised to hear about what was going on.
