As Uber has exploded across the country, oftentimes simply setting up shop in different cities while ignoring the regulatory hoops a new cab company would be required to jump through, the ride-sharing service hasn’t always been welcomed warmly by local authorities. In New York City, for example, the City Council, citing congestion supposedly caused by Uber drivers, will likely soon vote on a City Hall–backed proposal that would curb Uber’s ability to expand — a move the company says would be crippling to its New York operations.
Underneath the myriad debates spawned by the rise of Uber — How should its drivers be categorized, from a labor-law perspective? Do taxicab companies enjoy unfair monopolies? Is the medallion system broken? — the company makes a straightforward claim as to why it should be embraced by the cities with which it’s feuding: Lots of people aren’t happy with taxi service, and Uber offers a better product. The company’s arguments on this front aren’t particularly controversial among those who have taken a lot of cab rides. It can indeed be tough to get a taxi in certain neighborhoods, and cabbies themselves are notorious, at least in New York, for refusing to travel to certain places, despite the fact that they’re legally required to take passengers where they want to go.
To date, the proposition that Uber offers measurably better service than cabs has gone mostly untested. Yesterday, though, BOTEC Analysis Corporation released a rigorous, Uber-funded (more on that in a bit) study led by Rosanna Smart, a Ph.D. candidate in economics at UCLA, comparing UberX service — that’s the cheaper “flavor” of Uber — to taxi service in Los Angeles.
As the authors describe the setup: “The design could hardly have been simpler; we sent pairs of riders to call for taxi service or use an app to summon UberX for travel along pre-planned routes. The riders recorded how long it took – starting from the moment of picking up the phone or opening the app – before they were actually in a car and on their way, and also how much the ride cost, including a standard 15% tip for the taxi drivers and any premium charged under the Uber ‘surge pricing’ system. After each ride, the riders switched off; whoever took a taxi last time took an Uber next time. Our riders didn’t know that Uber had paid for the study.”
The results were not good for taxi companies. On average, the researchers found that “a taxi takes two to three times longer to arrive than an UberX,” while costing twice as much. The initial study was conducted over the course of a day, but the researchers followed up over the next three days with so-called validation studies, this time not informing Uber which neighborhoods they’d be operating in — cutting the chance that the company would, say, station extra drivers in the suddenly “hot” regions of the experiment to reduce wait times. The results were similar.
As the authors point out, this isn’t an exhaustive study — to ensure participants’ safety, “A substantial number of low-income Los Angeles neighborhoods were excluded from study due to high rates of violent crime, and we collected observations only during daylight hours.” So it could be the case that in certain places at certain times, the big gap observed by the researchers would shrink or disappear entirely. There’s also no way to know, of course, whether these results would hold in other cities.
So should we trust this research, despite its funding source? The short answer is that, notwithstanding the limitations noted by the researchers themselves, it appears to be a well-designed study, and the differences between cab and UberX service were rather massive: “Twice as fast, half as expensive,” as BOTEC chairman Mark Kleiman headlined his blog post.
That said, one aspect of how this study came about should cast at least a little bit of doubt on the findings: If the study hadn’t found that Uber outperformed taxis, it would never have been released publicly. “Uber didn’t influence the design or the analysis or the way we wrote it up or what we said about it,” Kleiman told Science of Us, “but they did have the decision about whether the results, once they were in, would be published or not. My view on this is yes, if I were a journalist, I’d say, Did they fund 30 studies in 30 cities and publish the one that came out right, like a drug company does? I’m pretty sure the answer is no, because we went to them [to suggest the study], they didn’t come to us. So I’m pretty sure this isn’t the only result, but I didn’t know that for certain.”
This so-called “file-drawer problem,” in which results that don’t support a given finding are tossed in a file drawer (metaphorical or otherwise), never to be seen by the public, has garnered some attention in both the medical and social sciences. Macartan Humphreys, a Columbia University political science professor and the director of Experiments in Governance and Politics, a social-science research-transparency organization, pointed out that this is such a big problem that it’s listed as the third of his organization’s five statements of principle: “In collaborations between researchers and practitioners it should be agreed in advance, and not contingent upon findings, what findings and data can be used for publication.”
Uber’s study isn’t a perfect analog since it was conducted and published by a for-profit private firm rather than an academic institution, but the same logic holds: As soon as you start cherry-picking study results, problems crop up. The whole point of studies is that the people reading them should be able to trust that there aren’t ten other conflicting studies languishing in a drawer. To Kleiman, the obvious solution here is to run more studies about Uber versus cabs. “The right conclusion is somebody should pay to do that again,” he said.
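Kleiman’s “30 studies in 30 cities” hypothetical is easy to quantify with a toy simulation. The numbers below are purely illustrative (they are not drawn from the BOTEC study): even if two services had genuinely identical wait times, a sponsor that commissions 30 studies and files away the unfavorable ones would almost always have at least one noise-driven “win” to publish.

```python
import random

random.seed(0)

def one_study(n_rides=40):
    """Simulate one wait-time study where both services are truly identical.

    Every ride's wait is drawn from the same (made-up) distribution for both
    services; the study "favors Uber" only when Uber's sample mean comes out
    at least a minute lower by chance.
    """
    taxi = [random.gauss(10, 4) for _ in range(n_rides)]
    uber = [random.gauss(10, 4) for _ in range(n_rides)]
    return sum(uber) / n_rides < sum(taxi) / n_rides - 1.0

def file_drawer(n_studies=30, trials=1000):
    """Fraction of 30-study batches containing at least one spurious 'win'."""
    hits = 0
    for _ in range(trials):
        if any(one_study() for _ in range(n_studies)):
            hits += 1
    return hits / trials

print(f"share of 30-study batches with a publishable 'win': {file_drawer():.2f}")
```

With these illustrative parameters, nearly every batch of 30 null studies contains at least one that looks like a clear Uber victory — which is exactly why pre-commitment to publish, not just sound design, matters.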