Self-driving cars are likely the future of transportation. But self-driving technology hasn’t yet evolved to the levels of safety needed for it to become the new norm. This means, of course, no napping in your Tesla. But more importantly, it means engineers, manufacturers, city planners, and lawmakers have a pressing need to determine the ethics of the driverless car, particularly surrounding potential crashes and accidents.
In theory, a self-driving car makes no mistakes. It accelerates and stops and pulls off K-turns without ever bumping into the curb. But there will always be objects that move a little more than curbs — namely, humans. In a study conducted with U.S. participants by a team at the University of Toulouse Capitole in France, people said they’d prefer the kind of safer world that driverless vehicles could provide — and that self-driving cars should protect the driver only after protecting the lives of those outside the car, like other drivers or pedestrians.
“We found that the large majority of people strongly feel that the car should sacrifice its passenger for the greater good,” Jean-François Bonnefon, the survey team’s leader, told Popular Mechanics. That held even when the crash wasn’t the driver’s fault, when participants imagined themselves as the driver, or when they pictured sharing the car with a family member or their own kids.
While this both makes some utilitarian sense and makes people seem altruistic, there’s a catch: Presented with the option to actually purchase a self-driving car programmed for “the greater good,” people said they’d rather buy one programmed to save themselves. In other words, while people are optimistic about self-driving cars in the hypothetical sense, when it comes to actually getting behind the wheel of one, they remain skeptical and, perhaps understandably, selfish.