Your Driverless Car Could Be Programmed to Kill You

Photo: Bend in road (Jim Weeks/Design Pics/Corbis)

Many people think driverless cars could be the best thing to happen to human transportation since the internal combustion engine. If the technology continues to develop at the pace proponents are hoping, a network of self-driving cars could end traffic jams forever, saving untold amounts of fuel and preventing nearly 300,000 deaths due to driver error in the U.S. every decade.

These scenarios leave out, however, some of the very real, very weird moral dilemmas involved in programming autonomous vehicles. For example, in some very specific cases, your driverless car — designed meticulously to keep you safe in just about every scenario imaginable — might actually be programmed to kill you.

To understand why, imagine this scenario: You’re relaxing in your driverless car, watching the scenery go by and maybe getting drowsy, when you come around a sharp bend and see an accident a terrifyingly short distance ahead. There are three people standing outside of a car in the middle of the road, staring at their ruined front bumper and the dead deer in front of it. How will your car react? Should it plow into the stalled car and possibly kill three people, or veer into a nearby guardrail and possibly kill you?

This is a real-life version of the Trolley Problem, a classic ethics thought experiment that usually goes something like this: A runaway trolley is headed at a high speed toward five people who will likely be killed, and you have the power to divert the trolley onto another track — but if you do so, the one person standing there will be killed. There is no way to slow down or warn anyone that the trolley is coming — you need to make your decision and live with the consequences.

The point of the Trolley Problem is to isolate certain moral principles and intuitions — what do we value in moral decision-making? A purely utilitarian view argues that the most moral choice is the one that minimizes the loss of life. Three is more than one, so it would make sense for a computer to make its decision based on numbers alone. Another view is that there’s an important distinction between allowing something to happen and causing it to happen. In this view, it might be preferable to allow the car to stay on its current course, leading to three deaths, rather than to swerve wildly into the guardrail.

Of course, driverless cars will have neither the capacity for reflection nor the time to think through these issues carefully. When confronted with a scenario like the one above, they’ll have to rely on algorithms to make life-and-death (or deaths) decisions in just seconds, which means a great deal of human thinking about Trolley Problem dilemmas will have to come prebaked into their software.
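To see just how much of the ethics ends up compressed into that software, here is a deliberately toy sketch, in Python, of the two views described above. Everything in it is a hypothetical illustration: the Maneuver structure, the casualty estimates, and the action_penalty weight are invented for this example and drawn from no real vehicle's code.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """One possible emergency response and its estimated human cost."""
    name: str                # e.g. "stay_on_course", "swerve_to_guardrail"
    expected_deaths: float   # estimated fatalities if this maneuver is taken
    is_default: bool         # True if this is the car's current course

def utilitarian(options: list[Maneuver]) -> Maneuver:
    """Minimize the expected death toll; the passenger gets no special weight."""
    return min(options, key=lambda m: m.expected_deaths)

def doing_vs_allowing(options: list[Maneuver],
                      action_penalty: float = 4.0) -> Maneuver:
    """Count deaths *caused* by an active swerve more heavily than deaths
    *allowed* by staying on course (the second view above)."""
    def cost(m: Maneuver) -> float:
        return m.expected_deaths * (1.0 if m.is_default else action_penalty)
    return min(options, key=cost)

# The scenario from the article: plow into the stalled car (three likely
# deaths) or swerve into the guardrail (one likely death: the passenger's).
options = [
    Maneuver("stay_on_course", expected_deaths=3.0, is_default=True),
    Maneuver("swerve_to_guardrail", expected_deaths=1.0, is_default=False),
]

print(utilitarian(options).name)        # swerve_to_guardrail (1 death < 3)
print(doing_vs_allowing(options).name)  # stay_on_course (1 x 4.0 = 4.0 > 3.0)
```

Note that swapping one cost function for the other flips the car's decision entirely; the whole philosophical debate collapses into the choice of a single objective, made long before the car ever rounds the bend.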

To better understand how people apply these moral principles to driverless cars, Jean-Francois Bonnefon of the Toulouse School of Economics in France posed a dilemma similar to the one above to several hundred people through Amazon’s Mechanical Turk to see what they would say. The exact course of events and perspective varied from person to person, but the results generally suggested that people believe autonomous vehicles should take a utilitarian course of action and do whatever they can to minimize the death toll … as long as they aren’t in the car.

The resulting paper, published by Bonnefon and his collaborators on arXiv.org and covered by the MIT Technology Review, came to the conclusion that most participants “wished others to cruise in autonomous vehicles more than they wanted to buy utilitarian autonomous vehicles themselves.” In other words, they were fine with programming cars to kill one person to save two — except in cases where they themselves were the unlucky victim.

The authors don’t offer any speculation about what this incoherence means for the future of driverless cars. It certainly seems to track with a common human tendency: We acknowledge that this or that decision is the best course “for the common good,” but when the time comes to actually act on that belief, we choose the selfish route instead (lots of people claim to be concerned about climate change while emitting more than their fair share of carbon). What makes the driverless-car version of the Trolley Problem so intriguing is that individual passengers might just have that choice made for them by some distant programmer they’ve never met — someone who can program in a “rational” response to an emotionally impossible decision.
