
Tesla Autopilot Crash: Why We Should Worry About a Single Death

By Patrick Lin

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Tesla Motors recently revealed that one of its cars, operating in Autopilot mode, had crashed in May and killed its driver. How much responsibility Tesla bears for the death is still under debate, but many experts are already reminding us of the huge number of lives that could be saved by autonomous cars.

Does that mean we shouldn’t worry much about the single death—that we should look away for the sake of the greater good? Is it unethical to focus on negative things that could slow down autonomous-driving technology, which could mean letting thousands of people die in traffic accidents?

Numbers do matter. Car crashes kill the equivalent of a fully loaded 747 jetliner every week in the United States, according to Dr. Mark Rosekind, administrator of the U.S. National Highway Traffic Safety Administration (NHTSA). That's more than 32,000 road deaths per year.
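As a quick sanity check on that comparison (a rough sketch; a 747's capacity varies by configuration, typically around 400 to 600 seats):

$$\frac{32{,}000 \ \text{deaths/year}}{52 \ \text{weeks/year}} \approx 615 \ \text{deaths/week},$$

which is indeed on the order of one fully loaded jumbo jet per week.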

Interestingly, the hundreds of people who die on our roads every week don’t get the same attention as a plane crash. Traffic fatalities are so commonplace that we’ve become numb to them.

Unlike humans, self-driving cars don't get sleepy, distracted, drunk, or road-ragey, and they avoid the many other human failings that cause about 90 percent of crashes today. So robot cars could be a really important technology.

Elon Musk, CEO and co-founder of Tesla Motors, also appealed to the numbers in defending his company. Deflecting a reporter's question about why the company didn't consider the crash material enough to disclose when it happened months earlier, he wrote:

“Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.”
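The arithmetic implicit in Musk's reply seems to be (a rough reconstruction; the assumption that Autopilot would halve the fatality rate is his, not an established safety result):

$$1{,}000{,}000 \ \text{deaths/year} \times 0.5 \approx 500{,}000 \ \text{lives saved/year}.$$

That is, the claim assumes universally available Autopilot would cut the global road-fatality rate roughly in half.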

The message: focus on the good we can do, not on a single crash that is statistically insignificant next to the lives potentially saved. Sure, numbers matter, but ethics is more than math. Here's why a moral accounting ledger is not enough.

Different people die

Looking at the numbers alone tells only part of the story. With robot cars, crash patterns will likely be different: the people injured or killed will probably not be the same ones who would otherwise be victims, and this needs to be considered.

In the fatal Tesla crash, the roof of the car was sheared off as it drove underneath a tractor-trailer crossing the road in front of it, in a T-bone collision. This kind of crash is incredibly rare today. In a non-Autopiloted car, the human would likely have seen the big truck on the road (instead of watching a Harry Potter movie, as reports suggest) and hit the brakes or swerved to avoid it. He probably wouldn't have died.

Google had previously allowed a blind man to sit behind the wheel of its autonomous car, to help showcase the technology’s promise: it could give mobility to many people who today are not licensed to drive. Not just the blind, but other disabled people as well as children could have new freedom without relying on another person to transport them around.

Read the full piece at IEEE Spectrum.