
The Ethics of Saving Lives With Autonomous Cars Is Far Murkier Than You Think

By Bryant Walker Smith

Cross-posted from Wired.

If you don’t listen to Google’s robot car, it will yell at you. I’m not kidding: I learned that on my test-drive at a Stanford conference on vehicle automation a couple weeks ago. The car wanted its human driver to retake the wheel, since this particular model wasn’t designed to merge lanes. If we ignored its command a third time, I wondered, would it pull over and start beating us like an angry dad from the front seat? Better to not find out.

No car is truly autonomous yet, so I didn’t expect Google’s car to drive entirely by itself. But several car companies — Audi, BMW, Ford, GM, Honda, Mercedes-Benz, Nissan, Volkswagen, and others — already have models and prototypes with a surprising degree of driver-assistance automation. We can see “robot” or automated cars (what others have called “autonomous cars”, “driverless cars”, etc.) coming in our rear-view mirror, and they are closer than they appear.

Why would we want cars driving themselves and bossing us around? For one thing, it could save a lot of lives. Traffic accidents kill about 32,000 people every year in America alone. That’s about 88 deaths per day in the U.S., or roughly one victim every 16 minutes — nearly triple the rate of firearm homicides.
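For readers who want to see the arithmetic behind those figures, here is a quick back-of-the-envelope check, a minimal sketch in Python; the only input is the 32,000 annual figure cited above, and the per-day and per-minute rates simply follow from it.

```python
# Back-of-the-envelope check of the fatality figures cited above.
# The only assumed input is the roughly 32,000 annual U.S. traffic
# deaths mentioned in the text.

annual_traffic_deaths = 32_000

deaths_per_day = annual_traffic_deaths / 365                 # ~88 per day
minutes_per_death = (365 * 24 * 60) / annual_traffic_deaths  # ~16 minutes

print(f"about {deaths_per_day:.0f} deaths per day")
print(f"about one victim every {minutes_per_death:.0f} minutes")
```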


If all goes well, computer-driven cars could help prevent these accidents by having much faster reflexes, making consistently sound judgments, never succumbing to road rage or driving drunk, and so on. They simply wouldn’t be as flawed as humans are.

But no technology is perfect, especially something as complex as a computer, so no one thinks that automated cars will end all traffic deaths. Even if every vehicle on the road were instantly replaced by its automated counterpart, there would still be accidents due to things like software bugs, misaligned sensors, and unexpected obstacles. There would also be human-centric problems like improper servicing, misuse, and no-win situations — essentially real-life versions of the fictional Kobayashi Maru test in Star Trek.

Still, there’s little doubt that robot cars could make a huge dent in the car-accident fatality rate, which is obviously a good thing — isn’t it?

Actually, the answer isn’t so simple. It’s surprisingly nuanced and involves some modern tech twists on famous, classical ethical dilemmas in philosophy.

The Puzzling Calculus of Saving Lives

Let’s say that autonomous cars slash overall traffic-fatality rates by half. So instead of 32,000 drivers, passengers, and pedestrians killed every year, robotic vehicles save 16,000 lives per year and prevent many more injuries.

But here’s the thing. Those 16,000 lives are unlikely to all be the same ones lost in an alternate world without robot cars. When we say autonomous cars can slash fatality rates by half, we really mean that they can save a net total of 16,000 lives a year: for example, saving 20,000 people but still being implicated in 4,000 new deaths.
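To make that net-versus-gross distinction concrete, here is a minimal worked example in Python, using the same hypothetical split as above: 20,000 gross lives saved against 4,000 new deaths.

```python
# Illustrative arithmetic for the "net savings" point above, using the
# hypothetical split from the text: 20,000 would-be victims saved, but
# 4,000 new deaths that would not have happened without robot cars.

baseline_deaths = 32_000      # annual toll without robot cars
gross_lives_saved = 20_000    # would-be victims who now survive
new_deaths = 4_000            # people killed only because robot cars exist

net_lives_saved = gross_lives_saved - new_deaths             # 16,000
resulting_annual_deaths = baseline_deaths - net_lives_saved  # 16,000

print(net_lives_saved, resulting_annual_deaths)  # 16000 16000
```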

There’s something troubling about that, as is usually the case when there’s a sacrifice or “trading” of lives.

The identities of many (future) fatality victims would change with the introduction of autonomous cars. Some victims could still die either way, depending on the scenario and how well robotic cars actually outperform human drivers. But changing the circumstances and timing of traffic conditions will likely affect which accidents occur and therefore who is hurt or killed, just as circumstances and timing can affect who is born.

That’s how this puzzle relates to the non-identity problem posed by Oxford philosopher Derek Parfit in 1984. Suppose we face a policy choice of either depleting some natural resource or conserving it. By depleting it, we might raise the quality of life for people who currently exist, but we would decrease the quality of life for future generations; they would no longer have access to the same resource.


Most of us would say that a policy of depletion is unethical because it selfishly harms future people. The weird sticking point is that most of those future individuals would not have been born at all under a policy of conservation, since any different policy would likely change the circumstances and timing around their conception. In other words, they arguably owe their very existence to our reckless depletion policy.

Contrary to popular intuitions, then, no particular person needs to be made worse off for something to be unethical. This is a subtle point, but in our robot-car scenario, the ethics are especially striking: some current non-victims — people who already exist — would become future victims, and this is clearly bad.

But, wait. We should also factor in the many more lives that would be spared. A good consequentialist would look at this bigger picture and argue that as long as there’s a net savings of lives (in our case, 16,000 per year), we have a positive, ethical result. And that judgment is consistent with reactions reported by Stanford Law’s Bryant Walker Smith, who posed a similar dilemma to his audiences and found that they remain largely unconcerned as long as the number of people saved is greater than the number of different people killed.

Still, how much greater does the first number need to be, in order for the tradeoff to be acceptable to society?

If we focused only on end results — as long as there’s a net savings of lives, even just a few — it really wouldn’t matter how many lives are actually traded. Yet in the real world, the details matter.

Say that the best we could do is make robot cars reduce traffic fatalities by 1,000 lives. That’s still pretty good. But if they did so by saving all 32,000 would-be victims while causing 31,000 entirely new victims, we wouldn’t be so quick to accept this trade — even if there’s a net savings of lives.


The consequentialist might then stipulate that the lives saved must be at least twice (or triple, or quadruple) the number of lives lost. But this is an arbitrary line without a guiding principle, making it difficult to defend with reason.
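To see why that line is hard to defend, compare a pure net-savings test with a ratio threshold. The sketch below is my own illustrative formalization of the two tests discussed above, with a 2:1 ratio standing in for the arbitrary line; nothing here is an established decision rule.

```python
# Two candidate consequentialist tests, formalized only for illustration.

def acceptable_net(lives_saved: int, new_deaths: int) -> bool:
    """Pure end-results test: any net savings of lives is enough."""
    return lives_saved - new_deaths > 0

def acceptable_ratio(lives_saved: int, new_deaths: int, ratio: float = 2.0) -> bool:
    """Stricter test: lives saved must be at least `ratio` times new deaths."""
    return lives_saved >= ratio * new_deaths

# The troubling trade from the earlier example: 32,000 saved, 31,000 new victims.
print(acceptable_net(32_000, 31_000))    # True: a net savings of 1,000 lives
print(acceptable_ratio(32_000, 31_000))  # False: far short of the 2:1 line
```

Whatever ratio we plug in, the threshold itself does the moral work, and nothing in the arithmetic tells us which threshold is the right one.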

Anyway, no matter where the line is, the mathematical benefit for society is little consolation for the families of our new victim class. Statistics don’t matter when it’s your child, or parent, or friend, who becomes a new accident victim — someone who otherwise would have had a full life.

However, we can still defend robot cars against the kind of non-identity problem I suggest above. If most of the 32,000 people who will die in accidents this year are doomed arbitrarily and unpredictably, there’s no apparent reason why they in particular should be the victims in the first place. This means there’s no real objection to replacing some or most of them with a new set of equally unlucky victims.

With this new set of victims, however, are we violating their right not to be killed? Not necessarily. If we view the right not to be killed as the right not to be an accident victim, well, no one has that right to begin with. We’re surrounded by both good luck and bad luck: accidents happen. (Even deontological — that is, duty-based or Kantian — ethics could see this shift in the victim class as morally permissible, given that no rights or duties are violated, in addition to the consequentialist reasons based on numbers.)

Not All Car Ethics Are About Accidents

Patrick Lin

Dr. Patrick Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo and lead editor of Robot Ethics (MIT Press, 2012). He is also an associate professor in Cal Poly’s philosophy department; visiting associate professor at Stanford’s School of Engineering; affiliate scholar at Stanford Law School’s Center for Internet and Society; adjunct senior research fellow at Australia’s Centre for Applied Philosophy and Public Ethics (CAPPE); and former ethics fellow at the U.S. Naval Academy.

Ethical dilemmas with robot cars aren’t just theoretical, and many new applied problems could arise: emergencies, abuse, theft, equipment failure, manual overrides, and many more that represent the spectrum of scenarios drivers currently face every day.

One of the most popular examples is the school-bus variant of the classic trolley problem in philosophy: On a narrow road, your robotic car detects an imminent head-on crash with a non-robotic vehicle — a school bus full of kids, or perhaps a carload of teenagers bent on playing “chicken” with you, knowing that your car is programmed to avoid crashes. Your car, naturally, swerves to avoid the crash, sending itself into a ditch or a tree and killing you in the process.

At least with the bus, this is probably the right thing to do: to sacrifice yourself to save 30 or so schoolchildren. The automated car was stuck in a no-win situation and chose the lesser evil; it couldn’t plot a better solution than a human could.
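Before turning to the questions that follow, it may help to see how little might be going on under a hypothetical “algorithmic hood.” The sketch below is purely illustrative; the option names and harm-count heuristic are invented, and this is emphatically not how Google’s car or any real crash-avoidance system is programmed.

```python
# A purely hypothetical "lesser evil" rule for the school-bus scenario.
# The option names and harm estimates are invented for illustration;
# no real vehicle is known to be programmed this way.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_deaths: int  # crude stand-in for expected harm

def choose_lesser_evil(options: list[Option]) -> Option:
    """Pick whichever option minimizes expected deaths."""
    return min(options, key=lambda o: o.expected_deaths)

stay_course = Option("head-on crash with the school bus", 30)
swerve = Option("swerve into the ditch, sacrificing the occupant", 1)

print(choose_lesser_evil([stay_course, swerve]).name)
# -> the swerve: the car trades its own passenger for the busload of kids
```

Even a rule this crude makes the stakes visible: whoever writes the harm estimates is quietly deciding who gets sacrificed, which is exactly why the disclosure questions below matter.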

But consider this: Do we now need a peek under the algorithmic hood before we purchase or ride in a robot car? Should the car’s crash-avoidance feature, and possible exploitations of it, be something explicitly disclosed to owners and their passengers — or even signaled to nearby pedestrians? Shouldn’t informed consent be required to operate or ride in something that may purposely cause our own deaths?

It’s one thing when you, the driver, make a choice to sacrifice yourself. But it’s quite another for a machine to make that decision for you, involuntarily.

Ethical issues could also manifest as legal and policy choices. For instance, in certifying or licensing an autonomous car as safe for public roads, does it only need to pass the same driving test we’d give to a teenager — or should there be a higher standard, and why? If it doesn’t make sense for robot cars to strictly follow traffic laws and vehicle codes — such as sometimes needing to speed during emergencies — how far should manufacturers or policymakers allow these products to break those laws and under what circumstances?

And finally, beyond the car’s operation: How should we think of security and privacy related to data (about you) coming from your robot car, as well as from any in-vehicle apps? If we need very accurate maps to help automated cars navigate, would it be feasible to crowdsource maps — or does that hold too much room for error and abuse?


If You Don’t Know What You Don’t Know…

So far, I’ve focused on ethics and risk in automated cars, but there are potential benefits beyond reducing accidents and death. The technology could give large segments of the population — such as the elderly and handicapped — the freedom of greater mobility; save time and fuel through more efficient driving and fewer traffic jams; help the environment by reducing greenhouse gases and pollution; and more.

But compelling benefits alone don’t make ethical, policy, and legal problems go away (just look at the ongoing, heated discussions around military drones). And so it is with robot cars.

The introduction of any new technology changes the lives of future people. We know it as the “butterfly effect” or chaos theory: Anything we do could start a chain-reaction of other effects that result in actual harm (or benefit) to some persons somewhere on the planet.

Consider a dramatic example with WordPerfect, one of the first word-processing programs. Creating this tool might have displaced a particular worker, a parent who was then able to spend more time with the kids. And then one of those kids became an emergency-room doctor who ends up saving the life of the U.S. President. (Also think about this in the other, more terrible direction.)

This and other examples illustrate the intrinsic, deep complexity in forecasting effects of any given event over time, especially when it comes to “game-changing” technologies such as robotics. In engineering-speak, Bryant Walker Smith calls this part of the “system-boundaries problem.”

For us humans, those effects are impossible to predict with any precision, so it is impractical to worry about them too much. It would be absurdly paralyzing to follow an ethical principle that we ought to refrain from any action that could have bad butterfly effects, since any action or inaction could have unforeseen and unintended negative consequences.

But … we can foresee the general disruptive effects of a new technology, especially the nearer-term ones, and we should therefore mitigate them. The butterfly effect doesn’t release us from the responsibility of anticipating and addressing problems the best we can.

As we rush into our technological future, don’t think of these sorts of issues as roadblocks, but as a sensible yellow light — telling us to look carefully both ways before we cross an ethical intersection.

Author’s Note: Some of this research is supported by California Polytechnic State University, Stanford University’s Center for Automotive Research (CARS) and Center for Internet and Society (CIS). I thank Chris Gerdes, Sven Beiker, Bryant Walker Smith, George Bekey, and Keith Abney for reviewing earlier versions of this piece before it was edited. The statements expressed here are my opinion and do not necessarily reflect the views of the aforementioned persons or organizations.