How Self-Driving Car Policy Will Determine Life, Death and Everything In-Between

Publication Type: Other Writing
Publication Date: March 23, 2018

Self-driving cars are here. More are on their way. Major automakers and Silicon Valley giants are clamoring to develop and release fully autonomous cars to safely and efficiently chauffeur us. Some models won’t even include a steering wheel. Along with many challenges, technical and otherwise, there is one fundamental political question that is too easily brushed aside: Who decides on how transportation algorithms will make decisions about life, death and everything in between?

The recent fatality involving a self-driving Uber vehicle won’t be the last incident in which a human life is lost. Indeed, no matter how many lives self-driving cars save, accidents will still happen.

Imagine you’re in a self-driving car going down a road when, suddenly, the large propane tanks hauled by the truck in front of you fall out and fly in your direction. A split-second decision needs to be made, and you can’t think through the outcomes and tradeoffs for every possible response. Fortunately, the smart system driving your car can run through tons of scenarios at lightning-fast speed. How, then, should it determine moral priority?

Consider the following possibilities:

  1. Your car should stay in its lane and absorb the damage, thereby making it likely that you’ll die.
  2. Your car should save your life by swerving into the left lane and hitting the car there, sending the passengers to their deaths—passengers known, according to their big data profiles, to have several small children.
  3. Your car should save your life by swerving into the right lane and hitting the car there, sending the lone passenger to her death—a passenger known, according to her big data profile, to be a scientist who is coming close to finding a cure for cancer.
  4. Your car should save the lives worth the most, measured according to the amount of money paid into a new form of life insurance. Assume that each person in a vehicle could purchase insurance against these types of rare but inevitable accidents, and that smart cars would then prioritize based on each person’s ability and willingness to pay.
  5. Your car should save your life and embrace a neutrality principle in deciding among the means for doing so, perhaps by flipping a simulated coin and swerving to the right if it comes up heads and to the left if it comes up tails.
  6. Your car shouldn’t prioritize your life and should embrace a neutrality principle by randomly choosing among the three options.
  7. Your car should execute whatever option most closely matches your personal value system and the moral choices you would have made if you were capable of doing so. Assume that when you first purchased your car, you took a self-driving car morality test consisting of a battery of scenarios like this one and that the results “programmed” your vehicle.

There’s no value-free way to determine what the autonomous car should do. The choice presented by options 1–7 shouldn’t be seen as a computational problem that can be “solved” by big data, sophisticated algorithms, machine learning, or any form of artificial intelligence. These tools can help evaluate and execute options, but ultimately, someone—some human beings—must choose and have their values baked into the software.
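To make that point concrete, here is a deliberately toy Python sketch. It is not drawn from the article or from any real vehicle software; the Maneuver and Policy names and the risk numbers are invented for illustration. The optimizer itself is trivial, and all of the contested moral content sits in the scoring rule that some human has to choose and supply.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical types for illustration only; no real vehicle stack works this way.

@dataclass
class Maneuver:
    name: str                      # e.g. "stay_in_lane", "swerve_left", "swerve_right"
    occupant_fatality_risk: float  # predicted risk to the car's own passengers (0..1)
    other_fatality_risk: float     # predicted risk to people in other vehicles (0..1)

# A "policy" is just a scoring rule: lower score = preferred maneuver.
# The crucial point is that this rule encodes human values; the software
# can only execute whichever rule someone chooses to bake in.
Policy = Callable[[Maneuver], float]

def protect_occupant(m: Maneuver) -> float:
    # Always prioritize the car's own passengers (roughly option 5's premise).
    return m.occupant_fatality_risk

def minimize_total_harm(m: Maneuver) -> float:
    # Roughly utilitarian: weigh everyone's predicted risk equally.
    return m.occupant_fatality_risk + m.other_fatality_risk

def coin_flip(m: Maneuver) -> float:
    # Neutrality by lottery: ignore the predictions entirely (option 6's spirit).
    return random.random()

def choose_maneuver(options: List[Maneuver], policy: Policy) -> Maneuver:
    # The "optimizer" is trivial; everything contested lives in `policy`.
    return min(options, key=policy)

if __name__ == "__main__":
    options = [
        Maneuver("stay_in_lane", occupant_fatality_risk=0.9, other_fatality_risk=0.0),
        Maneuver("swerve_left",  occupant_fatality_risk=0.1, other_fatality_risk=0.8),
        Maneuver("swerve_right", occupant_fatality_risk=0.1, other_fatality_risk=0.6),
    ]
    for policy in (protect_occupant, minimize_total_harm, coin_flip):
        print(policy.__name__, "->", choose_maneuver(options, policy).name)
```

Run as written, protect_occupant and minimize_total_harm select different maneuvers from the same predicted risks, and coin_flip picks arbitrarily; nothing in the data or the code settles which rule is the right one to ship.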

Read the full piece at Motherboard