The Ethics of Autonomous Cars

By Bryant Walker Smith

Cross-posted from The Atlantic.

If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop as it dutifully observes traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid hitting the object in front of it but could cause a crash with the human drivers behind it.

Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes?

Our laws are ill-equipped to deal with the rise of these vehicles (sometimes called “automated,” “self-driving,” “driverless,” or “robot” cars—I will use these terms interchangeably). For example, is it enough for a robot car to pass a human driving test? In licensing automated cars as street-legal, some commentators believe that it’d be unfair to hold manufacturers to a higher standard than humans, that is, to make an automated car undergo a much more rigorous test than a new teenage driver.

But there are important differences between humans and machines that could warrant a stricter test. For one thing, we’re reasonably confident that human drivers can exercise judgment in a wide range of dynamic situations that don’t appear in a standard 40-minute driving test; we presume they can act ethically and wisely. Autonomous cars are new technologies and won’t have that track record for quite some time.

Moreover, as we all know, ethics and law often diverge, and good judgment could compel us to act illegally. For example, sometimes drivers might legitimately want to, say, go faster than the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, then they might refuse to drive in auto-mode if a tire is under-inflated or a headlight is broken, even in the daytime when it’s not needed.

For the time being, the legal and regulatory framework for these vehicles is slight. As Stanford law fellow Bryant Walker Smith has argued, automated cars are probably legal in the United States, but only because of a legal principle that “everything is permitted unless prohibited.” That’s to say, an act is allowed unless it’s explicitly banned, because we presume that individuals should have as much liberty as possible. Since, until recently, there were no laws concerning automated cars, it was probably not illegal for companies like Google to test their self-driving cars on public highways.

To illustrate this point by example, Smith turns to another vehicle: a time machine. “Imagine that someone invents a time machine,” he writes. “Does she break the law by using that machine to travel to the past?” Given the legal principle nullum crimen sine lege, or “no crime without law,” she doesn’t directly break the law by the act of time-traveling itself, since no law today governs time-travel.

This is where ethics comes in. When laws cannot guide us, we need to return to our moral compass, or first principles, in thinking about autonomous cars. Does ethics yield the same answer as law? That’s not so clear. If time-traveling alters history in a way that causes some people to be harmed or never to have been born, then ethics might find the act problematic.

This illustrates the potential break between ethics and law. Ideally, ethics, law, and policy would line up, but often they don’t in the real world. (Jaywalking and speeding are illegal, for example, but they don’t always seem to be unethical, e.g., when there’s no traffic or in an emergency. A policy, then, of always ticketing or arresting jaywalkers and speeders would be legal but perhaps too harsh.)

But, because the legal framework for autonomous vehicles does not yet exist, we have the opportunity to build one that is informed by ethics. This will be the challenge in creating laws and policies that govern automated cars: We need to ensure they make moral sense. Programming a robot car to slavishly follow the law, for instance, might be foolish and dangerous. Better to proactively consider ethics now than defensively react after a public backlash in national news.

The Trolley Problem

Philosophers have been thinking about ethics for thousands of years, and we can apply that experience to robot cars. One classical dilemma, proposed by philosophers Philippa Foot and Judith Jarvis Thomson, is called the Trolley Problem: Imagine a runaway trolley (train) is about to run over and kill five people standing on the tracks. Watching the scene from the outside, you stand next to a switch that can shunt the train to a sidetrack, on which only one person stands. Should you throw the switch, killing the one person on the sidetrack (who otherwise would live if you did nothing), in order to save five others in harm’s way?

A simple analysis would look only at the numbers: Of course it’s better that five persons should live than only one person, everything else being equal. But a more thoughtful response would consider other factors too, including whether there’s a moral distinction between killing and letting die: It seems worse to do something that causes someone to die (the one person on the sidetrack) than to allow someone to die (the five persons on the main track) as a result of events you did not initiate or had no responsibility for.

To hammer home the point that numbers alone don’t tell the whole story, consider a common variation of the problem: Imagine that you’re again watching a runaway train about to run over five people. But you could push or drop a very large gentleman onto the tracks, whose body would derail the train in the ensuing collision, thus saving the five people farther down the track. Would you still kill one person to save five?

If your conscience starts to bother you here, it may be that you recognize a moral distinction between intending someone’s death and merely foreseeing it. In the first scenario, you don’t intend for the lone person on the sidetrack to die; in fact, you hope that he escapes in time. But in the second scenario, you do intend for the large gentleman to die; you need him to be struck by the train in order for your plan to work. And intending death seems worse than just foreseeing it.

This dilemma isn’t just a theoretical problem. Driverless trains today operate in many cities worldwide, including London, Paris, Tokyo, San Francisco, Chicago, New York City, and dozens more. As situational awareness improves with more advanced sensors, networking, and other technologies, a robot train might someday need to make such a decision.

Autonomous cars may face similar no-win scenarios too, and we would hope their operating programs would choose the lesser evil. But it would be an unreasonable act of faith to think that programming issues will sort themselves out without a deliberate discussion about ethics, such as which choices are better or worse than others. Is it better to save an adult or child? What about saving two (or three or ten) adults versus one child? We don’t like thinking about these uncomfortable and difficult choices, but programmers may have to do exactly that. Again, ethics by numbers alone seems naïve and incomplete; rights, duties, conflicting values, and other factors often come into play.

If you complain here that robot cars would probably never be in the Trolley scenario—that the odds of having to make such a decision are minuscule and not worth discussing—then you’re missing the point. Programmers still will need to instruct an automated car on how to act for the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen scenarios. So programmers will need to confront this decision, even if we human drivers never have to in the real world. And it matters to the issue of responsibility and ethics whether an act was premeditated (as in the case of programming a robot car) or done reflexively without any deliberation (as may be the case with human drivers in sudden crashes).

Anyway, there are many examples of car accidents every day that involve difficult choices, and robot cars will encounter at least those. For instance, if an animal darts in front of our moving car, we need to decide: whether it would be prudent to brake; if so, how hard to brake; whether to continue straight or swerve to the left or right; and so on. These decisions are influenced by environmental conditions (e.g., a slippery road), obstacles on and off the road (e.g., other cars to the left and trees to the right), the size of an obstacle (e.g., hitting a cow diminishes your survivability, compared to hitting a raccoon), second-order effects (e.g., a crash with the car behind us if we brake too hard), lives at risk in and outside the car (e.g., a baby passenger might mean the robot car should give greater weight to protecting its occupants), and so on.
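To make the programmer’s burden concrete, here is a minimal, purely illustrative Python sketch of how a crash-mitigation routine might score candidate maneuvers in a situation like the one above. Every maneuver, factor, weight, and function name is a hypothetical assumption for the sake of discussion, not a description of any real vehicle’s software.

```python
# Illustrative sketch only: a toy "lesser evil" chooser for the kind of
# split-second decision described above (an animal darts in front of the car).
# All maneuvers, factors, and weights are hypothetical assumptions,
# not a real or recommended control policy.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    p_collision: float        # estimated chance of hitting the obstacle
    obstacle_severity: float  # e.g., raccoon ~0.1, cow ~0.9, person 1.0
    p_rear_end: float         # second-order risk: the car behind us crashes
    occupant_risk: float      # risk to our own passengers


def expected_harm(m: Maneuver, occupant_weight: float = 1.0) -> float:
    """Naive 'ethics by numbers' score: lower is better.

    The occupant_weight knob hints at the harder questions in the text
    (e.g., should a baby on board raise the weight given to occupants?),
    which a purely numerical score cannot settle by itself.
    """
    return (m.p_collision * m.obstacle_severity
            + m.p_rear_end * 0.5
            + m.occupant_risk * occupant_weight)


if __name__ == "__main__":
    options = [
        Maneuver("brake hard", p_collision=0.2, obstacle_severity=0.1,
                 p_rear_end=0.6, occupant_risk=0.1),
        Maneuver("swerve left", p_collision=0.3, obstacle_severity=0.8,
                 p_rear_end=0.1, occupant_risk=0.4),
        Maneuver("continue straight", p_collision=0.9, obstacle_severity=0.1,
                 p_rear_end=0.0, occupant_risk=0.1),
    ]
    best = min(options, key=expected_harm)
    print(f"Chosen maneuver: {best.name} (score {expected_harm(best):.2f})")
```

Even this toy example shows how many value judgments hide inside a handful of numeric weights, which is exactly the kind of deliberate ethical choice that cannot be left to sort itself out.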

Human drivers may be forgiven for making an instinctive but nonetheless bad split-second decision, such as swerving into oncoming traffic rather than the other way, into a field. But programmers and designers of automated cars don’t have that luxury, since they do have the time to get it right and therefore bear more responsibility for bad outcomes.

The Road Ahead

Programming is only one of many areas to reflect upon as society begins to widely adopt autonomous driving technology. Here are a few others—and surely there are many, many more:

1. The car itself

Does it matter to ethics if a car is publicly owned, for instance, a city bus or fire truck? The owner of a robot car may reasonably expect that its property “owes allegiance” to the owner and should value his or her life more than unknown pedestrians and drivers. But a publicly owned automated vehicle might not have that obligation, and this can change moral calculations.

Just as the virtues and duties of a police officer are different from those of a professor or secretary, the duties of automated cars may also vary. Even among public vehicles, the assigned roles and responsibilities are different between, say, a police car and a shuttle bus. Some robo-cars may be obligated to sacrifice themselves and their occupants in certain conditions, while others are not.

2. Insurance

How should we think about risks arising from robot cars? The insurance industry is the last line of defense for common sense about risk. It’s where you put your money where your mouth is. And as school districts that want to arm their employees have discovered, just because something is legal doesn’t mean you can do it, if insurance companies aren’t comfortable with the risk. This is to say that, even if we can sort out law and ethics with automated cars, insurers still need to make confident judgments about risk, and this will be very difficult.

Do robot cars present an existential threat to the insurance industry? Some believe that ultra-safe cars that can avoid most or all accidents will mean that many insurance companies will go belly-up, since there would be no or very little risk to insure against. But things could go the other way too: We could see mega-accidents as cars are networked together and vulnerable to wireless hacking—something like the stock market’s “flash crash” in 2010. What can the insurance industry do to protect itself while not getting in the way of the technology, which holds immense benefits?

3. Abuse and misuse

How susceptible would robot cars be to hacking? So far, just about every computing device we’ve created has been hacked. If authorities and owners (e.g., a rental-car company) are able to remotely take control of a car, this offers an easy path for cyber-carjackers. If under attack, whether a hijacking or ordinary break-in, what should the car do: speed away, alert the police, remain at the crime scene to preserve evidence…or maybe defend itself? With a future suite of in-car apps, as well as sensors and persistent GPS tracking, can we safeguard personal information, or do we resign ourselves to a world with disappearing privacy rights?

What kinds of abuse might we see with autonomous cars? If the cars drive too conservatively, they may become a road hazard or trigger road-rage in human drivers with less patience. If the crash-avoidance system of a robot car is generally known, then other drivers may be tempted to “game” it, e.g., by cutting in front of it, knowing that the automated car will slow down or swerve to avoid an accident. If those cars can safely drive us home in a fully-auto mode, that may encourage a culture of more alcohol consumption, since we won’t need to worry so much about drunk-driving.

Predicting the Future

We don’t really know what our robot-car future will look like, but we can already see that much work needs to be done. Part of the problem is our lack of imagination. Brookings Institution director Peter W. Singer said, “We are still at the ‘horseless carriage’ stage of this technology, describing these technologies as what they are not, rather than wrestling with what they truly are.” As it applies here, robots aren’t merely replacing human drivers, just as human drivers in the first automobiles weren’t simply replacing horses: The impact of automating transportation will change society in radical ways, and ethics can help guide it.

In “robot ethics,” most of the attention so far has been focused on military drones. But cars are maybe the most iconic technology in America—forever changing cultural, economic, and political landscapes. They’ve made new forms of work possible and accelerated the pace of business, but they also waste our time in traffic. They rush countless patients to hospitals and deliver basic supplies to rural areas, but also continue to kill more than 30,000 people a year in the U.S. alone. They bring families closer together, but also farther away at the same time. They’re the reason we have suburbs, shopping malls, and fast-food restaurants, but also new environmental and social problems.

Automated cars, likewise, promise great benefits and unintended effects that are difficult to predict, and the technology is coming either way. Change is inescapable and not necessarily a bad thing in itself. But major disruptions and new harms should be anticipated and avoided where possible. That is the role of ethics in public policy: it can pave the way for a better future, or that future could become a wreck if we don’t keep looking ahead.