
The Reasonable Self-Driving Car

By Bryant Walker Smith

Cross-posted from Volokh Conspiracy.

A common debate in many circles–including the comments on my posts here–is whether legal burdens, technical limitations, or consumer preferences present the greatest immediate obstacle to fully automated motor vehicles. (Fully automated vehicles are capable of driving themselves anywhere a human can. In contrast, the low-speed shuttles from my post on Monday are route-restricted, and the research vehicles that regularly appear in the news are both route-restricted and carefully monitored by safety drivers.)

An entirely correct response is that the technologies necessary for full automation are simply not ready. Engineering challenges will be overcome eventually, but at this point they are varied and very real. If they were not, we would already see fully self-driving cars operating somewhere in our diverse world–in Shanghai or Singapore, Abu Dhabi or Auckland.

The deeper issue, which manifests itself in law, engineering, and economics, is our (imperfect and inconsistent) societal view of what is reasonably safe, because it is this view that determines when a technology is ready in a meaningful sense. Responsible engineers will not approve, responsible companies will not market, responsible regulators will not tolerate, and responsible consumers will not operate vehicles they believe could pose an unreasonable risk to safety.

How safe is safe enough? One answer, that self-driving cars must perform better than human drivers on average, accepts some deaths and injuries that a human could have avoided. Another answer, that self-driving cars must perform at least as well as a perfect human driver for every individual driving maneuver, rejects technologies that, while not perfect, could nonetheless reduce total deaths and injuries. A third answer, that self-driving cars must perform at least as well as corresponding human-vehicle systems, could lock humans into monitoring their machines–a task at which even highly trained airline pilots can occasionally fail due to understimulation or overstimulation.

The secondary effects of automated vehicle crashes challenge these answers. A particularly tragic, sensational, or unusual crash could ultimately claim more lives by tarnishing technologies that might nonetheless represent a safety gain over human driving. (In other words, headlines about a single self-driving car crash could trump 30,000 obituaries.) Conversely, early incidents could ultimately save lives by providing the real-world data needed to accelerate the design of even safer systems.

Demonstrating reasonable safety may be more difficult than defining it. A rough non-Bayesian statistical calculation suggests that a fully automated vehicle concept would need to accumulate over 700,000 miles of unassisted driving in representative conditions to establish with 99 percent confidence that it crashes less frequently than conventional cars. An international standard for functional safety similarly establishes failure rates so low that showing that a system meets the least restrictive level “would involve testing the system continuously for more than ten years, under operational conditions, with no unsafe failures and no modifications to” it. (An effort to adapt the automotive-specific standard (ISO 26262) to automated driving is ongoing–but will be for a very long time.)
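
As a concrete illustration of that rough calculation, the sketch below assumes crashes follow a Poisson process and uses a purely illustrative baseline crash rate (the specific rate is my assumption, not a figure from this post). The idea: if a vehicle logs some number of miles with zero crashes, we can conclude it crashes less often than the baseline once the probability of such a crash-free run under the baseline rate falls below one percent.

    import math

    def miles_needed(baseline_crashes_per_mile, confidence):
        # With a Poisson model, the chance of zero crashes in n miles at the
        # baseline rate r is exp(-r * n). We need that chance to drop below
        # (1 - confidence), so n > ln(1 / (1 - confidence)) / r.
        return math.log(1.0 / (1.0 - confidence)) / baseline_crashes_per_mile

    # Illustrative baseline (assumed, not from the post): one crash per 160,000 miles.
    baseline_rate = 1.0 / 160_000
    print(round(miles_needed(baseline_rate, 0.99)))  # about 737,000 crash-free miles

The required mileage grows with the logarithm of the confidence level and inversely with the baseline crash rate, so demonstrating superiority on rarer events, such as fatal crashes, would demand vastly more driving.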

These big numbers mean that an engineering safety case may need to rely on evidence beyond just empirical testing. From a legal perspective, I am particularly interested in, and would especially welcome your thoughts on, the role of process-based safety arguments, which focus on how a product is designed rather than how it performs. These arguments are central to functional safety standards, including ISO 26262. They also implicate the tension in law, as in engineering, between processes (or inputs) and products (or outputs).

In tort law, negligence is about process (how did the manufacturer perform?) while strict liability is about product (how did the manufactured item perform?), and a reasonably safe process can occasionally produce an unreasonably dangerous product. As sporadic failures of automated vehicles inevitably occur, negligence claims, punitive damage awards, and determinations of foreseeability may all depend in part on the reasonableness of a defendant manufacturer’s prior process-based safety arguments. (Much more has already been written on liability, and I will add another perspective tomorrow.)

In administrative law, regulation of outputs (how fast must a car be able to stop?) is generally preferable to regulation of inputs (what kind of brakes must a car have?). Because of the difficulty in prospectively defining and demonstrating automotive safety, however, initial regulatory efforts may need to emphasize inputs over outputs. For example, a state or federal agency might require automated vehicle developers to provide persuasive evidence of their engineering competence, safety record, and financial solvency before publicly testing or marketing their vehicles. Because agencies will need broad discretion to experiment, to adjust, and to impose ad hoc requirements that may border on the arbitrary, judicial deference to inchoate agency practice is especially important.

Such an input-based regulatory approach might impede start-ups and other small actors from independently developing or marketing automated vehicle technologies. Indeed, Nevada already imposes barriers to testing that could have this effect. This may be undesirable, particularly if rapid innovation is the paramount goal. However, it would mean that established companies with significant financial and reputational interests would likely be the first ones to vouch for the reasonable safety of these systems. Given the enormous stakes, indirectly forcing this kind of deliberation may be prudent.