Cross-posted from the Robotics and the Law blog.
“One of the most significant obstacles to the proliferation of autonomous cars is the fact that they are illegal on most public roads.” That’s what Wikipedia tells us—at least until I change it. I can’t change a New York Times op-ed that declared “driverless cars” to be “illegal in all 50 states” or the many articles that have repeated this claim.
To the extent that such pronouncements of illegality reflect assumption rather than analysis, they are inconsistent with our nation’s entrepreneurial narrative: An invention is not illegal simply because it is new, and a novel activity is not prohibited just because it has not been affirmatively permitted. So to determine the actual legal status of the automated vehicles that may someday roam our roads, I reviewed relevant law at the international, national, and state levels. While my 100-page study raises a number of questions about both the ultimate design of these vehicles and the duties of their human operators, it finds no law that categorically prohibits automated driving. In short, even without specific legislation, automated vehicles are probably legal in the United States.
A striking corollary of this finding is that Nevada, Florida, and California (the three states that have already enacted pertinent legislation) did not really “legalize” automated vehicles, as has been popularly reported. Instead, those recent laws primarily regulate these technologies. In Nevada, for example, both an automated vehicle and its operator must be specially registered with the state, but across the border in Arizona, where a similar bill failed to pass, no such requirements exist.
The laws are significant for other reasons as well: They endorse the potential of, catalyze important discussions about, and establish basic safety requirements for these long-term technologies. To a more limited extent, these laws also reduce legal uncertainty: “Definitely legal” sounds very different from “probably legal.”
Curiously, however, one of the stronger challenges to the legality of automated vehicles is actually a law that no state can repeal. After World War II, thousands of Americans began shipping their cars across the Atlantic to motor through Europe, where they encountered a variety of drivers—of horses, pack animals, and livestock in addition to cars and bikes—who were following a variety of road customs. National governments, including that of the United States, sought to harmonize these customs through the 1949 Geneva Convention on Road Traffic. One of the rules of the road specified in this international agreement is that every kind of road vehicle “shall have a driver” who is “at all times … able to control” it. Because the treaty is federal law—domestically comparable to a statute enacted by Congress—no state government or federal administrative agency can lawfully contravene it.
Fortunately for Nevada and its early-adopting brethren, this treaty provision is not necessarily inconsistent with automated driving. Human operators are able to control today’s research vehicles by starting them, stopping them, and intervening at any point along the way. Even a vehicle without a human behind the wheel would probably satisfy this requirement if it performs at least as safely, reasonably, and lawfully as a human driver would. A vehicle that operates within these bounds would essentially be under control, regardless of whether its legal driver is a human, a computer, or a company. The upshot: Emerging technologies are much more likely to shape the future interpretation of this treaty language than the language is to shape the future development of these technologies.
Nonetheless, significant legal uncertainty does remain, even in Nevada, Florida, and California. Take two examples. First, the human who operates or otherwise uses an automated vehicle may need to participate more actively in that operation than the particular technology itself demands. New York, rather uniquely, requires a driver to keep one hand on the steering wheel—though it does not require her to actually steer the vehicle to which the wheel is attached. The District of Columbia, among others, prohibits “distracted driving” and mandates “full time and attention” during operation—requirements that the Autonomous Vehicle Act recently passed by its council will not change. And in state tort law, even driver behavior that is not expressly illegal might nonetheless be civilly negligent.
Second, current rules of the road reflect the fact that human drivers necessarily make real-time decisions that are generally judged, if at all, only afterward. Automated driving still requires human decisions, but they are the anticipatory decisions of human designers rather than or in addition to the reactive decisions of human drivers. At the state and local levels, how and to whom will laws that prescribe “reasonable,” “prudent,” “practicable,” and “safe” driving apply? And at the federal level, what will constitute the kind of “unreasonable risk” that triggers a vehicle recall? Do standards like these merely require an automated vehicle to perform as well as a reasonable human driver—or will governments, courts, and consumers expect something more? In particular, when crashes inevitably occur, how will legal responsibility be divided among manufacturers, designers, data providers, owners, operators, passengers, and other potential parties?
These are just some of the important questions that will emerge as particular automation technologies are further developed, tested, and ultimately commercialized. Governments may not be able to answer them yet (and perhaps they shouldn’t yet try), but this does not mean that automated vehicles are illegal. To the contrary, on this threshold question of legality, my analysis suggests that while the road may be curvy, the lights are not all red.
Bryant Walker Smith researches and teaches on the legal aspects of increasing vehicle automation. Stanford will host the Transportation Research Board’s Vehicle Automation Workshop on July 16-19, 2013.