Backseat Driving

Nevada. Florida. Hawaii. Arizona. Oklahoma. As legislators move to expressly regulate automated driving, I’ll be tracking state-by-state developments on this wiki and discussing themes on this blog.

I’ll begin that discussion with a basic legal question: Who drives an automated vehicle? The answer might be no one—a truly driverless car in the legal and technical senses. It might be a natural person—the individual owner (if there is one), the occupant (ditto), or the individual who initiates the automated operation (ditto again). It might be a company—the corporate owner, the service provider, or the manufacturer. Depending on the context, it might even be some combination of these possibilities.

How might this answer matter? Let’s consider three examples: California’s motor vehicle code, Nevada’s “autonomous vehicle” statute and proposed regulation, and the 1949 Convention on Road Traffic (Geneva).

California’s motor vehicle code, which does not explicitly address automated driving, makes frequent reference to drivers and driving. For example, a “person who drives a vehicle upon a highway in willful or wanton disregard for the safety of persons or property is guilty of reckless driving.” But what if there is no person, natural or corporate, who “drives or is in actual physical control” of an automated vehicle? Without specific legislative or administrative action, this question might not be answered definitively until a police officer opts to stop an automated vehicle, whoever receives the ticket chooses to challenge it, and a statewide court ultimately sees fit to issue a decision with general application.

Nevada’s groundbreaking statute defines “autonomous vehicle” as “a motor vehicle that uses artificial intelligence, sensors and global positioning system coordinates to drive itself without the active intervention of a human operator” and additionally mandates a license endorsement that “must, in its restrictions or lack thereof, recognize the fact that a person is not required to actively drive an autonomous vehicle.” (Indeed, in one narrow context, “a person shall be deemed not to be operating a motor vehicle” that is being lawfully “driven autonomously.”) Who, then, is the “person who drives or is in actual physical control of” such a vehicle?

The Nevada Department of Motor Vehicles has adjusted its approach toward this question. An early draft of its implementing regulation provided that “[i]f a driver is not required, the autonomous technology shall be granted all of the rights and shall be subject to all of the duties applicable to the driver of a vehicle, except those provisions which by their nature can have no application.” In contrast, the latest draft (the product of an impressive effort for which the DMV should be commended) now states that “[f]or the purpose of enforcing the traffic laws and other laws applicable to drivers and motor vehicles operated in this State, the operator of an autonomous vehicle that is operated in autonomous mode shall be deemed the driver of the autonomous vehicle regardless of whether the person is physically present in the autonomous vehicle while it is engaged.” The operator, who must hold a special driver’s license endorsement, is “the person” who “causes the autonomous vehicle to engage, regardless of whether the person is physically present in the vehicle while it is engaged.” This language raises at least three questions. What “causes” the engagement? Is the person causing it necessarily a natural person? And can the DMV lawfully deem that person to be the driver of a vehicle that by statute “drive[s] itself”?

This ordinarily simple search for the ordinarily manifest driver is global. In 1950, on the advice of the Senate, the United States ratified the 1949 Convention on Road Traffic (Geneva), an international agreement “promoting the development and safety of international road traffic by establishing certain uniform rules.” Article 8 of that treaty provides that “[e]very vehicle or combination of vehicles proceeding as a unit shall have a driver” and that “[d]rivers shall at all times be able to control their vehicles or guide their animals.” Article 1 defines “driver” as “any person who drives a vehicle … or guides draught, pack or saddle animals or herds or flocks on a road, or who is in actual physical control of the same ….” (The United States is not a party to the 1968 Convention on Road Traffic (Vienna), which contains similar provisions.)

The effects of this treaty are not confined to Geneva, The Hague, New York, or the District of Columbia. In the United States, “[a]ll treaties … shall be the supreme law of the land; and the judges in every state shall be bound thereby….” Nonetheless, the courts have devoted considerable effort to defining the subset of treaties that constitutes “[a]ll treaties”: Under the case law, a treaty does not necessarily have “automatic domestic effect as federal law upon ratification,” and even a treaty that is “self-executing” in this sense does not necessarily “grant … individually enforceable rights.” At least two current Supreme Court justices appear to believe that the 1949 Convention is self-executing (at least prior to the opinion from which they dissented), and several lower courts have considered this treaty in cases involving individuals who drove in the United States with an international driving permit or a foreign driver’s license.

There are a number of ways in which automated driving could implicate the 1949 Convention’s “driver” provisions. First, a court applying existing statutory law might interpret that law in a way that is consistent with its understanding of the treaty, perhaps holding, for example, that even an automated vehicle has a driver under a state’s motor vehicle code. Second, if a state legislature enacts a statute expressly permitting automated driving, a plaintiff might assert that the supremacy clause preempts that statute (my thanks to William Baude for the citation). Third, if a federal or state administrative agency promulgates a regulation with the same effect, a plaintiff might challenge that regulation as contrary to law under the federal Administrative Procedure Act or the state equivalent. (Notably, in each of these actions the plaintiff would need to satisfy the relevant standing requirement by demonstrating some particularized harm.) Fourth, a plaintiff injured by an automated vehicle might seek to use the treaty to establish certain elements of its cause of action. Finally, because they either believe they should or believe they must, state legislators—and both state and federal regulators—may simply exercise their respective authority in a way that complies with their understanding of the treaty.

But what should that understanding be? Can a world in which some vehicles are self-driving or driverless comply with the 1949 Convention’s apparent requirement that every vehicle have a driver who “shall at all times be able to control” it? And can that world be reconciled with state laws that either assume or assign such a driver? Stay tuned for future posts—and for an ever more interesting future.


Photo by Steve Jurvetson.

Comments

Hi Bryant

Just wanted to stop by and say I love your blog and the new Wiki! We had the same idea but hadn't gotten around to it; then again, I think I would trust you for higher accuracy with the Wiki anyway. We've added a link to your site and to the Wiki on Driverless Car HQ.

Hey Bryant, my first time on your blog. I think this is a fantastic post. Looking forward to reading more.

Leaving aside the potential legal interpretations of how to deal with driverless cars with our current laws...

I think when the endgame of driverless cars arrives (that is, most people not owning a car but requisitioning one as you would a taxi), the simplest solution would be to make the programmer legally liable for any accidents, so that the risk of an accident is incorporated.

But even with our current legal system, I would argue that the code for the AI in a driverless car is a feature in the same way the brakes are. If a car fails to brake due to a manufacturing imperfection, then surely the automaker is at fault even if a human is driving the car? Presumably there is a strong case to argue that the AI coding is a manufacturing issue. On the flip side, if the braking system isn't maintained at the mechanic's, then the fault lies with the car owner/driver, so perhaps if the owner of a driverless car didn't get the system patched, defragmented, or virus-checked, they would then be liable?
