Uber is testing its self-proclaimed “self-driving” vehicles on California roads without complying with the testing requirements of California’s automated driving law. California’s Department of Motor Vehicles says that Uber is breaking that law; Uber says it’s not. The DMV is correct.
Uber’s argument is textually plausible but contextually untenable. It exploits a drafting problem that I highlighted first when Nevada was developing its automated driving regulations and again when California enacted a statute modeled on those regulations. California’s statute defines “autonomous technology” as “technology that has the capability to drive a vehicle without the active physical control or monitoring by a human operator”—and yet the very purpose of testing such a technology is to develop or demonstrate that capability. Indeed, the testing provisions of the statute even require a human to actively monitor a vehicle that, by definition, doesn’t need active human monitoring.
This linguistic loophole notwithstanding, the testing requirements of California’s law were clearly intended to apply to aspirationally automated driving—that is, to vehicles being developed toward full automation. If they did not, then those requirements would not reach any vehicle being tested when the law was enacted, any vehicle being tested today, or indeed any “test” vehicle whatsoever.
Uber understandably analogizes its activities to the deployment of Tesla’s Autopilot. In some ways, the two are similar: In both cases, a human driver is (supposed to be) closely supervising the performance of the driving automation system and intervening when appropriate, and in both cases the developer is collecting data to further develop its system with a view toward a higher level of automation.
In other ways, however, Uber and Tesla diverge. Uber calls its vehicles self-driving; Tesla does not. And Uber’s test vehicles are on public roads for the express purpose of developing and demonstrating its technologies, while Tesla’s production vehicles are on those roads principally because their occupants want to go somewhere.