This article is adapted from a forthcoming peer-reviewed essay in Volume 61 of the Communications of the ACM.
The pedestrian who was struck and killed by a self-driving Uber car in Arizona this week was not the first person to die in a collision involving a vehicle that was driving itself. In 2016, a driver was killed in a crash while his Tesla was in Autopilot mode. But there is a big difference between these two stories. The Tesla driver had made a decision to engage Autopilot and arguably assumed the risk of an accident. The pedestrian who recently died in Arizona assumed no such risk. This distinction has prompted considerable commentary about self-driving vehicles and liability, with some speculating that Uber’s accident could delay wider deployment of the technology.
In its centuries of grappling with new technologies, however, the common law has seen tougher problems than these and managed to fashion roughly sensible remedies. Uber will likely settle its case with the pedestrian’s family. If not, a court will sort it out.
Read the full piece at Slate.