Stanford CIS

The First ‘Robotic’ Car Crash Happened in 1947


"This case is a curio of weird American history, and it’s a frustrating look at gender roles at the time. But it’s not entirely frivolous to look back at the case now that a Google driverless car has been deemed at fault in a crash. The Frye case was recently resurfaced by Ryan Calo, a law professor at the University of Washington, in his paper Robots in American Law, which argues that judges have had a seriously flawed view of what a “robot” is.

The Frye case is instructive, he says, because judges regularly look at robots as entities that strictly follow orders. Historically, judges have defined robots as something that is programmed and is not capable of making its own decisions. In Frye, that definition was extended to a girl.

“The idea is that a robot is what a person or entity becomes when completely controlled by another. Such a person or entity is not capable of fault or knowledge, leaving the person behind the machine—the programmer—at fault,” Calo wrote. “While a robot, no one sees, hears, or does evil.”

Does that mean that if a driver can take control from an automated system, he or she must do so in order not to be at fault in the event of a crash? That’s the way courts have leaned before, Calo says, and it’s likely the reason why Google is focusing on making driverless cars without steering wheels altogether.

“As long as a human is in the loop somewhere, does that mean they can bear responsibility?” Calo said. “That came up in a lot of the cases I looked at—if there’s the possibility of a person intervening, they tend to be looked at as the one at fault.”"

Published in: Press, Robot rights, Robotics