The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
In June, CIS reported on Nevada's enactment of AB 511, which directs the state's Department of Motor Vehicles (DMV) to "adopt regulations authorizing the operation of autonomous vehicles on highways within the State of Nevada." Pursuant to this mandate, the DMV has now issued draft regulations. After holding a series of public workshops and hearings (not yet posted) and submitting its proposed regulations for legislative review in accordance with the Nevada Administrative Procedure Act, the DMV could issue final regulations as early as March 2012. [Update: The final regulation is available in the Nevada Register. I have offline copies of the drafts.]
According to the Nevada Legislature's website, AB 511 "revis[ing] certain provisions governing transportation" passed the Assembly (36-6) and the Senate (20-1) and was signed into law by the governor this week. Although I am aware of no law that prohibits driverless cars, this appears to be the first law officially to sanction the technology. Specifically, the law provides that the Nevada Department of Motor Vehicles "shall adopt regulations authorizing the operation of autonomous vehicles on highways within the State of Nevada." The law charges the Nevada DMV with setting safety and performance standards and requires it to designate areas where driverless cars may be tested. (Note that this could take some serious time: Japan, for instance, has been promising standards for personal robots for years and has yet to release them.)
Is it lawful for a car to drive itself? In the absence of any law to the contrary, it may well be. A new bill working its way through the Nevada state legislature would remove any doubt in that state. A.B. 511 directs the Nevada Department of Transportation to authorize autonomous vehicle testing in certain geographic areas of Nevada. Should vehicles meet Nevada DOT standards, they would be permitted to "operate on a highway." The bill defines not only "autonomous vehicle" but "artificial intelligence" as well. AI is "the use of computers and related equipment to enable a machine to duplicate or mimic the behavior of human beings." An autonomous vehicle uses "artificial intelligence, sensors, and [GPS] coordinates to drive itself." To be clear: autonomous vehicles are not yet the law of the land in Nevada. The bill must pass through two committees and receive a hearing before it can be voted on and become law. Some preliminary thoughts on the bill in its present form follow.
The term “hacking” has come to signify breaking into a computer system. A number of local, national, and international laws seek to hold hackers accountable for breaking into computer systems to steal information or disrupt their operation. Other laws and standards incentivize private firms to use best practices in securing computers against attack.
In February, a South Korean woman was sleeping on the floor when her robot vacuum ate her hair, forcing her to call for emergency help. It may not be the dystopian future that Stephen Hawking warned us about – where intelligent devices “spell the end of the human race” – but it does highlight one of the unexpected dangers of inviting robots into our home.
Bryant Walker Smith, a law professor at the University of South Carolina and Stanford who studies autonomous vehicles, said he thinks ground robots could eventually be big, even if they are more of a public relations gimmick at first.
“It’s the fear of robots,” said Bryant Walker Smith, a fellow at the Center for Internet and Society at Stanford Law School who studies driverless cars.
“There’s something scarier about a machine malfunctioning and taking away control from somebody.”
Robots can show emotions without actually having emotions, though. "Robots are now designed to exhibit emotion," says Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. "When we say robots have emotion, we don't mean they feel happy or sad or have mental states. This is shorthand for, they seem to exhibit behavior that we humans interpret as such and such."
The way the robot was used in the Dallas case is likely legally no different from sending an officer in to shoot a hostile suspect, according to University of Washington law professor Ryan Calo.
Still, the Dallas Police Department's decision to use the unit in this way could have a major effect on how the public views the increasing integration of robots into daily life, he said.
"Given how many police [departments] have robots and given how versatile they are and the various uses to which they've been put, including in hostage situations, I think we'll find that there have been other examples of this," says Ryan Calo, a professor at the University of Washington School of Law who studies robotics and cyberlaw. "As far as I know, this is the first time that they've used a robot to intentionally kill someone."
Robot legal theorist Ryan Calo writes, "I thought you might enjoy my new paper, canvassing decades of American case law involving robots. Courts have had to decide, for instance, whether a robot represents something 'animate,' whether the robot band at Chuck E. Cheese 'performs,' and whether a salvage crew 'possesses' a shipwreck by visiting it with a robot sub."
Automation extends far beyond the battlefield, sometimes with profound implications. Peter Asaro, a philosopher who studies artificial intelligence, says questions need to be asked about how and when humans transfer control to machines in their everyday lives.
“The law tends to assume that people intend what they do, or at least are able to foresee the consequences of what they do,” said Ryan Calo, an assistant professor at the University of Washington School of Law, in an exclusive interview with R&D Magazine.
The prospect of systems making decisions that no person foresaw could result in personal injury with no responsible perpetrator. “And that’s the concern,” he said.
A new article calls for the law to catch up with robotic technology. Ryan Calo, assistant professor in the University of Washington School of Law, says it’s time laws reflect the rise of robotics and artificial intelligence.
Ryan Calo, assistant professor of law at the UW School of Law, is clear about the significance of We Robot, an annual conference on robotics law and policy.
“There will be some event in the world involving robots, and everyone will look around and say, ‘Oh gosh! Who’s thinking about this?’” said Calo, program chair for the event. “The hope is that they’ll come to our papers, and watch the video of our conference.”
Learn about the Center for Internet and Society. Come meet CIS and hear about our exciting work and ways to get involved. Learn about the Fair Use Project, Consumer Privacy Project, and more. Lunch will be provided. RSVP for this free event today.
Listen to the full radio show (in German) at Deutschlandradio.
On the other hand, even algorithms can make mistakes. After all, they are written by humans. And legal texts in particular can be difficult to translate into a formalized language. They are, says Woodrow Hartzog, simply not made to be automated. And they are not made to be enforced one hundred percent.
October 27, 2011
Stanford Center for Internet and Society
John O. McGinnis
Lawrence B. Solum