Robotics As Social And Legal Policy

Ryan Calo, CIS Director of Privacy and Robotics, is mentioned in the following article by Kenneth Anderson, writing for the legal blog The Volokh Conspiracy, on recent achievements in robot technology and the differences between "open" and "closed" robots:

I write a lot about drones and warfare, robotics on the battlefield and the legal questions it raises, but my interest in robotics is much broader than that. It extends to the effort to build and utilize (see, I resisted the battlefield word “deploy”) robots in society, and particularly in day-to-day interactions. For example, consider the use of robot technologies in the nursing profession and eldercare. Such uses might include technologies more or less available now, such as machines that can take over the pickup and distribution of medicines in a nursing center using existing warehouse technologies. But over time, we want technologies that don’t yet exist, such as robots that can assist the elderly in their homes in multiple ways (walking assistance, carrying the groceries, etc.) — as distinguished from single-purpose, Roomba-like appliances.

The New York Times has a good piece today in the Science section, by John Markoff, on the current level of achievement in robotics for tasks that you or I would find simple to master, such as folding laundry. For a robot, it is really hard. The difficulties are daunting in all three areas typically associated with robots — mechanics of movement, computational processing, and sensors. Markoff is particularly good at describing the “brittleness” of robot behavior.

...

... Stanford Law School scholar Ryan Calo (who is one of the few studying the intersection of law and robotics) has a new paper out on SSRN, Open Robotics, asking much more fundamental questions. It is summarized in a fine online essay here. Calo describes the difference between “closed” and “open” robotics:

“Closed” robots resemble any contemporary appliance: They are designed to perform a set task. They run proprietary software and are no more amenable to casual tinkering than a dishwasher. The popular Roomba robotic vacuum cleaner and the first AIBO mechanical pet are closed in this sense. “Open” robots are just the opposite. By definition, they invite contribution. An open robot has no predetermined function, runs third-party or even open-source software, and can be physically altered and extended without compromising performance.

“Open” systems are more valuable because they invite the development of new and differentiated uses built on pre-existing platforms. Open robotics follows the same path as the development of personal computing: a machine able to run software created by third parties generates vastly more value — value which, in no small part, lies in what someone might program the machine to do. So far so good, says Calo. But then, enter the lawyers.
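To make the distinction concrete, here is a minimal sketch, in Python, of the two designs Calo describes. The class and behavior names are hypothetical illustrations of the platform idea, not drawn from Calo's paper or from any actual robot product:

```python
# Hypothetical sketch of "closed" vs. "open" robot software designs.
# All names here are illustrative; nothing is drawn from a real platform.

class ClosedRobot:
    """A closed robot ships with one fixed task and accepts no outside code."""

    def run(self):
        self._vacuum_floor()  # the only behavior the manufacturer provides

    def _vacuum_floor(self):
        print("Vacuuming the floor...")


class OpenRobot:
    """An open robot is a platform: it has no predetermined function and
    executes whatever behaviors third parties choose to install."""

    def __init__(self):
        self._behaviors = {}

    def install(self, name, behavior):
        # The platform accepts third-party (even open-source) code as-is;
        # the manufacturer never sees what will eventually run here.
        self._behaviors[name] = behavior

    def run(self, name, *args):
        return self._behaviors[name](*args)


if __name__ == "__main__":
    appliance = ClosedRobot()
    appliance.run()  # can only ever vacuum

    platform = OpenRobot()
    platform.install("fold_laundry", lambda: print("Folding laundry..."))
    platform.install("fetch", lambda item: print(f"Fetching {item}..."))
    platform.run("fold_laundry")    # a task the manufacturer never wrote
    platform.run("fetch", "groceries")
```

The liability puzzle Calo identifies lives in that install step: the platform maker profits from the openness but cannot know, let alone vet, what code will be handed to it.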

The trouble with open platforms is that they open the manufacturer up to a universe of potential lawsuits. If a robot is built to do anything, it can do something bad. If it can run any software, it can run buggy or malicious software. The next killer app could, well, kill someone.

Liability in a closed world is fairly straightforward. A Roomba is supposed to do one thing and do it safely. Should the Roomba cause an injury in the course of vacuuming the floor, then iRobot generally will be held liable as it built the hardware and wrote or licensed the software. If someone hacks the Roomba and uses it to reenact the video game Frogger on the streets of Austin (this really happened), then iRobot can argue product misuse.

But what about in an open world? Open robots have no intended use. The hardware, the operating system, and the individual software — any of which could be responsible for an accident — might each have a different author. Open source software could have many authors. But plaintiffs will always sue the deep pockets. And courts could well place the burden on the defendants to sort it out.

An obvious question is why this wasn’t an issue in personal computing and its open model. The difference, Calo notes, is largely that when things went wrong in the computer world, the losses — especially in the early years, before computers started running things like grids, plants, and other real-world systems — were intangible and digital. The point about robots, however, is that they act directly in the gross physical world, and so the nature of injuries is very different from the very beginning:

The damage caused by home computers is intangible. The only casualties are bits. Courts were able to invoke doctrines such as economic loss, which provides that, in the absence of physical injury, a contracting party may recover no more than the value of the contract. Where damage from software is physical, however, when the software can touch you, lawsuits can and do gain traction. Examples include plane crashes based on navigation errors, the delivery of excessive levels of radiation in medical tests, and “sudden acceleration”—a charge respecting which it took a team of NASA scientists ten months to clear Toyota software of fault.

Open robots combine, arguably for the first time, the versatility, complexity, and collaborative ecosystem of a PC with the potential for physical damage or injury. The same norms and legal expedients do not necessarily apply. In robotics no less than in the context of computers or the Internet, the possibility that providers of a platform will be sued for what users do with their products may lead many to reconsider investing in the technology. At a minimum, robotics companies will have an incentive to pursue the slow, manageable route of closing their technology.

To recap: Robots may well be the next big thing in technology. The best way to foster innovation and to grow the consumer robotics industry is through an open model. But open robots also open robotic platform manufacturers to the potential for crippling liability for what users do with those platforms. Where do we go from here?