I’m in the middle of writing a paper on liability for harm caused by (or with) personal robots. The paper grows out of a panel that Dan Siciliano and I organized around the present, near future, and far future of robotics and the law. I’ve recently received some media coverage that, while welcome and accurate, risks oversimplifying my position. Specifically, a few people have understood my remarks to suggest that manufacturers should enjoy total immunity for the personal robots they build and sell, merely because doing otherwise would chill innovation.
This post develops my position in a little more detail. In my view, robotics manufacturers should be immune from certain theories of civil liability, particularly those premised on the range of a robot’s functionality. I don’t believe the law should bar accountability for roboticists in all instances. Nor am I by any means certain that my suggestion represents exactly the right way to handle liability. But I am convinced that we should talk about the issue. The alternative is to risk missing out on a massive global advance in technology capable of substantially bettering our world.
The market for personal or service robotics, meaning robots for everyday use by consumers, is likely to expand substantially over the next few years. UN statistics, for instance, project that millions of personal or service robots will enter the home this year and next, and one research group predicts that personal robotics will be a multi-billion-dollar industry by 2015. The applications for these robots are potentially limitless, spanning such areas as physical and psychological therapy, education, eldercare, exploration, hostage negotiation, rescue, entertainment, and home security. As with personal computers, many applications will arise directly from consumer ingenuity. (This may be why the South Korean government has set a goal of placing a robot in every home by 2015.)
Inevitably, some of these robots will occasion litigation. In speaking with roboticists (Andrew Ng and Oussama Khatib at Stanford University, for instance, and several engineers at the robotics start-up Willow Garage), I have found that building safe robots is a top priority. Engineers at Stanford and elsewhere are working on a “human-centric” approach to personal robotics, building special sensors, motors, and materials that decrease the risk of active or passive injury. Nevertheless, I believe a completely foolproof personal robot is unlikely to be possible. Some person or property will inevitably be harmed, whether through imperfect design or through the negligence or malice of a person exerting control over the robot.
Liability for harm caused by a personal robot will be very difficult to sort out. Robot control runs the gamut from relatively straightforward teleoperation to near-complete automation. Robots are made up of frames, sensors, motors, and other hardware, of course, but their behavior is often governed by complex software. Both the hardware and the software can be modified; open-source robotic software in particular could have hundreds of authors. It is far from clear how standard tort concepts such as foreseeability, product misuse, design defect, intentionality, and proximate cause will play out in this environment. (Sam Lehman-Wilzig made some of these points as far back as 1981 in his wonderful Frankenstein Unbound.)
In addition to being singularly difficult, such litigation will be high profile. Robots have received a tremendous amount of media coverage, especially in recent years. Moreover, as Ken Anderson points out, early adopters of personal robotics are likely to be populations, such as the elderly and disabled, who need in-home assistance. Other early applications have involved helping autistic children. These populations would understandably make sympathetic plaintiffs in the event of litigation.
Robots already flourish in certain contexts: space exploration, the battlefield, the factory floor. But note that these are contexts with built-in immunity. Military contractors are largely immune from liability for accidents involving the weapons they build, as Ken also pointed out in our panel. Workplace injuries tend to be compensated through state workers’ compensation schemes, which generally preclude tort suits. No such blanket protections operate in the home or on the public street.
Nor can we handle robot liability the way we handle liability for consumer software, another area plagued by complexity and where we place a premium on “generativity” (to use Jonathan Zittrain’s formulation). With software, as Dan recently reminded me in conversation, we allow developers to disclaim responsibility, including any warranty of fitness for a particular purpose. It’s one thing not to be able to sue Microsoft because Word or Windows crashed and lost a document; it’s quite another not to be able to sue a robotics manufacturer because its product crashed into an object or a person.
So what do we do? How do we preserve incentives not just to build robots, but to build them to be as versatile as possible? My view, and the working thesis of my paper, is that we should take a page from the thin book of Internet law. Website services have flourished despite the many ways they can be and are misused. This is due in no small part to the immunity websites enjoy for most of the actions of their users under Section 230 of the Communications Decency Act. Notably, Section 230 also immunizes web services for actions they take in policing the conduct and content of their users. The system is imperfect—it's hard to tell who the publisher is for some content, for instance, and the availability of anonymity blocks redress in some instances—but it’s still no coincidence that Google, Facebook, MySpace, LinkedIn, and other web giants are all U.S. companies.
I intend to argue that we can and should similarly immunize robotics manufacturers for many of the uses to which their products are put. Robotics manufacturers cannot be named as defendants every time a dinner guest trips over a Roomba or a teenager reprograms the service robot to menace his sister. That a robot is capable of a particular activity should not, by itself, open its manufacturer up to liability. Nor should robots be treated like guns, whose manufacturers can face liability when their products are too easily modified.
We should take another page from Section 230, which also protects good-faith efforts to police content, and consider immunity for harm attributable to safety features. Cars are relatively well understood today, with standardized components and interiors, so it may make sense to hold contemporary manufacturers accountable for “aggressive” airbags that cause needless injury. But cars developed as consumer products a hundred years ago, before robust product liability law and industry standards took shape; the industry had decades to mature before facing modern litigation. Personal robots, entering the market under today’s liability regime, may not survive similar treatment.
It is these sorts of upfront immunities that I believe legislation should address. Tort law clearly has a role to play, just as insurance, industry standards, and other regulatory forces do. As Wendy Wagner argues, litigation often generates product safety information more efficiently than administrative bodies do, and the threat of a lawsuit (among many, many other things) helps keep safety in the minds of designers. But we cannot afford to have existing and prospective robotics manufacturers hauled into court for all the ways consumers will use their products.

Finally, in thinking through liability for autonomous robots, we should keep firmly in mind the harm humans cause when we undertake the activity in question. Each year, about 40,000 people in the United States die in crashes involving human-operated vehicles, and car crashes are the leading killer of teenagers. We should check our gut reaction to the inevitable first autonomous vehicle death against this backdrop.