By Ryan Calo on November 25, 2009 at 3:41 pm
I’m in the middle of writing a paper on liability for harm caused by (or with) personal robots. The paper grows out of a panel that Dan Siciliano and I organized around the present, near future, and far future of robotics and the law. I’ve recently received some media coverage that, while welcome and accurate, presents a danger of oversimplifying my position. Specifically, a few people have understood my remarks to suggest that manufacturers should enjoy total immunity for the personal robots they build and sell, merely because doing otherwise would chill innovation.
This post develops my position in a little more detail. On my view, robotics manufacturers should be immune from certain theories of civil liability—particularly those premised on the range of a robot’s functionality. I don’t believe that the law should bar accountability for roboticists in all instances. Nor am I by any means certain that my suggestion represents the exact right way to handle liability. But I am convinced that we should talk about the issue. The alternative is to risk missing out on a massive global advance in technology capable of substantially bettering our world.
The market for personal or service robotics, in the sense of robots for everyday use by consumers, is likely to expand substantially over the next few years. UN statistics, for instance, project that millions of personal or service robots will enter the home this year and next, and one research group predicts personal robotics will be a multi-billion dollar industry by 2015. The applications for these robots are potentially infinite, occurring in such areas as physical or psychological therapy, education, eldercare, exploration, hostage negotiation, rescue, entertainment, and home security. As with personal computers, many applications will arise directly from consumer ingenuity. (This may be why the South Korean government has set a goal of placing a robot in every home by 2015.)
Inevitably, some of these robots will occasion litigation. In speaking with roboticists (Andrew Ng and Oussama Khatib at Stanford University, for instance, and several engineers at the robotics start-up Willow Garage), I found that building safe robots is an utmost priority. Engineers at Stanford and elsewhere are working on a “human-centric” approach to personal robotics, building special sensors, motors, and materials that decrease the risk of active or passive injury. Nevertheless, I believe that a completely foolproof personal robot is unlikely to be possible. Some person or property will inevitably be harmed, due either to imperfect design or to the negligence or malice of a person exerting control over a robot.
Liability for harm caused by a personal robot is going to be very difficult to sort out. Robot control runs the gamut from relatively straightforward teleoperation to near complete automation. Robots are made up of frames, sensors, motors, and other hardware, of course, but their behavior is often governed by complex software. Both the hardware and the software can be modified—open-source robotic software in particular could have hundreds of authors. It is far from clear how standard tort concepts such as foreseeability, product misuse, design defect, intentionality, and proximate cause will play out in this context. (Sam Lehman-Wilzig made some of these points as far back as 1981 in his wonderful Frankenstein Unbound.)
In addition to being singularly difficult, such litigation will be high profile. Robots receive a tremendous amount of media coverage, especially in recent years. Moreover, as Ken Anderson points out, early adopters of robotics are likely to be populations, such as the elderly or disabled, who need in-home assistance. Other early applications have involved helping autistic children. These populations would make understandably sympathetic plaintiffs in the event of litigation.
Robots already flourish in certain contexts—space exploration, the battlefield, the factory. But note that these are contexts that have built-in immunity. Military contractors are largely immune for accidents involving the weapons they build, as Ken also pointed out in our panel. Workplace injuries tend to be compensated through state workers’ compensation schemes. No such blanket protections operate in the home or public street.
Nor can we handle robot liability the way we handle liability for consumer software, another area plagued by complexity and where we place a premium on “generativity” (to use Jonathan Zittrain’s formulation). With software, as Dan recently reminded me in conversation, we allow developers to disclaim responsibility (including the warranty of fitness for a particular purpose). It’s one thing not to be able to sue Microsoft because Word or Windows crashed and lost a document; it’s quite another not to be able to sue a robotics manufacturer because its product crashed into an object or a person.
So what do we do? How do we preserve incentives not just to build robots, but to build them to be as versatile as possible? My view, and the working thesis of my paper, is that we should take a page from the thin book of Internet law. Website services have flourished despite the many ways they can be and are misused. This is due in no small part to the immunity websites enjoy for most of the actions of their users under Section 230 of the Communications Decency Act. Notably, Section 230 also immunizes web services for actions they take in policing the conduct and content of their users. The system is imperfect—it's hard to tell who the publisher is for some content, for instance, and the availability of anonymity blocks redress in some instances—but it’s still no coincidence that Google, Facebook, MySpace, LinkedIn, and other web giants are all U.S. companies.
I intend to argue that we can and should similarly immunize robotics manufacturers for many of the uses to which their products are put. Robotics manufacturers cannot be named as defendants every time a dinner guest trips over a Roomba or a teenager reprograms the service robot to menace his sister. That a robot can do a particular activity should not open its manufacturer up to liability. Robots should not be treated like guns that, when too easily modified, can subject the manufacturer to liability.
We should take another page from Section 230 and consider immunity for harm attributable to safety features. Cars are relatively well understood, with standardized components and interiors. Thus, it may make sense to hold today’s manufacturers accountable for “aggressive” airbags that cause needless injury. But cars developed as consumer products a hundred years ago, prior to robust product liability laws and industry standards. Personal robots may not survive similar treatment.
It is these sorts of upfront immunities that I believe legislation should address. Clearly tort law has a role to play, just as insurance, industry standards, and other regulatory forces do. As Wendy Wagner argues, litigation often generates product safety information more efficiently than administrative bodies, and the threat of a lawsuit (among many, many other things) helps keep safety in the minds of designers. But we cannot afford to have existing and prospective robotics manufacturers hauled into court for all the ways consumers will use their products. Finally, in thinking through liability for autonomous robots, we should keep firmly in mind the harm caused by humans when we undertake the activity in question. Each year, about 40,000 people die from human-operated vehicles. Car crashes are the leading killer of teenagers. We should check our gut reaction to the inevitable first autonomous vehicle death against this backdrop.
Susumu Hirano June 14, 2010 at 1:10 am
I read your interesting blog, which I came to know about through the May 2010 issue of the ABA Journal sent to Japan two weeks ago. I have been involved in several study groups and projects on robotics policy in Japan, mainly led by the Ministry of Economy, Trade, and Industry (METI). Currently, I'm the chief of the Insurance Building Working Group as well as an associate secretary of the Robotic Business Promotion Council < http://www.roboness.jp/ > (in Japanese).
I share your concern that litigation risk is a big obstacle for the robotics industry; Japan's manufacturers are especially concerned about reputational risk. The availability of insurance for residual risks is also important. Thus, we have been discussing these issues in Japan for some years.
Meanwhile, as I study cyber-law too, I understand your proposal to establish a statute like DMCA Section 512 (immunity for intermediaries). In Japan, however, it seems difficult to enact such a statute even if the industry would be happy with the proposal. There are some core pro-plaintiff groups in Japan who advocate making the current Product Liability Act much stricter, and those groups would definitely oppose a proposal for immunity. Though I believe that the development and spread of service robots would serve the welfare of the people, especially consumers, a proposal for immunity would unfortunately sound "anti-consumer" in Japan. Therefore, in order to realize an immunity statute, many efforts must be made to persuade those hard-core pro-plaintiff groups.
Finally, I think that the relevant people in America and in Japan studying similar service robots (which we have recently come to call "life-support robots") could cooperate with each other to find a good solution for developing life-support robots for the welfare of general civil users. Thank you for your attention to this message.
Frank Mondana December 5, 2009 at 11:05 am
I can see the outcome of robot litigation now.
"My Roomba 10.5 ran into my toe. This broke the nail causing severe pain for almost 20 minutes. As a result, technology now scares me. So much so that I had to quit my job as a secretary because the idea of touching a computer brings back the painful memories. I now am on permanent disability. The only way I can get through my horrifying life is with $250,000,000. The ADA needs to be amended so that any and all technology stays away from victims of high tech terror such as myself."
"Oh wait.. I am OK with my TV, iPod and cell phone. This took 2,000 hours of therapy at $500/hr so I need that money as well."
"I'm not in it for the money. I just want to make sure this doesn't happen to another innocent user."
Yep, can't wait for this.....
Kay Bradley December 1, 2009 at 11:14 am
Why not use the existing structure for automobiles (and buses, trucks, airplanes, etc.) as a basis? The market determines if someone wants a car with an airbag and other special (safety) features. People name their cars (although I don't know if anyone has wanted to marry one before). And when you think about it, what is the real difference between automobiles and robots? Toyota now makes cars that keep you in the lane and even brake for you. Large airliners practically fly themselves.
I don't think the manufacturers should be totally held harmless - I simply don't trust the manufacturers and the review process (i.e., the FDA and medicine). It seems that if a robot has the capacity to kill or seriously harm humans, there should be a license and/or insurance required. An example would be licensed nurses in hospitals operating "machinery" that holds the patient's life in the balance. Usually the hospital holds a blanket insurance policy - but if the machine fails due to a defect, you can bet the insurance attorneys will be talking to the manufacturer's attorneys.
Testing equipment/robots/programming can have some limited immunity, such as in drug testing trials.
In my opinion, there are two important areas to consider. First, what distinguishes a robot from mere programming or machinery - and is it really relevant to draw that line? And second, if a computer/machine/robot is hacked, who is responsible then?
Good luck with your project,
Ryan Calo December 5, 2009 at 1:40 pm
Thanks for your helpful comments. Around 1996, Curtis Karnow proposed a Turing Registry for expert systems that operates somewhat like insurance. Roboethicist AJung Moon also noted insurance as a potential avenue to defray legal risk (an idea she says she picked up at a conference in Japan) in a recent email to me, and I've seen the suggestion in other places.
It's an attractive notion, though insurance regimes operate in part to domesticate the litigation risk associated with a particular activity, and policies are often set on the basis of that risk. Prohibitively high insurance costs in the face of legal uncertainty can be nearly as chilling as the uncertainty itself. Also, we need to identify who assumes the risk to determine who takes out the policy (the manufacturer? the user? the robot itself?).
It sounds like the general tenor of your comments though is that we can handle personal robots under existing models. I agree, but am proposing a model other than what we have for cars and airplanes. I'm saying that the versatility of personal robot applications trumps their arguably superficial resemblance to, for instance, vehicles. Nor do I think we want the kind of top-down, heavily regulated specifications you get with air travel and the FAA.