Stanford CIS

Will Robots Be 'Generative'?

By Ryan Calo

I don’t know that generativity is a theory, strictly speaking.  It’s more of a quality.  (Specifically, five qualities.)  The attendant theory, as I read it, is that technology exhibits these particular, highly desirable qualities as a function of specific incentives.  These incentives are themselves susceptible to various forces—including, it turns out, consumer demand and citizen fear.

The law is in a position to influence this dynamic.  Thus, for instance, Comcast might have a business incentive to slow down peer-to-peer traffic and refrain only because of FCC policy.  Or, as Barbara van Schewick demonstrates, inter alia, in Internet Architecture and Innovation, a potential investor may lack the incentive to fund a start-up if there is a risk that its product will be blocked.

Similarly, online platforms like Facebook or Yahoo! might not facilitate communication to the same degree in the absence of Section 230 immunity for fear that they will be held responsible for the thousand flowers they let bloom.  I agree with Eric Goldman’s recent essay in this regard: it is no coincidence that the big Internet players generally hail from these United States.

As van Schewick notes in her post, Zittrain is concerned primarily with yet another incentive, one perhaps less amenable to legal intervention.  After all, the incentive to tether and lock down is shaped by a set of activities that are already illegal.

One issue that does not come up in The Future of the Internet (correct me if I’m wrong, Professor Zittrain) or in Internet Architecture and Innovation (correct me if I’m wrong, Professor van Schewick) is that of legal liability for that volatile thing you actually run on these generative platforms: software.  That’s likely because this problem looks like it’s “solved.”  A number of legal trends—aggressive interpretation of warranties, steady invocation of the economic loss doctrine, treatment of data loss as “intangible”—mean you cannot recover from Microsoft (or Dell or Intel) because Word ate your term paper.  Talk about a blow to generativity if you could.

Why does this matter?  It matters because courts and policymakers are unlikely to continue to treat software this way when it shows up in machines that can physically act on the world.  In an essay entitled "A Robot in Every Home," Bill Gates opines that we are at the point today with robotics that we were with personal computers in the mid-1970s.  If he’s right, and I think he is, will robots have the qualities that together constitute generativity?

So far, so good.  The most sophisticated robots under development operate as open platforms, running a variety of software.  The leading robot operating systems—for instance, ROS, under development by Willow Garage—are open source.  Hardware, too, is generative.  There are numerous robot kits and tinkerer discussion fora.  So many people hacked the Roomba vacuum cleaner that iRobot ended up creating a separate product, the Create, just so people could experiment with it.

It’s possible that robots will be locked down because of security fears.  Researchers at the University of Washington have already documented attacks on commercially available robots that connect to the Internet.  (For more on this topic, see my forthcoming book chapter from MIT Press.)  But before we even get there, we need to confront the issue of liability.

In a forthcoming law review article, I argue that although liability will be just as difficult to parse, litigants and courts will not treat robots the same way we treat software and computers.  When software can result directly in physical harm to your person or property, when Word can touch you, it will prove very hard to argue that losses are limited to the price of the software.  Indeed, multiple lawsuits involving physical consequences from software (e.g., glitches in radiation, navigation, and acceleration) have gotten serious traction in recent years.

I believe that, as in the case of firearms, general aviation, and the web, we should consider immunizing robotics manufacturers up front for what users run on their platforms.  In the absence of such immunity, I worry that roboticists will limit the functionality of robots and investors will look outside of the United States to Japan, South Korea, Italy, and other countries with a higher bar to litigation (and a head start). Robots will be in every home, but they won't be generative.

Image source: Jonathan McIntosh

This is a cross-post from an online symposium on Concurring Opinions.