As robots leave the factory and battlefield and enter our homes, hospitals, and skies, it is not clear who will come to regulate them. But we can begin to spot some interesting patterns. Students of this transformative technology should keep an eye on both the claims and disavowals of authority over robots by state and federal agencies. Each holds potential dangers for our civil liberties and for the future of robotics.
A regulatory agency's scope is a function of a few factors, usually delineated by the agency's animating statute. Sometimes agencies are limited to a particular industry; the Federal Communications Commission regulates common carriers like AT&T and not financial institutions like Citibank. Most agencies also have a particular purpose or mission; the Federal Trade Commission only regulates business practices, and then only those that are anti-competitive, deceptive, or unfair.
Of course, the lines are often blurry. When the FCC promulgated rules in 2006 around websites linked from television shows directed at children, some stakeholders questioned whether the FCC had jurisdiction over Internet content. When the FTC examined the impact of the Google-DoubleClick merger on competition in 2007, some held the view that the impact on privacy was largely irrelevant to the inquiry. Others, among them Peter Swire, argued otherwise.
It should come as no surprise, therefore, that the mainstreaming of robotics will pose challenges for regulators. Even if it is clear that a given agency should have something to say about a robot, it is not clear exactly what the scope of its authority will be. In recent remarks at Santa Clara Law School, officials from the National Highway Traffic Safety Administration stressed their role as arbiters of safety. The agency does not formally intervene in the absence of data suggesting that new regulations will prevent, reduce, or mitigate accidents.
The agency does at least appear to recognize that the willingness of drivers to cede control to a computer---to give up the thrill of driving---will influence whether some drivers adopt potentially safer autonomous features. (We saw a similar dynamic in the backlash against a short-lived requirement that cars not start unless the driver buckled her seat belt.) But several questions remain. For instance: what if law enforcement claims a right to force an autonomous car to slow down or pull over? I recently read an article by a lieutenant in the Tracy Police Department that appeared to assume police would have this power.
The Federal Aviation Administration worries about (and, for now, restricts) the domestic use of drones on the basis of safety. But the agency does not appear to have anything to say about the potential of this technology to infringe upon citizen and consumer privacy---a problem if most assume that drones are the exclusive province of the FAA.
The opposite is also true: an agency could, for whatever reason, claim authority over a given robot in a way that slows its development. It is notoriously difficult to shepherd new technology through the Food and Drug Administration's approval process, particularly when the agency has no analog from which to draw lessons. The robotic seal Paro (pictured), designed to comfort patients with psychological problems such as dementia, had to wait patiently for official word from the FDA about whether it constitutes a medical device. Several robotic products manufactured in the United States are not sold here precisely because of concern over the costs associated with regulatory approval.
Eventually we could imagine an agency devoted specifically to robotics. (It may not be named as such, just as there is no major agency specifically dedicated to computers or cars.) Until then, it makes sense to watch agency claims of authority over robots of all kinds, as well as agency disavowals of such authority. Each holds its dangers.
Image credit: Clio1789