
Should The Law Punish Robot Tasks Differently?

By Ryan Calo

I attended a fascinating thesis defense today by Stanford PhD candidate Victoria Groom on the subject of human-robot interaction.  HRI experiments apparently tend to focus on human encounters with robots; few studies test the psychology behind robot operation.  Groom’s work explores how we feel about the tasks we perform through robots.  One of the more interesting questions she and her colleagues ask is: to what extent do we feel like it’s really us performing the task?  The question is important where, as in the military, people work through robots to carry out morally charged tasks.  And the answer might have repercussions for how we think about evaluation and punishment.

It turns out that how we feel about tasks we perform through robots varies depending on certain conditions.  Groom and her colleagues show how people react differently if the robot is more or less anthropomorphic, if people teleoperate the robot or verbally instruct it, and if the robot is real or simulated digitally.  In one study, she found that actual, autonomous robots promote self-extension, i.e., the feeling that the technology is a part of you.  In another (PDF), she found that anthropomorphic robots tend to inhibit self-extension.  We tend to attribute the actions of a humanoid robot to the robot, not to ourselves, at least relative to a non-humanoid robot (like a robotic car).

This area of study has the potential to inform multiple aspects of the law.  One is punishment.  Depending on our reasons for punishing—rehabilitation, deterrence, or retribution—the introduction of robots may have distorting effects.  Consider the case of a soldier who misreads a situation and commands a ground or air robot to fire on civilians.  If we punish the soldier to rehabilitate him, he or others may feel a sense of injustice because no blame appears to flow to the robot.  If we punish out of retribution or to deter, the impact of punishment may be lessened because of a first- or third-party perception that the robot is partly to blame.

You may be thinking: are you saying we should make a show of punishing the robot?  Maybe.  But more likely we should punish the soldier differently.  Groom suggests (PDF) that we can head off this issue to some extent by designing robots for their particular anticipated use.  Where we want to remind the human operator of her culpability—for instance, on the battlefield—we may want to maximize self-extension and presence.  Where we want to avoid trauma—for instance, in search and rescue operations—we may want to do the opposite.  But no design solution is perfect.  Add punishment to the growing list of legal issues that robots do or will implicate.
