
Would a BDSM Sex Robot Violate Asimov's First Law of Robotics?


"Ryan Calo

Associate Professor, Law, University of Washington, and faculty co-director of the University of Washington Tech Policy Lab

As far as I know, Asimov was not trained as a lawyer. If he had been, he might have drawn a distinction between a prima facie or “on its face” violation of the Laws of Robotics and a violation that ultimately requires a remedy. What you describe may be an example of the former but not the latter.

Consider the question of whether a boxer has committed battery against her opponent during a match. The law defines battery as unwanted physical touching that is harmful or offensive. Neither boxer wishes to be hit, and of course it hurts to be. But no court would find battery because, while hitting another boxer may appear to be battery on its face, the defendant would quickly note that by stepping into the ring her opponent expected and consented to being hit.

Similarly, while the robot may be inflicting pain in contravention of the admonition that a robot “not injure a human being,” ultimately the robot is doing so at the behest of a person and for the purpose of pleasure. (I will withhold judgment on whether robot BDSM implicates the Zeroth Law.)

Meanwhile, note what happens if we don’t make this distinction: a robot would not be able to drag an injured person out of a burning building, because being moved is painful. That can’t be right, or at least it shouldn’t be how we program our machines.

"Patrick Lin

Professor of Philosophy and Director of the Ethics + Emerging Sciences Group at California Polytechnic State University

Technically, yes, anything that a robot does (or fails to do) that harms a human would violate Asimov’s first law of robotics. But this is true only if we understand “harm” in a naive, overly simplistic way. Sometimes the more important meaning is net harm. For example, it might be painful when a child has to have a cavity drilled out or take some awful medicine, but we understand that this is for the child’s own good: in the long term, the benefits far outweigh the initial cost. We’re actually trying to save her from a greater harm.

This is easy enough for us to understand, but some obvious concepts are notoriously hard to reduce to lines of code. For one thing, determining harm may require that we consider a huge range of future effects in order to tally up the net result. This is an infamous problem for consequentialism, the moral theory that treats ethics as a math problem.
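To make that difficulty concrete, here is a deliberately naive Python sketch (not from Lin; every number, name, and parameter in it is invented for illustration) of what “treating ethics as a math problem” might look like: score an action by summing the expected, time-discounted utilities of its predicted effects.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    description: str
    utility: float      # negative values model harm, positive values model benefit
    probability: float  # chance the effect actually occurs
    delay: int          # how many time steps in the future it lands

def net_score(effects, horizon=10, discount=0.9):
    """Naive consequentialist tally: sum the expected, discounted utilities.
    The horizon cutoff, discount rate, and utility numbers are all
    unargued modeling choices, which is exactly the problem."""
    total = 0.0
    for e in effects:
        if e.delay > horizon:
            continue  # effects beyond the horizon are simply ignored
        total += e.probability * e.utility * discount ** e.delay
    return total

# Lin's dentist example: drilling hurts now but prevents worse pain later.
drill_cavity = [
    Effect("pain during drilling", utility=-5.0, probability=1.0, delay=0),
    Effect("toothache and infection avoided", utility=40.0, probability=0.9, delay=5),
]

print(net_score(drill_cavity))  # positive: a net benefit despite the immediate harm
```

Even this toy exposes the squishiness: which effects get counted, how far ahead the robot looks, and who assigns the utilities are all judgment calls smuggled in as parameters, not answers the math provides.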

Any harm inflicted by a BDSM robot is presumably welcome, because it’s outweighed by a greater pleasure experienced by the person. A BDSM robot would seem to inflict harm on you, but if you had requested it, then it wasn’t wrongfully done. If the robot were to take things too far, despite your protests and without good reason, then it would be wrongfully harming you, because it would be violating your autonomy or wishes. In fact, it would be doubly wrong, since it would violate Asimov’s second law, too.
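That ordering of considerations, with consent redeeming prima facie harm and any protest overriding everything, can be sketched as a toy decision gate. This is purely illustrative; the function and its flags are hypothetical, not anything Asimov or Lin specifies.

```python
def may_proceed(causes_pain: bool, consented: bool, safeword_invoked: bool) -> bool:
    """Toy gate reflecting this reading of the laws: consented pain is not
    wrongful harm, but a protest overrides immediately. Names are hypothetical."""
    if safeword_invoked:
        # Continuing past a protest violates autonomy (and the second
        # law's command to obey humans).
        return False
    if causes_pain and not consented:
        return False  # prima facie harm with no consent to redeem it
    return True

assert may_proceed(causes_pain=True, consented=True, safeword_invoked=False)
assert not may_proceed(causes_pain=True, consented=True, safeword_invoked=True)
```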

But assuming the robot is doing what you want, the pain it inflicts is harm only in a technical and temporary sense; it is not harm in the commonsense way Asimov’s law should be understood. A computer, of course, can’t read our minds to figure out what we really mean; it can only follow its programming. But ethics is often too squishy to lay out as a precise decision-making procedure, especially given the countless variables and variations around a particular action or intent. And that’s exactly what gives rise to the drama in Asimov’s stories.