Focus on Human Decisions, Not Technological Ethics of Police Robots

Author(s): 
Publication Type: Other Writing
Publication Date: July 14, 2016

The Alberta wildfires. The Deepwater Horizon oil spill. The meltdown of Fukushima.

Robots have been used to address each of these emergencies, and many more. So it should come as no surprise that police in Orlando and Dallas would use robots to respond to the recent attacks in those cities. In Orlando, a bomb robot was sent into the Pulse nightclub after a SWAT team knocked down a wall to get in. It sent images back to law enforcement officials, who, based on a photo of a battery part lying next to a body, believed that the gunman had strapped explosives to some of the victims. (It was later revealed the part had fallen out of an exit sign or smoke detector.)

In Dallas, to my great shock, the police intentionally used their robot to kill someone. But rather than focus on the technology, we should focus on whether it was legitimate to kill Micah Johnson instead of incapacitating him, because robots could do either.

We shouldn't pretend robots are the same as other tools. We wouldn't be having a national conversation about the use of a knife to kill a suspect. But ultimately the ethical issues around robots have to do with the new capabilities they afford. Police and others must think through how to preserve existing rights and values in light of these new affordances.

When the next crisis hits, we are going to want robots on hand. But we are also going to need policies about what is an acceptable use and what is not.