Police Robots Need to Be Regulated to Avoid Potential Risks

Publication Type: 
Other Writing
Publication Date: 
July 14, 2016

The robot used by the Dallas police department to kill Micah Johnson — the sniper who fired into a peaceful protest, killing five police officers and injuring others — was originally designed to defuse explosives. The police attached a pound of the explosive C4 to the robot, creating a makeshift weapon out of a design that was not intended to inflict harm on people. The robot was also remote-controlled, not autonomous. I include these details to clarify: This wasn’t quite a “Robocop” scenario. But it was the first time U.S. police have used a robot armed with lethal force to kill a suspect, and this deliberate move raises important questions for the future.

If armed robots can take police officers out of harm’s way, in what situations should we permit the police to use them? (The same question goes for police use of armed drones, which have been legalized in North Dakota as long as they are "less than lethal.") The use of an armed robot in a violent standoff may make sense, but equipping squad cars with robots as part of ordinary patrols, as some envision, is much murkier.

For example, if robots become ordinary in policing, should they carry weapons — lethal (firearms) or non-lethal (electric stun guns or tear gas)? Robots permit the use of force at a distance. If distance makes it easier to use force, shouldn't we be concerned at a time when there have been protests around the country over fatal encounters with the police?

And a robot is unlike a gun in a crucial way: a gun may misfire, but it cannot be hacked. The market for police robots is emerging, but we as a society — and that includes the police — should be wary of any armed police robot that is vulnerable to takeover by third parties. Experience with the security of electronic devices doesn’t inspire confidence: If third parties can hack cars or toy drones, they can certainly hack police robots.

Read the full piece at The New York Times