From The Atlantic:
Robots are replacing humans on the battlefield--but could they also be used to interrogate and torture suspects? Doing so would sidestep a serious ethical conflict between physicians' duty to do no harm, or nonmaleficence, and their questionable role in monitoring the vital signs and health of the interrogated. A robot, on the other hand, wouldn't be bound by the Hippocratic oath, though its very existence raises new dilemmas of its own.
The ethics of military robots is quickly marching ahead, judging by news coverage and academic research. Yet there's little discussion of robots in the service of national intelligence and espionage, which are ever-present background activities. This is surprising, because most military robots are used for surveillance and reconnaissance, and their most controversial uses trace back to the Central Intelligence Agency (CIA) in targeted strikes against suspected terrorists. Just this month, a CIA drone--an RQ-170 Sentinel--crash-landed intact into the hands of the Iranians, exposing the secret US spy program in the volatile region.
The US intelligence community, to be sure, is very much interested in robot ethics. At the least, it doesn't want to be ambushed by public criticism or worse, since that could derail programs, waste resources, and erode international support. Many in government and policy also have a genuine concern about "doing the right thing" and the impact of war technologies on society. To those ends, In-Q-Tel--the CIA's technology venture-capital arm (the "Q" is a nod to the technology-gadget genius in the James Bond spy movies)--invited me to give a briefing to the intelligence community on ethical surprises in their line of work, beyond familiar concerns over possible privacy violations and illegal assassinations. This article is based on that briefing, and while I refer mainly to the US intelligence community, the discussion could apply just as well to intelligence programs abroad...