Peter Asaro, vice-chair of the International Committee for Robot Arms Control — which is campaigning for a treaty to ban “killer robots” — questions whether a machine can be programmed to make the sort of moral and ethical choices that a human does before taking someone’s life.
Soldiers must consider whether their actions are justified and whether the risks they take are proportionate to the threat, he said.
“I don’t know that it’s a role that we can give to a machine,” he said. “I don’t know that looking at a bunch of different examples is going to teach it what it needs to know. Who is responsible if something goes wrong?”
If a robot follows its programming but still does something wrong, it is hard to determine who should be held responsible, Asaro said.