Stanford CIS

Why the Argument Against a Ban on Autonomous Killer Robots Falls Flat


"“Sometimes, you can’t separate the technology from its use, and this can make a technology unethical,” he told io9. “For instance, nukes are inherently indiscriminate and inhumane, and there’s no morally defensible use of them. It’s not clear that this is the case with killer robots, but it’s possible—I think there needs to be more investigation.”

From a moral perspective, Lin says he is sympathetic to a ban on killer robots. But like Ackerman, he finds it hard to imagine how such a ban could actually come about.

“Any AI research could be co-opted into the service of war, from autonomous cars to smarter chat-bots,” he says. “It’s a short hop from innocent research to weaponization.”

Published in: Press, killer robots, Robotics