"Even non-lethal autonomous robots raise serious ethical question. As technology philosopher Peter Asaro told me in an email earlier this year, Google’s self-driving cars are perfect example. “The current prototypes from Google and other manufacturers require a human to sit behind the steering wheel of the car and take over when the car gets into trouble,” he said. “But how do you negotiate that hand-over of control?” He raises several examples: a driver wakes up from a nap an incorrectly thinks an oncoming truck is a threat. Does the car let the driver swerve into trouble when he or she is perfectly safe? “What if the person is legally drunk, but wants to take control?” he asks. And when something does go wrong with a self-driving car, how is blame portioned out?"
- Date Published: 11/06/2015
- Original Publication: Inverse