Dr. Asaro is Associate Professor in the School of Media Studies at the New School in New York City. He is the co-founder of the International Committee for Robot Arms Control, and has written on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as they apply to consumer robots, industrial automation, smart buildings, aerial drones and autonomous vehicles.
Amazon, the company synonymous with online shopping, is supplying facial recognition technology to government and law enforcement agencies over its web services platform. Branded Rekognition, the technology is every bit as dystopian as it sounds.
Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country's military dominance? Should it use its state-of-the-art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons?
The Convention on Certain Conventional Weapons (CCW) at the UN has just concluded a second round of meetings on lethal autonomous weapons systems in Geneva, under the auspices of what is known as a Group of Governmental Experts. Both the urgency and significance of the discussions in that forum have been heightened by the rising concerns over artificial intelligence (AI) arms races and the increasing use of digital technologies to subvert democratic processes.
I have been asked by Science &amp; Film to review the realism of EYE IN THE SKY in terms of the new technologies we see deployed in the film. Most of the technologies employed in the film narrative have some basis in reality, though many are still in very early stages, or proof-of-concept, and remain far from the reliable and useful technologies depicted in the film.
Last week the Future of Life Institute released a letter signed by some 1,500 artificial intelligence (AI), robotics and technology researchers. Among them were celebrities of science and the technology industry—Stephen Hawking, Elon Musk and Steve Wozniak—along with public intellectuals such as Noam Chomsky and Daniel Dennett. The letter called for an international ban on offensive autonomous weapons, which could target and fire weapons without meaningful human control.
Paradoxically, the human factor is also cited by those in favor of the development of lethal autonomous weapons. "Robots aren't scared," Steve Groves, from the conservative U.S. think tank Heritage Foundation, told CBS last May. "They don't have fits of madness. They don't react to rage."
The Campaign to Stop Killer Robots is an international coalition of 59 groups, including Human Rights Watch and the Nobel Women’s Committee.
Spokesman Peter Asaro, an affiliate scholar at the Center for Internet and Society at Stanford Law School, said an international treaty to ban the weapons was urgent.
He told The New Daily killer robots would make it difficult to hold anyone accountable for war crimes and atrocities.
Among the signatories was Peter Asaro, an assistant professor at The New School for Public Engagement in New York and co-founder of the International Committee for Robot Arms Control.
The committee believes that making decisions “about the application of violent force must not be delegated to machines.”
So what distinctly human activity will remain for us in the face of robots that are skilled, swift, and calculating? The philosopher Dominique Lestel, in his book A quoi sert
Even non-lethal autonomous robots raise serious ethical questions. As technology philosopher Peter Asaro told me in an email earlier this year, Google's self-driving cars are a perfect example. "The current prototypes from Google and other manufacturers require a human to sit behind the steering wheel of the car and take over when the car gets into trouble," he said. "But how do you negotiate that hand-over of control?" He raises several examples: a driver wakes up from a nap and incorrectly thinks an oncoming truck is a threat.
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog will all be participating in this two-day conference.
Registration is open for We Robot 2015 and we have a great program planned:
Friday, April 10
Registration and Breakfast
Welcome Remarks: Dean Kellye Testy, University of Washington School of Law
Introductory Remarks: Ryan Calo, Program Committee Chair
For more information visit: https://citp.princeton.edu/event/lunch-timer-asaro-tang/
Location: Bowl 001, Robertson Hall
Food and discussion begin at 12:15 pm. Open to current Princeton faculty, fellows and students only. RSVP required. Co-sponsored with WWS and LAPA.
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog are listed as participants for We Robot 2014. Robotics is becoming a transformative technology. We Robot 2014 builds on existing scholarship in the field to examine how the increasing sophistication of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, and even to the battlefield disrupt existing legal regimes or require rethinking of various policy issues. If you are on the front lines of robot theory, design, or development, we hope to see you.
The motion under debate will be: "Should there be an absolute ban on autonomous systems capable of using lethal force?" Two key speakers will argue for and against the motion, and respond to each other's presentation. This will be followed by a discussion session with the audience, and a public vote.
FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City.
Peter Asaro (assistant professor in the School of Media Studies at The New School) and S. Matthew Liao (director of the Center for Bioethics at New York University) talk to Live Science's Denise Chow and Space.com's Tariq Malik about the ethics of AI.
Hours after gunman Micah Johnson ambushed police officers in downtown Dallas, he was killed by a bomb strapped to a police robot. Robots have defused many dangerous situations in the past, but using a robot to kill was a first for a domestic police force. Kris Van Cleave reports on the ethical questions about the use of robots to kill suspects.
Affiliate Scholar Peter Asaro is interviewed.