Dr. Asaro is Assistant Professor in the School of Media Studies at the New School in New York City. He is the co-founder of the International Committee for Robot Arms Control, and has written on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as they apply to consumer robots, industrial automation, smart buildings, aerial drones and autonomous vehicles.
I have been asked by Science & Film to review the realism of EYE IN THE SKY in terms of the new technologies we see deployed in the film. Most of the technologies employed in the film's narrative have some basis in reality, though many are still in very early or proof-of-concept stages, and remain far from the reliable and useful technologies depicted on screen.
Last week the Future of Life Institute released a letter signed by some 1,500 artificial intelligence (AI), robotics and technology researchers. Among them were celebrities of science and the technology industry—Stephen Hawking, Elon Musk and Steve Wozniak—along with public intellectuals such as Noam Chomsky and Daniel Dennett. The letter called for an international ban on offensive autonomous weapons, which could target and fire weapons without meaningful human control.
This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such a ban might be problematic for various reasons. It argues in favour of a theoretical foundation for such a ban based on human rights and humanitarian principles that are not only moral, but also legal ones. In particular, an implicit requirement for human judgement can be found in international humanitarian law governing armed conﬂict.
As the military’s armed surveillance drones have become the iconic weapon of the early twenty-first century, they have also introduced radical transformations in the traditional labor of those who operate them: the pilots, crew, analysts, and commanders. In so doing, these transformations have engendered new kinds of subjectivity, with new ways of experiencing the work of surveillance and killing.
""What you still want is humans to designate the target in advance and ensure they are legal and lawful targets before the system is deployed," said Peter Asaro, a philosopher who studies artificial intelligence and is co-founder of the International Committee for Robot Arms Control."
"Peter Asaro, vice-chair of the International Committee for Robot Arms Control — which is campaigning for a treaty to ban “killer robots” — questions whether a machine can be programmed to make the sort of moral and ethical choices that a human does before taking someone’s life.
Soldiers must consider whether their actions are justified and whether the risks they take are proportionate to a threat, he said.
"“It is crucially important for the international community to establish a norm that prohibits delegating the authority to take human lives to machines,” Peter Asaro, a representative for the campaign, told me."
"Wer ein solches Verhalten für unmoralisch hält, obwohl das Leben der Nächsten gerettet wird, ist schon mitten in der Diskussion, die in den USA und in Europa eifrig geführt wird. Ron Arkin arbeitet für das Pentagon. Er entwickelt autonome Kampfroboter, die zum Äußersten bereit sind, aber zugleich Kollateralschäden vermeiden. Mit ihm und mit anderen wie Peter Asaro und Luís Moniz Pereira habe ich mich im März 2016 im Rahmen eines Symposiums zur Maschinenethik an der Stanford University getroffen.
""It raises a lot of concern about the increased weaponization of robots that the police use," said Peter Asaro, a co-founder of the International Committee for Robot Arms Control. The deadly use of the explosive C4 attached to the robot in Dallas is used by the military in combat situations.
"Once, I think, police departments have these kinds of weapons in their arsenal, it provides the opportunity to use them in a lot of different kinds of scenarios," said Asaro."
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog will all be participating in this two-day conference.
Registration is open for We Robot 2015 and we have a great program planned:
Friday, April 10
Registration and Breakfast
Welcome Remarks: Dean Kellye Testy, University of Washington School of Law
Introductory Remarks: Ryan Calo, Program Committee Chair
For more information visit: https://citp.princeton.edu/event/lunch-timer-asaro-tang/
Location: Bowl 001, Robertson Hall
Food and discussion begin at 12:15 pm. Open to current Princeton faculty, fellows and students only. RSVP required. Co-sponsored with WWS and LAPA.
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog are listed as participants for We Robot 2014. Robotics is becoming a transformative technology. We Robot 2014 builds on existing scholarship exploring the role of robotics, examining how the increasing sophistication of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, and even to the battlefield disrupt existing legal regimes or require the rethinking of various policy issues. If you are on the front lines of robot theory, design, or development, we hope to see you.
The motion under debate will be: “Should there be an absolute ban on autonomous systems capable of using lethal force?” Two key speakers will argue for and against the motion, and respond to each other’s presentation. This will be followed by a discussion session with the audience, and a public vote.
Hours after gunman Micah Johnson ambushed police officers in downtown Dallas, he was killed by a bomb strapped to a police robot. Robots have defused many dangerous situations in the past, but using a robot to kill was a first for a domestic police force. Kris Van Cleave reports on the ethical questions about the use of robots to kill suspects.
Affiliate Scholar Peter Asaro is interviewed.
Robotic warfare is no longer the realm of science fiction. From drones to the development of lethal, autonomous robotic weaponry, we take a look at the ethics of the future of killer robots and where this will ultimately lead mankind.
Are you worried about killer robots? Last week, some of the most prominent thinkers in science and technology signed an open letter that warned of the coming arms race should militaries pursue the development and deployment of artificially intelligent weaponry. The letter was written by The Campaign to Stop Killer Robots, an international coalition of NGOs, and was signed by almost 14,000 people, including Stephen Hawking, Elon Musk, and Steve Wozniak.
It may be old news here in Hollywood — but the world’s scientists are now warning us about killer robots. Hundreds of scientists, including Stephen Hawking and Elon Musk, wrote an open letter released this week in which they warned of a global A.I. arms race. Peter Asaro is interviewed.