Dr. Asaro is Associate Professor in the School of Media Studies at The New School in New York City. He is the co-founder of the International Committee for Robot Arms Control, and has written on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as they apply to consumer robots, industrial automation, smart buildings, aerial drones and autonomous vehicles.
I have been asked by Science & Film to review the realism of EYE IN THE SKY in terms of the new technologies we see deployed in the film. Most of the technologies in the film narrative have some basis in reality, though many are still in very early proof-of-concept stages and remain far from the reliable and useful technologies the film depicts.
Last week the Future of Life Institute released a letter signed by some 1,500 artificial intelligence (AI), robotics and technology researchers. Among them were celebrities of science and the technology industry—Stephen Hawking, Elon Musk and Steve Wozniak—along with public intellectuals such as Noam Chomsky and Daniel Dennett. The letter called for an international ban on offensive autonomous weapons, which could target and fire weapons without meaningful human control.
This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such a ban might be problematic for various reasons. It argues in favour of a theoretical foundation for such a ban based on human rights and humanitarian principles that are not only moral, but also legal ones. In particular, an implicit requirement for human judgement can be found in international humanitarian law governing armed conﬂict.
As the military’s armed surveillance drones have become the iconic weapon of the early twenty-first century, they have also introduced radical transformations in the traditional labor of those who operate them: the pilots, crew, analysts, and commanders. In so doing, these transformations have engendered new kinds of subjectivity, with new ways of experiencing the work of surveillance and killing.
"It's hard to say who is responsible. As a casual user you have no idea how these things are built," said Peter Asaro, an assistant professor at The New School in New York and an AI philosopher.

And as algorithms become more complex, their very creators may no longer understand how they work or what they produce.

"The accountability will be what they do about it when something bad happens," Asaro said.
Advocacy groups opposed to autonomous weapons are concerned about how these cooperative swarms will select targets, according to Peter Asaro, co-founder of the International Committee for Robot Arms Control (ICRAC) and spokesperson for the Campaign to Stop Killer Robots. He said it depends on whether these drones are more akin to sophisticated guided missiles controlled by humans, or autonomous killing machines.
Indeed, panelist Peter Asaro—philosopher of science, technology, and media at The New School—said that machines aren't legal or moral agents. So if humans give robots responsibilities or rights they don't deserve, he said, it could cause problems for society.

As humans and robots engage more frequently, it will become increasingly important for robots to develop empathy, Asaro said.

“If they're going to interact with us socially, they're going to need to understand social structures,” he noted.
“It's unclear who, if anyone, could be held responsible if an autonomous weapon caused an atrocity,” the campaign’s legal expert Peter Asaro told me in an email. “In order to commit a crime or war crime there must be intention. Robots aren’t capable of intention in the legal sense, so they cannot commit crimes or be held accountable for their actions. This would make it easy to cause atrocities with killer robots, without anyone being legally responsible.”
Peter Asaro, a philosopher of science, technology, and media at The New School in New York City, has been working to address these fundamental questions of responsibility and liability for all autonomous systems, not just weapons. By exploring fundamental concepts of autonomy, agency, and liability, he intends to develop legal approaches for regulating the use of autonomous systems and the harm they cause.
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog will all be participating in this two-day conference.
Registration is open for We Robot 2015 and we have a great program planned:
Friday, April 10
Registration and Breakfast
Welcome Remarks: Dean Kellye Testy, University of Washington School of Law
Introductory Remarks: Ryan Calo, Program Committee Chair
For more information visit: https://citp.princeton.edu/event/lunch-timer-asaro-tang/
Location: Bowl 001, Robertson Hall
Food and discussion begin at 12:15 pm. Open to current Princeton faculty, fellows and students only. RSVP required. Co-sponsored with WWS and LAPA.
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog are listed as participants for We Robot 2014. Robotics is becoming a transformative technology. We Robot 2014 builds on existing scholarship exploring the role of robotics to examine how the increasing sophistication of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, and even to the battlefield disrupt existing legal regimes or require rethinking of various policy issues. If you are on the front lines of robot theory, design, or development, we hope to see you.
The motion under debate will be: “Should there be an absolute ban on autonomous systems capable of using lethal force?” Two key speakers will argue for and against the motion, and respond to each other’s presentation. This will be followed by a discussion session with the audience, and a public vote.
FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City.
Peter Asaro (assistant professor in the School of Media Studies at The New School) and S. Matthew Liao (director of the Center for Bioethics at New York University) talk to Live Science's Denise Chow and Space.com's Tariq Malik about the ethics of AI.
Hours after gunman Micah Johnson ambushed police officers in downtown Dallas, he was killed by a bomb strapped to a police robot. Robots have defused many dangerous situations in the past, but using a robot to kill was a first for a domestic police force. Kris Van Cleave reports on the ethical questions about the use of robots to kill suspects.
Affiliate Scholar Peter Asaro is interviewed.