Ban Killer Robots before They Become Weapons of Mass Destruction

Publication Type: Other Writing
Publication Date: August 7, 2015

Last week the Future of Life Institute released a letter signed by some 1,500 artificial intelligence (AI), robotics and technology researchers. Among them were celebrities of science and the technology industry—Stephen Hawking, Elon Musk and Steve Wozniak—along with public intellectuals such as Noam Chomsky and Daniel Dennett. The letter called for an international ban on offensive autonomous weapons, which could select targets and fire without meaningful human control.

This week marks the 70th anniversary of the atomic bombings of the Japanese cities of Hiroshima and Nagasaki, which together killed over 200,000 people, mostly civilians. It took 10 years before the physicist Albert Einstein and the philosopher Bertrand Russell, along with nine other prominent scientists and intellectuals, issued a letter calling for global action to address the threat to humanity posed by nuclear weapons. They were motivated by the atomic devastation in Japan but also by the escalating Cold War arms race, which was rapidly increasing the number, destructive capability, and efficient delivery of nuclear arms, draining vast resources and putting humanity at risk of total destruction. They also noted in their letter that those who knew the most about the effects of such weapons were the most concerned and pessimistic about their continued development and use.

The Future of Life Institute letter is significant for the same reason: It is signed by a large group of those who know the most about AI and robotics, with some 1,500 signatures at its release on July 28 and more than 17,000 today. Signatories include many current and former presidents, fellows and members of the American Association for Artificial Intelligence, the Association for Computing Machinery and the IEEE Robotics & Automation Society; editors of leading AI and robotics journals; and key players at major artificial-intelligence companies such as Google DeepMind, Facebook, and IBM’s Watson team. As Max Tegmark, Massachusetts Institute of Technology physics professor and a founder of the Future of Life Institute, told Motherboard, “This is the AI experts who are building the technology who are speaking up and saying they don’t want anything to do with this.”

Autonomous weapons pose serious threats that, taken together, make a ban necessary. There are serious doubts about whether AI algorithms could effectively distinguish civilians from combatants, especially in complex conflict environments. Even advanced AI algorithms would lack the situational understanding needed to determine whether the use of violent force was appropriate in a given circumstance or whether that force was proportionate. Discrimination and proportionality are requirements of international law for humans who target and fire weapons, but autonomous weapons would open up an accountability gap. Because humans would no longer know what targets an autonomous weapon might select, and because the effects of a weapon may be unpredictable, there would be no one to hold responsible for the killing and destruction that results from activating such a weapon.

Read the full piece at Scientific American