We’re building superhuman robots. Will they be heroes, or villains?

Author(s): 
Publication Type: Other Writing
Publication Date: November 2, 2015

Forget about losing your job to a robot. And don’t worry about a super-smart, but somehow evil, computer. We have more urgent ethical issues to deal with right now.

Artificial intelligence is replacing human roles, and it’s assumed that those systems should mimic human behavior — or at least an idealized version of it. This may make sense for limited tasks such as product assembly, but for more autonomous systems — robots and AI systems that can “make decisions” for themselves — that goal gets complicated.

There are two problems with the assumption that AI should act as we do. First, it's not always clear how we humans ought to behave, so programming robots becomes a soul-searching exercise in ethics, raising questions we don't yet have answers to. Second, if artificial intelligence does end up being more capable than we are, it may have different moral duties, ones that require it to act differently than we would.

Let’s look at robot cars to illustrate the first problem. How should they be programmed? This is important because they’re driving alongside our families right now. Should they always obey the law? Always protect their passengers? Minimize harm in an accident if they can? Or just slam on the brakes when there’s trouble?

These and other design principles are reasonable, but sometimes they conflict. For instance, an automated car may have to break the law or risk its passengers’ safety to spare the greatest number of people outside the car. The right decision, whatever that is, is fundamentally an ethical call based on human values, and one that science and engineering alone cannot answer.

Read the full piece at The Washington Post