Self-Driving Cars Will Teach Themselves to Save Lives—But Also Take Them

“With Go or chess or Space Invaders, the goal is to win, and we know what winning looks like,” says Patrick Lin, a philosopher at Cal Poly San Luis Obispo and a legal scholar at Stanford University. “But in ethical decision-making, there is no clear goal. That’s the whole trick. Is the goal to save as many lives as possible? Is the goal to not have the responsibility for killing? There is a conflict in the first principles.”
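Lin’s conflict can be made concrete. The minimal Python sketch below (purely illustrative, not from any real vehicle software; all names and numbers are hypothetical) scores the same two crash outcomes under two plausible first principles: save as many lives as possible, or avoid being the cause of a death. The two objective functions rank the outcomes in opposite order, which is why there is no single “winning” score for a machine to optimize.

```python
# Hypothetical sketch: two ethical "first principles" as scoring functions
# that disagree on the same outcomes. Nothing here comes from a real system.

from dataclasses import dataclass

@dataclass
class Outcome:
    total_deaths: int    # deaths from any cause, including the car's inaction
    deaths_caused: int   # deaths directly attributable to the car's action

def save_most_lives(o: Outcome) -> float:
    """Principle 1: minimize total deaths, however they occur."""
    return -o.total_deaths

def avoid_killing(o: Outcome) -> float:
    """Principle 2: minimize deaths the car itself causes."""
    return -o.deaths_caused

# Swerving kills one bystander but saves five; staying the course kills
# no one directly but lets five die.
swerve = Outcome(total_deaths=1, deaths_caused=1)
stay = Outcome(total_deaths=5, deaths_caused=0)

for score in (save_most_lives, avoid_killing):
    best = max((swerve, stay), key=score)
    print(score.__name__, "prefers", "swerve" if best is swerve else "stay")
# save_most_lives prefers swerve; avoid_killing prefers stay: the
# "conflict in the first principles" Lin describes.
```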

But if the moral philosophies are pre-programmed by people at Google rather than learned by the machine itself, that’s another matter. The programmers would have to think about the ethics ahead of time. “One has forethought—and is a deliberate decision. The other is not,” says Lin. “Even if a machine makes the exact same decision as a human being, I think we’ll see a legal challenge.”