The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
I am pleased to announce that our edited volume Robot Ethics: The Social and Ethical Implications of Robotics has now been released by MIT Press.
The preface and table of contents are below, including a link to Ryan Calo's chapter on privacy:
“Nothing is stranger to man than his own image.”
– Karel Čapek in Rossum’s Universal Robots (1921)
Here's a preview of my forthcoming paper on robot ethics (with co-authors Keith Abney and George Bekey) in the journal Artificial Intelligence, one of the best in its field.
“We mean for that to happen. This premeditation is the difference between manslaughter and murder, a much more serious offense,” wrote Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University.
“Elon Musk is a great visionary and a great inventor, and you have to admire his ambition and his moxie,” says Patrick Lin, philosophy professor and head of the emerging sciences group at California Polytechnic. “But it does seem he has a blind spot for ethical issues and their impact.”
Putting humans on Mars, he adds, could spread our shortcomings through the solar system: “It sounds like we’re going to be exporting our problems to another rock.”
Ryan Calo, an expert on robotics law at the University of Washington, is skeptical that it’s possible to translate the so-far theoretical ethical discussions into practical rules or system designs. He doesn’t think autonomous cars are sophisticated enough to understand the different factors a human would in a real-life situation.
Patrick Lin, the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, says we humans are a fickle lot, and that we don’t always know what we want or what we can live with.
“What we intellectually believe is true and what we in fact do may be two very different things,” he told Gizmodo. “Humans are often selfish even as they profess altruism. Car manufacturers, then, might not fully appreciate this human paradox as they offer up AI and robots to replace us behind the wheel.”
“With Go or chess or Space Invaders, the goal is to win, and we know what winning looks like,” says Lin. “But in ethical decision-making, there is no clear goal. That’s the whole trick. Is the goal to save as many lives as possible? Is the goal to not have the responsibility for killing? There is a conflict in the first principles.”
The article then paraphrases philosophy professor Patrick Lin, whose work at Cal Poly focuses in part on the ethics of driverless cars. According to Lin, “On the one hand, [the trolley problem] is a great entry point and teaching tool for engineers with no background in ethics. On the other hand, its prevalence, whimsical tone, and iconic status can shield you from considering a wider range of dilemmas and ethical considerations.”
“We can't cherry-pick the costs or savings to focus on,” says Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. Instead, he says, to fairly examine the ethics involved, we should consider impacts at both the individual and societal level. “Yes, healthier people may mean lower health costs and more productivity, but that's a partial picture at best.”
These ethical programming decisions will be made as a matter of company policy, and buyers may find themselves forced to buy into the brand whose ethics most closely align with their own. “If you had to choose between a car that would always save as many lives as possible in an accident, or one that would always save you at all costs, which would you buy?” asks Lin.
Gerdes has been working with a philosophy professor, Patrick Lin, to make ethical thinking a key part of his team’s design process. Lin, who teaches at Cal Poly, spent a year working in Gerdes’s lab and has given talks to Google, Tesla, and others about the ethics of automating cars. The trolley problem is usually one of the first examples he uses to show that not all questions can be solved simply through developing more sophisticated engineering.
“Even if it’s a rare problem, autonomous car manufacturers still need to specify some action [in the event of an unavoidable crash], and the wrong one could lead to massive lawsuits and alarmist headlines,” says Patrick Lin, director of the ethics and emerging sciences group at California Polytechnic State University.