Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also a philosophy professor. He has published several books and papers in the field of technology ethics, especially with respect to robotics—including Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, 2017)—human enhancement, cyberwarfare, space exploration, nanotechnology, and other areas. He teaches courses in ethics, political philosophy, technology ethics, and philosophy of law. Dr. Lin has appeared in international media such as BBC, Forbes, National Public Radio (US), Popular Mechanics, Popular Science, Reuters, Science Channel, Slate, The Atlantic, The Christian Science Monitor, The Times (UK), Wired, and others.
Dr. Lin is currently or has been affiliated with several other leading organizations, including: Stanford Law School's Center for Internet and Society, Stanford's School of Engineering (CARS), 100 Year Study on AI, World Economic Forum, New America Foundation, UN Institute for Disarmament Research, University of Notre Dame, University of Iceland's Centre for Arctic Policy Studies, US Naval Academy, and Dartmouth College. He earned his BA from University of California at Berkeley, and MA and PhD from University of California at Santa Barbara.
Cross-posted from The Atlantic.
In the year 2025, a rogue state, long suspected of developing biological weapons, now seems intent on using them against U.S. allies and interests. Anticipating such an event, we have developed a secret "counter-virus" that could infect and destroy their stockpile of bioweapons. Should we use it?
I am pleased to announce that our edited volume Robot Ethics: The Social and Ethical Implications of Robotics has now been released by MIT Press.
The preface and table of contents are below (incl. link to Ryan Calo's chapter on privacy):
“Nothing is stranger to man than his own image.”
– Karel Čapek in Rossum’s Universal Robots (1921)
Here's a preview of my forthcoming paper on robot ethics (with co-authors Keith Abney and George Bekey) in the journal Artificial Intelligence, one of the best in its field.
This is a guest post. The views expressed in this article are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
In the first of this two-article series, we saw how augmented reality (AR) is causing friction between individual liberty and public interest. AR app makers are being required by some parks to obtain a permit before they can “put” virtual objects in those public spaces, given the sudden crowds the apps can cause.
This article looks at the same core dilemma with another technology: automated driving.
With very rare exceptions, automakers are famously coy about crash dilemmas. They don’t want to answer questions about how their self-driving cars would respond to weird, no-win emergencies. This is understandable, since any answer can be criticized—there’s no obvious solution to a true dilemma, so why play that losing game?
“There are moral, ethical reasons to not delegate the authority to kill people to machines,” said Peter Asaro, co-founder of the International Committee for Robot Arms Control, an international nonprofit opposed to military robots.
Lethal robots cannot be made infallible, said Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo.
Advanced drones could pose significant problems in the future. Stanford researchers Ryan Calo and Patrick Lin warn that there is a small chance that an advanced drone that does not rely on human controls could go rogue in combat.
"There's no plan for humans to be totally out of the loop," says Ryan Calo, a Stanford University researcher. "But there are pressures that create incentives for ever more autonomy," he adds.
“Military robots are potentially indiscriminate,” says Patrick Lin, another Stanford researcher. “They have a difficult time identifying people as well as contexts, for instance, whether a group of people are at a political rally or wedding celebration.”
Robots are unquestionably getting more sophisticated by the year and, as a result, are becoming an integral part of our daily lives. But as we increase our interactions with and dependence on robots, an important question needs to be asked: What would happen if a robot actually committed a crime, or even hurt someone, whether deliberately or by mistake?
In an interesting recent essay in the Atlantic – ‘Is it Possible to Wage a Just Cyberwar?’ – Patrick Lin, Fritz Allhoff, and Neil Rowe argue that events such as the Stuxnet cyberattack on Iran suggest that the way we fight wars is changing, as well as the rules that govern them. It is indeed easy to see how nations may be tempted to use cyberweapons to attack anonymously, from a distance, and without the usual financial and personnel costs of conventional warfare. (See also Mariarosaria Taddeo’s interesting recent post on this blog.)
The Baker Forum was established by the Cal Poly President’s Council of Advisors on the occasion of two decades of service to Cal Poly by President Warren J. Baker and his wife, Carly, to further the dialogue on critical public policy issues facing the nation and higher education. The forum gives particular attention to the special social and economic roles and responsibilities of polytechnic and science and technology universities.
Attendees will hear leading speakers, participate in interactive breakout sessions, and network with key innovators in this exciting field. Don't miss what's in store for the Automated Vehicles Symposium 2016.
Affiliate Scholars Bryant Walker Smith and Patrick Lin are confirmed speakers.
For more information, visit the conference website.
For more information and to register visit the event website.
Professor Patrick Lin discusses key ethical, legal, and policy challenges in cyberwarfare. This event is part of the “IT, Ethics, and Law” lecture series, co-sponsored by the High Tech Law Institute.
Self-driving cars are already cruising the streets today. And while these cars will ultimately be safer and cleaner than their manual counterparts, they can’t avoid accidents altogether. How should a car be programmed if it encounters an unavoidable accident? Patrick Lin navigates the murky ethics of self-driving cars.
SCOTT SIMON, HOST: