Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also an associate philosophy professor. He has published several books and papers in the field of technology ethics, especially with respect to nanotechnology, human enhancement, robotics, cyberwarfare, space exploration, and other areas. He teaches courses in ethics, political philosophy, philosophy of technology, and philosophy of law. Dr. Lin has appeared in international media such as BBC, Forbes, National Public Radio (US), Popular Mechanics, Popular Science, Reuters, Science Channel, Slate, The Atlantic, The Christian Science Monitor, The Times (UK), Wired, and others.
Dr. Lin is currently or has been affiliated with several other leading organizations, including Stanford Law School's Center for Internet and Society, Stanford's School of Engineering (CARS), the New America Foundation, the UN Institute for Disarmament Research, the University of Notre Dame, the US Naval Academy, and Dartmouth College. He earned his BA from the University of California, Berkeley, and his MA and PhD from the University of California, Santa Barbara.
Cross-posted from The Atlantic.
In the year 2025, a rogue state, long suspected of developing biological weapons, now seems intent on using them against U.S. allies and interests. Anticipating such an event, we have developed a secret "counter-virus" that could infect and destroy their stockpile of bioweapons. Should we use it?
I am pleased to announce that our edited volume Robot Ethics: The Social and Ethical Implications of Robotics has now been released by MIT Press.
The preface and table of contents are below (including a link to Ryan Calo's chapter on privacy):
“Nothing is stranger to man than his own image.”
– Karel Čapek in Rossum’s Universal Robots (1921)
Here's a preview of my forthcoming paper on robot ethics (with co-authors Keith Abney and George Bekey) in the journal Artificial Intelligence, one of the best in its field.
In the first of this two-article series, we saw how augmented reality (AR) is causing friction between individual liberty and public interest. Some parks are requiring AR app makers to obtain a permit before they can “put” virtual objects in those public spaces, given the sudden crowds the apps can cause.
This article looks at the same core dilemma with another technology: automated driving.
With very rare exceptions, automakers are famously coy about crash dilemmas. They don’t want to answer questions about how their self-driving cars would respond to weird, no-win emergencies. This is understandable, since any answer can be criticized—there’s no obvious solution to a true dilemma, so why play that losing game?
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Last week, the Dallas police killed a suspected gunman with a bomb-delivering robot. It was a desperate measure for desperate times: five law enforcement officers were killed and several more wounded before the shooter was finally cornered.
“Today, drivers are not trained or tested for that change in control,” says Patrick Lin, director of the ethics and emerging sciences group at California Polytechnic State University. “Humans aren’t hardwired to sit and monitor a system for long periods of time and then quickly react properly when an emergency happens.”
“Even if it’s a rare problem, autonomous car manufacturers still need to specify some action [in the event of an unavoidable crash], and the wrong one could lead to massive lawsuits and alarmist headlines,” says Patrick Lin, director of the ethics and emerging sciences group at California Polytechnic State University.
Patrick Lin, director of the ethics and emerging sciences group at California Polytechnic State University, adds, “Allowing manufacturers to have variable training times may be useful in determining the proper amount of training ordinary drivers should have. But if government or a consortium of carmakers were to establish minimum standards of safety and training, that may give us more confidence than letting each manufacturer decide what’s best.”
“This is one of the most profoundly serious decisions we can make: program a machine that can foreseeably lead to someone’s death,” Lin said. “When we make programming decisions, we expect those to be as right as we can be.”
What right looks like may differ from company to company, but according to Lin, automakers have a duty to show that they have wrestled with these complex questions — and publicly reveal the answers they reach.
“It’s one thing for a human to steer her car off a cliff and quite another thing for a machine to make that choice,” Lin says. “It’s also one thing for pedestrians to be struck by a car whose driver made a bad reflexive decision and quite another thing for them to be struck because the robot car was programmed deliberately to target them or put them at greater risk. Setting expectations can help with some of this, but probably not all.”
Attendees will hear leading speakers, participate in interactive breakout sessions, and network with key innovators in this exciting field. Don't miss what's in store for the Automated Vehicles Symposium 2016.
Affiliate Scholars Bryant Walker Smith and Patrick Lin are confirmed speakers.
For more information, visit the conference website.
For more information and to register, visit the event website.
Professor Patrick Lin discusses key ethical, legal, and policy challenges in cyberwarfare. This event is part of the “IT, Ethics, and Law” lecture series, co-sponsored by the High Tech Law Institute.
Self-driving cars are already cruising the streets today. And while these cars will ultimately be safer and cleaner than their manual counterparts, they can't avoid accidents altogether. How should the car be programmed if it encounters an unavoidable accident? Patrick Lin navigates the murky ethics of self-driving cars.
SCOTT SIMON, HOST: