Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also a philosophy professor. He has published several books and papers in the field of technology ethics, especially with respect to robotics, including Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, 2017), as well as human enhancement, cyberwarfare, space exploration, nanotechnology, and other areas. He teaches courses in ethics, political philosophy, technology ethics, and philosophy of law. Dr. Lin has appeared in international media such as BBC, Forbes, National Public Radio (US), Popular Mechanics, Popular Science, Reuters, Science Channel, Slate, The Atlantic, The Christian Science Monitor, The Times (UK), Wired, and others (see this page for more).
Dr. Lin is currently or has been affiliated with several other leading organizations, including: Stanford Law School's Center for Internet and Society, Stanford's School of Engineering (CARS), the 100 Year Study on AI, the World Economic Forum, the New America Foundation, the UN Institute for Disarmament Research, the University of Notre Dame, the US Naval Academy, and Dartmouth College. He earned his BA from the University of California, Berkeley, and his MA and PhD from the University of California, Santa Barbara.
Cross-posted from The Atlantic.
In the year 2025, a rogue state, long suspected of developing biological weapons, now seems intent on using them against US allies and interests. Anticipating such an event, we have developed a secret "counter-virus" that could infect and destroy their stockpile of bioweapons. Should we use it?
I am pleased to announce that our edited volume Robot Ethics: The Social and Ethical Implications of Robotics has now been released by MIT Press.
The preface and table of contents are below (including a link to Ryan Calo's chapter on privacy):
“Nothing is stranger to man than his own image.”
– Karel Čapek in Rossum’s Universal Robots (1921)
Here's a preview of my forthcoming paper on robot ethics (with co-authors Keith Abney and George Bekey) in the journal Artificial Intelligence, one of the best in its field.
Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake.
Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right. Too bad for the one unlucky person standing on that path, struck and killed by your car.
Within the next few years, autonomous vehicles, also known as robot cars, could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars could be “game changing” for law enforcement. The self-driving machines could serve as professional getaway drivers, to name one possibility. Given the pace of development in autonomous cars, this doesn’t seem implausible.
Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others, a sensible goal, which way would you instruct it to go in this scenario?
On a future road trip, your robot car decides to take a new route, driving you past a Krispy Kreme Doughnut shop. A pop-up window opens on your car’s display and asks if you’d like to stop at the store. “Don’t mind if I do,” you think to yourself. You press “yes” on the touchscreen, and the autonomous car pulls up to the shop.
If our leaders don’t even use email, can we trust them to make decisions about our brave new e-world? In a book released a few days ago—Cybersecurity and Cyberwarfare: What Everyone Needs to Know—we are immediately struck by how unprepared we really are as a society:
"Another panelist, philosophy professor Patrick Lin, said that expelling a student for non-criminal hate speech isn't really an option for a public university. He said if the university was to expel a student for something like that, it would be hit with a lawsuit that would go to the Supreme Court.
"A question was asked about Kyler Watkins, the former fraternity member who was photographed in blackface, and the numerous calls for him to be expelled from school. That sparked a larger discussion on free speech and First Amendment rights, with philosophy professor Patrick Lin saying expulsion would have been counterproductive.
"Patrick Lin, director of the ethics and emerging sciences group at California Polytechnic State University, said he sees "no evidence that Facebook's culture is unethical, though just one senior executive in the right place can poison the well."
"I'd guess that most Facebook employees want to do the right thing and are increasingly uncomfortable with how the proverbial sausage is made," Lin added."
"Patrick Lin, philosophy professor at Cal Poly, San Luis Obispo, is one of the few philosophers who’s examining the ethics of self-driving cars outside the Trolley Problem.
The Baker Forum was established by the Cal Poly President’s Council of Advisors, on the occasion of two decades of service to Cal Poly by President Warren J. Baker and his wife, Carly, to further the dialogue on critical public policy issues facing the nation and higher education. The forum gives particular attention to the special social and economic roles and responsibilities of polytechnic and science and technology universities.
Attendees will hear leading speakers, participate in interactive breakout sessions, and network with key innovators in this exciting field. Don't miss what's in store for the Automated Vehicles Symposium 2016.
Affiliate Scholars Bryant Walker Smith and Patrick Lin are confirmed speakers.
For more information, visit the conference website.
For more information and to register visit the event website.
Professor Patrick Lin discusses key ethical, legal, and policy challenges in cyberwarfare. This event is part of the “IT, Ethics, and Law” lecture series, co-sponsored by the High Tech Law Institute.
Self-driving cars are already cruising the streets today. And while these cars will ultimately be safer and cleaner than their manual counterparts, they can’t avoid accidents altogether. How should the car be programmed if it encounters an unavoidable accident? Patrick Lin navigates the murky ethics of self-driving cars.
SCOTT SIMON, HOST: