With very rare exceptions, automakers are famously coy about crash dilemmas. They don’t want to answer questions about how their self-driving cars would respond to weird, no-win emergencies. This is understandable, since any answer can be criticized—there’s no obvious solution to a true dilemma, so why play that losing game?
But we can divine how an automaker approaches these hypothetical problems, and that tells us something about the normal cases. We can look at patent filings, actual behavior in related situations, and other clues. A recent lawsuit filed against Tesla reveals a key to understanding how its autopiloted cars would handle the iconic “trolley problem” in ethics.
Applied to robot cars, the trolley problem looks something like this:
Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake. Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right. Too bad for the one unlucky person standing on that path, struck and killed by your car. Did your robot car make the right decision?
Either action here can be defended, and no answer will satisfy everyone. By programming the car to retake control and swerve, the automaker trades a big accident for a smaller one, and minimizing harm seems very reasonable: more people get to live. But doing nothing and letting the five pedestrians die isn’t totally crazy, either.
By allowing the driver to continue forward, the automaker might fail to prevent that big accident, but at least it bears no responsibility for creating one, as it would if the car swerved into the unlucky person who otherwise would have lived. It may fail to save the five people, but—as many ethicists and lawyers agree—there’s a greater duty not to kill.
Read the full piece at Forbes.