“Nothing is going to catch this car by surprise.”

“[Our car] always does the right thing.”

These quotes, which have been attributed to people close to Google’s self-driving car project, suggest that Google’s cars (and hence their designers) are technically and morally infallible. This suggestion, though probably unintentional, concerns me.
My technical critique is straightforward: Systems fail. Engineers know this. So do lawyers. How can we credibly say, at this point or perhaps at any point, that “[n]othing is going to catch this car by surprise”?
Rather than asserting that computers are perfect drivers, we might ask whether they actually drive better than humans. In 2009, according to rough estimates, motor vehicles in the United States were involved in 5.5 million police-reported crashes (averaging 1.7 vehicles per crash) and 11 million total crashes over the three trillion miles (yes, trillion) that they traveled. These figures suggest a rate of one crashed vehicle every 160,000 miles. In comparison, Google’s cars have reportedly traveled some 200,000 miles with “occasional” intervention by safety drivers and 1,000 miles without any human intervention. I have not asked about the circumstances under which these human drivers intervene. I also don’t know the extent to which this travel is representative of the American driving experience; for example, it may include a disproportionately large or small share of urban environments, extreme weather, and unusual events.
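For readers who want to check that arithmetic, here is a minimal sketch (in Python, my own illustration rather than anything from Google or a government agency) that derives the per-vehicle crash rate from the rounded national estimates above.

```python
# Rough crash-involvement rate implied by the 2009 estimates cited above.
# All inputs are the rounded figures from the text; official data are more precise.

total_crashes = 11_000_000            # police-reported plus unreported crashes (rough estimate)
vehicles_per_crash = 1.7              # average vehicles involved per crash
miles_traveled = 3_000_000_000_000    # total vehicle miles traveled (approx.)

crashed_vehicles = total_crashes * vehicles_per_crash
miles_per_crashed_vehicle = miles_traveled / crashed_vehicles

print(f"roughly {miles_per_crashed_vehicle:,.0f} miles per crash-involved vehicle")
# about 160,000 miles, matching the rate cited in the text
```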
According to my cursory analysis (which uses a Poisson distribution and assumes the accuracy of the national crash and mileage estimates), Google’s cars would need to drive themselves (by themselves) more than 725,000 representative miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars. If we look only at fatal crashes, this minimum skyrockets to 300 million miles. To my knowledge, Google has yet to reach these milestones.
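To make the shape of that cursory analysis concrete, here is a rough sketch of one way such a threshold can be derived under a Poisson model. The function name and the rounded 160,000-mile input are my own; because the original calculation presumably used somewhat more precise estimates, this sketch lands near, rather than exactly on, the 725,000-mile figure.

```python
import math

def miles_for_confidence(miles_per_crash: float, confidence: float = 0.99) -> float:
    """Crash-free miles needed to reject, at the given confidence level, the
    hypothesis that the automated fleet crashes at least as often as human
    drivers, treating crashes as a Poisson process with the human rate."""
    # Under the human rate, P(zero crashes in m miles) = exp(-m / miles_per_crash).
    # We need that probability to fall to 1 - confidence or below.
    return miles_per_crash * math.log(1.0 / (1.0 - confidence))

# Using the rounded ~160,000-mile rate derived above; slightly different inputs
# would move the answer toward the 725,000-mile figure quoted in the text.
print(f"{miles_for_confidence(160_000):,.0f} crash-free miles")  # about 737,000

# Substituting the much longer interval between fatal-crash involvements (tens of
# millions of miles per vehicle) yields a threshold in the hundreds of millions
# of miles, the order of magnitude of the 300-million-mile figure in the text.
```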
(By the way, can we draw meaningful conclusions about safety from total crash data rather than from injury- and fatal-crash data alone? This issue merits discussion for at least half a million more miles.)
My moral critique starts with a question: What is “the right thing”? Imagine you are driving down a narrow mountain road between two big trucks. Suddenly, the brakes on the truck behind you fail, and it rapidly gains speed. If you stay in your lane, you will be crushed between the trucks. If you veer to the right, you will go off a cliff. If you veer to the left, you will strike a motorcyclist. What do you do? In short, who dies?
Although situations that give rise to “last-minute value judgments” like this are thankfully rare today and may be even rarer in the future, more of these judgments may ultimately be made ahead of time, whether explicitly or implicitly, whether by act or omission, and whether by engineers, companies, regulators, lawyers, or consumers. In crashes and conflicts that cannot be avoided, how should a self-driving car balance the welfare of its occupants with the welfare of others? And, critically, who should decide?
Engineering is about trade-offs: We replace one set of problems with another set of problems and hope that, in the aggregate, our new problems are smaller than our old ones. (Take just one example: Although antilock brake systems without electronic stability control decrease the risk of a fatal collision for those outside the vehicle, they may increase that risk for those inside it.) Careful design therefore requires selection of system boundaries, assessment of risks and opportunities, and analysis of costs and benefits. None of these decisions is value-free; indeed, cost-benefit analyses performed by administrative agencies may even involve explicit assumptions about the value of a human life.
I am optimistic that greater vehicle automation could significantly improve the safety and efficiency of our transportation system. I am also impressed by the tremendous progress that Google (among other companies and universities) has made toward autonomous driving. Neither of my critiques demeans that work. Nonetheless, we must recognize that it is—and may always be—a work in progress. Perhaps Google’s cars will never be caught by surprise; they are, after all, machines. But if we already expect these vehicles to “always do[] the right thing,” then we may be the ones to face an unwelcome surprise.