Driving at Perfection

Nothing is going to catch this car by surprise…. It’s going to see hundreds of feet in all directions. [You’re] not going to have a pedestrian ‘come out of nowhere’ or the ball coming to the middle of the street. This car senses a lot.

Our cars are designed to avoid the kinds of situations that force people to make last-minute value judgments while driving.

[Our car] always does the right thing.

These quotes, which have been attributed to people close to Google’s self-driving car project, suggest that Google’s cars (and hence their designers) are technically and morally infallible. This suggestion, though probably unintentional, concerns me.

My technical critique is straightforward: Systems fail. Engineers know this. So do lawyers. How can we credibly say, at this point or perhaps at any point, that “[n]othing is going to catch this car by surprise”?

Rather than asserting that computers are perfect drivers, we might ask whether they actually drive better than humans. In 2009, according to rough estimates, motor vehicles in the United States were involved in 5.5 million police-reported crashes (averaging 1.7 vehicles per crash) and 11 million total crashes over the three trillion miles (yes, trillion) that they traveled. These figures suggest a rate of one crashed vehicle every 160,000 miles. In comparison, Google’s cars have reportedly traveled some 200,000 miles with “occasional” intervention by safety drivers and 1,000 miles without any human intervention. I have not asked about the circumstances under which these human drivers intervene. I also don’t know the extent to which this travel is representative of the American driving experience; for example, it may include a disproportionately large or small share of urban environments, extreme weather, and unusual events.

According to my cursory analysis (which uses a Poisson distribution and assumes the accuracy of the national crash and mileage estimates), Google's cars would need to drive themselves (by themselves) more than 725,000 representative miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars. If we look only at fatal crashes, this minimum skyrockets to 300 million miles. To my knowledge, Google has yet to reach these milestones.
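For readers who want to see the mechanics, here is a minimal, hypothetical sketch of that calculation in Python. It assumes a Poisson crash process and the 2009 rates I derive in the comments below (roughly one crashed vehicle per 158,000 miles overall, and one vehicle in a fatal crash per 65 million miles); the variable names are mine, not anything from the project.

```python
import math

# Illustrative sketch only: assume crashes arrive as a Poisson process at the
# conventional-fleet rate, so the chance of seeing zero crashes in n
# representative miles is exp(-n / miles_per_crash). We solve for the n that
# pushes that chance below 1 percent.
MILES_PER_CRASHED_VEHICLE = 157_932           # 2009 estimate, all crashes
MILES_PER_FATAL_CRASHED_VEHICLE = 65_015_957  # 2009 estimate, fatal crashes

P_VALUE = 0.01  # i.e., 99 percent confidence

def miles_needed(miles_per_crash: float, p: float = P_VALUE) -> float:
    """Crash-free miles required before P(zero crashes at the conventional rate) < p."""
    return -miles_per_crash * math.log(p)

print(round(miles_needed(MILES_PER_CRASHED_VEHICLE)))        # ~727,000 miles
print(round(miles_needed(MILES_PER_FATAL_CRASHED_VEHICLE)))  # ~299,000,000 miles
```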

(By the way, can we draw meaningful conclusions about safety from total crash data rather than from injury- and fatal-crash data alone? This issue merits discussion for at least half a million more miles.)

My moral critique starts with a question: What is “the right thing”? Imagine you are driving down a narrow mountain road between two big trucks. Suddenly, the brakes on the truck behind you fail, and it rapidly gains speed. If you stay in your lane, you will be crushed between the trucks. If you veer to the right, you will go off a cliff. If you veer to the left, you will strike a motorcyclist. What do you do? In short, who dies?

Although situations that give rise to “last-minute value judgments” like this are thankfully rare today and may be even rarer in the future, more of these judgments may ultimately be made ahead of time, whether explicitly or implicitly, whether by act or omission, and whether by engineers, companies, regulators, lawyers, or consumers. In crashes and conflicts that cannot be avoided, how should a self-driving car balance the welfare of its occupants with the welfare of others? And, critically, who should decide?

Engineering is about trade-offs: We replace one set of problems with another set of problems and hope that, in the aggregate, our new problems are smaller than our old ones. (Take just one example: Although antilock brake systems without electronic stability control decrease the risk of fatal collision for those outside the vehicle, they may increase that risk for those on the inside.) Careful design therefore requires selection of system boundaries, assessment of risks and opportunities, and analysis of costs and benefits. None of these decisions are value-free; indeed, cost-benefit analyses performed by administrative agencies may even involve explicit assumptions about the value of a human life.

I am optimistic that greater vehicle automation could significantly improve the safety and efficiency of our transportation system. I am also impressed by the tremendous progress that Google (among other companies and universities) has made toward autonomous driving. Neither of my critiques demeans that work. Nonetheless, we must recognize that it is, and may always be, a work in progress. Perhaps Google’s cars will never be caught by surprise; they are, after all, machines. But if we already expect these vehicles to “always do[] the right thing,” then we may be the ones to face an unwelcome surprise.

Photo by Chris Nakashima-Brown/No Fear of the Future

Comments

Personally, I think that the average human is a horrible driver because of inattention and road rage, and I think it's likely that self-driven cars with safety drivers will be safer, even if those cars make many mistakes.
My basis is anecdotal, based on personal experience: I drive a car with lane-keep assist and radar-based dynamic cruise control, and I keep both active on the highway. Consequently, my car alerts me when it thinks I'm drifting (there are many false positives but few false negatives), and it keeps a safe following distance even when someone is tailgating me at an unsafe distance, something I used to respond to by tailgating the person in front of me. And although I keep my hands on the wheel and my foot resting on the brake pedal, I'm less likely to engage in road rage by changing speed or swerving. Moreover, the car has protected me and my wife on several occasions by braking when it detects that another car has lost control and may hit us.
But regarding your analysis: when a new car manufacturer introduces its first models, I assume you would apply a similar technical critique because the new manufacturer is unproven. But I suspect that we don't require new car manufacturers to test-drive their first models for 0.75 million miles. If not, why not? And if we have good reasons not to require this of new manufacturers, why wouldn't the same reasons apply to self-driving cars?
Since two of the most significant causes of crashes, including fatal ones, are driver inattention and road rage, and assuming all else is equal (which I know is a big assumption), can you answer the following:
* What's the likelihood that a safety driver in a self-driven car would take control of the car in order to execute a road-rage maneuver? I assume it would be significantly less than the likelihood of road rage in standard cars. More importantly, how much less is the likelihood of road rage in a self-driven car with a safety driver?
* What's the likelihood that, simultaneously, a safety driver will be inattentive *and* the self-driven car will make a mistake leading to a crash? It may be significantly less than the likelihood of a crash caused by inattention in a standard car. This would depend on whether, and by how much, a safety driver's inattention would increase in a self-driven car, which would in turn depend on the likelihood of inattention in a standard car. I know that likelihood is quite high, given the prevalence of food consumption, cell-phone use, makeup application, newspaper reading, drunk driving, and so on. It may be so high, in fact, that a safety driver's inattention could not decrease by much in a self-driven car, and it may even exceed the likelihood of a mistake by Google's self-driven cars, which would of course depend on how often self-driven cars make mistakes. I don't know these statistics, but I assume you do. Could you incorporate them into your analysis?
Sincerely,
Kaben Nanlohy

The real safety benefit would come if all vehicles were computer-controlled. I'm willing to bet that a large number of accidents are caused either by driver distraction (including falling asleep) or by high-risk behaviors such as excessive speeding and dangerous maneuvers.

For those who have requested details, here's how I reached the estimates above. The data are 2009 annual; the sources are linked in the post itself.
Reported crashes: 5,505,000
Crashed vehicles: 9,534,000
Vehicles per crash: 1.732
Total crashes (reported and unreported): 10,800,000
Total crashed vehicles: 10,800,000*1.732 = 18,705,600 (note that this may be high if single-vehicle crashes are more likely to go unreported)
Vehicle miles traveled: 2.954 trillion
VMT per vehicle crashed: 2.954 trillion / 18,705,600 = 157,932
p-value: 0.01
n: -157,932*ln(0.01) = 727,302 miles (all crashes)
Fatal crashes: 30,979
Vehicles in fatal crashes: 45,435
VMT per fatal vehicle crashed: 65,015,957
p-value: 0.01
n: 299,409,546 miles (fatal crashes)
As the post notes, these calculations necessarily involve all sorts of assumptions, which are certainly susceptible to critique and refinement. And there may be other ways of demonstrating safety. The key point is that we're dealing with really big (or, depending on your perspective, small) numbers.
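For anyone who wants to check the arithmetic programmatically, here is a short, illustrative Python sketch that walks through the same chain of figures (the variable names and rounding are mine):

```python
import math

# 2009 annual figures from the sources linked in the post.
reported_crashes = 5_505_000
crashed_vehicles_in_reported_crashes = 9_534_000
total_crashes = 10_800_000                     # reported and unreported
vmt = 2.954e12                                 # vehicle miles traveled
fatal_crashes = 30_979                         # listed for reference
vehicles_in_fatal_crashes = 45_435

vehicles_per_crash = crashed_vehicles_in_reported_crashes / reported_crashes  # ~1.732
total_crashed_vehicles = total_crashes * vehicles_per_crash                   # ~18.7 million
miles_per_crashed_vehicle = vmt / total_crashed_vehicles                      # ~158,000
miles_per_fatal_crashed_vehicle = vmt / vehicles_in_fatal_crashes             # ~65 million

p_value = 0.01
print(round(-miles_per_crashed_vehicle * math.log(p_value)))        # ~727,000 miles (all crashes)
print(round(-miles_per_fatal_crashed_vehicle * math.log(p_value)))  # ~299 million miles (fatal crashes)
```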
