Stanford CIS

My Other Car Is a ... Robot? Defining Vehicle Automation

By Bryant Walker Smith

The automobile, noted one scholar in 1907, “is variously referred to as [an] auto, autocar, car, machine, motor, motor car, and other terms equally as common but neither complimentary nor endearing.” Motorists, for their part, included “brutes,” “fat-headed marauders,” “honking highwaymen,” and “flippant fool[s]” who wrote themselves “down both a devil and an ass.” One hopes the horseless carriages of the future will earn monikers that are more flattering. In the meantime, we are left with assorted technical phrases like “electronic blind spot assistance, crash avoidance, emergency braking, parking assistance, adaptive cruise control, lane keep assistance, lane departure warnings and traffic jam and queuing assistance” to describe cars that (already) help us drive them, and with competing terms like fully automated, fully autonomous, self-driving, driverless, autopiloted, and robotic to describe cars that (may someday) drive us.

This article discusses efforts by researchers and regulators to systematically define, divide, and denote this growing spectrum of automotive automation. I first identify and compare approaches, and I then propose one of my own. Any approach may be overtaken by technological change or overcome by popular usage. Nonetheless, these efforts (and their coordination) are important, because effective legal, technical, and commercial communication depends in part on language that is clear and consistent. (The alternative is something that may or may not be called organic.)

The current status of these efforts varies. The National Highway Traffic Safety Administration (NHTSA) has yet to determine its approach to these definitional issues. SAE International’s On-Road Autonomous Vehicle Standards Committee (on which I serve) is working on definitions. The International Organization for Standardization (ISO) has published standards for adaptive cruise control (ACC), traffic impediment warning systems, and functional safety and is developing a lanekeeping standard. A project group organized by Germany's Federal Highway Research Institute (BASt) on the Legal Consequences of an Increase in Vehicle Automation recently published a significant report that includes definitions for levels of automation. The state of Nevada has defined “autonomous vehicle” by legislation and regulation. And a federal working group sponsored by the National Institute for Standards and Technology (NIST) has developed a more general approach for defining “Autonomy Levels for Unmanned Systems (ALFUS).”

How do these approaches compare? As an initial matter, although both “automation” (the dominant term in Europe) and “autonomy” (the dominant term in the United States) are frequently used to mean computer control, these words have subtly different definitions. Automation describes the replacement of human labor through technology; “automated driving” is therefore driving performed by a computer. In contrast, autonomy describes a system’s independence from external control; “autonomous driving” is therefore driving that a vehicle performs on its own, without outside direction. Without careful identification of the system and its boundaries, this term is unclear. After all, unlike today’s largely isolated driver-vehicle pairs, tomorrow’s motor vehicles might be tightly coordinated with each other as well as with elements of our physical and digital infrastructure—a concept that Dr. Steven E. Shladover argues is the opposite of autonomy.

Whatever it is called, computer control is often described as a continuum. On the left is purely human-controlled driving bereft of even basic assistance like antilock brakes and electronic stability control—in other words, your father’s father’s Oldsmobile. On the right is purely computer-controlled driving, for which definitions abound:

Between these two extremes, driving is shared by human and machine. This apportionment may be consecutive (if the technology is restricted to certain trip segments such as freeways) or concurrent (if the technology is restricted to certain subtasks such as lanekeeping). Several analytic frameworks divide this portion of the spectrum into multiple categories. BASt’s project group, for example, defines three intermediate levels:

Other approaches add additional variables. Dr. Shladover treats cooperation (his X-axis) as orthogonal to automation (his Y-axis). And ALFUS considers the degree of “human independence” to be only one of three elements that define “contextual autonomous capability”; the other two are the complexity of the mission given to the unmanned system and the complexity of the environment in which the system undertakes the mission.

My preliminary approach to describing vehicle automation combines several of these insights into two key parameters: domain and decision.

“Domain” identifies the environments for which a particular technology is intended. As Figure 1 shows, it encompasses two key variables: operating speed and traffic complexity (including automated motor vehicles only, a mixture of automated and conventional motor vehicles, and a mixture of pedestrians, bicyclists, and motor vehicles). It also indicates any particular road or weather conditions (such as construction, snow, and sunrise) for which the technology is not intended.

Figure 1, domain parameter, displays speed on the Y-axis and traffic complexity on the X-axis. The following examples are shown on the graph: self-parking, mining and agriculture, Audi traffic jam assist, platoons, adaptive cruise control, and obstacle avoidance.
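To make the domain parameter concrete, here is a minimal sketch of how it might be modeled as a data structure. The class and field names (and the example values for ACC) are illustrative assumptions of mine, not part of the article or any standard:

```python
from dataclasses import dataclass
from enum import Enum

class TrafficMix(Enum):
    """Traffic complexity axis of Figure 1, from simplest to most complex."""
    AUTOMATED_ONLY = "automated motor vehicles only"
    MIXED_VEHICLES = "automated and conventional motor vehicles"
    MIXED_ALL = "pedestrians, bicyclists, and motor vehicles"

@dataclass
class Domain:
    """Environments for which a particular technology is intended."""
    max_speed_kph: float             # operating speed (Figure 1's Y-axis)
    traffic: TrafficMix              # traffic complexity (Figure 1's X-axis)
    excluded_conditions: tuple = ()  # road/weather conditions the technology
                                     # is NOT intended for, e.g. ("snow",)

# Hypothetical domain for an adaptive cruise control system: highway
# speeds, mixed automated/conventional traffic, not intended for snow.
acc_domain = Domain(130.0, TrafficMix.MIXED_VEHICLES, ("snow",))
print(acc_domain.traffic.value)  # -> "automated and conventional motor vehicles"
```

A richer model could bound speed from below as well (some freeway-only systems disengage at low speeds), but the two axes plus an exclusion list capture the parameter as described.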

“Decision” describes the extent of computer control within those domains. Driving requires numerous decisions regarding position, path, route, and trip, among others. Consider a typical drive: We select our destinations and their order (trip), the roads we take to reach those destinations (route), the lanes we use as well as the turns and merges we make onto them (path), and our speed and spacing within those lanes (position). As Figure 2 shows, each of these decisions may be made by a human, an onboard computer system, or an offboard computer system based on information provided (or perceived) by a human, an onboard computer system, or an offboard computer system. In this sense, the decision parameter encompasses both inputs (data) and outputs (instructions).

Figure 2, decision parameter, shows four triangle graphs, one for each of position, path, route, and trip. The corners of the triangles are labeled human (onboard), computer (onboard), and computer (offboard). The following examples are shown on each triangle: GPS, cruise control, ACC and lanekeeping, platoon, and Google test car.
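The decision parameter can be sketched the same way: four decision layers, each with an input side and an output side, each attributable to one of the three corners of Figure 2's triangles. Again, the names and the example assignment for "ACC and lanekeeping" are my own illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class Agent(Enum):
    """The three corners of each triangle in Figure 2."""
    HUMAN_ONBOARD = "human (onboard)"
    COMPUTER_ONBOARD = "computer (onboard)"
    COMPUTER_OFFBOARD = "computer (offboard)"

@dataclass
class Decision:
    """One driving decision: who supplies the input, who issues the output."""
    input_by: Agent   # information provided (or perceived)
    output_by: Agent  # instruction issued

@dataclass
class DecisionProfile:
    """The four decision layers identified in the article."""
    position: Decision  # speed and spacing within a lane
    path: Decision      # lanes used; turns and merges made onto them
    route: Decision     # roads taken to reach the destinations
    trip: Decision      # destinations and their order

# Illustrative profile for ACC with lanekeeping: the onboard computer
# handles position and path; the human still chooses route and trip.
acc_lanekeeping = DecisionProfile(
    position=Decision(Agent.COMPUTER_ONBOARD, Agent.COMPUTER_ONBOARD),
    path=Decision(Agent.COMPUTER_ONBOARD, Agent.COMPUTER_ONBOARD),
    route=Decision(Agent.HUMAN_ONBOARD, Agent.HUMAN_ONBOARD),
    trip=Decision(Agent.HUMAN_ONBOARD, Agent.HUMAN_ONBOARD),
)
```

Separating `input_by` from `output_by` mirrors the article's point that the parameter covers both data and instructions: a platoon follower, for instance, might issue its own position outputs while taking its inputs from an offboard or lead-vehicle computer.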

These distinctions matter. A vehicle that relies solely on its own sensors is different from (though not necessarily superior or inferior to) one that relies on GPS signals, communication with other vehicles, or externally generated maps. Similarly, a vehicle that issues all of its own instructions is different from (though, again, not necessarily superior or inferior to) one that coordinates closely with other vehicles or follows remote commands. As with the conventional continuum, these splits are not absolute, and even a highly independent system capable of machine learning is still constrained by its code. Nonetheless, understanding the nature of computer control (rather than merely its extent) offers important insights about a particular technology’s risks and opportunities.

Photo by Audi USA.