
Reversing Turing

By Brett Frischmann

There is ample evidence that digital technologies are being designed and deployed not only to surveil us and nudge us toward certain consumer preferences, but also to train us to act like predictable machines. In the absence of an established framework for assessing these effects, we need a new test of humanity lost.

PHILADELPHIA – A few years ago, my (Brett’s) inbox was flooded with Facebook notifications indicating that friends had wished me a happy birthday. Annoyed, I logged in and changed my birthday to a random date in mid-summer. Six months later, my email inbox was flooded again. At first, I didn’t think much of it. Most of those Facebook friends didn’t know my actual birthday and were responding automatically to a prompt. But there were also two messages from close relatives who knew my real birthday. Their responses resembled the others, raising the question of whether automation bias had trumped their human judgment. Was this just a single mistake, or was it indicative of something larger and more pernicious?

Consider another example. A person encounters an online user agreement, sees a little button that says “I agree,” and clicks it without thinking. This is a common experience. People don’t read terms of service, privacy policies, or other forms of electronic boilerplate, and rarely do they stop to think about the parties with whom they’re forming legally binding relationships online. But this click-to-contract interface (and the underlying legal rules that permit companies to use it) should be recognized for what it is: a system that engineers humans to behave robotically, and arguably without common sense.

Both examples are cases of techno-social engineering. As this practice grows in scale and scope, becoming more pervasive and intimate through the ubiquitous embedding of networked sensors in public and private spaces (including our devices, our clothing, and even ourselves), it raises urgent ethical and political questions. By leading people to behave robotically, are our technologies fundamentally dehumanizing?

This question is as old as technology itself. In ancient Greece, Socrates worried that writing would destroy human memory and fundamentally alter who we are. During the Industrial Revolution, critics worried about assembly-line workers being treated like cogs in a machine. Media scholars later warned that television would turn people into unthinking vegetables. Like Karl Marx in the nineteenth century, critics in the twenty-first century worry about technological dehumanization spreading from the Amazon warehouse to the Uber driver’s vehicle and on to what Shoshana Zuboff of Harvard Business School describes as a commodification of all life itself.

Read the full piece at Project Syndicate.
