The Sorcerer's Apprentice, Or: Why Weak AI Is Interesting Enough

Not many people in the legal academy study artificial intelligence or robotics. One fellow enthusiast, Kenneth Anderson at American University, posed a provocative question over at the Volokh Conspiracy yesterday: will the Nobel Prize in Literature ever go to a software engineer who writes a program that writes a novel?

What I like about Ken’s question is its basic plausibility. Software has already composed original music and helped invent a new type of toothbrush. It executes the majority of stock trades. Software could one day write a book. A focus on the achievable is also what I find compelling about Larry Solum’s exploration of whether an AI might serve as the trustee of a trust and Ian Kerr’s discussion of the effects of software agents on commerce.

I commute between San Francisco and Stanford and, to pass the time, I listen to the occasional audiobook. A few weeks ago I finished Daniel Wilson’s Robopocalypse, slated to become a Steven Spielberg movie in 2013. The book was entertaining. It was also technically quite specific: Wilson is a roboticist with a PhD from Carnegie Mellon, and that background lends a certain realism to his doomsday scenario. Many of the robots he describes exist in prototype, and some of the ethical issues flow straight from the contemporary human-robot interaction literature.

But like most scary robot stories, Wilson’s depiction of a robot revolution helps itself to a fanciful key ingredient: a sentient machine. The villain of Robopocalypse is a self-aware computer program called Archos that, in what must be a nod to Milo from Microsoft’s Project Natal, presents itself as a soft-spoken little boy. This psychotic artificial toddler decides it would be a good idea to prune the human race by a few billion and sets about coordinating a massive robot assault.

Strong AI, meaning general intelligence of the sort we might expect from a conscious being, is a common feature of movies involving robots, killer or otherwise. Think Terminator or 2001: A Space Odyssey. But machine sentience, let alone malice toward people, is not plausible in anything like the short run. A friend in robotics at the University of Sydney described the state of the art this way: we have been doing AI since at least the 1950s, when the term was coined at Dartmouth College. Sixty years later, robots are about as smart as insects.

In a lovely essay, Northwestern’s John McGinnis acknowledges the hurdles we would have to overcome to achieve strong AI. One is vastly increased computational power. I agree with McGinnis that gains of this sort are likely in light of the unchecked exponential growth we have seen to date. The second, however, is software capable of leveraging that computational power into a form of intelligence. Here I think the case is thin. Time will tell, of course, and I should note that AI is but one of the technologies McGinnis examines in what promises to be a fascinating book, Accelerating Democracy.

Weak or "narrow" AI, in contrast, is a present-day reality. Software controls many facets of daily life and, in some cases, this control presents real issues. One example is the May 2010 "flash crash" that caused a temporary but enormous dip in the market. A subsequent report on the crash placed much of the blame on high-frequency trading algorithms. Danielle Keats Citron has written about the problematic role of autonomous software programs deployed by the government.
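
To see why this worries people, consider a toy sketch: when many automated traders follow the same naive momentum rule, a small dip can feed on itself. The Python below is purely illustrative; the rule, the parameters, and the market model are all invented for this sketch, not drawn from the flash-crash report.

```python
# Toy illustration (not the actual flash-crash mechanism): many bots
# running the same naive rule -- "sell when the price just fell" --
# can turn a small external dip into a self-reinforcing slide.

def momentum_bot(prices, threshold=0.01):
    """Sell if the last tick dropped more than `threshold`, else hold."""
    if len(prices) < 2:
        return "hold"
    change = (prices[-1] - prices[-2]) / prices[-2]
    return "sell" if change < -threshold else "hold"

def simulate(initial_price=100.0, n_bots=500, impact=0.0001, ticks=20):
    # Start with a small external dip that trips the bots' threshold.
    prices = [initial_price, initial_price * 0.98]
    for _ in range(ticks):
        sellers = sum(momentum_bot(prices) == "sell" for _ in range(n_bots))
        # Every sell order nudges the price down, which triggers
        # more selling on the next tick: a feedback loop.
        prices.append(prices[-1] * (1 - impact * sellers))
    return prices

if __name__ == "__main__":
    for tick, price in enumerate(simulate()):
        print(f"tick {tick:2d}: {price:7.2f}")
```

Because every bot reacts to the same signal, the dip never gets a chance to recover. The point is not realism but how little "intelligence" it takes for autonomous software to do real damage.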

One of my favorite fictional treatments of AI’s potential impact on society is Daemon, a recent novel by Daniel Suarez. Suarez’s vision is of a series of relatively simple software programs set into motion by a game designer and able to act on the world. Suarez is a more gifted writer than Wilson, in my view, but the book’s real appeal comes from the fact that almost everything in the narrative could happen today. And, importantly, the book’s villain is a really clever person, one who uses software to manipulate and harm others. The result is eye-opening, and the implications for law and society are arguably immediate.

I would recommend any of these works. I am also happy to report that Solum, McGinnis, Kerr, and an AI expert are coming to Stanford Law School this October to discuss AI and the law on a panel. We hope to record the panel and post it on the Center for Internet and Society’s website. But in my view, our first priority should be thinking through the negative ramifications of the many computer programs already capable of acting upon the world. Worrying that robots will become self-aware and hurt people feels a little like worrying that mops and brooms will become enchanted and ruin the sorcerer’s house.

Cross-posted from Concurring Opinions.
