Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan's work has appeared in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. Ryan serves on several advisory committees, including those of the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
Judge Richard Posner took the occasion of the Boston bombing to remind us of his view that privacy should lose out to other values. Privacy, argues Judge Posner, is largely about concealing truths “that, if known, would make it more difficult for us to achieve our personal goals.” For instance: privacy helps the victims of domestic violence achieve their personal goal of living free from fear; it helps the elderly achieve their personal goal of staying off of marketing “sucker lists;” and it helps children achieve their personal goal of avoiding sexual predators online.
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today's gamers.
I’ve blogged on these pages before about the claim, popularized by Larry Lessig, that “code is law.” During the Concurring Opinions symposium on Jonathan Zittrain’s 2010 book The Future of The Internet (And How To Stop It), I cataloged the senses in which architecture or “code” is said to constitute a form of regulation. “Primary” architecture refers to altering a physical or digital environment to stop conduct before it happens. Speed bumps are a classic example. “Secondary” architecture instead alters an environment in order to make conduct harder to get away with—for instance, by installing a traffic light camera or forcing a communications network to build an entry point for law enforcement.
I have yet to sit down and read Evgeny Morozov’s new book, To Save Everything, Click Here: The Folly of Technological Solutionism. I certainly found his last book very thought provoking. But I did get a chance to read an op-ed Morozov recently wrote in the Wall Street Journal with the provocative title “Is Smart Making Us Dumb?” The piece draws a distinction between mobile and other devices that are “good smart” and ones that are “bad smart.” Good smart devices “leave us in complete control of the situation and seek to enhance our decision-making by providing more information.” Morozov offers the example of a teapot that relays the state of the energy grid. Bad smart devices, by contrast, “make certain choices and behaviors impossible,” a theme Lawrence Lessig, Jonathan Zittrain, and others famously develop under the rubric of “code.”
I wrote a new essay entitled “Code, Nudge, or Notice?” that might interest CIS readers. The essay compares side-by-side three ways that the government tries to influence citizen behavior short of making it illegal. It uses contemporary examples, like the graphic warnings the FDA wants to put on cigarettes, to make the point that it is sometimes hard to sort regulations into neat categories like “architecture,” “libertarian paternalism,” or “mandatory disclosure” (code, nudge, or notice). Instead, I argue that regulators should focus on the more fundamental difference between helping people and hindering them. Along the way, I make the point that all of forensics may be a kind of “code” that turns an ordinary location into a crime scene—sort of like putting a traffic camera up at an intersection only after someone runs the red light. Thoughts warmly welcome. Here is the abstract:
NO: It Is the Way to Kill Innovation
By Ryan Calo
The year is 1910. Orville and Wilbur Wright are testing their plane and happen to fly hundreds of feet over a stretch of land you own. Could you sue them?
Technically, you could. In 1910, your property rights extended ad coelum et ad inferos—up to heaven and down to hell. Anyone who flew over your property without permission was trespassing.
I am a law professor who writes about robotics. I’m also a big Paolo Bacigalupi fan, particularly his breakout novel The Windup Girl involving an artificial girl. So for me, “Mika Model” was not entirely new territory. For all my familiarity with its themes, however, Bacigalupi’s story revealed an important connection in robotics law that had never before occurred to me.
“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at the University of Washington. Calo spoke last week at the first of four workshops the White House is hosting this summer to examine how to address an increasingly AI-powered world.
Bryant Walker Smith from the University of South Carolina proposed regulatory flexibility for rapidly evolving technologies, such as driverless cars. “Individual companies should make a public case for the safety of their autonomous vehicles,” he said. “They should establish measures and then monitor them over the lifetime of their systems. We need a diversity of approaches to inform public debate.”
UW Law Professor Ryan Calo says to imagine you’ve been placed on a no-fly list.
“It’s not as though there’s some dossier that you could look at and see exactly what’s going on. It’s the result of artificial intelligence in that sense, combing through lots of information and spitting out a likelihood that you’re a problem,” he said. “How do you appeal that? What recourse do you have?”
In relation to the role of government in AI, Ryan Calo, assistant law professor at the UW and faculty director of the Tech Policy Lab, and one of the speakers, suggests that the government isn’t trying to control the use of AI, but realizes its technological significance.
“The White House realizes that people must channel resources to research AI and to remain globally competitive,” Calo said.
A future where ROSS, or similar robot lawyers, is used across the country might not be too far away, according to Ryan Calo, a law professor and writer who focuses on the intersection of technology and law. “The use of complex software in the practice of law is commonplace — for instance, in managing discovery,” said Calo. “Watson is a tool — in law or medicine or another context — to assist professionals in making judgments. Eventually, I bet not using these systems will come to be viewed as antiquated and even irresponsible, like writing a brief on a typewriter.”
U.S. Sen. John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, will convene a hearing on Wednesday, November 16, 2016, at 3:00 p.m. entitled “Exploring Augmented Reality.” The hearing will examine the emergence, benefits, and implications of augmented reality technologies. Unlike virtual reality, which creates a wholly simulated environment, augmented reality attempts to superimpose images and visual data on the physical world in an intuitive way.
• Mr. Brian Blau, Research Vice President, Gartner
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
CIS Affiliate Scholar Ryan Calo will be part of a panel titled "Understanding the Implications of Open Data".
How can open data promote trust in government without creating a transparent citizenry?
Simon Jack reports from Seattle on robots at work. From the Boeing factory where robots make planes to a clothes shop where a robot helps him buy a new pair of jeans. Plus Ryan Calo, professor of law at the University of Washington, grapples with the question of who to blame when robots go wrong, and whether there is such a thing as robot rights.
There are a million ways people might use drones in the future, from deliveries and police work to journalism. But in this episode, we’re going to talk about consumer drones — something that you or I might use for ourselves. What does the world look like when everybody with a smartphone also has a drone?
“We don’t need to get to this crazy world in which robots are trying to take over in order for there to be really difficult, interesting, complex legal questions,” says Ryan Calo, professor of law at the University of Washington. “That’s happening right now.”
Here’s a sample:
“How do we make sure these drones are not recording things that they shouldn’t,” Calo says, “and those things aren’t winding up ... on Amazon servers, or somehow getting out to the public or to law enforcement?”