Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan's work has appeared in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. Ryan serves on the advisory boards of several organizations, including the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
Over Christmas, I received a 530 series Roomba, the robotic vacuum cleaner from iRobot. It cleans the floor really well. But that is all it does. This year at the Consumer Electronics Show, iRobot revealed its prototype AVA. It is, essentially, an open robotic platform. Think of it as an iPad with a body. It has no dedicated purpose and, importantly, it has an API and will run software made by third-party developers.
Yes, apps for robots. This is a wonderful development, one that I predicted in a forthcoming essay in the Maryland Law Review. As iRobot founder Colin Angle points out, "If you think of the thousands of apps out there: Which iPad apps would be more cool if they moved?" More importantly, would you not be more inclined to buy a personal robot that came with thousands of programs, with more on the way?
UPDATE: The New York Times published most of the rest of my comments on Bits Blog. Thanks!
I was quoted in a cover story in today's New York Times as saying, essentially, that law enforcement was "just trying to do their job" in pushing for greater subpoena power. This particular remark was an aside, made if anything to soften the impression that I was overly critical of the government. For instance, I lamented that consumers do not understand the state of electronic privacy law and spoke about the dangers of dragnet or otherwise excessive surveillance. (Presumably I am one of the unnamed "[e]lectronic privacy and civil rights advocates" who worry that "because the WikiLeaks court order gained such widespread attention, it could have a chilling effect on people’s speech on the Internet.")
I did not mean to imply that we should not push back against the government; in fact, I praised Google and Twitter for having done so. I did offer that the government's purpose in pushing for greater surveillance power was not to erode civil liberties for its own sake, but to protect Americans by detecting and punishing crimes. But the gist of my remarks was that we need more protection, not less. Some of my talking points appear below for context.
Affiliate scholar Marvin Ammori offers eight good reasons why the United States should not prosecute Wikileaks founder Julian Assange. I mostly agree with Ammori’s analysis and write to emphasize one point: an Assange trial, regardless of outcome, would help the government gloss over one of the worst security breaches in modern history. And the First Amendment could supply this distraction’s brightest fireworks.
The website Wikileaks recently published hundreds of thousands of confidential State Department cables. These communications apparently reveal the details of conversations with, and personal impressions and assessments of, foreign leaders and diplomats. Many fear that the leak will undermine international relations in profound and unknowable ways. One of the unintended consequences of the leak, however, may be to strengthen the case for a national consumer privacy law.
UPDATE: As told to Jules Polonetsky over at The Future of Privacy Forum, Capital One was engaging in "totally random" rate changes that were not related to browser type. On the other hand, according to the Wall Street Journal, Capital One was at one point using [x+1] data to calibrate what credit card offers to show.
The other day, I suggested that the facts of the Clementi suicide may perfectly illustrate why no actual transfer of information is necessary for someone to suffer a severe subjective privacy harm. (Thanks to TechDirt and PogoWasRight for the write ups.)
Just now I learned about an allegation against Capital One that the company offered someone a different lending rate on the basis of what browser he used (Chrome vs. Firefox). A similar allegation was made against Amazon, which apparently used cookies for a time to calibrate the price of DVDs.
Here you have a clear objective privacy harm: your information (browser type) is being used adversely in a tangible and unexpected way. It matters not at all whether a human being sees the information or whether a company knows "who you are." Neither personally identifying information, nor the revelation of information to a person, is necessary for there to be a privacy harm.
NO: It Is the Way to Kill Innovation
By Ryan Calo
The year is 1910. Orville and Wilbur Wright are testing their plane and happen to fly hundreds of feet over a stretch of land you own. Could you sue them?
Technically, you could. In 1910, your property rights extended ad coelum et ad inferos—up to heaven and down to hell. Anyone who flew over your property without permission was trespassing.
I am a law professor who writes about robotics. I’m also a big Paolo Bacigalupi fan, particularly of his breakout novel The Windup Girl, which involves an artificial girl. So for me, “Mika Model” was not entirely new territory. For all my familiarity with its themes, however, Bacigalupi’s story revealed an important connection in robotics law that had never before occurred to me.
Even the stereo, which was affected by Lexus's update, can create an unsafe situation, robotics law expert Ryan Calo told the Monitor. For example, buggy software might cause the radio to blare suddenly, startling the driver and causing an accident.

Tesla recently introduced a software update to control the whole vehicle, Dr. Calo told the Monitor, although he says Lexus's update is technically not critical to safety.

The result, he says, is that "the line between control-critical and entertainment systems is not perfectly clean."
“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at the University of Washington. Calo spoke last week at the first of four workshops the White House is hosting this summer to examine how to address an increasingly AI-powered world.
Bryant Walker Smith from the University of South Carolina proposed regulatory flexibility for rapidly evolving technologies, such as driverless cars. “Individual companies should make a public case for the safety of their autonomous vehicles,” he said. “They should establish measures and then monitor them over the lifetime of their systems. We need a diversity of approaches to inform public debate.”
UW Law Professor Ryan Calo says to imagine you’ve been placed on a no-fly list.
“It’s not as though there’s some dossier that you could look at and see exactly what’s going on. It’s the result of artificial intelligence in that sense, combing through lots of information and spitting out a likelihood that you’re a problem,” he said. “How do you appeal that? What recourse do you have?”
In relation to the role of government in AI, Ryan Calo, assistant law professor at the UW and faculty director of the Tech Policy Lab, and one of the speakers, suggests that the government isn’t trying to control the use of AI, but realizes its technological significance.
“The White House realizes that people must channel resources to research AI and to remain globally competitive,” Calo said.
U.S. Sen. John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, will convene a hearing on Wednesday, November 16, 2016, at 3:00 p.m. entitled “Exploring Augmented Reality.” The hearing will examine the emergence, benefits, and implications of augmented reality technologies. Unlike virtual reality, which creates a wholly simulated environment, augmented reality attempts to superimpose images and visual data on the physical world in an intuitive way.
• Mr. Brian Blau, Research Vice President, Gartner
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
Facebook is still reeling from the revelation that a British firm, Cambridge Analytica, improperly used millions of its users’ data. #DeleteFacebook is trending and those in the tech world are closely watching how users react to the news.
Can the tech giant turn over a new leaf? What data are we willing to give up for the convenience of platforms? And would paying for services like Facebook solve the problem?
Nobody likes to wait in line. So today, Amazon removed that unpleasantness from the neighborhood grocery store. At Amazon Go, you walk in, pick up your groceries and walk out.
There are no checkout lines or scanners and almost no employees, just sensors and cameras. But what is that convenience going to cost you? We talk with Geekwire’s Todd Bishop and University of Washington law professor and privacy expert Ryan Calo.
Listen to the full interview at KUOW 94.9
Simon Jack reports from Seattle on robots at work. From the Boeing factory where robots make planes to a clothes shop where a robot helps him buy a new pair of jeans. Plus Ryan Calo, professor of law at the University of Washington, grapples with the question of who to blame when robots go wrong, and whether there is such a thing as robot rights.