Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan's work has appeared in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. Ryan serves on several advisory committees, including those of the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
Over Christmas, I received a 530 series Roomba, the robotic vacuum cleaner from iRobot. It cleans the floor really well. But that is all it does. This year at the Consumer Electronics Show, iRobot revealed its prototype AVA. It is, essentially, an open robotic platform. Think of it as an iPad with a body. It has no dedicated purpose and, importantly, it has an API and will run software made by third-party developers.
Yes, apps for robots. This is a wonderful development, one that I predicted in a forthcoming essay in Maryland Law Review. As iRobot founder Colin Angle points out, "If you think of the thousands of apps out there: Which iPad apps would be more cool if they moved?" More importantly, would you not be more inclined to buy a personal robot that came with thousands of programs, with more on the way?
UPDATE: The New York Times published most of the rest of my comments on Bits Blog. Thanks!
I was quoted in a cover story in today's New York Times as saying, essentially, that law enforcement was "just trying to do their job" in pushing for greater subpoena power. This particular remark was an aside, made if anything to soften the impression that I was overly critical of the government. For instance, I lamented that consumers do not understand the state of electronic privacy law and spoke about the dangers of dragnet or otherwise excessive surveillance. (Presumably I am one of the unnamed "[e]lectronic privacy and civil rights advocates" who worry that "because the WikiLeaks court order gained such widespread attention, it could have a chilling effect on people’s speech on the Internet.")
I did not mean to imply that we should not push back against government and in fact praised Google and Twitter for having done so. I did offer that the government's purpose in pushing for greater surveillance power was not to erode civil liberties for its own sake, but in order to protect Americans by detecting and punishing crimes. But the gist of my remarks was that we need more protection, not less. Some of my talking points appear below for context.
Affiliate scholar Marvin Ammori offers eight good reasons why the United States should not prosecute WikiLeaks founder Julian Assange. I mostly agree with Ammori’s analysis and write to emphasize one point: an Assange trial, regardless of outcome, would help the government gloss over one of the worst security breaches in modern history. And the First Amendment could supply this distraction’s brightest fireworks.
The website WikiLeaks recently published hundreds of thousands of confidential State Department cables. These communications apparently reveal the details of conversations with, and personal impressions and assessments of, foreign leaders and diplomats. Many fear that the leak will undermine international relations in profound and unknowable ways. One of the unintended consequences of the leak, however, may be to strengthen the case for a national consumer privacy law.
UPDATE: As told to Jules Polonetsky over at The Future of Privacy Forum, Capital One was engaging in "totally random" rate changes that were not related to browser type. On the other hand, according to the Wall Street Journal, Capital One was at one point using [x+1] data to calibrate what credit card offers to show.
The other day, I suggested that the facts of the Clementi suicide may perfectly illustrate why no actual transfer of information is necessary for someone to suffer a severe subjective privacy harm. (Thanks to TechDirt and PogoWasRight for the write-ups.)
Just now I learned about an allegation against Capital One that the company offered someone a different lending rate on the basis of what browser he used (Chrome vs. Firefox). A similar allegation was made against Amazon, which apparently used cookies for a time to calibrate the price of DVDs.
Here you have a clear objective privacy harm: your information (browser type) is being used adversely in a tangible and unexpected way. It matters not at all whether a human being sees the information or whether a company knows "who you are." Neither personally identifying information, nor the revelation of information to a person, is necessary for there to be a privacy harm.
NO: It Is the Way to Kill Innovation
By Ryan Calo
The year is 1910. Orville and Wilbur Wright are testing their plane and happen to fly hundreds of feet over a stretch of land you own. Could you sue them?
Technically, you could. In 1910, your property rights extended ad coelum et ad inferos—up to heaven and down to hell. Anyone who flew over your property without permission was trespassing.
I am a law professor who writes about robotics. I’m also a big Paolo Bacigalupi fan, particularly his breakout novel The Windup Girl involving an artificial girl. So for me, “Mika Model” was not entirely new territory. For all my familiarity with its themes, however, Bacigalupi’s story revealed an important connection in robotics law that had never before occurred to me.
“It being animate all of a sudden for some reason feels too invasive,” said Ryan Calo, a law professor at the University of Washington. “If [Ma] were to gain commercially in almost any way from this, and even arguably the notoriety he has gained from this, Scarlett Johansson could almost certainly sue him.”
It’s sure to be a heady good time. Panel titles include “Legal Personhood For Robots,” “The Ethical Characteristics of Autonomous Robots,” and the drenched-in-wordplay “Siriously?”
The “Three Laws of Robotics,” which Isaac Asimov dreamt up for his Robot series, remains an entirely fictional concept. In the real world — which is now full of robots — there are very few statutes regarding the behavior of automatons.
"So what now? It was unfortunate that the chat bot was deployed under the Microsoft brand name, with Tay’s Twitter responses seeming to come from Tay, not learned from anyone else, says Ryan Calo, a law professor at the University of Washington who studies AI policy. In the future, he proposes, maybe we’ll have a mechanism for labeling so that the process of where Tay is pulling responses from is more transparent."
The legal system has been wrestling with what robots can and can’t do for longer than you might think. A new paper by Ryan Calo, a law professor at the University of Washington, paints a surprisingly colorful picture of this history, which Calo dates back to a 1947 plane crash involving an Army fighter plane on autopilot.
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
CIS Affiliate Scholar Ryan Calo will be part of a panel titled "Understanding the Implications of Open Data".
How can open data promote trust in government without creating a transparent citizenry?
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog will all be participating in this two-day conference.
Registration is open for We Robot 2015 and we have a great program planned:
Friday, April 10
Registration and Breakfast
Welcome Remarks: Dean Kellye Testy, University of Washington School of Law
Introductory Remarks: Ryan Calo, Program Committee Chair
Simon Jack reports from Seattle on robots at work. From the Boeing factory where robots make planes to a clothes shop where a robot helps him buy a new pair of jeans. Plus Ryan Calo, professor of law at the University of Washington, grapples with the question of who to blame when robots go wrong, and whether there is such a thing as robot rights.
There are a million ways people might use drones in the future, from deliveries and police work to journalism. But in this episode, we’re going to talk about consumer drones — something that you or I might use for ourselves. What does the world look like when everybody with a smart phone also has a drone?
“We don’t need to get to this crazy world in which robots are trying to take over in order for there to be really difficult, interesting, complex legal questions,” says Ryan Calo, professor of law at the University of Washington. “That’s happening right now.”
Here’s a sample:
“How do we make sure these drones are not recording things that they shouldn’t," Calo says, "and those things aren’t winding up ... on Amazon servers, or somehow getting out to the public or to law enforcement?"