Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan's work has appeared in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. Ryan serves on several advisory committees, including the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
Over Christmas, I received a Series 530 Roomba, the robotic vacuum cleaner from iRobot. It cleans the floor really well. But that is all it does. This year at the Consumer Electronics Show, iRobot revealed the prototype AVA. It is, essentially, an open robotic platform. Think of it as an iPad with a body. It has no dedicated purpose and, importantly, it has an API and will run software made by third-party developers.
Yes, apps for robots. This is a wonderful development, one that I predicted in a forthcoming essay in Maryland Law Review. As iRobot founder Colin Angle points out, "If you think of the thousands of apps out there: Which iPad apps would be more cool if they moved?" More importantly, would you not be more inclined to buy a personal robot that came with thousands of apps, with more on the way?
UPDATE: The New York Times published most of the rest of my comments on Bits Blog. Thanks!
I was quoted in a cover story in today's New York Times as saying, essentially, that law enforcement was "just trying to do their job" in pushing for greater subpoena power. This particular remark was an aside, made if anything to soften the impression that I was overly critical of the government. For instance, I lamented that consumers do not understand the state of electronic privacy law and spoke about the dangers of dragnet or otherwise excessive surveillance. (Presumably I am one of the unnamed "[e]lectronic privacy and civil rights advocates" who worry that "because the WikiLeaks court order gained such widespread attention, it could have a chilling effect on people’s speech on the Internet.")
I did not mean to imply that we should not push back against government and in fact praised Google and Twitter for having done so. I did offer that the government's purpose in pushing for greater surveillance power was not to erode civil liberties for its own sake, but in order to protect Americans by detecting and punishing crimes. But the gist of my remarks was that we need more protection, not less. Some of my talking points appear below for context.
Affiliate scholar Marvin Ammori offers eight good reasons why the United States should not prosecute Wikileaks founder Julian Assange. I mostly agree with Ammori’s analysis and write to emphasize one point: an Assange trial, regardless of outcome, would help the government gloss over one of the worst security breaches in modern history. And the First Amendment could supply this distraction’s brightest fireworks.
The website WikiLeaks recently published hundreds of thousands of confidential State Department cables. These communications apparently reveal the details of conversations with, and personal impressions and assessments of, foreign leaders and diplomats. Many fear that the leak will undermine international relations in profound and unknowable ways. One of the unintended consequences of the leak, however, may be to strengthen the case for a national consumer privacy law.
UPDATE: As told to Jules Polonetsky over at The Future of Privacy Forum, Capital One was engaging in "totally random" rate changes that were not related to browser type. On the other hand, according to the Wall Street Journal, Capital One was at one point using [x+1] data to calibrate what credit card offers to show.
The other day, I suggested that the facts of the Clementi suicide may perfectly illustrate why no actual transfer of information is necessary for someone to suffer a severe subjective privacy harm. (Thanks to TechDirt and PogoWasRight for the write-ups.)
Just now I learned about an allegation against Capital One that the company offered someone a different lending rate on the basis of what browser he used (Chrome vs. Firefox). A similar allegation was made against Amazon, which apparently used cookies for a time to calibrate the price of DVDs.
Here you have a clear objective privacy harm: your information (browser type) is being used adversely in a tangible and unexpected way. It matters not at all whether a human being sees the information or whether a company knows "who you are." Neither personally identifying information, nor the revelation of information to a person, is necessary for there to be a privacy harm.
NO: It Is the Way to Kill Innovation
By Ryan Calo
The year is 1910. Orville and Wilbur Wright are testing their plane and happen to fly hundreds of feet over a stretch of land you own. Could you sue them?
Technically, you could. In 1910, your property rights extended ad coelum et ad inferos—up to heaven and down to hell. Anyone who flew over your property without permission was trespassing.
I am a law professor who writes about robotics. I’m also a big Paolo Bacigalupi fan, particularly his breakout novel The Windup Girl involving an artificial girl. So for me, “Mika Model” was not entirely new territory. For all my familiarity with its themes, however, Bacigalupi’s story revealed an important connection in robotics law that had never before occurred to me.
In relation to the role of government in AI, Ryan Calo, assistant law professor at the UW and faculty director of the Tech Policy Lab, and one of the speakers, suggests that the government isn’t trying to control the use of AI, but realizes its technological significance.
“The White House realizes that people must channel resources to research AI and to remain globally competitive,” Calo said.
A future where ROSS, or similar robot lawyers, is used across the country might not be too far away, according to Ryan Calo, a law professor and writer who focuses on the intersection of technology and law. “The use of complex software in the practice of law is commonplace — for instance, in managing discovery,” said Calo. “Watson is a tool — in law or medicine or another context — to assist professionals in making judgments. Eventually, I bet not using these systems will come to be viewed as antiquated and even irresponsible, like writing a brief on a typewriter.”
All of which raises the question: is this that big a deal? “We need to figure out what kind of danger drones actually pose,” says Ryan Calo, who specializes in law as it applies to robotics, at the University of Washington. “Is it enough to spend millions of dollars protecting against them at every airport?”
Ryan Calo, a law professor at the University of Washington, thinks that though this isn’t the first effort Google has made to curb what it deems dangerous advertising (even within the financial sector), it’s a substantial one that will have an effect for both consumers and payday lenders. “It’s one thing to have a bunch of lawmakers take a stand. It’s quite another to have the main search engine not carry ads,” Calo says. “It has a signaling function.”
"If you're looking for an economically efficient way to deliver packages, you'd be better off using a bicycle," said Ryan Calo, an assistant law professor at the University of Washington specializing in robotics.
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
CIS Affiliate Scholar Ryan Calo will be part of a panel titled "Understanding the Implications of Open Data".
How can open data promote trust in government without creating a transparent citizenry?
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog will all be participating in this two-day conference.
Registration is open for We Robot 2015 and we have a great program planned:
Friday, April 10
Registration and Breakfast
Welcome Remarks: Dean Kellye Testy, University of Washington School of Law
Introductory Remarks: Ryan Calo, Program Committee Chair
Simon Jack reports from Seattle on robots at work, from the Boeing factory where robots make planes to a clothes shop where a robot helps him buy a new pair of jeans. Plus Ryan Calo, professor of law at the University of Washington, grapples with the question of who to blame when robots go wrong, and whether there is such a thing as robot rights.
There are a million ways people might use drones in the future, from deliveries and police work to journalism. But in this episode, we’re going to talk about consumer drones — something that you or I might use for ourselves. What does the world look like when everybody with a smartphone also has a drone?
“We don’t need to get to this crazy world in which robots are trying to take over in order for there to be really difficult, interesting, complex legal questions,” says Ryan Calo, professor of law at the University of Washington. “That’s happening right now.”
Here’s a sample:
“How do we make sure these drones are not recording things that they shouldn’t," Calo says, "and those things aren’t winding up ... on Amazon servers, or somehow getting out to the public or to law enforcement?"