Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan has been featured in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. He serves on several advisory boards, including those of the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
UPDATE: As told to Jules Polonetsky over at The Future of Privacy Forum, Capital One was engaging in "totally random" rate changes that were not related to browser type. On the other hand, according to the Wall Street Journal, Capital One was at one point using [x+1] data to calibrate what credit card offers to show.
The other day, I suggested that the facts of the Clementi suicide may perfectly illustrate why no actual transfer of information is necessary for someone to suffer a severe subjective privacy harm. (Thanks to TechDirt and PogoWasRight for the write ups.)
Just now I learned about an allegation against Capital One that the company offered someone a different lending rate on the basis of what browser he used (Chrome vs. Firefox). A similar allegation was made against Amazon, which apparently used cookies for a time to calibrate the price of DVDs.
Here you have a clear objective privacy harm: your information (browser type) is being used adversely in a tangible and unexpected way. It matters not at all whether a human being sees the information or whether a company knows "who you are." Neither personally identifying information, nor the revelation of information to a person, is necessary for there to be a privacy harm. Read more » about Browser Snobbery As Objective Privacy Harm (UPDATE)
Ann Bartow once criticized Daniel Solove for not providing enough “dead bodies” in his discussion of privacy. I tend to disagree that such proof is necessary. But privacy has seen a dead body recently—that of Rutgers University student Tyler Clementi.
The narrative around Clementi’s tragic suicide continues to shift. The press originally reported that Clementi killed himself after his roommate invited the entire campus to view footage of Clementi having sex with another man. The Associated Press is now reporting that, according to the roommate’s defense attorney, no one but he and his friend ever saw the video.
The question of whether the defendants recorded or broadcast the web cam is highly relevant to whether there has been a privacy violation. Yet it is hardly relevant at all to the question of whether there has been a privacy harm. Read more » about Clementi And The Nature Of Privacy Harm
I don’t know that generativity is a theory, strictly speaking. It’s more of a quality. (Specifically, five qualities.) The attendant theory, as I read it, is that technology exhibits these particular, highly desirable qualities as a function of specific incentives. These incentives are themselves susceptible to various forces—including, it turns out, consumer demand and citizen fear.
The law is in a position to influence this dynamic. Thus, for instance, Comcast might have a business incentive to slow down peer-to-peer traffic and only refrain due to FCC policy. Or, as Barbara van Schewick demonstrates inter alia in Internet Architecture and Innovation, a potential investor may lack the incentive to fund a start up if there is a risk that the product will be blocked.
Similarly, online platforms like Facebook or Yahoo! might not facilitate communication to the same degree in the absence of Section 230 immunity for fear that they will be held responsible for the thousand flowers they let bloom. I agree with Eric Goldman’s recent essay in this regard: it is no coincidence that the big Internet players generally hail from these United States. Read more » about Will Robots Be 'Generative'?
Prohibition wasn’t working. President Hoover assembled the Wickersham Commission to investigate why. The Commission concluded that despite an historic enforcement effort—including the police abuses that made the Wickersham Commission famous—the government could not stop everyone from drinking. Many people, especially in certain city neighborhoods, simply would not comply. The Commission stopped short of recommending repeal, but by 1931 repeal was just around the corner.
Five years later an American doctor working in a chemical plant made a startling discovery. Several workers began complaining that alcohol was making them sick, causing most to stop drinking it entirely—“involuntary abstainers,” as the doctor, E.E. Williams, later put it. It turns out they were in contact with a chemical called disulfiram used in the production of rubber. Disulfiram is well-tolerated and water-soluble. Today, it is marketed as the popular anti-alcoholism drug Antabuse.
Were disulfiram discovered just a few years earlier, would federal law enforcement have dumped it into key parts of the Chicago or Los Angeles water supply to stamp out drinking for good? Probably not. It simply would not have occurred to them. No one was regulating by architecture then. To dramatize this point: when New York City decided twenty years later to end a string of garbage can thefts by bolting the cans to the sidewalk, the decision made the front page of the New York Times. The headline read: “City Bolts Trash Baskets To Walks To End Long Wave Of Thefts.”
In an important but less discussed chapter in The Future of the Internet, Jonathan Zittrain explores our growing taste and capacity for “perfect enforcement.”
Readers are likely familiar with the cyberlaw mantra that “code is law.” What’s striking is that since Lawrence Lessig published Code in 1999, relatively little has been written about the dangers of regulation by architecture, particularly outside of the context of intellectual property. Many legal scholars—Neal Katyal, Elizabeth Joh, Edward Cheng—have instead argued for more regulation by architecture on the basis that it is less discriminatory or more effective. Read more » about (Im)Perfect Enforcement
My new paper explores what is unique about privacy harm. How does privacy harm differ from other injury? And what do we gain by defining its boundaries and core properties? You can download the paper here; abstract after the jump. Your thoughts warmly welcome. Read more » about The Boundaries of Privacy Harm
United States Senate Committee on the Judiciary
“The Future of Drones In America: Law Enforcement and Privacy Considerations”
March 20, 2013
Full PDF available on the Judiciary website.
WRITTEN STATEMENT OF RYAN CALO
UNIVERSITY OF WASHINGTON SCHOOL OF LAW Read more » about The Future of Drones In America: Law Enforcement and Privacy Considerations
“There will certainly be winners and losers,” said Ryan Calo, a professor of law at the University of Washington who focuses on robotics and public policy. “We’re talking about robots now because they are so versatile and affordable, and that will have profound effects on manufacturing, the entire supply chain and jobs.” Read more » about New robots in the workplace: Job creators or job terminators?
"What are drones but flying smartphones, one app away from indispensable? We could see drones accompanying early morning joggers, taking sport, wildlife, and other photography to a new level, or mapping out hard-to-reach geographic terrain." Read more » about Bad laws would hurt good drones
Across the country, law enforcement and first responders are flying unmanned aircraft to take aerial photographs of traffic accidents and crime scenes. As the technology improves and more police departments acquire permits to fly them, concerns about privacy and regulation increase. Read more » about Drones Come Home, Privacy Concerns Fly High
It's hard to find someone who can complain of his or her rights having been violated, because anyone whose rights have been violated doesn't know it. Read more » about The Catch-22 That Prevents Us From Truly Scrutinizing the Surveillance State
The question is, said Ryan Calo, assistant professor at the University of Washington School of Law and an organizer of an upcoming conference on robot law at Stanford Law School, “Now that this technology exists, what limits should we be placing on it, but also, what limits should we be placing on tort laws in order to encourage it?” Read more » about Should we put robots on trial?
Presented by the Center for Law and the Biosciences
Brain-computer interfaces are on the rise, but they may be vulnerable to hacking that reveals users' private information. Join us as Ryan Calo discusses the privacy risks of this emerging technology.
This event is free and open to the public, and will feature lunch from Net Appetit.
In celebration of National Robotics Week, the Silicon Valley Robot Block Party returns to the Volkswagen Automotive Innovation Lab @ Stanford on Wednesday, April 10 2013, from 1 to 6pm. Read more » about Robot Block Party 2013
The program committee for We Robot: Getting Down To Business invites you to join us for the second annual robotics and the law conference to take place April 8 and 9 at Stanford Law School. This year’s event is focused on the immediate commercial prospects of robotics and will include panels and papers on a wide variety of topics, including: Read more » about We Robot: Getting Down to Business
Technology Reporter Steven Henn leads a conversation on new innovations in face recognition technology and the legal & ethical challenges they raise with two leading privacy experts: University of Washington Law's Ryan Calo and Carnegie Mellon University's Alessandro Acquisti.
It is not hard to imagine why robots raise privacy concerns. Practically by definition, robots are equipped with the ability to sense, process, and record the world around them. Robots can go places humans cannot go, see things humans cannot see. Robots are, first and foremost, a human instrument. And after industrial manufacturing, the principal use to which we’ve put that instrument has been surveillance. Read more » about Robots, Privacy & Society
On April 10, 2013, Stanford's Center for Law and the Biosciences welcomed CIS Affiliate Scholar Ryan Calo to campus for a discussion on law and emerging technology, with an emphasis on spyware for your brain. Read more » about The Center for Law and the Biosciences presents Ryan Calo
Hearing before the Senate Committee on the Judiciary on “The Future of Drones in America: Law Enforcement and Privacy Considerations” Read more » about The Future of Drones in America: Law Enforcement and Privacy Considerations
CIS Affiliate Scholar Ryan Calo interviews Neal Stephenson, author of Reamde. Topics include privacy, virtual economics, and security. Beth Cantrell, Greg Lastowka, and Tadayoshi Kohno also participated in the panel interview. This event was hosted by the University of Washington Law School. Read more » about Open Book Club: A Conversation With Neal Stephenson
It is not hard to imagine why robots raise privacy concerns. Practically by definition, robots are equipped with the ability to sense, process, and record the world around them. Robots can go places humans cannot go, see things humans cannot see. Robots are, first and foremost, a human instrument. And after industrial manufacturing, the principal use to which we’ve put that instrument has been surveillance. This talk explores the various ways that robots implicate privacy and why, absent conscientious legal and design interventions, we may never realize the potential of this transformative technology. Read more » about Robots, Privacy & Society - Cal Poly