Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan has had his work featured in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. Ryan serves on the advisory boards of several organizations, including the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
Judge Richard Posner took the occasion of the Boston bombing to remind us of his view that privacy should lose out to other values. Privacy, argues Judge Posner, is largely about concealing truths “that, if known, would make it more difficult for us to achieve our personal goals.” For instance: privacy helps the victims of domestic violence achieve their personal goal of living free from fear; it helps the elderly achieve their personal goal of staying off marketing “sucker lists”; and it helps children achieve their personal goal of avoiding sexual predators online.
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today's gamers.
I’ve blogged on these pages before about the claim, popularized by Larry Lessig, that “code is law.” During the Concurring Opinions symposium on Jonathan Zittrain’s 2010 book The Future of The Internet (And How To Stop It), I cataloged the senses in which architecture or “code” is said to constitute a form of regulation. “Primary” architecture refers to altering a physical or digital environment to stop conduct before it happens. Speed bumps are a classic example. “Secondary” architecture instead alters an environment in order to make conduct harder to get away with—for instance, by installing a traffic light camera or forcing a communications network to build an entry point for law enforcement.
I have yet to sit down and read Evgeny Morozov’s new book, To Save Everything, Click Here: The Folly of Technological Solutionism. I certainly found his last book very thought-provoking. But I did get a chance to read an op-ed Morozov recently wrote in the Wall Street Journal with the provocative title “Is Smart Making Us Dumb?” The piece draws a distinction between mobile and other devices that are “good smart” and ones that are “bad smart.” Good smart devices “leave us in complete control of the situation and seek to enhance our decision-making by providing more information.” Morozov offers the example of a teapot that relays the state of the energy grid. Bad smart devices, by contrast, “make certain choices and behaviors impossible”—a theme Lawrence Lessig, Jonathan Zittrain, and others famously develop under the rubric of “code.”
I wrote a new essay entitled “Code, Nudge, or Notice?” that might interest CIS readers. The essay compares side-by-side three ways that the government tries to influence citizen behavior short of making it illegal. It uses contemporary examples, like the graphic warnings the FDA wants to put on cigarettes, to make the point that it is sometimes hard to sort regulations into neat categories like “architecture,” “libertarian paternalism,” or “mandatory disclosure” (code, nudge, or notice). Instead, I argue that regulators should focus on the more fundamental difference between helping people and hindering them. Along the way, I make the point that all of forensics may be a kind of “code” that turns an ordinary location into a crime scene—sort of like putting a traffic camera up at an intersection only after someone runs the red light. Thoughts warmly welcome. Here is the abstract:
The term “hacking” has come to signify breaking into a computer system. A number of local, national, and international laws seek to hold hackers accountable for breaking into computer systems to steal information or disrupt their operation. Other laws and standards incentivize private firms to use best practices in securing computers against attack.
“Something goes wrong, but there’s no perpetrator,” said Ryan Calo, a professor at the University of Washington School of Law who focuses on the intersection of tort law and technology, “because nobody intended this behavior.”
Beyond that, the kinds of content Zuckerberg focused on in the hearings were images and videos. From what we know about Facebook’s automated system, at its core, it’s a search mechanism across a shared database of hashes. If a video of a beheading goes up that has previously been identified as terrorist content in the database — by Facebook or one of its partners — it’ll be automatically recognized and taken down.
“If I were Facebook, I would be quite nervous about popular sentiment,” University of Washington law professor Ryan Calo said.
U.S. Sen. John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, will convene a hearing on Wednesday, November 16, 2016, at 3:00 p.m. entitled “Exploring Augmented Reality.” The hearing will examine the emergence, benefits, and implications of augmented reality technologies. Unlike virtual reality, which creates a wholly simulated environment, augmented reality attempts to superimpose images and visual data on the physical world in an intuitive way.
• Mr. Brian Blau, Research Vice President, Gartner
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
Facebook is still reeling from the revelation that a British firm, Cambridge Analytica, improperly used millions of its users’ data. #DeleteFacebook is trending and those in the tech world are closely watching how users react to the news.
Can the tech giant turn over a new leaf? What data are we willing to give up for the convenience of platforms? And would paying for services like Facebook solve the problem?
Nobody likes to wait in line. So today, Amazon removed that unpleasantness from the neighborhood grocery store. At Amazon Go, you walk in, pick up your groceries and walk out.
There are no checkout lines or scanners and almost no employees, just sensors and cameras. But what is that convenience going to cost you? We talk with Geekwire’s Todd Bishop and University of Washington law professor and privacy expert Ryan Calo.
Listen to the full interview at KUOW 94.9
Simon Jack reports from Seattle on robots at work. From the Boeing factory where robots make planes to a clothes shop where a robot helps him buy a new pair of jeans. Plus Ryan Calo, professor of law at the University of Washington, grapples with the question of who to blame when robots go wrong, and whether there is such a thing as robot rights.