Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan's work has appeared in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. Ryan serves on several advisory committees, including those of the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
Is it lawful for a car to drive itself? In the absence of any law to the contrary, it may well be. A new bill working its way through the Nevada state legislature would remove any doubt in that state. A.B. 511 directs the Nevada Department of Transportation to authorize autonomous vehicle testing in certain geographic areas of Nevada. Should vehicles meet Nevada DOT standards, they would be permitted to "operate on a highway." The bill defines not only "autonomous vehicle" but "artificial intelligence" as well. AI is "the use of computers and related equipment to enable a machine to duplicate or mimic the behavior of human beings." An autonomous vehicle uses "artificial intelligence, sensors, and [GPS] coordinates to drive itself." To be clear: autonomous vehicles are not yet the law of the land in Nevada. The bill must pass through two committees and receive a hearing before it can be voted on and become law. Some preliminary thoughts on the bill in its present form follow.
Requiring notice is an extraordinarily popular way to regulate. In online privacy, for instance, giving notice about their practices is among the only affirmative obligations websites face. The strategy is also one of the most heavily criticized. Not only does no one read privacy policies, skeptics rightly point out, but many people believe that their mere existence guarantees certain base-level protections that may or may not exist.
Should we give up on notice? My recent draft paper argues: maybe not. We should explore two possibilities, at any rate, before we do. The first is that regulators may sometimes select the wrong form of notice for the job. Today most website "terms" say that the company "may disclose data pursuant to lawful requests." That does very little to further user understanding or action. But perhaps a different form of notice could.
As an alternative, I argue for a concept I've been calling "visceral" privacy notice. Rather than tell people at length what your privacy practices may be, you show them what they really are. Facebook took a step in this direction today, joining Google and Yahoo! in what I hope will be an emerging best practice.
The intuition that privacy and innovation are somehow opposed is surprisingly common. It is true that overzealous or reactionary appeals to privacy can cut off interesting ventures. (For instance, some believe Steamtunnels would have evolved into a social network in 1999 were it not shut down by Stanford University due to privacy and copyright concerns.) But privacy generally supports innovation, and vice versa.
Privacy law scholars tend to be skeptical of markets. Markets “unravel” privacy by penalizing consumers who prefer it, degrade privacy by treating it as just another commodity to be traded, and otherwise interfere with the values or processes that privacy exists to preserve.
In a recent whitepaper, Brookings Institution senior fellow Benjamin Wittes and law student Jodie Liu turn the standard privacy argument on its head: as they see it, many supposed threats to our privacy actually benefit it.
The Federal Aviation Administration announced its proposal this morning for the rules that should govern small unmanned aerial systems, meaning drones 55 pounds or lighter. We do not know how long it will take for the rules to go into effect. When they do, the new rules will permit vastly more drone use in the United States, bringing us more closely into line with other countries where drones can be commercially operated today.
We are not ready for driverless cars because our public officials lack the expertise to evaluate the safety of this new class of automobiles.
“[Social networks] can censor more or less anything they want, and they also have incredible abilities to leave up as much as they want to leave up,” said Ryan Calo, professor of law at the University of Washington and co-director of the school’s Tech Policy Lab.
Shouting down web-based terrorist recruiting cells, that’s a good thing, said Ryan Calo, professor of law at the University of Washington and co-director of the Tech Policy Lab.
Porn on Twitter, maybe not such a good thing, he said. It could be offensive to religious Muslims (or Christians or Jews), the overwhelming majority of whom are not terrorists and want nothing to do with sexually explicit images.
“Chatbots may be able to get us to say more about ourselves than an ordinary website,” says Ryan Calo, codirector of the Tech Policy Lab at the University of Washington.
Even the stereo, which was affected by Lexus's update, can create an unsafe situation, robotics law expert Ryan Calo told the Monitor. For example, buggy software might cause the radio to blare suddenly, startling the driver and causing an accident.
Tesla recently introduced a software update to control the whole vehicle, Dr. Calo tells the Monitor, although he says Lexus' update is technically not critical to safety.
The result, he says, is that "The line between control-critical and entertainment systems is not perfectly clean."
“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at the University of Washington. Calo spoke last week at the first of four workshops the White House is hosting this summer to examine how to address an increasingly AI-powered world.
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
CIS Affiliate Scholar Ryan Calo will be part of a panel titled "Understanding the Implications of Open Data".
How can open data promote trust in government without creating a transparent citizenry?
CIS Affiliate Scholars Peter Asaro, Ryan Calo and Woodrow Hartzog will all be participating in this two-day conference.
Registration is open for We Robot 2015 and we have a great program planned:
Friday, April 10
Registration and Breakfast
Welcome Remarks: Dean Kellye Testy, University of Washington School of Law
Introductory Remarks: Ryan Calo, Program Committee Chair
What will Amazon’s drone highway in the sky look like?
Probably not a drone highway. Amazon unveiled a proposal in which low-level airspace would be carved out for drones: 200 to 400 feet would be reserved for high-speed transit drones. Below that, there would be space for low-speed local drone traffic, and above it a no-fly buffer zone to keep drones out of manned-vehicle airspace, aka flight paths.
Robots have been used in factories around the world for decades, often carrying out dangerous or highly repetitive operations. However, the city of Dongguan, China, has become home to the first fully automated factory, where the workforce is made up entirely of robots. Changying Precision Technology will employ only a small number of human staff, who will monitor operations of the machinery, but all processes are completed by robotic equipment.
Is this a sign of things to come? Newsday spoke to Ryan Calo, a professor with the University of Washington Tech Policy Lab.
CIS Affiliate Scholar Ryan Calo appears in the Good Morning America segment "Popularity of Drones Raises Privacy Concerns"; many people have reported drones with cameras invading their privacy.
Ryan Calo, assistant professor of law at the University of Washington and an affiliate scholar at the Stanford Center for Internet and Society, talks about testing Google’s driverless cars.
Listen to the full show at Marketplace Tech.