Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan has appeared in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. He serves on the advisory boards of several organizations, including the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
UPDATE: Facebook explains the security procedure here. Apparently they only use photos if you have not set up another verification means. Also, I have confirmation that the photo identification is not being done for a secondary purpose.
I recently tried to sign in to Facebook from a coffee shop. I was told that I had to pass a security screening because of the "strange location." Fair enough. The actual test, however, was surprising: a multiple-choice exam in which I had to identify who was in a given picture.
A couple of things. First, some of the pictures were embarrassing. I doubt the person who uploaded them thought they would be used to screen for improper access. Think about it. Facebook is showing random private photos to people because it suspects they may not be the account holder. The photos must be private because they form the basis of a security screening.
I attended a fascinating thesis defense today on the subject of human-robot interaction by Stanford PhD candidate Victoria Groom. HRI experiments apparently tend to focus on human encounters with robots; few studies test the psychology behind robot operation. Groom’s work explores how we feel about the tasks we perform through robots. One of the more interesting questions she and her colleagues ask is: to what extent do we feel like it’s really us performing the task? The question is important where, as in the military, people work through robots to carry out morally charged tasks. And the answer might have repercussions for how we think about evaluation and punishment.
I started a new blog about robotics programming and scholarship at Stanford Law School. Some of us here believe that robotics is a transformative technology on par with the Internet. (We're not alone: the "roadmap for U.S. robotics" prepared for Congress by a coalition of robotics labs and research institutes is called "From Internet to Robotics.") I've said it before and I'll say it again: the age of Internet exceptionalism is over. We can now do "digital" things in the real world. The chief importance (and danger) of the Internet is the imaginative possibilities it opens up. Robotics is how we will prove the slogan Chris Anderson came up with in a slightly different context: "Atoms are the new bits." Please stay tuned.
Thanks to Elaine Adolfo for the image.
The response to WhatApp.org has been wonderful, thanks! We now have over 20 registered and approved experts from a wide variety of sectors, including privacy compliance, law, and computer science. Many (many) people have signed up, left comments, edited wikis, or suggested apps to review for privacy, security, and openness. (We're going to run out of apps to review, so please do "add an app" if you get a chance!) If you have comments or questions, please email firstname.lastname@example.org. It's a work in progress and we need your help. Thanks again, especially to the Rose Foundation for its generous support.
Over the last year, the FBI has had harsh words for Apple, accusing the tech giant of endangering human lives and aiding criminals by turning on encryption by default on the iPhone. When Google announced it would add the feature to Android, meaning that smartphone users would need to unlock their phones for police to be able to go through them, government officials and law enforcement representatives similarly freaked out.
Privacy law scholars tend to be skeptical of markets. Markets “unravel” privacy by penalizing consumers who prefer it, degrade privacy by treating it as just another commodity to be traded, and otherwise interfere with the values or processes that privacy exists to preserve.
An ethicist says that now is the time to ponder the enigmatic questions of cyber law: “Robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument,” says Ryan Calo, a professor at the University of Washington School of Law. If, in the future, a demonstrably sentient machine claims the right that humans have to procreate, or to build copies of itself, who can say nay?
Ryan Calo, an assistant professor of law at the University of Washington Law School and co-director of the school’s Tech Policy Lab, has some ideas about that. He has written papers about the government’s glaring lack of experience in evaluating new robotic technologies. Robotic surgery technology is here to stay, and it is going to get more advanced. How will regulators respond?
Ryan Calo, a law professor at the University of Washington who specializes in issues concerning robotics, noted that Sony's focus on service-oriented drone flight was an odd move for a company better known for its consumer-facing products. “I'm a little surprised by the business model,” Calo said. “Commercial drones are not as interesting as they could be.”
A legal expert has warned that the laws that govern robotics are playing catch-up to the technology and need to be updated in case robots 'wake up' and demand rights.
He also argues that artificial intelligence has come of age, and that we should begin tackling these problems before they arise, as robots increasingly blur the line between person and machine.
“The law tends to assume that people intend what they do, or at least are able to foresee the consequences of what they do,” said Ryan Calo, an assistant professor at the University of Washington School of Law, in an exclusive interview with R&D Magazine.
The prospect of systems making decisions that no person foresees could result in injury with no responsible perpetrator. “And that’s the concern,” he said.
U.S. Sen. John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, will convene a hearing on Wednesday, November 16, 2016, at 3:00 p.m. entitled “Exploring Augmented Reality.” The hearing will examine the emergence, benefits, and implications of augmented reality technologies. Unlike virtual reality, which creates a wholly simulated environment, augmented reality superimposes images and visual data on the physical world in an intuitive way.
• Mr. Brian Blau, Research Vice President, Gartner
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
CIS Affiliate Scholar Ryan Calo will be part of a panel titled "Understanding the Implications of Open Data".
How can open data promote trust in government without creating a transparent citizenry?
Simon Jack reports from Seattle on robots at work. From the Boeing factory where robots make planes to a clothes shop where a robot helps him buy a new pair of jeans. Plus Ryan Calo, professor of law at the University of Washington, grapples with the question of who to blame when robots go wrong, and whether there is such a thing as robot rights.
There are a million ways people might use drones in the future, from deliveries and police work to journalism. But in this episode, we’re going to talk about consumer drones — something that you or I might use for ourselves. What does the world look like when everybody with a smart phone also has a drone?
“We don’t need to get to this crazy world in which robots are trying to take over in order for there to be really difficult, interesting, complex legal questions,” says Ryan Calo, professor of law at the University of Washington. “That’s happening right now.”
Here’s a sample:
“How do we make sure these drones are not recording things that they shouldn’t," Calo says, "and those things aren’t winding up ... on Amazon servers, or somehow getting out to the public or to law enforcement?"