The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
Download the paper here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2376209
The Scored Society: Due Process for Automated Predictions
Danielle Keats Citron
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project; Stanford Law School Center for Internet and Society
Frank A. Pasquale III
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project
January 7, 2014
"Civil liberty groups have warned against the inherent dangers of prejudiced AI in the legal system before. Law enforcement agencies now use tools to predict future crime, but several pressure groups argue that the system starts from a flawed and prejudiced base.
“It’s polluted data producing polluted results,” said Malkia Cyril, executive director of the Center for Media Justice."
"Her fictional scenario fits right into issues tackled by the burgeoning field of robot law, according to University of Washington law professor Ryan Calo. “There’s a physical, biological set of understandings that permeate the Constitution,” he said. For example, we give every person a vote, and we give every person the right to reproduce. But what if an AI can reproduce 10 million versions of itself every second? Do we give all of them a vote? And what if a robot wants to run for president? Does it have to wait 35 years, even if it is born with adult-level consciousness?"
"Ryan Calo, a law professor at the University of Washington, says that we tend to talk about robots as if they are a future technology, ignoring the fact that we have already been living with them for several decades. “If you want to envisage the future in the 1920s, 1940s, 1980s, or in 2017, then you think of robots. But the reality is that robots have been in our societies since the 1950s,” he says."
"It’s less clear how such measures might help government officials and regulators grappling with the effects of smarter software in areas like privacy. “I’m not sure how useful it’ll be,” says Ryan Calo, a law professor at the University of Washington who recently proposed a detailed roadmap of AI policy issues. He argues that decisionmakers need a high-level grasp of the underlying technology, and a strong sense of values, more than granular measures of progress."
"As it stands, AIs in the US cannot be awarded copyright for something they have created. The current policy of the US Copyright Office is to reject claims made for works not authored by humans, but the policy is poorly codified. According to Annemarie Bridy, a professor of law at the University of Idaho and an affiliate scholar at Stanford University’s Center for Internet and Society, there’s no actual requirement for human authorship in the US Copyright Act. Nevertheless, the “courts have always assumed that authorship is a human phenomenon,” she says."
"“There is no possible way to have some omnibus AI law,” says Ryan Calo, a professor of law and co-director of the Tech Policy Lab at the University of Washington. “But rather we want to look at the ways in which human experience is being reshaped and start to ask what law and policy assumptions are broken.”"
"Musk has spoken out before about AI end times; in 2014 he likened working on the technology to “summoning the demon.” His propensity for raising sci-fi scenarios comes despite his being very directly exposed to some of the near-term questions raised by artificial intelligence. “It’s always interesting hearing Elon Musk talk about AI killing us when a person died in a car he built that was self-driving,” says Ryan Calo, who works on policy issues related to robotics at the University of Washington."
"Arvind Narayanan, assistant professor of computer science at Princeton, said, “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”"
"“To have 100 people [at Carnegie Mellon] and to take 40 is rather dramatic,” said University of Washington law professor Ryan Calo, who co-authored a paper on the pitfalls of the sharing economy, over the phone. “I do think that in many contexts Uber has been somewhat predatory, so it doesn't surprise me. Uber seems to live larger than life in this way.”"
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City.
Peter Asaro (assistant professor in the School of Media Studies at The New School) and S. Matthew Liao (director of the Center for Bioethics at New York University) talk to Live Science's Denise Chow and Space.com's Tariq Malik about the ethics of AI.
Recently, when Google announced its own version of Amazon's voice-recognition digital home assistant, the company did not spend a moment addressing privacy safeguards or concerns.
As Wall Street Journal tech reporter Geoffrey Fowler tweeted: "So just to review, Google says it wants to install microphones all over your house. And didn't talk about privacy."
Listen to the full interview at Marketplace Tech.
"Calo recently signed an open letter that detailed his and others’ concerns over AI’s rapid progress. The letter was published by the Future of Life Institute, a research organization studying the potential risks posed by AI. The letter has since been endorsed by scientists, CEOs, researchers, students and professors connected to the tech world."
Listen to the full interview with Ryan Calo at BBC The Inquiry.
Billions of dollars are pouring into the latest investor craze: artificial intelligence. But serious scientists like Stephen Hawking have warned that full AI could spell the end of the human race. How seriously should we take the warnings that ever-smarter computers could turn on us? Our expert witnesses explain the threat, the opportunities and how we might avoid being turned into paperclips.
View the full video at Huffpost Live.
Famed physicist Stephen Hawking warns that while success in creating artificial intelligence would be the biggest event in human history, it may also be our last. What can we do to prepare ourselves now before it's too late?
Hosted by: Alyona Minkovski
CIS Affiliate Scholar David Levine interviews Eran Kahana of the Maslon law firm, on artificial intelligence.
In the summer of 1956, several key figures in what would become known as the field of "artificial intelligence" met at Dartmouth College to brainstorm about the future of the synthetic mind. Artificial intelligence, broadly defined, has since become a part of everyday life. Although we are still waiting on promises of "strong AI" capable of approximating human thought, the widespread use of artificial intelligence has the potential to reshape medicine, finance, war, and other important aspects of society.