The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
Download the paper here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2376209
The Scored Society: Due Process for Automated Predictions
Danielle Keats Citron
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project; Stanford Law School Center for Internet and Society
Frank A. Pasquale III
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project
January 7, 2014
“Right now these systems are either not doing what they’re supposed to be doing or they’re doing things in ways that allow the companies that are selling them to hide behind trade secrets, so what I would say … is that California should not deploy these systems in any aspect of government until it really feels like it understands what the system does, and that the system is amenable to the kinds of guarantees and processes and procedures that we have made formally to our citizens,” Calo said.
Microsoft is working on some of these areas through groups such as the Partnership on AI, which includes rivals like Amazon.com Inc., Alphabet Inc.’s Google, Apple Inc. and Facebook Inc. Still, the call for more regulation in an emerging area like AI is unusual for technology companies, said Ryan Calo, a professor at the University of Washington School of Law, who has read the book.
"2017, perhaps, was a watershed year, and I predict that in the next year or two the issue is only going to continue to increase in importance," said Arvind Narayanan, an assistant professor of computer science at Princeton and data privacy expert. "What has changed is the realization that these aren't specific exceptions of racial and gender bias. It's almost definitional that machine learning is going to pick up and perhaps amplify existing human biases. The issues are inescapable."
Maybe that makes sense from the perspective of pure logic. But Ryan Calo, an expert in robotics and cyber law at the University of Washington in Seattle, says our laws are unlikely to bend that far. “Our legal system reflects our basic biology,” he says. If we one day invent some sort of artificial person, “it would break everything about the law, as we understand it today.”
But it may be more difficult for tech firms to justify scanning conversations in other situations, said Ryan Calo, a University of Washington law professor who writes about tech.
“Once you open the door, you might wonder what other kinds of things we would be looking for,” Calo said.
“We are just at the beginning,” said Dr. Peter Asaro, a philosopher of science, technology, and media at the New School and co-founder of the International Committee for Robot Arms Control. “These ethical issues in society are going to have to be worked out. Where do we want machines? How are we going to manage the consequences of automation in different sectors?”
"It's hard to say who is responsible. As a casual user you have no idea how these things are built," said Peter Asaro, an assistant professor at The New School in New York and an AI philosopher.
And as algorithms become more complex, even their creators may no longer understand how they work or what they produce.
"The accountability will be what they do about it when something bad happens," Asaro said.
Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, says that regulating new technologies is always a delicate balancing act.
Civil liberty groups have warned against the inherent dangers of prejudiced AI in the legal system before. Law enforcement agencies now use tools to predict future crime, but several pressure groups argue that the system starts from a flawed and prejudiced base.
“It’s polluted data producing polluted results,” said Malkia Cyril, executive director of the Center for Media Justice.
Her fictional scenario fits right into issues tackled by the burgeoning field of robot law, according to University of Washington law professor Ryan Calo. “There’s a physical, biological set of understandings that permeate the Constitution,” he said. For example, we give every person a vote, and we give every person the right to reproduce. But what if an AI can reproduce 10 million versions of itself every second? Do we give all of them a vote? And what if a robot wants to run for president? Does it have to wait 35 years, even if it is born with adult-level consciousness?
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City.
Peter Asaro (assistant professor in the School of Media Studies at The New School) and S. Matthew Liao (director of the Center for Bioethics at New York University) talk to Live Science's Denise Chow and Space.com's Tariq Malik about the ethics of AI.
Recently, when Google announced its own version of Amazon's voice-recognition digital home assistant, the company did not spend a moment addressing privacy safeguards or concerns.
As Wall Street Journal tech reporter Geoffrey Fowler tweeted: "So just to review, Google says it wants to install microphones all over your house. And didn't talk about privacy."
Listen to the full interview at Marketplace Tech.
Calo recently signed an open letter that detailed his and others’ concerns over AI’s rapid progress. The letter was published by the Future of Life Institute, a research organization studying the potential risks posed by AI. The letter has since been endorsed by scientists, CEOs, researchers, students and professors connected to the tech world.
Listen to the full interview with Ryan Calo at BBC The Inquiry.
Billions of dollars are pouring into the latest investor craze: artificial intelligence. But serious scientists like Stephen Hawking have warned that full AI could spell the end of the human race. How seriously should we take the warnings that ever-smarter computers could turn on us? Our expert witnesses explain the threat, the opportunities and how we might avoid being turned into paperclips.
View the full video at Huffpost Live.
Famed physicist Stephen Hawking warns that while success in creating artificial intelligence would be the biggest event in human history, it may also be our last. What can we do to prepare ourselves now before it's too late?
Hosted by: Alyona Minkovski
CIS Affiliate Scholar David Levine interviews Eran Kahana of the Maslon law firm, on artificial intelligence.
In the summer of 1956, several key figures in what would become known as the field of "artificial intelligence" met at Dartmouth College to brainstorm about the future of the synthetic mind. Artificial intelligence, broadly defined, has since become a part of everyday life. Although we are still waiting on promises of "strong AI" capable of approximating human thought, the widespread use of artificial intelligence has the potential to reshape medicine, finance, war, and other important aspects of society.