The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state-of-the-art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons?
The Convention on Certain Conventional Weapons (CCW) at the UN has just concluded a second round of meetings on lethal autonomous weapons systems in Geneva, under the auspices of what is known as a Group of Governmental Experts. Both the urgency and significance of the discussions in that forum have been heightened by the rising concerns over artificial intelligence (AI) arms races and the increasing use of digital technologies to subvert democratic processes.
The term “hacking” has come to signify breaking into a computer system. A number of local, national, and international laws seek to hold hackers accountable for breaking into computer systems to steal information or disrupt their operation. Other laws and standards incentivize private firms to use best practices in securing computers against attack.
Download the paper here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2376209
The Scored Society: Due Process for Automated Predictions
Danielle Keats Citron
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project; Stanford Law School Center for Internet and Society
Frank A. Pasquale III
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project
January 7, 2014
"“In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights,” said Sonia Katyal, co-director of the Berkeley Center for Law and Technology. “Questions about privacy, speech, the right of assembly and technological construction of personhood will all reemerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all.”"
"While panelists acknowledged valid consumer protection concerns, they generally counseled against overly-cautious regulation and were instead in favor of allowing AI room to experiment and grow before turning to burdensome regulation. Ryan Calo, Associate Professor at the University of Washington School of Law, cited California’s new
""An algorithm could've given us Dred Scott or Korematsu," said Ryan Calo, a law professor at the University of Washington, referring to a pair of Supreme Court decisions now considered morally wrong. But it would not know, decades later, that it had misjudged.
In this way, a mechanical judge would be extremely conservative, Calo said, interpreting the law’s text without considering any outside factors at all."
"To Mr McLaughlin, targeting people by values such as “equality” or “tradition” is fine, but profiling their emotional state is not. As AI improves, he believes campaigns should steer clear of any technology that makes decisions that are unexplainable. “We do not want to unilaterally surrender capabilities to the right — nor do we want to behave as though the ends justify the means,” he says."
"The keynote speaker was Danielle Citron, Morton and Sophia Macht Professor of Law at the University of Maryland’s Francis King Carey School of Law. Citron addressed the rise of “deep fakes,” sophisticated fake audio and video that can be easily produced by people with access to the technology.
Citron warned the audience that the democratization of this technology could have devastating effects on the political process. She discussed the possibility of a fabricated video that incriminates or embarrasses a political candidate surfacing the night before an election.
"It isn't just our interactions with other humans that could be affected. “I worry a lot about how we’re building this world that’s supposed to be for convenience, comfort, and speed, but in fact makes us feel like someone is always listening, whether they are or not,” says Ryan Calo, a professor of law at the University of Washington who has studied the impacts of anthropomorphic robots on society.
"“There should be a whole gradation of how this [software] should work,” Daphne Keller, the director of the Stanford Center for Internet and Society (and mother of two), told Quartz. “We should be able to choose something in between, that is a good balance [between safety and surveillance], rather than forcing kids to divulge all their data without any control.”"
""We're seeing growing interest in applying computing things like machine learning, deep learning, artificial intelligence, etc. (think of IBM Watson stuff) to cybersecurity issues both in 'real time' and on a more strategic basis to try and identify trends and vulnerabilities before they become actual incidents," Richard Forno, assistant director of the Center for Cybersecurity and the director of the Cybersecurity Graduate Program at the University of Maryland, Baltimore County, tells CNBC Make It."
"Peter Asaro, vice chairman of the International Committee for Robot Arms Control, said this week that Google's backing off from the project was good news because it slows down a potential AI arms race over autonomous weapons systems. What's more, letting the contract expire was fundamental to Google's business model, which relies on gathering mass amounts of user data, he said.
"They're a company that's very much aware of their image in the public conscious," he said. "They want people to trust them and trust them with their data.""
"“While Google’s statement rejects building AI systems for information gathering and surveillance that violates internationally accepted norms, we are concerned about this qualification,” said Peter Asaro, a professor at The New School and one of the authors of an open letter that calls on Google to cancel its Maven contract.
The ongoing development and ever-increasing sophistication of artificial intelligence (AI) is giving rise to some fundamental ethical questions: Will machine-made decisions always be transparent and stay within human-defined parameters? To what extent can users retain control over intelligent algorithms? Is it possible to imbue self-learning systems with a sense of morality? And who decides what moral values these systems should follow anyway?
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
In the summer of 1956, several key figures in what would become known as the field of "artificial intelligence" met at Dartmouth College to brainstorm about the future of the synthetic mind. Artificial intelligence, broadly defined, has since become a part of everyday life.
"Law professor Ryan Calo, co-director of the University of Washington's Tech Policy Lab, says AI-based monitoring of social media may follow a predictable pattern for how new technologies gradually work their way into law enforcement.
"The way it would happen would be we would take something that everybody agrees is terrible — something like suicide, which is epidemic, something like child pornography, something like terrorism — so these early things, and then if they show promise in these sectors, we broaden them to more and more things. And that's a concern.""
FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City.
Peter Asaro (assistant professor in the School of Media Studies at The New School) and S. Matthew Liao (director of the Center for Bioethics at New York University) talk to Live Science's Denise Chow and Space.com's Tariq Malik about the ethics of AI.
Recently, when Google announced its own version of Amazon's voice-recognition digital home assistant, the company did not spend a moment addressing privacy safeguards or concerns.
As Wall Street Journal tech reporter Geoffrey Fowler tweeted: "So just to review, Google says it wants to install microphones all over your house. And didn't talk about privacy."
Listen to the full interview at Marketplace Tech.
"Calo recently signed an open letter that detailed his and others’ concerns over AI’s rapid progress. The letter was published by the Future of Life Institute, a research organization studying the potential risks posed by AI. The letter has since been endorsed by scientists, CEOs, researchers, students and professors connected to the tech world.
Listen to the full interview with Ryan Calo at BBC The Inquiry.
Billions of dollars are pouring into the latest investor craze: artificial intelligence. But serious scientists like Stephen Hawking have warned that full AI could spell the end of the human race. How seriously should we take the warnings that ever-smarter computers could turn on us? Our expert witnesses explain the threat, the opportunities and how we might avoid being turned into paperclips.
View the full video at Huffpost Live.
Famed physicist Stephen Hawking warns that while success in creating artificial intelligence would be the biggest event in human history, it may also be our last. What can we do to prepare ourselves now before it's too late?
Hosted by: Alyona Minkovski