The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state-of-the-art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons?
The Convention on Certain Conventional Weapons (CCW) at the UN has just concluded a second round of meetings on lethal autonomous weapons systems in Geneva, under the auspices of what is known as a Group of Governmental Experts. Both the urgency and significance of the discussions in that forum have been heightened by the rising concerns over artificial intelligence (AI) arms races and the increasing use of digital technologies to subvert democratic processes.
The term “hacking” has come to signify breaking into a computer system. A number of local, national, and international laws seek to hold hackers accountable for breaking into computer systems to steal information or disrupt their operation. Other laws and standards incentivize private firms to use best practices in securing computers against attack.
Download the paper here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2376209
The Scored Society: Due Process for Automated Predictions
Danielle Keats Citron
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project; Stanford Law School Center for Internet and Society
Frank A. Pasquale III
University of Maryland Francis King Carey School of Law; Yale University - Yale Information Society Project
January 7, 2014
OpenAI's decision to keep the AI to itself makes sense to Ryan Calo, a professor at the University of Washington and co-director of the school's Tech Policy Lab, especially in light of a fake face-generating website that began circulating in mid-February.
Ryan Calo, a law professor at the University of Washington, says it’s good to see the White House taking AI and its effects seriously. He also says it will take time to know whether the Trump administration is properly attending to the ethical and human rights questions raised by AI. “Are they aware enough of its social impacts and thinking about the effects on society and how to address the problems it creates?” Calo says. “That’s what we have to watch for.”
Ryan Calo, a cyber law expert at the University of Washington, told The Hill that Trump's executive order appears to incorporate some elements of former President Obama's AI plan.
Calo, the cyber law expert, told The Hill there needs to be an examination of "the way in which artificial intelligence can be biased, the way it can disproportionately harm vulnerable populations."
He said government needs to regulate how it procures AI technologies "so that we don’t 'unleash AI' on the world without thinking about its social impacts."
“2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable,” Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale, said in response to Pew’s question.
While employees and customers can pressure companies to act ethically with regard to AI, more attention needs to be focused on laws and government oversight, said Ryan Calo, a law professor at the University of Washington, who is on the board of AI Now and gets funding from Microsoft. Without broad regulation, if some companies refuse to sell the software, others will step in.
“In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights,” said Sonia Katyal, co-director of the Berkeley Center for Law and Technology. “Questions about privacy, speech, the right of assembly and technological construction of personhood will all reemerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all.”
While panelists acknowledged valid consumer protection concerns, they generally counseled against overly cautious regulation and were instead in favor of allowing AI room to experiment and grow before turning to burdensome regulation. Ryan Calo, Associate Professor at the University of Washington School of Law, cited California’s new
"An algorithm could've given us Dred Scott or Korematsu," said Ryan Calo, a law professor at the University of Washington, referring to a pair of Supreme Court decisions now considered morally wrong. But it would not know, decades later, that it had misjudged.
In this way, a mechanical judge would be extremely conservative, Calo said, interpreting the law’s text without considering any outside factors at all.
To Mr McLaughlin, targeting people by values such as “equality” or “tradition” is fine, but profiling their emotional state is not. As AI improves, he believes campaigns should steer clear of any technology that makes decisions that are unexplainable. “We do not want to unilaterally surrender capabilities to the right — nor do we want to behave as though the ends justify the means,” he says.
The keynote speaker was Danielle Citron, Morton and Sophia Macht Professor of Law at the University of Maryland’s Francis King Carey School of Law. Citron addressed the rise of “deep fakes,” sophisticated fake audio and video that can be easily produced by people with access to the technology.
Citron warned the audience that the democratization of this technology could have devastating effects on the political process. She discussed the possibility of a fabricated video that incriminates or embarrasses a political candidate surfacing the night before an election.
The ongoing development and ever-increasing sophistication of artificial intelligence (AI) is giving rise to some fundamental ethical questions: Will machine-made decisions always be transparent and stay within human-defined parameters? To what extent can users retain control over intelligent algorithms? Is it possible to imbue self-learning systems with a sense of morality? And who decides what moral values these systems should follow anyway?
The University of Washington School of Law is delighted to announce a public workshop on the law and policy of artificial intelligence, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology. The event is free and open to the public but requires registration.
In the summer of 1956, several key figures in what would become known as the field of "artificial intelligence" met at Dartmouth College to brainstorm about the future of the synthetic mind. Artificial intelligence, broadly defined, has since become a part of everyday life.
Law professor Ryan Calo, co-director of the University of Washington's Tech Policy Lab, says AI-based monitoring of social media may follow a predictable pattern for how new technologies gradually work their way into law enforcement.
"The way it would happen would be we would take something that everybody agrees is terrible — something like suicide, which is epidemic, something like child pornography, something like terrorism — so these early things, and then if they show promise in these sectors, we broaden them to more and more things. And that's a concern."
FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City.
Peter Asaro (assistant professor in the School of Media Studies at The New School) and S. Matthew Liao (director of the Center for Bioethics at New York University) talk to Live Science's Denise Chow and Space.com's Tariq Malik about the ethics of AI.
Recently, when Google announced its own version of Amazon's voice-recognition digital home assistant, the company did not spend a moment addressing privacy safeguards or concerns.
As Wall Street Journal tech reporter Geoffrey Fowler tweeted: "So just to review, Google says it wants to install microphones all over your house. And didn't talk about privacy."
Listen to the full interview at Marketplace Tech.
Calo recently signed an open letter that detailed his and others’ concerns over AI’s rapid progress. The letter was published by the Future of Life Institute, a research organization studying the potential risks posed by AI. The letter has since been endorsed by scientists, CEOs, researchers, students and professors connected to the tech world.
Listen to the full interview with Ryan Calo at BBC The Inquiry.
Billions of dollars are pouring into the latest investor craze: artificial intelligence. But serious scientists like Stephen Hawking have warned that full AI could spell the end of the human race. How seriously should we take the warnings that ever-smarter computers could turn on us? Our expert witnesses explain the threat, the opportunities and how we might avoid being turned into paperclips.
View the full video at Huffpost Live.
Famed physicist Stephen Hawking warns that while success in creating artificial intelligence would be the biggest event in human history, it may also be our last. What can we do to prepare ourselves now before it's too late?
Hosted by: Alyona Minkovski