Daphne Keller is the Director of Intermediary Liability at Stanford's Center for Internet and Society. Her work focuses on platform regulation and Internet users' rights.
Whether and when communications platforms like Google, Twitter and Facebook are liable for their users’ online activities is one of the key factors affecting innovation and free speech. Most creative expression today takes place over communications networks owned by private companies. Governments around the world increasingly press intermediaries to block their users’ undesirable online content in order to suppress dissent, hate speech, privacy violations and the like. One form of pressure is to make communications intermediaries legally responsible for what their users do and say. Liability regimes that put platform companies at legal risk for users’ online activity are a form of censorship-by-proxy, and thereby imperil both free expression and innovation, even as governments seek to resolve very real policy problems.
In the United States, the core doctrines of section 230 of the Communications Decency Act and section 512 of the Digital Millennium Copyright Act have allowed online intermediary platforms and user-generated content to flourish. But immunities and safe harbors for intermediaries are under threat in the U.S. and globally as governments seek to deputize intermediaries to assist in law enforcement.
To contribute to this important policy debate, CIS studies international approaches to intermediary liability, immunity, and safe harbors concerning users’ copyright infringement, defamation, hate speech, and other unlawful activity; publishes a repository of information on international liability regimes; and works with global platforms and free expression groups to advocate for policies that will protect innovation, freedom of expression, privacy, and other user rights.
Joan Barata is an international expert in freedom of expression, freedom of information and media regulation. As a scholar, he has spoken and conducted extensive research in these areas, working and collaborating with universities and academic centers across Asia, Africa, and the Americas, authoring papers, articles and books, and addressing specialized parliamentary committees.
Annemarie Bridy is a Professor of Law at the University of Idaho. She is also an Affiliated Fellow at the Yale Law School Information Society Project and a former Visiting Associate Research Scholar at the Princeton University Center for Information Technology Policy. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights.
Giancarlo F. Frosio is a Non-Residential Fellow at the Center for Internet and Society at Stanford Law School. Previously he was the Intermediary Liability Fellow with Stanford CIS. He is also an Associate Professor at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Giancarlo also serves as Affiliate Faculty at Harvard CopyrightX and Faculty Associate of the Nexa Research Center for Internet and Society in Turin. Giancarlo is a qualified attorney with a doctoral degree (S.J.D.) in intellectual property law from Duke University Law School.
Today, someone asked me about the Internet and human well-being over the next decade. The question was a healthy provocation to look at the big picture. I chose “more helped than harmed” from the very short list of radio-button responses. Here’s my elaboration:
In a recent ruling, the Spanish Audiencia Nacional – the high court that referred the Google Spain case to the Court of Justice of the European Union (CJEU) – has expressed opposition to imposing global delisting obligations on search engines.
(NB: This headline does not obey Betteridge’s Law.)
Hollywood studios, led by Universal, have sued TickBox TV in federal district court in California, bringing their campaign against set-top box (STB) piracy stateside after a big win earlier this year in the EU. Last spring, the Dutch film and recording industry trade association BREIN prevailed in copyright litigation against the distributor of a STB called the Filmspeler. The CJEU held that the Filmspeler’s distributor, Wullems, directly infringed the plaintiffs’ copyrights—specifically, their right of communication to the public—by selling STBs loaded with software add-ons that provided easy access to infringing programming online. (I blogged about the Filmspeler case here.)
As I’ve been writing about networked information technologies as “tools,” it’s worth reiterating that metaphors of space are not entirely without value, including in areas of law derived from laws governing real property. Having noted the weaknesses of spatial metaphors in multiple prior posts, here I discuss some of their productive applications. Two applications of the physical-space metaphor to online platforms and services are particularly interesting:
1) Content moderation questions - including vetting of advertisers and user-submitted content;
2) Data rights questions - if content is posted, is it free to copy? To commercialize?
Thursday evening, the Attorney General, the Acting Homeland Security Secretary, and top law enforcement officials from the U.K. and Australia sent an open letter to Mark Zuckerberg. The letter emphasizes the scourge of child abuse content online, and the officials call on Facebook to press pause on end-to-end encryption for its messaging platforms.
[Stanford's Daphne Keller is a preeminent cyberlawyer and one of the world's leading experts on "intermediary liability" -- that is, when an online service should be held responsible for the actions of its users. She brings us a delightful tale of Facebook's inability to moderate content at scale, which is as much a tale of the impossibility (and foolishness) of trying to support 2.3 billion users (who will generate 2,300 one-in-a-million edge-cases every day) as it is about a specific failure.
This past week, with some fanfare, Facebook announced its own version of the Supreme Court: a 40-member board that will make final decisions about user posts that Facebook has taken down. The announcement came after extended deliberations that have been described as Facebook’s “constitutional convention.”
"Hate speech that isn’t an imminent threat is still protected by the Constitution, noted Daphne Keller, a researcher at Stanford’s Center for Internet and Society and a former associate general counsel for Google. “A law can’t just ban it. And Congress can’t just tell platforms to ban it, either — that use of government power would still violate the First Amendment,” she said."
"People would be allowed to use pseudonyms when posting online, but platforms could be forced to hand out the users’ private information to third parties, including private persons, seeking prosecution for defamation or other crimes.
“The chilling effect for freedom of speech is real,” said Thomas Lohninger."
"“What’s not so clear yet is whether G.D.P.R. has had an effect on privacy and on corporate data practices,” said Omer Tene, vice president and chief knowledge officer at the International Association of Privacy Professionals. “Has the underlying business model of the internet changed? Is consumer privacy better? I think those questions are very much still open.”"
""In the first year, we've seen tens of thousands of complaints and data breaches," says Omer Tene, the IAPP's vice president and chief knowledge officer.
"But we've yet to see much evidence that the GDPR has led to an improvement in organisations' data practices.""
Register here: http://web.stanford.edu/dept/law/forms/conlawmay2019.fb
Friday, May 24
How Should Free Speech Principles Apply to the Content Policy of Internet Platforms?
• Danielle Citron, University of Maryland Carey School of Law
• Niall Ferguson, Stanford University
• Mary Anne Franks, University of Miami Law School
• Eugene Volokh, UCLA Law School
Moderator: Nate Persily, Stanford Law School
Lunch: 1:00 pm
Program: 1:30 pm - 3:00 pm
The question of what responsibility Internet platforms should bear for the content their users post has been the subject of debate around the world, as politicians, regulators, and the broader public navigate policy choices to combat harmful speech, with implications for freedom of expression, online harms, competition, and innovation.
In this episode, The Stream speaks with tech industry experts and policy analysts to explore whether the Indian government’s plan will ensure public safety or set a dangerous precedent.
The latest in the EU's string of internet regulatory efforts has a new target: terrorist propaganda. Just as with past regulations, the proposed rules seem onerous and insane, creating huge liability for internet platforms that fail to do the impossible.
Cybersecurity is increasingly a major concern of modern life, coloring everything from the way we vote to the way we drive to the way our health care records are stored. Yet online security is beset by threats from nation-states and terrorists and organized crime, and our favorite social media sites are drowning in conspiracy theories and disinformation. How do we reset the internet and reestablish control over our own information and digital society?