Daphne Keller is the Director of Intermediary Liability at Stanford's Center for Internet and Society. Her work focuses on platform regulation and Internet users' rights.
Whether and when communications platforms like Google, Twitter and Facebook are liable for their users’ online activities is one of the key factors that affect innovation and free speech. Most creative expression today takes place over communications networks owned by private companies. Governments around the world increasingly press intermediaries to block their users’ undesirable online content in order to suppress dissent, hate speech, privacy violations and the like. One form of pressure is to make communications intermediaries legally responsible for what their users do and say. Liability regimes that put platform companies at legal risk for users’ online activity are a form of censorship-by-proxy, and thereby imperil both free expression and innovation, even as governments seek to resolve very real policy problems.
In the United States, the core doctrines of section 230 of the Communications Decency Act and section 512 of the Digital Millennium Copyright Act have allowed user-generated content on these online intermediary platforms to flourish. But immunities and safe harbors for intermediaries are under threat in the U.S. and globally as governments seek to deputize intermediaries to assist in law enforcement.
To contribute to this important policy debate, CIS studies international approaches to intermediary obligations concerning users’ copyright infringement, defamation, hate speech, and other grounds for vicarious liability, immunity, or safe harbor; publishes a repository of information on international liability regimes; and works with global platforms and free expression groups to advocate for policies that will protect innovation, freedom of expression, privacy and other user rights.
Joan Barata is an international expert in freedom of expression, freedom of information and media regulation. As a scholar, he has spoken and conducted extensive research in these areas, working and collaborating with various universities and academic centers from Asia to Africa and the Americas, authoring papers, articles and books, and addressing specialized parliamentary committees.
Annemarie Bridy is a Professor of Law at the University of Idaho. She is also an Affiliated Fellow at the Yale Law School Information Society Project and a former Visiting Associate Research Scholar at the Princeton University Center for Information Technology Policy. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights.
Giancarlo F. Frosio is a Non-Residential Fellow at the Center for Internet and Society at Stanford Law School. Previously he was the Intermediary Liability Fellow with Stanford CIS. He is also an Associate Professor at the Center for International Intellectual Property Studies (CEIPI) at Strasbourg University. Giancarlo also serves as Affiliate Faculty at Harvard CopyrightX and Faculty Associate of the Nexa Research Center for Internet and Society in Turin. Giancarlo is a qualified attorney with a doctoral degree (S.J.D.) in intellectual property law from Duke University Law School.
The topic of how well the tool of black letter law works in the Internet law setting is of course huge, and associated with obvious definitional challenges. To point to but one: how ought we define “black letter law” in our present legal culture, where legal rules necessarily must take account of the technical reality in which they operate? Indeed, given Wikipedia’s definition of “black letter laws” as “the well-established technical legal rules that are no longer subject to reasonable dispute,” one may legitimately question whether we can speak of any real black letter law within our field of enquiry. Fortunately, however, the panel was asked to approach only the more concrete topic identified in the description above.
Without a doubt, human rights law provides an important framework for the discussion of cross-border speech regulation. The International Covenant on Civil and Political Rights (ICCPR) in Article 19 clearly states the right to express opinions and ideas “regardless of frontiers” and the Internet is a particularly relevant tool and platform for the exercise of this right, both in its individual and social dimensions. There was a common underlying basic agreement among the different panelists as to the need to include a human rights perspective in content removal discussions, whether judicial, regulatory or legislative.
This panel addressed the right to be forgotten (RTBF) from a global perspective, presenting points of view from relevant stakeholders and academic researchers from different regions. As established in the Court of Justice of the European Union’s 2014 Google Spain case, this is a right under data protection law for individuals to request that search engines de-list specified results appearing in response to a search for the individual’s name. While search engines may decline to de-list results based on public interest considerations, the RTBF is still far broader than de-listing or removal rights in many countries, including the United States. This is especially the case since de-listing can also be requested for information that was lawfully published online.
The topic of this panel was cross-border issues in the online enforcement of intellectual property rights. The speakers brought a range of perspectives from the movie industry (Ben Sheffner), the public interest sector (Corynne McSherry), academia (Annemarie Bridy), and the tech industry (Alex Feerst).
The panel began with a discussion of Equustek Solutions Inc. v. Jack, a case then pending before the Supreme Court of Canada. In the case, Google challenged a lower court’s injunction requiring it to remove search results not only from its Canadian services, but globally. The sites belonged to the defendants, who were accused of trade secret misappropriation and trademark infringement. The defendants fled Canada during the course of the litigation, which led the court to strike their defenses as a sanction. The trial court ultimately issued an order enjoining the defendants from using Equustek’s trade secrets and from selling infringing inventory. The defendants predictably disregarded the court’s order. They continued to sell products from various websites they controlled from indeterminate locations. Equustek asked Google to globally remove search results for the defendants’ websites, which Google refused to do. Google agreed only to remove infringing URLs from results on its Canadian search service at www.google.ca. Equustek argued before the trial court that Google should be compelled to do more.
The Program on Extremism Policy Paper series combines analysis on extremism-related issues by our researchers and guest contributors with tailored recommendations for policymakers.
Ryan Calo, a professor of digital law and privacy law at the University of Washington School of Law, said that tech companies need to say, clearly and publicly, “when they will engage in censorship, if at all, at the behest of another nation.”

“At a minimum, Apple and other tech companies should say publicly the conditions under which they will comply with Chinese or other requests to censor content,” Calo said. “The very act of laying out public criteria manages expectations and forces the company to consider its values.”
“I don’t think there’s a chance that major economies like the E.U. are going to accept C.D.A. 230,” said Daphne Keller, the director of intermediary liability at Stanford Law School’s Center for Internet and Society. “So I’m not sure what the net effect is.”
“The key thing about this case is what preventive measures can be imposed on Facebook,” said Martin Husovec, an assistant law professor at Tilburg University’s Institute for Law, Technology and Society in the Netherlands.
“There has been real mission creep with the right to be forgotten,” said Daphne Keller, a lawyer at Stanford University’s Center for Internet and Society. “First it was supposed to be about information found using search engines, but now we see it affecting news reporting.”
After a lengthy legislative process, the GDPR is finally ready. As the most significant overhaul of data privacy laws in Europe in twenty years, it will have a profound impact on Silicon Valley technology companies offering online services in Europe. The recently announced Privacy Shield will affect most US organisations that receive personal information from Europe.
The question of what responsibility should lie with Internet platforms for the user-posted content they host has been the subject of debate around the world, as politicians, regulators, and the broader public seek to navigate policy choices to combat harmful speech that have implications for freedom of expression, online harms, competition, and innovation.
In this episode, The Stream speaks with tech industry experts and policy analysts to explore whether the Indian government’s plan will ensure public safety or set a dangerous precedent.
The latest in the EU's string of internet regulatory efforts has a new target: terrorist propaganda. Just as with past regulations, the proposed rules seem onerous and insane, creating huge liability for internet platforms that fail to do the impossible.
Cybersecurity is increasingly a major concern of modern life, coloring everything from the way we vote to the way we drive to the way our health care records are stored. Yet online security is beset by threats from nation-states and terrorists and organized crime, and our favorite social media sites are drowning in conspiracy theories and disinformation. How do we reset the internet and reestablish control over our own information and digital society?