The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
Whether and when communications platforms like Google, Twitter, and Facebook are liable for their users’ online activities is one of the key factors affecting innovation and free speech. Most creative expression today takes place over communications networks owned by private companies. Governments around the world increasingly press intermediaries to block undesirable user content in order to suppress dissent, hate speech, privacy violations, and the like. One form of pressure is to make communications intermediaries legally responsible for what their users do and say. Liability regimes that put platform companies at legal risk for users’ online activity are a form of censorship-by-proxy, and thereby imperil both free expression and innovation, even as governments seek to resolve very real policy problems.
In the United States, the core doctrines of Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act have allowed online intermediary platforms and user-generated content to flourish. But immunities and safe harbors for intermediaries are under threat in the U.S. and globally as governments seek to deputize intermediaries to assist in law enforcement.
To contribute to this important policy debate, CIS studies international approaches to intermediary obligations, liabilities, immunities, and safe harbors concerning users’ copyright infringement, defamation, hate speech, and other unlawful activity; publishes a repository of information on international liability regimes; and works with global platforms and free expression groups to advocate for policies that protect innovation, freedom of expression, privacy, and other user rights.
The story so far:
In the ‘90s the Internet was created.
This has made a lot of people very angry and been widely regarded as a bad move.
(with apologies to Douglas Adams)
Read more about The EARN IT Act: How to Ban End-to-End Encryption Without Actually Banning It
I've had a lot of positive feedback on the Intermediary Liability 101 slides I shared back in 2018, so I thought I'd post these updated ones now. They are based on a deck I presented to a European policymaking audience last month. Their focus tilts toward European examples, but many of the issues captured here are universal. This version also has a longer section toward the end listing emerging issues and ideas (again, with a European lens). Read more about Intermediary Liability 101: An Update for 2020
This blog post briefly discusses the ruling’s relevance for future EU legislation, in particular the Terrorist Content Regulation. TL;DR: Glawischnig-Piesczek does not discuss when a filtering order might be considered proportionate or consistent with fundamental rights under the EU Charter. It only addresses the eCommerce Directive, holding that a monitoring injunction is not “general,” and thus is not prohibited under the Directive, when it “does not require the host provider to carry out an independent assessment” of filtered content. This interpretation of the eCommerce Directive opens the door for lawmakers to require “specific” machine-based filtering. But it seemingly leaves courts unable to require platforms to bring human judgment to bear by having employees review and correct filters’ decisions. That puts the eCommerce Directive in tension with both fundamental rights and EU lawmakers’ stated goals in the Terrorist Content Regulation. Read more about The CJEU’s new filtering case, the Terrorist Content Regulation, and the future of filtering mandates in the EU