"Early on March 15, a suspected white nationalist terrorist stormed two New Zealand mosques, killing some 50 people. The suspect tweeted his plans and live-streamed the massacre on social media, the footage remaining up for hours. In the discussion that follows, two experts from Stanford’s Center for Internet and Society discuss online extremism, the European Commission’s pending draft regulation of online “terrorist content,” and the possibility of regulating hateful and violent content. Daphne Keller is Director of Intermediary Liability and was formerly Google’s Associate General Counsel. Joan Barata is a Consulting Intermediary Liability Fellow with CIS and has long experience advising international organizations such as the Organization for Security and Cooperation (OSCE) in Europe on freedom of expression.
Why should readers outside of Europe be interested in the EU’s new draft regulation of online terrorist content?
Keller: Europe is very much in the driver’s seat in regulating major platforms like Facebook and YouTube right now. That’s especially the case when it comes to controlling what speech and information users see. Whatever the EU compels giant platforms to do, they are likely to do everywhere — perhaps by “voluntarily” changing their global Terms of Service, as they have in response to EU pressure in the past. For smaller platforms outside the EU, the regulation will matter a lot as well. If readers remember the huge impact of the EU’s General Data Protection Regulation, or GDPR, this one is very similar in its extraterritorial reach to websites or apps built by companies outside the EU. And it has the same enormous fines — up to 4% of annual global turnover. So any company that hosts user content, even tiny blogs or newspapers with comments sections, will need to deal with this law.
What does the regulation say?
Barata: Right now there are three versions of the regulation, which are being reconciled into a single draft in a “trilogue” process between the EU Parliament, Commission, and Council. The drafts all define new responsibilities for companies hosting content posted by users, like Facebook, Twitter, Instagram, and many others. The aim is to prevent the dissemination of what the text calls “terrorist content.” Two of the drafts have particularly extreme provisions, including letting national law enforcement authorities skip trying to apply the law or respect free expression rights, and instead simply pressure platforms to take down users’ online expression under their Terms of Service. They also let authorities require any platform — even very small ones — to build technical filters to try to weed out prohibited content. That’s a problem because, from what little we know about platform filtering systems, they seem to make an awful lot of mistakes, which threaten lawful expression by journalists, academics, political activists, and ordinary users. The most recent draft, from the EU Parliament, is better because it drops those two provisions. But it still retains one of the worst requirements from the other drafts: platforms have to take down content in as little as one hour if authorities demand it. For almost any platform, but certainly for small ones, that kind of time pressure creates a strong incentive to simply take down anything the authorities identify.
Read the full interview at the SLS Legal Aggregate.
- Date Published: 04/25/2019
- Original Publication: SLS Legal Aggregate