Filtering Facebook: Introducing Dolphins in the Net, a New Stanford CIS White Paper
OR
Why Internet Users and EU Policymakers Should Worry about the Advocate General’s Opinion in Glawischnig-Piesczek

White Paper: Dolphins in the Net: Internet Content Filters and the Advocate General’s Glawischnig-Piesczek v. Facebook Ireland Opinion

Summer is winding down, and policymakers in Brussels are returning to an ambitious task: drafting new regulation for the Internet. Meanwhile, in Luxembourg, the Court of Justice of the European Union (CJEU) is deciding cases that will affect both Internet platforms’ operations and lawmakers’ choices in devising new laws. One case in particular, Glawischnig-Piesczek v. Facebook Ireland, has had surprisingly little attention, given how consequential it is likely to be. It calls on the Court to address the question at the heart of an ongoing political debate: whether and how EU Member States can require platforms to use automated filters in an attempt to detect and delete prohibited material in users’ posts.

The Court’s Judgment will, at minimum, interpret relevant provisions in the EU’s main existing Intermediary Liability law, the eCommerce Directive. It will almost certainly also tell courts – and lawmakers – something about more permanent limitations on state-backed filtering mandates, based on Internet users’ fundamental rights under the EU Charter. The case also raises a question, analogous to the one in Google’s pending “Right to Be Forgotten” case, about national courts’ power to order global takedown of expression or information that is legal in other countries.

The Opinion in Glawischnig-Piesczek by the Court’s Advocate General (AG) proposes unprecedented and troubling answers to all of these questions. If the Court follows his recommendations, it will likely disrupt both legislative agendas and the evolving human rights-based guidance about Internet content filters. 

A new Stanford Center for Internet and Society (CIS) White Paper, Dolphins in the Net, discusses and critiques the AG’s Opinion. It focuses in part on the very limited briefing and arguments the Court and AG received – a shortcoming that may explain the disconnect between the Opinion and the broader public debate over filters. The White Paper goes into detail about the fundamental rights and eCommerce Directive issues raised by the Opinion’s filtering recommendation, and briefly discusses the question of global enforcement. It also flags important data protection and privacy issues that fall outside the White Paper’s scope but illustrate the depth of unexamined questions in the case.


The Case So Far

The case was brought by Eva Glawischnig-Piesczek, then the leader of Austria’s Green Party. She said that a Facebook user defamed her by posting a news article along with comments calling her a “lousy traitor” (miese Volksverräterin), a “corrupt oaf” (korrupter Trampel), and a member of a “fascist party” (Faschistenpartei). The Austrian courts agreed, and ordered Facebook not only to remove the post, but also to proactively block any users from saying these things again. Austria’s Supreme Court referred questions about filtering to the CJEU, along with a question about whether Facebook should have to comply globally with Austria’s unusually strict defamation law.

The AG’s Opinion says Austria can order Facebook to filter “identical” information from posts by any user, and “equivalent” information from posts by this particular user. It also approves extraterritorial enforcement of national defamation law in some cases. In its analysis of filtering, the Opinion seems to say that courts can order platforms to use automated filters, but cannot order them to have employees review filters’ work to catch and correct any errors. It implies that platforms themselves might automatically lose immunity under the eCommerce Directive when they use human review for this purpose – and that perhaps even ordinary notice and takedown operations systematically strip platforms of legal immunities. 
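The line the Opinion draws between “identical” and “equivalent” content is easier to evaluate with a concrete, if simplified, picture of what each kind of filter does. The sketch below is purely illustrative: it assumes a naive text-matching filter, and its phrases and threshold are hypothetical, not drawn from the Opinion, the Austrian injunction, or any real platform’s systems.

```python
# Purely illustrative sketch of "identical" vs. "equivalent" matching.
# Nothing here reflects Facebook's actual systems; the phrase list and
# similarity threshold are hypothetical.
from difflib import SequenceMatcher

PROHIBITED = ["lousy traitor", "corrupt oaf"]

def identical_match(post: str) -> bool:
    """Flag a post only if it contains a prohibited phrase verbatim."""
    return any(phrase in post.lower() for phrase in PROHIBITED)

def equivalent_match(post: str, threshold: float = 0.85) -> bool:
    """Flag a post if any window of words is 'close enough' to a phrase."""
    words = post.lower().split()
    for phrase in PROHIBITED:
        size = len(phrase.split())
        for i in range(len(words) - size + 1):
            window = " ".join(words[i:i + size])
            if SequenceMatcher(None, window, phrase).ratio() >= threshold:
                return True
    return False

# An "identical" filter misses trivial rewording...
print(identical_match("such a lousy backstabbing traitor"))          # False
# ...but still blocks lawful quotation, e.g. news coverage of this case:
print(identical_match("Court orders 'lousy traitor' post removed"))  # True
# An "equivalent" filter casts a far wider net over unrelated speech:
print(equivalent_match("my lousy tractor broke down again"))         # True
```

Even in this toy version, the “equivalent” standard requires someone – on the AG’s apparent view, software alone – to decide how much variation still counts as the same defamation, and that is exactly where lawful speech gets caught.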


Policy Backdrop

The conclusion that automated filters are permissible, but human review is not, will sound very backwards to those who have participated (as CIS has) in recent EU legal and policy debates. Experts are fiercely divided about the fundamental rights implications of using filters in the first place. Civil society groups, the Council of Europe, and UN human rights rapporteurs have all raised serious concerns about relying on privately operated software to regulate online expression and information. Critics point out the risk of “dolphins in the net” – lawful expression and information that is misidentified, caught, and erased by automated systems. Even among proponents of filters, though, the importance of having humans review and correct filters’ decisions is widely recognized. The EU Commission, for example, urged platforms to use both filters and human review voluntarily in its 2018 Recommendation on tackling illegal content online.
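For readers unfamiliar with how the Commission’s recommended combination works, the following is a minimal, hypothetical sketch of a two-stage “filter plus human review” pipeline. The class and reviewer logic are invented for illustration and do not describe any actual platform’s moderation stack.

```python
# Hypothetical two-stage moderation design: a filter flags, a human decides.
# The class, method names, and examples are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ModerationPipeline:
    review_queue: list = field(default_factory=list)

    def filter_stage(self, post: str, banned_phrases: list) -> None:
        # Stage 1: the automated filter only flags suspected matches.
        # It removes nothing on its own.
        if any(phrase in post.lower() for phrase in banned_phrases):
            self.review_queue.append(post)

    def review_stage(self, is_unlawful) -> list:
        # Stage 2: a human reviewer confirms or overturns each flag,
        # releasing the "dolphins" the filter caught by mistake.
        removed = [post for post in self.review_queue if is_unlawful(post)]
        self.review_queue.clear()
        return removed

# Example: the filter flags a news report quoting a banned phrase; the
# human reviewer recognizes the lawful context and leaves the post up.
pipeline = ModerationPipeline()
pipeline.filter_stage("Court orders 'lousy traitor' post removed",
                      ["lousy traitor"])
print(pipeline.review_stage(lambda post: False))  # [] - nothing removed
```

The point of the second stage is precisely to catch the false positives that no current filter can reliably avoid – which is why the AG’s apparent conclusion that courts can order the first stage but not the second strikes so many observers as backwards.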

The debate about the pending Terrorist Content Regulation illustrates the widely varying perspectives on filters. Many (though not all) security and law enforcement experts argued that filters were necessary and appropriate in response to violent extremist material online. The Commission and Council drafts of the Regulation included filtering requirements, but also said platforms should generally carry out human review. After considerable outcry from civil society and human rights organizations, Parliament removed the filtering requirement from its draft. But some observers expect that the Parliament’s new rapporteur for the trilogue process, conservative Polish MEP Patryk Jaki, will agree to reinstate the filtering mandate in the law’s final version.

Critics of the proposed filters in that law pointed out that even violent extremist content has important lawful uses, including in journalism, scholarship, and counter-speech. Using technical filters that cannot recognize the context for re-uses of this sort puts lawful and important expression at risk. Many pointed to the example of the Syrian Archive, a Berlin-based human rights NGO that lost over 100,000 videos documenting abuses in Syria when YouTube failed to recognize the videos’ lawful context. A broad coalition of civil society groups, in a letter to the EU Parliament, called existing filters “untested and poorly understood technologies,” and cited “serious problems researchers have found with the few filtering tools available for independent review[.]” The groups argued that requiring filters would be “a gamble with European Internet users’ rights to privacy and data protection, freedom of expression and information, and non-discrimination and equality before the law” that was “neither necessary nor proportionate as an exercise of state power.”


Process Problems in Glawischnig-Piesczek

The fundamental rights concerns identified by civil society and human rights organizations would appear even more pressing in a case like Glawischnig-Piesczek, which concerns not violent extremism but rude criticism of a political leader. No civil society groups participated in the case as interveners, however, and the Court apparently did not hear this perspective.

Glawischnig-Piesczek has a problem common to Intermediary Liability disputes. The primary rights affected by the case’s outcome will be those of (1) the claimant, who was harmed by content online, and (2) other Internet users, whose rights may be harmed by filters. But only the claimant actually participates in the case. Other users are not represented. The Court does not hear from them, because the defendant isn’t an Internet user – it’s Facebook. The lack of representation for Internet users sets the Court up to miss important arguments and factual disputes that are common in the Brussels debate – which, in turn, risks undermining the Judgment’s legitimacy in the eyes of the EU’s political branches.


Fundamental Rights

Both the CJEU and European Court of Human Rights have said in the past that filtering obligations for platforms burden Internet users’ fundamental rights. But those burdens in this case – and the filtering injunction’s countervailing benefit for the plaintiff – are almost impossible to assess, because the filter at issue is so ill-defined. The AG’s Opinion at times seems to endorse a tool to block text, keeping users from posting the words listed in the Austrian injunction, like “lousy traitor” or “Fascist party.” A filter like that would be remarkably overbroad, capable of blocking anything from friends’ banter to serious political discussions to news reporting about this litigation. Alternatively, the AG might be supporting the filter the Austrian appeals court described: one that uses software to detect and block any image of the plaintiff if it is paired with the prohibited terms. An injunction like that, which would effectively require Facebook to use facial recognition software, would raise major data protection questions about the rights of data subjects whose images would be scanned. But those issues seemingly were not identified to the Court.

Without more information about the proposed filter, it is impossible to assess the fundamental rights consequences of ordering Facebook to use it. It’s particularly hard to assess an assumption the AG seems to make – that most of the filtered content will itself be illegal. It is to be hoped that the Court will exercise more caution, rejecting the filtering injunction entirely or instructing the Austrian courts to engage in more searching analysis of the proposed filter’s technical function and likely impact on other Internet users.


eCommerce Directive

The AG’s analysis of the eCommerce Directive is troubling as well. Because he concludes that courts can order platforms to use software-based filters, but not to have humans correct filters’ errors, his recommendation would seemingly preclude even the imperfect protection that human review provides for users’ rights. Worse, his analysis suggests that platforms may systematically forfeit immunity under Article 14 if they carry out such review voluntarily. That’s bad news for the many platforms that already use a combination of technical filters and human review to find things like violent extremist images, and for government bodies, including the EU Commission, that have urged them to do so. The AG’s reasoning has disturbing implications even for platforms carrying out ordinary notice and takedown. It suggests that even the bad-faith or inaccurate notices platforms often receive may be enough to effectively strip them of immunity, giving them even more reason to take down any post targeted by an accuser. That interpretation of the Directive would be inconsistent with both human rights guidance and CJEU precedent.


Other Problems

The White Paper explores the above-listed issues in much greater depth. It also briefly examines the question of global content removal. Those problems are just the tip of the iceberg, though. The White Paper flags but does not attempt to analyze separate concerns about competition, privacy or data protection, and equality – all of which merit the attention of experts in those fields. The privacy consequences of ordering Facebook to monitor and automatically assess all of its users’ posts, in particular, are significant – and all the more so if, as the Austrian court seemingly said, Facebook must run biometric facial identification scans on users’ pictures. There are also real questions about filters’ disparate impact on members of racial and linguistic minority groups.

Overall, the injunction in Glawischnig-Piesczek raises a snarl of difficult questions. But the litigation does not seem to have provided the AG or Court with the information and arguments needed to truly address them. That counsels a narrow ruling in the case, and a clear articulation of the relevant unanswered legal and factual questions. The Court’s remarks on fundamental rights, the eCommerce Directive, and technology will be taken to heart not only by the Austrian courts and platforms of all sizes, but by policymakers in drafting the EU’s future laws. A lot may depend on the Court issuing a wise and thoughtful ruling in this case.

  • Advocate General’s Opinion in Glawischnig-Piesczek
  • Stanford CIS White Paper: Dolphins in the Net: Internet Content Filters and the Advocate General’s Glawischnig-Piesczek v. Facebook Ireland Opinion
