The CJEU’s new filtering case, the Terrorist Content Regulation, and the future of filtering mandates in the EU

 

A. Introduction

For several years now, one of the most hotly contested Internet policy questions in the European Union (EU) has been whether and how platforms like YouTube or Twitter can be required to proactively monitor their users’ posts in search of illegal content. Proposals for platforms to monitor user behavior by deploying technological filters were at the heart of the EU Copyright Directive, which passed into law in 2019, as well as the Terrorist Content Regulation, which is now in the final stages of negotiation. Filters are likely to be central to the coming years’ debates about the pending Digital Services Act, and to discussion of potential changes to the eCommerce Directive, which has structured platforms’ legal responsibility for user content in the EU for almost two decades.

A case decided by the Court of Justice of the European Union (CJEU) in October, Glawischnig-Piesczek v. Facebook Ireland, was widely expected to shed light on the subject of filtering requirements. It did shed light, but only a little. The Court discussed legislative rules that govern filtering under eCommerce Directive Article 15, but not the fundamental rights rules that legislators and Member State courts must apply under the EU Charter. The legal conclusions it reached will complicate fundamental rights analysis and legal paths forward for both “pro-filtering” and “anti-filtering” advocates in the evolving legislative debate. This blog post will briefly discuss the ruling’s relevance for future EU legislation, and in particular for the Terrorist Content Regulation. It builds on the much deeper analysis in Dolphins in the Net, my Stanford CIS White Paper about the Glawischnig-Piesczek AG Opinion.

TL;DR: Glawischnig-Piesczek does not discuss when a filtering order might be considered proportionate or consistent with fundamental rights under the EU Charter. It only addresses the eCommerce Directive, holding that a monitoring injunction is not “general” — and thus is not prohibited under the Directive — when it “does not require the host provider to carry out an independent assessment” of filtered content. This interpretation of the eCommerce Directive opens the door for lawmakers to require “specific” machine-based filtering. But it seemingly leaves courts unable to require platforms to bring human judgment to bear by having employees review and correct filters’ decisions. That puts the eCommerce Directive in tension with both fundamental rights and EU lawmakers’ stated goals in the Terrorist Content Regulation.

B. Background

This background section briefly describes (1) the case, (2) the background EU law affecting both the case and the draft Terrorist Content Regulation, (3) the state of filtering technology, and (4) relevant aspects of the Regulation. Readers already conversant with these topics may wish to skip directly to Section C, which discusses Glawischnig-Piesczek’s implications for future cases and laws, including the Terrorist Content Regulation.

(1) The case

The former head of Austria’s Green Party sued Facebook, saying that she was defamed by a user post that called her a “lousy traitor” (miese Volksverräterin), a “corrupt oaf” (korrupter Trampel), and a member of a “fascist party” (Faschistenpartei). Austrian first and second instance courts upheld her claim, and also held that Facebook must proactively block such posts from recurring. The Austrian Supreme Court referred the case to the CJEU, asking whether orders to block “identical” or “equivalent” content were permissible under Article 15 of the eCommerce Directive. (It also referred a second question I won’t address here: whether Austrian courts could require global compliance with the order.) The Court reviewed the case based on briefs from the parties and several government entities, but no civil society interveners participated.

The CJEU’s ruling, which I examine in more detail in Section C.1 below, held that injunctions covering both identical and equivalent content are permitted by the eCommerce Directive, but that the injunctions must not require the platform to independently assess whether content violates the law. The Court did not discuss what requirements must be met for such an order to be proportionate and compatible with fundamental rights guarantees under the EU Charter. To the best of my knowledge, the Court’s next opportunity to provide such clarification is months or years away, arising from Poland’s challenge to the Copyright Directive.

The CJEU did not discuss exactly what content was to be filtered, or how the filter was supposed to work. These questions about filters’ real-world operations and effects on Facebook’s billions of other users will remain important, however, for Austrian courts assessing fundamental rights issues as the case continues. For example, if Facebook had to automatically block every instance of the text phrases “fascist party”, “lousy traitor”, and “corrupt oaf,” it would almost certainly take down numerous lawful posts — news coverage, legitimate political commentary, or teasing between friends, for example. These lawful “dolphins” caught in the filter’s net would be an important factor for a court assessing the filter’s proportionality.
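To make the over-blocking risk concrete, here is a minimal sketch of such a phrase-matching filter, written in Python purely for illustration. Nothing in the case record describes Facebook’s actual systems, and the example posts are invented.

```python
# Illustrative sketch only -- not Facebook's actual system. It shows why a
# filter that simply matches the phrases from the injunction would also catch
# lawful "dolphins" such as news coverage or commentary.

BLOCKED_PHRASES = ["fascist party", "lousy traitor", "corrupt oaf"]

def naive_filter(post: str) -> bool:
    """Return True if the post would be blocked automatically."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

# An infringing repost is caught...
print(naive_filter("She is a lousy traitor and a corrupt oaf."))              # True
# ...but so is a news report about the litigation itself.
print(naive_filter("Court rules 'lousy traitor' post defamed politician."))   # True
```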

The Austrian lower court described a different filtering model, in which Facebook would block posts only when the specified phrases appeared alongside any image of the plaintiff. That filter would presumably result in fewer mistaken removals, but it would raise additional and serious fundamental rights questions about data protection. To comply, Facebook would have to carry out biometric facial recognition scans affecting its other users and people who appear in their photos. Other filter variations, such as blocking “shares” of the entire original post defaming Ms. Glawischnig-Piesczek, can be imagined, but have not been identified or discussed in the case to date.

(2) The law

Two major sources of law form the backdrop for legal debates about filtering in the EU. Glawischnig-Piesczek discusses only the first, Article 15 of the eCommerce Directive. Article 15 prohibits Member States from imposing “general” monitoring obligations on platforms. The meaning of the prohibited “general” monitoring, and the scope of potentially permissible “specific” monitoring, have long been debated. In principle, lawmakers in Brussels could amend Article 15 to change or clarify its rules, but to date many have expressed reluctance to do so. Supporters of filters have long argued that courts may impose targeted monitoring obligations for specifically identified content under Article 15 — analysis that is now supported by Glawischnig-Piesczek.

The second major source of law is the guarantee of fundamental rights under the EU Charter. The CJEU has noted in the past that requiring Internet platforms to monitor users’ communications burdens those users’ rights to privacy and data protection. It has also identified threats to freedom of expression and information, because an automated filter “might not distinguish adequately between unlawful content and lawful content, with the result that its introduction could lead to the blocking of lawful communications.” (SABAM v. Netlog Par. 50). The European Court of Human Rights has gone further, finding that Hungary violated the European Convention by holding a hosting platform strictly liable for defamation posted by its users — effectively requiring the platform to monitor users’ posts in order to avoid liability.

Scholars have identified other fundamental rights concerns with over-reaching platform liability laws, including rights to a fair trial and effective remedy for people whose online expression and participation are “adjudicated” and terminated by platforms. And an increasing body of studies suggests that when automated content filters attempt to parse human language, they disproportionately silence lawful expression by members of minority or marginalized racial and linguistic groups. These problems, along with potentially disproportionate deployment of flawed filters to target the speech of religious and ethnic minorities in contexts such as anti-terrorism, suggest that filtering mandates also implicate Internet users’ rights to equality and non-discrimination before the law.

(3) The technology

The most commonly deployed filters today are designed to find duplicates of known, specific images, audio, videos, or text. Many major platforms rely on duplicate-detection filters like PhotoDNA to find child sexual abuse images or videos, and to find violent extremist images or videos. Duplicate-detection filters are not perfect, but the most sophisticated ones can often find near-duplicates, like images that have been cropped or colorized. Duplicate-detection filters for written text are technically simpler and have existed for decades, but are notoriously error-prone, because specific words or phrases can so easily be unlawful in one situation but innocuous in another. To the best of my knowledge, no major platform used text filters to automatically block users’ posts prior to the Glawischnig-Piesczek ruling. Text filters can play an important role, however, in prioritizing posts for human review. 
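As a rough illustration of how duplicate detection works, the sketch below compares perceptual hashes of a known image and a new upload using the open-source imagehash library. It is a simplified stand-in for proprietary systems like PhotoDNA, and the distance threshold and file names are assumptions made for the example.

```python
# Simplified illustration of image duplicate detection via perceptual hashing,
# using the open-source imagehash library (a stand-in for proprietary tools
# like PhotoDNA). A small Hamming distance between hashes suggests a
# near-duplicate, e.g. a cropped or recolored copy of a known image.

from PIL import Image
import imagehash

THRESHOLD = 8  # assumed cut-off; real systems tune this against error rates

def is_near_duplicate(known_path: str, upload_path: str) -> bool:
    known_hash = imagehash.phash(Image.open(known_path))
    upload_hash = imagehash.phash(Image.open(upload_path))
    return (known_hash - upload_hash) <= THRESHOLD  # subtraction = Hamming distance

# Hypothetical file names, for illustration only:
# is_near_duplicate("court_identified_image.jpg", "new_upload.jpg")
```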

The EU public debates to date, and objections based on filters’ errors, have largely focused on duplicate-detection filters. In the 2018-2019 discussion of the Copyright Directive, these filters were widely referred to as “upload filters.” It is rumored, however, that the vocabulary used in the Terrorist Content Regulation trilogue has shifted, referring to duplicate-detection filters as “re-upload filters” and reserving the term “upload filter” for software that would in theory be capable of detecting unlawful material the very first time the machine encounters it. The most visible public proponent of such software has been Facebook CEO Mark Zuckerberg, who has testified about the bright future he expects from machine learning or artificial intelligence-based content moderation. Several of the AI experts Facebook brought on board to build that future, however, have cautioned that while software is getting better at challenges like distinguishing broccoli from marijuana, it is very far from discerning the nuances of human communication. Review by humans remains critical for assessing the message conveyed by new material.

Facebook currently reports a very high rate — 98% or higher — of proactive, machine-based detection for the content the company classes as “terrorist propaganda.” But it’s unknown what portion of that figure is represented by particular technologies. These may include conventional duplicate detection (machines flagging copies or near-copies of previously identified material, as discussed above); threat profiling based not on content but on uploaders’ behavior (using spam-fighting tools to flag suspicious patterns of contacts, followers, or posting locations, for example); artificial intelligence capable of recognizing specific telltale images embedded in new material (detecting the ISIS flag or possibly weapons, for example); or other as-yet-undisclosed technologies that somehow come closer to understanding the message conveyed by newly posted content.

(4) The Terrorist Content Regulation

The Terrorist Content Regulation moved very rapidly through Commission, Council, and Parliament drafts over the course of 2018-19. It is currently being reconciled to a final version in trilogue negotiations between those three bodies — although, because of the intervening 2019 elections, there has been considerable turnover in the people who make up the three negotiating institutions. The Regulation assigns significant new powers to to-be-determined national authorities, who (in some drafts) might be local law enforcement bodies rather than courts or regulators. The powers include ordering platforms to take content down within one hour, and may include referring content for removal based on platforms’ Terms of Service or requiring platforms to filter users’ posts. Critics of the Regulation — including human rights officials and civil society organizations — have argued that its filtering provisions threaten fundamental rights. Because filters cannot assess the context in which information appears, they run the risk of suppressing materials which may be unlawful in one context (such as an ISIS recruitment video) but which in a new context are lawful (such as news reporting or counter-speech).

The three drafts of the Regulation vary considerably in their treatment of filtering. The Commission and Council drafts, but not the Parliament draft, create new filtering obligations, which will largely be defined and enforced by the to-be-determined national authorities. Platforms are required to take “proactive measures,” where appropriate, including using “automated tools” to identify duplicates of previously-removed content and to detect new terrorist content. (Council Art. 6) In addition, a platform that receives takedown orders assumes a new quasi-regulatory relationship with the entity that issued the order. The platform must submit annual reports explaining “the specific proactive measures it has taken, including by using automated tools”. Authorities can then “evaluate the functioning of any automated tools used as well as the human oversight and verification mechanisms employed,” request additional measures, and ultimately mandate compliance through “a decision imposing specific additional necessary and proportionate proactive measures” on the platform. (Council Art. 6)

The three drafts being reconciled all specify that “where hosting service providers use automated tools” to assess user content, “they shall provide effective and appropriate safeguards to ensure that… particular decisions to remove or disable content… are accurate and well-founded”, and that these “safeguards shall consist, in particular, of human oversight and verifications” of filters’ decisions. (Art. 9) Such human oversight is mandatory for platforms using filters in the Parliament draft, but is required only “where appropriate and, in any event, where a detailed assessment of the relevant context is required” in the Commission and Council drafts. (Art. 9.2) This emphasis on human oversight (often also called “human review”) is consistent with statements from human rights organizations, including the Council of Europe, which has said that “[d]ue to the current limited ability of automated means to assess context,” platforms that use filters should “ensure human review where appropriate.”

The monitoring obligation in the Commission and Council drafts bears an ambiguous relationship with Article 15 of the eCommerce Directive. For the most part, lawmakers have indicated that they believe these obligations are consistent with Article 15. The Commission’s draft, for example, states that mandatory proactive measures “should not, in principle, lead to the imposition of a general obligation to monitor” in contravention of Article 15. But a Recital also says that “decisions adopted by the competent authorities on the basis of this Regulation could derogate from the approach established in Article 15(1)” of the Directive. (R 19) The Glawischnig-Piesczek ruling simplifies this relationship somewhat, by defining a class of filtering orders that can be issued consistent with Article 15.

C. Glawischnig-Piesczek’s implications for future cases and laws, including the Terrorist Content Regulation

Glawischnig-Piesczek leaves a series of unresolved issues for both later stages of the case itself and future EU laws affecting platform content moderation and filtering. It clears the way for courts to order filters under the eCommerce Directive, but simultaneously makes it harder for those filters to meet the requirements of fundamental rights under the EU Charter. The case tells us that under Article 15, platforms can be compelled to block specified content using automated means, but cannot be compelled to have human reviewers check filters’ work. This prohibition on requiring “independent assessment” by platforms effectively eliminates human review as an element of legally mandatory filtering operations. That’s a problem for the Terrorist Content Regulation, which in Article 9 requires human review of filters’ work. And it’s a problem for filtering proposals generally, because without human review, filtering is much harder to reconcile with fundamental rights.

In this section I will discuss (1) the Court’s reasoning about Article 15, (2) general fundamental rights questions for Member State courts and lawmakers in the wake of Glawischnig-Piesczek, (3) specific fundamental rights issues raised by the ruling’s limits on “independent assessment” by platforms, (4) limits on orders for platforms to find new, non-duplicate content, and (5) which national authorities can issue filtering orders.  

(1) The Court’s analysis of Article 15

The Court in Glawischnig-Piesczek interprets Article 15 to permit monitoring orders covering content “identical” or “equivalent” to material deemed illegal by a court. But it says that, to avoid being “general” and thus prohibited, the monitoring order must be capable of being carried out by automated means, without requiring “independent assessment” of the filtered content by platforms. The Court’s reasoning in support of this conclusion departs significantly from prior case law.

Article 15 of the eCommerce Directive says that Member States cannot impose on hosts any “general obligation to monitor the information which they transmit or store[.]” Other Articles say that courts can order platforms to “prevent” legal violations, however, and Recital 47 states that Article 15 does not prevent “monitoring obligations in a specific case.” In L’Oreal v. eBay, the CJEU discussed the boundaries between prohibited “general” monitoring and permissible “specific” injunctions. “[T]he measures required of the online service provider,” it explained, “cannot consist in an active monitoring of all the data of each of its customers.” (Par. 139) By contrast, courts could potentially issue more specific orders for a host to terminate a particular user’s account, or make him or her easier to identify. (Par. 141-142) Similarly, in Tommy Hilfiger v. Delta Center A.S., the CJEU held that courts could not require “general and permanent oversight” over all customers, but could require measures aimed at “avoiding new infringements of the same nature by the same” customers. (Par. 34)

In Glawischnig-Piesczek, the Court moves away from the standards set forth in these earlier cases, without citing or discussing them. The injunction it approves would seemingly require Facebook to monitor every post by every customer. Instead of defining prohibited “general” monitoring as monitoring that affects every user, the Court effectively defines it as monitoring for content that was not specified in advance by a court. Under Article 15, it concludes, platforms can be compelled to monitor for specific content “which was examined and assessed by a court… which, following its assessment, declared it to be illegal.” (Par. 35)

The Court holds that Member State courts can order platforms to filter both “identical” and “equivalent” content, so long as the courts’ orders are precise enough to avoid requiring independent judgment by the platform. As the Court explains,

It is important that the equivalent information… contains specific elements which are properly identified in the injunction, such as the name of the person concerned by the infringement determined previously, the circumstances in which that infringement was determined and equivalent content to that which was declared to be illegal. Differences in the wording of that equivalent content, compared with the content which was declared to be illegal, must not, in any event, be such as to require the host provider concerned to carry out an independent assessment of that content. (Par. 45)

This standard and the requirements of Article 15 can be met, the Court continues, so long as any monitoring is “limited to information containing the elements specified in the injunction, and its defamatory content of an equivalent nature does not require the host provider to carry out an independent assessment, since the latter has recourse to automated search tools and technologies.” (Par. 46-47, emphasis added.) This definition largely collapses the difference between “equivalent” and “identical” content, since both must be identified in advance with sufficient specificity to allow a machine to reliably, without human supervision, carry out a court’s order.

(2) Fundamental rights questions in post-Glawischnig-Piesczek filtering decisions

Courts issuing orders under the newly clarified Article 15 guidelines — including the Austrian courts in this case — will face difficult questions about the orders’ proportionality and compliance with fundamental rights. For example, before issuing a filtering injunction, must a court first determine that the prohibited image, text, or other content will foreseeably violate the law in every new context where it might be re-used? If a filter will foreseeably make mistakes, does the court have to balance the filter’s benefits for a claimant like Ms. Glawischnig-Piesczek against the rights of unknown future people affected by the errors?

Both sides of that balance are affected by filters’ real-world operations and consequences. On the claimant’s side, the question is whether a filter effectively protects legitimate interests and rights — like the reputation and dignity rights at issue in Glawischnig-Piesczek. A clumsy filter, with conspicuous errors causing a “Streisand effect” and additional negative attention to the claimant, might ultimately fail to protect her interests. Filters with even more complex or ambitious purposes, like protecting public safety and security in the case of filters targeting terrorist content, warrant particularly close analysis in this respect. As discussed in this filing with the European Commission, downsides of poorly designed or clumsy filtering efforts may include driving potential terrorist recruits into echo chambers in darker corners of the Internet; silencing moderate voices in communities vulnerable to radicalization; and fueling mistrust and anger within those communities. Researchers consistently identify perceived or actual mistreatment and social marginalization as important risk factors for radicalization. (See research discussed at 20-26 here.) All of this complicates the analysis of filters’ upsides in balancing fundamental rights considerations.

On the other side of the fundamental rights balance are the interests of Internet users affected by filters. In Facebook’s case, those users number in the billions, and post over half a million comments every minute. Operating at that scale, even a filter with an error rate of 0.1% would make the wrong decision 500 times each minute, or 720,000 times each day. The resulting removals of lawful material would affect fundamental rights including rights to expression and information, fair trial, and remedy. Given mounting concerns about errors disproportionately harming minority groups, they may also implicate rights to equality before the law. And regardless of filters’ errors, automated scanning and restriction of users’ posts would implicate data protection and privacy rights (both Charter rights and rights under provisions like GDPR Article 22) for the people whose communications or images are processed. As the facial recognition order from the lower court in Glawischnig-Piesczek illustrates, these people might not even be users of the platform, but third parties whose images or other information got posted by someone else.
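The figures above follow from simple arithmetic on the half-a-million-comments-per-minute volume cited in the text. Here is the back-of-the-envelope calculation, with the 0.1% error rate as a purely hypothetical assumption.

```python
# Back-of-the-envelope calculation behind the figures above. The comment
# volume comes from the text; the 0.1% error rate is a hypothetical assumption.

comments_per_minute = 500_000
error_rate = 0.001  # 0.1%

errors_per_minute = comments_per_minute * error_rate  # 500 wrong decisions per minute
errors_per_day = errors_per_minute * 60 * 24          # 720,000 wrong decisions per day

print(errors_per_minute, errors_per_day)  # 500.0 720000.0
```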

There is also the question of how courts can gain adequate information about filters’ potential errors — both technical errors in failure to identify duplicates, and legal errors in failing to identify a duplicate’s changed meaning in a new context. In a typical platform liability case, no party before the court will truly share or represent the interests of Internet users who will be affected by a filtering injunction. Neither the plaintiff nor the platform has proper incentives to explain the technologies’ shortcomings. Platforms that depend on algorithmic content analysis for core economic functions like ad targeting, or that are banking on ambitious future AI-driven business models, are ill-suited to educate courts about the problems with those very technologies. Data protection issues, in particular, are unlikely to be raised by platforms that are simultaneously defending their own automated content analysis in other lawsuits or investigations by Data Protection Authorities. (Indeed, no party in Glawischnig-Piesczek seems to have briefed the courts about data protection concerns, despite CJEU precedent on point.) This lack of representation for user interests is a structural problem in intermediary liability cases, and makes them particularly important candidates for interventions or amicus briefing by experts and public interest organizations.

(3) Fundamental rights questions for filters without human review

Glawischnig-Piesczek raises particularly complex questions about the relative roles of humans and machines in deciding which expression and information can be shared on an Internet platform like Facebook. The complexity arises from the Court’s interpretation of Article 15, which precludes requiring platforms to “carry out an independent assessment” of content flagged by filters. The upshot seems to be that courts can make a platform like Facebook use technical filters, but can’t require that the company’s human employees assess or correct filters’ content removal decisions. (This analysis leaves open the possibility for platforms to voluntarily correct filters’ errors, even if courts cannot require them to do so. The CJEU avoids a dangerous implication — which as discussed in Dolphins in the Net, was raised by the AG’s opinion — that such review would compromise platforms’ core immunities under eCommerce Directive Article 14.)

The CJEU’s interpretation puts Article 15 in serious tension with the draft Terrorist Content Regulation, which requires “effective and appropriate safeguards” against erroneous removal, including human evaluation of filters’ decisions. (Art. 9) It also raises questions under the 2019 Copyright Directive, which requires platforms to “prevent further uploads” of specific works, and states that “decisions to disable access to or remove uploaded content shall be subject to human review.” (Arts. 17.4 and 17.9) It may even raise questions about requiring platforms to consider appeals from users who say their expression was wrongly removed, since such appeals would effectively lead platform employees to review filters’ conclusions. Most importantly, though, Glawischnig-Piesczek puts Article 15 in tension with the fundamental rights protected by human evaluation. While there is room for debate about the absolute value of human review — experts have raised serious questions about human reviewers merely rubber-stamping machines’ decisions — it seems clear that deploying potentially flawed filters with no human review poses a greater threat to fundamental rights.

In the wake of Glawischnig-Piesczek, courts and lawmakers will face real challenges in identifying orders that are consistent with both Article 15 and fundamental rights. Seemingly, authorities acting under Article 15 can only mandate filters that, even without human review, are reliable enough to protect platform users’ rights. Which filtering injunctions might meet that test? One part of the answer is technical. A technologically accurate filter, which can identify content specified in an injunction with no false positives (taking down too much) or false negatives (taking down too little), would be the most acceptable. The other — and harder — part of the answer is legal, and has to do with lawful re-uses of prohibited content in new contexts. In the simplest case, an order covering content that has no lawful uses in other contexts — as is the case for child sexual abuse material in many countries — would pose the fewest problems for fundamental rights.
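For readers curious about the technical half of that test, the sketch below shows one hypothetical way an expert might quantify a filter’s two error types against a human-labeled sample of its decisions. The function and data are invented for illustration and are not drawn from the case or the Regulation.

```python
# Hypothetical sketch of quantifying a filter's two error types against a
# human-labeled sample of its decisions. Nothing here comes from the case or
# the Regulation; the data is invented for illustration.

def error_rates(decisions):
    """decisions: list of (filter_blocked, actually_unlawful) boolean pairs."""
    false_positives = sum(1 for blocked, unlawful in decisions if blocked and not unlawful)
    false_negatives = sum(1 for blocked, unlawful in decisions if not blocked and unlawful)
    lawful_total = sum(1 for _, unlawful in decisions if not unlawful)
    unlawful_total = sum(1 for _, unlawful in decisions if unlawful)
    over_blocking = false_positives / lawful_total if lawful_total else 0.0
    under_blocking = false_negatives / unlawful_total if unlawful_total else 0.0
    return over_blocking, under_blocking

sample = [(True, True), (True, False), (False, True), (False, False), (True, True)]
print(error_rates(sample))  # (0.5, 0.333...): half the lawful posts blocked, a third of unlawful posts missed
```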

For content capable of lawful new uses, though, relying on automated content moderation without human review poses substantially greater challenges. That means the Article 15 standard set by Glawischnig-Piesczek will be particularly hard to meet. Extremist content in particular raises problems, since such material can and does reappear in news coverage, scholarly analysis, counterspeech by opponents of violence, archival efforts of human rights organizations, and other lawful and important contexts. Foregoing human review for content of this sort would be particularly hard to square with the fundamental rights guaranteed in the EU Charter.

(4) Orders requiring platforms to identify new, non-duplicate unlawful content

The Austrian court in Glawischnig-Piesczek only asked the CJEU about filtering for equivalent content, not about more ambitious filters intended to ascertain the meaning of other, never-before-seen material. But the Court’s answer would seemingly apply to any filter that does more than duplicate-detection: under Article 15, filtering mandates may issue only for content specified with sufficient precision that a machine, operating without human review, can identify and remove it. This constrains the possible meaning of Terrorist Content Regulation Article 6, which, in the Commission and Council drafts, refers to both (a) preventing re-upload of previously identified content, and (b) “detecting, identifying and expeditiously removing or disabling” other, presumably newly discovered, terrorist content. Glawischnig-Piesczek suggests that the scope of permissible orders in category (b) may be quite narrow.

 

(5) Orders from authorities other than courts

A related question specific to the Terrorist Content Regulation is whether Glawischnig-Piesczek effectively approves not only filtering orders from courts, but also orders by the other to-be-determined national authorities empowered by the Regulation. Questions about non-court authorities were not raised in the case, so any answers are somewhat speculative. Nonetheless, they will be important for the Terrorist Content Regulation. Issuance of filtering orders without judicial review implicates both fundamental rights and the eCommerce Directive.

To the extent that the Court implicitly drew on the Charter in answering the questions referred in Glawischnig-Piesczek, its analysis and approval of filters seemingly would concern only those ordered after adequate court process. The CJEU did not consider or address orders from other authorities. As an interpretation of the eCommerce Directive, the relevance of Glawischnig-Piesczek for non-judicial orders is more ambiguous. The Court at times seems to ground its ruling in Directive Article 18, which concerns only “court actions” to prevent future unlawful posts. (Par. 29) But Directive Article 14 refers also to orders from an “administrative authority” empowered by national law, and Recital 47 speaks broadly of “orders by national authorities.” Overall, the ruling’s significance for non-Court-issued filtering orders and for the broader empowerment of “national authorities” under the Terrorist Content Regulation remains open for debate.

 

D. Conclusion

The CJEU’s ruling in Glawischnig-Piesczek appears at first glance to broadly support new Internet filtering mandates. For those courts and lawmakers charged with actually defining filtering orders and ensuring their proportionality with fundamental rights, however, the ruling creates major new problems. Identifying real-world filters that meet EU Charter requirements while simultaneously fulfilling the CJEU’s specifications under Article 15 will be, at the very least, challenging.

Comments on this post are very welcome. I will iterate on this analysis in a future publication and appreciate any feedback.
