Stanford CIS

Broad Consequences of a Systemic Duty of Care for Platforms

By Daphne Keller

In a previous post, I described the growing calls for what I called a “systemic duty of care” (SDOC) in platform regulation. I suggested that SDOC requirements would create difficult questions in ordinary intermediary liability litigation. By encouraging or requiring platforms to review user content or exercise more control over it, SDOC laws would change courts’ reasoning in cases about YouTube’s liability for a defamatory video, for example. Plaintiffs in such cases may suddenly find it either much harder or much easier to prevail against platforms.

In this post, I will describe some possible formulations of the SDOC, and their potential consequences for policymakers’ intermediary liability and content moderation goals. I will also touch on fundamental rights and competition/innovation concerns, which shape legislative options in this space. To do this, I will focus in on just one legal framework: EU law under the eCommerce Directive and proposed Digital Services Act. I think, though, that the larger points of this post apply equally to legal proposals in the U.S., UK, and elsewhere. I will briefly speculate on those larger implications in the final section of this post.

This analysis draws on the straw-man SDOC model described in the prior post.

[Update June 10 2020: To be clear, I am not endorsing this model. I would be particularly concerned about any proposal that includes proactive monitoring obligations. Such obligations raise both the fundamental rights concerns that I have written about elsewhere and the concerns about doctrinal incoherence and unintended consequences that I discuss in these two posts.]

Even with those variables pinned down, there remain many possible formulations of the SDOC.

I. Prescriptive and Flexible Approaches to the Systemic Duty of Care

I will examine two broad SDOC models: a highly prescriptive one and a flexible one. Examining these two poles should help us think through other variations. For example, a hybrid regime might enumerate required minimum measures and also require “reasonable” or “proportionate” additional efforts. Such a combined rules/standards approach would, I think, raise a mixture of the issues I discuss below.

A. The Prescriptive Model

Regulators could set clear, prescriptive rules defining a SDOC by specifically listing the proactive measures required of platforms. For example, regulators might require proactive filtering for child sex abuse material (CSAM) using the PhotoDNA system. Any such requirements would likely need regular review, and vary somewhat depending on a platform’s size or function, etc. But the point would be to provide clear legal requirements that must be met in order to comply with SDOC laws, and in order to claim immunity in ordinary litigation.

1. Platform Immunities

A rule like this would strengthen SDOC-compliant platforms’ protection from damages, taking away plaintiffs’ ability to argue that, by engaging in the listed measures, platforms lost the benefit of Article 14 immunities. As discussed in the previous post, plaintiffs sometimes successfully use these arguments now, as happened in the Estonian Delfi litigation. That risk in turn can make platforms in the EU reluctant to adopt “Good Samaritan” voluntary content moderation measures. Advocates for victims of online harms should weigh the loss of this argument from plaintiffs’ toolkit when assessing SDOC proposals.

If a plaintiff did successfully argue that a platform failed to meet the SDOC, on the other hand, the consequences would be significant. The plaintiff would thereby defeat one of the platform’s key defenses, making an ultimate finding of liability more likely. And a ruling that a platform had systemically failed to meet the SDOC and maintain its Article 14 immunities in one case would expose the platform to follow-on suits by other plaintiffs claiming injury from the same time-period.

A prescriptive SDOC would also probably increase platforms’ exposure to injunctions. Courts could, as in the U.K. Mosley v. Google case, find that because a platform already filtered for one class of prohibited content (CSAM, in the Mosley case), new orders to filter additional content would be reasonable, not unduly burdensome, and consistent with Article 15 of the eCommerce Directive. (Lawmakers could legislatively prohibit this boot-strapping of required SDOC filters into new, case-by-case judicial injunctions, but I have not heard that idea discussed.)

In this SDOC scenario, regulators could presumably issue rules or guidelines requiring ex ante filtering measures, such as using PhotoDNA to screen uploads against existing CSAM hash lists. Courts, meanwhile, could order ex post filtering for specific items of content identified in litigation against a particular platform. Following the CJEU’s Glawischnig-Piesczek ruling, such orders can extend to content identical or equivalent to that deemed illegal in a particular case.

Adding an ex ante SDOC to the legal picture would further erode the already tenuous distinction between prohibited “general” monitoring orders and permitted “specific” ones under Article 15. Is a regulatory requirement to filter using a CSAM hash list “specific” because hashes identify only specific images or videos previously deemed illegal? Does it matter if the list is supplemented over time (as such lists currently are) with additions that are never reviewed by courts or regulators? Suppose a regulator required platforms to search for all the words on a specific “bad words list” and take down posts that used those words in hate speech, while leaving up the posts that used those same words in artistic, scientific, or otherwise legitimate ways. Would that be “specific” (because it uses a list of specific terms) or “general” (because the content would not typically be equivalent to any previously identified material, and such review would require the platform to exercise its own legal judgment)?
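
To make that contrast concrete, here is a minimal illustrative sketch in Python. The hash list, word list, and function names are invented for this post; this is not PhotoDNA, GIFCT, or any platform’s real system. A hash check flags only exact copies of items already adjudicated, which is the strongest argument for calling it “specific.” A word-list check flags any post containing a listed term and cannot, by itself, supply the contextual legal judgment the hate speech example requires.

```python
# Illustrative sketch only -- not PhotoDNA, GIFCT, or any platform's real system.
# It contrasts the two kinds of filtering discussed above:
# (1) hash matching, which flags only exact items previously deemed illegal, and
# (2) keyword matching, which flags any post containing listed terms and cannot,
#     by itself, distinguish hate speech from artistic or scientific uses.

import hashlib

# Hypothetical hash list of previously adjudicated items (real systems use
# perceptual hashes of images; a cryptographic hash of raw bytes stands in here).
KNOWN_ILLEGAL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

# Hypothetical "bad words list" a regulator might require platforms to screen for.
BAD_WORDS = {"slur_a", "slur_b"}


def matches_hash_list(content: bytes) -> bool:
    """Flags only exact copies of items already on the list ("specific")."""
    return hashlib.sha256(content).hexdigest() in KNOWN_ILLEGAL_HASHES


def matches_word_list(post_text: str) -> bool:
    """Flags any post containing a listed term, regardless of context.

    Deciding whether the use is hateful or legitimate (news reporting, art,
    a victim describing her own experience) requires human or legal judgment
    that this check cannot supply -- which is why such screening starts to
    look like "general" monitoring.
    """
    words = post_text.lower().split()
    return any(word in BAD_WORDS for word in words)
```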

2. Content Moderation

Platforms that didn’t already carry out the required measures voluntarily would presumably adopt them once a prescriptive SDOC came into effect. They would thus find and remove more illegal content, meeting policymakers’ content moderation goals. Many platforms already voluntarily deploy the most reliable existing filters, though, and in particular use PhotoDNA for CSAM. So the overall change in real-world moderation practices would depend on how far regulators went in requiring additional, less reliable, or more costly proactive measures. (That in turn would lead to new concerns about Article 15, and about the fundamental rights and competition issues discussed below.)

Platforms that wanted to go beyond the law’s requirements and deploy novel “Good Samaritan” efforts that were not specified by regulators would, under this prescriptive version of the SDOC, be in a similar situation to the one they face today. The law would provide some disincentive to do so, because of the increased risk of losing Article 14 immunities or facing injunctions.

3. Fundamental Rights

Human rights experts have long identified proactive monitoring and filtering as serious potential threats to Internet users’ rights, including free expression, information access, freedom from discrimination, and privacy. Since platforms often deploy such filters already, a highly prescriptive law requiring filters could technically be considered an improvement. It would make governments’ role in shaping platforms’ content takedown practices more transparent than it is today, and make human and fundamental rights legal tools more available to challenge practices that disproportionately burden fundamental rights. Filters that are mandated by law would be comparatively amenable to democratic accountability in the legislative or regulatory process. And platform users or civil society groups could more readily challenge them in court.

As a practical matter, though, SDOC filtering requirements would probably be bad news for fundamental rights. One problem would arise from courts in intermediary liability cases. As mentioned above, once platforms adopt one kind of filter to meet the SDOC, the chances of courts issuing additional filtering injunctions for the particular image, video, or text at issue in a specific case go up. If the CJEU and Austrian courts’ analysis in Glawischnig-Piesczek is any indication, courts in this situation may make very little inquiry into fundamental rights.

Regulators establishing SDOC requirements, meanwhile, would quickly exhaust the list of proactive measures that can most plausibly be reconciled with fundamental rights. The filters that generate the fewest false positives and erroneous takedowns are the same ones that are already most widely deployed – in particular, PhotoDNA filters for illegal CSAM. Outside the CSAM context, filters almost invariably risk suppressing contextually-legal information, like an ISIS recruitment image re-used in news reporting, or a racial slur repeated by a victim of hate speech to describe her experience to friends. This imprecision and over-removal of lawful content has been platforms’ problem (and users’ problem) for as long as filters, like the controversial GIFCT database for violent extremism, have been deployed voluntarily. Once filters are mandated by the state, or listed as the legal precondition for essential legal immunities like Article 14, they become lawmakers’ problem. We should expect serious questions about their ability to require novel filters or other clumsy proactive measures consistent with fundamental rights.

4. Competition and Innovation

Finally, highly prescriptive requirements can pose particular difficulties for small companies trying to compete with today’s incumbent platforms. One burden involves technical costs. SMEs are unlikely to be in the room where any “industry-wide” list of required measures gets negotiated. Even if the technologies on the list can be licensed from Facebook, YouTube, or other vendors, incumbents’ tools may be a mismatch for smaller competitors’ services – locking newcomers into disadvantageous technical designs.

Another burden for SMEs involves the human and labor costs of managing error-prone filters, manually reviewing content, and handling appeals. If regulators create a SDOC that can only be met by large content moderation teams, it will significantly favor current tech giants over their smaller rivals. (This problem could theoretically be ameliorated by reducing the obligations of smaller platforms. But the model for this so far, in Article 17 of the EU Copyright Directive, is not encouraging.)

B. The Flexible Model

At the other end of the spectrum, legislators or regulators could set flexible SDOC standards by listing broadly defined and open-ended obligations, and providing that actions taken to meet those obligations cannot be used as evidence of knowledge or control to defeat platform immunity in individual intermediary liability disputes. For example, the law could use wording modeled on the Terrorist Content Regulation, requiring platforms to take measures to “prevent the dissemination of” particular content without specifying the mechanism to do so. This version of the SDOC would be much more adaptive to changing circumstances or diverse platforms, but would create considerable uncertainty.

Realistically, platforms would almost certainly seek to comply with such a law by using tools that serve two purposes: (1) complying with the SDOC by taking down illegal content, and (2) simultaneously taking down additional content that violates platforms’ own, broader speech prohibitions under their Terms of Service (TOS). TOS-enforcement tools are almost by definition easier to build than law-enforcement tools. That’s in part because large platforms design their TOS rules to be enforceable at scale using as much automation as possible. Drafting TOS speech prohibitions that are broader and blunter than legal ones also spares platforms the expense of carrying out legal analysis for individual countries – as long as the TOS prohibits more content than the laws do. Factoring in divergent national laws makes the picture for the EU even starker: a TOS broad enough to work across the bloc must sweep in everything that any Member State’s law prohibits, and more.

Most content monitoring tools used today fall in this TOS-enforcement, rather than law-enforcement, category. The GIFCT hash database of extremist content, for example, covers material that violates platform TOS rules rather than laws. Some widely used CSAM hash lists are also overinclusive, filtering both illegal content and legal pornography that violates platforms’ policies. Similarly, Facebook’s machine learning tools for things like hate speech are trained to enforce the platform’s own standards.
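
To illustrate why compliance through TOS-enforcement tools tends to over-remove, here is a minimal sketch in Python; the content categories and rules are invented and do not reflect any actual platform’s TOS or any country’s law. Because the TOS set is drawn as a blunt superset of the legal set, a check against it satisfies an obligation to remove illegal content only by also sweeping in lawful expression.

```python
# Illustrative sketch only; the categories and rules here are invented, not any
# platform's actual TOS or any country's actual law.

# Content a hypothetical national law prohibits.
ILLEGAL_CATEGORIES = {"csam", "incitement_to_terrorism"}

# A hypothetical TOS drawn deliberately broader, so it can be enforced at scale
# without country-by-country legal analysis.
TOS_PROHIBITED_CATEGORIES = ILLEGAL_CATEGORIES | {
    "adult_nudity",        # legal in many places, banned by the TOS
    "graphic_violence",    # includes lawful news footage
    "terrorist_imagery",   # includes lawful reporting and research uses
}


def removed_under_law(category: str) -> bool:
    return category in ILLEGAL_CATEGORIES


def removed_under_tos(category: str) -> bool:
    return category in TOS_PROHIBITED_CATEGORIES


# Everything the law requires taken down is also taken down under the TOS...
assert all(removed_under_tos(c) for c in ILLEGAL_CATEGORIES)

# ...but the TOS also removes content the law permits -- the over-removal
# problem that a TOS-based route to SDOC compliance would entrench.
lawful_but_removed = TOS_PROHIBITED_CATEGORIES - ILLEGAL_CATEGORIES
print(sorted(lawful_but_removed))
```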

Platforms’ inevitable reliance on TOS-enforcement tools to comply with a flexible but legally mandatory SDOC would generate a new set of legal wrinkles, beyond those of the prescriptive SDOC. (Unless, that is, lawmakers required tools that enforce national laws without taking down lawful expression – as the EU attempted to do in Article 17 of the Copyright Directive. The stakeholder dialogues about that mandate so far suggest that, even in relatively straightforward areas like copyright, such tools do not exist.)

1. Platform Immunities

A flexible SDOC standard would create a whole new point of dispute in intermediary liability cases: whether platforms’ actions suffice to meet SDOC requirements. This would matter (1) because platforms risk penalties for violating the SDOC requirements themselves, and (2) because a platform that did not meet SDOC requirements would lose Article 14 immunity in the case, and likely in follow-on cases brought by other plaintiffs.

Having the platform’s systemic duty of care obligations determined in the context of harms to an individual plaintiff might also push interpretations of the SDOC in a different direction than a regulator would have taken when crafting more prescriptive rules through a public process, with input from the public, NGOs, and technical experts. (This is one reason why the UK Online Harms proposal attempts to separate these roles: the regulator determines duty of care obligations, but only courts review individual cases.)

If a court determined that a platform’s proactive measures went beyond what the flexible SDOC required, a further question would arise: whether plaintiffs can use those surplus measures to defeat Article 14 immunity or to support new monitoring injunctions. This is roughly the same issue that exists today for voluntary, “Good Samaritan” content moderation efforts, or would exist for efforts that exceed the requirements of a prescriptive SDOC. The answer might depend on whether the platform exceeded the law’s requirements by accident (using a cheap filter that errs on the side of removing lawful content) or by design (using a filter built to enforce TOS rules).

2. Content Moderation

In principle, a flexible approach would give platforms both legal incentive and leeway to innovate new content moderation methods and tools. In practice, the consequences are hard to predict. Platforms would have to navigate between moderating too little (thus failing to meet the SDOC requirement and losing Article 14 immunities) and moderating too much (thus meeting the SDOC, risking the loss of Article 14 immunities for exercising too much control, and potentially facing legal consequences for removing too much lawful expression). This uncertainty, combined with divergent SDOC standards or litigation outcomes and divergent underlying speech laws in different EU Member States, might make the flexible SDOC considerably more unpredictable than today’s already fragmented EU intermediary liability landscape.

3. Fundamental Rights

On the up-side, flexible standards would give platforms more leeway to figure out meaningful technical improvements, and perhaps arrive at more nuanced automated assessment of content over time. And allowing platforms to enforce diverse TOS-based rules – instead of converging on a single standard driven by prescriptive legal requirements – could, in principle, contribute to a more pluralistic online speech environment. (In reality, diversity of platform speech rules might be driven more by lawmakers’ choice between two conflicting goals: on the one hand promoting true diversity and competition in the market for platform services, or on the other accepting incumbents’ dominance as the cost of more effective content regulation.)

The downsides of open-ended SDOC standards could be considerable, though. Proactive measures devised by platforms themselves would, even when coupled with transparency obligations, be far less subject to meaningful public review, accountability, or legal pressure to protect fundamental rights than standards created through direct state action in the prescriptive model. If SDOC laws required proactive measures to enforce legal rules, but platforms could meet the requirement by deploying the clumsier tools designed to enforce broader TOS-based speech prohibitions, the SDOC would incentivize even wider deployment of TOS-enforcement technologies and the over-removal of lawful content that comes with them. Free expression advocates may argue that this is a real problem – that by interpreting the SDOC this way, or by rewarding platforms with immunity for deploying TOS-enforcement tools, governments violate their own positive obligations to protect fundamental rights. This question, as well as the question of whether tools to enforce the more nuanced rules of national law can even be developed, should be squarely addressed before SDOC obligations are seriously considered.

4. Competition and Innovation

Flexible standards are generally supposed to be better for competition. They allow SMEs to adopt measures that are reasonable and proportionate given their capacity or the nature of their business. That said, flexible SDOC rules do have some downsides for small companies, since those companies are generally unlikely to ever litigate to develop more legal clarity about the SDOC rules that apply to them. Litigation is expensive, and the investment may only be justified for platforms that expect to be repeat players, sued more than once on similar claims. The result may be an inevitable drift toward public understanding of SDOC law based only on precedent created by large platforms. By the same token, smaller platforms may have more reason to avoid litigation by adopting compliance measures that err on the side of removing content. That in itself can have anti-competitive consequences over time, making smaller platforms less attractive to users than their larger competitors.

II. The Moderator’s Dilemma

Perhaps the most famous law encouraging content moderation is the “Good Samaritan” clause of Section 230 of the U.S. Communications Decency Act (CDA 230). It encourages platforms to take user content down by immunizing “any action voluntarily taken in good faith to restrict access to” illegal or objectionable material. But CDA 230 also famously immunizes platforms for leaving user content up. It creates an unusual strict immunity – in contrast to the conditional immunity of notice-and-takedown-style laws like the DMCA or eCommerce Directive. Platforms immunized by the CDA can leave up illegal content, even if they know about it. This dual immunity is not a coincidence. CDA 230’s drafters concluded that without both protections, platforms would be stuck in what scholars have dubbed the “moderator’s dilemma”: afraid to engage in content moderation, lest their efforts cause courts to treat them like editors with legal responsibility for any unlawful material they failed to take down.

CDA 230’s dramatic resolution of the moderator’s dilemma was to simply eliminate liability for much of the illegal content posted by users. This is not an outcome we should expect to see in most legal systems, or that people in much of the world consider desirable. It was very much a product of its place (the U.S., with its unique faith in free speech and market-based solutions, and its uniquely high litigation risks), its time (the 1990s, during passage of the 1996 Telecommunications Act – a particularly deregulatory era, even by American standards), and some seriously complicated multi-year legislative and litigation gamesmanship. CDA 230’s supporters effectively bet that platforms would make greater efforts to clean up the Internet if they could do so without fear of resulting liability.

Both European and U.S. policymakers are now asking about other resolutions to the moderator’s dilemma. Can laws induce “Good Samaritan” efforts without offering the strict immunity component of CDA 230’s bargain? Or do notice-and-takedown-based laws inevitably create the tension that has plagued platforms under laws like the eCommerce Directive or the DMCA all along?

Platforms have long faced pressure – from users, media, lawmakers, advertisers, and others – to do more aggressive content moderation. But such efforts have long created a legal Achilles’ heel, endangering key immunities in litigation under laws like the eCommerce Directive and DMCA. Those conflicting imperatives, and the resulting appearance of “wanting to have it both ways” by posing as both responsible curators and neutral conduits, have until now largely been platforms’ problem. Attempting to graft a duty of care onto existing intermediary liability laws, though, makes it lawmakers’ problem. It’s not clear whether it is a problem that they, or anyone, can truly solve.
