Systemic Duties of Care and Intermediary Liability

Policymakers in Europe and around the world are currently pursuing two reasonable-sounding goals for platform regulation. First, they want platforms to abide by a “duty of care,” going beyond today’s notice-and-takedown based legal models to more proactively weed out illegal content posted by users. Second, they want to preserve existing immunities, with platforms generally not facing liability for content they aren’t aware of.

Both goals sound reasonable, and on the surface they seem compatible. But there are a number of devils in the details. I don’t know of any legal system that has effectively combined such a compulsory duty of care, or even an immunity for “Good Samaritan” voluntary content moderation efforts, with the reactive notice-and-takedown model found under major immunity laws like the EU’s eCommerce Directive and the US Digital Millennium Copyright Act (DMCA). I suspect the merger of the two will create unexpected headaches for courts, platforms, and even the people the duty of care was supposed to help: individuals or businesses harmed by illegal content online.

The hardest questions will be crystallized, I think, when courts have to decide whether a platform’s proactive content moderation efforts expose it to new risks in ordinary intermediary liability litigation – either because the platform loses its immunity from damages under laws like Article 14 of the eCommerce Directive, or because it opens itself up to more sweeping injunctions. The time to work through those questions is now, before lawmakers pass laws with potentially pervasive unintended consequences.

In this post, I will tee up those questions as I see them. First, I will explain what I call the “systemic duty of care” (SDOC) model. I will discuss its high-level benefits and drawbacks, and argue that the most consequential SDOC obligations will be those requiring platforms to monitor or filter user content. Second, I will describe how the SDOC is likely to reshape parties’ arguments about the key factors of knowledge and control in ordinary intermediary liability litigation. Third and last, I will lay out a straw-man “duty of care” for more detailed discussion in a later post. This first post is intended to be relatively straightforward. It sets up the foundation I think we need in order to carefully think through the implications of a SDOC.

My second post, later this week, will be far more speculative. It will build on this post’s analysis to explore two possible formulations of the SDOC – a prescriptive version that lays out specific requirements, and a flexible version that leaves exact measures up to platforms. For each, I will discuss potential consequences for (1) platforms’ immunities, (2) platforms’ content moderation practices, (3) users’ fundamental rights, and (4) competition and innovation. Those consequences are seriously complex, and I’m certain there are some things I’ve missed. I hope that whatever I get right or wrong in that post can contribute to a careful and rigorous discussion of what a duty of care might mean.

 

I. The Systemic Duty of Care

A. The Principle: Regulating Content Moderation Systems, Not Punishing Failures on a Case-by-Case Basis

The “systemic duty of care” is a legal standard for assessing a platform’s overall system for handling harmful online content. It is not intended to define liability for any particular piece of content, or the outcome of particular litigation disputes. (It is thus distinct from the ordinary tort law concept of “duty of care” that U.S. readers are likely familiar with.) UK scholars Lorna Woods and Will Perrin advanced one influential version of this model. The UK’s Online Harms proposal adopts a related version, to be enforced by the UK’s existing media regulator, OFCOM.

The basic idea is that platforms should improve their systems for reducing online harms. This could mean following generally applicable rules established in legislation, regulations, or formal guidelines; or it could mean working with the regulator to produce and implement a platform-specific plan. Improvements might include hiring more moderators, offering better training, or building simpler tools for users to flag harmful content and moderators to find and remove it. The regulator could take action against platforms that do not adhere to regulatory guidelines or sufficiently improve their overall systems. If a platform fails to take down a particular piece of unlawful content, the person harmed can still sue the platform in a regular court process. (And, at least in the UK’s version, courts would be the only venue for such case-by-case claims. The regulator would not consider individual cases.)

In one sense I have a lot of sympathy for this approach. Accepting that Internet-scale content moderation systems will inevitably fail in some cases, but working to optimize those systems as much as possible, is broadly sensible. And relying on trusted regulators (in countries where such things exist) can, in principle, be better than relying on courts in the first instance. Regulators can develop expertise, convene discussions, and work iteratively with platforms to reach better outcomes than courts could. They can also be charged with considering all of the rights and interests at stake, and not just those of the individual complainant and platform that might appear before a court. A regulator that took this duty seriously could assess whether particular mandates have enough upside (in protecting users from illegal content) to offset potential downsides (in harming other Internet users’ rights, or in setting standards that small competitors of today’s incumbents can’t meet). That broad perspective could provide a real advantage over courts today, which typically do not consider broader policy issues and only hear the complainant’s and the platform’s perspectives – but not those of third parties likely to be affected by the case’s outcome.

In another sense, I am quite leery of the duty of care idea. In the UK, at least, it has been used to elide the critical difference between content that is truly illegal and content that is harmful. The 2019 White Paper described a duty to remove “harmful” content – a category that can encompass expression and information that is protected by human rights law, and outside lawmakers’ authority to regulate.  

The idea of a duty of care for “harmful” content raises a question that is beyond the scope of this post: Given media regulators’ existing power to restrict “harmful” content in media like broadcast, can they restrict similar expression shared by ordinary users talking to their friends on Internet platforms? If so, how clear must they be about that exercise of state power – or how much can they conceal the state’s role by relying on platforms to prohibit “harmful” content under Terms of Service? The first question raises deep and as-yet-inadequately-explored issues about fundamental rights, the legal and policy foundations of media regulation, and today’s platform regulation efforts. The second question is closely connected; I wrote about it here. But both are questions for another day. For purposes of this discussion, I will assume a SDOC model that is intended only to lead to removal of genuinely unlawful content.

 

B. The Practice: Improved Notice and Takedown and New Monitoring Obligations

The actions platforms might take to comply with a SDOC generally fall into two categories. The first encompasses improvements to existing notice-and-takedown systems. Historically, platforms that received notices or otherwise learned about unlawful content largely responded by removing it, but platforms could also take other actions like demoting or demonetizing content, or facilitating user communications as in Canada’s “notice-and-notice” system for copyright. The second SDOC category – which is in many ways more consequential – includes obligations for platforms to proactively detect and remove or demote such content. (As a terminology note, arguably only obligations in this second category count as “duties of care” in EU parlance, given the eCommerce Directive’s reference to potential duties “to detect and prevent” illegal activities in Recital 48.)

Reactive Notice and Action Measures: Changes to notice and takedown might include requiring platforms to make it easier for notifiers to flag content, to respond to those notifiers more quickly and with better information, or to offer appeals to users whose posts are removed. Changes like this have long been contemplated as part of ordinary intermediary liability law – for example in the European Commission’s 2012 Notice and Action proceeding. As a result, academic literature and civil society positions in this area are relatively well-developed. I think such changes, as well as slightly more far-reaching requirements grounded in notice and takedown (like account termination for repeat offenders, or transparency reporting requirements), could be adopted without significantly changing the dynamics of intermediary liability litigation.

Proactive Monitoring Measures: The second broad category of potential SDOC obligations would go beyond notice and takedown. Platforms would have to proactively monitor users’ communications, in ways that were, until recently, almost unheard of in most intermediary liability laws. For example, platforms might have to deploy automated filters to find duplicates of unlawful material, instruct employees to search for terms associated with illegal activity and take down suspicious results, or periodically review posts in forums with a history of illegality. These are the activities that I think may cause headaches for courts and parties in ordinary intermediary liability litigation, as well as troubling consequences for competition and for platform users’ fundamental rights.
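To make the duplicate-filtering idea slightly more concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical names and placeholder values of my own) of the kind of hash-matching check such an obligation presupposes. It is not any platform’s actual system, and it only catches exact, byte-for-byte copies of previously identified files:

```python
import hashlib

# Hypothetical blocklist: hashes of files previously identified as unlawful,
# for example through earlier takedown notices. Placeholder value only.
KNOWN_UNLAWFUL_HASHES = {
    "0f3c...placeholder...",
}

def is_known_duplicate(file_bytes: bytes) -> bool:
    """Return True if an upload exactly matches previously flagged material."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_UNLAWFUL_HASHES

def handle_upload(file_bytes: bytes) -> str:
    """Hold exact duplicates of known-unlawful files for human review."""
    if is_known_duplicate(file_bytes):
        return "held_for_review"
    return "published"
```

Even this toy version hints at why proactive monitoring is so consequential: the platform must inspect every upload, and anything more robust than exact matching – perceptual hashing to catch altered copies, keyword searches, or human review of high-risk forums – adds cost and error on top of that baseline.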

 

II. Ordinary Intermediary Liability Litigation Against the Backdrop of the Systemic Duty of Care

When plaintiffs prevail against platforms in intermediary liability cases, it is usually by establishing some version of the claim that the platform knew about, should have known about, or had control over the illegal content at issue in the case. This standard litigation dispute will get very complicated if the SDOC effectively requires a platform to assert more control over user posts and to gain more knowledge about them.

A. Knowledge and Control in Intermediary Liability Cases

Questions about knowledge and control appear in different forms under various national laws. In the EU, for example, the CJEU’s L’Oreal ruling tells us that a platform can lose Article 14 immunity because it “plays an active role of such a kind as to give it knowledge of, or control over,” user content. Knowledge can also strip a platform of immunity in the U.S. under both federal criminal law and the DMCA, and could trigger liability under pre-Internet laws applied to publishers and distributors of third-party content in areas like defamation and copyright. Laws that disregard knowledge are rare, but they do exist – like the blanket immunity of the U.S. Communications Decency Act’s Section 230 (CDA 230) at one end of the spectrum, and strict liability at the other.

Control is even more foundational to platform liability. That’s in part because courts and legislatures may consider it unjust to hold platforms liable for user behavior they can’t control – and, conversely, consider that a platform should exert control if it is capable of doing so. Doctrinally, a service that does not exercise control is likelier to fall into a statutorily protected category in the first place (like the access, caching, and hosting providers immunized by the U.S. DMCA and EU eCommerce Directive, and the “interactive computer services” immunized by CDA 230). A platform that exercises too much control will also lose immunity under rules like the L’Oreal “passivity” standard or the DMCA’s “right and ability to control” standard.

The eCommerce Directive and DMCA both permit certain injunctions, even against intermediaries that are otherwise immune from damages. Here again, the platform’s existing capabilities – its capacity to know about and control user content – matter. In the U.K. Mosley v. Google case, for example, the claimant successfully argued that because Google already used technical filters to block illegal child sexual abuse material, it could potentially be compelled to filter the additional images at issue in his case.

Notice and takedown systems can in one sense be seen as formalized proxies for these more nuanced questions about knowledge and control. Receipt of an adequately substantiated notice, for example, can establish that the platform had enough knowledge to lose immunity. In any case, most real-world litigation happens when notice and takedown systems fail to resolve disputes (often because the plaintiff maintains the platform was never eligible for immunity and no notice is required, or because the platform maintains that it can leave content up despite receiving a notice). The upshot is that, regardless of the exact legal framing or terminology, disputes about knowledge and control feature prominently in intermediary liability litigation – and hence, in platforms’ routine decisions about content moderation.

B. Parties’ Arguments About Knowledge and Control in Intermediary Liability Litigation Under the Systemic Duty of Care

When litigation about platforms’ knowledge and control plays out against a backdrop of SDOC obligations, the script is likely to go something like this:

Plaintiff: This platform is no passive intermediary. It sets its own speech rules via its Terms of Service; it uses automated tools to detect, rank, and remove content; and its employees routinely review user posts to take down prohibited material. It is in control of what users see, much like a newspaper editor – and like that editor, should be liable for content it publishes. The platform has been notified about illegal content similar to this post before. It knew about the problem of posts like this, and should have known to search for them and take them down.

Defendant: The content moderation efforts plaintiff describes are legally required as part of our compliance effort under the SDOC. They can’t simultaneously make us liable in this case. That would put us in a damned-if-you-do, damned-if-you-don’t situation.

Court: Hmmm… Let’s see what the statute says about this situation.

So, what answer should the statute give? The plaintiff’s argument really would put platforms in an impossible position. By meeting its duty of care, the platform would expose itself to liability in ordinary litigation. But the platform’s argument also seems to go too far. If activities carried out based on the SDOC can never be held against platforms, then arguments that plaintiffs use successfully now will stop working. For example, in the Estonian litigation that led to the ECtHR’s Delfi ruling, the plaintiff successfully pointed to the platform’s use of a text-based filter to identify and remove certain hateful terms as evidence that the platform exercised control over user speech and was not eligible for immunity under Estonia’s implementation of Article 14. If a SDOC law were in place, though, that part of the litigation would play out differently. The platform would make the argument from the script above – that SDOC compliance efforts can’t create liability exposure in ordinary lawsuits.

To be clear, my own opinion is that “Good Samaritan” efforts like the hate speech filter in Delfi shouldn’t undermine platforms’ Article 14 immunities even under current law. But plaintiffs can make these arguments today, and sometimes they succeed. If lawmakers are moving the goalposts and making it harder for such plaintiffs to prevail against platforms, they should be clear about it.

Lawmakers could defend that trade-off. They could say that disempowering plaintiffs in the inevitable cases where platforms make mistakes and leave illegal content up is justified, because the SDOC will improve overall content moderation. In Europe (compared to the more litigious U.S.), that position might be politically viable. But that doesn’t seem to be what they are saying, at least for now. They are saying that it is possible to have it both ways – to encourage proactive moderation efforts, but to preserve overall liability standards as they have existed to date. I’m not sure that’s right, at least not without some big, ugly policy trade-offs.

If I had to bet, I’d predict that legislators will simply avoid the question, establishing a SDOC without spelling out what the consequences will be for ordinary litigation. If they do address the litigation question, my best guess is that they will make SDOC compliance a prerequisite for immunity without working out the details or implications of that linkage. For example, the law might provide that “platforms shall be eligible for limitations on liability only if they meet systemic duty of care requirements.”

That formulation would answer one big question about litigation, disadvantaging plaintiffs in cases like the ones I described above. At the same time, a provision like this would punt on a whole additional set of complex questions. Regulators and courts would be left to sort those out, and they would likely do so in ways that leave much to be desired. That’s what my next blog post will be about.

 

III. A Straw-Man Systemic Duty of Care

Based on the discussion in this post, I have settled on the following straw-man version of the SDOC.

- The SDOC sets a standard for platforms’ overall content moderation system (i.e. not a standard for liability in individual cases).

- The SDOC coexists with ordinary intermediary liability laws like Article 14 of the eCommerce Directive.

- Platforms must meet SDOC requirements in order to claim immunity under laws like Article 14 in ordinary intermediary liability litigation.

- The SDOC requires proactive monitoring measures (i.e. not just improved notice and takedown).

- The SDOC only requires platforms to tackle illegal content (meaning content that is prohibited by law, not content that might broadly be deemed “harmful”).

This model is a straw-man, intended to clarify analysis and discussion. It is not something I am advocating for. Indeed, I think it raises some very serious problems that have not been adequately surfaced in public discussion.

In the next blog post, I will use this straw-man to discuss two broad models for a SDOC: a “prescriptive” model, in which regulators spell out precisely the measures a platform must take, and a “flexible” model in which the measures are left undefined. Both, I suggest, will create complications. I will break out some likely consequences of each approach for platform immunities, content moderation practices, fundamental rights, and competition.