Toward a Clearer Conversation About Platform Liability

Author(s): Daphne Keller
Publication Type: Academic Writing
Publication Date: May 7, 2018

Read the full response at Emerging Threats

CIS Intermediary Liability Director Daphne Keller drafted this piece for the Knight First Amendment Institute’s “Emerging Threats” essay series, as a response to an essay by Fordham Law School’s Olivier Sylvain, “Discriminatory Designs on User Data.” Sylvain argues that a core U.S. Internet law, Section 230 of the Communications Decency Act (CDA 230), wrongly immunizes platforms for their role in disseminating online content that harms vulnerable and marginalized groups. Given platforms’ evolving ability to algorithmically curate and target user-generated content, he argues, the law should assign Internet companies greater obligations to prevent such harms.

Keller’s response acknowledges the serious concerns about harms to marginalized groups, but argues that many of the problems identified in Sylvain’s essay would not be solved by changing CDA 230. CDA 230 has important upsides, including benefits for online free expression and for innovation and competition against today’s incumbent Internet platforms. As a result, before considering further changes to the law, it is important to be very clear about which of today’s online ills can fairly be attributed to CDA 230.

Some harms Sylvain discusses, such as Facebook’s targeting of ads based on users’ interest in anti-Semitic themes, are cause for serious concern. But they are not CDA 230 issues, because the targeting behavior that Sylvain describes is not illegal. If we want the law to prevent such targeting, we will need to change much more than just CDA 230.  In the meantime, CDA 230 actually encourages platforms like Facebook to suppress offensive-but-legal speech and behavior.

Other harms, such as the unexpected or non-consensual re-use of data and images shared by users, involve breaches of trust between a platform and an individual user. In the wake of the Cambridge Analytica scandal, there are important questions about whether U.S. privacy and consumer protection laws adequately protect consumers against threats of this nature. But these, too, are not CDA 230 issues.

After discussing these legally distinct concerns, Keller turns to online threats that do implicate CDA 230. These include serious harms such as the online sharing of non-consensual pornographic images. Responding to Sylvain’s suggestion that platforms’ immunities should be reduced when they actively curate content and target it to particular users, she notes that limitations of this sort are common in areas of law outside of CDA 230, and suggests that we have much to learn from experience under those laws.

In particular, Keller argues that we should be wary of legal models that would immunize platforms only if they are sufficiently “neutral.” Legal standards based on “neutrality” are extremely difficult to apply to intermediaries other than infrastructure providers such as ISPs, and they may lead to serious unintended consequences, such as deterring platforms from policing for harmful material. A more sensible framing, she suggests, is one based on whether a platform “knows” about unlawful content. While knowledge-based liability standards raise their own problems, they provide precedent and potentially workable standards from other areas of law, including U.S. First Amendment jurisprudence.