Six Constitutional Hurdles for Platform Speech Regulation

Laws regulating platforms can also regulate their users. Some laws may protect users, as privacy laws often do. Others, including many well-intentioned regulations of online content, can erode protections for users’ rights. If such laws are crafted poorly enough, they will violate the Constitution.

 

This blog post lists six often-ignored constitutional parameters for U.S. lawmakers regulating platforms’ liability for online content, with a primary focus on the First Amendment. These parameters matter for proposals to modify Section 230 of the Communications Decency Act (“CDA 230”), the long-standing but now controversial law governing platforms’ liability for some of the illegal content shared by their users. My point is not to promote any particular set of speech rules, nor is it to praise the status quo in First Amendment jurisprudence. Rather, it is to map difficult terrain and make it more navigable.

 

Platform speech regulation laws often raise multiple overlapping constitutional questions, many of them head-scratchers. These are things almost no one studied in law school. The case law rarely provides clear guidance. But it does tell us that, at every turn, the First Amendment shapes Congress’s options. In this post, I will identify six issues that often come up, alone or together, in U.S. proposals. In a later post, I will use one particular proposal – a bill introduced last term by Representatives Malinowski and Eshoo – to illustrate how confusing it can be when these issues come together in a single, seemingly simple law.

 

The issues are:

  1. Congress can’t ban constitutionally protected speech.
  2. Laws that restrict only illegal speech, but foreseeably cause platforms to restrict legal speech, can violate the First Amendment.
  3. Laws requiring platforms to remove speech and laws requiring them to reduce its “reach” both trigger First Amendment scrutiny.
  4. Laws explicitly or implicitly requiring platforms to monitor and police their users raise multiple constitutional issues.
  5. Laws designed to regulate conduct are a bad fit for regulating online speech, but Congress has reason to use them anyway.
  6. Congress probably can’t avoid First Amendment restrictions by merely incentivizing, instead of requiring, platforms to take down lawful speech.

 

Constitutional Restraints on Platform Speech Regulation

1. Congress can’t ban constitutionally protected speech.

 

This should go without saying. But somehow the assumption that Congress can change CDA 230 in ways that effectively outlaw lawful speech keeps cropping up in news coverage, scholarship, and legislative discussions. There’s a reason for this. Lots of offensive and even dangerous speech is protected by the First Amendment. Lawmakers and staffers can be forgiven for initially assuming they can restrict things the Supreme Court has actually said are constitutionally protected speech – including racist hate speech, lies, and vile and traumatizing remarks made to grieving families. Many of the communications used to organize the violent January 6, 2021 attack on the Capitol, which left five people dead, are protected under the Supreme Court’s long-standing Brandenburg standard, for example.

 

Only the Supreme Court can change these key doctrines. Plenty of respected scholars have argued that they should. But in the meantime, lawmakers will misunderstand both the problems with online content and the scope of their own power if pundits and academics keep conflating established categories of legal and illegal speech, encouraging Congress to believe it can legislate against both.

 

2. Laws that restrict only illegal speech, but foreseeably cause platforms to restrict legal speech, can violate the First Amendment. 

 

A widely held assumption goes something like this: “If Congress just requires platforms to take down illegal speech, it’s not responsible for what platforms do next. Risk-averse platforms might overreact and take down lawful speech, but that’s not a First Amendment problem.” 

 

That assumption is wrong. Laws that result in over-removal by intermediaries can and sometimes do violate Internet users’ First Amendment rights. They can also violate the platforms’ own First Amendment rights. As the Supreme Court said in Smith v. California, overturning a strict liability law for booksellers, laws incentivizing excessive caution by intermediaries

tend to restrict the public's access to forms of the printed word which the State could not constitutionally suppress directly. The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered.

In other words, Congress must take care in delegating speech regulation to private companies. 

 

That leaves an open question about exactly what a permissible law looks like. How sloppy can the state mandate be before its speech-suppression consequences are so clear that courts should step in and, like the court in CDT v. Pappert, strike the law down? Most (but not all) American lawyers assume that a law like the Digital Millennium Copyright Act (DMCA) is constitutional, for example. The DMCA addresses a uniquely high-volume area of online infringement with a choreographed notice-and-takedown regime, including detailed procedural rules intended to protect users from wrongful takedowns. Similarly, few seriously question current law’s requirement for platforms to take action against child sexual abuse material, which is both uniquely horrific and easy to recognize. But putting platforms in charge of identifying illegal speech under complex doctrines like defamation or true threats -- which require nuanced assessment of facts and context -- would be very different.

 

Several writers have argued that the First Amendment demands speech protections so robust that courts applying it would effectively replicate the immunity CDA 230 creates by statute. For my money, Jeff Kosseff’s more nuanced read is probably the best one. (But I don’t agree with all of it. Also, I wasn’t kidding about money – his article is paywalled.) And Eric Goldman explains well how CDA 230 conveys civil procedure advantages that the First Amendment does not. My own writing lays out a fair amount of constitutional law on intermediary liability, including here (pp. 16-20) and here.

 

The point here is that the real-world consequences of laws matter. They can define the constitutional limits on Congress’s options when instructing platforms to take down illegal user content. Importantly, some of the best protections for Internet users’ rights involve rules that can realistically only be created by statute. The procedural protections in the DMCA may be imperfect. (See pp. 22-31 here.) But they are better than anything a court could plausibly come up with in adjudicating hazy proposed standards like “reasonableness” or “recklessness” for platform liability. (See pp. 10-11 here on that problem, and the “Litigation Process Problems” section here on the reasons courts fail to protect third-party users’ rights in plaintiff v. platform litigation.)

 

3. Laws requiring platforms to remove speech and laws requiring them to reduce its “reach” both trigger First Amendment scrutiny.

 

An increasingly popular assumption among non-lawyers is, roughly, “Congress can’t regulate constitutionally protected online speech, but there is no First Amendment issue if Congress regulates the reach and virality that platforms create through ranking or recommendation systems.”

 

That, too, is incorrect. The speech/reach framing is great for defining platforms’ own options. (Hat tip to my colleague Renee DiResta for formulating it.) But nothing about regulating amplification, recommendations, or ranking of users’ speech lets Congress avoid the First Amendment. I will explore the cases on this more in a forthcoming paper. [Update: Here it is!] There are a lot of them. The Supreme Court’s approach boils down to this oft-repeated point from U.S. v. Playboy:

 

It is of no moment that the statute does not impose a complete prohibition. The distinction between laws burdening and laws banning speech is but a matter of degree. The Government's content-based burdens must satisfy the same rigorous scrutiny as its content-based bans.

 

In other words, laws that “merely” regulate speakers’ reach and access to an audience clearly implicate the First Amendment. That doesn’t mean they are automatically unconstitutional, but they definitely face major uphill battles. The closest thing we have to precedent for laws like this comes from areas like broadcast regulation, where the Court has allowed Congress to restrict dissemination of otherwise lawful speech on grounds that roughly relate to broadcast’s “reach” or invasiveness. It has also upheld compulsory carriage of certain speech or speakers by private companies with bottleneck control over communications channels. (See here, pp. 18-22.) But the Supreme Court has emphatically rejected the idea that such precedent supports broadcast-style speech restrictions online. And even within the special area of broadcast and cable laws, the Court has approved only highly detailed regulatory regimes and evidence-backed lawmaking.

 

The idea of limiting the reach of harmful online content is intuitively appealing to many people, though. For one thing, highly viral content can do more damage than content few people see. (But that still only gives Congress leeway to act against the comparatively small category of viral illegal content, not lawful-but-awful content.) For another, it feels more reasonable to tell platforms to take responsibility for their own recommendations than to impose liability for every word users say. (Again, that unties Congress’s hands only for illegal content – and even for that, a must-not-amplify law would create the same First Amendment over-enforcement issues as the must-remove laws discussed at #2.)

 

This is a difficult area of law. As I said, I have a whole paper pending on it. But the bottom line is that Congress will waste time and resources if it buys into the current fad for regulating amplification, and skips examining the constitutional limits on such laws.

 

4. Laws explicitly or implicitly requiring platforms to monitor and police their users raise multiple constitutional issues.

 

How far can lawmakers go in effectively compelling platforms to proactively monitor, filter, or police user speech? This one is an iceberg of an issue. It’s been mostly hidden in the U.S. debate so far, but things will be ugly once we hit it. Questions about platform filtering mandates have loomed large in EU legislation, litigation, and scholarship for several years now. To the extent they have come up in DC, the focus has mostly been on the Fourth Amendment. But express or implied filtering mandates also raise free expression issues and arguably equal protection issues.  

 

The Fourth Amendment problem with laws that require platforms to search for illegal user content is that carrying out legally mandated (as opposed to voluntary) searches could make the companies agents of the state for legal purposes. That has consequences for real-world prosecutions. Material surfaced by platforms’ searches could be excluded as evidence in trials of serious wrongdoers – including purveyors of child sexual abuse material (CSAM). It’s in that context that this issue has mostly arisen so far. Justice Gorsuch, when he was a federal appellate judge, held that another nominally private party was a state actor under similar circumstances. (As my colleague Riana Pfefferkorn explains here, that actor was NCMEC – a complicated player under current statutes. Jeff Kosseff wrote lucidly about the key cases and platforms’ own status here.) For now, prosecutors have largely overcome these evidentiary objections, in part because U.S. law expressly says platforms are not required to monitor for CSAM. That makes it easier to frame platforms’ searches as unilateral, voluntary behavior rather than action required or coerced by the state. But if Congress changed the law to eliminate that assurance and adopted liability standards that effectively pushed platforms to monitor their users’ communications, things would be very different.

 

The First Amendment issue relates to the point the Supreme Court made in Smith (the bookseller case) about risk-averse intermediaries erring on the side of caution and taking down lawful speech. That can be bad enough if platforms take down everything they happen to see that might violate a law, or honor every bogus legal accusation. But the scope for over-enforcement expands dramatically if the intermediary has to pre-screen and make risk-averse judgment calls about every word users speak. This problem is compounded when platforms turn to automated filters to detect unlawful material. While such technologies are relatively accepted for CSAM, because it is both uniquely harmful and never lawful in any context, automated filters are deeply problematic for other kinds of speech. Filters can’t tell the difference, for example, between an ISIS recruitment video used to stoke violence and the same video used in news reporting, counterspeech, or scholarship. (I discuss that here and here, pp. 20-26.) Finally, laws that create such pervasive legal risk for platforms incentivize them not to offer open forums for lawful speech in the first place. It’s easier to run “walled gardens” excluding risky users, or to set very broad “voluntary” speech prohibitions under Terms of Service, than to risk accidentally permitting illegal speech. All of these issues have been recognized and subject to debate, litigation, and free-expression-based rulings from Europe’s highest courts. (I discuss the EU law on point here.) In the U.S., they would raise particular questions under our more robust prior restraint doctrine. It would be remarkable if U.S. legislators, despite our stronger constitutional protections for free expression, adopted protections weaker than Europe’s against automated speech policing mandates.

 

The final major human rights issue raised by implicit or explicit policing requirements for platforms is the one least explored in the legal literature about platform speech laws: disparate impact on users based on race, gender, religion, and similar protected categories. A growing body of empirical literature documents, for example, automated filters’ disproportionate enforcement against speakers of African American English. There is ample reason for concern about similar problems when humans, instead of machines, carry out the content moderation. Patterns of false accusations or unjustified suspicion against members of minority groups in ordinary life will not disappear as we privatize surveillance or move it online. Equal protection thus joins free expression rights and rights against government surveillance in providing grounds to question laws that effectively require platforms to monitor their users.

 

To date, the U.S. has avoided all of these thorny questions by expressly spelling out in law that platforms don’t have to monitor their users. Both our CSAM law and our copyright law say that.  But 2020’s crop of CDA 230 proposals largely left these key provisions out – strongly implying that platforms, if they want to be safe, should take constitutionally suspect pre-screening and policing measures.  

 

5. Laws designed to regulate conduct are a bad fit for regulating online speech, but Congress has reason to use them anyway.

 

Passing new laws to regulate online speech is hard. Drafting is a nightmare, free speech advocates get up in arms, and courts may well overturn the laws at the end of the day. The Supreme Court’s stringent First Amendment precedent puts U.S. lawmakers in a difficult position compared to their international brethren, who can at least try to devise new, human-rights-compliant laws that explicitly define restrictions on online speech. Whether or not we agree with the goal of banning more speech, we are at least likelier to get clearly defined rules from lawmakers who can admit that’s what they are doing.

 

U.S. legislators, lacking such flexibility, have a tempting work-around: to re-use or emulate existing laws, even if those laws were designed for very different situations involving few or no speech issues. Congress did that in the 2018 SESTA/FOSTA revision to CDA 230, for example. Its restrictions on “facilitating” prostitution were modeled on the pre-existing Travel Act, which is used to prosecute things like interstate prostitution rings. The result was that platforms were not sure what speech to take down, and they went on removal sprees affecting vulnerable users, including people involved in commercial sex work. And while Congress was pretty clear that it was trying to regulate unlawful speech online, DOJ is now defending the law by saying that it doesn’t target speech at all – it’s just about conduct. (I am one of the counsel for plaintiffs in that case.)

 

Unclear speech laws applied to (and thus interpreted by) platforms will predictably chill speech differently than those same speech laws applied directly to speakers. A speaker may decide to say things that fall in a legal gray area – whether because she knows her words are justified by the facts, because she is committed to a cause, or because she is willing to be a test case to challenge the law. A platform will almost always lack that knowledge, motivation, or bravery. Erring on the side of suppressing the user’s speech is safer. The vaguer the law, the more room for caution and error affecting lawful expression.

 

To make things more complicated, both statutes and constitutional precedent regarding speech often turn on subjective elements like a speaker’s intent to incite violence or bring about some other damaging outcome. Platforms presumably don’t have that kind of intention (or, in legal-speak, mens rea or scienter) in most cases. They often can’t tell what a user’s real intention is, either. Putting them in charge of identifying illegal speech, measured against those standards, seems doomed to a particularly high rate of error. Yet leaving them with no obligations regarding speech that genuinely seems likely to cause violence seems no better.

 

In all honesty, I have a lot of sympathy for U.S. legislators on this one. This kind of First Amendment law isn’t my specialty, though. Perhaps experts in this area can identify better ways forward than the current problematic work-around: regulating speech while pretending not to.

 

6. Congress probably can’t avoid First Amendment restrictions by merely incentivizing, instead of requiring, platforms to take down lawful speech. 

 

Can Congress use its power to incentivize platforms to silence legal speech, instead of requiring them to do so? The idea of using that roundabout mechanism to shape platform speech policy came up in several proposals last year. A bill from Senator Josh Hawley, for example, would have withheld platforms’ immunity under CDA 230 unless they adopted new, “politically neutral” rules for user speech. (With “political neutrality” being defined by political appointees in government…)

 

Congress is sometimes allowed to do this kind of thing. Effectively, it can bargain with the recipients of state benefits, who give up some First Amendment rights in exchange for cash, tax breaks, and similar forms of federal largesse. In Rust v. Sullivan, for example, the Court upheld a rule saying that health care providers who received federal funds couldn’t discuss abortions with their patients. But there are real limits on when and how Congress can reward people for giving up speech rights – or, in the case of these proposals, reward platforms for giving up their users’ speech rights. Generally speaking, Congress’s speech rules are supposed to operate within the confines of particular programs, like the Title X federal Family Planning Program in Rust. CDA 230, as a structuring law enabling a vast array of commercial and private activity, seems awfully broad to meet that test. The Court has also indicated that recipients should not have to forgo sharing their views via alternative channels. That makes bargains of this sort an odd fit in bills advanced by those, like Hawley, who maintain that no adequate alternative channels exist when speakers are removed from major platforms.

 

There’s also the question of what exactly Congress has to bargain with. Just how much of a benefit is CDA 230? Those who say it merely codifies rules that the First Amendment would require anyway might conclude that the statute provides no real benefit, and therefore Congress can’t use it as consideration for speech-suppression deals. I don’t see it that way. I think CDA 230 offers important benefits in the form of litigation advantages and streamlined procedures; it provides relative legal certainty in a vast and diverse sector of the economy; and its ultimate purpose is to benefit the public. Those things are broadly analogous to the benefits of federal trademark registration. And the Supreme Court just held in 2017 that Congress could not strike bargains using those benefits. In Matal v. Tam, it struck down a law conditioning trademark registration on recipients’ compliance with restrictive speech rules.

 

This set of issues, which the Supreme Court called “notoriously tricky” in its 2017 ruling, bears closer examination. I don’t pretend to have exhaustively surveyed it. But I don’t think Congress has either. And I suspect that at the end of the day, its latitude to shape speech online using such a quid pro quo will be limited.   

 

 

Conclusion

I often think of the constitutional constraints on platform speech regulation like a series of bumpers in a pinball game. Navigating them is tricky but necessary. The first step is knowing they are there. 

 

I hope the checklist developed in this piece can be helpful to others in the field, and provide a sort of reader’s guide for those assessing new legislative or academic proposals. In a follow-up post to this one, I will use this list to walk through strengths and weaknesses of a specific piece of legislation.  

 
