
One Law, Six Hurdles: Congress's First Attempt to Regulate Speech Amplification in PADAA

By Daphne Keller

In a previous post, I outlined six constitutional concerns that frequently arise when regulating platforms and online speech. These are not reasons why legislative change is impossible or even necessarily a bad idea. But they do define some of the difficult terrain that must be navigated to arrive at sound and constitutionally defensible proposals.

It can be hard to spot these issues in real-world legislation, because they tend to all come at you at once. In this post, I will use the Protecting Americans from Dangerous Algorithms bill, or PADAA, from Representatives Malinowski and Eshoo, as an example. That’s not because it’s the worst bill out there, or because it has great legislative momentum. It just happens to illustrate a lot of legal issues in four short pages.

PADAA is also particularly timely for two reasons. The first is its subject matter: online planning or support for violent extremism, including things like the January 6th mob action at the Capitol. The second is the law’s focus on regulating platforms’ amplification, rather than simple hosting or transmission, of prohibited content. That’s an approach that has become quite popular in recent years, and we should expect to see more of it.

To recap, the issues from the previous post are:

  1. Congress can’t ban constitutionally protected speech.
  2. Laws that restrict only illegal speech, but foreseeably cause platforms to restrict legal speech, can violate the First Amendment.
  3. Laws requiring platforms to remove speech and laws requiring them to reduce its “reach” both trigger First Amendment scrutiny.
  4. Laws explicitly or implicitly requiring platforms to monitor and police their users raise multiple constitutional issues.
  5. Laws designed to regulate conduct are a bad fit for regulating online speech, but Congress has reason to use them anyway.
  6. Congress probably can’t avoid First Amendment restrictions by merely incentivizing, instead of requiring, platforms to take down lawful speech.

This post will discuss each of those concerns as they arise in PADAA, and point out significant problems with four of the six. It will assume readers generally understand the six issues discussed in the last post, since recapping them would make this one even longer.

In short: Measured against constitutional standards, PADAA is pretty bad. I do not think it would or should survive First Amendment review in U.S. courts. Measured against current DC or cable television standards, though, PADAA looks downright reasonable. That’s mostly because our public and legislative discussions have gone so far off the rails. It can be hard to have a serious conversation in this environment, and Malinowski and Eshoo’s proposal is at least serious. Here’s how it fares against the six constitutional issues numbered above.

The Pros: To its credit, PADAA sets out to restrict only speech that violates long-standing law and presumably is not constitutionally protected (1). And it does so in a comparatively straightforward way: It takes away immunity for particular claims, rather than emulating other recent legislation that tried to extend Congress’s speech-regulation authority beyond constitutional limits by offering CDA 230 immunity as a quid pro quo for platform compliance (6).

The Cons: PADAA regulates speech using statutes that were not really designed for that purpose and can too easily be misinterpreted (5), particularly by risk-averse platforms who are effectively deputized to interpret the law under threat of legal penalty (2). To make matters worse, PADAA effectively incentivizes platforms to turn to flawed monitoring or filtering software to avoid new legal risks (4). PADAA might initially appear to offset those problems by only curtailing “reach” rather than speech itself, but that doesn’t change the strict First Amendment scrutiny that a court would apply (3).

A. PADAA Overview

PADAA takes away CDA 230 immunity for platforms with over 50 million users, in situations where they have amplified two specific kinds of unlawful content. What I’ll call “amplification” is defined as use of an “algorithm, model, or other computational process to rank, order, promote, recommend, amplify, or similarly alter the delivery or display of information,” including both posts and entire accounts or groups. Some things like chronological ranking, user-voted ranking, and search results are carved out – platforms can do those things without losing immunity.

The claims plaintiffs can bring against platforms under PADAA fall into two buckets. First, there are claims under existing federal law regarding material support of foreign terrorism. Platforms already face criminal prosecution if they break those laws, with no immunity under CDA 230. (Because CDA 230 never blocks federal criminal claims.) What PADAA adds is civil exposure under the same laws. It’s unclear how big a risk that creates for platforms. Twitter, Facebook, and others have already won a dozen or more civil cases under these laws, and not always on CDA 230 grounds. Some courts have held that, regardless of CDA 230, ordinary platform operations did not violate the terrorism laws.

The second set of PADAA claims involves domestic extremism and threats to civil rights, under two Reconstruction-era statutes enacted in response to Ku Klux Klan violence in the South. Chillingly, some passages, like the one that prohibits conspiring “to prevent, by force, intimidation, or threat, any person from accepting or holding any office, trust, or place of confidence under the United States,” could have been written today. Others, like one about conspirators who “go in disguise on the highway,” thankfully seem more dated. These 19th century laws also extend liability to entities which, like most platforms today, are not themselves party to such conspiracies. They hold liable “every person who, having knowledge” that such wrongs are about to be committed, and “having power to prevent or aid in preventing the commission of the same, neglects or refuses so to do.”

In short, PADAA revokes CDA 230 immunity for cases where platforms amplify user content, under pre-existing laws involving particularly dangerous speech. It sounds fairly simple. Here’s how it looks given the constitutional concerns discussed in the previous post, though.

B. PADAA and Constitutional Concerns

1.  Congress can’t ban constitutionally protected speech.

PADAA’s not too bad in this department. It doesn’t casually conflate known categories of constitutionally protected and unprotected speech, as has become too common in CDA 230 discussions. The substantive laws it invokes have been around a long time. One section of the foreign terrorism law even survived a First Amendment challenge before the Supreme Court. These laws aren’t perfect for this purpose, of course – and as I’ll address in later sections, they become even less perfect when used to regulate platforms. But they are actual laws, with longstanding pedigrees.

If devising constitutionally sound platform regulation requires navigating a series of barriers, PADAA’s drafters have cleared the first one. Things get rougher from here on, though.

2.  Laws that restrict only illegal speech, but foreseeably cause platforms to restrict legal speech, can violate the First Amendment.

PADAA doesn’t look so good by this measure. It has two distinct problems. One, which I’ll call “the usual problem,” involves platforms’ incentives to act against lawful speech. This issue is discussed and documented, and potential corrective measures are identified, in intermediary liability law and literature going back decades. I won’t rehash all that here. But I will delve into how these issues play out under PADAA in particular. The other problem, which I’ll call “the new wrinkle,” is relatively rare: PADAA calls on platforms to act against entire user accounts or online groups, rather than specific unlawful posts.

a.  The Usual Problem

User speech that might create liability for platforms under PADAA will range from some posts that are fairly easy to recognize as unlawful to others that even judges could not identify with certainty.

A platform seeking to avoid liability or even just litigation costs under PADAA would be wise to purge a broad category of content – legal, illegal, and impossible-to-classify – from its recommendation system. (Or from the entire platform – see end of Section 3.) PADAA makes this usual problem worse by stripping immunity not only for content that itself constitutes or demonstrates a violation of the specified laws, but also for the hazier category of content “directly relevant to” such violations.

The most clearly defined speech rule in PADAA comes from one of the laws on material support of terrorism, 18 U.S.C. 2339B. That law prohibits providing material support to foreign terrorist organizations that the State Department designates, in a formal process that permits organizations to appeal the designation. This makes the law’s scope much clearer, and as the Supreme Court emphasized in upholding application of 2339B against a First Amendment challenge, helps avoid constitutional concerns. For platforms, the official list of designated organizations makes the task of identifying prohibited content comparatively clear – though there are important unresolved First Amendment questions about even 2339B, as illustrated by Zoom’s termination of an academic event in 2020.

The scope of PADAA’s other speech rules is less clear. Other statutes governing support of foreign terrorism, like 18 U.S.C. 2339A, do not use the State Department list. As a result, the line defining prohibited speech may be harder to discern. Of course, any resulting over-removal of lawful speech is likely to fall on non-U.S. Internet users, cabining potential First Amendment objections.

The domestic claims under PADAA, for civil rights interference under 42 U.S.C. 1985 and 1986, potentially cover a broad range of U.S. speakers and speech. For example, platforms are exposed to claims for failure to prevent users from conspiring to “molest, interrupt, hinder, or impede” a federal official “in the discharge of his official duties.” That creates a lot of hard calls about user speech. Must platforms take action against users who urge friends not to pay taxes, not to participate in the census, or not to disperse when ordered to do so by federal authorities at a protest? What are platforms’ duties when speakers encourage jury nullification to counter racially biased policing or prosecution?

Hopefully courts can construe these civil rights laws to avoid First Amendment problems in such situations. But PADAA doesn’t put courts in charge of interpretation. It puts platforms in charge, with incentives to err on the side of over-enforcement. PADAA’s lack of even basic procedural protections for users – like being notified and given a chance to appeal removal or demotion – makes that problem even worse.

In quick research, I did not find any cases under these civil rights laws in which defendants challenged the laws on First Amendment grounds. What I did find were plenty of plaintiffs suing a mix of government and private defendants for violating their rights, including First Amendment rights. Some claimants allege violations by private prison operators, for example. One even said local police had violated her First Amendment rights by causing a web host to take down her site. That kind of question is at the heart of intermediary liability law: If you are an innocent user silenced by a complicated mix of state and private power, who, if anyone, do you sue? Our answers to that question right now are not good. State actors and platforms too often work together in ways that render both unaccountable, and leave ordinary Internet users out in the cold. PADAA, by using state power to effectively compel private content moderation under unclear rules, exacerbates that.

b.  The New Wrinkle

PADAA exposes platforms to liability for amplifying not only unlawful user content, but also any “page, group, account, or affiliation” that is directly relevant to a claim under the specified statutes. That substantially expands the law’s impact. In all but unusual cases (involving a person or group whose every post relates to unlawful activity), this part of PADAA would clearly result in at least some suppression of lawful speech. For large groups or longstanding accounts, the resulting burden on lawful expression would likely be considerable. A rule of this sort, cutting off audiences for not only specific unlawful communications but everything said by a particular speaker or within a particular group, raises major concerns about prior restraint under the First Amendment. I am not aware of cases or academic literature seriously examining the constitutional issues, about both speech and association rights, that arise when the law mandates platform action against groups or accounts.

User account termination is not unusual in the Internet context, particularly as a voluntary measure by platforms. In law, the DMCA requires them to terminate accounts in “appropriate circumstances” for repeat copyright infringement. But that’s after multiple claims identifying specific unlawful content, made under penalty of perjury, with penalties for bad faith accusations and potential reinstatement avenues for the accused. Even the validity of that rule has come into question, given users’ ever-increasing reliance on Internet communication platforms and the real possibility of abuse.

PADAA’s focus on groups in the extremism context is understandable. Group formation matters for real-world radicalization. And Facebook’s group recommendations played a major role in the Second Circuit’s Force v. Facebook case, in which a dissenting judge would have found no CDA 230 immunity against a terrorism claim. But sweeping, state-mandated measures against groups raise serious, and inadequately examined, constitutional questions.

3.  Laws requiring platforms to remove speech and laws requiring them to reduce its “reach” both trigger First Amendment scrutiny.

PADAA will likely be the first of a string of proposals to restrict platforms’ amplification of harmful speech, instead of prohibiting platforms from hosting or transmitting that speech altogether. Various pundits have embraced this idea as a model for legally mandated (rather than platform-initiated) content moderation. Some have even suggested that regulating only “reach” somehow lets Congress bypass First Amendment concerns. As I discussed in the previous post, and will discuss more in a forthcoming paper, that’s incorrect. PADAA and laws like it face the same First Amendment scrutiny regardless of whether they require platforms to “de-amplify” particular speech or erase it completely. That raises a pretty big question about this kind of legislation: If drafters believe platforms can accurately identify illegal speech, why not just require them to take it down?

To be clear, from a non-legal perspective I understand the desire to handle harmful and potentially illegal online speech by reducing its viral spread. “Sure, this law will lead to over-enforcement against lawful speech,” a supporter might reason, “but that’s OK because the speech won’t be gone entirely. It will just be taken out of recommendations.” That kind of balancing approach might carry the day under some free expression regimes internationally. In the U.S., though, the Supreme Court has been markedly unreceptive to laws limiting distribution of lawful speech.

A final note on PADAA’s rule against amplification is that, in practice, platforms may just delete the regulated content anyway. Taking it off the system entirely may be easier, cheaper, and safer than engineering new capabilities and enforcing new rules in recommendations features. That’s in part because once a platform knows enough about a piece of content to decide to de-amplify it under PADAA, that same knowledge or suspicion can expose it to liability under other laws. This includes the criminal versions of PADAA’s foreign terrorism claims, which have a “knowledge” scienter and are not immunized by CDA 230.

4.  Laws explicitly or implicitly requiring platforms to monitor and police their users raise multiple constitutional issues.

PADAA does not require platforms to proactively monitor user speech. But for some reason it omits a standard provision, found in existing U.S. intermediary liability laws, specifying that platforms don’t have to monitor. The resulting uncertainty about monitoring obligations has potential consequences for all three sets of constitutional rights discussed in the previous post: speech rights, equal protection rights, and rights against surveillance. The equal protection concerns may be particularly acute under the foreign terrorism laws, given the reported disparate impact of platforms’ over-enforcement against Muslims and Arabic-speakers. The Fourth Amendment issues, by contrast, may be somewhat mitigated because PADAA does not require platforms to report the results of their searches to the U.S. government. That said, since platforms may be required to report to foreign government entities, which in turn may share that information with their U.S. counterparts, the end result for users may be the same.

What does Congressional silence on this key question mean? Is Congress effectively using its power to compel platforms to proactively police user content, without coming out and saying so? No one can agree on the answer to this question under the last similarly ambiguous law Congress passed, SESTA/FOSTA. Presumably the meaning of PADAA would be equally disputed. Plaintiffs would likely say platforms do have to monitor, in response to even vague allegations about prohibited content on the platform, to avoid liability under the statutes’ “knowledge” scienter standard. Platforms would likely take the opposite stance in litigation, arguing that they can meet the law’s requirements by investigating allegations only about specific posts – without having to also proactively monitor everything else users say. But in reality, a law like PADAA would presumably cause more platforms to avoid this issue by “voluntarily” monitoring users, or adopting sweeping new prohibitions like the ones that followed SESTA/FOSTA. So the recurring intermediary liability formula appears for this issue, too: ambiguity about a state mandate + risk-avoidant platform behavior = constitutionally problematic results.

5.  Laws designed to regulate conduct are a bad fit for regulating online speech, but Congress has reason to use them anyway.

PADAA is kind of a poster child for this problem. It weighs in at a slim four pages precisely because it relies on pre-existing laws. The specific laws it invokes have generally been used to regulate conduct, not speech. But the legal claims PADAA authorizes – the ones previously immunized by CDA 230 – are by definition about speech, since CDA 230 immunity only ever applies to claims treating a platform as a “publisher or speaker.”

To be clear, regulating speech is not a constitutional kiss of death. U.S. law routinely penalizes speech when it is used for things like fraud or threats. And cases about criminal conspiracies sometimes have to navigate tough speech questions, since conspiring typically involves talking. But clear prohibitions are the hallmarks of defensible speech laws in any legal system. For all the sympathy I expressed on this issue in my earlier post, the bottom line is that regulating speech through the back door, using rules that were never designed for that purpose, is a bad solution.

6.  Congress probably can’t avoid First Amendment restrictions by merely incentivizing, instead of requiring, platforms to take down lawful speech.

A number of bills in 2020’s bumper crop of proposed CDA 230 changes would have made immunities available only to platforms that met new legislative requirements. Senator Josh Hawley, for example, wanted to condition CDA 230 immunity on platforms’ adoption of “politically neutral” standards for content moderation. His bill, ironically titled the “Ending Support for Internet Censorship Act,” gave government the job of defining “political neutrality.”

PADAA does not quite fit in that category of bills. For one thing, while Hawley clearly sought to bypass First Amendment limits on state power, picking winners and losers among legal speech, PADAA in theory regulates only speech that is already illegal. For another, PADAA does not use CDA 230 immunity as the basis for a bargain between the federal government and platforms. Platforms can’t “earn” immunity by doing what Congress wants, as was proposed in initial drafts of the EARN IT Act. Instead, PADAA eliminates immunity from the outset for any case alleging that “amplifying” user content violates the specified federal statutes.

In all honesty, something about PADAA still feels naggingly like a quid pro quo to me. The behavior PADAA induces from platforms is likely to suppress lawful speech, regardless of drafters’ explicit goals. And Congress is securing that outcome by taking away the benefit of CDA 230. But it isn’t expressly offering immunities in exchange for platforms giving up their own or their users’ speech rights. So I think it doesn’t raise this issue in the way that the Hawley bill or EARN IT did.

Conclusion

At the end of the day, PADAA raises a number of familiar issues. Like SESTA/FOSTA, it uses fuzzy conduct-related statutes as a new basis for regulating online speech. It all but ensures over-enforcement against lawful expression by assigning risk-averse platforms to interpret the law, with no procedural protections for user rights and no effort to avoid driving platforms to use flawed automation.

It also raises one important new issue: whether Congress can avoid constitutional limits by regulating amplification of speech, instead of directly regulating speech itself. It takes a real effort of concentration to bring all the resulting constitutional issues into focus. We must simultaneously consider both the novel “only regulating reach” move and the more typical, but already indirect, move of “regulating platforms, which will then predictably over-enforce against their users.”

We should count ourselves lucky, then, that PADAA does not add yet another layer of indirection. If it had used the “immunity as reward for good behavior” move from Hawley’s bill and the original EARN IT Act, PADAA would have been even harder to understand, explain, and respond to. When it comes to laws regulating speech, hard-to-understand is bad. The world’s unacknowledged legislators may operate through multiple, nested layers of cantilevered inference. Its real ones should not.
