CDA 230 Reform Grows Up: The PACT Act Has Problems, But It’s Talking About the Right Things

 

Alex Feerst, one of the great thinkers about Internet content moderation, has a revealing metaphor about the real-world work involved. “You might go into it thinking that online information flows are best managed by someone with the equivalent of a PhD in hydrology,” he says. “But you quickly discover that what you really need are plumbers.” The daily work of enforcing Terms of Service, or honoring legal takedown demands under laws like the Digital Millennium Copyright Act (DMCA), is all about the plumbing. If you don’t identify rules and operational logistics that function at scale, then you won’t accomplish what you set out to do. If you’re trying to enforce Terms of Service, you’ll get erratic and unfair outcomes. If you’re trying to enforce laws by making platforms liable for users’ unlawful posts, you’ll incentivize removal of lawful speech, encouraging platforms to appease anyone who merely claims online speech is illegal.

 

Until recently, despite the seemingly daily drumbeat of legislative proposals in this area, none of the lawmakers seemed to have talked to any plumbers. Bills like FOSTA and EARN IT proclaimed important goals, but did not lay out a system to actually achieve them. The original version of the EARN IT Act in particular failed in this regard. Its goal was incredibly important: to combat the scourge of child sexual exploitation online. But its mechanism for achieving that goal was basically a punt. EARN IT told platforms’ operational teams that they would be subjected to some rules eventually – but left those rules to be determined by an unaccountable body at some later date, after the law was already in place.  The more recent EARN IT draft, which passed out of committee on July 2, is, in a way, worse. It gives up on the idea of setting clear rules for platform content moderation operations at all. Instead, it exposes platforms to an unknown array of state laws under vague standards, to be interpreted by courts at some future date — leaving companies to guess how they need to redesign their services to avoid huge civil fines or criminal prosecution.

 

Against this backdrop, the “Platform Accountability and Consumer Transparency Act” (the PACT Act), sponsored by Sens. Brian Schatz (D-HI) and John Thune (R-SD), is a huge step forward. That’s not to say I love it (or endorse it). There are parts I strongly disagree with. Other parts build out ideas that might be workable, but that depends on details not yet resolved in the draft, or that just plain need fixing. Still: This is an intellectually serious effort to grapple with the operational challenges of content moderation at the enormous scale of the Internet. That in itself makes it remarkable in today’s DC environment. We should welcome PACT as a vehicle for serious, rational debate on these difficult issues.

 

Focusing on operational logistics, as PACT does, is important. But of course, to achieve its legitimate ends, the law still has to get those logistics right. In that sense, PACT has a long way to go. The process-based rules that it sets forth need a lot more tire-kicking. They should get at least the level of careful review and negotiation that the DMCA did back in 1998: a lot of meetings, among a lot of stakeholders, putting in enough time to truly hash out questions of operational detail for those harmed by online content, users accused of breaking the law, and platforms. We seem less able to have careful discussion now than we were back then. But we should be much better equipped to do it. Today we have not only a broad cadre of academic and civil society experts in intermediary liability -- many of whom signed onto these Seven Principles on point -- but also an entire field of professionals who work on content moderation or on “trust and safety” more broadly. We have the plumbers! They know how things work! If PACT’s sponsors are serious about proposing the best possible law, they should bring them in to fix this thing.  

 

Big picture, PACT has (1) a set of rules for platform liability for unlawful content, (2) a set of consumer protection-based rules that mostly affect platforms’ voluntary moderation under Terms of Service, and (3) a few non-binding items.

 

1. Rules Changing CDA 230 Immunity and Exposing Platforms to Liability for Unlawful User Content

 

A. The Court Order Standard: Courts should decide what’s illegal, and once they do, platforms should honor those decisions.

 

If you want to walk back protections of CDA 230, taking away immunity for content that platforms know was deemed illegal by a court after a fair process is the low-hanging fruit. But this “court order standard” is no panacea. On the one hand, it is subject to abuse when frivolous claimants get default judgments, or just falsify court orders. On the other, some would argue that it creates too high a bar if it lets platforms leave up highly damaging material that we think they should be able to recognize as illegal without waiting for a court to act. (Though the worst know-it-when-you-see-it illegal content is proscribed by federal criminal law, for which Section 230 does not immunize platforms anyway.) For all its flaws, though, this standard would eliminate immunity in some of the most egregious cases currently covered by 230. It is the standard that many international human rights experts, civil society groups, legislatures, and courts have arrived at after wrangling for years with the problems created by fuzzier standards.

 

The PACT court order standard does have some problems. First, the way the bill is drafted makes it far too easy for subsequent amendments to eliminate the court order requirement entirely. A few wording changes in the bill’s Definitions section would leave the overall law with a very problematic notice and takedown model for anything that an accuser merely alleges is illegal. (More on that in the next section.) Second, the list of harms that can be addressed by court orders is weird. PACT allows court-order based takedowns under any federal criminal or civil law, or under state defamation law. That leaves out a lot of real harms that are primarily addressed by state law, like non-consensual sexual images (“revenge porn”). Meanwhile, no one I’ve talked to is even sure what the universe of federal civil claims looks like. It seems unlikely to correspond to the online content people are most concerned about. This focus on federal civil law crops up throughout PACT, and to be honest I don’t understand why. (If platforms were interpreting the law, under notice and takedown, then the problems with exposing them to all those divergent state laws would be obvious. But courts do the interpreting under the court order standard, so diversity of state laws is not the issue.) Having a list of the specific claims PACT would authorize would really help anyone who wants to understand the law’s likely impact.

 

B. The Procedural Notice and Takedown Model: The law should specify a process for accusers to notify platforms about allegedly (or in PACT’s case, court-adjudicated) illegal content, for platforms to respond, and for accused speakers to defend themselves.

 

This is intermediary liability 101.  We already have a decent (if flawed) model for legally choreographed notice and takedown in the DMCA. Frankly, it’s embarrassing that other US legislative proposals so far have not bothered to include things like “counter-notice” opportunities for speakers targeted by takedown notices, or penalties for bad-faith accusers (who are legion). To be clear, even with procedural protections like these, notice and takedown based on private individuals’ allegations, rather than court adjudication, would be seriously problematic for many kinds of speech. Imagine if anyone alleging defamation could make platforms silence a #metoo post or remove a link to news coverage criticizing a politician, for example. 

 

Hecklers will always send bogus or abusive notices, platforms will always have incentives to comply, and offering appeals to victimized speakers won’t be enough to offset the problem. But any kind of process is worlds better than amorphous standards like liability for “recklessness” or for content that platforms “should” have known about. No one knows what those standards mean in practice, and no one but the biggest incumbent platforms will want to assume the risk and expense of litigating to find out. This kind of legal uncertainty hurts smaller competitors more than bigger ones, in addition to threatening the speech rights of ordinary Internet users.  

 

PACT’s notice and takedown process isn’t perfect. Perhaps most troubling is the requirement to take down content within 24 hours of notification -- a standard much like the one recently deemed to unconstitutionally infringe users’ expression rights under French legal standards. The PACT Act has some carve-outs to this obligation, but the bottom line is a powerful new pressure to comply, even in cases of serious uncertainty about important speech or information. That pressure is, in my opinion, too blunt an instrument. There are more granular problems in PACT’s takedown process rules, too. For example, I initially read PACT to require these notices (like DMCA notices) to spell out the exact URL or location of the illegal content. That’s pretty standard in notice and takedown systems. But on review I realized the notice can be much clumsier and broader, merely identifying the accused’s account.   
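
To make the operational point concrete, here is a minimal sketch of how a platform intake system might have to treat such a notice under a 24-hour clock, including the case where the notice identifies only an account rather than a specific URL. This is purely illustrative: the field names, the Python format, and the helper function are my own inventions, not anything the bill specifies.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, List, Optional

TAKEDOWN_DEADLINE = timedelta(hours=24)  # the clock that starts at notification

@dataclass
class TakedownNotice:
    """Illustrative fields only; PACT does not prescribe a data format."""
    received_at: datetime
    claimed_basis: str              # e.g. a citation to the court order finding the content illegal
    target_url: Optional[str]       # precise location of the content, if the notifier supplies one
    target_account: Optional[str]   # or merely the accused user's account

def removal_deadline(notice: TakedownNotice) -> datetime:
    return notice.received_at + TAKEDOWN_DEADLINE

def items_to_review(notice: TakedownNotice,
                    lookup_account_posts: Callable[[str], List[str]]) -> List[str]:
    """If the notice names only an account, the platform has to find the
    offending material itself -- potentially reviewing everything that
    account has posted -- before the same 24-hour clock runs out."""
    if notice.target_url:
        return [notice.target_url]
    return lookup_account_posts(notice.target_account)
```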

 

C. Empowering Federal Agency Enforcers: Platforms are not immune from federal agency enforcement of federal laws or regulations.

 

I believe my summary captures the PACT drafters’ intent, but to be honest, I am not 100% sure what the words mean (and I’ve heard the same confusion from others). The bill says that platforms lose immunity against enforcement “by the Federal Government” of any “Federal criminal or civil statute, or any regulations of an Executive agency (as defined in section 105 of title 5, United States Code) or an establishment in the legislative or judicial branch of the Federal Government.” I think that means that HUD, EPA, FDA, CPSC, and others can bring enforcement actions. But maybe it also opens up exposure to civil claims broadly from DOJ? Here again, we could all understand this better if we had a list of the kinds of claims at stake. My suspicion is that there are not all that many areas where (a) agencies have relevant enforcement power, and also (b) 230 would even matter as a defense. (I don’t think 230 is necessarily a defense to important HUD housing discrimination claims, for starters.) But without a list of claims, it’s hard to say.

 

However odd this is in practice, I think I understand the theoretical justification. Making platforms take content down based on any accuser’s legal claim has obvious problems, but waiting for a court to decide is slow and can limit access to justice for victims of real legal violations. So it’s natural to look for a compromise approach, and empowering trusted government agencies is an obvious one. (This has been done or proposed in a lot of countries and is quite controversial in some – basically where the agencies are least trusted by civil society.) If agencies don’t have the authority to require content removal for First Amendment or Administrative Procedure Act reasons, but they do have power to bring a court case, this essentially puts the “heckler’s veto” power to threaten litigation in the hands of government lawyers instead of private individuals. In principle, that’s a reasonable place to put it, and we should be able to expect them to use their power wisely. In practice, the Supreme Court has repeatedly had to stop government lawyers from telling publishers and distributors what they can say, or what they can enable others to say. That’s precisely what the First Amendment is about. This is also a strange time in American history to consider handing this power to federal agencies, given very serious concerns about their politicization, and the President’s recent efforts to use federal authority to shape platforms’ editorial policies for user speech.

 

D. Empowering State Attorney General Enforcers: State attorneys general can bring federal civil claims against platforms (if their states have analogous laws, and the state AG consults with the federal one). 

 

This opens up claims under any federal civil law, not just the ones agencies enforce. (Maybe this implicitly means we should interpret the agency provision above as an authorization for AG Barr to bring any federal civil claim against platforms, too?) This again seems weird, because it’s not clear why federal civil law is the relevant body of law, or why empowering State AGs to enforce it solves our most pressing problems. That’s especially the case if they will enforce easily politicized or subjective rules, like FTC Act standards for “fairness” of TOS enforcement. 

 

State AGs have a tendency to push for enforcement of disparate state approaches, which raises obvious problems in governing the Internet. Some also have a pretty serious history of financially or politically motivated shenanigans, including taking sides in ongoing power struggles between corporate titans in the content, tech, and telecoms industries. One state AG, for example, literally sent Google a threatening letter extensively redlined by MPAA lawyers. Wired reports similar concerns about News Corp’s role in cultivating state AG attention to Facebook in 2007. Opening up more state forum shopping for these fights under PACT, and potentially subjecting platforms to conflicting back-room political pressures from red and blue state AGs, makes me pretty uneasy.

 

E. Ensuring That Platforms Are Not Required to Actively Police User Speech: PACT appears to preserve this essential protection for user rights, but needs clarification.

Requiring platforms to proactively monitor their users’ communications is the third rail of intermediary liability law. In Europe, it has been the center of the biggest fights in recent years. In the U.S., making platforms actively review everything users say in search of legal violations could raise major issues under both the First and Fourth Amendments. So far, U.S. law has steered well clear of this -- federal statutes on both copyright and child sexual abuse material, for example, expressly disclaim any monitoring requirements for intermediaries.

 

PACT appears to hew to this important principle with a standard immunity-is-not-conditioned-on-monitoring provision. But… it’s not entirely clear if this passage actually does the job. That’s especially the case given some troublingly fuzzy language about requiring platforms to not just take down content but also “stop… illegal activity” by users. It’s not clear what that language means, short of dispatching platforms to police users’ posts or carry out prior restraints on their speech. Getting this language buttoned up tighter will be critical if the bill moves forward. (This one is really an issue for the legal/Constitutional nerds, not the content moderation operations specialists.)

 

F. Regulating Consumer-Facing Edge Platforms, Not Internet Infrastructure: PACT has some limits, but needs more.  

Whatever we think the right legal obligations are for the Facebooks and YouTubes of the world, those probably are not the right obligations for Internet infrastructure providers. Companies that offer Internet access, routing, domain name resolution, content delivery networks, payment processing, and other technical or functional processing in the deeper layers of the Internet simply don’t work the same way. For one thing, they are blunt instruments. Many of them have no ability to take down just the image, post, or page that violates a law -- they can only shut off an entire website, service, or app.  

 

PACT takes a step toward carving these providers out of its scope, but it doesn’t go far enough. (It only carves them out to the extent that they are providing service to another 230-immunized entity.) This shouldn’t be hard to fix.

 

2. Rules Regarding Platforms’ Voluntary Measures to Enforce Terms of Service 

 

A. The Consumer Protection Model: If platforms are going to enforce private speech rules under their Terms of Service, they should state the rules clearly and enforce them consistently. Failure to do so is a consumer protection harm, like a bait-and-switch or a failure to label food correctly.

 

[Update July 17: Staff involved in the bill tell me that the intent of this section is for the FTC to enforce failures of process in TOS enforcement (like failure to offer appeals, publish transparency reports, etc.), not for the FTC to determine which substantive outcomes are correct under the platform's TOS. That's a meaningful difference and would mitigate some of the First Amendment concerns I mention below. Clarification in the text about this (especially re the FTC assessing "appropriate" steps by the platform) would help, since most people I consulted with about the bill and this post seemed to read (or mis-read) the provision the same way I did.]

 

Consumer protection provides a useful framing, and one that critics on the left and right can often agree on. European regulators used consumer protection law to reach agreements about TOS enforcement with platforms in 2018, and ideas along these lines keep coming up inside and outside the United States. At a (very) high level, I like the idea that platforms should have to make clear commitments to users, and uphold them.  But there are very real operational and constitutional issues to be resolved. 

 

As a practical matter, I have a lot of questions. Exactly how much detail can we reasonably demand from platforms explaining their rules – is it enough if they provide the same level of detail as state legal codes, for example? Are they supposed to notify users every time some new form of Internet misbehavior crops up and prompts them to update the rules (which happens all the time)? How far do we want to go in having courts or regulators second-guess platforms in hard judgment calls? Speaking of those courts or regulators, how would we possibly staff them for the inevitable deluge of disputes? (I think PACT’s answer is that the FTC brings the occasional enforcement action but isn’t required to handle any particular complaint.)

 

Then there are the constitutional issues. If a regulator rejects the platform’s interpretation of its own rules in a hard case, is it essentially overriding platform editorial policy, and does that violate platforms’ First Amendment rights? Is the government essentially picking winners and losers among lawful user posts, and does that violate the users’ First Amendment rights? Even without activist court or agency interpretations, is it a problem generally to use consumer protection law to restrict what are essentially editorial choices? (This really isn’t the same as food labeling, for example. As the Supreme Court explained in rejecting a similar analogy years ago, “[t]here is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller.”) 

 

For all my sympathy for this approach, many of the things that trouble me most about PACT are in these sections. This is one place where the bill could most benefit from careful review by the plumbers I mentioned. I am not an operations specialist by Silicon Valley standards, and I did not try to vet every aspect of this proposal. But even I can see a lot of issues. The bill requires many platforms to offer call centers, for example, not just the web forms and online communications typically used today. A noted trust and safety expert, Dave Willner, said in a panel I attended that this would lead to worse and slower outcomes for most users trying to solve real problems. He concluded that “you’d be better off taking the cash this would cost and burning it for heat.” (That’s a paraphrase. In spirit, it is the same thing I’ve heard from a lot of people.) PACT also requires a 14-day turnaround time for responding to notifications, which sounds good in theory but in practice may be truly difficult for small platforms facing hard judgment calls or sudden increases in traffic, notifications, or abuse. Even for larger platforms, a standard like this could force them to prioritize recent notices at the expense of ones that may be more serious (identifying more harmful content) or accurate (coming from a source with a good track record).
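
To illustrate that prioritization problem, here is a minimal sketch, in Python, of two ways a trust and safety team might order its queue. The data structure and scoring fields are hypothetical, invented for illustration; nothing here comes from the bill itself. The point is just that a hard statutory clock pushes teams toward the second ordering, whether or not it is the one that best protects users.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

RESPONSE_DEADLINE = timedelta(days=14)  # PACT's turnaround time for responding to notifications

@dataclass
class Complaint:
    complaint_id: str
    received_at: datetime
    estimated_harm: float      # triage score from an initial look (hypothetical)
    notifier_accuracy: float   # historical precision of this notifier (hypothetical)

def triage_order(queue: List[Complaint]) -> List[Complaint]:
    """How a team might want to work the queue: worst apparent harms and
    most reliable notifiers first."""
    return sorted(queue, key=lambda c: (c.estimated_harm, c.notifier_accuracy), reverse=True)

def deadline_order(queue: List[Complaint], now: datetime) -> List[Complaint]:
    """How a hard 14-day clock pushes the team to work the queue: whatever
    expires soonest comes first, regardless of severity or source quality."""
    return sorted(queue, key=lambda c: (c.received_at + RESPONSE_DEADLINE) - now)
```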

 

B. Transparency Reports: Making platforms (or platforms over a certain size) publish aggregate data about content moderation, in a format that permits meaningful comparison between platforms.

 

I am a huge fan of transparency. Without it, we will stay where we are now: people on all sides of the platform content moderation issues will keep slinging anecdotes like mud. The ones with more power and media access will get more attention to their anecdotes. That’s a terrible basis for lawmaking. If lawmakers have the power to get the facts first and legislate second, we will all be better off.

 

That said, tracking more detail for every content demotion or removal (the grounds for the action, what kind of entity sent the notice, what rule was violated and why, the role of automation, whether there was an appeal, and so on) adds up to a fair amount of work – especially for smaller companies. They can’t track everything. The challenge is orders of magnitude bigger for transparency about what PACT calls “deprioritization” of content. Google, for example, adjusts its algorithm some 500-600 times per year, affecting literally trillions of possible search outcomes. It’s not clear what meaningful and useful transparency about that even looks like. Those of us who advocate for transparency should be smart about what precise information we ask for, so we get the optimal bang for our societal buck (and so we don’t fail to ask for something that will later turn out to matter a lot). Right now, although people like me put out lists of possible asks, and researchers who have been trying (and too often failing) to get important information from platforms have very specific critiques of current transparency measures, we don't really have an informed consensus on what the priorities should be. So here, too, I say: bring on the plumbers, including both content moderation professionals and outside researchers.
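
For a sense of what that tracking means in practice, here is a minimal sketch of the kind of record a platform would need to create for every single removal or demotion in order to roll up comparable aggregate numbers. The fields and names are my own illustration of the categories discussed above, not a schema from the bill.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ModerationActionRecord:
    """One record per removal or demotion; transparency-report totals would
    be aggregated from records like this. Field names are hypothetical."""
    action_id: str
    acted_at: datetime
    action_type: str                      # e.g. "removal", "demotion", "account_suspension"
    content_type: str                     # e.g. "post", "image", "comment"
    notifier_category: str                # e.g. "user_flag", "government", "court_order", "automated"
    rule_or_law_cited: str                # the TOS provision or legal basis for the action
    detection_method: str                 # "human_review", "automated", or "hybrid"
    appealed: bool = False
    appeal_outcome: Optional[str] = None  # e.g. "upheld", "reversed"; None if no appeal
```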

 

There’s a constitutional question here, too, though -- and it might be a really big one. Could transparency requirements be unconstitutional compelled speech (as the 4th Circuit recently found campaign ad transparency requirements for hosts were in Washington Post v. McManus)? Would they be like making the New York Times justify every decision to accept or reject a letter to the editor, or a wedding announcement? I haven't tried to answer that question. But we'll need to if this or other transparency legislation moves forward.

 

3. Non-Binding (I hope) Items

 

A. Considering future whistleblower protections:  The Government Accountability Office is directed to issue a report on the idea of protections and awards for platform employees who disclose “violations of consumer protection,” meaning improper TOS enforcement in content moderation.

 

I love whistleblowers. I even represented them at one point. But I shudder to think how politically loaded this particular kind of “whistleblowing” will be. The idea that a dissatisfied employee can bring ideologically grounded charges to whichever FTC Commissioners’ offices are staffed by members of his or her political party makes me want to hide in a cave. As one former platform employee told me, “bounties for selective leaking of stylized evidence against teammates is an episode of Black Mirror I’d be too scared to watch." This scenario is enough to make me wonder if those First Amendment concerns I touched on above – the ones about a government agency looking for opportunities to effectively dictate platform editorial policy in the guise of interpreting the platform’s own rules – are actually a really, really big deal. But… in any case, this whistleblower provision isn’t anything mandatory, for now.

 

B. A voluntary standards framework. The National Institute of Standards and Technology is directed to convene experts and issue non-binding guidelines on topics like information-sharing and use of automation.

 

This bears a suspicious resemblance to the “best practices” in EARN IT. Those were nominally not required in that bill’s original draft (but were in fact a prerequisite for preserving immunity). They are even more not required in EARN IT’s current draft (unless they become de facto standards for liability under the raft of state laws EARN IT unleashes). Perhaps I am naïve, but I am less worried about the voluntary standards proposed in PACT. For one thing, they won’t be crafted by nominees put in place by a who’s-who of DC heavy-hitters, as EARN IT’s would be. And the specific topics listed in the PACT Act – like developing technical standards to authenticate court orders – don’t all look like hooks for liability, like the ones in EARN IT. Most importantly, though, because PACT does not open platforms up to a flood of individual allegations under vague state laws, it leaves fewer legal blanks to be filled in by things like “voluntary” best practices or standards. Of course, the NIST standards might still come into play under PACT for courts assessing agency or AG enforcement of federal laws (or for platforms deciding whether to do what courts and AGs demand behind closed doors, to avoid going to court). So I may come to regret my optimism in calling them non-binding.

 

Conclusion

I can’t tell you what to think about PACT. That’s in part because I am still trying to understand some of its key provisions. (What are these federal civil laws it talks about? Will DOJ be enforcing them? What are the logistics and First Amendment ramifications of its FTC consumer protection model for TOS-based content moderation?) But it’s also in part because its core ideas are things where reasonable minds might differ. It’s not disingenuous nonsense, and it’s not a list of words that sound plausible on paper but that legal experts know are meaningless or worse. It’s a list of serious ideas, imperfectly executed. If you like any of them, you should be rooting for lawmakers to do the work to figure out how to refine them into something more operationally feasible. You should be calling on lawmakers to bring in the plumbers.