Last month, I had the privilege of testifying in an unusually substantive and (mostly) collegial hearing about the law known as Section 230. This is the first of several posts excerpting portions of my testimony and subsequent responses to Members' Questions for the Record (QFRs). My official written testimony is here, a transcript is here, and a version of the testimony with functioning reference links and the addition of QFR responses is here.
Below is the first portion of my testimony. It reviews the under-recognized value that I believe Section 230 is providing today, and argues that this value is easily recognized by comparison to non-Section-230 legal regimes including U.S. copyright law.
Thank you for the opportunity to appear at this hearing and discuss the legal backbone of today’s online speech environment: the law known as Section 230. [fn 1] Section 230 is widely maligned. But we should be realistic about the significant value it provides, and the harms that we would almost certainly face without it.
I speak today based on twenty-five years of experience in platform regulation as both a practicing lawyer and a legal scholar. For ten years, I experienced the impact of platform laws firsthand as counsel for Google, including as legal lead for web search. For another ten years at Stanford, I have studied and written about the practical and policy alternatives to Section 230 — including laws proposed or enacted around the world and here in the United States. Based on this experience, my take on Section 230 is much like Winston Churchill’s take on democracy:
Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time[.]
Churchill was addressing the UK Parliament in 1947, with the horrors of World War II and the Holocaust fresh in memory. He did not speak lightly of sin, woe, or the imperfection of human governance. We should not speak lightly of the very real dangers in the world today, or of the Internet’s role in facilitating them. But Section 230, despite its flaws, has proven its value in achieving the very goals that will be discussed in this hearing. Experience here and in other countries illustrates the foreseeable damage threatened by many alternatives to Section 230.
Congress could repeal or amend Section 230, but two major things would not change. First, the U.S. Constitution protects the vast majority of the hate speech, disinformation, and other offensive or dangerous “lawful but awful” speech online. Lawmakers cannot tell platforms to remove online speech if they have no constitutional power to restrain that speech in the first place. The idea that eliminating Section 230 would make platforms liable for users’ constitutionally protected speech is simply false.
Congress could write a law to punish platforms for carrying defamation, fraud, obscenity, or other truly unlawful speech. But if laws like that go too far in incentivizing intermediaries to suppress legal expression, they can also violate the First Amendment. This is the lesson of Smith v. California, a 1959 Supreme Court case. The law at issue in that case imposed liability only on bookstores — but, the Court noted, a “bookseller’s self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered.” The whole public is affected by today’s platform regulations, too. Those laws must respect ordinary Internet users’ rights to receive and comment on news, share text messages and family vacation photos, read restaurant reviews on Yelp, and post book reviews on Amazon.
The second constraint on lawmakers’ options is practical. Well-intended regulations will succeed or fail based on what platforms and users actually do in response to changed legal rules. A law that aims to prevent violence but in fact incentivizes major platforms to prohibit all discussion of world events or politics is not a success — particularly if the same speakers simply migrate to other platforms and foment even more violence. Neither is a law that aims to protect political speech but in practice would turn every platform into an identical, unrestricted cacophony.
In the remainder of my testimony, I will discuss the important work that Section 230 is doing today, and question the likelihood that alternative proposals would yield better results. I will also distinguish viable approaches to regulating platforms’ “design” from approaches that are constitutionally suspect.
Section 230 is doing important work today
Every platform speech regulation, including Section 230, represents a prediction — and a gamble — about the real-world behavior of platforms and Internet users. Section 230’s prediction was based on what is now known as the “moderator’s dilemma.” Its drafters were motivated by two court rulings. Together, the rulings told nascent Internet platforms that attempting to moderate content would put them in an editorial role, with legal responsibility for users’ unlawful speech — but that they could avoid this risk by refusing to intervene, and tolerating all manner of unlawful or harmful posts. Section 230 was designed to avoid the resulting perverse incentives. It sought to encourage platforms to create and enforce editorial policies, and protect them from the very real risk that doing so would lead to liability.
Americans need not look far for evidence that Section 230’s prediction was correct. The law has produced an ecosystem of small, medium, and large platforms that can all afford to exist, and to adopt diverse approaches to content moderation, because they cannot easily be sued out of existence. Comparing claims that are not immunized by Section 230 — federal crimes, intellectual property, trafficking, and prostitution — as well as foreign laws tells us a lot about the benefits Section 230 provides today.
Encouraging moderation and avoiding the moderator’s dilemma
The experience of the video hosting platform Vimeo illustrates the moderator’s dilemma in action. Vimeo employed content moderators in an effort to weed out both illegal uploaded content, such as obscenity, and content that violated the platform’s own rules, such as hate speech. That choice to moderate is precisely what Section 230 encourages. But because copyright law does not offer the same immunity, Vimeo was drawn into litigation for well over a decade about whether those moderators might have seen and recognized, but failed to remove, copyright-infringing content.
Vimeo has to date prevailed in court and survived the expense of litigation. Another smaller platform, Veoh, provides a more sobering example. Veoh and YouTube offered very similar services, and were sued on very similar copyright claims. Both ultimately won their cases, on nearly identical grounds. But being legally in the right was not enough to save Veoh. It went bankrupt in the process. YouTube, by contrast, was able to weather over $100 million in legal fees, and remains a behemoth today. Protection from devastating litigation costs, which accrue even in meritless lawsuits, is one of the most important benefits of Section 230. [fn 2]
Without Section 230, most platforms would have two safe courses to avoid liability for claims like defamation. They could moderate so thoroughly that only the blandest and least controversial material remains; or they could avoid moderation entirely and leave users to face the resulting glut of scams, pornography, dangerous diet advice, advocacy of violence, and more. This would not be an overall safer or better Internet than the one we have now.
Discouraging over-removal of lawful speech
Abundant evidence shows that under the “notice and takedown” systems established by many non-230 platform liability laws, platforms’ safest, cheapest, and easiest course is simply to honor every claim. This makes users’ online expression vulnerable to a “heckler’s veto,” and gives individuals, companies, and governments an avenue to silence speech simply by complaining about it.
Even under the U.S. Digital Millennium Copyright Act — a law that attempts to protect speakers by allowing for appeals, reinstatement of lawful content, and penalties against bad faith takedown demands — improper claims are extremely common, and far too often successful. Governments have used takedown demands to silence critical journalism and suppress video evidence of police brutality. Businesses have used them to target competitors. Activists and discredited scientists have used them to suppress inconvenient truths. Improper takedown claims are even more common under Europe’s Right to Be Forgotten laws. Among claims targeting 7.8 million webpages for removal from search results, Google reports that nearly half were legally invalid.
In theory, platforms should feel free to ignore complaints that target speech protected by the First Amendment. In practice, this is a pipe dream. Fighting is expensive, and platforms have little motivation to do it. They also often simply lack the information to know whether content is actually illegal. For example, they cannot know, and have little motivation to find out, whether allegedly defamatory news reporting about local political corruption is true or false. Even if platforms do have all the facts, the legality of speech often depends on complex doctrines like copyright fair use, or on nuanced, jurisdiction-specific precedent about who counts as a public figure in defamation cases. Litigating in the face of such uncertainty would be daunting even for the most dedicated defenders of speech.
That mix of bad platform incentives and legal uncertainty would define a world without Section 230. In such a world, we should expect platforms to regularly yield to takedown demands regardless of their merits. When platforms are forced to litigate, we should not expect a clear body of rules to emerge. There are simply too many varying legal questions to resolve. Even beyond the First Amendment issues, every state offers its own wide variety of tort claims, and every platform presents slightly or significantly different facts against which those claims may be tested. The room for plaintiffs to argue that a new case is not governed by precedent would be vast.
Section 230 helps platforms resist improper pressure from the government, too. Without it, state and federal officials could credibly threaten retaliation through, for example, targeted use of agencies’ civil enforcement powers. Recent incidents in which broadcasters have yielded to what Chairman Cruz called “mafioso” tactics from FCC Chairman Brendan Carr illustrate the problem. Chairman Carr’s threats temporarily drove comedian Jimmy Kimmel off the air. When CBS declined to air Stephen Colbert’s interview with Senate candidate James Talarico, the interview remained available on a Section-230-immunized platform, YouTube. The same dynamic could just as well arise with a liberal regulator and a conservative comedian or politician. As I tell my students, protecting lawful but unpopular speech from government “jawboning” is not a partisan issue. Everyone has reason to fear state power over their speech, whether under a current administration or a future one. History suggests that those who are societally marginalized or politically powerless are the likeliest victims of such abuse. It also suggests that when incumbents with close ties to and dependencies on government become too accommodating, it is the independent players who may show more spine. Section 230 is critical in allowing them to do so.
Encouraging competition and diversity of online platforms
A persistent myth in policy discussions holds that Section 230 only protects the biggest platforms. Nothing could be further from the truth. Today’s incumbents would survive the turbulence and uncertainty of a world without immunities. Their smaller competitors, like Veoh, very likely would not. This is the real backdrop when large incumbents “come to the table” and embrace changes to immunity. They are signaling that they can live with the consequences. If lawmakers want the Internet to evolve past its current state — if they want to constrain the power of today’s tech giants — the law must enable new and niche competitors to thrive.
Real-world defendants in Section 230 cases include newspapers, universities, libraries, employers, bloggers, and providers of spam protection and anti-fraud tools. They also include Internet infrastructure providers like domain name registrars. For the many smaller companies and non-profits the law protects, litigation costs remain daunting even with Section 230 in place. For early-stage startups, whose total monthly expenses reportedly average $55,000, such costs may simply be insupportable. Section 230 gives the next generation of competitors a fighting chance against today’s giants.
[1] As Blake Reid explains, the law is technically Section 230 of the Communications Act of 1934.
[2] One study found that in 28% of cases in which platforms raised Section 230 defenses, courts found plaintiffs’ claims invalid and resolved the cases without needing to consider Section 230. Courts relied on Section 230 as the primary basis for ruling in only 42% of cases.