What Online Content Are We Regulating? Illegal Speech, Offensive Speech, and Platform Value

This discussion, excerpted from my "Who Do You Sue?" article, very briefly reviews the implications of what I call "must-carry" arguments – claims that operators of major Internet platforms should be held to the same First Amendment standards as the government, and prevented from using their Terms of Service or Community Guidelines to prohibit lawful speech. Because the First Amendment shields speech that most people consider offensive or even dangerous, requiring platforms to include this speech would seriously alter users' experience in today's major online speech forums. The change would likely reduce platforms' value to most users and advertisers – and, as a result, their economic value.

In a separate section, excerpted here, I consider what “partial must-carry” legal regimes might avoid this problem – and what new problems such laws might create.

***

Some must-carry proponents seemingly aim to hold platforms to the same rules as the government, or want to convert them to common carriers—bound to deliver any message at all, or at least any that isn't illegal.[75] Such a standard might leave platforms free to apply content-neutral "time, place, and manner" restrictions, as the government may in places like public parks and streets. But it would also require platforms to preserve speech that many people find obnoxious, immoral, or dangerous. Real-world examples of legal online speech that have attracted widespread outrage include "history of why jews ruin the world" and "how to burn jews."[76] Examples of speech protected under recent First Amendment case law include signs held by picketers near a soldier's funeral saying "Thank God for IEDs" and "You're Going to Hell."[77] There is a reason public-interest groups and internet users typically urge platforms to take down more legal-but-offensive speech—not less.

Requiring platforms to carry speech that most users don’t want to see would also have serious economic consequences. Among other things, platforms would lose revenue from advertisers who do not want their brands associated with hateful or offensive content.[78] Converting platforms from their current, curated state to free-for-alls for any speech not banned under law would be seen by some as tantamount to nationalization. Platforms would almost certainly challenge it as an unconstitutional taking of property.

Even for committed free-expression advocates, it is not clear that requiring platforms to preserve all legal speech is in the public interest. There are speech rights on all sides of the issue. For one thing, platforms have their own First Amendment rights to include or exclude content. For another, platforms sometimes silence one aggressive user—or many—in order to help another user speak. Without the platform's thumb on the scales, some speakers, like female journalists barraged with not-quite-illegal threats of rape and violence, might be driven offline entirely.

If making platforms carry all legal expression seems too extreme, must-carry proponents might argue that platforms should be held to some other standard. For example, they might be permitted to exclude only highly offensive speech, or could be required to apply rules fairly. I will explore some hypothetical regimes of this sort in the final section of this essay and argue that they all have real problems. Perhaps most troublingly, legally prescribed "decency" or "fairness" standards would take platforms out of the job of deciding what currently legal speech users can share, only to have the government do it instead. The degree of regulatory intervention required to do so at internet scale would dwarf anything from the heyday of US broadcast regulation. And, unlike rules for mass media, these rules would govern speech by ordinary individuals. Letting platforms prohibit some legal speech would also fail to address the core concern that dissident or unpopular voices are being shut out of the most important forums for discussion today.

***

FOOTNOTES

[75] See Meyerson, “Authors, Editors, and Uncommon Carriers,” 114–15 (discussing obligations of common carriers).

[76] Julia Angwin, Madeleine Varner, and Ariana Tobin, "Facebook Enabled Advertisers to Reach 'Jew Haters,'" ProPublica, September 14, 2017, http://www.propublica.org/article/facebook-enabled-advertisers-to-reach-...; Daphne Keller, "Toward a Clearer Conversation about Platform Liability," Knight First Amendment Institute at Columbia University, Emerging Threats series, 2018, https://knightcolumbia.org/content/toward-clearer-conversation-about-pla... (discussing misperception that CDA 230 causes platforms to tolerate such speech).

[77] Snyder v. Phelps, 562 U.S. 443, 454 (2011).

[78] Olivia Solon, "Google's Bad Week: YouTube Loses Millions as Advertising Row Reaches US," The Guardian, March 25, 2017, https://www.theguardian.com/technology/2017/mar/25/google-youtube-advert...-verizon (describing advertiser pressure on YouTube to change extremist content policies).
