Platform Content Regulation – Some Models and Their Problems

Lawmakers today are increasingly focused on their options for regulating the content we see on online platforms. I described several ambitious regulatory models for doing that in my recent paper, Who Do You Sue? State and Platform Hybrid Power Over Online Speech. This blog post excerpts that discussion, and sketches out potential legal regimes to address major platforms’ function as de facto gatekeepers of online speech and information. Readers I’ve talked to so far have expressed particular interest in the “magic APIs” model, which speaks to both speech and competition concerns.

Of course, lawmakers’ options are simpler if their only goal is to make platforms take down more user-generated content. A clumsy law like FOSTA can achieve that goal easily – at the cost of driving a great deal of legal speech offline. I outlined more nuanced options for lawmakers purely pursuing takedown goals in this White Paper, and will offer a shorter rundown of available doctrinal “dials and knobs” in a piece coming out soon from Yale ISP.

The ideas outlined here assume that lawmakers want to shape platform behavior more broadly – including by constraining their discretionary power to take down users’ lawful speech under Terms of Service or Community Guidelines. I refer to arguments that platforms can be compelled to carry content against their will as “must carry” claims. Much of the paper is devoted to the likely Constitutional barriers to such claims or laws. This section also builds on discussions about exactly which speech and exactly which platform operations, including content ranking or amplification, might be affected by must-carry rules.

***

[I]t is far from clear that courts or Congress could, as a constitutional matter, compel platforms to carry particular speech or messages against their will. There are also serious policy and philosophical questions about whether we should want them to. Platforms’ rules against legal-but-offensive speech prevent real-world harms and curtail behavior that many people find morally abhorrent. They also protect platforms’ economic value as desirable destinations for users and advertisers.

On the other hand, leaving internet speech policy in the hands of private companies has ramifications going beyond major platforms to the entire internet speech ecosystem. If any private internet intermediary can exclude legal but unpopular speech—and has economic incentives to do so—where is that speech supposed to go? Has the public interest been served if it is driven offline entirely? As Justice Kennedy noted decades ago, "Minds are not changed in streets and parks as they once were. To an increasing degree, the more significant interchanges of ideas and shaping of public consciousness occur in mass and electronic media."[125] In function, internet platforms have largely replaced public spaces like parks and streets, while other private intermediaries have displaced public exchange systems like the postal service or cash currency. If private actors throughout the internet’s technical stack can exclude legal speech, and are pressured to do so, the online marketplace of ideas will look very different from the one imagined in First Amendment jurisprudence.

In response to comparable questions, the Supreme Court has occasionally approved must-carry obligations that “left room for editorial discretion and simply required [owners of communications channels] to grant others access to the microphone.”[126] This implies that a regime mandating a limited degree of access to important speech channels could, hypothetically at least, pass constitutional muster. It is hard to imagine what such a “partial must-carry” regime would look like in the real world, but I will sketch out some approaches in this section.

Regulating Bigness     

Lawmakers outside the United States have experimented somewhat with setting different rules for hosting platforms depending on their size. Germany’s NetzDG, for example, holds social networks with more than two million German users to stringent content-removal timelines, as well as higher standards of public transparency.[127] Drafts of the EU’s pending Copyright Directive would also impose special obligations on larger hosts.[128] Setting different restrictions depending on size would create problematic incentives for growing start-ups and is generally not a common approach in American law. It is also hard to identify a workable definition of “bigness” that would not inadvertently sweep in complex entities like the thinly staffed, user-managed Wikipedia. Special rules for mega-platforms would conceptually align with some thinking about competition and the First Amendment, though, by imposing obligations only on those who control access to “scarce” communications channels or audiences.

Empowering Users

For many proponents of online civil liberties, the go-to solution for problems of platform content moderation is to give users themselves more control over what they see. Settings on YouTube or Twitter, for example, could include dials and knobs to signal our individual tolerance for violence, nudity, or hateful speech. This isn’t a cure-all, but it’s still a great idea. It’s been around at least since the 1990s, when technologies like the Platform for Internet Content Selection (PICS) were supposed to allow users to choose what web content appeared in their browsers. Both the Supreme Court in the seminal Reno v. ACLU internet First Amendment case and Congress in passing CDA 230 relied in part on the expectation that such technologies would empower consumers.

Today, there remains much to be done to give users more control over their information diet. There is perhaps a chicken-and-egg question about the paucity of end-user content controls today and the rise of major, centralized platforms. Did internet users stop demanding these tools once they found search engines and curated hosting platforms to protect them from the worst of the web? Or did they flock to centralized platforms because good tools for true end-user control did not exist? It may be that such tools have only limited promise as a technical matter, because they depend on accurate content labeling.

A user who wanted to block most racial epithets but retain access to rap lyrics, historical documents, and news reporting, for example, could do so only if people or algorithms first correctly identified content in these categories. That’s more work than humans could do at internet scale, and algorithmic filters have so far proven highly unreliable at tasks of this sort. Giving everyone his or her own content filter, too, might solve some problems while exacerbating others—particularly those involving “filter bubbles,” echo chambers, and attention scarcity.
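To make the labeling dependence concrete, here is a minimal, purely illustrative sketch of a PICS-style user-side filter. The label vocabulary, data structures, and threshold values are my own assumptions for illustration, not any real standard; as the discussion above suggests, the hard part is getting content accurately labeled at internet scale in the first place.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    text: str
    # Hypothetical label vocabulary: scores in [0, 1] per category, plus an
    # optional "contexts" list (e.g. ["news", "historical"]). Accurate labels
    # are exactly the weak point described in the text.
    labels: dict = field(default_factory=dict)

@dataclass
class UserFilter:
    # Per-category tolerance thresholds chosen by the user: the "dials and knobs."
    thresholds: dict
    # Contexts the user wants to see regardless of other labels.
    allow_contexts: set = field(default_factory=set)

    def permits(self, item: ContentItem) -> bool:
        # Context exemptions win: a news report quoting a slur still gets through.
        if self.allow_contexts & set(item.labels.get("contexts", [])):
            return True
        # Otherwise block anything that exceeds the user's tolerance in any category.
        return all(
            score <= self.thresholds.get(category, 1.0)
            for category, score in item.labels.items()
            if category != "contexts"
        )

# A user who blocks most epithets but exempts artistic, news, and historical material:
user = UserFilter(thresholds={"slurs": 0.2, "violence": 0.6},
                  allow_contexts={"news", "historical", "artistic"})
lyric = ContentItem("(rap lyric quoting an epithet)", {"slurs": 0.9, "contexts": ["artistic"]})
print(user.permits(lyric))      # True, but only because the context label is accurate
bare_slur = ContentItem("(post using the same epithet, no context label)", {"slurs": 0.9})
print(user.permits(bare_slur))  # False
```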

Hybrid Regimes

As discussed above, there are important questions about whether must-carry advocates want platforms to carry all legal speech or just some of it, and about which platform operations they think should be affected. While the more aggressive civil libertarian position might demand that platforms carry any speech that is legal, some claimants—and the general public—almost certainly prefer curated platforms where civil discussion can take place. Speakers seeking an online audience in particular may not want their platform of choice to become so unattractive that other users depart or that advertisers cease funding platform operations.

Are there partial must-carry laws that could achieve this goal, and that would at least partially preserve platforms’ own control over ranking and removal? Below, I list a few possible models I have discussed over the years with experts in internet law and technology, both informally and in print or on conference panels. They all involve very serious trade-offs among competing values. Some of them sound simple in theory but would be monstrously complex in practice for lawmakers, technologists, or both. Most also have another thing in common, something that became clearer to me in the course of writing this essay. They reraise questions that smart people thought and wrote and legislated and litigated about not that long ago in the context of communications law. Today’s issues are not identical, because technology and social context have changed. But the parallels are strong. I am not the lawyer to explain that history or revive the arguments that communications experts have made in the past. I do think, though, that lawyers and thinkers in my field will need to understand them much better as the political conversation about must-carry obligations and overall platform regulation continues.

One possible approach would let platforms act against highly offensive or dangerous content but require them to tolerate more civil or broadly socially acceptable speech. That kind of legal regime has plenty of precedent: the FCC has long enforced rules like this for TV and radio, for example. But it would be a troubling solution for online speech for several reasons. First, we are far from a national or even local consensus about what’s highly offensive or dangerous. Second, rules of this sort would, like their TV and radio equivalents, require substantial and ongoing regulatory intervention and rulemaking to determine which theory of offensive and dangerous speech should prevail. Regulating platforms’ rapidly evolving and technically complex ranking algorithms would be particularly challenging.[129] Third, unlike broadcast regulation, rules limiting online speech would reach deep into ordinary people’s daily communications. And of course, fourth, laws that allowed platforms to take down some legal posts but not others would use state power to pick winners and losers among legal speech. That would require a massive rethinking of First Amendment law, to be resolved by equally massive litigation.

A second variant on this idea would try to avoid state-sanctioned value judgments about speech, and instead let platforms enforce any rules as long as they are “fair.” Fairness mandates that simply required rules and processes to be transparent, or that barred discrimination on bases such as race or gender, might be comparatively easy to define. But that kind of fairness wouldn’t achieve what many must-carry proponents want: equal treatment for different viewpoints. A viewpoint-neutrality rule is much harder to imagine. Would it require equal treatment for Democrats, Republicans, Socialists, Monarchists, and Anarchists? For people who like creamy peanut butter and people who like crunchy? For people urging us to invest in sketchy tech start-ups and people urging us not to? The potential rules here would also likely resemble the ones the FCC applied to older communications channels. The equal-time rule, for example, required broadcasters to give equal airtime to qualified candidates for public office. And the fairness doctrine required “fair” coverage for issues ranging from workers’ rights to nuclear power plant construction. Critics charged that the doctrine was unworkable and that it effectively enabled selective enforcement by an unaccountable bureaucracy. The FCC itself eventually concluded that the doctrine was unconstitutional, and President Reagan vetoed a bill that would have codified it.[130]

A third variant might say that platforms have to make room for disfavored speech, but that they don’t have to promote it or give it any particular ranking. As Tim Lee put it, we could “think of Facebook as being two separate products: a hosting product and a recommendation product (the Newsfeed).”[131] On this model, a platform could maintain editorial control and enforce its Community Guidelines in its curated version, which most users would presumably prefer. But disfavored speakers would not be banished entirely and could be found by other users who prefer an uncurated experience. Platforms could rank legal content but not remove it.

This idea has a communications law flavor as well. Internet regulatory models have traditionally distinguished between “network layer” intermediaries, such as ISPs, that function as infrastructure and thus have must-carry or net-neutrality obligations; and user-facing “application layer” services like Facebook or Google. Users of application layer services typically want content curation and don’t want must-carry rules—or didn’t use to.[132] The increase in demands for must-carry mandates could be read as a call to rethink the role of major platforms and to start treating them more like essential, network-layer internet infrastructure. A “rank but don’t remove” model would recognize this, requiring major platforms to offer an uncurated, unranked service but preserving their discretion over the curated version. The problems with this model are less extreme, since it at least avoids creating new state-sponsored speech rules. But it would still require extensive and ongoing regulation, with the attendant distortion of market incentives and innovation, to decide which of a given platform’s functions count as “network” and which as “application.”[133] And it presumably would not be a very satisfactory solution for most must-carry proponents, since it would still largely deprive them of the audience they seek.
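A minimal sketch may help illustrate the split between a hosting product and a recommendation product. Everything here, the class names, fields, and moderation flags, is hypothetical and intended only to show how removal decisions and ranking decisions could be pulled apart:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author: str
    body: str
    legal: bool = True                  # unlawful content stays removable outright
    violates_guidelines: bool = False   # lawful but against Community Guidelines

class Platform:
    """Hypothetical 'rank but don't remove' platform: two products in one.

    The hosting product keeps every lawful post retrievable; the recommendation
    product (the curated feed) keeps its editorial discretion.
    """

    def __init__(self) -> None:
        self.hosted: dict[str, Post] = {}

    def publish(self, post: Post) -> None:
        if post.legal:                  # the must-carry floor applies to lawful speech only
            self.hosted[post.post_id] = post

    def fetch(self, post_id: str) -> Post | None:
        # Hosting product: anyone who goes looking for the post can still find it.
        return self.hosted.get(post_id)

    def curated_feed(self) -> list[Post]:
        # Recommendation product: guideline-violating posts never surface here.
        return [p for p in self.hosted.values() if not p.violates_guidelines]
```

The hard regulatory question, as the paragraph above and footnote 133 suggest, is exactly where in a real product the line between the hosting function and the recommendation function would be drawn.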

A final variant is what I think of as the “magic APIs” model.[134] It is broadly analogous to telecommunications “unbundling” requirements, which aim to insert competition into markets subject to network effects by requiring incumbents to license hard-to-duplicate resources to newcomers. In the platform context, this would mean that Google or Facebook opens up access to the “uncurated” version of its service, including all legal user-generated content, as the foundation for competing user-facing services. Competitors would then offer users some or all of the same content through new user interfaces, with their own content ranking and removal policies. Users might choose a G-rated version of Twitter from Disney or an explicitly partisan version of YouTube from a political group, for example. As Mike Masnick puts it:

Ideally, Facebook (and others) should open up so that third party tools can provide their own experiences—and then each person could choose the service or filtering setup that they want. People who want to suck in the firehose, including all the garbage, could do so. Others could choose other filters or other experiences. Move the power down to the ends of the network, which is what the internet was supposed to be good at in the first place.[135]

Letting users choose among competing “flavors” of today’s mega-platforms would solve some First Amendment problems by leaving platforms’ own editorial decisions undisturbed, while permitting competing editors to offer alternate versions and include speakers who would otherwise be excluded. But platforms would object on innumerable grounds, including the Constitution’s prohibition on most state takings of property. In any case, this approach would also create a slew of new problems—beyond the ordinary downsides of regulatory intervention and disruption of private enterprise. The technology required to make it work would be difficult, perhaps impossible, to build well; that’s the “magic” part. There are also serious questions about how such a system would interact with the complex, multiplayer technical infrastructure behind online advertising. And, perhaps most dauntingly, streamlined systems for users to effectively migrate to competing platform versions could far too easily run afoul of privacy and data-protection laws.[136]
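For concreteness, here is a toy sketch of the data flow a “magic APIs” competitor might implement: pull the incumbent’s uncurated firehose and apply its own editorial policy and ranking. The endpoint, field names, and policy below are entirely hypothetical, and the sketch deliberately skips the genuinely hard parts flagged above (scale, advertising integration, and privacy-preserving data portability).

```python
import requests  # any HTTP client would do; used here for brevity

# Entirely hypothetical endpoint: the "uncurated firehose" an incumbent platform
# might be required to expose under an unbundling-style mandate.
FIREHOSE_URL = "https://api.example-platform.test/v1/firehose"

def fetch_firehose(since_id: str | None = None) -> list[dict]:
    """Pull raw, uncurated posts (all legal user-generated content) from the incumbent."""
    params = {"since_id": since_id} if since_id else {}
    response = requests.get(FIREHOSE_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["posts"]

def family_friendly_policy(post: dict) -> bool:
    """One competitor's editorial policy: a 'G-rated' view of the shared content.

    A rival front end could apply a partisan, professional, or anything-goes
    policy to the identical firehose instead.
    """
    banned_labels = {"nudity", "graphic_violence", "slurs"}
    return not banned_labels & set(post.get("labels", []))

def build_feed() -> list[dict]:
    # Each competing front end filters and ranks the shared content itself.
    posts = [p for p in fetch_firehose() if family_friendly_policy(p)]
    return sorted(posts, key=lambda p: p.get("likes", 0), reverse=True)
```

Note what the sketch leaves out: how users would authenticate, how their social graphs and data would travel with them, and how advertising revenue would be divided. That is where most of the real difficulty, and the “magic,” lies.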

It is far from clear to me that any of these ideas could have upsides that outweigh the downsides. But none has really been given a good tire-kicking by technical and legal experts, either. The current conversation about must-carry is in this sense in its infancy, for all the political light and heat it has generated.

***

FOOTNOTES

[125] Denver Area, supra note 17 at 802–03 (Kennedy, J., concurring).

[126] FCC v. League of Women Voters of California, 468 U.S. 364, 385 (1984).

[127] Act to Improve Enforcement of the Law in Social Networks, Article 1, Section 1 (English translation) (2017), https://www.bmjv.de/SharedDocs/Gesetzgebungsverfahren/Dokumente/NetzDG_e...?__blob=publicationFile&v=2.

[128] European Commission, Proposal for a Directive of the European Parliament and of the Council on Copyright in the Digital Single Market (September 14, 2016), https://ec.europa.eu/digital-single-market/en/news/proposal-directive-european-parliament-and-council-copyright-digital-single-market.

[129] See Blevins, “The New Scarcity” (describing logistical challenges of such regulation). For a sense of the challenge, Google reports that of more than two trillion search queries it receives each year, 15 percent are completely new. That works out to more than 800 million unprecedented search result sets—and opportunities for “unfair” ranking—each day. Barry Schwartz, “Google Reaffirms 15% of Searches Are New, Never Been Searched Before,” Search Engine Land, April 25, 2017, https://searchengineland.com/google-reaffirms-15-searches-new-never-searched-273786.

[130] Kathleen Ann Ruane, “Fairness Doctrine: History and Constitutional Issues,” Congressional Research Service, July 13, 2011, https://fas.org/sgp/crs/misc/R40009.pdf; see also Adam Thierer, “Why the Fairness Doctrine Is Anything but Fair,” Heritage Foundation, October 29, 1993, https://www.heritage.org/government-regulation/report/why-the-fairness-d...; Szóka, supra note 83 at 3 (“Opposition to reinstatement of the Fairness Doctrine has been in every GOP platform since 2008”); Wu, Is the First Amendment Obsolete? (a fairness doctrine for platforms would be “too hard to administer, too prone to manipulation, and too apt to flatten what has made the Internet interesting and innovative”).

[131] Timothy B. Lee, “Alex Jones Is a Crackpot—But Banning Him from Facebook Might Be a Bad Idea,” Ars Technica, August 6, 2018, https://arstechnica.com/tech-policy/2018/08/op-ed-alex-jones-is-a-crackp...-banning-him-from-facebook-might-be-a-bad-idea.

[132] Blevins, “The New Scarcity”; Bridy, “Remediating Social Media.”

[133] The line between curated and uncurated product components is difficult to draw: Would your racist uncle’s tweets simply never appear in your feed but be visible on his profile page? Be hidden from the casual visitor to his profile page but findable through search? If Google dislikes the only web page that has the text string “fifty purple pigeons eat potatoes,” could it rank a hundred other pages above it when users search for those exact words?

[134] APIs, or application programming interfaces, are technical tools that allow one internet service to connect with and retrieve information from another.

[135] Mike Masnick, “Platforms, Speech and Truth: Policy, Policing and Impossible Choices,” Techdirt, August 9, 2018, https://www.techdirt.com/articles/20180808/17090940397/platforms-speech-...-impossible-choices.shtml (emphasis omitted). Jonathan Zittrain has similarly suggested that “Facebook should allow anyone to write an algorithm to populate someone’s feed.” Quoted in Michael J. Coren, “Facebook Needs to Hand Over Its Algorithm If It Really Wants to End Fake News,” Quartz, December 6, 2016, https://qz.com/847640/facebook-needs-to-hand-over-its-algorithm-if-it-really-wants-to-end-fake-news.

[136] See generally Kevin Bankston, “How We Can ‘Free’ Our Facebook Friends,” New America, June 28, 2018, https://www.newamerica.org/weekly/edition-211/how-we-can-free-our-facebo... (discussing privacy laws’ potential “unintended consequence of locking in current platforms’ dominance by locking down their control over your data”).
