What Platform Operations Are We Regulating? Platform Product Features, Content Removal, and Content Amplification

This discussion, excerpted from my Who Do You Sue article, describes the specific functions of online speech platforms that new regulation could affect. In particular, it calls out the difference between regulating platforms’ removal of online content and regulating their ranking or amplification of that content. Many discussions of platform power over online speech and information fail to distinguish between, and account for, these two separate functions.

The excerpt below focuses on what I call “must-carry” claims – arguments that the law should limit platforms’ power to prohibit or discriminate against certain content under their Terms of Service or Community Guidelines. But the product details it reviews are relevant for any laws that shape platforms’ interaction with user content. I sketch out potential models for such laws, designed to address major platforms’ function as de facto gatekeepers of online speech and information, here.

***

Today’s major internet platforms typically offer not one but many products and features. Facebook’s homepage, for example, has included manually curated news headlines and an algorithmically sorted feed of friends’ posts, as well as advertisements, event invitations, and notices about private messages. To get a handle on must-carry claims, we need to know which of these things claimants want to change.

Even within specific, high-profile product features—like Google’s web search results or Twitter’s news feed—platforms use a variety of mechanisms in response to disfavored content. To understand must-carry claims, we need to know which of these would be affected. Broadly, these mechanisms fall into two categories. First, platforms can exclude content, applying their content removal policies. Second, they can increase or decrease its visibility through their content ranking systems.[79]
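To make the two categories concrete, here is a minimal Python sketch that models them as separate operations on a post: a removal step that takes violating content down entirely, and a ranking step that only affects where the surviving content appears. Every name and field in it is invented for illustration; it does not describe any actual platform’s systems.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    violates_guidelines: bool  # hypothetical flag set by policy review
    relevance_score: float     # hypothetical score from a ranking model

def curate(posts: list[Post]) -> list[Post]:
    """Illustrative two-step curation: removal, then ranking."""
    # Category 1: content removal -- violating posts disappear entirely.
    surviving = [p for p in posts if not p.violates_guidelines]
    # Category 2: content ranking -- surviving posts are reordered;
    # low-scoring posts stay available but appear far down the feed.
    return sorted(surviving, key=lambda p: p.relevance_score, reverse=True)
```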

Content removal, in which platforms erase or block access to material that violates their Community Guidelines, is the kind of curation most familiar in must-carry discussions. Platforms’ content-removal policies often begin as ad hoc responses to particular cases. But over the years they can come to define a platform’s culture, in part by driving the recruitment or departure of key employees or board members.

Content ranking is the curation that platforms do on an ongoing basis in order to provide their basic user experience. Ranking algorithms, maintained and updated by teams of engineers, determine what material is considered most relevant or highest priority. For search engines, this ranking is the entire value proposition. But many social networks and other hosts also work hard to determine content order or layout, and strongly defend their right to do so. Ranking decisions may in practice be nearly as consequential as removal decisions, since few users will ever find information that is buried at the bottom of a news feed or search results. Public discussion and advocacy around ranking have become common in recent years.
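As a rough illustration of why demotion can approach removal in practice, this hypothetical sketch ranks a feed but displays only a fixed number of top slots. The demotion penalty and slot count are invented parameters, not any platform’s actual mechanism; the point is simply that an item pushed out of the visible slots is still hosted but rarely seen.

```python
def visible_slots(scores: dict[str, float], demoted: set[str],
                  penalty: float = 0.5, slots: int = 10) -> list[str]:
    """Rank items by score, applying a demotion penalty, and keep only
    the top slots -- everything below them is hosted but seldom found."""
    adjusted = {
        item: score * (penalty if item in demoted else 1.0)
        for item, score in scores.items()
    }
    ranked = sorted(adjusted, key=adjusted.get, reverse=True)
    return ranked[:slots]
```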

Advocates for must-carry regimes often focus solely on removal rather than ranking. For the sake of simplicity, I follow suit and use the term “removal” in much of this essay. But the distinction matters for some moral or legal questions. For example, some argue that platforms bear more responsibility for amplifying content than they do for merely hosting it—and as a result, that they should be more willing to down-rank content than to remove it completely.[80] The distinction could also matter for legal disputes about platforms’ First Amendment rights (as discussed in the next section) as well as for the complexity and feasibility of potential regulations (in the final section).

***

FOOTNOTES

[79] There are also more nuanced options, some of which blur the lines between these categories. Platforms can put a label on disfavored content (as Facebook does with discredited news stories); put it behind a warning page (as YouTube does for some violent or sexual material); disable comments or ads (as YouTube did with Dennis Prager’s videos); remove it in some geographic regions (as Facebook, Twitter, YouTube, and other platforms do for legal violations); or make it available only to users who opt in to see the content (as Google does with pornography in search results). Platforms can also demote disfavored content in their rankings, as Google has done with copyright-infringing sites; that option is one reason that “removal” and “ranking” decisions are not always entirely separate.

[80] See, e.g., Renee DiResta, “Free Speech Is Not the Same as Free Reach,” Wired, August 30, 2018, https://www.wired.com/story/free-speech-is-not-the-same-as-free-reach.