In defending Facebook against government scrutiny by invoking First Amendment rights, we’ve overlooked the legal consequences for CDA 230 and risk constitutionalizing the web.
In the wake of recent reporting on Facebook’s allegedly liberal curation of its trending newsfeed, and Sen. John Thune’s subsequent letter to CEO Mark Zuckerberg seeking answers about these allegations and demanding a meeting, constitutional scholars, press advocates, and civil libertarians have mobilized the First Amendment in the company’s defense. The Electronic Frontier Foundation’s (EFF) Sophia Cope argued that the letter constitutes “an improper intrusion into editorial freedom,” and Stanford Law lecturer Thomas Rubin wrote in Slate that “we should be concerned about this federal intrusion into an independent organization’s editorial process.”
But in our rush to ensure the integrity of the First Amendment, especially important in the murky context of digital civil liberties, we’ve overlooked the consequences of deciding that Facebook and similarly situated social media platforms are speakers, and in particular press speakers. Here, I unpack two particularly important legal consequences: (1) what happens to CDA 230 when Internet intermediaries invoke First Amendment protections, and (2) what is the limit to expanding First Amendment rights to new types of press actors?
1. Policing the Boundary Between the First Amendment and CDA 230.
Those who are advocating for a First Amendment interest in Facebook’s editorial practices assume that courts will recognize the distinction between the company’s curation decisions in its trending news feed and the third-party content it serves up in that feed. In other words, they assume that Facebook “speaks” or exercises “editorial control” when it chooses how to rank or display content, but does not adopt the content of the articles themselves as its own speech. Given the massive efforts at First Amendment expansionism happening in other areas of the law (what scholars have called a neo or “new Lochner” moment), I’m less sanguine. We must explicitly and vigilantly police the boundary between Internet companies’ ability to claim First Amendment protections in their editorial practices and their ability to claim CDA 230 immunity for the third-party content that they host.
Policing this legal boundary matters because we risk cannibalizing CDA 230, which provides that intermediaries are not liable for the third-party content that they host. As EFF correctly explains, this provision “is one of the most valuable tools for protecting freedom of expression and innovation on the Internet.” Fundamentally, CDA 230 creates the legal backbone that makes possible companies like Facebook and Twitter, which are built in large part on hosting or curating content. As a result, CDA 230 protects users’ expression, because companies need not worry about being legally liable for the content that you and I post, and it fosters innovation and economic growth in the digital economy. Without it, companies that host third-party content would likely have been sued out of existence. Here’s the key language from 47 U.S.C. § 230: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Historically, ensuring the integrity of CDA 230 hasn’t been an issue because Internet companies like Facebook, Twitter, and others acted exclusively as intermediaries. But as these companies play the role not only of intermediary but also of content curator and producer, the inherent contradiction between CDA 230 and the First Amendment comes to the fore: the First Amendment protects you when you claim speech or press activity as your own, while CDA 230 immunizes you when you host the content of others.
The bottom line is that to protect CDA 230 – the statute that undergirds the structure of third-party websites and ensures that they are not liable for the content that you and I post – it is essential that companies maintain clear lines between their editorial and intermediary functions. In other words, these companies can’t have it both ways: they cannot be both non-liable for third-party content and at the same time invoke the high bar of First Amendment protection for the very same third-party speech. Perhaps a solution for Facebook would be to formally silo the team running the trending news feed (presumably a hybrid product and editorial team), while its other services, like ads, continue to represent the company’s work as an intermediary.
2. We Need a Limiting Principle for “the Press” or We Risk Constitutionalizing the Web.
The concern that CDA 230 could be cannibalized by First Amendment claims is exacerbated by the lack of a limiting principle as to who or what can claim the type of First Amendment press right that advocates have outlined. The First Amendment question here is narrow: does human content curation coupled with an algorithm qualify as an editorial judgment? Both EFF and Rubin argue that, per Miami Herald v. Tornillo, it does. This analysis means that Facebook is acting not as a mere corporate speaker, but rather as the press.
Given the fluidity and instability within journalism these days, we should expect that the legal definition of who counts as the press for First Amendment purposes will change, too. But what’s the limiting principle? Are algorithms making editorial judgments? Is every search engine, in line with what Eugene Volokh has argued on behalf of Google, an editor?
The Facebook trending newsfeed example could offer a very narrow definition and clear limiting principle. Its trending news algorithm functions, it appears, with lots of hands-on interaction from humans; that’s part of what started the brouhaha in the first place. So the limiting principle for an algorithm to count as part of the press is that it has not simply been created by a human and unleashed into the wild, but rather that it is the product of consistent, perhaps daily, human decisionmaking. I realize many will disagree with this proposition, and they have a good argument: because algorithms are written by people, the results that they generate reflect human biases and intentions; in other words, they are a means by which editorial choices are produced. But if we treat algorithms that are highly attenuated from their human creators as invested with constitutional rights as “the press,” then we risk constitutionalizing the entire Internet. Not only would such First Amendment expansionism likely favor corporate speakers over individuals; more problematically, it would render the press clause meaningless. There are a multitude of algorithms at work on the web, and if the press clause covers all of them, it will be inappropriately and dangerously diluted.
We can avoid that route by adopting the tight human-algorithm nexus I described as a limiting principle for when algorithms warrant press protections. Moreover, Facebook’s trending newsfeed offers the opportunity to establish a second limiting principle: a functional definition of the press, as opposed to a legacy or institutional definition. This is a position that I’ve argued for in the past. In opposing the 2013 federal shield legislation (while working as a summer intern at EFF), I argued that legal definitions of the press should turn on the practice, not the profession, of journalism. Since Facebook is a non-traditional news source, any press protections that it claims would necessarily be tied to an argument about its functional role in curating content and making editorial decisions, not to its professional background or institutional claim to legitimacy. Such a move could benefit all sorts of non-traditional, non-institutional journalists.