The Supreme Court is about to review a constitutional challenge to two unprecedented and very complicated laws regulating social media. The laws were enacted by Texas and Florida in order to counter “censorship” and alleged anti-conservative bias of major Internet platforms like Facebook or YouTube. Both laws have “must-carry” rules that restrict platforms’ ability to moderate content under their preferred editorial policies, and “transparency” rules including requirements for platforms to notify users when their posts have been moderated. In the NetChoice cases, the Court agreed to review just one of the many questions the cases present: whether the laws violate the First Amendment.
This is an FAQ to explain some basics about the cases, and address some fairly complicated questions that students, reporters, and others have asked me. I will add more entries over the coming weeks. The FAQ is mostly focused on questions that might not have obvious answers, or that are particularly hard to understand, or that I suspect have fallen between the cracks and not been addressed enough in parties’ and amici’s briefs to the Court. In other words, this will get pretty wonky.
I am very focused on these cases because they bring together questions I have worked on for two decades, first as a lawyer for Google, where I led the legal team for web search, and then as an academic at Stanford since 2015. I think the platforms should win in NetChoice. But I also think that both must-carry laws and transparency laws are important and far more complicated, as a matter of both policy and constitutional law, than these cases might suggest. NetChoice should be an easy case, because the Texas and Florida laws were so badly designed and drafted. But that shouldn’t be the final word on either topic. For more on my own positions: here is my take on must-carry issues generally, and here is the amicus brief I filed in NetChoice with Jack Balkin and the Yale MFIA clinic on behalf of Francis Fukuyama; here is my Senate testimony about platform transparency generally, and here is a detailed constitutional analysis of the Texas and Florida transparency rules.
With that, on to the FAQs!
BASICS
- What do the laws say?
- Who sued whom for what, and then what happened?
- What do we know about the Justices and these cases?
THORNIER QUESTIONS
- Are these cases about discrimination?
- Do these laws actually impose common carriage mandates?
- Did the Court decide on any of this in Taamneh or Gonzalez last term?
- What would it mean for platforms to be "consistent" or "viewpoint-neutral" in moderating user content?
- What does Section 230 have to do with these cases?
- What does Florida’s law actually say?
- If the platforms win this case, will lawmakers be unable to regulate platforms with rules about things like privacy, discrimination, or competition?
QUESTIONS I HOPE TO EXPLORE IN LATER POSTS
- Don’t these laws just make platforms show people what they want to read?
- Is this a case about political debate or about Nazis and terrorists?
- Are the notice and appeal rules just basic consumer protection measures?
- How do the NetChoice cases relate to Murthy v. Biden, the case this term about informal “jawboning” pressure by governments for platforms to remove content?
- How do the NetChoice cases relate to Lindke and Garnier, the cases this term about users’ First Amendment rights to follow and engage with lawmakers on social media?
- What parts of these statutes will the Court actually review?
- Which platforms do these laws actually regulate?
- Is this case like Turner or is it like Denver Area?
- Does Florida have it in for eCommerce sites?
BASICS
1. What do the laws say?
Both laws are quite long, and include provisions that are not at issue in the cases. My informal annotated copies of the laws are here and here. The parts at issue in the cases are (1) so-called “must-carry” rules that restrict platforms’ ability to moderate content, and (2) “transparency” or “notice” rules that require platforms to notify users about moderation actions and (in Texas’s case) allow the users to appeal the platforms’ decisions. There are some edge questions about which parts of the transparency laws are in scope for review.
Florida’s must-carry law has special rules limiting platforms’ ability to moderate content posted by so-called “journalistic enterprises” and content “by or about” political candidates. It also requires that all moderation be carried out “consistently,” though it’s unclear exactly what that means. The Florida law was hastily written and can be hard to understand. On a close read, many parts don’t say what they might initially appear to say. Florida argues that the law actually contains important exceptions to its must-carry rules, based on its interpretation of the federal platform immunity statute known as Section 230. The only specific exception spelled out in the law allows platforms to moderate “obscene” content if it comes from journalistic enterprises.
Texas’s must-carry law prohibits platforms from moderating content based on its “viewpoint.” (That, too, could mean a lot of different things.) It lists specific exceptions, including allowing platforms to moderate unlawful material, as well as some specified other highly offensive or harmful material, regardless of its viewpoint. Texas could make the same argument that Florida does about the law incorporating exceptions from Section 230, but I think its briefs have been comparatively vague about this topic so far.
2. Who sued whom for what, and then what happened?
Two trade associations representing platforms, NetChoice and CCIA, brought the cases. They prevailed in the case against Florida at the district court and again at the 11th Circuit. In Texas, platforms won at the district court but lost at the 5th Circuit. Platforms’ initial pleadings raised three big claims based on (1) the First Amendment, (2) Section 230, and (3) the “Dormant” Commerce Clause, which sometimes limits states’ ability to regulate issues affecting interstate commerce. Of these claims, only the First Amendment arguments are now before the Supreme Court, which agreed to review just that question.
For those who want it, here is more detail on each case:
- Texas: In Texas, the platforms won on First Amendment grounds at the District Court, and lost on First Amendment grounds in the 5th Circuit. While the case was progressing, it made a brief foray to the Supreme Court, with the parties disputing whether the law would come into effect while that stage of litigation continued. The Court said the law would not go into effect for the time being, though Justices Thomas, Alito, Gorsuch, and Kagan dissented. The 5th Circuit’s ruling focused almost entirely on First Amendment arguments, though the court said in a footnote that platforms had “forfeited” Section 230 arguments by mentioning them in just one sentence of a brief. The three-judge panel in the 5th Circuit issued three opinions. Some portions of the lead opinion, by Judge Oldham, were supported by a majority of the panel. The common carriage analysis was not; it drew only his own vote.
- Florida: In Florida, the platforms prevailed in the district court on both Section 230 and First Amendment grounds. The 11th Circuit held the law’s must-carry provisions unconstitutional under the First Amendment. It also ruled primarily on First Amendment grounds as to the transparency mandates (with a footnote rejecting platforms’ Section 230 arguments). It upheld most of those mandates, but struck down the key provision now being considered by the Supreme Court—the requirement to notify users about content moderation—because of the burden it would place on platforms’ editorial choices.
3. What do we know about the Justices and these cases?
Clarence Thomas has been the most outspoken on relevant topics, strongly expressing concerns about major platforms’ power over public discourse. He has issued two procedurally unusual opinions on the topic. (Unusual in that they did not accompany cases resolved by the overall Court, but were instead published as concurrences to pro forma dispositions of cases the Court did not hear.) The most relevant for NetChoice was issued in relation to the Knight case about users’ First Amendment rights to access Trump’s Twitter account, which raised questions that are being separately reviewed by the Court this term in the Lindke and Garnier cases. In that opinion, Justice Thomas discussed possible sources of law for imposing must-carry obligations on platforms, including common carriage and public accommodations law. His other opinion, in Malwarebytes, was more focused on Section 230's statutory immunities. Thomas was also the author of last term’s Taamneh ruling, which contains language relied upon heavily (and I think inappropriately) in Texas’s and Florida’s briefs.
Justice Alito wrote a brief opinion, joined by Justices Thomas and Gorsuch, dissenting from the Court’s decision to temporarily keep the Texas law from coming into effect. His opinion expressed sympathy for many of Texas’s arguments. Justice Kagan also dissented, perhaps because of her objections to use of the Court’s “shadow docket” for such decisions.
Justice Kavanaugh issued a highly relevant dissent in a net neutrality case while he was on the D.C. Circuit Court of Appeals. He would have allowed Internet access providers to proceed with arguments that the rules violated their First Amendment rights. In explaining his reasons, Justice Kavanaugh expressed reservations about the government’s ability to “regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter.” (The opinion is long; the excerpt I use for teaching is here.)
THORNIER QUESTIONS
Are these cases about discrimination?
These are cases about whether platforms can “discriminate” against certain messages posted by users. But they are not about the kind of “discrimination” at issue in civil rights cases that consider things like desegregation. In their final briefs, the states argue that cases about that second kind of discrimination—discrimination against people based on characteristics like race, gender, religion, or sexual orientation—support their laws. I think this is plainly wrong, and mostly just an attempt to muddy the waters or make the mandates sound more like traditional common carriage laws. My analysis is in this Lawfare piece.
Do these laws actually impose common carriage mandates?
Texas and Florida argue extensively that their rules for online speech are justified on the basis of older “common carriage” laws. The appeal of that argument is obvious. Major platforms really do control important communications channels, and common carriage laws do sometimes apply to communications resources. But the legal foundations for their arguments are dubious. And in any case, the rules that Texas and Florida impose depart in many important ways from traditional common carriage mandates.
Many briefs in the case focus on whether platforms are common carriers, or could be legally treated as common carriers. But those questions mostly only matter if the laws at issue in NetChoice actually impose common carriage obligations. I think they do not. Instead, the Texas and Florida laws impose new forms of state power over online speech, in ways that cannot be justified in the name of “common carriage.”
What are common carriage laws? Common carriage laws have generally obliged entities like railroads, stagecoach companies, or telegraph operators to offer the same service to all customers (or all who are paying for the same service, like first class or coach tickets). Laws like this, which ensure the availability of basic and shared foundations for other human endeavors, were and still are incredibly important. They ensure that new businesses can count on being able to ship products and communicate with buyers or sellers; give families and friends the means to stay in touch and visit; and let political organizers count on their newsletters and phone calls getting through. Net neutrality laws for Internet access providers look a lot like traditional common carriage (a number of amicus briefs in NetChoice talk about that). One compelling illustration of common carriage laws' importance can be seen in the history of Black consumers in the Jim Crow South ordering from the Sears catalog, in order to bypass local merchants who wouldn’t serve them. That might never have happened without laws ensuring that both catalogs and goods would be delivered.
When do common carriage laws apply? The law on this is well covered in party and amicus briefs, so I won’t attempt to add to it here. Public Knowledge, TechFreedom, Christopher Yoo, and others speak to this on the platform side, as do Adam Candeub and others on the states’ side.
Did Texas and Florida enact common carriage laws? Not unless we give a whole new meaning to the words “common carriage.” Of course, there is no completely fixed definition for that term: as Blake Reid has detailed, the rules for historical common carriers varied widely. So did the legal justifications for imposing those rules, and the consequences for the kinds of First Amendment questions raised in NetChoice. But generally we can think of common carriage laws as requiring carriage of just about everything. (Or perhaps everything that isn’t illegal, or everything that doesn’t disrupt service for other customers.) In NetChoice, Texas and Florida are at pains to explain that this is not what their laws require. Texas insists that the platforms’ examples of being compelled to carry lawful but “vile” expression such as pro-terrorist content are merely “fanciful,” and Florida says platforms can remove broad categories of lawful speech under “standards of their choosing[.]”
The obligations that Texas and Florida seek to impose do not look much like common carriage.
- Florida has special carriage rules for some specific news and election-related communications. That’s an understandable policy goal, if very poorly executed in Florida’s legislation. Federal lawmakers as far back as 1792 subsidized news delivery, and later allowed carriers like telegraph operators to prioritize it. But Florida’s requirements that platforms give special treatment to (bizarrely defined) “journalistic enterprises” and any post “by or about” political candidates are something else altogether. If anything, they look more like the FCC’s content-based public interest rules for broadcasters than like common carriage.
- Texas’s law requires viewpoint neutrality for most lawful content, but sections 143A.006(a)(2)-(3) let platforms freely remove certain kinds of violent, threatening, or harassing “lawful but awful” content that its legislators couldn’t quite stomach.* (I describe the broader issues with that approach here, and the specific rules here.) Florida is now claiming that its law, too, has content-based carveouts allowing platforms to remove some offensive but legal material. Platforms can freely restrict lawful but violent, harassing, or pornographic content (but presumably not material that’s objectionable on other grounds), Florida maintains, thanks to an unorthodox read of Section 230.
That approach—drawing lines between state-approved and state-disapproved legal speech—looks somewhat like the kind of “decency” rule the FCC might impose on TV stations. But it doesn’t look like most common carriage. The post office can’t refuse to deliver sex- or violence-oriented magazines like Playboy or Soldier of Fortune, and phone companies can’t cut you off because your long distance romance led to some heated calls. If the “carry just some particular content that the government approves of” version of the Texas and Florida laws is common carriage, it is a weird version. It is also one that, unlike “neutral” carriage mandates, raises the major First Amendment questions that come with content-based speech laws.
- Both Texas and Florida also require some form of “consistency” or “viewpoint-neutrality.” It’s unclear what those rules mean in practice. Trying to define real-world content moderation rules under those standards gets pretty weird pretty fast. These laws seem to allow platforms to set high level rules for user speech, but not more granular ones. Some of the potentially permissible rules do look common-carriage-esque: railroads could eject drunk and disruptive passengers, for example; platforms could presumably do the online equivalent. (They would just need to be viewpoint-neutral in doing so. They couldn’t oust users who are bothering other people by yelling their support for the Dallas Cowboys, while allowing disruption from equally vocal fans of the Eagles or Commanders.)
But a “viewpoint-neutrality” requirement would also seem to allow platforms to decide what topics users can talk about. Texas’s brief explains its own law as permitting platforms to “block categories of content, such as violence or pornography.” That’s precisely what platforms might try doing if the law were upheld. A platform that doesn’t want to host racist diatribes might just shut down all discussions of race, for example. Or it might try to limit posts to “neutral” or “factual” messages. (Lawsuits about this could get ugly very fast. Would judges or juries have to decide which claims about race or gender are factual?)
Those kinds of rules—no talking about race, no talking about politics, no talking about climate change, etc.—would look nothing like the rules that can be adopted by traditional common carriers like phone companies. This approach to content moderation would also get unmanageable pretty fast. It’s not clear what speech would even be left once platforms imposed viewpoint-neutral rules for weighty topics like transgender rights or the Israel-Hamas conflict, or frivolous ones like Taylor versus Kanye or whether hot dogs are sandwiches. Whatever the outcome was, it wouldn’t look much like older common carriage regimes.
- The states’ transparency rules, including the notice and appeal requirements that the Court will review in NetChoice, impose substantial state supervision over platforms’ decisions about speech and content. Attorneys General can investigate if they think platforms have not truthfully or accurately explained their reasons for removing content. Both they and private plaintiffs can bring suits that require courts to examine those same questions. As I discuss in this article (at 22-29) and in examples in this blog post, and as Eric Goldman has described, this opens the door to disputes over any number of culture-war speech issues, and to state-imposed resolutions. This is not at all like the kinds of disclosures required of common carriers about things like rates, schedules, or compliance with technical standards.
Maybe these rules are so unmanageable that, at the end of the day, the only way to comply with “consistency” or “viewpoint neutrality” rules really would be to carry everything. That would be much more like common carriage, and would in some ways simplify the states’ arguments. But it would also be very far from what the laws appear to say on their face. It would also be very far from the way they are depicted in the states’ briefs or the Fifth Circuit ruling.
None of this resolves the actual question in the case: Whether states can override platforms’ editorial rights to moderate content. But since Florida and Texas keep justifying their laws on the basis of historical common carriage regimes, it is important to recognize just how different their laws actually are, and how much more state power they establish over online speech.
* The carveouts apply if content meets the following two definitions. These definitions use words similar to First Amendment standards describing content that may permissibly be prohibited by law. But Texas’s wording encompasses additional content that is presumably lawful. The law also permits removal of “unlawful expression” separately in 143A.006(a)(4), so if these provisions were solely about unlawful expression they would be redundant surplusage.
- 143A.006(a)(2): Content that “is the subject of a referral or request from an organization with the purpose of preventing the sexual exploitation of children and protecting survivors of sexual abuse from ongoing harassment[.]” This seems to allow organizations with the correct purpose to refer anything for removal, and bypass Texas’s viewpoint neutrality requirement. Perhaps the intention was instead to cover content relevant to (1) “preventing the sexual exploitation of children” (a category that sounds broader than unlawful CSAM) and (2) “protecting survivors of sexual abuse from ongoing harassment” (a category that sounds broader than unlawful harassment). The “prevent” and “protect” language seems to imply ex ante protection from expected future bad content, which in First Amendment terms sounds like a prior restraint.
- 143A.006(a)(3): Content that “directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge[.]” This covers two things.
(1) Content that “directly incites criminal activity,” a standard that somewhat resembles the Brandenburg standard for incitement to violence (but without some requirements like imminence).
(2) “[S]pecific threats of violence targeted against a person or group” based on the listed attributes, which seems broader than Brandenburg or the Court’s “true threats” standard. I think the list of targeted groups only applies to limit applications of (2), leaving users free to threaten people based on their political affiliations, homosexuality, or status as other kinds of government workers including election workers. But perhaps the list also applies to limit applications of (1).
Did the Court decide on any of this in Taamneh or Gonzalez last term?
In short, no. But Taamneh contains wording and dicta that Texas and Florida have quoted extensively in their briefs, implying that Taamneh involved relevant factual conclusions, holdings, or platform arguments that are inconsistent with their position in NetChoice. I don't think those passages in Taamneh actually support the states' position, given both the case's holding and its motion-to-dismiss posture. I also don't think some of those parts of Taamneh were accurate or supported by the record in the first place.
Of particular relevance are Taamneh's statements saying or suggesting that the defendant platforms did not engage in these very standard content moderation practices:
- Enforcing discretionary policies against lawful content.
- Proactively screening uploaded content to block prohibited material.
- Algorithmically promoting or demoting posts based on their content.
- Removing specific content, like tweets or YouTube videos.
- Deplatforming users by terminating their accounts.
This Lawfare post discusses some key differences between the facts described by the parties’ briefs and those described in the Taamneh ruling; this law review article discusses the relationship between NetChoice and Taamneh. (The article takes issue with Texas's earlier characterization of Taamneh to the Court at notes 150-153 and associated text. I did warn that this was going to get wonky.) Both of those sources also summarize what happened overall in Gonzalez and Taamneh.
What would it mean for platforms to be “consistent” or “viewpoint-neutral” in moderating user content?
Florida’s law requires that each platform apply its moderation rules “in a consistent manner[.]” Texas’s says platforms may not “censor” expression based on its “viewpoint.” There is plenty of room for dispute about what those rules actually mean. I explore some possible interpretations in the FAQ about common carriage. But here are some examples to ponder.
For questions like these, the biggest issue for First Amendment purposes isn’t which answer is right. The big issues are how platforms are supposed to know which answer is right in order to avoid liability, and what role courts and state actors like Attorneys General can and should play in establishing the answers.
- A relatively silly example: As explained in Reddit's NetChoice brief, one of the only known cases filed against a platform under the NetChoice laws involved the r/startrek subreddit—which, as the name indicates, is a forum for discussing Star Trek. The plaintiff said his rights under Texas’s law were violated when moderators removed his post calling a Star Trek: The Next Generation character, Wesley Crusher, a “soy boy.” For the forum to be sufficiently “viewpoint neutral” and survive this claim under Texas law, would Reddit and its moderators need to:
- Prohibit calling any character a soy boy? Or prohibit any statement, pro or con, on the question of whether characters are in fact soy boys?
- Prohibit calling any character by any comparable disparaging term? Wikipedia lists at least one term that I think most people would consider worse than soy boy. Will juries or judges in Texas have to decide which epithets and slang terms express equivalently strong disparaging viewpoints?
- Prohibit any slurs that denigrate people based on what they eat, so as to maintain viewpoint neutrality about dietary choices?
- Allow characters to be called soy boys only if the term is deemed accurate because the character is sufficiently wimpy, with due consideration for characters’ age, attributes, and character arc over seasons of the show?
- A weightier example: Florida says that a platform “could adopt a policy of removing content that promotes terrorism” as long as the platform does not apply the policy, “for example, to forbid content praising ISIS but allow content praising Al-Qaeda.” If that is the policy, can the platform
- Remove content posted by foreign groups that are not on the State Department’s list of designated terrorist organizations, and if so, which groups?
- Remove content posted by domestic groups that it considers to be terrorist organizations? If so, which groups? How do the Proud Boys fit in, or Antifa?
- Rely on the ADL’s Glossary of Extremism and Hate to decide which groups to consider terrorist, or would doing so itself show the platform's lack of viewpoint-neutrality?
To my mind, the difficulty of answering these basic questions about the laws’ meaning makes the laws vague. (I can’t recall if anyone has argued that they are thus void for vagueness as a First Amendment matter. Possibly NetChoice did in the Texas district court.)
A smaller wrinkle comes from statutory interpretation. As I discuss here, parts of the states’ rules could be interpreted to require consistency as between users rather than as between content posted. That doesn’t make the rule any clearer, though. It may not even make it any different, because the same expression can mean different things coming from different people, or in different contexts. For example, one user who posts the phrase “this is so queer” might mean to express gay pride, another might mean to express a slur, and yet a third might (if they are old enough, quirky enough, or perhaps British enough) just mean to say that something is odd.
What does Section 230 have to do with these cases?
Florida’s and Texas’s statutes both have provisions about when their laws will yield to federal law, or be unenforceable in light of that law. Florida’s statute specifically references Section 230; Texas’s refers more vaguely to content that platforms are "specifically authorized to censor by federal law." Both states argue that, in order to interpret their laws and apply the First Amendment, the Court must first engage with and resolve complicated statutory interpretation questions under Section 230, and reach conclusions that no lower court has endorsed. It would be quite a development if the Court turned NetChoice into a case about those statutory provisions now, having declined to consider overlapping ones last year in Gonzalez—particularly given that these Section 230 issues were not mentioned in the NetChoice cert grant, were not briefed to the Court until now, and were not considered in the courts of appeals.
The states’ arguments about Section 230, if accepted, would also make their laws more clearly content-based. That in turn would make them even more likely to be reviewed under strict scrutiny and struck down under the First Amendment. Presumably the states decided this was a worthwhile gamble. If the Court adopted every step of the states' apparent reasoning, its logic would raise similar constitutional concerns about Section 230.
Big picture: Are the platforms’ First Amendment arguments here inconsistent with Section 230?
Texas says that Section 230 was “an effort by Congress to recognize that entities like the Platforms are not speakers but conduits for their users’ speech.” That’s a common claim from the political right. Many 230 critics on the political left, by contrast, reach precisely the opposite statutory interpretation, saying that the law actually requires platforms to moderate content more actively in order to qualify for immunity.
Both interpretations are unfounded in the law’s text, history, or judicial interpretation to date. Congress enacted Section 230 specifically to encourage platforms to adopt and enforce editorial policies. As the law’s drafters explained in an amicus brief, it is entirely consistent for platforms to claim 230’s immunity and also assert First Amendment rights to apply their own editorial standards. Legislative creation of an immunity or legal defense would not, in any case, take away First Amendment rights.
Charges of inconsistency between First Amendment rights and 230 immunities often seem primarily grounded in policy objections. Critics charge that the two shouldn’t co-exist as a matter of fairness, or that they are logically incoherent. That’s a different question from anything raised in NetChoice, though. And the question of which mix of freedoms and liabilities for platforms is optimal as a public policy matter is actually phenomenally complicated. My quick take on the high-level questions presented is in op-ed form here and in more academic form (at p. 135) here. Some really interesting economic modeling of platforms’ likely behavior under varying regimes is here.
Some of the states' arguments imply that legal analysis of speech and immunity issues for ranking is meaningfully different from that for hosting, because ranking is the platform's own act.
- Section 230 and ranking: In Gonzalez, plaintiffs argued that platforms’ ranking algorithms were not immunized. The Court did not resolve this issue. Section 230’s drafters filed a brief saying that when Congress used Section 230 to encourage platforms to moderate content, that obviously included organizing it. (In fact, that verb appears in the definition of immunized entities in Section 230.) My own brief with the ACLU emphasized that hosting content without sequencing it would render many platforms useless. Imagine, for example, if YouTube’s homepage simply showed the most recently uploaded videos, regardless of topic, language, quality, popularity, or relevance. Florida seemingly now agrees with that reasoning. It says that the quintessential act immunized by Section 230—hosting user speech on the Internet—necessarily “requires organizing it.” Texas, by contrast, filed an amicus brief in Gonzalez supporting the plaintiffs. It argued that by using ranking algorithms to organize and display content, YouTube “went beyond passively hosting” and stepped outside of Section 230’s protections.
- The First Amendment and ranking: There is no real doubt that, under Supreme Court case law, selecting and arranging third party content can be First Amendment-protected expression. At the same time, communications carriers' First Amendment rights can sometimes be overridden. Questions about ranking in this context are not that new. One issue in must-carry cases about cable carriers involved their power to "reposition" channels into less favorable positions, where audiences were less likely to find them. Eugene Volokh, who provides academic support for some potential variants on must-carry laws, has suggested that platforms’ ranking decisions warrant stronger First Amendment protection than their hosting decisions (i.e. what content to leave up or take down). I’m not sure I agree with that. But since Texas and Florida regulate both ranking and hosting, it is not a distinction of much consequence in NetChoice.
In interpreting the states’ laws, why would the Court need to interpret the language of Section 230?
Florida’s law may, by its terms, “only be enforced to the extent not inconsistent with federal law and 47 U.S.C. s. 230(e)(3),” which is the part of Section 230 saying that the statute preempts inconsistent state laws. Texas’s law doesn’t mention Section 230, but does exclude from its viewpoint-neutrality rules any material that platforms are “specifically authorized to censor by federal law.” Florida’s brief in particular goes into detail in asserting that the Court’s First Amendment analysis should turn on its interpretation of Section 230.
What questions about Section 230’s statutory language do the states want the Court to answer?
Florida’s argument builds on an interpretation of Section 230(c)(1) and (c)(2)(A) that has been popular for several years on the political right—including in then-President Trump’s executive order and subsequent petition for FCC rulemaking about Section 230. (For a sense of the legal snarl involved: that rulemaking petition generated over a thousand comments to the FCC.) I will try to compress the issue here, hopefully without making it too hard to follow.
Depending on which interpretation of Section 230 one accepts, the major (and diverging) potential consequences for Florida’s must-carry provisions might be:
- Platforms can enforce any content moderation rules they want. Under this interpretation of the law, which applies Section 230(c)(1), Florida’s must-carry rules would be basically without effect. Applying 230(c)(1) for this purpose is supported by textual analysis of the statute in cases like Barnes v. Yahoo, Domen v. Vimeo, and Murphy v. Twitter. As far as I know, only one case, e-ventures v. Google, rejects that reasoning and holds that only 230(c)(2) is relevant to must-carry cases, though Justice Thomas’s Malwarebytes opinion also supports that narrower view.
- Platforms can enforce any rules as long as they act in “good faith.” This is the standard that would apply if courts decided that only Section 230(c)(2)(A) acts as a bar to must-carry claims, but that platforms can decide for themselves what content is “objectionable.” That section immunizes platforms for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
There’s not a ton of precedent on the meaning of the “good faith” language. Perhaps that is because courts prefer relying on 230(c)(1) as a simpler rule and one that avoids making judges the arbiters of fraught questions about lawful online speech. But, for example, the 9th Circuit has said that excluding content based on anti-competitive animus is not good faith moderation. (Though weirdly, it did so in a case under Section 230(c)(2)(B), which does not repeat the wording of 230(c)(2)(A).) Florida seemingly tries to expand the range of suits that might overcome Section 230(c)(2)(A) by stating in its law’s findings that when platforms “unfairly” moderate content, they are “not acting in good faith,” and that platforms have already “unfairly censored” Floridians in the past. Litigation over terms like “good faith” or “unfairness” in speech moderation would make judges the arbiters of exactly those fraught questions about lawful online speech.
- Platforms can moderate content only on the grounds that it is sexual, violent, or harassing. This is the interpretation of Section 230 that both Texas and Florida advanced below, and that the Trump administration adopted. It interprets the categories of “objectionable” content in Section 230(c)(2)(A) as limited to the kinds of material enumerated in the statute. This read of the statute has not been accepted by any court. Before the Supreme Court, Florida cites 230(c)(2)(A) to argue that platforms could remove pornography and graphic violence. Following this read of the statute, platforms could seemingly be prevented from moderating even when the content at issue violates laws against things like fraud, drug sales, non-violent material support of terrorism, non-harassing defamation, civil rights offenses, or copyright infringement; as well as lawful but objectionable content not enumerated in Section 230, like hate speech or disinformation.
Florida's choice to advance this last interpretation of its law—saying the Florida statute allows platforms to freely remove content in the categories enumerated in Section 230, but not other content—is arguably a self-own. It makes Florida's law more clearly content-based, which should trigger strict scrutiny. On the other hand, if the Court accepted this reading of Section 230 (which, again, would be remarkable in this First Amendment case), it would create constitutional difficulties for Section 230 itself, as well.
Texas’s brief gestures to statutory interpretation questions about Section 230 that are even deeper in the weeds.
- It implies that immunity is available only “for the purpose of defamation and similar torts[.]” That argument about Section 230 came up in Gonzalez, too, but lower courts have consistently rejected it.
- It raises questions about the scope of the carve-out under §230(f)(3) for platforms that are “responsible, in whole or in part, for the creation or development” of unlawful content. Texas appears to argue that platforms lose immunity if they host content from multiple sources and “combin[e] multifarious voices,” which would seem to render Section 230 itself a nullity for platforms with more than one user.
Blake Reid discusses these and other 230 interpretation questions in relation to NetChoice here (that article has since been updated, but the older draft is more focused on NetChoice).
What does Florida’s law actually say?
Florida’s law is hard to parse. I have a Google doc tracking easy-to-misunderstand provisions here. Every time I go back to the statute, I find something new—or sometimes revise my previous understanding—so I’m keeping this in the doc, where it is easy to update.
If the platforms win this case, will lawmakers be unable to regulate platforms with rules about things like privacy, discrimination, or competition?
COMING SOON
I hope to post more NetChoice FAQs soon, on questions such as these:
- Don’t these laws just make platforms show people what they want to read?
- Is this a case about political debate or about Nazis and terrorists?
- Are the notice and appeal rules just basic consumer protection measures?
- How do the NetChoice cases relate to Murthy v. Biden, the case this term about informal “jawboning” pressure by governments for platforms to remove content?
- How do the NetChoice cases relate to Lindke and Garnier, the cases this term about users’ First Amendment rights to follow and engage with lawmakers on social media?
- What parts of these statutes will the Court actually review?
- Which platforms do these laws actually regulate?
- Is this case like Turner or is it like Denver Area?
- Does Florida have it in for eCommerce sites?