FAQs About The NetChoice Cases at the Supreme Court, Part 2

This is the second in a hopefully finite series of blog posts about the legal issues in the NetChoice cases, in which platforms raise First Amendment challenges to social media laws in Texas and Florida. The first post, which includes some more basic questions, is here. Since publishing it, I've put out another Lawfare post exploring in more depth the idea that the states' laws might somehow make platforms carry important speech without also making them carry what the Fifth Circuit called "vile" speech.
This FAQ was written hastily, and has more risk of typos and errors than usual. If I find those, I will correct them. (With the likely exception of the changing font size. Sorry. Fixing that requires manually altering the HTML.) 
Here are the questions for this round:
  1. Who will win?
  2. What will platforms do if they lose?
  3. Does the Court "get" the legal issues in these cases?
  4. Don’t these laws just make platforms show people what they want to read?
  5. Is this case about making platforms carry literal Nazis and terrorists, or about making them permit all viewpoints in a democratic debate?
  6. Does Florida have it in for eCommerce sites?
  7. Still to come (maybe)


1. Who will win?

I talk to lots of smart people who think the platforms will win, full stop. I wouldn't be surprised by a more mixed result. Possibilities along those lines include:

  1. Platforms win on must-carry but lose on the notice and appeal rules. There's not a particular doctrinal reason for that to happen. But at the level of optics it would look even-handed, and deal a legal blow to platforms to satisfy popular demand and "balance out" last term's cases.  
  2. Staged resolution, like in Turner. The Court could turn this into a staged process, by announcing a new standard for when and how states may override platforms’ editorial rights (a setback for platforms), and then remanding for further proceedings applying that standard in lower courts (where platforms might or might not still win). That is basically what happened with cable must-carry rules in Turner. In 1994 the Court identified state interests that might justify the rules. After a remand back to lower courts, the Court ultimately upheld the law in 1997. That resolution sounds sensible and orderly (and could follow logic suggested in this article by Kyle Langvardt and Alan Z. Rozenshtein). But it would be a major shift in the law, for better or worse. And because this is a state law case, not a fight about FCC rules, whatever the Court said would provide a blueprint for an unpredictable and likely chaotic next round of state must-carry laws and litigation.
  3. A confusing outcome and few clear rules, like in Denver Area. The 1996 Denver Area ruling, about cable carriage mandates, produced six opinions spread out over some 120 pages. Discerning majority-supported rules is like doing a logic puzzle for the LSAT. I actually love Denver Area, but an outcome like that in NetChoice would be rough.


2. What will platforms do if they lose?

Platforms could lose the NetChoice cases a number of ways. Most dramatically, the Court could simply uphold both laws. Or it could uphold one (probably Texas's), or grant a partial or staged loss in the manners discussed in the previous FAQ. Here are some things that could happen next.

Fighting the laws on other grounds: In any of these scenarios, the platforms would still have other arrows in their quiver. Their initial challenges to the Texas and Florida laws also raised two other arguments, under the Dormant Commerce Clause and the federal immunity statute known as Section 230. Those arguments are mostly parked back in the lower courts, but platforms could try going back to preserve their existing injunctions or seek new ones based on them. Platforms won on the Section 230 argument in the Florida district court (with fairly brief analysis) and the 11th Circuit didn’t address it. In Texas, the district court did not address 230. The 5th Circuit said platforms had forfeited the argument—rather dubiously, in my opinion, but I am not sure if or how platforms would seek review of that decision. That creates yet another thing to fight about, in this scenario. No court below ruled on the Dormant Commerce Clause issues.

Withdrawing service from Texas and Florida: In principle, platforms could decide to geoblock Texas and Florida and just stop offering service there, rather than complying. That is complicated for a ton of reasons. It would involve closing offices, terminating a lot of contracts, and forgoing a lot of revenue. It’s not clear how well the geoblocking would actually work, particularly for users on mobile devices, given technical limitations. And under Texas's law at least, arguably platforms aren’t allowed to geoblock, since the law prohibits discriminating based on “a user’s geographic location in this state.” (So… platforms can’t offer the service they want, and can’t withdraw from the market either?) Texas’s law seems to require compliance outside of Texas, in any case. A claimant who merely “does business in this state” and accesses platforms from another state (or country, perhaps) can still sue platforms for not following Texas's rules.

Unleashing a tide of Internet garbage on Texas and Florida: Platforms could comply by just opening the firehose and sending Texas and Florida the unmoderated glut of spam, hate speech, pro-anorexia and pro-suicide content, misinformation, and other harmful or offensive online speech that their lawmakers asked for. I am 100% certain that they all fantasize about doing this, even if they won't actually pull the trigger. Unleashing this material would be bad news for actual vulnerable people in Texas and Florida. It would also antagonize advertisers, and perhaps also violate platforms' contracts with them, content licensors, and other business partners. And it could expose platforms to liability under other laws—including trafficking and prostitution claims of the sort the Texas Supreme Court has said are not immunized under Section 230, and conceivably even child protection laws like the one Texas passed but is currently enjoined from enforcing. (I discussed the problems with such "simultaneous must-carry and must-remove obligations" here at page 141.)

Actually trying to comply: This would be an enormous, expensive, high-stakes guessing game, with an overlay of strategic positioning in anticipation of (lots and lots of) litigation. I've been through a version of that and it's actually kind of fun for the lawyers. But from a public interest or constitutional perspective, it's a bad way for laws to work. No one knows what the "viewpoint-neutrality" rule in Texas or the "consistency" rule in Florida actually mean. (I wrote about this and some other NetChoice FAQs here.) The rules seem to allow platforms to decide what topics users can talk about; or as Texas said in its brief, platforms can decide to “block categories of content, such as violence or pornography.” Discerning which speech rules are “topic-based” or “category-based” and which ones import impermissible viewpoint bias sounds like fodder for endless litigation and disagreement. I’ve speculated elsewhere that platforms might do things like shutting down all discussions of race or climate change. The other more content-specific carriage mandates and exceptions in the laws, like Florida’s rules for journalism and speech by or about candidates, are hard to parse for separate drafting-based reasons, and open the door to a separate strain of litigation. Finally, the requirements to notify users about content moderation actions, and in Texas to allow them to appeal, would add a major new administrative burden for platforms, with its own extensive opportunity for litigation. Individual plaintiffs in Florida can win $100,000 by arguing that a notice is not sufficiently “precise and thorough,” which provides a strong incentive to litigate early and often. This may lead to more rapid judicial clarification of the rules, a great deal of expense and chaos, or both.

3. Does the Court "get" the legal issues in these cases?

I’m not sure if anyone fully grasps the issues in these cases. I've been thinking and writing about platform must-carry rules for years, and I still learn new things or discover new angles regularly. The specific Texas and Florida statutes add to the complexity, generating so many overlapping questions that many relevant aspects of the states’ laws were barely addressed in briefs or in rulings below. (For example, Texas has a whole regime requiring platforms to accept and respond to user notices alleging that content is illegal—a mandate that may be part of the questions the Court agreed to review. In another year, that provision alone could have generated major litigation leading to Supreme Court review. In this case, it is barely a footnote.) The sheer complexity of the Court's task would be exacerbated if it agreed with states' insistence that any given provision in their laws is severable, and can stand even if other portions are struck down. But even the most fundamental legal questions about imposing must-carry mandates on platforms are in some ways quite novel and untested.

That makes these cases very different from last term’s Gonzalez and Taamneh cases. Parties and (especially) amici in those cases asked the Court to rule on a long list of legal issues, many of which had not been litigated in the cases so far. There was plenty of risk of chaotic outcomes if the Court accepted those invitations. But at least most of those issues had a meaningful history of litigation, scholarship, and public and legislative debate, often going back decades. By contrast, the issues in NetChoice simply have not had time to be the subject of all that much serious analysis. Relevant case law about older media like broadcast, cable, or newspapers is well understood. But efforts to sort out how all of that might apply to the Internet are new.  

To illustrate how fast the public discussion on this has evolved, when I was writing my 2019 article on must-carry laws, most U.S. tech law and policy experts I talked to had literally never thought about the possibility that laws like this could exist. I had a hard time explaining to many people why I thought the subject mattered. (Obviously some specialists had given this thought, including Eugene Volokh and Donald Falk in 2012. That number increased as people like Steve Bannon started talking about platforms as utilities; and as Facebook's French litigation over its removal of an anatomically detailed Gustave Courbet painting picked up more U.S. media coverage.) Outside the academy, there has been very little public debate on the topic beyond rhetoric, no legislative process outside of Texas and Florida, and no case law reviewing targeted legislation prior to 2021. In 2019, the first year I covered must-carry issues in my Stanford class, the topic took up just a small part of one day’s discussion. By 2021 it occupied two full class days. This term it was the focus of an entire exam question. By contrast, students in specialized courses over the past twenty-five years could have learned about many of the content liability issues in Gonzalez and Taamneh.

The specific laws at issue in the NetChoice cases also developed very, very fast. Lawmakers in Texas and (even more so) Florida didn’t do that much in the way of meaningful legislative fact-finding, or pause for thorough deliberative process in devising their statutory rules. (For comparison, the cable must-carry law from the Turner cases was the product of three years of congressional hearings.) And we have had nothing like the kind of judicial tire-kicking in lower courts that usually ripens new questions for Supreme Court review. 

The transparency or notice and appeal questions in NetChoice are even more novel and untested. I think the First Amendment issues raised by the notice rules are significantly more complex than those raised by the must-carry rules. The notice and appeal mandates also bear far less similarity to any laws that were addressed in prior First Amendment litigation. There just aren't that many cases about ongoing, industrial-scale disclosure of editorial decisions. I scrambled to write a constitutional analysis of the NetChoice transparency and notice rules in a law review article, but I think it and the few other publications on the topic barely scratch the surface. The Court's resources for assessing this part of the case are slim. The parties and courts barely addressed the notice and appeal provisions in the proceedings below. At the 11th Circuit, for example, the platforms spent one page of their 67-page brief on transparency. The lower courts understandably followed the parties' lead, offering very quick assessment of these questions. A handful of amicus briefs to the Supreme Court speak to the issues, and often do a good job. But there is only so much they can elucidate, particularly given the degree of focus on the Zauderer standard.   

In the 2023 oral arguments for the Gonzalez and Taamneh cases, the justices signaled quite a bit of caution about not being, in Justice Kagan’s words, “the nine greatest experts on the Internet.” All nine of them asked very thoughtful questions, and the Court’s ultimate rulings were narrow. I have quibbles with the outcome, but the overall approach was wise and, well, judicious. We should hope that the justices' stockpile of patience and caution has not been exhausted. Those qualities are likely to be even more important for NetChoice.


4. Don’t these laws just make platforms show people what they want to read?

Texas says in its brief to the Supreme Court that its law “just enables communication between willing speakers and willing listeners[.]” Florida similarly says that the “choice of what content can be viewed is driven by the user.” It goes so far as to claim that its law, which restricts platforms from “inhibit[ing] the ability of a user to be viewable by or to interact with another user,” serves “the State’s interest in empowering consumers to decline to receive unwanted material.”

That is not what either state’s law actually does. The idea that the Texas and Florida laws leave users free to choose what content they will see is thoroughly debunked in a great brief filed by Internet and First Amendment law scholars, building on work by James Grimmelmann. It calls the two laws “the most radical experiments in compelled listening in United States history.” As the brief explains, users seeking the speech that they do want to hear must, under the states’ laws, first sort through a welter of other speech that “the users do not want and are affirmatively trying to avoid.” The brief reviews at length the First Amendment precedent protecting listeners’ rights to be free from such “state-compelled listening.”

The states’ arguments build on some explicit or implied premises that fall apart on examination.

Argument #1: These laws don’t require platforms to carry anything. Maybe technically that’s true in Texas (though not Florida). But both states effectively require platforms to carry content against their will if they want to continue offering anything like their current services.

Argument #2: Users choose who to follow on social media, so they will only see what they want. Florida argues that the “choice of what content can be viewed is driven by the user,” who can control their experience, “for example, by subscribing to newsfeeds that display content on chosen topics [or] ‘following’ or ‘friending’ people whose content they want to view[.]” This is often an appealing argument to people whose mental model of a platform comes from Facebook or Twitter ten years ago. But it’s inaccurate as a description of most users’ experience today on most Internet platforms covered by the laws. Uber, Etsy, and YouTube are all covered by Florida's law, for example, according to its brief. Seeing only content from accounts that you “follow” on those services would largely defeat the purpose of using them at all. And while Florida suggests that users also choose what to see by searching, which might be more relevant for those services, Florida's law ensures that search results on those services will be shaped by state mandate, not user choice.
Even Facebook and X/Twitter mostly don’t work this way anymore. Twitter, for example, expanded its feed to include content from non-followed accounts a decade or so ago. The company instead intersperses novel content and accounts, designed to help users discover new interests (and maintain engagement), in the ranked feeds that users generally seem to prefer. The users who really do prefer seeing only followed accounts tend to come from the small but vocal minority of people who are happier with a reverse-chronological feed, which major social media platforms already offer. There’s nothing wrong with that, but it doesn’t work for everyone, and it is not the way social media works today for most people. When users follow a lot of accounts (as I do), chronological order just generates what Benedict Evans called a “random sample, where the randomiser is simply whatever time you yourself happen to open the app[.]” Chronological feeds also introduce a number of problems that ranked feeds are better able to avoid.

Argument #3: The statutes have carve-outs to enable meaningful user choice. Texas says that its law “specifically allows platforms to facilitate user choice as to what they want to hear and from whom, thus ensuring that no one is forced to hear anything they would rather not[.]” Presumably that claim refers to Section 143A.006, which says that platforms are not restricted from “authorizing or facilitating a user’s ability to censor specific expression on the user’s platform or page at the request of that user.” Given that wording, the provision is largely useless as a source of user control. It doesn’t let users choose “from whom” they will hear, as Texas claims, because it doesn’t let them block accounts, just specific expression. That leaves users with no protection against, say, a harasser who keeps saying new and different things. Blocking "specific expression" might mean at best that platforms can offer users crude and error-prone text filters to block pre-determined “specific” offensive terms. That's nothing like the more effective user empowerment options offered by apps like Block Party and other "middleware" tools, or even by platforms themselves.

Florida makes a similar claim that its law lets users “decline to receive unwanted material” by opting out of algorithmic ranking, under Section 501.2041(2)(f). Of course, that would force users who want to discover new accounts without navigating a barrage of unwanted dreck or who simply like ranked feeds to give up on finding the speech they are actually interested in. As the district court noted, this provision also seems on its face to let speakers override listeners’ preferences, by insisting that their posts “be shown to other users in chronological order[.]”


5. Is this case about making platforms carry literal Nazis and terrorists, or about making them permit all viewpoints in a democratic debate?


The NetChoice cases are about all of those things at once, as I discuss in a Lawfare post here. Any serious account of the laws’ impact, and constitutionality, needs to grapple simultaneously with their effect on “important” speech as well as the full panoply of “lawful but awful” speech the laws unleash and impose on unwilling listeners. 

It may well be that the laws' drafters only wanted to make platforms carry what the Fifth Circuit called “political, religious, and scientific dissent[.]” Perhaps legislators never intended to make them carry content from “terrorists and Nazis” or other material that the Fifth Circuit called “vile expression[.]” But the actual statutes they enacted don't achieve that outcome, or even try. They make no general distinction between important speech (which must be carried) and "vile" speech (which may be taken down). Nor is there any reason to believe that platforms attempting to comply with the laws would or could make distinctions like that. If the statutes did differentiate between important and “vile” speech, they would have even bigger First Amendment problems than they do now.  

A. But weren’t platforms enforcing politically biased rules and unfairly censoring conservative speech?

They might have been. Maybe they still do. It’s very hard to tell from the outside, though everyone has their own preferred anecdata on this point. But these cases are not about what platforms did or did not do. They are about what lawmakers did. The Court is assessing whether these specific statutes violate the First Amendment. What matters is what the statutes say.  

B. Did the older must-carry cases that Texas and Florida rely on force intermediaries to carry Nazi or terrorist speech?

This is kind of a deep-cut question, and it is debatable how or whether the answer matters. If that’s not your thing, you should probably skip it.

Older laws about common carriers like phone companies really do require them to convey plenty of “lawful but awful” material. If the Texas and Florida laws imposed common carriage obligations of that sort (which they don’t), then cases about those older carriers could be important. Those cases are generally about consensual one-to-one communications, though – not about services like TikTok or YouTube that “broadcast” content to a potentially unlimited audience. Outside of the snarly net neutrality context, those cases are also mostly about illegal content, are pretty old, and come from lower courts.

The Supreme Court must-carry cases that more clearly govern NetChoice, and that are emphasized throughout the party briefs, are interestingly different. Almost all of them concern politely expressed speech on important topics—like gay rights groups participating in a parade, military recruiters participating in on-campus job fairs, or activists gathering signatures about the UN’s policy toward Israel in a shopping mall. A whole paper could be written on whether these attributes of the speech matter to the rulings, and why, and how that squares with other First Amendment principles. But in any case, while cases about laws restricting speech often involve really ugly material, the Court has almost never had to weigh in on laws that made carriers show such highly offensive speech to the general public. The exception – the remarkable 1996 ruling about public and leased access cable channels in Denver Area – was a jurisprudential catastrophe, with six opinions and very few agreed rules.

Here is a quick and simplified overview of the key cases with an emphasis on the content at issue. Again, if you are a casual reader, I would consider skipping all this!

  • Turner: Congress can make cable companies carry local broadcast TV content—a category which itself is regulated to limit content ranging from pornography to profanity to hoaxes or falsities about crimes or catastrophes.
  • PruneYard: California lawmakers can make a mall owner tolerate activists’ polite signature-gathering on the premises. The speakers in the case were “high school students who sought to solicit support for their opposition to a United Nations resolution against ‘Zionism[,]’” and their “activity was peaceful and orderly, and, so far as the record indicates, was not objected to” by mall patrons. The students relied on a California state constitutional provision that itself sounds somewhat limited. It establishes a right to engage in "speech and petitioning, reasonably exercised[.]” (emphasis added)
  • FAIR: Congress can condition funding on schools permitting military recruiters to come to campus. The Court emphasized the importance of military recruitment, and its own general deference to Congress in this area.
  • PG&E: A public utilities commission cannot require a utility company to include in its billing envelopes messages from an organization critical of the utility’s practices.
  • 303 Creative: Colorado public accommodations laws cannot require a web designer, who opposed gay marriage on religious grounds, to create websites for gay weddings.  
  • Hurley: Massachusetts public accommodations law cannot require a parade operator to include a gay rights organization in its parade.
  • Tornillo: Lawmakers cannot impose a “right of reply” rule requiring a newspaper to publish a candidate’s responses to editorials critical of his candidacy.
  • Denver Area: This cable case is fascinating but very hard to analyze as precedent given both subsequent rulings and the confusing mix of concurrences and dissents in the case itself. It considered the First Amendment rights of video content creators (analogs of Internet users) whose speech appeared on public or leased access channels operated by cable companies (analogs of platforms). These "users" challenged federal laws that imposed new rules on the "platforms" managing their content. The cable companies themselves were not parties, and the ruling was not about their speech rights. In six overlapping opinions, the Court struck down a rule requiring carriers to segregate patently offensive sexual content, as well as a separate rule permitting them to exclude sexually explicit content or material soliciting or promoting unlawful conduct. It upheld a third rule permitting carriers to exclude similar content on leased access channels.  
  • Halleck: Community activists whose cable show criticized the private vendor operating Manhattan’s public access cable channel were kicked off the channel by the vendor. The Court holds that they have no First Amendment claim against the vendor.
  • Marsh: The corporate owner of a company town assumed First Amendment obligations, and violated a speaker’s rights by refusing to let her distribute religious literature outside the town’s post office.


6. Does Florida have it in for eCommerce sites?

Florida’s brief twice emphasizes that its carriage mandates are particularly justifiable for “e-commerce sites like Etsy and eBay.” That seems… weird and funny, mostly. It is hard to see why the state would have more power to require platforms to carry offensive, harmful, or irrelevant material in forums mostly used for shopping, as opposed to forums mostly used for speech. Florida’s legal hook here is the PruneYard case, which said California could force a mall owner to let students gather signatures on a political issue. But malls really were seen as the “modern public square” in their heyday, while I don’t think anyone seriously thinks of eBay or Etsy that way. Florida’s use of its scant word count for this purpose seems like a strange choice tactically. Maybe there is some culture war skirmish to blame here, much as there was when the state originally carved Disney out of the law’s ambit, then put it back in based on disagreement over “don’t say gay” laws. Or maybe there is a better argument than I’m seeing.

7. Still to come (maybe)

If I manage to put up a third FAQ, it will finally speak to the much-neglected notice and appeal rules. As I explained in this longer article, I think those and other transparency rules actually raise questions at least as important and complex as the must-carry mandates. But they have been addressed only shallowly in rulings and party briefs to date. I will also try to get to:

  1. How do the NetChoice cases relate to Murthy v. Missouri, the case this term about informal “jawboning” pressure by governments for platforms to remove content?
  2. How do the NetChoice cases relate to Lindke and Garnier, the cases this term about users’ First Amendment rights to follow and engage with lawmakers on social media?
  3. What parts of these statutes will the Court actually review?
  4. Which platforms do these laws actually regulate?


