
FAQs about the NetChoice Cases at the Supreme Court, Part 1

By Daphne Keller

The Supreme Court is about to review a constitutional challenge to two unprecedented and very complicated laws regulating social media. The laws were enacted by Texas and Florida in order to counter “censorship” and alleged anti-conservative bias of major Internet platforms like Facebook or YouTube. Both laws have “must-carry” rules that restrict platforms’ ability to moderate content under their preferred editorial policies, and “transparency” rules including requirements for platforms to notify users when their posts have been moderated. In the NetChoice cases, the Court agreed to review just one of the many questions the cases present: whether the laws violate the First Amendment. 

This is an FAQ to explain some basics about the cases, and address some fairly complicated questions that students, reporters, and others have asked me. I will add more entries over the coming weeks. The FAQ is mostly focused on questions that might not have obvious answers, or that are particularly hard to understand, or that I suspect have fallen through the cracks and not been addressed enough in parties’ and amici’s briefs to the Court. In other words, this will get pretty wonky.

I am very focused on these cases because they bring together questions I have worked on for two decades, first as a lawyer for Google, where I led the legal team for web search, and then as an academic at Stanford since 2015. I think the platforms should win in NetChoice. But I also think that both must-carry laws and transparency laws are important and far more complicated, as a matter of both policy and constitutional law, than these cases might suggest. NetChoice should be an easy case, because the Texas and Florida laws were so badly designed and drafted. But that shouldn’t be the final word on either topic. For more on my own positions: here is my take on must-carry issues generally, and here is the amicus brief I filed in NetChoice with Jack Balkin and the Yale MFIA clinic on behalf of Francis Fukuyama; here is my Senate testimony about platform transparency generally, and here is a detailed constitutional analysis of the Texas and Florida transparency rules.

With that, on to the FAQs!

 

BASICS

  1. What do the laws say?
  2. Who sued whom for what, and then what happened?
  3. What do we know about the Justices and these cases?

 

THORNIER QUESTIONS

  1. Are these cases about discrimination?
  2. Do these laws actually impose common carriage mandates?
  3. Did the Court decide on any of this in Taamneh or Gonzalez last term?
  4. What would it mean for platforms to be "consistent" or "viewpoint-neutral" in moderating user content?
  5. What does Section 230 have to do with these cases?
  6. What does Florida’s law actually say?
  7. If the platforms win this case, will lawmakers be unable to regulate platforms with rules about things like privacy, discrimination, or competition?

 

QUESTIONS I HOPE TO EXPLORE IN LATER POSTS

  1. Don’t these laws just make platforms show people what they want to read?
  2. Is this a case about political debate or about Nazis and terrorists?
  3. Are the notice and appeal rules just basic consumer protection measures?
  4. How do the NetChoice cases relate to Murthy v. Biden, the case this term about informal “jawboning” pressure by governments for platforms to remove content?
  5. How do the NetChoice cases relate to Lindke and Garnier, the cases this term about users’ First Amendment rights to follow and engage with lawmakers on social media?
  6. What parts of these statutes will the Court actually review?
  7. Which platforms do these laws actually regulate?
  8. Is this case like Turner or is it like Denver Area?
  9. Does Florida have it in for eCommerce sites?

 

BASICS

1. What do the laws say?

Both laws are quite long, and include provisions that are not at issue in the cases. My informal annotated copies of the laws are here and here. The parts at issue in the cases are (1) so-called “must-carry” rules that restrict platforms’ ability to moderate content, and (2) “transparency” or “notice” rules that require platforms to notify users about moderation actions and (in Texas’s case) allow the users to appeal the platforms’ decisions. There are some edge questions about which parts of the transparency laws are in scope for review. 

Florida’s must-carry law has special rules limiting platforms’ ability to moderate content posted by so-called “journalistic enterprises” and content “by or about” political candidates. It also requires that all moderation be carried out “consistently,” though it’s unclear exactly what that means. The Florida law was hastily written and can be hard to understand. On a close read, many parts don’t say what they might initially appear to say. Florida argues that the law actually contains important exceptions to its must-carry rules, based on its interpretation of the federal platform immunity statute known as Section 230. The only specific exception spelled out in the law allows platforms to moderate “obscene” content if it comes from journalistic enterprises.

Texas’s must-carry law prohibits platforms from moderating content based on its “viewpoint.” (That, too, could mean a lot of different things.) It lists specific exceptions, including allowing platforms to moderate unlawful material, as well as certain other specified highly offensive or harmful material, regardless of its viewpoint. Texas could make the same argument that Florida does about the law incorporating exceptions from Section 230, but I think its briefs have been comparatively vague about this topic so far.

2. Who sued whom for what, and then what happened?

Two trade associations representing platforms, NetChoice and CCIA, brought the cases. They prevailed in the case against Florida at the district court and again at the 11th Circuit. In Texas, platforms won at the district court but lost at the 5th Circuit. Platforms’ initial pleadings raised three big claims based on (1) the First Amendment, (2) Section 230, and (3) the “Dormant” Commerce Clause, which sometimes limits states’ ability to regulate issues affecting interstate commerce. Of these claims, only the First Amendment arguments were accepted by the Supreme Court for review.

For those who want more detail on the cases below: 

3. What do we know about the Justices and these cases?

Clarence Thomas has been the most outspoken on relevant topics, strongly expressing concerns about major platforms’ power over public discourse. He has issued two procedurally unusual opinions on the topic. (Unusual in that they did not accompany cases resolved by the overall Court, but were instead published as concurrences to pro forma dispositions of cases the Court did not hear.) The most relevant for NetChoice was issued in relation to the Knight case about users’ First Amendment rights to access Trump’s Twitter account, which raised questions that are being separately reviewed by the Court this term in the Lindke and Garnier cases. In that opinion, Justice Thomas discussed possible sources of law for imposing must-carry obligations on platforms, including common carriage and public accommodations law. His other opinion, in Malwarebytes, was more focused on Section 230's statutory immunities. Thomas was also the author of last term’s Taamneh ruling, which contains language relied upon heavily (and I think inappropriately) in Texas’s and Florida’s briefs.

Justice Alito wrote a brief opinion, joined by Justices Thomas and Gorsuch, dissenting from the Court’s decision to temporarily keep the Texas law from coming into effect. His opinion expressed sympathy for many of Texas’s arguments. Justice Kagan also dissented, perhaps because of her objections to use of the Court’s “shadow docket” for such decisions.

Justice Kavanaugh issued a highly relevant dissent in a net neutrality case while he was on the D.C. Circuit Court of Appeals. He would have allowed Internet access providers to proceed with arguments that the rules violated their First Amendment rights. In explaining his reasons, Justice Kavanaugh expressed reservations about the government’s ability to “regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter.” (The opinion is long; the excerpt I use for teaching is here.)

 

THORNIER QUESTIONS

Are these cases about discrimination?

These are cases about whether platforms can “discriminate” against certain messages posted by users. But they are not about the kind of “discrimination” at issue in civil rights cases that consider things like desegregation. In their final briefs, the states argue that cases about that second kind of discrimination—discrimination against people based on characteristics like race, gender, religion, or sexual orientation—support their laws. I think this is plainly wrong, and mostly just an attempt to muddy the waters or make the mandates sound more like traditional common carriage laws. My analysis is in this Lawfare piece.

Do these laws actually impose common carriage mandates?

Texas and Florida argue extensively that their rules for online speech are justified on the basis of older “common carriage” laws. The appeal of that argument is obvious. Major platforms really do control important communications channels, and common carriage laws do sometimes apply to communications resources. But the legal foundations for their arguments are dubious. And in any case, the rules that Texas and Florida impose depart in many important ways from traditional common carriage mandates. 

Many briefs in the case focus on whether platforms are common carriers, or could be legally treated as common carriers. But those questions mostly only matter if the laws at issue in NetChoice actually impose common carriage obligations. I think they do not. Instead, the Texas and Florida laws impose new forms of state power over online speech, in ways that cannot be justified in the name of “common carriage.”

What are common carriage laws? Common carriage laws have generally obliged entities like railroads, stagecoach companies, or telegraph operators to offer the same service to all customers (or all who are paying for the same service, like first class or coach tickets). Laws like this, which ensure the availability of basic and shared foundations for other human endeavors, were and still are incredibly important. They ensure that new businesses can count on being able to ship products and communicate with buyers or sellers; give families and friends the means to stay in touch and visit; and let political organizers count on their newsletters and phone calls getting through. Net neutrality laws for Internet access providers look a lot like traditional common carriage (a number of amicus briefs in NetChoice talk about that). One compelling illustration of common carriage laws' importance can be seen in the history of Black consumers in the Jim Crow South ordering from the Sears catalog, in order to bypass local merchants who wouldn’t serve them. That might never have happened without laws ensuring that both catalogs and goods would be delivered.

When do common carriage laws apply? The law on this is well covered in party and amicus briefs, so I won’t attempt to add to it here. Public Knowledge, TechFreedom, Christopher Yoo, and more speak to this on the platform side, as do Adam Candeub and others on the states’ side.

Did Texas and Florida enact common carriage laws? Not unless we give a whole new meaning to the words “common carriage.” Of course, there is no completely fixed definition for that term: as Blake Reid has detailed, the rules for historical common carriers varied widely. So did the legal justifications for imposing those rules, and the consequences for the kinds of First Amendment questions raised in NetChoice. But generally we can think of common carriage laws as requiring carriage of just about everything. (Or perhaps everything that isn’t illegal, or everything that doesn’t disrupt service for other customers.) In NetChoice, Texas and Florida are at pains to explain that this is not what their laws require. Texas insists that the platforms’ examples of being compelled to carry lawful but “vile” expression such as pro-terrorist content are merely “fanciful,” and Florida says platforms can remove broad categories of lawful speech under “standards of their choosing[.]”

The obligations that Texas and Florida seek to impose do not look much like common carriage.

Maybe these rules are so unmanageable that, at the end of the day, the only way to comply with “consistency” or “viewpoint neutrality” rules really would be to carry everything. That would be much more like common carriage, and would in some ways simplify the states’ arguments. But it would also be very far from what the laws appear to say on their face. It would also be very far from the way they are depicted in the states’ briefs or the Fifth Circuit ruling.

None of this resolves the actual question in the case: Whether states can override platforms’ editorial rights to moderate content. But since Florida and Texas keep justifying their laws on the basis of historical common carriage regimes, it is important to recognize just how different their laws actually are, and how much more state power they establish over online speech.

 

* The carveouts apply if content meets the following two definitions. These definitions use words similar to First Amendment standards describing content that may permissibly be prohibited by law. But Texas’s wording encompasses more presumably lawful content. The law also permits removal of “unlawful expression” separately in 143A.006(a)(4), so if these provisions were solely about unlawful expression they would be redundant/surplusage.

Did the Court decide on any of this in Taamneh or Gonzalez last term?

In short, no. But Taamneh contains wording and dicta that Texas and Florida have quoted extensively in their briefs, implying that Taamneh involved relevant factual conclusions, holdings, or platform arguments that are inconsistent with the platforms’ position in NetChoice. I don’t think those passages in Taamneh actually support the states’ position, given both the case’s holding and its motion-to-dismiss posture. I also don’t think some of those parts of Taamneh were accurate or supported by the record in the first place.

Of particular relevance are Taamneh's statements saying or suggesting that the defendant platforms did not engage in these very standard content moderation practices: 

This Lawfare post discusses some key differences between the facts described by the parties’ briefs and those described in the Taamneh ruling; this law review article discusses the relationship between NetChoice and Taamneh. (The article takes issue with Texas’s earlier characterization of Taamneh to the Court at notes 150-153 and associated text. I did warn that this was going to get wonky.) Both of those sources also summarize what happened overall in Gonzalez and Taamneh.

What would it mean for platforms to be “consistent” or “viewpoint-neutral” in moderating user content?

Florida’s law requires that each platform apply its moderation rules “in a consistent manner[.]” Texas’s says platforms may not “censor” expression based on its “viewpoint.” There is plenty of room for dispute about what those rules actually mean. I explore some possible interpretations in the FAQ about common carriage. But here are some examples to ponder. 

For questions like these, the biggest issue for First Amendment purposes isn’t which answer is right. The big issues are how platforms are supposed to know which answer is right in order to avoid liability, and what role courts and state actors like Attorneys General can and should play in establishing the answers.

To my mind, the difficulty of answering these basic questions about the laws’ meaning makes the laws vague. (I can’t recall if anyone has argued that they are thus void for vagueness as a First Amendment matter. Possibly NetChoice did in the Texas district court.)

A smaller wrinkle comes from statutory interpretation. As I discuss here, parts of the states’ rules could be interpreted to require consistency as between users rather than as between content posted. That doesn’t make the rule any clearer, though. It may not even make it any different, because the same expression can mean different things coming from different people, or in different contexts. For example, one user who posts the phrase “this is so queer” might mean to express gay pride, another might mean to express a slur, and yet a third might (if they are old enough, quirky enough, or perhaps British enough) just mean to say that something is odd.

What does Section 230 have to do with these cases?

Florida’s and Texas’s statutes both have provisions about when their laws will yield to federal law, or be unenforceable in light of that law. Florida’s statute specifically references Section 230; Texas’s refers more vaguely to content that platforms are “specifically authorized to censor by federal law.” Both states argue that, in order to interpret their laws and apply the First Amendment, the Court must first engage with and resolve complicated statutory interpretation questions under Section 230, and reach conclusions that no lower court has endorsed. It would be quite a development if the Court turned NetChoice into a case about those statutory provisions now, having declined to consider overlapping ones last year in Gonzalez—particularly given that these Section 230 issues were not mentioned in the NetChoice cert grant, briefed to the Court until now, or considered in the courts of appeals.

The states’ arguments about Section 230, if accepted, would also make their laws more clearly content-based. That in turn would make them even more likely to be reviewed under strict scrutiny and struck down under the First Amendment. Presumably the states decided this was a worthwhile gamble. If the Court adopted every step of the states' apparent reasoning, its logic would raise similar constitutional concerns about Section 230.

 

Big picture: Are the platforms’ First Amendment arguments here inconsistent with Section 230?

Texas says that Section 230 was “an effort by Congress to recognize that entities like the Platforms are not speakers but conduits for their users’ speech.” That’s a common claim from the political right. Many 230 critics on the political left, by contrast, reach precisely the opposite statutory interpretation, saying that the law actually requires platforms to moderate content more actively in order to qualify for immunity.

Neither interpretation finds support in the law’s text, history, or judicial interpretation to date. Congress enacted Section 230 specifically to encourage platforms to adopt and enforce editorial policies. As the law’s drafters explained in an amicus brief, it is entirely consistent for platforms to claim 230’s immunity and also assert First Amendment rights to apply their own editorial standards. Legislative creation of an immunity or legal defense would not, in any case, take away First Amendment rights.

Charges of inconsistency between First Amendment rights and 230 immunities often seem primarily grounded in policy objections. Critics charge that the two shouldn’t co-exist as a matter of fairness, or that they are logically incoherent. That’s a different question from anything raised in NetChoice, though. And the question of which mix of freedoms and liabilities for platforms is optimal as a public policy matter is actually phenomenally complicated. My quick take on the high-level questions presented is in op-ed form here and in more academic form (at p. 135) here. Some really interesting economic modeling of platforms’ likely behavior under varying regimes is here.

Some of the states' arguments imply that legal analysis of speech and immunity issues for ranking is meaningfully different from that for hosting, because ranking is the platform's own act. 

 

In interpreting the states’ laws, why would the Court need to interpret the language of Section 230?

Florida’s law may, by its terms, “only be enforced to the extent not inconsistent with federal law and 47 U.S.C. s. 230(e)(3),” which is the part of Section 230 saying that the statute preempts inconsistent state laws. Texas’s law doesn’t mention Section 230, but does exclude from its viewpoint-neutrality rules any material that platforms are “specifically authorized to censor by federal law.” Florida’s brief in particular goes into detail in asserting that the Court’s First Amendment analysis should turn on its interpretation of Section 230. 

 

What questions about Section 230’s statutory language do the states want the Court to answer? 

Florida’s argument builds on an interpretation of Section 230(c)(1) and (c)(2)(A) that has been popular for several years on the political right—including in then-President Trump’s executive order and subsequent petition for FCC rulemaking about Section 230. For a sense of the legal snarl involved: that petition generated over a thousand comments to the FCC. I will try to compress the issue here, hopefully without making it too hard to follow.

Depending on which interpretation of Section 230 one accepts, some major and diverging potential consequences for Florida’s must-carry provisions might be:

Florida’s choice to advance this last interpretation of its law (saying the Florida statute allows platforms to freely remove content in the categories enumerated in Section 230, but not other content) is arguably a self-own. It makes Florida’s law more clearly content-based, which should trigger strict scrutiny. On the other hand, if the Court accepted this reading of Section 230 (which, again, would be remarkable in this First Amendment case), it would create constitutional difficulties for that law as well.

Texas’s brief gestures to statutory interpretation questions about Section 230 that are even deeper in the weeds. 

Blake Reid discusses these and other 230 interpretation questions in relation to NetChoice here (that article has since been updated, but the older draft is more focused on NetChoice).

What does Florida’s law actually say?

Florida’s law is hard to parse. I have a Google doc tracking easy-to-misunderstand provisions here. Every time I go back to the statute, I find something new—or sometimes revise my previous understanding—so I’m keeping this in the doc, where it is easy to update.

If the platforms win this case, will lawmakers be unable to regulate platforms with rules about things like privacy, discrimination, or competition?

No. Not unless the Court chooses to adopt analysis that creates such a result—and there’s no reason it needs to do that. There are plenty of ways to resolve this case without pre-judging platform regulation questions about topics like privacy, competition, consumer protection, or discrimination. A lot of amici discuss this, including the Stern Center, the Knight First Amendment Institute, and New York (along with about twenty other states). Perhaps tellingly, many of the briefs in this area come from longstanding platform critics, and were filed in support of neither party.
 
Some briefs frame this issue differently. They don’t say that a NetChoice win for platforms would necessarily preclude important other regulation, but that accepting the platforms’ particular arguments would do so. That could well be true. I haven’t closely analyzed their briefs on this point, but the platforms have reason to present their arguments in their strongest form, and ask the Court for the analysis most likely to help them in other major pending cases. A number of those cases, like NetChoice v. Bonta, relate to protection of minors. Many combine questions about state regulatory power over (1) platforms’ collection and use of data from users, and (2) platforms’ dissemination of users’ speech. To my mind, those two issues can often be analyzed separately. (In First Amendment terms, this might involve breaking out Sorrell issues and Reno issues.) The doctrinal paths to striking down the Texas and Florida must-carry laws while preserving room for other regulation are too diverse to examine here. Our amicus brief proposes one; others might build on distinctions between conduct and speech, or between content-neutral and content-based laws.
 

COMING SOON

I hope to soon post more NetChoice FAQs, such as these:

  1. Don’t these laws just make platforms show people what they want to read?
  2. Is this a case about political debate or about Nazis and terrorists?
  3. Are the notice and appeal rules just basic consumer protection measures?
  4. How do the NetChoice cases relate to Murthy v. Biden, the case this term about informal “jawboning” pressure by governments for platforms to remove content?
  5. How do the NetChoice cases relate to Lindke and Garnier, the cases this term about users’ First Amendment rights to follow and engage with lawmakers on social media?
  6. What parts of these statutes will the Court actually review?
  7. Which platforms do these laws actually regulate?
  8. Is this case like Turner or is it like Denver Area?
  9. Does Florida have it in for eCommerce sites?
Published in: Blog, Intermediary Liability