The EARN IT Act Is Unconstitutional. First Up, the First Amendment.

The disastrous EARN IT Act bill is coming up for a hearing Wednesday morning in the Senate Judiciary Committee, where it’s been introduced as Senate bill S. 3398 (though, at this time, still no bill text – see here for that). In advance of that hearing, I have a few thoughts that go beyond my usual angle (encryption) to discuss the bill’s serious constitutional defects. In a series of posts, I’m going to cover why I believe the bill is unconstitutional under at least three separate amendments to the Constitution: the First, Fourth, and Fifth (with ramifications for the Fourteenth too). Caveat: I’m not actually an expert in these areas, but I know that each one of them is super complicated, so the actual experts out there should feel free to reach out to me to correct any mistaken analysis in these posts.

Let’s go. Up first, naturally, is the First Amendment.

I mentioned in my previous blog post that the best practices under the EARN IT Act would be likely to make providers censor lawful speech. Of course child sexual abuse material (CSAM) is illegal and not protected by the First Amendment. However, the bill would require a Commission to develop “best practices” that providers “may choose to engage in to prevent, reduce, and respond to” not only child sexual abuse imagery, but also “enticement, grooming, sex trafficking, and sexual abuse of children” as well.

As I mentioned, it’s more difficult than you might think even to figure out which images are in fact unlawful CSAM that falls outside the First Amendment. Restricting EARN IT only to CSAM would still implicate the First Amendment in light of the potential for providers to censor protected speech, because line-drawing, even with imagery, isn’t easy. (Even cartoons, which are protected by the First Amendment, get reported to NCMEC as CSAM, meaning they’d be at risk of censorship under EARN IT best practices.)

Adding those other categories of crimes makes the First Amendment problem with this bill even bigger than if it “only” involved CSAM. Enticement and grooming behavior can involve CSAM (e.g. showing a child sex abuse image to a potential child victim in order to normalize abuse). But it also involves, basically, free-text conversations between two individuals. That is: it involves online speech, which, in general, is protected by the First Amendment just like offline speech. There are many kinds of illegal online content, some more clear-cut than others. The further away you get from CSAM—the most open-and-shut, clearly-illegal kind of illegal online content in existence (and even then it’s not crystal clear)—the harder it becomes to spot the unlawful content. That’s especially true at the huge-scale volume of content on popular online services.

If it’s hard for a provider to tell what’s unlawful, but it’ll face legal liability if it gets it wrong, then the obvious incentive is for the provider to avoid the danger zone and censor lots of lawful content along with the unlawful content, just in case. This is exactly why Section 230 immunity is so important, because without it, platforms would constantly censor a lot of legal speech by their American users – maybe your speech – as a byproduct of trying to censor illegal speech, for fear that if they didn’t cast a wide net, some illegal speech would slip through and they’d get sued for it.

That’s exactly what’s happened with SESTA/FOSTA, the first law to curtail providers’ Section 230 immunity for a particular topic (sex trafficking), in whose footsteps EARN IT would follow if enacted. The law (I’ll just call it FOSTA) stripped providers’ Section 230 immunity for sex trafficking crimes, made it illegal to knowingly “facilitate” sex trafficking, and left it up to platforms to figure out what that meant for them and how to avoid liability for it. Predictably, that resulted in platforms’ suppressing lots of legal speech. An example is Craigslist’s decision to shut down its entire “personals” and “therapeutic services” sections, just in case some of the people supposedly seeking a romantic connection or offering “massage therapy” services were actually trafficking victims being prostituted.

FOSTA is currently being challenged in court as unconstitutional under the First Amendment, on grounds including that (1) it is a content-based prohibition of online speech (meaning it restricts what people can say, rather than how they say it) that fails what’s called “strict scrutiny” (a demanding test requiring that a content-based restriction further a compelling government interest, be narrowly tailored to achieve that interest, and use the least restrictive means of achieving it), and (2) it’s overbroad (meaning it regulates a substantial amount of constitutionally protected expression).

EARN IT suffers from these same constitutional defects. In fact, EARN IT would go beyond FOSTA. Rather than simply strip providers’ Section 230 immunity for the crime at issue, period (with the predictable result of providers’ mass censorship of users’ legal speech), EARN IT instead dangles the carrot of continued immunity, which is guaranteed only by certified compliance with “best practices” with respect to 11 “matters” the bill requires the Commission to address.

That is, the Commission’s “best practices” would dictate to the provider directly what to do to avoid liability, rather than leaving it up to the providers to decide what speech to suppress, as FOSTA did. Make no mistake, these “best practices” will be congressionally-approved rules for policing online speech. And of those 11 “matters” to be addressed in the best practices, many “are written in an overly broad fashion, without clear definitions,” as a coalition of 25 civil society groups noted in a letter to the bill’s cosponsors.

This has the same overbreadth and tailoring problems under the First Amendment that FOSTA does. It exposes providers to potential criminal liability unless they follow “best practices” on topics written so broadly that they would sweep in large amounts of legal speech. And if providers instead adopt “reasonable measures” regarding those same 11 matters, the bill incentivizes them to choose measures that suppress a lot of legal speech, in order to steer far clear of any chance of being deemed “unreasonable” in pursuing the goal of fighting child sexual exploitation.

We have been here before, even prior to FOSTA. A whopping 23 years ago almost to the day, the Supreme Court heard oral arguments in a case called Reno v. ACLU. That case, argued on March 19, 1997, established beyond question that the First Amendment applies online. It also struck down as unconstitutionally overbroad several portions of the 1996 Communications Decency Act (of which Section 230 is a surviving part) which criminalized engaging in “indecent” or “patently offensive” online speech, if the speech could be viewed by a minor. Reno demonstrates that Congress has long tried to pass legislation regarding children’s safety online – and it seems incapable of doing so without violating the First Amendment.

Even after the CDA’s failure in Reno, Congress passed another law called COPA aimed at restricting minors’ access to harmful material online. COPA criminalized online commercial speech that is “harmful to minors,” but gave providers a defense against prosecution if they showed that minors’ access to harmful content was restricted by some kind of age verification measure. COPA got struck down too; the courts ruled that the age verification requirement was not narrowly tailored to serve the government interest of protecting children online, and that it would chill protected speech. But it took multiple rounds of litigation for the law to finally die in 2009. (Fun fact: as a summer law intern at CDT in 2007, I helped write CDT’s amicus brief to the Third Circuit, when COPA was on its way up to the Supreme Court for the second time.)

2009 was over a decade ago. And yet here we are in 2020, and one of the “matters” that the Commission would have to cover in its best practices is “employing age rating and age gating systems to reduce child sexual exploitation” – the same damn thing deemed unconstitutional in COPA. And another thing: the point of age verification is to restrict minors’ access to legal pornography; it has no bearing on CSAM, which is illegal for anyone to access regardless of age. Why is this even in the EARN IT bill? Age verification is not narrowly tailored to fighting CSAM at all; it would just result in overbroad restrictions on minors’ access to online content. Again.

Has Congress really learned nothing after almost a quarter-century? I have no problem with making it illegal to sexually exploit children online. But I do have a big problem with censorship of legal speech online, and this bill is just the latest in a long line of congressional missteps that wound up doing just that. To illustrate how a law aimed at “just” illegal activity will end up censoring perfectly lawful speech, it’s easy to come up with hypothetical “best practices” or “reasonable measures” under the EARN IT Act that show why the Act’s approach is overbroad and fails strict scrutiny.

Let’s keep focusing on enticement and grooming. Here are my hypothetical recommendations: “(1) In order to prevent grooming and enticement, providers should prevent adults from contacting minors on the provider’s service, including by ‘friending’ or ‘following’ the minor’s account. (2) It is strongly recommended for providers to prevent such contact even if the minor knows, or is reasonably believed to know, the adult, because 93 percent of child sex abuse victims know their abuser. (3) Providers should immediately and permanently ban any user from the service if the provider knows or should know that the user engaged in grooming or enticement activity with a minor.”

These recommendations could be “best practices” for “preventing, identifying, disrupting, and reporting child sexual exploitation” (which includes grooming and enticement). Or they could be “reasonable measures” regarding that same topic that a provider decides to adopt instead of the best practices. Either way, they seem like they might work, right? If you’re serious about fighting grooming and enticement online, then you really can’t be too careful. These measures sound like they’d be pretty effective at preventing grooming and enticement by cutting down on opportunities for them to happen. But, like the overbroad CDA and COPA of yesteryear, these measures would inevitably censor perfectly legal speech.

Let’s say there’s a social media account that the provider either knows (from the user providing a birthdate and gender) or can infer (from other data) belongs to an adult man; and there’s a second account that the provider knows or infers is a teenage boy. One account contacts the other; this gets flagged by the provider’s EARN IT compliance system (which would likely use indicators including the content of communications, where possible, as well as other information about the accounts involved).

Scenario 1: Facebook. The teenage boy is the only child of a single mother who has no other living family except her brother. The mom isn’t on Facebook, but her son is. The adult man is her brother, who joins Facebook and wants to friend his nephew and use Facebook Messenger to chat with him. The two accounts have no friends in common, the users have different last names, and they live in two far-apart cities, so to Facebook it looks like they don’t even know each other. Given all of those data points, in order to avoid the risk of liability under EARN IT, Facebook bars the uncle from being able to even friend his nephew at all, much less chat with him.

Scenario 2: Twitter. The adult man is a urologist who’s become an unlikely celebrity on Twitter due to his open, frank, and cheerful approach to discussing sexual health. He leaves his DMs open so that any user can message him privately, even if he doesn’t follow that user. The teenage boy has some questions which he’s too embarrassed to ask his pediatrician, so he DMs the urologist, using slang language for body parts and sex acts. The urologist DMs back, in a professional tone, using appropriate medical terminology, and answers his questions. Twitter DMs are not end-to-end encrypted (yet), so Twitter can “see” their content. Due to EARN IT, the company has begun scanning all DMs for potential grooming/enticement (something it already does for certain other purposes). It detects that a minor’s account has contacted an adult’s account, the latter replied back, they don’t follow each other or have followers in common, and the DMs they’ve exchanged contain sexual words. Twitter flags the DMs for review by an internal team and suspends the urologist’s account pending the outcome of the review. Twitter also notifies the urologist that his account is being investigated and, depending on the result, his account may be terminated and he will be banned from the service permanently, in accordance with a “one strike” rule instituted in order to avoid liability under EARN IT.
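Both scenarios follow the same pattern: a crude, signal-based filter flags innocent adult–minor contact. To make the overbreadth concrete, here is a purely hypothetical sketch of what such a compliance filter might look like. Every signal, word list, and threshold here is my invention for illustration; no real platform’s system (or any actual “best practice”) is described.

```python
# Hypothetical sketch of a provider's grooming/enticement filter.
# All signals, the word list, and the threshold are invented for
# illustration; they do not come from the EARN IT bill or any platform.

SEXUAL_TERMS = {"sex", "penis", "erection"}  # toy word list

def flag_contact(adult_in_thread: bool,
                 minor_in_thread: bool,
                 mutual_connections: int,
                 message_text: str) -> bool:
    """Return True if the contact should be flagged for review."""
    if not (adult_in_thread and minor_in_thread):
        return False
    score = 0
    if mutual_connections == 0:      # the accounts appear to be strangers
        score += 1
    words = set(message_text.lower().split())
    if words & SEXUAL_TERMS:         # sexual vocabulary in the message
        score += 2
    # Facing liability for missed abuse, the provider errs on the
    # side of flagging: a single weak signal is enough.
    return score >= 1

# Scenario 1: the uncle. Strangers by the data, innocuous message.
print(flag_contact(True, True, 0, "Hi! It's your uncle, want to chat?"))   # True

# Scenario 2: the urologist. A medical answer containing sexual terms.
print(flag_contact(True, True, 0, "An erection lasting hours needs a doctor."))  # True
```

The point of the sketch is that the liability incentive shows up in the threshold: a provider afraid of being deemed “unreasonable” sets it so low that the uncle and the urologist get flagged right alongside actual predators.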

These are just two simple examples of how the EARN IT Act would result in the unconstitutional censorship of perfectly legal speech. Adults and children have a First Amendment right to talk to each other, even when that speech takes place online. Minors have a First Amendment right to seek and receive information about their health, including their sexual health. Yet in the name of cutting down on grooming or enticement online, the EARN IT Act would result in the censorship of vast amounts of constitutionally protected speech. That violates the First Amendment.

This bill must not pass. Want to tell your congressperson you oppose it? Digital rights orgs Fight for the Future and EFF both have ways for you to take action.
