Prepared Remarks on U.S. Legal Considerations for Children's Online Safety Policy

I was recently invited to a private workshop on children's online safety policy, where I gave a short presentation about the U.S. legal context. Here are my prepared remarks. Note that they largely avoid giving my personal perspective on hotly debated areas, such as the interaction between Section 230 and app design features, or proposals for age-verification requirements. It is an overview, not an op-ed, presented to an audience that, while it included some tech policy experts, had many people who were new to these issues. A few attendees asked me to share my written remarks, and I'm glad to oblige.

 

Thank you for coming today. I’m Riana Pfefferkorn, and I’m a research scholar at the Stanford Internet Observatory. Prior to joining SIO, I spent my first 5 years at Stanford at the Center for Internet and Society, studying tech policy through a civil liberties-centric lens. Before that, I was a lawyer in private practice representing major tech companies on matters that are highly relevant in policy discussions today, like online privacy and Section 230. 

 

That background is why I’ve been asked to give this overview of legal considerations in the U.S. for online safety policy. It is one part descriptive, covering some areas of relevant law, and one part prescriptive, flagging the pitfalls I see for regulating in this space.

 

Policymakers in the U.S. have a very tricky job. That’s due to a couple of factors that make it especially challenging to legislate wisely in the area of tech policy – no easy feat anywhere, to be sure, given that legislation cannot change at the same pace technology does.

 

One factor is that the U.S. contains multitudes. We are a big, diverse country, with wildly divergent community standards. This is true even within California, home to 1 in 8 Americans. 

 

Another factor is our Constitution and federal laws, which differentiate us from other nations, even other Western democracies. 

 

These factors must be kept front and center when crafting children’s online safety policy.

 

There are some elements of children’s online safety measures that American lawmakers have imported from abroad, such as the Children’s Code in the UK that informed the Age-Appropriate Design Code here in California. 

 

But given the factors I just mentioned, it is not feasible to simply pluck policies from other countries and drop them wholesale into the U.S. framework. They will not fit right.

 

First, we are not a monoculture like some countries, especially small nations in parts of Northern Europe and East Asia. Different states have different, and sometimes flatly contradictory, social norms and state laws. We’ve seen how those deep-seated differences can impede Congress from passing federal laws in areas of tech policy. 

 

It is then up to the states to fill the vacuum. California has long led the nation as an innovative laboratory of democracy. Nevertheless, in 2023, it is undeniable that our state leadership’s idea of “child safety” differs in important ways from that of the governments of, say, Utah or Florida. 

 

And yet – whether you’re a trillion-dollar company, a non-profit organization, or some guy running a Mastodon instance – if you operate an online service in the U.S., offering it on a free, open, global Internet that doesn’t recognize state borders, you still have to comply with every state’s laws and figure out what to do when those laws contradict one another. 

 

State-by-state variation creates a complex legal compliance landscape whose burden is not borne equally by everyone subject to it. And differing social norms make it impossible for tech entities to implement child safety policies that make everybody happy. 

 

We should keep those considerations in mind today when elucidating the policy trade-offs at stake in children’s online safety. And, before anything else, in talking about “children’s online safety,” we’d better start by defining what that means to us, lest we talk past each other.

 

So: the breadth and depth of these United States is one thing that distinguishes policymaking here from policymaking in some other countries.

 

Another is our federal and state constitutional protections, particularly for free expression and privacy from state surveillance. The robustness of these protections means that some ideas for online child safety that might fly elsewhere are simply off the table here. 

 

For example: Very few types of speech fall outside of constitutional protection in the U.S., and this constrains regulation of speech on the Internet. That the First Amendment protects free speech on the Internet, even that which might be inappropriate for children, has been settled law for a quarter of a century. The First Amendment protects not just the right to speak, but also the right to receive information. Thanks to the First Amendment, the state cannot simply pass a law banning online services from hosting certain types of so-called “lawful but awful” content deemed harmful to children, as the UK proposed to do in a past version of its contentious Online Safety Bill. 

 

Another example: The idea of a so-called “general monitoring obligation” requiring online service providers to proactively search out (and block) certain types of content from their services. This idea has been proposed repeatedly, from Europe to India, for content such as CSAM and hate speech. But, thanks to the Fourth Amendment, the government here cannot require providers to, say, search users’ accounts for child sex abuse imagery – because that would turn them into agents of the state conducting unconstitutional mass warrantless searches. 

 

On top of the federal Constitution’s robust civil liberties protections, California’s state constitution provides additional protections for Californians’ free expression and privacy. For example, the state constitution’s privacy clause applies to private as well as public entities. 

 

I view all those protections as a feature, not a bug – but they do complicate the task of online child safety policymaking. Child safety is important, but children still have constitutional rights, and adults’ rights are even broader. Thus, laws protecting children must nevertheless abide by constitutional contours. (Precisely where those contours lie is a point of disagreement.)

 

Conversely, there are areas where U.S. law lags behind other countries. Notably, we’re the only United Nations member state that hasn’t ratified the UN Convention on the Rights of the Child (though we did ratify its Optional Protocol on child trafficking and CSAM). The rights guaranteed by the Convention include life, survival, and development; freedom from violence; and the highest attainable standard of health. They also include the rights of free expression, to seek and receive information and ideas of all kinds, and the right to privacy, including in one’s correspondence. These rights “are interdependent, non-hierarchical, and mutually reinforcing” – that is, no one right may be thrown out the window in the name of protecting some other right.

 

By ratifying the Convention, all of America’s peers have made commitments to protect children’s rights – commitments they (at least are supposed to) respect when they pass laws implicating children’s safety, online or off. The U.S., however, has not assumed this burden.

 

Likewise, our peers in the EU have the General Data Protection Regulation, or GDPR. The United States, by contrast, lacks a general federal privacy law. A few states including California have enacted state-level laws inspired by the GDPR. And at the federal level, we have a patchwork of protections for privacy and data security that apply in various contexts. However, generally applicable privacy bills in Congress have failed again and again.

 

These gaps make it treacherous for U.S. policymakers to import elements of other countries’ online child safety measures. Context is vitally important. Policymakers abroad are not writing child safety legislation on a blank slate; rather, what they draft must accord with existing legal protections – both for children’s rights, and for people’s online privacy and data security. 

 

We forget that context at our peril. Imagine if Congress passed a law inducing online service providers to collect personally identifiable information for age-assurance purposes, without having first enacted a privacy law requiring them to keep all that PII safe. It would be like making someone get on a flying trapeze after refusing to install a safety net underneath.

 

So: When looking to other jurisdictions for inspiration in crafting child safety policy, U.S.-based policymakers must account for how statutory differences make imitating some ideas imprudent, and how constitutional differences make imitating other ideas impossible.

 

I’ve mentioned a few areas of law that inform policymaking for children’s online safety. Next I’ll discuss a few more that are relevant to our topics today. Probably the most obvious is Section 230, the federal law that largely prevents online service providers, such as social media platforms, from being held liable for the activities of their users. This means that civil lawsuits and state criminal charges can’t be brought against providers for hosting, say, cyberbullying, health misinformation, pro-eating disorder content, or other user content that might be harmful to children. 

 

Section 230 has limits. It does not grant providers immunity for violations of federal IP law or federal criminal law, including federal CSAM law, and there’s also a controversial carve-out for certain sex trafficking offenses. What’s more, providers are not immune from potential liability if they contributed to the illegality of the content in question.

 

Lately, it’s in vogue to argue that Section 230 should not apply to bar liability for the design of an online service’s features, as distinct from the user content the service carries. That angle is highly pertinent to children’s online safety legislation because it informs efforts to regulate dark patterns, addictive UI, gamification, and other design features. The fate of this “design exception” argument may be decided by a pending Supreme Court case about whether Section 230 applies to YouTube’s algorithmic recommendations. Basically: Stay tuned. 

 

Section 230’s immunity has been broadly interpreted by the courts since its enactment in the mid-’90s. But even if a court rules that 230 does not apply to a particular charge, that does not mean the provider is automatically guilty of the accusation: it merely means the provider must face the litigation instead of getting the case dismissed early on. The plaintiff or prosecutor still has to prove their case. Thus, even if the Supreme Court holds that YouTube must face that lawsuit over its algorithmic recommendations, that’s no guarantee the plaintiffs would win. A defendant whose 230 argument fails may well ultimately prevail in the lawsuit on other grounds.

 

It’s worth noting that often, when people think they’re mad about Section 230, they’re actually mad about the First Amendment. Not only does the First Amendment protect a lot of unpleasant user content, it also protects online services’ own editorial discretion to choose what content to carry or ban, amplify or downrank, etc. Even if Section 230 were repealed tomorrow, the First Amendment would still protect providers’ content-moderation preferences – even the ones you don’t like.

 

Of course, neither the First Amendment nor Section 230 protects CSAM. Child sex abuse material is one of the few categories of content that fall outside First Amendment protection, and it’s illegal for providers to knowingly host it. Federal law requires that when providers have actual knowledge of CSAM on their service, they must report it to the National Center for Missing & Exploited Children (NCMEC). As I said, the Fourth Amendment precludes the state from requiring providers to look for CSAM, but they can search their services voluntarily (and many do) without violating the Fourth Amendment, because those are private searches by a private actor. 

 

Those private searches are also allowed by the Electronic Communications Privacy Act, or ECPA for short. ECPA is a federal law governing providers’ ability to access and disclose users’ communications, communications metadata, and basic information about a user (such as their email address and IP address). The law draws a distinction between content and non-content data, between voluntary and compulsory disclosures, and between disclosures to law enforcement and to private parties. With some exceptions, it prohibits providers from disclosing the contents of user communications, such as emails and DMs, to private parties. As for police, they must get a warrant for those contents. 

 

On top of the federal ECPA, California has its own CalECPA law, which says that “in most cases, [state] police [agencies] must obtain a warrant from a judge before accessing a person’s private information, including data from personal electronic devices, email, digital documents, text messages, and location information.” Policymakers must be aware that ECPA and CalECPA tie providers’ hands in terms of what they can disclose, and to whom, in child safety matters. 

 

Speaking of law enforcement access to information: Encryption is legal in the U.S., full stop. Federal law allows the providers of online services, such as messaging apps, to encrypt user data so that even they, the provider, cannot decrypt it. Likewise, it’s lawful for Apple and Google to design encrypted smartphones that the companies themselves cannot unlock for law enforcement. And it’s lawful for Apple and WhatsApp, as they have recently begun to do, to offer encryption functionality that users can turn on to protect their cloud backups, such that the cloud hosting company can no longer read the contents of the backed-up data. And nothing in current U.S. law permits law enforcement to force the providers of encrypted devices and services to change their product design in order to enable law enforcement access to users’ data.

 

Encryption is a standard best practice for protecting data privacy and security, and it’s been vital to safeguarding Americans – including children – against hackers, foreign adversaries, domestic abusers (including parents), and other threats. Indeed, in the absence of a federal privacy law, consumer protection authorities, from the Federal Trade Commission to state attorneys general, have been very active in recent years in bringing enforcement actions for unfair and deceptive business practices against firms that fail to adequately secure user data, including ed-tech companies that hold lots of data about kids – and do a shoddy job of protecting it. 

 

At the same time, strong encryption for devices and digital data complicates both providers’ trust & safety programs and governments’ criminal investigations. How to protect users’ (including children’s) privacy, security, and personal safety in a ubiquitously encrypted world is an ongoing discussion that I’m sure we’ll touch on today.

 

Finally, an area of law where providers basically write the law themselves is terms of service, or TOS for short. TOS are legally binding contracts that dictate what is and isn’t acceptable behavior on a particular service. Providers commonly use their TOS to set the rules for keeping children safe, such as by clearly banning CSAM, grooming, cyberbullying, and non-consensual deepfakes, to name a few examples. Providers setting their own community rules is an important component of the online child safety landscape. New challenges are presented by the exploding popularity of the so-called “fediverse,” such as Mastodon, where each server sets its own rules, and trust & safety functions can’t be centralized the way they can be at Instagram or YouTube. 

 

This has been a high-level overview of just some of the many legal considerations that come into play when policymakers try to regulate child safety online. It is a daunting task, to say the least, and as I’ve laid out, I think America’s unique social and legal context makes the task especially hard here compared to other countries. No surprise, then, that for every law implicating online child safety that lawmakers have passed, there’s a lawsuit challenging it.

 

In today’s event, we’ll be highlighting a lot of competing trade-offs when it comes to children’s online safety. We’ll be looking for areas of agreement on what constitutes sound policy – and maybe we’ll even find some. But don’t get discouraged if we don’t. Everyone in this room has the shared desire, in good faith, to protect children online, even if we disagree on what that means in practice. Let’s carry that shared sense of purpose into the afternoon. I’m looking forward to the discussions today. Thank you.