Stanford CIS

Applying Federal Digital Communications Privacy Law to Hosted AI Models

By Riana Pfefferkorn

On March 31, I gave a virtual guest lecture in Mailyn Fidler’s course “The Digital Fourth Amendment” at Harvard Law School. Prof. Fidler invited me to come talk about warrants to AI companies after she read my October 2025 blog post about a warrant issued to OpenAI. As a sort of follow-up to that blog post, I’ve reproduced a portion of my lecture below (with some tweaks to better fit the blog post format).

AI models are used by hundreds of millions of people every day. They have replaced traditional search engines for many users, in addition to becoming companions that some people get very emotionally attached to. The larger the context window for a particular model – the more it can “remember” – the more useful it is to the user… and the more comprehensive the log the AI company is storing about someone’s life, creating a treasure trove for law enforcement. How should we think about AI chatbots and their providers for purposes of digital privacy law?

The SCA Requires a Warrant for AI Users’ Prompts and Models’ Responses

Georgetown Law professor Paul Ohm recently published a paper for Lawfare about the problems posed by reverse searches. In it, he notes that keyword searches and geofence warrants are the two main types of reverse searches right now, but that the list is likely to expand to include AI chatbots, given their popularity and the “gold mine of evidence” they contain about users. Ohm’s argument is that reverse searches probably aren’t authorized under the Stored Communications Act (SCA) and also likely violate the Fourth Amendment – a question on which the Supreme Court will very soon hear oral argument in a case called Chatrie.

Another Lawfare paper on the SCA, published the same day as Ohm’s, goes a bit further in discussing AI chatbots. Digital surveillance experts Rick Salgado and Stephanie Pell delve into the legislative history of the Electronic Communications Privacy Act (ECPA, of which the SCA is a part) to support their argument that “a search query or prompt to be processed by an AI service for text or image generation” qualifies as “contents of an electronic communication” for SCA purposes, “even if there is no second party or communicant. … The term does not require the involvement of some ‘other’ from whom it is sent or to whom it is conveyed.” Legislative history is also invoked to argue that AI companies should count as, at minimum, RCS providers (as other commentators have argued). Implicitly, I believe they’re arguing that AI companies count as ECS providers too.

To Salgado and Pell’s argument, I will add that there have been numerous ECPA cases involving transmissions between a user and a first-party server – also known as “visiting a website.” Your browser sends info to a server, the server sends data back; it’s a computer, not a human, on the other end of the transmission. For over 20 years, the courts have had zero trouble understanding those transmissions to be “electronic communications.” What counts as “content” has proved more divisive, but things like search queries users type into a search engine, or information users enter into a form provided by a website, have qualified as “contents.” Therefore, if you’re a user interacting with an AI chatbot on your phone or your laptop, those interactions seem very clearly to be “contents of electronic communications.” (And not just the user prompts; the AI’s response to a prompt, which may be text, an image, a video, or a combination thereof, is also straightforwardly “information concerning the substance, purport, or meaning of” an “electronic communication.”)

So: For SCA purposes, AI model prompts and responses are “contents of electronic communications,” and AI companies count as ECS and/or RCS providers. 

The Federal Government Seems to Think So, Too

Does the federal government agree? Looking at the warrant to OpenAI that I wrote about last fall, I think it does. In the affidavit filed in support of the search warrant application, paragraph 4 says: “This Court has jurisdiction to issue the requested warrant because it is ‘a court of competent jurisdiction’ as defined by 18 U.S.C. § 2711. See 18 U.S.C. §§ 2703(a), (b)(1)(A), & (c)(1)(A).”

Those citations are to the SCA: Section 2711 is the SCA’s definitions section; Section 2703 deals with compelled disclosures to law enforcement of customer communications (both content and non-content information). Section 2703(a) is about “contents of wire or electronic communications in electronic storage”; 2703(b) is about “contents of wire or electronic communications in a remote computing service”; 2703(c) is about “records concerning [an] electronic communications service or remote computing service.”

These citations indicate to us that the government – or at least, the Special Agent with Homeland Security Investigations (HSI) making this affidavit – thinks of OpenAI as being subject to the SCA, and that the government thinks ChatGPT’s prompts and responses are contents of electronic communications in electronic storage. However, the affidavit is kind of cagey about whether the government considers OpenAI to be an ECS provider or an RCS provider; the affidavit simply tosses off a string citation to Section 2703(a), (b), and (c) all in a row. At minimum, the government seems to think OpenAI counts as either one or the other, and possibly both. (The distinction has long been critiqued as unworkable in the modern Internet age.) 

That said, maybe I’m reading way too much into this language. It is entirely possible that the affiant just took his usual warrant affidavit template for stored communications (e.g., emails, cloud storage files) and copy-pasted the language into the OpenAI warrant application, without thinking about it too much. It’s also possible that, sitting in a field office up in Portland, Maine, the HSI agent here didn’t consult with any D.C.-based higher-ups. Agency leadership typically likes to have a say in whether to try out a novel type of legal process for electronic surveillance — such as a reverse prompt warrant to an AI company under the SCA. Perhaps boilerplate language in one paragraph of a warrant affidavit is too thin a reed to bear the weight I’m giving it.

Still, whatever the backstory, on the face of it this OpenAI warrant was sought under the SCA. Absent any other contextual information, we may as well treat that as a concession by the federal government that the SCA applies to AI companies, and that means “get a warrant.” 

The latter conclusion is further bolstered by the fact that another part of the federal government, the Federal Bureau of Investigation (FBI), was recently revealed to have served a warrant to xAI and thereby gotten a suspect’s prompts to xAI’s Grok AI tool. (This was not a reverse warrant; unlike in the OpenAI case, the suspect’s identity was already known, so the FBI could specify whose account they wanted.) The warrant to xAI is sealed, so we don’t know what authority (e.g., the SCA) it invoked. Nevertheless, it is another indication that federal government policy when seeking AI user data is to get a warrant.

AI Companies Do, and Should, Stand Up for Users’ Privacy

Of course, the government had little choice, really. OpenAI’s policy for government requests for user data says, “OpenAI US … only discloses requested user content to a law enforcement request in response to a valid warrant or equivalent.” That’s a standard requirement for tech companies to impose on government demands post-Warshak. (xAI’s policy is more ambiguous, requiring “appropriate legal process such as a subpoena, court order, or warrant.”)

It is important and meaningful that OpenAI has staked out this “get a warrant” position from the get-go, while the number of government demands for user data that it receives remains shockingly minuscule for a company that counts more than one-tenth of the Earth’s population as weekly active users. As Harvard Law now-3L Jackie O’Neil wrote in her 2025 essay about geofence warrants (before the Supreme Court’s grant of cert in the Chatrie case on that topic):

While courts wrestle with options for sophisticated legal regulation of [reverse warrants], innovation outside of the criminal procedure context may be a stopgap. Technology companies wield ultimate control over [reverse] warrants’ efficacy. … Without means to compel … private companies to retain or organize their data, law enforcement agencies are at the mercy of large private companies with respect to [reverse warrants].

Put simply, AI companies already have a big role to play in protecting users’ digital privacy. AI tools like ChatGPT have become runaway successes with gargantuan user bases, long before the courts have had an opportunity to apply the niceties of ECPA definitions to them and the data they have about their users. Deciding in advance to demand a warrant for the troves of personal data they hold, and then communicating that policy to law enforcement and the public, is a way of setting norms and expectations, ensuring consistent internal practice, gaining public trust, and (hopefully) preempting government attempts at shenanigans.

Protecting User Privacy Means Saying “No” to Overbroad Reverse Prompt Warrants

What remains to be seen is whether OpenAI (and its brethren) will push back if and when shenanigans do happen. The warrant to OpenAI sought only one specific user’s account, but if OpenAI could locate one account that entered a particular prompt and got a particular response, that implies the ability to return a list of multiple accounts that entered a particular prompt (though complying with such a demand would conflict with OpenAI’s own requirement that legal demands “unambiguously identify the user account(s) at issue”). Like I said in my original post, we don’t know OpenAI’s precise capabilities with regard to its gargantuan data stores, but they seem to be quite considerable.

AI companies should keep in mind that it’s in their interest, not just that of their users, not to let overbroad reverse prompt warrants become the new geofence warrants, whose constitutionality will soon be decided in Chatrie. Once Google started complying with them, geofence warrants ballooned to constitute over a quarter of all warrants Google received in the U.S. Eventually, Google changed how it stores user location data so that it couldn’t comply with those warrants anymore. There’s a lesson there for AI companies’ in-house counsel (many of whom came from Google, as it happens). 

I am skeptical that the result in Chatrie, whatever it may be, will definitively settle the constitutionality of reverse warrants in the AI context to the satisfaction of all involved (including AI companies, their users, law enforcement, and judges). After all, following the Court’s last big digital Fourth Amendment decision in 2018, many lower courts elected to read the decision narrowly when considering other flavors of digital surveillance. And anyone to whom the Chatrie outcome proves unfavorable will have an incentive to split hairs. 

We can anticipate litigation in the years to come over how Chatrie applies to AI. But that only makes it all the more important for AI companies to proactively adopt a robustly privacy-protective stance in the here and now. The major AI companies will rapidly become very powerful evidence intermediaries. We, the general public and the users of those companies, also have power. We can influence how the big AI companies respond to novel law enforcement demands. Consider how the number of downloads of Claude surged after Anthropic stood up to the Department of Defense over issues including the mass surveillance of Americans. That’s the kind of signal from users that’s hard for these companies to ignore. It wasn’t that long ago that it was in vogue for tech companies to loudly stand up for their users’ rights. It could become cool again. We can help make that happen.