
Well, At Least the Anti-States’ Rights AI EO Spares AI-CSAM Laws

By Riana Pfefferkorn

On December 11, 2025, President Trump signed an executive order (EO) that purports to deprive states of the ability to regulate artificial intelligence (AI) – to the modest extent possible given the limited power of EOs, which cannot require or forbid states to do anything. (Whether specific state AI laws will founder under other authorities, such as Section 230 or the First Amendment, is a separate question beyond the scope of this blog post.)

There is plenty to say about this EO, much of it unflattering (since it is very bad). However, one positive aspect is that the EO appears to tell the executive branch not to hassle states for cracking down on AI-generated child sex abuse material (AI-CSAM), the topic of dozens of state laws enacted in recent years. And because that “hands off” direction extends to online child safety more broadly, it looks like states will also be able to turn their attention to the red-hot topic of AI chatbots’ safety risks for children without drawing White House ire. (Again, a caveat that other authorities might still doom those state laws; also, laws purporting to protect children online are often terrible ideas, whether this administration approves of them or not.) 

Section 8(a) of the EO tasks two senior administration officials with preparing a legislative recommendation for preempting state-level AI laws. This section differs significantly from the draft of the EO that leaked last month. Newly added is Section 8(b), which provides a list of carve-outs:

“(b) The legislative recommendation called for in subsection (a) of this section shall not propose preempting otherwise lawful State AI laws relating to:

(i) child safety protections;

(ii) AI compute and data center infrastructure, other than generally applicable permitting reforms;

(iii) State government procurement and use of AI; and

(iv) other topics as shall be determined.”

Nominally, Section 8(b) only modifies Section 8(a) – the legislative proposal for preempting state AI laws. Elsewhere in the EO, Section 3 requires the Attorney General to create an “AI Litigation Task Force (Task Force) whose sole responsibility shall be to challenge State AI laws inconsistent with” White House AI policy (namely, “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI”). Section 4 calls for the creation of a list of litigation targets, i.e., state AI laws that conflict with that policy; Section 5 directs the withholding of federal broadband funding from states unless they regulate AI the way the White House wants them to. 

None of those sections contains the same limiting language that Section 8(b) imposes on 8(a). Nevertheless, I read that list of carve-outs as implicitly a statement of policy that carries over to the previous sections. That is, I find it highly unlikely that states’ AI-CSAM laws will be deemed to conflict with the EO’s stated policy (Section 4) or the federal FTC Act (Section 7), that the AI Litigation Task Force will sue all those states for having passed laws prohibiting AI-CSAM (Section 3), or that those laws will be implicated in the carrot/stick game of federal broadband funds (Section 5). This is for several reasons.

First, AI-CSAM prohibitions are perfectly consonant with the EO’s stated AI policy anyway. As said, that policy is “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” By and large, state AI-CSAM laws and bills target AI models’ outputs, not the models themselves. They will be enforced against the end user who creates and/or shares AI-CSAM using a generative AI tool, not the entities that provided the tool (though we are starting to see laws and bills targeting nudifier app services). Criminalizing AI-CSAM doesn’t hamper the business of OpenAI, Meta, Google, et al., nor the development of open-source models by nonprofits or academia.

In fact, federal government policy on AI-CSAM is already very clear. Congress passed, and the President signed, the TAKE IT DOWN Act earlier this year. The law outlaws nonconsensual deepfake pornography whether it depicts adults or minors, with both criminal and civil liability. Plus, using a computer to create CSAM of real kids has been a federal crime for almost three decades already, as I cover in this paper.

Second, for this administration to sue states to try to invalidate their child-protection laws is what seasoned political experts, in their specialized jargon, would call “a completely dumbfuck idea.” “Child safety” is already a sacrosanct topic, as I’ve learned from a decade working on tech policy. Invoking “child safety” tends to shut down nuanced discussion and rational thought, and it makes opposing even bad bills very difficult politically, as voicing concerns gets you called a pedophile. As one of the state legislative staffers interviewed for my recent AI-CSAM paper remarked, “Nobody objects to trying to protect children.” 

It would be terrible optics for the Department of Justice (DOJ) to go after states over CSAM- or other child safety-related laws. That would be true at any time, but it is particularly ill-advised now. The Epstein files (of which another batch is in the news today) have proven remarkably resilient as a topic preoccupying the American public, even among Trump’s own base. Even Trump loyalists in the GOP voted to release the files. The President’s approval rating is in the toilet. He is not going to tell Pam Bondi to give Gavin Newsom more ammo against him.

Third, executing the EO will require resource management. There are only so many federal government employees available to carry out the EO – especially since this administration has illegally fired so many federal workers and redirected so many others to helping kidnap nannies and gardeners, with a measurable impact on child safety investigations. Whoever works on implementing the EO will have only so many hours in the day, and with six different sections of the EO creating various workstreams, it’s possible some employees will end up on more than one. I assume the number of personnel carrying out the EO will be relatively modest, at least compared with the number tasked with immigration enforcement. Or redacting the Epstein files.

As said, there are dozens of states with AI-CSAM laws. And CSAM is just one topic; states are proposing and passing way more bills on AI besides that. For the federal government to challenge every state AI law simply does not scale. That means the carve-outs listed in Section 8(b) – which allows for “other topics as shall be determined” – are an operational necessity. It’s also why I think Section 8(b)’s list will, in practice, be read to apply to the previous sections too. The EO tasks various agencies and individuals with time-sensitive projects; as they plan their work, they can now deem the Section 8(b) topics out-of-scope. Whereas many states have “State AI laws relating to child safety protections,” far fewer have the kinds of big-picture AI governance laws, like those passed by Colorado and California, that are likely to be the highest-priority targets under this EO. The 8(b) carve-outs free up employees’ limited time and resources to focus on those priorities. (To be clear, I’m not saying I agree with the EO – only that triage is unavoidable for those tasked with implementing it.)

Finally, it bears noting that the wording of Section 8(b) exempts state AI laws relating to “child safety protections,” not AI-CSAM in particular. I suspect that this wording is intentionally broad so that it encompasses not just AI-CSAM legislation, but also other online child safety bills (in past, present, or future sessions) – of which there are a lot – that incidentally or explicitly cover AI, not just social media, gaming, and the like. 

Child safety has been a predominant issue in Congress and the state legislatures alike for several years running, and even if state laws addressing that topic might be “burdensome” on AI companies, it is a hard sell politically to let AI companies alone get a hall pass from compliance. That’s why child safety was also a carve-out from some versions of a proposed federal legislative moratorium on state AI laws earlier this year. The House passed a moratorium (sans child safety exemption) in its version of the One Big Beautiful Bill Act, but the Senate ultimately removed it, prompting this small-beer simulacrum from the executive branch.

What’s more, AI chatbots are the latest, hottest topic in online child safety right now, following multiple high-profile cases of teen suicides allegedly related to chatbot interactions. States are already starting to enact legislation, and I believe AI chatbots will be as popular a legislative topic in the next few state legislative sessions as AI-CSAM was in the last few. After all, “Nobody objects to trying to protect children.” 

Accordingly, the EO’s child-safety carve-out may operate as a limit on how the EO is executed – even though, again, it’s found only in Section 8, not in the other sections. In practice, I predict that the “child safety protections” language in Section 8(b) will be read by the AI Litigation Task Force, the Federal Trade Commission, et al. as an instruction not to get in states’ way as they start to regulate AI chatbots with respect to child safety, even if those regulations would otherwise be squarely in the EO’s sights. For example, Section 7 of the EO (evoking July’s “Woke AI” EO) frowns on “State laws that require AI models to alter their truthful outputs.” Technically, that would include, say, a bill that would prohibit chatbots from providing true information to under-18 users about how to make a noose, like ChatGPT allegedly gave to a teenager before his suicide. Under the EO’s child safety carve-out, the administration is not likely to challenge a bill like that (or a part of a bill, or the enforcement thereof by the state).

Lastly, to the extent that some states do meekly go along with this EO instead of standing up for themselves (I thought “states’ rights” were sacrosanct?), the child-safety carve-out also serves as a green light. Even though the EO is plainly meant to scare states out of passing and enforcing new laws regulating AI, the carve-out signals that they can pass AI laws governing CSAM, chatbots, and so on, and the administration won’t punish them for it.

In conclusion, attacking states’ right to regulate AI and protect their own residents is abhorrent. But there are more than 50 states and territories, and their legislatures seem determined to ensure that their constituents benefit from AI rather than being harmed by it. Even GOP-led state governments are standing up against the new EO. And no wonder: not only do they have to protect their own right to govern, but they’re also listening to their constituents’ desires. More AI regulation is what Americans want. This EO is not going to stop that from happening.