Stanford CIS

How Do We Protect Children in the Age of AI?


As students return to classrooms this fall, many teachers are concerned about emerging AI tools getting in the way of learning. But a more worrisome AI trend is developing: Older kids are beginning to use “undress” apps to create deepfake nudes of their peers. Beyond a few news stories of incidents in places like California and New Jersey, the prevalence of this phenomenon is unclear, but it seems not to have overwhelmed schools just yet. That means now is the time for parents and schools to plan proactively to prevent and respond to this degrading and illegal use of AI. 

HAI Policy Fellow Riana Pfefferkorn studies the proliferation and impact of AI-generated child sexual abuse material. In a May 2025 report, she and co-authors Shelby Grossman and Sunny Liu gathered insights from educators, platforms, law enforcement, legislators, and victims to assess the extent of the problem and how schools are handling the emerging risk. 

“Although it’s early days and we don’t have an accurate view of how widespread the problem may be, most schools are not yet addressing the risks of AI-generated child sexual abuse materials with their students. When schools do experience an incident, their responses often make it worse for the victims,” Pfefferkorn says.

Read full interview at Stanford HAI