Stanford CIS

Deepfake Nudes in Schools: When AI-Enabled Abuse Hits the Classroom

AI-powered “nudify” apps are increasingly being used by students to create deepfake sexual images of classmates, but most schools still do not have clear policies for prevention, reporting, discipline, or victim support.

Riana Pfefferkorn, a policy fellow at Stanford HAI, joins DeepFake Dialogues to discuss her policy brief, Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy, and what schools, lawmakers, platforms, and communities need to understand about how this is actually playing out.

Published in: Press, Artificial Intelligence, CSAM