In this HAI seminar, Riana Pfefferkorn, Policy Fellow at Stanford HAI, discussed her research paper on AI-generated child sexual abuse material (AI CSAM).
AI CSAM poses distinct harms, including reputational damage and emotional distress when images are fabricated from photos of fully clothed children. The rise of "nudify," "undress," and face-swapping apps has made it easy for unskilled users to produce such material, leading to incidents in schools where male students have targeted female peers. The paper examines how educators, platforms, law enforcement, state legislators, and victims are thinking about and responding to AI CSAM, and offers schools an opportunity to proactively prepare their prevention and response strategies.
This seminar was recorded on December 3, 2025 at Stanford University.