Stanford CIS

New Paper on AI-Generated CSAM

By Riana Pfefferkorn

Today, Lawfare published my paper on the law and policy implications of AI-generated child sexual abuse material (CSAM), and there’s an accompanying podcast episode featuring me and my Internet Observatory colleague David Thiel. The paper is here, and the podcast is here.

tl;dr: Some AI-generated CSAM is First Amendment-protected speech; some is not, including faked porn images of teen girls. But platforms will report it all, which will overload the reporting system (which already receives over 30 million reports per year).

Thank you to Lawfare for inviting me to publish this paper, which was inspired by David's important paper last year exposing the small but growing problem of generative AI-created CSAM, a new variant on the old problems of online child sexual abuse and exploitation and the sexual harassment of women and girls.

Published in: Blog, Cybersecurity