The rapid advancement of artificial intelligence has made it easier than ever for bad actors to create child sexual abuse material, leaving prosecutors and lawmakers struggling to keep up.
Despite efforts by tech companies, law enforcement and activists, offenders consistently exploit system loopholes, open-source AI models and ready-made sexual exploitation platforms to generate imagery of both identifiable and nonexistent children, according to experts and law enforcement officials who spoke with NBC News.
...
Law enforcement alleged that the man had used Stable Diffusion as well as “special add-ons created by other Stable Diffusion users that specialized in producing genitalia,” which allowed him to “generate photo-realistic images of minors,” according to the brief in the ongoing case. A lawyer representing the man declined to comment.
In response to a request for comment on the case, a Stability AI spokesperson told NBC News that the company "is deeply committed to preventing the misuse of AI and has always prohibited the use of our image models and tools for unlawful activity, including all attempts to edit or create CSAM."

Riana Pfefferkorn, a policy fellow at Stanford University's Institute for Human-Centered Artificial Intelligence, said the use of open-source platforms has made it difficult for authorities to crack down on AI-generated CSAM.
- Date Published: 2.28.2026
- Original Publication: NBC News