Hundreds of images of child sexual abuse found in dataset used to train AI image-generating tools

By Catherine Thorbecke, CNN

New York (CNN) — More than a thousand images of child sexual abuse material were found in a massive public dataset used to train popular AI image-generating models, Stanford Internet Observatory researchers said in a study published earlier this week.

The presence of these images in the training data may make it easier for AI models to create new and realistic AI-generated images of child abuse content, or “deepfake” images of children being exploited.

The findings also raise a slew of new concerns surrounding the opaque nature of the training data that serves as the foundation of a new crop of powerful generative AI tools.

The massive dataset that the Stanford researchers examined, known as LAION 5B, contains billions of images that have been scraped from the internet, including from social media and adult entertainment websites.

Of the more than five billion images in the dataset, the Stanford researchers said they identified at least 1,008 instances of child sexual abuse material.

LAION, the German nonprofit behind the dataset, said in a statement on its website that it has a “zero tolerance policy for illegal content.”

The organization said that it received a copy of the report from Stanford and is in the process of evaluating its findings. It also noted that datasets go through “intensive filtering tools” to ensure they are safe and comply with the law.

“In an abundance of caution we have taken LAION 5B offline,” the organization added, saying that it is working with the UK-based Internet Watch Foundation “to find and remove links that may still point to suspicious, potentially unlawful content on the public web.”

LAION said it plans to complete a full safety review of LAION 5B by the second half of January and to republish the dataset at that time.

The Stanford team, meanwhile, said that removal of the identified images is currently in progress after the researchers reported the image URLs to the National Center for Missing and Exploited Children and the Canadian Centre for Child Protection.

In the report, the researchers said that while developers of LAION 5B did attempt to filter certain explicit content, an earlier version of the popular image-generating model Stable Diffusion was ultimately trained on “a wide array of content, both explicit and otherwise.”

A spokesperson for Stability AI, the London-based startup behind Stable Diffusion, told CNN in a statement that this earlier version, Stable Diffusion 1.5, was released by a separate company and not by Stability AI.

And the Stanford researchers do note that Stable Diffusion 2.0 largely filtered out results deemed unsafe and, as a result, had little to no explicit material in its training set.

“This report focuses on the LAION-5b dataset as a whole,” the Stability AI spokesperson told CNN in a statement. “Stability AI models were trained on a filtered subset of that dataset. In addition, we subsequently fine-tuned these models to mitigate residual behaviors.”

The spokesperson added that Stability AI only hosts versions of Stable Diffusion that include filters to prevent unsafe content from reaching the models.

“By removing that content before it ever reaches the model, we can help to prevent the model from generating unsafe content,” the spokesperson said, adding that the company prohibits use of its products for unlawful activity.

But the Stanford researchers note in the report that Stable Diffusion 1.5, which is still used in some corners of the internet, remains “the most popular model for generating explicit imagery.”

As part of their recommendations, the researchers said that models based on Stable Diffusion 1.5 should be “deprecated and distribution ceased where feasible.”

More broadly, the Stanford report said that massive web-scale datasets are highly problematic for a number of reasons, even with attempts at safety filtering, not only because they may include child sexual abuse material but also because of the privacy and copyright concerns that arise from their use.

The report recommended that such datasets should be restricted to “research settings only” and that only “more curated and well-sourced datasets” should be used for publicly distributed models.

The-CNN-Wire
™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
