AI Image Generators Were Trained on Child Porn

That training has made it easier for AI systems to produce explicit imagery of children, albeit fake ones
By Newser Editors and Wire Services
Posted Dec 25, 2023 2:50 PM CST
David Thiel, chief technologist at the Stanford Internet Observatory and author of its report that discovered images of child sexual abuse in the data used to train artificial intelligence image-generators, poses for a photo on Wednesday, Dec. 20, 2023 in Obidos, Portugal.   (Camilla Mendes dos Santos via AP)

Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built. Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world, per the AP. Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they've learned from two separate buckets of online images—adult pornography and benign photos of kids.

But the Stanford Internet Observatory (SIO) found more than 3,200 images of suspected child sexual abuse in the giant AI database Large-scale Artificial Intelligence Open Network (LAION), an index of online images and captions that's been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. It said roughly 1,000 of the images it found were externally validated. The response was immediate. On the eve of the Wednesday release of the SIO report, LAION told the AP it was temporarily removing its datasets "to ensure they are safe." It added that it has "a zero tolerance policy for illegal content."

While the images account for just a fraction of LAION's index of some 5.8 billion images, the Stanford group says it is likely influencing the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times. It's not an easy problem to fix, and traces back to many generative AI projects being "effectively rushed to market" and made widely accessible because the field is so competitive, said SIO chief technologist David Thiel, who authored the report. "Taking an entire internet-wide scrape and making that dataset to train models is something that should have been confined to a research operation, if anything, and is not something that should have been open-sourced without a lot more rigorous attention," Thiel said in an interview.


A prominent LAION user that helped shape the dataset's development is London-based startup Stability AI, maker of the Stable Diffusion text-to-image models. New versions of Stable Diffusion have made it much harder to create harmful content, but an older version introduced last year—which Stability AI says it didn't release—is still baked into other applications and tools and remains "the most popular model for generating explicit imagery," according to the Stanford report. Many text-to-image generators are derived in some way from the LAION database, though it's not always clear which ones. OpenAI, maker of DALL-E and ChatGPT, said it doesn't use LAION and has fine-tuned its models to refuse requests for sexual content involving minors.
