
Vendors struggle to prevent GenAI use in child sexual abuse

Major large language model and image generation system vendors agree to work on limiting the ability of GenAI technology to create and spread child sexual abuse material.

A Wisconsin man was arrested May 17 and charged with using a generative AI text-to-image model to create thousands of sexually explicit images of preadolescent children.

The arrest of the 42-year-old man -- who is also charged with sending illegal, AI-generated obscene images to a minor -- came as the fast-growing GenAI industry grapples with criminals' capacity to use the technology to exploit children.

Predators have exploited technology for decades; the rise of the internet marked an inflection point, giving them a pervasive new way to distribute child sexual abuse images. Now, with the power, speed and scalability that GenAI gives predators to create and spread child sexual abuse material (CSAM), child safety advocates fear another such inflection point is at hand.

In response, two nonprofit organizations dedicated to responsible technology have spurred a group of tech giants and other AI vendors to agree to technical design principles for GenAI systems to prevent and reduce CSAM and child sexual exploitation material.

The signatories to the white paper released on April 23, titled "Safety by Design for Generative AI: Preventing Child Sexual Abuse," include Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, Stability.ai and Teleperformance.

The document was authored by Thorn, a child sexual abuse prevention nonprofit co-founded by actors Demi Moore and Ashton Kutcher, in collaboration with nonprofit All Tech Is Human and some of the signatory companies.

Uniting some of the biggest names in GenAI in common cause against one of the most insidious online threats to children was greeted as a positive step and one that goes beyond the symbolic.

"It's great that they're recognizing that this is an impending problem, and they're proactively doing something and they're doing something together," said Elizabeth Jeglic, a professor of psychology at John Jay College and sexual violence prevention researcher.

"To come up proactively with a cohesive strategy is definitely a good thing. Now, to tell you whether it's going to be effective, that's really going to be hard," she continued. "Because how do you measure effectiveness?"

Beyond gauging its success, the initiative faces other challenges, including who will monitor the AI companies to ensure they live up to their commitments and what resources the vendors will bring to bear on the problem.

In the meantime, GenAI technology can be used -- and is being used -- to exploit children in a variety of ways. The expanding prevalence of AI-generated CSAM makes sifting through ever-increasing amounts of content to identify victims more difficult, according to Thorn.

With GenAI, child abuse perpetrators can now relatively easily generate new abuse material of children and sexualize benign imagery by making images look like a particular child or a completely fictional one. And GenAI models can provide bad actors with information about how to sexually abuse or coerce a child as well as details on how to destroy evidence.

What is safety by design?

The path that Thorn, All Tech Is Human and the participating vendors chose to combat these growing threats is technological.

Safety by design is "a proactive approach to product design," according to Thorn. It requires vendors to anticipate threats during the development and design of GenAI systems and to design safeguards rather than retrofit fixes after the technology has caused harm.

"The goal of these principles is to make it such that models are not capable of producing this type of abuse material," said Rebecca Portnoff, vice president of data science at Thorn. "That's why we have, in the development section, details about mitigations and what it looks like to build these models so they are less capable of producing AI-generated child sexual abuse material and other related child abuse material."

Meanwhile, the vendors that agreed to the GenAI design principles already had people, departments or programs dedicated to child sexual abuse prevention.

Even so, Stability.ai, developer of one of the most widely used AI text-to-image models, has seen its Stable Diffusion platform used by child sexual abuse perpetrators, including the accused Wisconsin man, Steven Anderegg. Other AI image-generating systems have also been used, according to investigators.

During an online roundtable on April 25 about the design principles, hosted by Thorn, representatives from Stability.ai and other signatory vendors maintained that their companies are paying serious attention to the capacity of predators to use their technology to harm children.

"We are investing a lot of time and money into developing safeguards. It starts from the point in time that we create training data sets, really making sure that we're curating safe training data sets," said Ella Irwin, senior vice president of integrity at Stability.ai.

"We want to make sure we're working with high-quality, safe data," she continued. "And then making sure that we are effectively monitoring and detecting suspicious activity that's happening on our API and on our platform."

Also at the event, Chelsea Carlson, technical program manager for child safety at GenAI pioneer OpenAI, said: "To truly benefit all of humanity means protecting and empowering the most vulnerable among us. And that guiding principle shapes our research. It shapes our product development and it informs how we engage with the larger global community.

"Our commitment to the safety by design principles is motivated by the fact that we recognize that we have a profound responsibility," she added.

Design standards for GenAI technology to prevent and reduce child sexual abuse.

Safety by design challenges

However, as Irwin of Stability.ai noted, developing GenAI technology to limit its ability to create and distribute CSAM is difficult.

As laid out in the "Safety by Design" document, the vendors agreed to focus on preventing and reducing child sexual abuse during the major phases of the AI lifecycle: development, deployment and maintenance.

For example, continuously stress-testing models presents something of a technical challenge, said David Thiel, big data architect and chief technologist at the Stanford Internet Observatory, whose recent work has focused on eliminating online child exploitation and the use of GenAI for child sexual abuse. Thiel has collaborated with Thorn on some of the research.

A notable problem for those testing AI models is that it's illegal -- in the U.S., at least -- to create images of child sexual abuse. Instead, whoever is doing the stress-testing must use proxies -- text descriptions of the harmful images.

"In our research, we've actually found that generative AI is good at describing things like violent imagery, and it works well for doing content moderation," Thiel said. "It's not so great when it comes to explicit content."

Another challenging piece of the safety by design principles is provenance -- determining which content has been generated by AI and tracing the source of AI-generated CSAM, according to Thiel.

"The problem is really just that we're very far behind on it. Most of these models were released with very little attention paid to actually identifying the outputs of the systems that are creating them," he said. "Some places will complain that they don't have the resources to develop that, which I think is nonsense.

"If they put anywhere near the amount of research into detection as they did toward generation, I think we'd be in a much better place," Thiel added.

But like many others in the tech world who are working on the societal problem of child sexual abuse and exploitation, Thiel is hopeful that the design standards will have a positive effect on GenAI technology that's in the pipeline.

"I'm at least somewhat optimistic that the safety by design principles will be better applied moving forward with generative models," he said. "I think future generative models will be designed somewhat better, and people will take those things into account when developing video generative models that have wide utility."

One pressing question yet to be resolved is how, and by whom, the vendors' adherence to the design principles will be overseen in the years ahead. Thorn itself is not an independent third-party auditor, Portnoff noted.

"But the role of third-party auditing in this type of work is incredibly important. It's one thing to make a commitment. It's another thing to show progress on the commitment," she said. "It's another thing to have an independent party come in and assess how well you have acted on this commitment."

Meanwhile, U.S. lawmakers have put forth a number of bills to criminalize the creation, distribution and possession of AI-generated CSAM.

Part of the continuing work that will come out of the safety by design initiative is engaging with standards-setting organizations such as the IEEE or NIST to potentially perform such auditing, Portnoff said.

Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience.
