On Wednesday, American attorneys general from all 50 states and four territories sent a letter to Congress urging lawmakers to establish an expert commission to study how generative AI can be used to exploit children through child sexual abuse material (CSAM). The letter also calls for expanding existing laws against CSAM to explicitly cover AI-generated materials.
“As Attorneys General of our respective States and territories, we have a deep and grave concern for the safety of the children within our respective jurisdictions,” the letter reads. “And while Internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult.”
In particular, open source image synthesis technologies such as Stable Diffusion allow the creation of AI-generated pornography with ease, and a large community has formed around tools and add-ons that enhance this ability. Since these AI models are openly available and often run locally, there are sometimes no guardrails preventing someone from creating sexualized images of children, and that has rung alarm bells among the nation’s top prosecutors. (It’s worth noting that Midjourney, DALL-E, and Adobe Firefly all have built-in filters that bar the creation of pornographic content.)
“Creating these images is easier than ever,” the letter reads, “as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see. And because many of these AI tools are ‘open source,’ the tools can be run in an unrestricted and unpoliced way.”
As we have previously covered, it has also become relatively easy to create AI-generated deepfakes of people without their consent using social media photos. The attorneys general mention a similar concern, extending it to images of children:
“AI tools can rapidly and easily create ‘deepfakes’ by studying real photographs of abused children to generate new images showing those children in sexual positions. This involves overlaying the face of one person on the body of another. Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children.”
When considering regulations about AI-generated images of children, an obvious question emerges: If the images are fake, has any harm been done? The attorneys general propose an answer, stating that these technologies pose a risk to children and their families regardless of whether real children were abused. They fear that the availability of even unrealistic AI-generated CSAM will “support the growth of the child exploitation market by normalizing child abuse and stoking the appetites of those who seek to sexualize children.”
Regulating pornography in America has traditionally required a delicate balance between preserving free speech rights and protecting vulnerable populations from harm. Where children are concerned, however, the scales tip toward far stronger restrictions due to a near-universal consensus about protecting kids. As the US Department of Justice writes, “Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law.” Indeed, as the Associated Press notes, it’s rare for 54 politically diverse attorneys general to agree unanimously on anything.
However, it’s unclear what form of action Congress might take to prevent the creation of these kinds of images without restricting individual rights to use AI to generate legal images, an ability that may incidentally be affected by technological restrictions. Likewise, no government can undo the release of Stable Diffusion’s AI models, which are already widely used. Still, the attorneys general have a few recommendations:
First, Congress should establish an expert commission to study the means and methods of AI that can be used to exploit children specifically and to propose solutions to deter and address such exploitation. This commission would operate on an ongoing basis due to the rapidly evolving nature of this technology to ensure an up-to-date understanding of the issue. While we are aware that several governmental offices and committees have been established to evaluate AI generally, a working group devoted specifically to the protection of children from AI is necessary to ensure the vulnerable among us are not forgotten.
Second, after considering the expert commission’s recommendations, Congress should act to deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM. This will ensure prosecutors have the tools they need to protect our children.
It’s worth noting that some fictional depictions of CSAM are illegal in the United States (although it’s a complex issue), which may already cover “obscene” AI-generated materials.
Establishing a proper balance between the necessity of protecting children from exploitation and not unduly hamstringing a rapidly unfolding tech field (or impinging on individual rights) may be difficult in practice, which is likely why the attorneys general recommend the creation of a commission to study any potential regulation.
In the past, some well-intentioned battles against CSAM in technology have come with controversial side effects, opening the door to potential overreach that could affect the privacy and rights of law-abiding people. Additionally, even though CSAM is a very real and abhorrent problem, the universal appeal of protecting kids has also been used as a rhetorical shield by advocates of censorship.
AI has arguably been the most controversial tech topic of 2023, and using evocative language that paints a picture of rapidly advancing, impending doom has been the style of the day. Similarly, the letter’s authors use a dramatic call to action to convey the depth of their concern: “We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”