AI-generated child sex imagery has every US attorney general calling for action

On Wednesday, American attorneys general from all 50 states and four territories sent a letter to Congress urging lawmakers to establish an expert commission to study how generative AI can be used to exploit children through child sexual abuse material (CSAM). The letter also calls for expanding existing laws against CSAM to explicitly cover AI-generated materials.

“As Attorneys General of our respective States and territories, we have a deep and grave concern for the safety of the children within our respective jurisdictions,” the letter reads. “And while Internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult.”

In particular, open source image synthesis technologies such as Stable Diffusion allow the creation of AI-generated pornography with ease, and a large community has formed around tools and add-ons that enhance this ability. Since these AI models are openly available and often run locally, there are sometimes no guardrails preventing someone from creating sexualized images of children, and that has rung alarm bells among the nation’s top prosecutors. (It’s worth noting that Midjourney, DALL-E, and Adobe Firefly all have built-in filters that bar the creation of pornographic content.)

“Creating these images is easier than ever,” the letter reads, “as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see. And because many of these AI tools are ‘open source,’ the tools can be run in an unrestricted and unpoliced way.”

As we have previously covered, it has also become relatively easy to create AI-generated deepfakes of people without their consent using social media photos. The attorneys general mention a similar concern, extending it to images of children:

“AI tools can rapidly and easily create ‘deepfakes’ by studying real photographs of abused children to generate new images showing those children in sexual positions. This involves overlaying the face of one person on the body of another. Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children.”

When considering regulations about AI-generated images of children, an obvious question emerges: If the images are fake, has any harm been done? To that question, the attorneys general propose an answer, stating that these technologies pose a risk to children and their families regardless of whether real children were abused or not. They fear that the availability of even unrealistic AI-generated CSAM will “support the growth of the child exploitation market by normalizing child abuse and stoking the appetites of those who seek to sexualize children.”

Regulating pornography in America has traditionally been a delicate balance between preserving free speech rights and protecting vulnerable populations from harm. Where children are concerned, however, the scales of regulation tip toward far stronger restrictions due to a near-universal consensus about protecting kids. As the US Department of Justice writes, “Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law.” Indeed, as the Associated Press notes, it’s rare for 54 politically diverse attorneys general to agree unanimously on anything.

However, it’s unclear what form of action Congress might take to prevent the creation of these kinds of images without restricting individual rights to use AI to generate legal images, an ability that may incidentally be affected by technological restrictions. Likewise, no government can undo the release of Stable Diffusion’s AI models, which are already widely used. Still, the attorneys general have a few recommendations:

First, Congress should establish an expert commission to study the means and methods of AI that can be used to exploit children specifically and to propose solutions to deter and address such exploitation. This commission would operate on an ongoing basis due to the rapidly evolving nature of this technology to ensure an up-to-date understanding of the issue. While we are aware that several governmental offices and committees have been established to evaluate AI generally, a working group devoted specifically to the protection of children from AI is necessary to ensure the vulnerable among us are not forgotten.

Second, after considering the expert commission’s recommendations, Congress should act to deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM. This will ensure prosecutors have the tools they need to protect our children.

It’s worth noting that some fictional depictions of CSAM are illegal in the United States (although it’s a complex issue), which may already cover “obscene” AI-generated materials.

Establishing a proper balance between the necessity of protecting children from exploitation and not unduly hamstringing a rapidly unfolding tech field (or impinging on individual rights) may be difficult in practice, which is likely why the attorneys general recommend the creation of a commission to study any potential regulation.

In the past, some well-intentioned battles against CSAM in technology have included controversial side effects, opening doors for potential overreach that could affect the privacy and rights of law-abiding people. Additionally, even though CSAM is a very real and abhorrent problem, the universal appeal of protecting kids has also been used as a rhetorical shield by advocates of censorship.

AI has arguably been the most controversial tech topic of 2023, and using evocative language that paints a picture of rapidly advancing, impending doom has been the style of the day. Similarly, the letter’s authors use a dramatic call to action to convey the depth of their concern: “We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”

Apparently they’d prefer people to go out and sexually abuse children, I guess?

2 Likes

That’s the only goal that my mind can arrive at… Make all fictional outlets just as illegal as CSAM to encourage real sexual abuse. I am absolutely disgusted by the sick society we live in.

4 Likes

Gotta keep the racket going somehow. Empty prisons are bad for profit margins.

3 Likes

There is absolutely zero credibility to this claim; what’s more, I have actual evidence to the contrary. The virtual child pornography communities that dabble in this type of non-illustrated, computer-generated or AI-generated imagery abide by a set of rigorous standards and ethics, grounded both in well-understood legal prohibitions (no real children) and in an understanding that anything involving a real minor, including their likeness, is problematic; such material is banned from their sites and archives.

If these fallacious and unfounded claims of ‘normalization’ had any weight to them, then this wouldn’t be the case. Indeed, SCOTUS itself was correct in asserting that those who create or consume virtual child pornography prefer fictional materials over subjecting themselves to criminal liability by dabbling in CSAM.

https://www.law.cornell.edu/supct/html/00-795.ZO.html

The Government next argues that its objective of eliminating the market for pornography produced using real children necessitates a prohibition on virtual images as well. Virtual images, the Government contends, are indistinguishable from real ones; they are part of the same market and are often exchanged. In this way, it is said, virtual images promote the trafficking in works produced through the exploitation of real children. The hypothesis is somewhat implausible. If virtual images were identical to illegal child pornography, the illegal images would be driven from the market by the indistinguishable substitutes. Few pornographers would risk prosecution by abusing real children if fictional, computerized images would suffice.

Of course, there are a lot of things to consider when addressing the generative AI matter, but questions about when or whether to ban or suppress material should be weighed only with respect to the rights of real, actual children.

I have zero doubt that laws targeting deepfakes of real children would pass Constitutional muster, since the likeness or face of a real, identifiable minor is rightfully protected as an extension of their ‘person’.

But anything beyond that interest, anything that deliberately threatens to impede the rights of the artists and creators of this material, or of the consumers who have valid First Amendment claims to it, should be scrutinized and disallowed, if not presumptively found to be without merit.

The 54 Attorneys General who signed onto this letter did so to virtue-signal, to peddle misinformation, and probably to attack valid SCOTUS precedents. The child pornography exception to the First Amendment is grounded firmly in the interests of real children, not the idea of children. Applying the restrictions on speech and expression for child pornography (now called CSAM) to the mere idea of children trivializes the intended rationale and extends it into areas that affect how we deal with real abuse.

The obscenity doctrine is frequently brought up in these debates; however, it is widely criticized as ‘unconstitutional’ and ‘unworkable’, and has been likened to the equally self-contradictory and irreconcilable ‘separate but equal’ doctrine.

My hope is that these cries will be ignored, or glossed over for more pressing matters by those in Congress, though I’d be remiss if I didn’t admit that I think a commission appointed at the behest of Congress to study this would be a good idea. However, whether they would accept the commission’s findings if those findings were unfavorable to blanket censorship remains to be seen.

My intuition tells me it would be a repeat of the Meese Commission’s report on obscenity and pornography, where the whole commission was co-opted by biased actors for the sake of validating their biases rather than studying things scientifically.

Pinging @terminus @Gilian and @elliot again, since someone posted a thread about it already.

4 Likes

No one is screaming for help, but, somehow, a national crisis has developed.

This looks like a pretext to advance draconian measures, and parts of the piece resemble magical thinking. The mere possibility of a crime doesn’t justify sweeping bans; throwing eggs at cars is poor behavior, but that doesn’t justify banning eggs.

Chie posted a quote from Ashcroft v. FSC that gives a good reason to allow CGI. Of course, pearl clutchers are never satisfied.

Few pornographers would risk prosecution by abusing real children if fictional, computerized images would suffice.

4 Likes

Everyone wants to live like a hero. Few want to die like one. That’s the problem.

2 Likes

Now we need those in power to read this and actually care. At this point, the most efficient way to draw their attention to something else is to start a hot war, preferably with a bigger country, like China.

Fictional content is being attacked on all fronts. They are demanding such a ban in the still-ongoing UN Cybercrime Treaty negotiations.

Meanwhile, the EU has also forced big platforms to fight AI-generated imagery by introducing the DSA (Digital Services Act).

There is no realistic way that AI images will remain untouched. If the UN Cybercrime Treaty goes through as is, then lolicon will be banned in UN member nations as well.

4 Likes

We need to de-emotionalize the discussion somehow. As long as “ewww!” counts for more than “there is scientific evidence supporting the idea that fictional material is harmless or in some cases even helpful,” we stand absolutely no chance.

3 Likes

The stupid peasants’ brains haven’t really evolved since the days of witch burnings and moralism, though. I mean, people still think that Castlevania’s Dracula is a bad guy for justly punishing those villains, and I do mean villains in the classic Latin sense.

2 Likes

People are not rational; they do what they want for emotional reasons and then attempt to find a logical justification afterwards. They rationalize.

4 Likes

There’s a great wartime Disney propaganda short about the subjects of Reason and Emotion.

1 Like