Generative AI is NOT CSAM unless it depicts a real minor

Generative AI content is not CSAM unless it involves the use of a real minor or their likeness to create sexually explicit content.

The NCMEC published a troubling blog post discussing the advent of generative AI content. This post serves to counter that piece and, hopefully, correct some of the misinformation outlined therein.
I found the blog post while browsing their homepage to access the CyberTipline and report an illegal website that was sharing CSAM on social media.

To its credit, the article is mostly confined to issues involving cases that have victims, not mere characters or fictional ‘children’, who, by definition, do not exist and cannot be the victims of child sexual exploitation.

However, that is where the pleasantries end.

As if the creation of this imagery wasn’t terrible enough, NCMEC also has received reports where bad actors have tried to use this illegal GAI content to extort a child or their family for financial means. Furthermore, users of the technology to create this material have used the argument that, “At least I didn’t hurt a real child” and “It’s not actually a child…”

With regard exclusively to that last sentence, they have a point, one that the NCMEC has acknowledged and accepted, both internally and publicly, on more than one occasion.
Though I suspect the quotation was intended to be read in line with the preceding statement: the implication being that claiming such materials were made from innocuous photos, rather than from photos or videos depicting recorded acts of child sex abuse, somehow makes the inclusion of a real child’s likeness less impactful or meaningful.

It is important that @prostasia step forward and act as a meaningful voice in this argument, lest we trivialize and undermine the focus on CSAM prevention, whereby victims become less of a focus and more of a talking point used to suppress allegedly problematic content without due cause or evidence.
As taken from Ashcroft v. Free Speech Coalition:

Ferber recognized that “[t]he Miller standard, like all general definitions of what may be banned as obscene, does not reflect the State’s particular and more compelling interest in prosecuting those who promote the sexual exploitation of children.” 458 U.S., at 761.

The argument that these fictional materials, on their own, constitute criminality or harm is, and has always been, without merit, especially considering the NCMEC’s own internal and public-facing policies and practices in this field, the continued absence of any scientific evidence supporting a causal relationship with harmful acts or behaviors, and the fact that research continues to point to positive effects of these materials in supporting prevention efforts.

I’m horrified that an organization as innovative and useful as the NCMEC can get caught up in rhetoric drummed up by the British IWF, whose country’s legal system and policies are not up to par with what most experts in CSE, paraphilia research, and prevention think about the issue.

CSAM has never been interpreted by the majority of the world to include materials that do not depict actual children. Fictional ‘children’ are not people, and therefore cannot suffer abuse. The NCMEC has always been good about emphasizing this.

I’ve seen some people try to argue the contrary, that these are ‘depictions’ of child sex abuse, but even this argument fails because there is still no real child involved. It goes from “it is a child” to “it looks like a child”.

Under this logic, explicit adult content produced with consenting petite or youthful adult actors with child-like features, who could be misinterpreted as being far younger than they actually are, could be CSAM, as could a doll, a drawing, or a painting.
Attempts to undermine or dismiss the very real and necessary requirement that an actual victim be involved do nothing to actually help victims. They only conflate the reality of their abuse with an arbitrary matter of viewpoint, trivializing the very focus that is meant to benefit them in order to accommodate insecurity and discomfort with the subject matter, while turning measures intended to address abuse into thought-crimes.

GAI CSAM is CSAM. The creation and circulation of GAI CSAM is harmful and illegal. Even the images that do not depict a real child put a strain on law enforcement resources and impede identification of real child victims.

It is only CSAM if it depicts a real child, as the legal definitions set forth by Congress and interpreted by the US Supreme Court and the courts of multiple US states make clear.

By prohibiting child pornography that does not depict an actual child, the statute goes beyond New York v. Ferber, 458 U.S. 747 (1982), which distinguished child pornography from other sexually explicit speech because of the State’s interest in protecting the children exploited by the production process.

In contrast to the speech in Ferber, speech that itself is the record of sexual abuse, the CPPA prohibits speech that records no crime and creates no victims by its production. Virtual child pornography is not “intrinsically related” to the sexual abuse of children, as were the materials in Ferber. 458 U.S., at 759. While the Government asserts that the images can lead to actual instances of child abuse, see infra, at 13—16, the causal link is contingent and indirect. The harm does not necessarily follow from the speech, but depends upon some unquantified potential for subsequent criminal acts.

Moreover, while it is possible that such materials may add to the load of investigators tracking down and identifying victims, this concern is greatly overstated: no resource I have been able to find asserts that these tools would severely cripple or permanently impede efforts to identify and rescue victims. Sorting through materials that merely “appear to be” minors engaged in sexual conduct and materials that actually are is part of the job. Users of websites and platforms are only required to report acts or materials which may constitute actual CSAM or CSAM-related activities, in addition to grooming behaviors.

Investigative tools and methods exist to determine whether an image is a photograph of a real-life event, a synthetic creation, or an altered photograph. Moreover, it is very possible that generative AI models can be queried to determine what is within their training data, and that their outputs can be studied to determine whether they are based on CSAM imagery.
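
For illustration only, here is a minimal sketch of the kind of metadata-level triage such tooling might begin with, assuming Python with the Pillow library. Real forensic analysis relies on far stronger signals (sensor-noise patterns, provenance standards such as C2PA, hash matching), and the generator and editor tags checked below are hypothetical examples rather than an authoritative list.

```python
# Minimal sketch of metadata-level triage for distinguishing camera-original
# images from generator or editor outputs. Real forensic tooling goes far
# deeper (sensor-noise analysis, provenance standards like C2PA, hash
# matching); the tag names below are illustrative assumptions only.
from PIL import Image
from PIL.ExifTags import TAGS

# Illustrative markers some generation/editing tools are known to leave behind.
GENERATOR_HINTS = {"stable diffusion", "midjourney", "dall-e"}
EDITOR_HINTS = {"photoshop", "gimp"}

def triage_image(path: str) -> str:
    """Return a rough provenance guess based only on embedded metadata."""
    img = Image.open(path)

    # PNG text chunks and similar metadata end up in img.info;
    # some generators embed their prompt/parameters here.
    info_blob = " ".join(str(v) for v in img.info.values()).lower()
    if any(hint in info_blob for hint in GENERATOR_HINTS):
        return "generated"

    # EXIF Software/Make/Model tags can hint at an editor or a real camera.
    exif = img.getexif()
    named = {TAGS.get(tag, tag): str(val).lower() for tag, val in exif.items()}
    software = named.get("Software", "")
    if any(hint in software for hint in GENERATOR_HINTS):
        return "generated"
    if any(hint in software for hint in EDITOR_HINTS):
        return "edited"
    if named.get("Make") or named.get("Model"):
        return "camera-original (metadata only; not conclusive)"
    return "unknown"

if __name__ == "__main__":
    print(triage_image("example.jpg"))  # hypothetical file path
```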

It is essential that federal and state laws be updated to clarify that GAI CSAM is illegal and children victimized by sexually exploitative and nude images created by GAI technology have civil remedies to protect themselves from further harm. Additionally, legislation and regulation is needed to ensure that GAI technology is not trained on child sexual exploitation content, is taught not to create such content, and that GAI platforms are required to detect, report, and remove attempts to create child sexual exploitation content and held responsible for creation of this content using their tools.

But the law already does precisely this. 18 USC 2256 provides the federal definitions for child pornography/CSAM, which are:

(8) “child pornography” means any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, where—

(A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;

(B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or

(C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.

(9) “identifiable minor”—

(A) means a person—

(i)(I) who was a minor at the time the visual depiction was created, adapted, or modified; or

(II) whose image as a minor was used in creating, adapting, or modifying the visual depiction; and

(ii) who is recognizable as an actual person by the person’s face, likeness, or other distinguishing characteristic, such as a unique birthmark or other recognizable feature; and

(B) shall not be construed to require proof of the actual identity of the identifiable minor.

. . .

(11) the term “indistinguishable” used with respect to a depiction, means virtually indistinguishable, in that the depiction is such that an ordinary person viewing the depiction would conclude that the depiction is of an actual minor engaged in sexually explicit conduct. This definition does not apply to depictions that are drawings, cartoons, sculptures, or paintings depicting minors or adults.

This definition is more than sufficient to outline what types of content are to be treated as CSAM, requiring that such materials be both sufficiently realistic and a depiction of a person who actually exists.

While materials may be generated that appear realistic yet depict no real, existing person, the requirement that a real-life person be involved remains a necessity, and it is something that can be proven in a court of law.
The statute also outlines exactly how the use of a real child’s likeness can be proscribed, which is precisely how various criminal cases (including quite recent ones) have successfully brought those who sexually exploit real children to justice.
Splicing a real child’s face, likeness, or otherwise identifiable characteristics in such a way that they would appear pornographic would run afoul of these laws and leave those responsible open to prosecution.

@elliot I’d be more than happy to pen an official Prostasia blog post regarding this.


I hate this narrative too. I mean, yes, there are forms of AI-generated images that are harmful – like existing CSAM image series being expanded by AI, deepfakes created from images of real children, or AI models that are trained on real CSAM. But it is wrong to assume that all AI-generated images are like that.

This is especially frustrating because I believe that AI-generated images could be a real asset in the fight against real CSAM. The IWF created a report last year which showed that there are at least some people present on darknet CSAM forums who are somewhat morally self-aware and would prefer alternatives that don’t involve real children.

At an extreme end, some perpetrators claim that AI-generated images comprise the future of CSAM – eventually replacing the need for real CSAM

Now imagine, for a second, if there were some government-certified AI models that could generate ethical child pornography that does not involve any real children. You could watermark images created by these models, so both the user and law enforcement could easily check that these images were legal and safe.

How many people who are currently active on these illegal CSAM forums might be steered away from this if there was an alternative that could produce better and higher quality images tailored to their likings and keep them safe from prosecution?
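
Purely as a thought experiment on the verification side of that watermarking idea, here is a minimal sketch assuming Python with the pyca/cryptography package and a hypothetical certification scheme: a certifying body signs each image a certified model emits, and anyone holding the published public key can confirm an image came from an approved model. A plain signature over the file bytes would not survive re-encoding, so a real scheme would need a robust embedded watermark; this only illustrates the checking flow.

```python
# Thought-experiment sketch: a hypothetical certifying body signs each image a
# certified model emits, and anyone with the published public key can verify it.
# Assumes the pyca/cryptography package. A detached signature over raw file
# bytes does NOT survive re-encoding or cropping, so a real scheme would need a
# robust, embedded watermark; this only illustrates the verification flow.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

def sign_output(private_key: Ed25519PrivateKey, image_bytes: bytes) -> bytes:
    """Produce a detached signature the generator would attach as metadata."""
    return private_key.sign(image_bytes)

def is_certified(public_key: Ed25519PublicKey, image_bytes: bytes, signature: bytes) -> bool:
    """Check whether an image plus its attached signature came from the certified model."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # The certifying body holds the private key; the public key is published.
    certifier_key = Ed25519PrivateKey.generate()
    image_bytes = b"...png bytes from a certified model..."  # placeholder payload

    signature = sign_output(certifier_key, image_bytes)
    print(is_certified(certifier_key.public_key(), image_bytes, signature))        # True
    print(is_certified(certifier_key.public_key(), b"tampered bytes", signature))  # False
```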

These avenues are not even explored – on the contrary, the IWF openly scoffs at anyone who dares to think that images not involving real children might not be as bad as real CSAM. How a child protection organization can have so little regard for the important difference between whether or not an actual child experienced harm is beyond me.

From a purely scientific point of view: looking at the human form as, say, a medical classification of a hominid species, there are only so many shapes a particular age grouping of maturity can take within Homo sapiens sapiens. Therefore, I think that once AI is familiar enough with a hominid, mammalian species, it will be able to replicate different iterations and variations of any particular age group of that form, regardless of “specific training” (i.e., individual) input. This is the divergence between ‘training on an existing, individual input’ and ‘training on a generic, species-level input’. You would not be able to discern where the line is drawn between the two.

Definitely would be open to seeing a draft of this. I’m temporarily doing most of the work on the blog, so you can DM me here or send it to my email (which I believe you have) once you have something.


It seems that the overall intentions and contents of my post may have been inadvertently vindicated by none other than the FBI and IC3. If my understanding of this PSA is correct, it is still limited only to depictions made with, or which use the likenesses of, actual minors.

https://www.ic3.gov/Media/Y2024/PSA240329

FBI/IC3 Public Service Announcement

Alert Number: I-032924-PSA

March 29, 2024

Child Sexual Abuse Material Created by Generative AI and Similar Online Tools is Illegal


The FBI is warning the public that child sexual abuse material (CSAM) created with content manipulation technologies, to include generative artificial intelligence (AI), is illegal. Federal law prohibits the production, advertisement, transportation, distribution, receipt, sale, access with intent to view, and possession of any CSAM,[1] including realistic computer-generated images.

Background

Individuals have been known to use content manipulation technologies and services to create sexually explicit photos and videos that appear true-to-life. One such technology is generative AI, which can create content — including text, images, audio, or video — with prompts by a user. Generative AI models create responses using sophisticated machine learning algorithms and statistical models that are trained often on open-source information, such as text and images from the internet. Generative AI models learn patterns and relationships from massive amounts of data, which enables them to generate new content that may be similar, but not identical, to the underlying training data. Recent advances in generative AI have led to expansive research and development as well as widespread accessibility, and now even the least technical users can generate realistic artwork, images, and videos — including CSAM — from text prompts.

Examples

Recent cases involving individuals having altered images into CSAM include a child psychiatrist and a convicted sex offender:

  • In November 2023, a child psychiatrist in Charlotte, North Carolina, was sentenced to 40 years in prison, followed by 30 years of supervised release, for sexual exploitation of a minor and using AI to create CSAM images of minors. Regarding the use of AI, the evidence showed the psychiatrist used a web-based AI application to alter images of actual, clothed minors into CSAM.[2]
  • In November 2023, a federal jury convicted a Pittsburgh, Pennsylvania registered sex offender of possessing modified CSAM of child celebrities. The Pittsburgh man possessed pictures that digitally superimposed the faces of child actors onto nude bodies and bodies engaged in sex acts.[3]

There are also incidents of teenagers using AI technology to create CSAM by altering ordinary clothed pictures of their classmates to make them appear nude.

Recommendations

References


[1] The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old. See 18 U.S.C. § 2256(8). While this phrase still appears in federal law, “child sexual abuse material” is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child.

[2] See https://www.justice.gov/usao-wdnc/pr/charlotte-child-psychiatrist-sentenced-40-years-prison-sexual-exploitation-minor-and

[3] See https://www.justice.gov/opa/pr/registered-sex-offender-convicted-possessing-child-sexual-abuse-material

I think it’s rather clear from the wording of the FBI post that it is limited to using real images of children, or materials which are indistinguishable from them.
I don’t think this would in any way, shape, or form impact the legality or availability of computer-generated images that are ultimately distinguishable from a real minor. I wish they were clearer in that regard, but they explicitly cite 18 USC 2256(8) and even list specific cases where real minors were affected.

This does not apply to content where the minor does not exist and the material is still readily distinguishable from a photograph (or modified photograph) of a real minor.

If anyone thinks that there is reason to doubt this, I have included a footnote taken from the FBI’s June 2023 PSA (bolded emphasis added by myself).

Generally, synthetic content may be considered protected speech under the First Amendment; however, the FBI may investigate when associated facts and reporting indicate potential violations of federal criminal statutes. Mobile applications, “deepfake-as-a-service,” and other publicly available tools increasingly make it easier for malicious actors to manipulate existing or create new images or videos. These tools, often freely found online, are used to create highly realistic and customizable deepfake content of targeted victims or to target secondary, associated victims.

My biggest fear is that this FBI PSA will be misinterpreted, and people will take it to mean that any virtual/CG depiction is illegal, rather than only depictions made from photos of real minors or that are indistinguishable from them.

This isn’t even a matter of perception; this is cold, hard legal fact.

https://www.criminallegalnews.org/news/2023/jun/15/arkansas-supreme-court-reverses-11-counts-possession-child-pornography-because-cgi-images-do-not-depict-image-child/

For purposes of the issues in the instant opinion, a “person” is “[a] natural person.” § 5-1-102(13)(A)(i) (Repl. 2013). The Court explained: “Thus, although § 5-27-602(a)(2) includes possession of CGI, the criminal act is limited to possession of imagery depicting or incorporating the image of a child – a natural person under seventeen years of age – engaging in sexually explicit conduct. Section 5-27-602(a)(2) necessarily excludes CGI that does not depict or incorporate the image of a child under the statutory definition.”

The Court’s reading of § 5-27-602(a)(2) is consistent with Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002), in which the U.S. Supreme Court “declared unconstitutional as violative of the First Amendment § 2256(8)(B) of the Child Pornography Prevention Act of 1996 (“CPPA”), which prohibited ‘any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture that is, or appears to be, of a minor engaging in sexually explicit conduct.’”

While New York v. Ferber, 458 U.S. 747 (1982), held that child pornography is deemed speech outside the protection of the First Amendment, it is based on the fact that child pornography is “intrinsically linked” to the sexual abuse of children, i.e., production of child pornography could be accomplished only by recording the crime of sexually abusing children. But the CPPA’s proscription of computer-generated imagery or “virtual imagery” went too far because it recorded no crime, and it created no victims by its production.
