There is a categorical distinction between AI-generated images of fictional, non-existent ‘children’ and deepfakes of actual children, in which the likeness of a real child is used as the focal point of the image.
Materials that do not involve, let alone depict, a real child must not be conflated with deepfake productions of actual minors.
I’ve been monitoring a lot of the chatter among talking heads in the tech and law spheres, and only a handful seem to really understand why CSAM is harmful. It isn’t because the material may seem icky or unsettling or offensive; it’s because behind that image is a real child who was abused in order for that image to exist.
The abuse is intrinsic, a required element, and because the photographed material can be turned into a commodity, that commodity in turn becomes a physical embodiment of child abuse that stimulates a market for further abuse.
Regardless of how this may make people feel, the focus is and has always been on the children and their welfare.
The same argument applies to the faces, likenesses, and other identifying characteristics of real children. They did not and cannot consent to being depicted in such a way, and they are vulnerable to being harmed and exploited in ways that an adult would have more agency and leeway to address.
This cannot, however, be said about materials that do not involve a real child’s abuse or implicate a real child’s likeness in any way. Even at the training stage, generic faces created with 3D CGI, or manipulations of youthful adult faces, can be used to train models that produce images with a similar level of realism; yet due to the inherent constraints of AI, those outputs remain distinguishable from a real photograph.
Tools even exist to help assess whether an image, be it of a fictional ‘child’ or a misuse of a real child’s likeness, is AI-generated.
The argument that the technological landscape has changed to such a degree that people cannot tell what is real and what isn’t is not grounded in reality. It may be more difficult, but the telltale signs are still there. The technology has fundamental limitations that cannot be overcome and should not be overlooked.
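To make that concrete, here is a minimal, purely illustrative sketch of the kind of signal forensic detection tools build on: generative pipelines often leave periodic or suppressed high-frequency artifacts in an image’s frequency spectrum. This is a toy heuristic under assumed conditions, not any real tool’s method; the file name is hypothetical, and actual forensic classifiers are far more sophisticated.

```python
# Toy sketch of a frequency-domain heuristic, NOT a production detector.
import numpy as np
from PIL import Image

def high_frequency_energy_ratio(path: str) -> float:
    """Share of spectral energy in the outermost frequency band.

    Many generative pipelines distort this ratio relative to camera
    photographs, which is one of the 'telltale signs' detectors exploit.
    """
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # 2D FFT magnitude, shifted so low frequencies sit at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    # Outermost 25% band of frequencies (cutoff chosen for illustration).
    outer = radius > 0.75 * min(cy, cx)
    return spectrum[outer].sum() / spectrum.sum()

ratio = high_frequency_energy_ratio("photo.jpg")  # hypothetical input file
print(f"high-frequency energy ratio: {ratio:.4f}")
```

A single statistic like this is nowhere near sufficient on its own; real tools combine many such signals with trained classifiers. The point is only that measurable artifacts exist and can be surfaced.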
Would this require more hands-on investigative work to determine whether a depicted child is real or fake when assessing AI imagery? Absolutely. But that’s been the case since before the advent of AI. The NCMEC’s V-Identifier program was literally created to address this very question, and they are more than equipped to deal with it in a way that does not undermine civil liberties and freedom of speech or compromise a well-reasoned approach to a societal issue.
Deepfakes are not the same as purely generative AI.