AI-generated Fictional Sexual Material (FSM) vs. CSEM

There is a categorical distinction between AI-generated images of fictional, non-existent ‘children’ and deepfakes of actual children, in which the likeness of a real child is used and is the focal point of the image.
Materials that do not involve, let alone depict, a real child are not to be conflated with materials that are deepfake productions of actual minors.

I’ve been monitoring a lot of the chatter and discussion going on between talking heads in tech and law spheres, and only a handful seem to really understand why CSAM is harmful. It’s not because it may seem icky or unsettling or offensive; it’s because behind that image is a real child who was abused in order for that image to exist.
The abuse is intrinsic, a required element, and because the photographed material stands as something that can be turned into a commodity, that commodity in turn becomes a physical form of child abuse that stimulates a market for that abuse.

Regardless of how this may make people feel, the focus is and has always been on the children and their welfare.

The same argument exists for the faces or likenesses or identifying characteristics of real children as well. They cannot and did not consent to being depicted in such a way, and as such, they are prone to being harmed and exploited in ways that an adult would have more agency and leeway to address.

This cannot, however, be said about materials which do not involve a real child’s abuse, nor implicate their likeness in any way. Even in the training data, images of generic faces created with 3D CGI or by manipulating youthful adult faces can be used to train and create images with a similar level of realism, yet due to the inherent constraints of AI they remain distinguishable from real photographs.

Tools even exist to help assess whether an image, be it of a fictional ‘child’ or the misuse of a real child’s likeness, is AI generated.

The argument that the technological landscape has changed to such a degree that people cannot tell what is real and what isn’t is not based in reality. It may be more difficult, but the telling signs are still there. The technology has fundamental limitations that cannot be overcome or overlooked.

Would this require more hands-on investigative work to determine whether a depicted child is real or fake when assessing AI imagery? Absolutely. But that’s been the case since before the advent of AI. The NCMEC’s V-Identifier program was literally created to address this very question, and they are more than equipped to deal with this in a way that does not undermine civil liberties and freedom of speech or compromise a well-reasoned approach to a societal issue.

Deepfakes are not the same as purely generative AI.

2 Likes

The problem is that some AI models might be trained on actual CSEM or on images of real children, and that would be unethical too. Even if the resulting ‘child’ isn’t a specific child, the children whose images were used to train the model would still have their images exploited, and it’s impossible to know whether that was the case

Photorealism in general poses an issue because, even if it doesn’t harm children, it’s hard or impossible to distinguish from the real thing, which creates a lot of potential complications

Deepfakes/morphing, AI, and photorealistic 3D are their own category and should be treated in a nuanced way rather than lumped together with actual CSAM, as happens now

The first should be illegal, though not as harshly punished as actual CSAM. On AI I’m more ambivalent: the AI that produces the images would need to be certified as having been trained only on images of adults who authorized that use

In either case, photorealism can’t just circulate freely on the web because it’ll get flagged immediately, but due to its potential use for research and the like it could be made available through controlled channels that guarantee no real children were used as material or models

Yes, and those models can be scrubbed or analyzed. I’ve been looking into the feasibility of working with various FSM communities to set up a communal dataset where each and every image is curated, and can be vetted.
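As a rough illustration of what per-image vetting could look like in practice, here is a minimal sketch that fingerprints each candidate image with a perceptual hash and checks it against a list of hashes a human reviewer has already approved before the image is admitted to the dataset. The file names, the hash list, and the distance threshold are all hypothetical; it assumes the Python Pillow and imagehash libraries and is only meant to show the idea, not an actual vetting pipeline.

```python
# Minimal sketch of per-image vetting for a curated dataset (hypothetical paths/threshold).
# Assumes the Pillow and imagehash libraries: pip install pillow imagehash
from pathlib import Path

import imagehash
from PIL import Image

# Hypothetical list of perceptual hashes for images a human reviewer has already approved.
APPROVED_HASHES_FILE = Path("approved_hashes.txt")
# Maximum Hamming distance at which two hashes are treated as "the same image".
MAX_DISTANCE = 5


def load_approved_hashes(path: Path) -> list[imagehash.ImageHash]:
    """Read one hex-encoded perceptual hash per line."""
    if not path.exists():
        return []
    return [imagehash.hex_to_hash(line.strip())
            for line in path.read_text().splitlines() if line.strip()]


def is_vetted(image_path: Path, approved: list[imagehash.ImageHash]) -> bool:
    """True if the image's perceptual hash is close to an already-approved hash."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= MAX_DISTANCE for known in approved)


if __name__ == "__main__":
    approved = load_approved_hashes(APPROVED_HASHES_FILE)
    for img in Path("candidate_images").glob("*.png"):  # hypothetical staging folder
        status = "OK (already reviewed)" if is_vetted(img, approved) else "NEEDS HUMAN REVIEW"
        print(f"{img.name}: {status}")
```

The same kind of hash comparison is also how images could be checked against clearinghouse hash lists of known material before ever entering the dataset; the hashing step is cheap, the human review is the expensive part.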

It was also revealed some time ago that models can even be queried to determine whether CSEM was used to train them.
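For what it’s worth, the usual way this kind of querying works is membership inference: roughly speaking, images a model was trained on tend to get a noticeably lower denoising loss than images it has never seen. The sketch below, which assumes a Stable-Diffusion-style model loaded through the Hugging Face diffusers library, plus a hypothetical model id, query file, and loss threshold, is only meant to illustrate that idea, not any specific published tool.

```python
# Rough sketch of loss-based membership inference against a latent diffusion model.
# Assumes torch, diffusers, numpy and Pillow; the model id, file name and threshold are hypothetical.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # hypothetical model id

THRESHOLD = 0.08  # hypothetical: calibrate on images known to be outside the training set


@torch.no_grad()
def denoising_loss(image_path: str, trials: int = 8) -> float:
    """Average noise-prediction error for one image under the model."""
    img = Image.open(image_path).convert("RGB").resize((512, 512))
    pixels = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0  # scale to [-1, 1]
    pixels = pixels.permute(2, 0, 1).unsqueeze(0)                   # shape (1, 3, 512, 512)

    # Encode the image into the model's latent space.
    latents = pipe.vae.encode(pixels).latent_dist.sample() * pipe.vae.config.scaling_factor

    # Unconditional text embedding (empty prompt).
    ids = pipe.tokenizer("", padding="max_length",
                         max_length=pipe.tokenizer.model_max_length,
                         return_tensors="pt").input_ids
    text_emb = pipe.text_encoder(ids)[0]

    losses = []
    for _ in range(trials):
        noise = torch.randn_like(latents)
        t = torch.randint(0, pipe.scheduler.config.num_train_timesteps, (1,))
        noisy = pipe.scheduler.add_noise(latents, noise, t)
        pred = pipe.unet(noisy, t, encoder_hidden_states=text_emb).sample
        losses.append(torch.nn.functional.mse_loss(pred, noise).item())
    return sum(losses) / len(losses)


if __name__ == "__main__":
    loss = denoising_loss("query_image.png")  # hypothetical file
    print("likely in training data" if loss < THRESHOLD else "probably not in training data")
```

This is a statistical signal, not proof, which is exactly why it would need to sit alongside dataset curation rather than replace it.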

1 Like

We can just demand that a model’s parameters and training data be made public. That would in general be a good idea to prevent or counter copyright issues and so on. Alternatively, we can make this information available to the state only, to preserve the possibility of commercialization.

They can be circumvented or worked around, and they can also falsely mark real images as AI images, so they aren’t a perfect fix. What we should enforce instead is reproducibility. Stable Diffusion by default writes the model, prompt, and parameters into the image’s metadata. Using these, you can recreate every image Stable Diffusion has ever made. If that fails (we might need to allow for some paint/Photoshop edits here), the picture should just be declared illegal.

If I legitimately make an image using AI, I have no reason to delete this data. Faking it, on the other hand, is most likely nigh impossible simply because of the complexity of the problem. (You would need to create a fake model that does not contain the actual image or anything like it but will still produce it, and all the while the input images, parameters, and prompt would need to look logical to a human, as would the model’s other output.)
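To make the reproducibility idea concrete: tools like the common Stable Diffusion web UIs store the prompt, sampler settings, seed, and model hash in a PNG text chunk, commonly under a “parameters” key. A minimal sketch of reading that record back out, assuming Pillow and that particular metadata convention (the file name is hypothetical), might look like this; an investigator could then re-run the recorded prompt and seed and compare the result against the file.

```python
# Minimal sketch: read the generation record a Stable Diffusion UI embeds in a PNG.
# Assumes the common "parameters" text-chunk convention; key names vary between tools.
from PIL import Image


def read_generation_record(path: str) -> str | None:
    """Return the embedded prompt/seed/model record, or None if it was stripped."""
    img = Image.open(path)
    return img.info.get("parameters")  # present in PNGs saved by several SD front ends


if __name__ == "__main__":
    record = read_generation_record("suspect_image.png")  # hypothetical file name
    if record is None:
        print("No generation metadata found; cannot verify reproducibility from the file alone.")
    else:
        print("Embedded generation settings:")
        print(record)
        # Re-running these exact settings (same model, prompt, seed, sampler, steps)
        # should reproduce the image if it is a genuine, unedited generation.
```

Of course, metadata can be stripped or forged far more easily than a whole model can, which is why the re-generation step, not the metadata itself, would be the actual check.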

Given what’s at stake, I think that type of image should be regulated and controlled only by government-approved bodies of experts, to ensure no actual children were exploited. The risks are too great, and given their nature we can’t have such images circulating freely or being commercialized by just anybody

That’s also the only way people might even be willing to accept such a thing

Non-realistic images such as lolicon or stylized 3D that don’t involve real children should pose no problem; if anything, banning them creates huge risks for freedom of expression and endangers free markets and free speech, given how popular they are on Japanese websites.
The same goes for other useless bans on other types of fictional pornography: people should be free to access drawings, even if only out of curiosity, for the preservation of art, or for freedom of information. Beyond that, there are the obvious issues of potential censorship of valid artistic works, the problems that come with giving fictional characters human rights, and the question of whether a character counts as a depiction of a minor at all, which always ends up being stupid.

But at least with the above, when it goes to court people agree it’s dumb AF, and the laws themselves can simply be disapplied if countries feel so inclined, given their inherent vagueness and lack of clarity

It also greatly undermines government credibility, creates confusion and panic in the general public and delegitimizes the fight against real CSAM, which is awful!

I also think we need a general reform of these laws. Sexting is already a problem: it’s absurd that in many jurisdictions minors themselves can be prosecuted for possessing images of themselves, or for sharing them only with their partner, even if both are minors and consenting

Countries are only now starting to realize that, despite it being fucking obvious. The laws in general are too outdated, the sentences often really harsh, and they lack any kind of nuance that would allow a judge to hand down lower sentences in less severe cases, or to drop prosecution entirely if the sharing was private and between legally consenting partners

To begin with, it’s already an extreme measure to criminalize the mere possession of material at all; no other type of material is criminalized this way. You can legally own murder or torture videos, even if people will question you for it

I’m not saying this extreme measure is wrong, but it can’t just be used as an excuse to stamp out pedophiles in a sort of witch hunt. It needs to be carefully balanced to ensure its goals are achieved while respecting human rights

The goal shouldn’t be to punish people for their sexual interests or their thoughts, however repulsive people might find them. The goal is first and foremost to prevent the spread of those images, in order to protect the privacy, image rights, and mental well-being of the children portrayed

We’re all well aware of the tendency for police to keep running CSEM websites just to catch more suspected predators instead of shutting them down. It really tells you what the real mentality and priority of the general public is: not protecting children, but hunting down anyone they think might be a potential child predator

Child predators should obviously be prevented from doing harm, but if in the process you also hit innocent people, and the very minors you’re trying to protect, maybe you should re-evaluate your methods

1 Like

To begin with, laws against CSAM should be written with the intent to protect children; that’s their legal basis

The idea is that children cannot consent to having this type of private image of themselves viewed or spread, and therefore the images need to be gone, especially when they portray actual abuse and not mere self-generated material

But in practice they lack any sort of nuance. They were made in a period (the late ’80s and ’90s) when there was a really big moral panic about child predators despite the number of such cases being down. The idea morphed more and more into punishing anyone who might even look at children that way, the assumption being that any possessor of CSAM was a potential predator

With the web and nefarious people trying to contact children this only got worse, but at the same time a whole lot of other stuff gets unfairly targeted because of the shortsightedness and ignorance of legislators: self-generated CSAM (which needs to be removed but can’t be prosecuted), loli/shota art, even literature

The rest of the world criticized Japan for being slow to criminalize possession of child pornography, but Japan was merely being cautious, and now the rest of the world is seeing the effects of that lack of caution and reasonableness

2 Likes

I’ve used AI to merge the faces of famous people with my doll’s face. The result becomes a different face altogether. A face taken from a public image of an actor, whether they’re of legal age or not, is already publicly available and shouldn’t be considered off-limits. There was no abuse involved. The image is suitable for public PR, whether to promote a film, taken at an event, or from the film itself.
So, if I train an AI on THAT face, is it the same as using the actual person I used to create it? I guess, in a way? But nowhere near the same as using known images created from abusive situations.

1 Like

imo, if the intent is to make the face look like or be based on a specific person, then you’re using that person’s likeness. If the photos you create are pornographic, then you’re using that person’s likeness in porn. This requires consent, which children can’t provide.

2 Likes

Japan is also big on something called Privacy. Photographing without someone’s permission? Taboo. Photographing inside a shop? Taboo. Even Facebook didn’t take off there.

What evil monsters write a law so twisted, perverse, and plain wrong that it forces academically schooled, decent, and respectable judges to make so obviously perverse a verdict?
…and, well, yeah, that’s pretty much the point I wanted to come to.
The evil monsters are ECPAT, by the way, which I described in my last article, the fundamentalist Christian organization that claims they care about children. Oh, and they vigorously defended the man’s conviction over this image, too, arguing about “violations of children on a conceptual level”. Even the police argued publicly against the law banning possession of child abuse imagery here, arguing – from their viewpoint – that it protects child molesters, as they are forced to hunt comics fans instead of real crime. This is similar to my argumentation in this and the previous article.
Yes, this is the same ECPAT that made it illegal for you to possess naked or sexual images of yourself from before your 18th birthday, arguing that it violates “children in general on a conceptual level”.

Also, those are the immortal words of Rick Falkvinge (from a decade ago), the founder of the Swedish Pirate Party, written after he retired.

1 Like