Paedophiles using AI to turn singers and film stars into kids

Also found a Reddit thread discussing this article.

1 Like

For an organization that has admitted that false reports of CSAM significantly slow down its review process and harm its reviewers, the IWF sure has some interesting priorities.

5 Likes

It’s also getting annoying having to constantly fact-check them on the effects of these materials. I get that Brits don’t believe in free speech.

It’s also commonly known among AI enthusiasts that whether a model was trained on CSAM (or on any particular image) can be answered by simply querying the model and analyzing its output. The output of a generative AI model trained on CSAM is just CSAM with modifications.

It’s also very unlikely that images from models trained on real abuse material are what’s being made. I don’t doubt that some exist, but compared with the quality of images from models that don’t even have real photos of children in their training data (save for miscellaneous 3D-modeled/rendered faces of non-existent people), there’s really nothing to worry about.

I’m worried that the constant alarmism by the BBC and IWF will pollute discourse.

3 Likes

I love how these redditors use moral arguments against sex dolls and debate whether they cause harm. At the same time, they indulge in other morally disgusting content, but hey, it’s not sexual! Big YouTubers literally abusing baby dolls and cutting their heads off is considered funny.

What if that encourages psychos to do the same? Sadists exist. How is this not harmful?

Anyone with a brain can tell it is about morality and that’s it. Also, mentioning Japan when it has much lower CSA rates DESPITE having CLSD on public display, along with other outlets, is so cringe.

6 Likes

On that note, the Online Safety Bill became law today…

1 Like

On that note, the EU chat control law has been rejected:

https://www.msn.com/en-us/news/world/privacy-busting-chat-control-plans-rejected-by-european-parliament-as-csam-law-heads-into-final-stretch/ar-AA1iU76a

The Commission is now trying to save it by deleting a bunch of the invasive parts (no backdoor for encryption, and scanning only when a crime is suspected). However, I hope it gets buried for good in two weeks, because laws can change, but a system stays.

4 Likes

Postponed… the thing isn’t off the table; it just hasn’t gotten through the Parliament yet. It might do so in the future, though.

1 Like

Idk about you guys but… touching kids is bad :wink:

Genuinely curious: when you posted that, were you expecting people to disagree? Knowing full well that you’re in the forum for a literal child protection organization? What was even your goal there?

5 Likes

Yup, touching kids is bad. Research indicates that child molestation victims develop several issues later in life. Um, thanks for telling us stuff we already know? :person_shrugging:

5 Likes

From the article:

“For example, the IWF found hundreds of images of two girls whose pictures from a photoshoot at a non-nude modelling agency had been manipulated to put them in Category A sexual abuse scenes. The reality is that they are now victims of Category A offences that never happened.”

They are victims of a privacy violation, and of the use of their images without permission and without payment of a royalty. But they aren’t “victims of Category A offences.” That’s just absurd.

2 Likes

The IWF seems to think that it does not matter whether a real child was hurt or even involved in the production of an image. To them, it is ethically all the same.

What’s more concerning for me [Susie Hargreaves, CEO], is the idea that this type of child sexual abuse content is, in some way, ethical. It is not.

To clearly distinguish CSAM content that is not generated or edited by AI technology, this report uses ‘real CSAM’. This term should not be taken to diminish the severity or criminality of AI CSAM.

Effectively articulating the criminality of AI CSAM can be a challenge – there are groups who seek to lessen the severity of these images: they ‘don’t have real children’, or ‘don’t hurt anyone’.

They never bother to argue why AI-generated content should be seen as just as bad as real CSAM.

They also don’t differentiate between types of AI-generated content. Whether it’s images of CSAM victims generated by models trained on their material, or images of non-existent children generated using non-criminal images as a training data set, it’s all the same to them.

What if perpetrators lack the necessary datasets? Various threads found on dark web forums shared large sets of faces of known victims for creating deepfakes or for training AI models. Indeed, one thread was called, ‘Photo Resources for AI and Deepfaking Specific Girls’. Perpetrators discussed how to gather images and choose which to use for fine-tuning.

In another vein, evidence has been found of perpetrators creating virtual ‘characters’ – entirely AI-generated children whose models may have been trained on real children but do not resemble real children – comparable to ‘virtual celebrities’ or ‘VTubers’ – and sharing packs of their images

I think it’s troubling that a child protection agency seems to care so little about whether real children were harmed. It seems to me they care more about fighting and extinguishing sexual attraction to children than about preventing children from being harmed. Hence, they see no ethical difference between someone watching real CSAM and someone generating completely fictional images of non-existent children. They even consider it a problem when people generate completely legal, non-sexual images.

Most AI-generated images assessed were not criminal. This reflects a large appetite for images of children outside scenarios containing explicit sexual activity.

This might have a real negative impact on the protection of children. Their research seems to imply that there are at least some people on darknet CSAM forums with some level of moral awareness, who would prefer ethical AI-generated material over images depicting real harm:

Many individuals claim that AI CSAM is more ethical than real CSAM and use this claim as a justification for generating and posting AI CSAM.

“the future of CP is already here… and not offending anyone…”

Others emphasise that their AI CSAM images were generated without CSAM fine-tuned models, perhaps as an effort to legitimise those generations:

“All of these were created without any real world child porn whatsoever.”

At an extreme end, some perpetrators claim that AI-generated images comprise the future of CSAM – eventually replacing the need for real CSAM:

“[AI CSAM] makes CSAM unthinkable. Anyone who might before have justified needing CSAM in order to quell some irresistible urge will have no more excuses.”

Would it not be great if we could offer those individuals completely legal and ethical ways to generate their own pornography to their liking? We already know that the current ways of fighting CSAM hardly work, as the issue only seems to be getting bigger. AI could be a great chance to diminish demand for real CSAM (at least among consumers who are MAPs, which, to be fair, is only a fraction), by providing alternatives that are ethical, safe, and, to be frank, probably much better.

Yet the IWF is working to destroy these possibilities rather than explore them, actively campaigning to criminalize more and more of this material and to get AI CP treated as real CSAM.

Creating and distributing guides to the generation of AI CSAM is not currently an offence, but could be made one.

All quotes are from their report How AI is being abused to create child sexual abuse imagery, which they published in October.

3 Likes

For one thing, it sounds like they want pure, classic, content-based censorship. And once they get it, they’ll want to expand it and expand it and expand it.

As for their basis for believing that AI-generated content is no different from actual child porn, they tend not to be specific about that, since it boils down to sympathetic-magic thinking. Taken to its logical conclusion, by their reasoning, making a doll that looks like someone and then sticking pins in it is no different from an actual physical assault on that person.

2 Likes

They’re Brits. They always do.

Look at that “tell us your password” law (RIPA). Abused by everybody for reasons other than the “one” it was implemented for, like catching parents trying to get their kids into a school outside their catchment area or something.

1 Like

So do they care about protecting children from abuse, or not? They’re LITERALLY confirming every single hypothesis and theory ethicists have had since the early 2000s while simultaneously trying to dismiss it all with “nuh uh!”

They seem more concerned with ideals and concepts, with actual victims taking a back seat to some arbitrary school of thought.

People need outlets in order to reconcile their pedophilic sexual desires with the reality that those desires must not be acted on in any way that involves or implicates a real minor. Using purely AI-generated content, by definition, involves no real minor.

@elliot what do you think of this?

3 Likes

I think the IWF needs to take a moment to reread their own description of why they exist and stop getting distracted from that mission.

They’ve made it pretty clear lately that they care more about infringing on privacy than they do about actually protecting children. They’ve become just another promotional tool for the surveillance state.

Sirius put it better than I could

4 Likes