Also found a Reddit thread discussing this article.
For an organization that has admitted that false reports of CSAM significantly slow down its review process and harm its reviewers, the IWF sure has some interesting priorities.
It's also getting annoying having to constantly fact-check them on the effects of these materials. I get that Brits don't believe in free speech.
It's also commonly known among AI enthusiasts that whether a model was trained on CSAM (or on any particular image) can be answered by simply querying the model and analyzing the output. Output from a generative AI trained on CSAM is just CSAM with modifications.
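(For what it's worth, the kernel of truth in that claim is a technique called membership inference, and it is weaker than "simply querying the model" suggests: it typically needs loss or confidence access rather than just samples, and it yields a statistical signal, not a yes/no answer. Here is a toy PyTorch sketch of the idea; everything in it, from the TinyAE class to the random stand-in tensors, is invented purely for illustration and is not any real forensic pipeline.)

```python
# Toy loss-based membership inference: models tend to assign lower loss
# to data they were trained on than to data they have never seen.
# All names and data here are hypothetical stand-ins for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyAE(nn.Module):
    """A small autoencoder standing in for a generative model."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 16), nn.ReLU())
        self.dec = nn.Linear(16, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

members = torch.randn(256, 64)      # stand-in for "images the model trained on"
non_members = torch.randn(256, 64)  # stand-in for images it never saw

model = TinyAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Deliberately overtrain so that membership leaks into the loss.
for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(members), members)
    loss.backward()
    opt.step()

with torch.no_grad():
    member_loss = loss_fn(model(members), members).item()
    outsider_loss = loss_fn(model(non_members), non_members).item()

print(f"loss on training data: {member_loss:.4f}")
print(f"loss on unseen data:   {outsider_loss:.4f}")
# A clear gap between the two is the membership signal. With large,
# well-regularized models the gap shrinks, which is why "just query
# the model" is far less conclusive than it sounds.
```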
It's also very unlikely that many images from models trained on real abuse material are being made. I don't doubt that some exist, but compared to the quality of images from models that don't even have real photos of children in their training data (save for misc. 3D-modeled/rendered faces of non-existent people), there's really nothing to worry about.
I'm worried that the constant alarmism by the BBC and IWF will pollute discourse.
I love how these redditors deploy moral arguments against sex dolls and debate whether they cause harm. At the same time they indulge in other morally disgusting content, but hey, it's not sexual! Big YouTubers literally abusing baby dolls, cutting their heads off, is considered funny.
What if that encourages psychos to do the same? Sadists exist. How is this not harmful?
Anyone with a brain can tell it's about morality and that's it. Also, they mention Japan even though it has much lower CSA rates DESPITE having CLSD on public display and other outlets. It's so cringe.
On that note, the Online Safety Bill became law today…
On that note, the EU chat control law has been rejected:
The Commission is now trying to save it by deleting a bunch of the invasive parts (no backdoor for encryption, and scanning only when a crime is suspected). However, I hope it gets buried for good in two weeks, because laws can change, but a system stays.
Postponed… the thing isn't off the table, it just hasn't made it through Parliament yet. It might still do so in the future.
Idk about you guys but… touching kids is bad
Genuinely curious: when you posted that, were you expecting people to disagree? Knowing full well that you're in the forum for a literal child protection organization? What was even your goal there?
Yup, touching kids is bad. Research indicates that child molestation victims develop several issues later in life. Um, thanks for telling us stuff we already know?
From the article:
"For example, the IWF found hundreds of images of two girls whose pictures from a photoshoot at a non-nude modelling agency had been manipulated to put them in Category A sexual abuse scenes. The reality is that they are now victims of Category A offences that never happened."
They are victims of a violation of their privacy, and of the use of their images without permission and without paying them a royalty. But they aren't "victims of Category A offences." That's just absurd.
The IWF seems to think it does not matter whether a real child was hurt by, or even involved in, the production of an image. For them, it is ethically all the same.
What's more concerning for me [Susie Hargreaves, CEO], is the idea that this type of child sexual abuse content is, in some way, ethical. It is not.
To clearly distinguish CSAM content that is not generated or edited by AI technology, this report uses "real CSAM". This term should not be taken to diminish the severity or criminality of AI CSAM.
Effectively articulating the criminality of AI CSAM can be a challenge – there are groups who seek to lessen the severity of these images: they "don't have real children", or "don't hurt anyone".
They never bother to argue why AI-generated content should be seen as just as bad as real CSAM.
They also don't differentiate between types of AI-generated content. Whether it's images of CSAM victims generated by models trained on their material, or images of non-existent children generated from a non-criminal training data set, it's all the same to them.
What if perpetrators lack the necessary datasets? Various threads found on dark web forums shared large sets of faces of known victims for creating deepfakes or for training AI models. Indeed, one thread was called, "Photo Resources for AI and Deepfaking Specific Girls". Perpetrators discussed how to gather images and choose which to use for fine-tuning.
In another vein, evidence has been found of perpetrators creating virtual "characters" – entirely AI-generated children whose models may have been trained on real children but do not resemble real children – comparable to "virtual celebrities" or "VTubers" – and sharing packs of their images
I think it's troublesome that a child protection agency seems to care so little about whether real children were harmed. It seems to me they care more about fighting and extinguishing sexual attraction to children than about preventing children from being harmed. Hence, they see no ethical difference between someone watching real CSAM and someone generating completely fictional images of non-existing children. They even consider it a problem when people generate completely legal, non-sexual images.
Most AI-generated images assessed were not criminal. This reflects a large appetite for images of children outside scenarios containing explicit sexual activity.
This might have a real negative impact on the protection of children. Their own research seems to imply that there are at least some people on darknet CSAM forums with some level of moral awareness, who would prefer ethical AI-generated material over images depicting real harm:
Many individuals claim that AI CSAM is more ethical than real CSAM and use this claim as a justification for generating and posting AI CSAM.
"the future of CP is already here… and not offending anyone…"
Others emphasise that their AI CSAM images were generated without CSAM fine-tuned models, perhaps as an effort to legitimise those generations:
"All of these were created without any real world child porn whatsoever."
At an extreme end, some perpetrators claim that AI-generated images comprise the future of CSAM – eventually replacing the need for real CSAM:
"[AI CSAM] makes CSAM unthinkable. Anyone who might before have justified needing CSAM in order to quell some irresistible urge will have no more excuses"
Would it not be great if we could offer those individuals completely legal and ethical ways to generate their own pornography to their liking? We already know that the current ways of fighting CSAM hardly work, as the issue only seems to be getting bigger. AI could be a great chance to diminish demand for real CSAM (at least among consumers who are MAPs, which to be fair is only a fraction) by providing alternatives that are ethical, safe, and, to be frank, probably much better.
Yet the IWF is working to destroy these possibilities rather than explore them, by actively campaigning to criminalize more and more of it and to have AI CP treated as real CSAM.
Creating and distributing guides to the generation of AI CSAM is not currently an offence, but could be made one.
All quotes are from their report How AI is being abused to create child sexual abuse imagery, which they published in October.
For one thing, it sounds like they want pure, classic, content-based censorship. And once they get it, they'll want to expand it and expand it and expand it.
As for their basis for believing that AI-generated content is no different than actual child porn, they tend not to be specific about that since it boils down to sympathetic magic thinking. Taken to its logical conclusion, by their reasoning, making a doll which looks like someone and then sticking pins in it is no different than an actual physical assault on that individual.
They're Brits. They always do.
Look at that "tell us your password" law (RIPA). Abused by everybody for reasons other than the "one" it was implemented for, like catching parents trying to send their kids to the wrong-caliber school or something.
So do they care about protecting children from abuse, or not? They're LITERALLY confirming every single hypothesis and theory ethicists have had since the early 2000s while simultaneously trying to dismiss it all with "nuh uh!"
They seem more concerned with ideals and concepts, with actual victims taking a backseat to some arbitrary school of thought.
People need outlets in order to reconcile their pedophilic sexual desires with the reality that those desires must not be acted on in any way that involves or implicates a real minor. Purely AI-generated content, by definition, does not.
@elliot what do you think of this?
I think the IWF needs to take a moment to read their own description of why they exist and stop getting distracted from that mission
They've made it pretty clear lately that they care more about infringing on privacy than they do about actually protecting children. They've become just another promotional tool for the surveillance state.
Sirius put it better than I could