AI-generated virtual child pornography is, by its very definition, a victimless crime, so long as the subjects and the training data do not involve or depict an actual minor being sexualized or exploited.
No amount of faulty reasoning, flawed arguments, or correlational-but-not-causal research can disprove that. It’s simple logic.
A particularly disturbing report by 404 Media caught my eye; apparently they’ve been doing activism on par with the schlock journalism I’d expect from a Heritage Foundation blog post or from Fox News/MSNBC.
I don’t say this just because I disagree with their article; I say it because the piece is objectively bad and falls more in line with activism than actual journalism. Even the title is built on rhetoric, and there is no attempt to moderate their central position or strive for objectivity.
They proudly proclaim that materials which are, by themselves, victimless are somehow CSAM, while refusing to acknowledge the existence of open-source models used to create explicit content that contain no images of real minors, or the presence of various communities which do not sexualize real minors.
The central legal point about AIG-CSAM is that it can only be considered ‘child pornography’ if it involves or depicts a real minor. That has been the rule since 2002, with the US Supreme Court’s ruling in Ashcroft v. Free Speech Coalition, and it is reflected in the US federal legal definition of child pornography.
In 404 Media’s post AI-Generated Child Sexual Abuse Material Is Not a ‘Victimless Crime’, the writers try to bolster their credibility by consulting the UK-based Internet Watch Foundation, an organization operating under a legal system that does not respect free speech or reflect any rational understanding of how fictional sexual material affects sexual risk.
The main problem with asserting that these materials constitute CSAM is that the classification rests solely on how something appears or is described, not on whether it actually is that thing. Under this overly broad definition of CSAM, even images of petite or youthful-looking adults, fictional characters drawn or designed to look young, or even paintings could be classified as illegal CSAM - something they objectively and factually are not. These counterarguments go unaddressed because the authors are so preoccupied with the ‘normalization’ of child sexual abuse or CSAM, a concern that is itself unfounded.
The overwhelming majority of these communities which consume explicit/pornographic illustrations of non-existent ‘minors’ do so because they do not want to implicate themselves in harming or sexualizing real minors.
Some do not have that hangup and will consume it regardless, but these people - including those who create CSAM by filming the abuse of children or engaging in any form of sexual contact with minors - are not representative of the majority of consumers, nor are they proof of ‘normalization’ or of some other flawed application of ‘social contagion’ theory.
Moreover, the argument that these materials ‘normalize’ a sexual interest in children by enabling the formation of deviant groups is also flawed, because, again, many of these communities already heavily regulate against sexualizing or exploiting real minors, with some going so far as to ban AI-generated content entirely after it was revealed how these tools worked.
The overwhelming majority of this content is not created or consumed with the intention of victimizing real children, or of producing material linked to such victimization, yet there is a bias among law enforcement partners who constantly argue the opposite, even when their own statistics and data prove otherwise. This likely stems from a bias at the legal level, where prosecutors are not required to prove that the ‘child’ even exists, thereby targeting a concept rather than a person. Australian law enforcement agencies illustrate this: they do not even distinguish between the two when tracking investigations and convictions (thanks, @terminus !)
It’s the same flawed and faulty reasoning that culture war pundits used to associate the LGBT community with child sex abuse: those looking to justify sex-crime laws banning homosexual sodomy or the production of LGBT-oriented pornography wanted to cast a broad net that encapsulated ‘sexual deviance’ generally, and to dissuade people from distinguishing between consenting adult activity and the sexual abuse of boys by men. Think of it as a form of ‘peer pressure’ that uses morality as its most effective talking point.

Moralism is a particularly toxic form of legalism that functions more like religious ideology than anything else, which explains why these laws are more often than not championed by those who are religious themselves. The main problem with moralism is that it need not be measured objectively, factually, or even ethically. The death penalty is a product of moralism, despite the facts failing to show a link between capital punishment and reductions in crime.
AIG-CSAM, as a concept, does exist and it is a valid classification, but it must be linked to real CSAM or an actual minor.
Not all AI-generated content is FSM (fictional sexual material), which is defined by the absence of a real minor.
Understanding this requires us, as a society, to take an objective view of these models, images, and related materials, and to understand exactly how they fit within the child protection framework. Waging a war against psychology will not yield any victories, and enshrining the mere concept of the ‘child’ as though it were an actual person or victim will do nothing to address the scourge of child sex abuse or the sexual victimization of minors.
Research has continuously failed to link fantasy with contact offending, even while finding that some offenders do engage with fantasy material. Correlation is not causation, and we must always weigh the facts that support - or counter - our arguments.