Now if only we could convince anti-AI fanatics to focus their efforts on actually harmful shit like this
What's interesting is that he was charged with CSAM production and possession, as well as the creation of deepfake material, under child pornography laws, basically confirming we don't need obscenity laws to be able to target this type of direct, real harm.
Keeping criminal prohibitions limited to materials where the rights of actual, real, living children are at stake is precisely what SCOTUS had in mind in Ashcroft v. Free Speech Coalition.
https://www.law.cornell.edu/supct/html/00-795.ZO.html
By prohibiting child pornography that does not depict an actual child, the statute goes beyond New York v. Ferber, 458 U.S. 747 (1982), which distinguished child pornography from other sexually explicit speech because of the State's interest in protecting the children exploited by the production process. See id., at 758. As a general rule, pornography can be banned only if obscene, but under Ferber, pornography showing minors can be proscribed whether or not the images are obscene under the definition set forth in Miller v. California, 413 U.S. 15 (1973). Ferber recognized that "[t]he Miller standard, like all general definitions of what may be banned as obscene, does not reflect the State's particular and more compelling interest in prosecuting those who promote the sexual exploitation of children." 458 U.S., at 761.
Section 2256(8)(C) prohibits a more common and lower tech means of creating virtual images, known as computer morphing. Rather than creating original images, pornographers can alter innocent pictures of real children so that the children appear to be engaged in sexual activity. Although morphed images may fall within the definition of virtual child pornography, they implicate the interests of real children and are in that sense closer to the images in Ferber. Respondents do not challenge this provision, and we do not consider it.
We all know that it would not have mattered if the images were alterations or fictitious. They would have got his ass either way using obscenity laws.
Also, AI stuff is still pretty questionable, because how can someone truly prove it is not an actual child? What if it is something like this here on a broad scale? As time goes on, this stuff will get better and thus harder to spot.
That's not how burden of proof works. The prosecutor should be responsible for proving it is based on an actual child.
I'm surprised by the levelheadedness of the comments in that news article.
Especially since it's a British news website.
Which is a fairly trivial task, given how readily training images can be extracted. All you would need to do is query the model, and once you find a match, you could run an analysis that rules out look-alikes and meets the burden of proof.
If these charges were about targeting images of minors who do not exist (fictional characters), then I would be more worried.
This isn't necessarily true. All of the obscenity cases brought by the DoJ under Garland seem to be limited to people who already have criminal records/histories relating to contact child sexual abuse or CSAM possession, or were for materials that constitute CSAM anyway.
I think it's safe to say that the DoJ under Biden is not interested in pursuing these types of matters unless they can somehow be linked to a real child, or the suspect is a known sexual offender.
Still incredibly dangerous that the government even has the option to prosecute you based on the fiction you like, though. Governments have proven time and time again that they can't be trusted to just "not enforce" harmful laws.
Well, that's sort of the state that pornography in general is in.
But a silver lining is that nothing is obscene unless declared so in a court of law, on a case-by-case, state-by-state basis, meaning the same material can be found obscene in one court and not obscene in another.
There also exists a presumption of innocence. Matters of obscenity are not matters of fact, merely opinion and conjecture, so that presumption goes a long way in more ways than one could imagine.
This is why CSAM is its own category and is not required to be found "obscene". Even pornography, for its own sake, has artistic value, a fact that academics are finally being more vocal about.
This criminal would probably have received the same sentence regardless of whether AI was involved or not. The criminal had real illegal material of kids; it was not just deepfakes. He also distributed the stuff, correct? I use a negative prompt for AI art to block out illegal content, as well as a RAM disk. With Stable Diffusion 1.5, you can tell that the images are video-game-like; they look fake. I also use "fictional" in my regular prompts to tell the AI to generate a fictional person even if it is based on an actor.
Stable Diffusion was also only released in 2022, so AI art of that nature probably was not involved here; the criminal committed his crimes in 2021. He still harmed real people regardless.
This article hypes up the AI so much that it's difficult to tell if the CSAM mentioned is real or not, almost as if it's just thrown in as a side note. Either way, I find his 40-year sentence plus 30 years of probation to be extremely cruel and unusual punishment. The guy may as well have been an actual child abuser to merit such a sentence, and he would likely get less time for murder.
I wish Prostasia would clarify its Twitter/X post on this because it really sounds like they are attacking AI and equating it with real sex abuse.
It may not be a good thing and can definitely be grounds for legal action, but pasting a face on a generated body is not sex abuse.
The underlying rationale is that it's a real child's likeness. It may not be as severe as actual child sexual abuse, but it places them in a position that they did not (or could not) otherwise consent to by way of (mis)appropriation of their likeness.
Contrasted against images of fictional/virtual "children" (fictional characters who do not exist), there is some harm and risk to be observed when it's a real child. Virtual/fictional children are not "children"; they are not persons, and cannot be afforded any of the same rights or legal protections that are afforded to actual children.
They are also not conceptualized the same way: virtual/fictional depictions do not promote the market or demand for materials involving real children, because the demand for them is grounded in a set of preferences and ideas that does not carry over to the latter, which is why virtual/fictional child pornography sits on the same level of legality as adult pornography.
Prostasia's post doesn't even mention AI, so I'm not entirely sure where you feel clarification is needed.
Using someone's image in porn without their consent is abuse, imo. It's obviously not equivalent to rape, and our legal system desperately needs sentencing reforms especially to reflect that, but it's still involving someone in a sexual context without their consent.
It's wrong for the same reason that AI revenge porn is wrong.
The article is about using AI to create deep fakes.
The reason revenge porn is bad is the "revenge" part, when it is used for nefarious purposes.
People should be free to manipulate legal data by whatever means they wish. How this person got caught isn't mentioned, but I highly doubt he was blackmailing anyone whose "likeness" he used.
The criminal had real illegal content at first before AI was used.
Including someone in porn requires their consent. If you don't have that, it's abusive.