I wasn’t sure which forum to post this in, but this one seems fine, because it’s about the internet, technology, and law.
I am concerned about the indictment, and about the language used in the Justice Department’s press release:
“As alleged, Steven Anderegg used AI to produce thousands of illicit images of prepubescent minors, and even sent sexually explicit AI-generated images to a minor,” said Principal Deputy Assistant Attorney General Nicole M. Argentieri, head of the Justice Department’s Criminal Division. “Today’s announcement sends a clear message: using AI to produce sexually explicit depictions of children is illegal, and the Justice Department will not hesitate to hold accountable those who possess, produce, or distribute AI-generated child sexual abuse material.”
Indictment: https://www.justice.gov/opa/media/1352606/dl?inline
“Government’s brief in support of detention”: https://www.justice.gov/opa/media/1352611/dl?inline
Where I stand on this is simple.
Sending those images to a minor (I think he was 14 years old) is wrong.
Generating those images is ethically OK and is free speech, and it helps that the indictment does not allege that the outputs resemble any real, identifiable minor.
It seems he wasn’t another creep using photos of children he knew or saw around him. I can understand why a person like that should pay something for what he did (but not with incarceration, that’s crazy).
If I had children, and some guy was taking photos of my kids, I would want his balls destroyed by a rail gun.
But it looks like all this guy did was generate a ton of images of young-looking human characters doing very explicit things, and then send some of them to a teenager on Instagram, a teenager who stated his correct age to the adult (he didn’t lie about his age the way many teens probably do).
Instagram’s image classification is what flagged his messages for review. Obviously, Meta/Facebook/Instagram have their own models to help identify potential underage nudity, and those skew toward the younger ages (it’s 1000x easier to correctly detect a toddler than a 17-year-old girl). It is still hilarious, and insidious, that a human reviewer saw what are obviously fake AI images and decided to escalate them, so they were reported to law enforcement as if they were CP.
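To spell out the flow I mean: a classifier flags the upload, a human reviewer looks at it, and the reviewer’s escalation becomes a report to NCMEC and then to law enforcement. Here is a rough sketch of that flow; every name, field, and threshold in it is my own guess for illustration, since Meta’s actual moderation systems are not public:

```python
# Rough sketch of the moderation flow described above, NOT Meta's real code.
# All names, fields, and threshold values are hypothetical guesses.

from dataclasses import dataclass

@dataclass
class ScanResult:
    minor_nudity_score: float  # classifier confidence, 0.0 to 1.0
    looks_synthetic: bool      # a hypothetical AI/3D-render detector's verdict

REVIEW_THRESHOLD = 0.7  # made-up cutoff for sending an image to human review

def route_flagged_image(result: ScanResult) -> str:
    """Decide what happens to a DM attachment the classifier flagged."""
    if result.minor_nudity_score < REVIEW_THRESHOLD:
        return "no action"
    # The human reviewer is the real decision point. In this case,
    # "obviously synthetic" apparently did not stop escalation:
    if result.looks_synthetic:
        return "human review -> reviewer escalated anyway -> NCMEC report"
    return "human review -> NCMEC report -> law enforcement"

# Example: a clearly fake image that still scores high gets escalated.
print(route_flagged_image(ScanResult(minor_nudity_score=0.93, looks_synthetic=True)))
```

The point of the sketch is just that nothing in that pipeline has to distinguish real from fake before a report goes out; the synthetic-detection step, if it exists at all, evidently doesn’t act as a gate.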
I like “oneshota” (please correct me if I’m using the wrong word). I have prompted Stable Diffusion for mild scenes, done 3D renders of fictional women with fictional younger characters, and I occasionally favorite both 2D and 3D renders of this kind on platforms like Pixiv, so I am increasingly concerned.
I still feel “safe”, but I am disgusted by the fact that I cannot accept donations on a platform like Pixiv Fanbox or SubscribeStar without the risk of being arrested because my “softcore” images are too realistic or something. I also feel that making donations to such artists is potentially high risk, for the same reason.
It doesn’t help that I’ve caught news media AND POLICE misidentifying images as “AI” when they are actually 3D renders. I looked into some cases in the UK and the US that were billed as being about “AI” and that turned out to be more like “oh, he did some Stable Diffusion too, but we’re adding these 3D renders to the charges.” In one of those cases, I was able to find an original image that landed a man in jail (one of many he was charged over): the filename in the legal document included the relevant tags, which was enough to locate the original on rule34 dot xxx. I found it, I saw it, and to me it was a nothing-burger. This man’s charges included AI renders, and similarly nothing-burger 3D renders like that one, depicting a woman and a boy who are obviously not real people.
TL;DR:
It looks to me like computer-generated images are on the chopping block in the United States!
Please let me know whether that’s true, so I can either be anxious about one less thing or start reviewing my security posture.