StyleGAN 2 for virtual CP

StyleGAN 2 is an AI known for synthesizing “near-perfect” human faces (skip to 2:02). They are NOT real people.

The software can synthesize many other things, like cars, cats, and birds. The program is trained on pictures belonging to the same category: if you want cats, the AI must be given many, many images of cats.

Some people might use the software to create hyper-realistic virtual CP.


One thing I would like to see is someone developing software that gives us the option of ONLY generating images based on photos taken of adults. I plan to create super-realistic 3D models in the future, and I do not want any minors to be involved in their development whatsoever. I know some software lets you choose what the generated face is based on. For example, excluding certain ethnicities, face shapes, and even… ages.

While I’m not into porn or gore, because I just don’t agree with them, I don’t want the people downloading my future 3D models to be restricted to only the things I agree with. In other words, I want people to be able to use my 3D models to pose in lewd ways, which is very common in Second Life.

Which is why I want 3D models generated SOLELY from confirmed adult images. Posing a 3D model in Second Life that was modeled on a real child in lewd ways probably constitutes creation and/or possession of child pornography, which are crimes. And let’s say someone uses this software to synthesize content based on images of REAL children. Would that constitute a violation of law?

We know morphed images of children, such as combining the face of a child with the body of an adult in porn, are illegal and constitute child pornography. Could the morph-related laws apply to an AI-generated face based on the images of 50 children? That would mean AI-generated images synthesized from images of minors must never be used for pornography (and in some countries, such as New Zealand, gore is also prohibited). Now, I have no idea whether this would be a violation of law or not. Even if it is not, I have personal ethical issues with it.

Even if it’s somehow possible to create photo-realistic CP with these AIs without synthesizing images of real children, virtually indistinguishable CP still violates the law.

TL;DR, I want a program that gives us the option of ONLY synthesizing images from adults. And I want the AI developers to really confirm the ages of their models, like checking their birth certificates in person, for example. And when it comes to synthesizing, let us choose the age range. My suggestion is that for anything involving gore or porn, the synthesis should be limited to images taken of people when they were 18+.

  1. It’s very sad that we have to ban photo-realistic CP when the child isn’t real. I guess it’s because the police don’t have the tools/resources to tell whether photo-realistic CP is real or virtual.

  2. We have a similar problem with photo-realistic deepfakes. In the future, it will be very hard to tell if a picture or video is real or a deepfake.

If someone finds a solution for 2), we might have a solution for 1).

  1. Correct, the issue is that it would be very difficult to tell the difference between fake CP and CSAI. It seems to me the justification for prohibiting “virtually indistinguishable fake CP” is that it makes it potentially very difficult to prosecute real CSAI criminals, because the criminal can use the “it’s just virtually indistinguishable fake CP” defense. I believe that was the motivation behind the PROTECT Act.

  2. I’m not really sure what the solution is. But if we find some way to verify that something is fake vs. real (like some hidden code in JPG/PNG files that verifies whether it’s real or fake), it could be a useful tool against deepfakes, which could be used to annoy others or, worse, put words into a president’s or politician’s mouth that they never said as an attempt to undermine a political faction in the eyes and ears of the public.
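The “hidden code” idea in 2) is essentially a content-authenticity tag. Here is a minimal sketch, assuming a hypothetical publisher key, and using a symmetric HMAC purely for illustration; a real provenance scheme (e.g. C2PA-style signed manifests) would use asymmetric signatures so that verifiers cannot forge tags:

```python
import hashlib
import hmac

# Hypothetical publisher key. In practice this would be an asymmetric
# key pair, so anyone could verify a tag without being able to forge one.
PUBLISHER_KEY = b"example-publisher-key"

def tag_file_bytes(data: bytes) -> str:
    """Produce a tag vouching that these exact bytes came from the publisher."""
    return hmac.new(PUBLISHER_KEY, data, hashlib.sha256).hexdigest()

def verify_file_bytes(data: bytes, tag: str) -> bool:
    """Check a tag; any modification of the bytes invalidates it."""
    expected = tag_file_bytes(data)
    return hmac.compare_digest(expected, tag)

original = b"...image file bytes..."
tag = tag_file_bytes(original)
assert verify_file_bytes(original, tag)             # untouched file verifies
assert not verify_file_bytes(original + b"x", tag)  # any edit breaks the tag
```

Note the limitation: a tag like this can only prove a file is unmodified since tagging; it says nothing about untagged files, which is the harder half of the deepfake problem.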

I’ve heard of a method for removing CP that involves a hash database. Maybe we could use this method to identify virtual CP. It could look something like this:

  1. Artist creates photo-realistic virtual CP.
  2. Artist signs a contract with Prostasia to acquire a “No Children Harmed” certificate.
  3. Certified artist uploads the material to a database.
  4. The material is given a hash.
  5. The material is distributed via the same database.
  6. Anyone can check the material against the database to verify it’s fake.
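Steps 4–6 above can be sketched as a plain exact-match hash registry. This is a minimal illustration with a hypothetical in-memory database; a real deployment would need perceptual hashing, since re-encoding or resizing a file changes its SHA-256 digest and would defeat an exact lookup:

```python
import hashlib

# Hypothetical in-memory registry standing in for the certification database.
certified_hashes: set = set()

def register(material: bytes) -> str:
    """Step 4: hash certified material and record the hash in the registry."""
    digest = hashlib.sha256(material).hexdigest()
    certified_hashes.add(digest)
    return digest

def is_certified(material: bytes) -> bool:
    """Step 6: anyone can check a file against the registry."""
    return hashlib.sha256(material).hexdigest() in certified_hashes

artwork = b"...file bytes..."
register(artwork)
assert is_certified(artwork)                 # an exact copy matches
assert not is_certified(artwork + b"\x00")   # a single changed byte does not
```

The last assertion is the weakness the later replies point at: only byte-identical copies verify, so the scheme proves provenance of registered files but cannot, by itself, classify unregistered material.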

From a purely legal perspective, this wouldn’t be any different than simply looking for the real deal, especially when it comes to convincing a jury that it isn’t.

AIs are not gods. They can’t pull things out of a vacuum.
You would have to teach it in one way or another how to create the models, much like how an artist might learn anatomy from legal sources.

If it’s children that look like adults, then this defeats the object of this whole exercise and there is plenty of legal material out there.

There is an easy solution. If everything is fake and so realistic that it is indistinguishable from reality, then people would look at that instead and you would have strong arguments against the current regime of highly draconian laws. You could largely stop enforcing them, except for going after physical abusers and anyone dubious enough to knowingly distribute problematic ones. Self-regulation? I don’t know.

It won’t stop all cases? Laws aren’t made to be absolutely perfect; that kind of thinking leads to EARN IT.

The same applies to an indistinguishable sexbot. The awful and flawed argument against them is that they’re not realistic enough to be useful; well, if they are, then it pulls the veil off this censor-apologist nonsense.

AI porn is nice. I’m not a MAP, so I will stick with AI adult porn.

I think Americans will have to deal with this:

(B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or

(C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.

(9) “identifiable minor”—

(A) means a person—

(i)(I) who was a minor at the time the visual depiction was created, adapted, or modified; or

(II) whose image as a minor was used in creating, adapting, or modifying the visual depiction; and

(ii) who is recognizable as an actual person by the person’s face, likeness, or other distinguishing characteristic, such as a unique birthmark or other recognizable feature; and

(B) shall not be construed to require proof of the actual identity of the identifiable minor.

I believe AI porn is trained on real people. Would AI porn trained on real children, or mostly on children, violate these laws? I think it’s best to avoid it. I also think there are ethical problems with using images of children in this way. I cannot condone it.

Maybe the admins of this website know?

Just out of curiosity, when you say “real people”, are you only referring to modern-day people, or are you including historical figures in that?


The problem with AI-generated CP is that it is based upon the fallacy that pedophiles become sexually ravenous towards every single child they see. Like all people, pedophiles have individual preferences.

Wouldn’t it be possible for an AI generator like this to accidentally produce content that is considered illegal, even if it wasn’t intended or designed to do so? Computers don’t have a sense of morals per se; they don’t understand that their bit flipping causes obscene output.

This is going to suck very much, considering the AI can be trained on photos of real children. So you will probably need another AI, trained to detect child porn, to catch it before the generator produces child porn. What a mess.

Can you train it on photos of real children which are not CP and therefore do not fall under the rationale of the fruit of the poisonous tree? If the end result is fully virtual and impossible to tie to the originals, it may minimise harm the most, even if someone would say it wasn’t created in the most ethical manner.

If it is possible to train a nearly infinite number of scenarios without CP, it would strike at much of the rationale for criminalizing CP, as the demand for more CP would be markedly reduced. You could even deter individuals from accessing CP sites at all by promoting alternatives via the mass media.

This may be seen in a similar light to Nazi experimentation. Should we use research results derived from experiments done many decades ago? As it stands, many people die because we don’t, but doing so would be disrespectful to the victims of Auschwitz.

I wouldn’t really care if someone made AI porn of me if they asked nicely. But, the government likely wouldn’t allow it either way. And I don’t have good shots for that. Don’t hold me to that :stuck_out_tongue:

Moral issues are without a doubt fucking complex on this. I think no one here would disagree with the following statements:

-To use a derivative of a person’s likeness for the purpose of profit or producing a sexualized depiction for distribution, permission MUST be obtained first.

-You must NEVER use a derivative of a child’s likeness for the purpose of producing sexual images under any circumstances, and it must remain illegal to do so.

These two statements should be universally agreed upon.

But what happens if it’s an AI that “trains” itself off a large quantity of images and generates something from scratch? Is what it produces considered a derivative? Maybe some experts should chime in. I don’t know.

But considering there are already major ethical issues with selling data collected from our browsing of social media, I would argue one could make a good case that it is a derivative. And most people would argue… yes.

But the counter might argue that it’s not all that different from an artist at an art school who practices live drawing and reference drawings of human anatomy to learn the basics of the human form. Then, after many years of hard work, he goes on to create a photorealistic drawing of an imagined human with absolutely no references. In that fucking case, is the photorealistic drawing he created from scratch morally a derivative of the live models in his class? Most would argue… no.

But the human brain is not a computer like a fucking deep-learning AI. Our brains are unique and we have consciousness. A deep-learning AI is just a tool, more comparable to the social media algorithms that collect and interpret data from their vast user bases than to our conscious brains. So I definitely lean towards the notion that faces generated using deep-learning algorithms trained on REAL people are derivatives of said people, in the same way data collected on social media by tech giants is a derivative of us.

Using REAL CHILDREN as reference would be fucking sickening. Can a child consent to their likeness being used in this way??? I cannot fathom why anyone would think this is a good idea. If you disagree… PLEASE. PLEASE explain why you think this is ok.

So where does this leave AI-generated faces? I would argue they still have a great future and role to play in roleplay, gaming, and the enjoyment of alternative personas. But it will be more of a permissions culture. You’d hire a shit ton of models to photograph, and have them all sign a contract agreeing that their likeness will be used to train the AI so the AI can generate random faces. The contract should be very clear: if the goal is for the AI-generated faces to be used for whatever the business wants, or whatever the user wants, then make that fucking clear from the get-go. Do not mislead, do not manipulate or lie. And if it is supposed to allow the end user to use the AI-generated faces for ANYTHING, including sexual images, the faces it’s trained on must NEVER include any photos of under-18s.

Someone did attempt to create porn out of this. But to my dismay, I learned they did not get FUCKING PERMISSION from the sex workers they trained their algorithm on. So I can’t use it; it is against my morals to do so. There need to be some kind of moral/ethical guidelines for these businesses to follow. I find StyleGAN technology to be an exciting field.

I want the following guidelines!!!
If it’s a “use the AI-generated face for anything” tool, all faces must be 18+ at the time the photo was taken. Permission must be obtained from those whose likeness is being used, and they should be made aware of what the AI-generated faces will be used for. Not hard. I don’t like the idea of just taking random people’s images, creating a tool like this, and selling it for profit or distributing porn with it. Permission culture, please.

I personally would be ok if businesses used my likeness to train on… They can use the AI-generated faces for anything, including porn. But the bottom line is you need permission. Bring back permission culture.

No, because likeness rights are based on derivatives, not coincidences. If you use 10 reference faces with permission, and the AI generates a facial likeness that by coincidence looks similar to Johnny, Johnny has no right to stop you from using it. There is only a limited variety of faces in the human form; you can never ration them. Not with 11 billion people, not with 120 billion people in 2300, not with 1,500 billion people in 3021. Likeness and name rights will always be based on derivatives, mostly for that reason, not on similarity by coincidence.

It is absolutely sickening that someone would want to produce something so heinous with such beautiful AI technology as this. There are so many things to create with this, so much beauty that can be developed! But the first thing that comes to your mind is the worst thing, literally the worst thing you can think of to produce!

You, yes YOU, OP, are sick in the head. It has not been established that fake CP will reduce real-world offending; the only studies I’ve come across show that there is no causal link between fake CP and offending. That tells us nothing about whether it actually reduces harm. If I were to license this software, without a doubt I would prohibit anyone from producing anything that sexualizes minors, even if they are completely fictional. What the fuck is wrong with you.

Technology isn’t beautiful or ugly. It’s just technology. It serves whatever purpose you gave it. It is the utility of the technology that matters, and one of such utilities is to create an alternative for CSEM. It’s not ugly or beautiful, it just is.

There is only one thing you can create with this AI, realistic human faces. Nothing more. And most of those faces are quite ugly in my opinion.

You don’t know that it was the first thing this person had in mind upon seeing this AI. It could be his second, or third, or tenth thing in mind.

What exactly is sick about a person who understands his condition, does not want to hurt any individuals, and instead seeks alternative means of reducing his own sexual frustration without involving any living individuals?

Would you prefer him to sexually abuse a real child? Would that, in your opinion, be what a “healthy individual” should do? Because I have a hard time understanding your motivations for opposing basic human decency. Because that is what it is: having some desires that are unacceptable and finding a way to deal with them in a safe manner is the definition of decency. This is why violent people train in martial arts instead of beating up people on the street for looking at them the wrong way, and why people who are overly competitive play sports instead of trying to ruin other people’s lives for their own benefit.

It tells us that it doesn’t increase harm. And that is all that is needed. If something demonstrably doesn’t result in an increase of crimes and has no effect on increasing the net harm in the world, but a lot of individual people testify that this thing helps them in their individual lives, then that is enough of a reason to allow it. Why would they lie if they don’t actually benefit from it, and what is wrong with benefiting from something that doesn’t ultimately lead to any harm?

Consider the opposite direction: there is no proof that banning such productions won’t result in an increase in the sexual abuse of children either. No research has shown that. What if a lot of people right now depend on things like “fake CP” and child sex dolls to not sexually abuse real people? You can’t know that it’s not the case, even in countries that already prohibit them, since most countries prohibit drugs, and yet people find ways to obtain them.

So where is the proof that banning those things won’t result in an increase in child sexual exploitation rates? There is none, and until such proof exists, it’s dangerous to ban these things. Real children’s lives are at stake: are you willing to take that risk? Because if you don’t take that risk, then, as far as we know, no harm will result from the decision to allow it. So it’s irrational to ban those alternatives for any person who cares about real children’s lives!

It’s immoral to prevent people from finding victimless alternatives to harmful acts, in an effort to force them into only one option: breaking the law. Because that is what it is; banning such victimless alternatives when there is no link between them and actual harm would only result in the people who want to use them becoming criminals.

And if you want to know why that is a very bad thing, look no further than the Prohibition era in the US. The only thing the US decision to prohibit the production and distribution of alcohol achieved was turning a lot of law-abiding citizens who wished no harm upon any other being into criminals.

And what do you think happened to such people? Do you think that people like Al Capone and the other individuals who participated in illegally supplying alcohol would simply go back to a normal life, working a stable job, obeying the law, respecting the authorities once Prohibition ended? Of course not! Once their liberty is already at stake, they lose any restraint against other types of crime. Because regardless of what they decide, in the end, if the authorities knock on their door, they will go to jail one way or another.

Ask yourself this question: once a person already risks years in prison by making and possessing such artificial alternatives for their own use, why should they respect any other laws? It’s a simple cost-gain analysis; no research is needed for that: “If I already risk spending 20 years in prison for possessing this artificial thing that I use for my own pleasure, why shouldn’t I simply download actual CSEM? Or try to abuse a real kid? I mean, I have nothing more to lose, and a lot more to gain.”

But thankfully as you can see on this forum, people like that don’t choose to do that. They instead want to fight for those alternatives. They want to respect the law. They want to respect the societal rules, even though they don’t benefit their personal interest.

How morally corrupt do you have to be to want to push people into becoming criminals simply for trying to be decent, law-abiding citizens who don’t want to hurt anybody and who try to find alternative ways of getting what they desire without causing any harm to any person? Do you seriously think that child predators, people participating in child sex trafficking, and psychopaths who rape children care about things like the “legality of child sex dolls” or the “legality of artificial 3DCG images that depict child-like characters”? They already have no issue using actual CSEM, or sexually exploiting real minors, despite that being illegal.

So by prohibiting these things, the only group of people you fight against is the one that tries its best not to cause any harm, because they truly don’t want to hurt anyone.

So this question:

Should really be directed towards you. Because I genuinely can’t see how any healthy individual with correct moral values could shame and insult people for spending a lot of time and effort overcoming their evil nature, simply because they weren’t born “good”.

As for the thread itself, I don’t think creating hyperrealistic virtual CP is a good idea. Such productions, if they really achieve that level of realism, could make it harder to distinguish artificial works from CSEM. This could be abused by people to pass off photos and videos of actual children, potentially with some filters added, to create doubt.

When such a realistic work is created fully by a person, it lacks all the subtle details that allow our cognition to distinguish a real photo from a realistic but artificial 3D model. And those same details are hard to remove from actual photos, on top of it being a pointless effort, since anyone who actually wanted to do that would be better off simply making an artificial 3DCG image instead of editing a photo.
