First off, a disclaimer: I did study computer science and worked on an AI project during my time at university, but that knowledge is OLD, which matters a lot in computer science and especially in AI. Development here is extremely fast, and what was universally accepted truth yesterday might be shown to be false tomorrow. Furthermore, while my studies touched on AI and how it works, they did so only very shallowly. So I am by no means an expert in this field, but I should know more than the average person.
Now to the claims in the two comments:
I would disagree with both statements. The AI does not use the input images in any way in its generation, nor parts of those images. What it does is essentially build a network that says: "if there is a 't' in the text input at position 345, the likelihood that the pixel at location (345, 455) has a red value of 255 is 0.785954%". It is these inputs, combined many times over with a handful of other mathematical functions, that generate the output.
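To make that concrete, here is a deliberately oversimplified Python sketch of the idea. Everything in it (the weights, the positions, the numbers) is made up purely for illustration; a real network has billions of learned parameters, not a little lookup table:

```python
import math

# Toy stand-in for a trained network: learned weights map features of the
# text prompt to a score for one pixel's color channel. In a real model
# these are billions of floating-point parameters, not a dict.
weights = {
    ("t", 345): 2.1,   # hypothetical weight: letter 't' at prompt position 345
    ("a", 12):  -0.7,  # hypothetical weight for another prompt feature
}

def red_channel_likelihood(prompt: str, x: int, y: int) -> float:
    """Something like 'the likelihood that pixel (x, y) has red = 255'."""
    score = 0.0
    for pos, ch in enumerate(prompt):
        score += weights.get((ch, pos), 0.0)
    # The pixel coordinates also feed into the score in a real model;
    # here we just fold them in crudely for illustration.
    score += (x * 31 + y) % 7 * 0.01
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid -> value in (0, 1)

print(red_channel_likelihood("a cat on a mat", 345, 455))
```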
(As a side note, more modern AIs take a (static / user-given) image as input, add random noise to it, and then do the same as above. AIs like Stable Diffusion and DALL-E work this way; ThisPersonDoesNotExist works simply by the method above. For a more detailed explanation of these AIs see https://www.youtube.com/watch?v=1CIpzeNxIhU )
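Very roughly, that add-noise-then-denoise loop looks like the sketch below. The `denoise_step` function is a stand-in for the trained network (a real diffusion model predicts the noise to remove at each step; here it is faked so the loop actually runs):

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(image: np.ndarray, step: int) -> np.ndarray:
    """Stand-in for the trained network: a real diffusion model predicts
    and subtracts the noise it thinks is in the image. Here we fake it
    by nudging values toward gray so the sketch is runnable."""
    return image + 0.1 * (0.5 - image)

# Start from a (user-given or hand-picked) image and drown it in noise ...
image = np.full((64, 64, 3), 0.5)          # placeholder input image
noisy = image + rng.normal(0, 1.0, image.shape)

# ... then repeatedly ask the network to remove a little noise at a time.
for step in range(50):
    noisy = denoise_step(noisy, step)

print(noisy.mean())  # ends up near 0.5 again in this toy version
```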
So by the time you run the final network, there is essentially nothing left of the training images, neither entire images nor parts of them. (In the case of noise-based AIs the initial image is still in there, but unless it is user-provided (and we will come to that later) it is hand-picked, so it is certainly not going to be CP.) We then generate an output based on the likelihood of a certain pixel's color values being a certain value.
It therefore also does not learn what human flesh is. It just learns that, based on all the inputs (some of which are random, some user-given, some internal, some just your last five Google searches), the most likely color of the pixel at (345, 455) is bright red. It has such a most-likely color for every pixel in the output image and then just writes them to a file / sends them back at you. But it has no idea what anything is at any point. The picture usually used to explain AI to beginners, that every layer in an AI model learns some human-recognizable concept, is not what actually happens in reality.
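The "most likely color per pixel, then write it to a file" step could look like this toy sketch, with random numbers standing in for the network's actual per-pixel probabilities:

```python
import numpy as np
from PIL import Image  # pip install pillow

rng = np.random.default_rng(42)

# Pretend the network handed us, for every pixel and color channel, a
# probability for each of the 256 possible values (random here for illustration).
probs = rng.random((64, 64, 3, 256))

# Pick the most likely value for every pixel/channel and save the result.
pixels = probs.argmax(axis=-1).astype(np.uint8)
Image.fromarray(pixels, mode="RGB").save("output.png")
```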
So AI in the end is just a lot of statistics skewed in a certain direction. Based on that, we can GUESS the input from the output using statistical analysis. However, such analysis would require both the sample sets and the user inputs, and since we usually have neither, we don't really have a practical method for it.
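For what it's worth, that kind of GUESS exists in research under the name membership inference: you test whether the model explains a candidate image suspiciously well. A toy version (with an absurdly overfitted "model" that simply memorizes its training images) shows the principle; note that, as said above, it needs both a candidate image and the training set to compare against:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set; this 'model' badly overfits: it memorizes the images.
train = rng.random((100, 8, 8))

def fit_error(image: np.ndarray) -> float:
    """Distance to the closest training image. A low value means
    'suspiciously well explained', i.e. probably a training member."""
    return float(((train - image) ** 2).mean(axis=(1, 2)).min())

member = train[0]                # WAS in the training set -> error 0.0
outsider = rng.random((8, 8))    # was not -> clearly larger error

print(fit_error(member), fit_error(outsider))
```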
That much for how AI generally works; now on to the next part, the inputs. These are actually extremely important, as skewed inputs skew the AI's results.
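A tiny illustration of skewed-in, skewed-out: a "model" that only learns the average brightness of its training pixels faithfully inherits whatever bias the sample had:

```python
import numpy as np

rng = np.random.default_rng(0)

balanced = rng.uniform(0.0, 1.0, 10_000)  # pixels drawn from the full range
skewed   = rng.uniform(0.6, 1.0, 10_000)  # sample that over-represents bright pixels

# A trivial 'model' that learns one statistic inherits the dataset's bias.
print(balanced.mean())  # ~0.5
print(skewed.mean())    # ~0.8
```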
So where does the sample data for, say, DALL-E come from? Since I am lazy, I just asked ChatGPT about it. The answer:
DALL-E is a neural network-based generative model developed by OpenAI that can create images from textual descriptions. According to OpenAI, DALL-E was trained on a dataset of about 250 million images, and the model itself contains 12 billion parameters.
It’s worth noting that the images used to train DALL-E were not hand-labeled or curated, but were instead sourced from the internet using a variety of methods. This approach, known as self-supervised learning, allows the model to learn from a vast amount of data without the need for manual labeling or annotation.
Overall, the scale of the training dataset used to train DALL-E is one of the factors that has enabled the model to generate high-quality images from textual descriptions, demonstrating the power of large-scale machine learning techniques.
So this gives us a couple of pieces of information: 1) the sample image count for DALL-E was 250 million. That's more than anyone could check by hand, and OpenAI openly says they did not. So 2) the data never got checked.
So we have no idea what images they used, and neither does OpenAI. All we know is that they came from the internet … which is also where most viewers get their CP from, so …
While there probably was some filtering by some sort of algorithm, we know neither the quality nor the kind of filter. And the hash filters mentioned in the second comment are notoriously easy to defeat, at least the exact-match (cryptographic) kind: they only match the exact inputted image, not one with the smallest of changes to it. (Perceptual hashes survive small edits better, but we don't know what was used.)
In other words: take the original CP image and add a black box somewhere in the background where there was none before, and you have defeated a hash-based filter.
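For the exact-match kind this is trivial to demonstrate: change a single byte of the file and the hash shares nothing with the original, so a blocklist lookup misses it:

```python
import hashlib

original = bytes(range(256)) * 100  # stand-in for an image file's bytes
modified = bytearray(original)
modified[5000] ^= 0xFF              # flip one byte, e.g. one pixel value

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(bytes(modified)).hexdigest())
# The two digests share nothing, so an exact-hash blocklist misses the edit.
```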
So we can't really guarantee that CP images were not used in the training of the AI. However, since the input images are essentially unused afterwards, and all we keep after the initial training are statistical properties of the input, is that really relevant?
The conclusion of course changes if the user-input image in one of the noise-based AIs is CP, and here too we basically have no idea whether it was or was not.
Can AI generate realistic images without real training data?
Theoretically yes. Again, AI is purely statistics, so you can skew the output away from the input. However, the further you deviate from the input, the harder this becomes. Additionally, the output will have certain "artifacts" in it that betray the original training images, for example adult-looking vaginas on a child, or a randomly appearing item of clothing over a mostly naked body. But at least in theory, with enough data, a lot of time, and supervised learning, yes, AI can. If it does, though, unless the trainer releases their training data we basically have no way of knowing.
Conclusion:
We can't realistically tell what data has been used to train an AI. For an AI that uses a user-provided source image, we have no idea what that image was (unless the user makes it public). And arguably the training data does not matter, since out of the 250 million images used, only abstract statistical likelihoods of a color at a certain location remain.
I would furthermore argue that the second comment is therefore closer to correct, but as outlined above it is still wrong about a handful of things, some of which are relevant to the discussion.
Legally, however, a completely different problem with realistic AI-generated pictures remains: in many countries it does not matter whether the image is real or not; it is sufficient that it seems real to the average consumer, i.e. that you as the viewer cannot identify the image as real or fake.