Exploring the Psychology of Doll Owners

And there is yet another disturbing thing to consider. We know from studies that the legalization of same-sex marriage correlates with a significant reduction in suicide attempts among LGBT minors. This is probably because the legalization of same-sex marriage is a sign of destigmatization, lifting the stigmatization-related minority stress among LGBT people that can lead to all kinds of psychological issues.

Considering this, and considering that a child doll ban is basically a sign of the stigmatization of pedophiles, I find it likely that a ban on child dolls would correlate with a significant increase in suicide attempts among minor MAPs, even among those who don’t own a doll. So in the end, the doll ban might have the effect of protecting objects while further endangering an already very vulnerable population of actual minors.


I would also surmise that people in possession of CSAM, for the most part, are just using it for their own “pleasure” (I hate saying that). If it’s out there, they will find it, without necessarily ever abusing a child. Abusers may or may not possess it; it’s not a qualifier. Stop the people producing that garbage.

Stop human trafficking. And there you go: I heard a story today that an 8-year-old illegal immigrant was administered a rape kit. They found 42 different DNA profiles. 42!! Find those animals!

In the same way, owning a doll is not an absolute indicator that the owner is also in possession of CSAM. Which is a frightening thought: targeting doll owners. Why not turn every teacher’s life upside down? They’re more likely candidates, as anyone else could be! Boy Scout leaders. Priests.

Not everyone who owns a gun shoots and murders someone. Not everyone that owns a car drives through crowds. Not everyone who has butcher knives stabs people.

It’s these correlations that are taken to extremes in trying to predict “future crime”, all in the nature of “feel-good” virtue signaling in the name of “protecting children”. Possible “thought crimes” happening. You see the pushback with gender now. People need to stop projecting and making assumptions about everything.

And in actuality, banning them takes away an outlet for people who might otherwise change their mind about what they were considering doing to a child. Prevention is a weak argument, but so is saying dolls promote the offending behavior. For me, they took away my depression over never being married or having children. For others, they’ve provided comfort in a lifetime of grief over losing a child.

So yeah, I think suicide rates would increase, along with an increase in offenses; offenses which, notably, have not increased due to the availability of these dolls.


Heck, that’s a good analogy; the MAP equivalent of legalizing gay marriage ISN’T legalizing CSA, it’s legalizing child dolls (+ loli/shota and other fictional stuff, etc). Maybe if I mention both in the same argument more often and frame it this way (as contrasts), it’ll be easier for people to understand (that they’re not two things lined up on the same slippery slope) :eyes:


A different but parallel topic, also about fantasy: the same is true for lolicon and 2D hentai. It’s fictional. So is this new Stable Diffusion output, but it is “trained up” on who knows what? And to me it strikes me as much too realistic. Lolicon and hentai obviously are not.


Over on ATFbooru, there was a small argument in the comments under a realistic-looking AI-generated image. One user made this comment:

I literally thought it was real. Either way, pictures like this are generated using real kids, often CP. Expunge it please.

AI-generated images are generated completely with images in its database. So if it spits out realistic child faces and bodies that are nude or performing sexual acts, that AI used CP to be able to make that.

Another user responded with:

That’s not how that works at all. It has human skin, eye, etc., samples in its database, and it does its best to extrapolate what you want from user input. You can also sometimes input an image, but most AI platforms have HASHes of child porn images so they know to reject them. (Google what file hash is, I don’t feel like explaining it, but long story short, it’s not the image itself before you bitch about it). You can input art for it to make into a more realistic model, but that’s art, not child porn. Look up jade_chan on here for an example of something drawn that’s hot. Something like that, put through the AI generator.

Look at how badly AI mutilates images at times; the hands, the fingers, navel, thigh gap, etc., little details. It wouldn’t mess those up if it was just retooling a CP image slightly.

I can categorically prove that AI doesn’t use CP. The only exception could possibly be if it was freshly created CP (shudders) whose HASH code hasn’t been entered into the blacklist database as of yet. But in such a case, it is painfully easy and simple to see that it was indeed based on that.

And on top of this, for AI image generators that learn to make realistic stuff based on photos that had been fed into them, because there may be such things, again there’s the HASH blacklist; but on top of this, you wonder how they learn in that case? What, you think people won’t feed ADULT porn images into the bot? The bot then uses neural networking to learn what human flesh is, and how it wraps around the body. It learns what eyes are, nose, etc., all the realistic artefacts from THOSE. (Marilyn Monroe porn edits for instance, or some adult porn star, or an image from someone’s OnlyFans). Once the bot has learned about human skin, eyes, etc., it has also learned general human shapes. Again, shapes will vary, because adult male porn is likely to be entered into it too. So it can and will learn flat chests too, along with all sorts of body shapes.

In the end, the end result is at least thousands of images being fed into the bot. It spits out edits or filters or whatever while learning. And then someone enters text, requesting AI generated porn of a character that just so happens to also be a cartoon child. It then makes a realistic generated image OF THAT CHARACTER. Is it a real child? No. And if you think it would ever spit out a real child, I suggest you go to This Person Does Not Exist - Random Face Generator because NONE OF THE PEOPLE that thing generates exist, existed, or ever will exist. Look how real they look.

Now, I’m done with this subject. If anyone needs any further proof of what I’m saying, Google exists. It will back me up.

Judging by the likes vs. dislikes, the vast majority of people in that thread agree with the latter person. I myself am not an expert on AI generation. If there’s anybody here more qualified than I who can support or refute this argument, I’d appreciate it.


Funny that you should bring that up.

In the AdventureQuest Classic online game, originally, it was implied that the pet shop owner was into bondage, but any mention of that was removed from the game after the company decided to focus on being “kid friendly”, and yes, I can safely assume that that was not originally the case, given that the head of Artix Entertainment is an Evil Dead superfan. When I brought that up on the forums, I got a warning. Hilariously, any mention of the Paladin Order committing genocide was also removed from the game, but you can at least talk about that change on the forums.


First off, a disclaimer: while I did study computer science and worked on an AI project during my time at university, that knowledge is OLD, especially for computer science and AI specifically. We see extremely fast development here, and what might have been universally accepted truth yesterday might be shown to be false tomorrow. Furthermore, while my studies involved AI and how it works, they were VERY shallow. So I am by no means an expert in this field, but I should know more than the average person.

Now to the claims in the two comments:

I would disagree with both statements. The AI does not use the input images in its generation in any way, nor parts of said images. What it does is essentially build a network that says: “if there is a ‘t’ in the text input at position 345, the likelihood of the red value of the pixel at location (345, 455) being 255 is 0.785954%”. It’s these inputs, taken together many times over with a handful of other mathematical functions, that essentially generate the output.

(As a side note, more modern AIs, like Stable Diffusion and DALL-E, take a (static or user-given) image as input, add random noise to it, and then do the same as above. ThisPersonDoesNotExist works simply by the above method. For a more detailed explanation of these AIs, see How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile - YouTube.)

So by the time you run the final network, there is essentially nothing left of the training images, neither entire images nor parts of them. (In the case of noise-based AIs, the initial image is still in there; however, unless it is user-given (and we will come to that later), it is hand-picked, so it is certainly not going to be CP.) We then generate an output based on the likelihood of a certain pixel’s color values being certain values.

It therefore also does not learn what human flesh is. It just learns that, based on all the inputs (some of which are random, some user-given, some internal, some just your last five Google searches), the most likely color of the pixel at (345, 455) is bright red. It has these most likely colors for every pixel in the output image and then just writes them to a file / sends them back at you. But at no point does it have any idea what anything is. The picture usually used to explain AI to beginners is that every layer in an AI model tries to do exactly that, but that is not what happens in reality.

So AI, in the end, is just a lot of statistics skewed in a certain direction. Based on that, we can GUESS the input from the output using statistical analysis. However, such analysis would require both the sample sets and the user inputs; since we usually have neither, we don’t really have the means to do it.
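To make the “start from noise, end at a statistically likely result” idea concrete, here is a minimal toy sketch of iterative denoising. This is purely illustrative and not a real model: the “learned prediction” is just a fixed vector standing in for what a trained network would output, and all names and numbers here are made up for the example.

```python
import random

# Toy sketch of iterative denoising, the core idea behind
# noise-based generators such as Stable Diffusion. NOT a real
# model: PREDICTION is a fixed stand-in for what a trained
# network would predict at each step. The point is that
# generation starts from pure random noise and is nudged,
# step by step, toward a statistically likely output; no
# training image is ever copied into the result.

PREDICTION = [0.2, 0.8, 0.5, 0.1]  # stand-in for the network's output
STEP_SIZE = 0.1                     # fraction of estimated noise removed per step
STEPS = 100

def denoise(noisy):
    x = list(noisy)
    for _ in range(STEPS):
        # each step moves the current sample a little toward the prediction
        x = [xi + STEP_SIZE * (pi - xi) for xi, pi in zip(x, PREDICTION)]
    return x

random.seed(42)
start = [random.gauss(0, 1) for _ in range(4)]  # pure random noise
result = denoise(start)
```

Run with different seeds and the starting noise changes, but the output always converges toward the statistically likely values; that convergence-from-noise, rather than any lookup of stored images, is what the explanation above describes.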

So much for how AI generally works; now on to the next part, the inputs. These are actually extremely important, as skewed inputs can skew the AI’s results.

So where does the sample data for, say, DALL-E come from? Well, since I am lazy, I just asked ChatGPT about it. The answer:

DALL-E is a neural network-based generative model developed by OpenAI that can create images from textual descriptions. According to OpenAI, DALL-E was trained on a dataset of about 250 million images, and the model itself contains 12 billion parameters.

It’s worth noting that the images used to train DALL-E were not hand-labeled or curated, but were instead sourced from the internet using a variety of methods. This approach, known as self-supervised learning, allows the model to learn from a vast amount of data without the need for manual labeling or annotation.

Overall, the scale of the training dataset used to train DALL-E is one of the factors that has enabled the model to generate high-quality images from textual descriptions, demonstrating the power of large-scale machine learning techniques.

So this gives us a couple of pieces of info: 1) the sample image set for DALL-E was 250 million images. That’s more than anyone could check by hand, and OpenAI openly says they did not. So
2) the data never got checked.
So we have no idea what images they used, and neither does OpenAI. All we know is that they came from the internet … well, that is also where most viewers get their CP from, so …

While there probably was some filtering based on some sort of algorithm, we know neither the quality nor the kind of filter. And the hash filters mentioned in the second comment are notoriously easy to defeat when they are exact (cryptographic) hashes, since those only match the exact inputted image, not one with the smallest change to it. (Perceptual hashes, such as Microsoft’s PhotoDNA, are designed to survive small edits, but we don’t know which kind any given platform uses.)

In other words, take the original cp image and add a black box somewhere in the background where there was none before. And you defeated a hash based filter.

So we can’t really guarantee that CP images were not used in the training of the AI. However, since the input image is essentially unused afterwards, and all we use after the initial training are statistical properties of the input, is it really relevant?

The conclusion of course changes if the user-input image in one of the noise-based AIs is CP, and here too we basically have no idea whether it was or was not.

Can AI generate realistic images without real training data?

Theoretically, yes. Again, AI is purely statistics, so you can skew the output freely away from the input. However, the further you deviate from the input, the harder this becomes. Additionally, the output will have certain “artifacts” that betray the original training images, for example an adult-looking vagina on a child, or a randomly appearing item of clothing over a mostly naked body. But at least in theory, with enough data, a lot of time, and supervised learning, yes, AI can. And if it does, unless the trainer releases their training data, we basically have no way of knowing.

We can’t realistically tell what data has been used to train an AI. For an AI that uses user-inputted source images, we have no idea what that image was (unless the user makes it public). And arguably the training data does not matter, since out of the 250 million images used, only the abstract statistical likelihood of a color at a certain location remains.

I would therefore argue that the second comment is the more correct one, but as outlined above, its author is still wrong about a handful of things, some of which are relevant to the discussion.

Legally, however, a completely different problem with realistic AI-generated pictures remains. In many countries it does not matter whether the image is real or not; it is sufficient if it seems real to the average consumer, i.e., if you as the viewer cannot tell whether the image is real or fake.


Yeah, I thought we’d need a new thread to address that. Thank you for all the info! I’ll have to read it tomorrow since it’s really late here.


This is why ATFBooru deletes CGI and AI images that look “too real”. It’s not necessarily for moral reasons, but because the site might get shut down if they allowed it. Either they have only unrealistic loli on the site or they have no site at all.


It’s obvious. They banned dolls because they wanted to. Then, made up nonsensical reasons for doing so.

Literature about physical attraction obsesses over shape. The reason is that shape is what one is attracted to. No one is attracted to a shape because of who or what bears it. Therefore, it’s not about representation or similarity. An attraction doesn’t care about what material bears the shape, or whether something else has the same shape.

Moralism is nonsense. One can hand any doll to a dog as a chew toy, and no one will care. What someone does to any item one can hand to a dog as a chew toy has no moral relevance.

I’ll fix the grammar later. I have to run.


I can only wonder who ATF thinks they are appeasing by not allowing “realistic” 3D content on their site. I have seen a lot of it on sites that have been around for years and it’s always been quite obvious to me that it’s nothing more than 3D rendering. So far as I know, in countries where this content would be illegal, it’s also just as illegal to show loli and shota. The laws don’t differentiate the artistic quality required to deem content illegal.


The other interesting thing on ATF is that they also don’t allow FaceApp to be used because it makes dolls look “too real-life”. I’ve used it a little with some good results for my own enjoyment as have others.

This entire thread has carved a winding path of ‘why does everything about dolls, according to non-doll owners, have to point toward SEXUAL fantasy of unknown real people?’ Versus just realism of a fantasy of an imagined person who doesn’t exist. Whether or not this artificial facsimile object really is treated as such shouldn’t matter. I don’t care how they use their broom.

Seems they’re more protective of real-life strangers being “used” in someone else’s thoughts. Which seems really odd to me: they’re so overprotective of an unknown/imaginary person, “as if” they may actually exist and you’ve somehow violated that person in your mind, therefore making you a terrible person in their mind. Now they’re thinking badly of you, violating you in their mind.

All the while, they’re having thoughts of murdering you because of these imaginary “thought crimes” they perceive you as having committed against others, or thinking of all the terrible things you “may” or “could” do to a stranger they have never met, nor even know exists, when those thoughts have never even crossed your mind. It’s absolute lunacy when you deconstruct it!


I kinda agree with that policy in this case. While you’re free to use FaceApp for your own enjoyment, putting nonconsensual sexual content (even fictional) involving a real person in a place where that person could find it isn’t okay, since there’s a real risk of them being harmed by encountering the content. Banning that content on a website that already requires registration to view posts might be overkill, but I think it’s still a reasonable measure to ensure there’s no chance of someone finding something that involves them.

And a lot of people view the content of ATF for sexual reasons. Even if the poster doesn’t, viewers might, and other viewers know that. It’s an inherently sexual context for the images it hosts, even if the images themselves weren’t created with sexual intent. Obviously, you’re still free to do whatever you want in private, but their moderation has to be based on the assumption that anything posted there can and likely will be used for a sexual purpose by at least some people.


I meant to add, I agree with the policy, I have no problem with it. We as doll owners share amongst ourselves in other spaces because we choose to.

The fact is that yes, many people are on there for sexual reasons, but that’s still none of my business. I know what I’m signing up for when I join or decide to post there. Even if my images or others’ were not created for sexual purposes, who am I to control how another person views them? More than one young person has masturbated to the Sears catalog women’s lingerie section in times past. Stay out of other people’s heads! What they think and how they view something is none of my business.

Part of what you said made no sense though. If it’s fictional, how does it involve a real person? The girl whose smile is used for FaceApp knew her smile would be used for millions of pictures. She signed up and got paid for it. Makes her a willing participant.

I don’t claim to know how the AI works, but it’s interstitched with the original photo of whomever it may be. If it’s a doll, it’s not a real person to begin with, so there is no “real person” likely to find themselves unknowingly posted somewhere. Just a girl whose smile was willingly given to be used for such a thing, with others “possibly” having “dirty” thoughts about them.

All of this sort of rhetoric is hypothesizing and pontificating on things that should not be anyone’s concern. I can’t tell another person what or how to think, much less assume it or read their mind. All of this is morality posturing, attempting to police people’s thoughts.


And honestly, that’s most of what all of these attitudes toward MAPs, the LGBTQ, and doll communities are all about. What other people “may” be thinking or “possibly” contemplating doing. All of it based on assumptions, lack of understanding, and fear.

It’s all residing in these people’s imaginations. It’s not real. It’s not reality.


I don’t know how FaceApp works, so that’s my bad for assuming. I’d consider the situation you’re describing to go by the same rules as stock images, and that is deferring to the terms of the license (which in this case would be reflected in the FaceApp Terms of Service). If the license does not restrict the context in which the content can be used, that is a form of consent on behalf of the copyright owner. Of course, this is usually someone different from the image’s subject, and if you become aware of evidence that the subject does not want the content to be used in this way, that should be respected (though you’re under no obligation to go looking for their personal views on the subject, ensuring their wishes are communicated is the responsibility of the company providing the image).


I’m sure I’m not the first person to use another AI’s algorithm to animate my doll, even though it states not to use pictures of children. Well, she’s a doll, not a real child, so am I breaking the rule? That’s where the debate starts once again. She has a realism about her, but being a facsimile, she is clearly not alive, and our human ability to distinguish the difference only gets blurred (though not 100%) when we combine the heuristics of the AI with the artificial physical object (the doll).

I asked permission from the parents and have shown her picture to a few children to ask what they thought. I said, “this is so-and-so, what do you think?” Even with a few of the FaceApp pictures, an 8yo could clearly see she was a doll and immediately said so. I merely confirmed that she was just that to me, a life-size doll. That I loved her no different than a child would love their own doll.

They had no interest in her at all. They had more interest in the mind-numbing game-app they were playing with. What does that tell us? They’re not viewed as a sex object to a child. The question of using them that way is never asked. They need to be told that they are.

So who are the ones with the “issue”? Who’s really viewing children as potential sex objects? I don’t see children as such. I don’t see my dolls as sexual objects. I don’t use them as sexual objects. So, why must I and others, just because I own them, be thrown in the same bin as many others as “potential” predators and labelled as such?

It’s backward thinking.


New article! Follow @stopdollhate on Twitter. https://twitter.com/stopdollhate


Dr. David Ley states correctly that the primary reason to ban is to punish folks who have an attraction pattern with no regard about harm.

Attempts to regulate and prohibit such behaviors appear, at this time, to be primarily driven by feelings of disgust and anger at those who hold such desires, with the intent to punish, ostracize and eradicate those who have such pedophilic interests.

The doll laws don’t make sense. When both a harmless act and a harmful act are punished, the only reason for choosing the harmless act can be preference. Punishing the harmless act is, then, punishing the choice to be harmless. It doesn’t make sense to punish someone for choosing to be harmless.

One part that irks me is the assertion that attraction and desire are the same. They are not the same. No one controls whether one likes the smell of baking bread. One does control whether one acts on it or even has an interest in it. Attraction works exactly the same way.


Thanks to another member of our doll community for finding this article. There goes the “video games promote violence” argument. He said, “Replace the words ‘video games’ with ‘dolls’ and there you have it!” Doll owners are less likely to offend than “Uncle Harry”.