My Thoughts on 2023 - About Prostasia, CSA Prevention, and Fiction

2023 certainly was an interesting year…

My biggest concerns are less about the ways children are actually exploited and more about the investment in policing ways in which they are not. I’ve kept my ear to the ground with regard to the advancing developments in AI and how they relate to CSA, and I’ve learned quite a lot of useful info.

To put it simply, it was not good. It’s not good for fiction as a whole, nor is it good for the prevention of child exploitation.

It has been extremely stressful to watch industry-leading experts jump out of their chairs to discuss how to tackle materials that do not depict actual children and were not derived from content depicting actual child abuse, often parroting unproven assertions about how these materials affect risk, while others do very little to challenge or scrutinize any of these claims.

I’ve observed them identify real, but limited and situational, instances where AI tools can be used to exploit actual children through the deliberate misappropriation of their likenesses, yet fail to develop or suggest solutions for those specific circumstances beyond simply reporting them. Instead, they seem to leverage those cases into an argument justifying a broad approach to generative AI as a whole, irrespective of whether a given use meets those limited criteria.

All the while, companies are humoring concepts like AI-based classifiers that scan for CSAM without regard for the privacy implications or the prospect of false positives. It’s all so horrifying, so concerning, and quite frankly it gives me genuine nightmares.

It’s all so tiring, so bewildering, and over time it pecks at my core. It truly tests my faith in this system’s ability to stop and ask whether the concept of child sexual exploitation revolves around actual acts, or products of those acts, or whether ‘child exploitation’ is a malleable idea no longer confined to the prospect of a ‘real child’ but extended to the mere concept of a child. At that point, I question whether such reflection is something they’re even capable of.

It’s all so tiresome that nobody outright challenges this approach, or even names it for what it is.

My heart goes out to all of those who are struggling with desires and attractions they did not choose to have, who are encouraged to live in fear not over what they might do, but what they feel.

My heart aches with concern for all of the children who are victims of abuse whose cases have to be set aside because their abuse does not elicit the same degree of disgust as a particularly graphic AI-generated picture of a fictional character.

My heart beats with pain at the thought of innocent people’s lives being upended over the creation or possession of materials that are neither relevant nor pertinent to the issue of child sexual exploitation or abuse, because they do not involve an actual living child.

I’ve shed tears for the myriad opportunities to study and capitalize on the therapeutic effects these materials may have, and for the faithful, time-tested ‘fiction is not reality’ mantra as it relates to CSA prevention, all being lost because countries and agencies would rather act on feelings of disgust or fear than on anything grounded conclusively in science.

These recent advancements in AI technology certainly have been interesting and beneficial for a lot of people, but not so much here.

I still remember when I first saw AI being used to generate this type of imagery.
My first thoughts were of concern, not because of what the technology was capable of, but because of what reactions policymakers and judges would have the moment they saw something they didn’t like, and what the ripple effects of all of that would be for fiction and CSA going forward.

And it’s horrifying to see these fears be realized, so much so that I often find myself paralyzed in thought, in contemplation of how things could worsen.

It’s all so tiring. It’s all so worrying. It’s all so heartbreaking.

My Hopes for the Future


I want to see proper advancements in the science of fictional/virtual child pornography, realized in something that is both tangible and impactful.
I want to see the science finally investigate and state conclusively whether or not these materials have a risk-supportive effect or a protective effect with regard to risk of contact CSA or CSAM consumption.

As of writing this post, the consensus is still very much grey and undecided. Many pundits argue that such materials are harmful and should be banned, despite the fact that they do not harm or involve real minors, and offer up various claims that are usually intuition-driven assertions with very little factual substance to ground them, or will flat-out use rhetoric in place of empirically sound substance.

I want to see these arguments questioned and scrutinized, because after literally decades of studying media and its effects on the risk of antisocial or criminal behavior, no causal link has been, or could be, established.

Combining this with what I’ve seen and read from other authorities in this field, and with what I’ve learned from talking with consumers and creators of this content and observing their cultures, I’ve come to the conclusion that these materials do not cause harm.


I want to see the obscenity doctrine within the United States be revisited and overturned on its face by the SCOTUS.

I also want to see the broader liberalization of the federal judiciary, with more liberal judges being appointed to serve in district courts, as well as appellate courts.

I want to see the obscenity precedents from Roth, Miller, Hamling, and onward to be overturned on their face, finding that the First Amendment does, in fact, protect speech from being banned as ‘obscene’, and that such a vague, arbitrary, and opinionated legal doctrine is patently and fundamentally antithetical to the very concepts of Free Speech, Privacy, and Due Process in the same way that ‘Separate but Equal’ was to the concept of Equal Protection and Due Process.

The very idea is nauseating and clouds proper judgement.


I want to see a stop to, or at the very least a loss of interest in, the development and implementation of AI-powered CSAM detection, in favor of full trust in and reliance on verified hash DBs to proactively scan for, remove, and report instances of CSAM.

Discord, Microsoft, and others shouldn’t be jeopardizing user privacy and potential safety by trying to automate this. Even with the help of human auditors, this type of approach is heart-wrenching and concerning.
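For contrast, the hash-DB approach is mechanically simple. A minimal Python sketch of matching file digests against a set of verified hashes follows; note that it uses exact cryptographic digests for illustration, whereas deployed systems rely on perceptual hashes (e.g. Microsoft’s PhotoDNA) so that re-encoded copies still match, which this toy version does not handle:

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def match_against_db(paths, known_hashes):
    """Return the paths whose digest appears in the verified hash set."""
    return [p for p in paths if sha256_of_file(p) in known_hashes]
```

The key property is that matching happens only against previously verified entries, so there is no classifier guessing involved and no false positives beyond hash collisions.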

I would also like to see the NCMEC limit their focus to only materials which implicate or involve real minors and not expand their CyberTipline and Industry Hash Sharing initiatives to include fictional content.


I would like to conclude this post with special thanks to @prostasia for existing and functioning the way you all do, to @elliot , @terminus , @Gilian , and everyone else involved with the organization.

You are all a fire of hope that burns within me and everyone else. Everyone who enjoys or indulges in fiction owes a debt of gratitude to the Prostasia Foundation for what they do, the network of advisors they’ve built and the research and advocacy they’re helping to further.


This will never change, Chie. Have you seen the draft of the new UN cybercrime convention?

  1. For the purposes of this article:
    (a) The term “child sexual abuse or child sexual exploitation material” shall
    include material that depicts or represents a child or a person appearing to be a child;
    (b) The term “material” shall include images, video and live-streaming media,
    written material and audio recordings.

Source: A/AC.291/22

States have the option to limit it to real children only, but that is no longer the default. Hentai and stories are now officially CSAM/CSEM, and companies can use every international tool to attack them.


I have faith that it will be disputed by the United States and Japan once again. “A person appearing to be a child” or “represents a child” are definitively overbroad and extend beyond the interests of actual children.

It can’t plausibly be argued that prohibiting depictions which “represent” or “appear to be” children, but are in fact not of actual children, serves their interests: there is no underlying exploitation or abuse occurring, and the mere existence and consumption of such media, which does not depict a real child, has not been shown to put the rights of real children at risk.

It can only be argued from a standpoint of legal moralism, and legal moralism is not particularly relevant to child protection.


The optional clause most won’t use:

  1. A State Party may require that the material identified in paragraph 2 (b) be
    limited to material that:
    (a) Depicts, describes or represents a real child; or

is all we get. That is most likely thanks to JP.


I thought NCMEC didn’t care about fiction


Well, it’s something.

What I’m about to say is probably going to come off as insensitive.

But the prohibition of fictional/virtual child pornography, where the production did not involve, nor implicate the rights of an actual child is probably one of the most egregious and misguided acts of consequential genocide that our society could ever muster.

It’s not like the genocide of the Jews, against the Ukrainians, or even the Cambodians. Rather, it is more akin to the type of genocide attempted by governments to punish the LGBT community, or to silence secular skeptics and critics. It is both an ideological and cultural genocide, because it targets both the idea and the people who are predisposed to falling in with certain groups.

The sooner we, as a society, can distinguish a pedophile/MAP from a child molester, and recognize that they can live their lives and reconcile their desires with the reality that they must never be acted on in a way that harms a real child, the sooner we can perhaps break open a type of humanitarian understanding that enables us to focus on what matters, which is harmful behavior. The Japanese figured this one out relatively soon.

They generally don’t. But with AI and the level of realism it can achieve, it can be challenging for them to determine whether something is a deepfake of a real child or purely fictitious, which is why I’m worried.

Technology and techniques already exist for this, but all the talk from certain people and groups makes me uneasy about whether it will stay limited to just that.


There are not enough people in influential positions to make this sea-change in modern culture. At least I don’t think there are. Movies and TV can really change public perceptions. Hollywood was well stocked with gay people, and it still took decades to change the public’s attitude on homosexuality once they started.


I hate this. There is an abundance of adults who look young; if you didn’t know they were adults (say it’s a random picture of someone on social media), you might assume they were high schoolers (who, legally, are “children”). We should not be including young-looking adults in this. It’s not an adult’s fault that their genes make them look young, and we certainly shouldn’t be infantilizing adults and treating them as if they are children when they are not.


While I do agree that this is ridiculous, what about minors who look older? I knew some people in my high school class that could walk into a bar and order a drink without anyone giving a second look. Where do we draw the line?

We have a line, it’s called the age of consent. The only people who seem to think that’s a problem are the ones who think children can consent to sex and the ones who think children are no more worthy of protection than cartoons


Even 21yos can’t make a proper decision.

Who’s kidding who?? I know 40yos that can’t decide the right thing either.


You cannot determine the age of a person by a picture.

no, but you can require documentation to be maintained by the producers of pornographic content


Then I assume that content made with adult actors that appear, or are made to appear, youthful is alright?

imo, if the people involved are consenting, legal adults at the time of planning and production, it shouldn’t matter what the plot or their physical appearance is