2023 certainly was an interesting year…
My biggest concerns are less about the ways children are exploited and more about the investment in areas where they are not. I’ve kept my ear to the ground with regard to the advancing developments in AI and how they relate to CSA, and I’ve learned quite a lot of useful information.
To put it simply, it was not good. It’s not good for fiction as a whole, nor is it good for child exploitation prevention.
It has been extremely stressful to watch industry-leading experts jump out of their chairs to discuss how to tackle materials that do not depict actual children and were not derived from content depicting actual child abuse, often parroting unproven assertions about these materials and how they affect risk, while others do very little to challenge or scrutinize any of these claims.
I’ve observed them identify real, but limited and situational, instances where AI tools can be used to exploit actual children through deliberate misappropriation of their likenesses, yet fail to develop or suggest solutions for those specific circumstances beyond simply reporting them, while seemingly leveraging those cases into an argument for a broad approach to generative AI as a whole, regardless of whether it meets those narrow criteria.
All the while, companies are humoring concepts like AI-based classifiers that scan for CSAM, with little regard for the privacy implications or the prospect of false positives. It’s all so horrifying, so concerning, and quite frankly it gives me genuine nightmares.
It’s all so tiring, so bewildering, and over time it pecks at my core. It truly tests my faith in this system’s ability to stop and question whether child sexual exploitation revolves around actual acts and the products of those acts, or whether ‘child exploitation’ has become a malleable idea, confined not to the prospect of a ‘real child’ but extended to the mere concept of one; at that point I question whether such reflection is something the system is even capable of.
It is tiresome that nobody outright challenges this approach, or even names it for what it is.
My heart goes out to all of those who are struggling with desires and attractions they did not choose to have, who are encouraged to live in fear not over what they might do, but over what they feel.
My heart aches with concern for all of the child victims of abuse whose cases have to be set aside because their abuse does not elicit the same degree of disgust as a particularly graphic AI-generated picture of a fictional character.
My heart beats with pain at the thought of innocent people’s lives being upended over the creation or possession of materials that are neither relevant nor pertinent to the issue of child sexual exploitation or abuse, because they do not involve an actual living child.
I’ve shed tears for the myriad opportunities to study and capitalize on the therapeutic effects these materials may have, and on the faithful, time-tested ‘fiction is not reality’ mantra as it relates to CSA prevention, all lost because countries and agencies would rather act on feelings of disgust or fear than on anything grounded conclusively in science.
These recent advancements in AI technology certainly have been interesting and beneficial for a lot of people, but not so much here.
I still remember when I first saw AI being used to generate this type of imagery.
My first thoughts were of concern, not because of what the technology was capable of, but because of the reactions policymakers and judges would have the moment they saw something they didn’t like, and what the ripple effects of all that would be for fiction and CSA prevention going forward.
And it’s horrifying to see these fears realized, so much so that I often find myself paralyzed in thought, contemplating how things could worsen.
It’s all so tiring. It’s all so worrying. It’s all so heartbreaking.
My Hopes for the Future
SCIENCE
I want to see proper advancements in the science of fictional/virtual child pornography, realized in something that is both tangible and impactful.
I want to see the science finally investigate and state conclusively whether these materials have a risk-supportive effect or a protective effect with regard to the risk of contact CSA or CSAM consumption.
As of this writing, the consensus is still very much grey and undecided. Many pundits argue that such materials are harmful and should be banned, despite the fact that they do not harm or involve real minors, and offer up claims that are usually intuition-driven assertions with very little factual substance to ground them, or flat-out substitute rhetoric for empirically sound evidence.
I want to see these arguments questioned and scrutinized, because after literally decades of research on media and its effects on the risk of antisocial or criminal behavior, no causal link has been, or could be, established.
From this, combined with what I’ve seen and read from other authorities in this field, and from talking with consumers and creators of this content and observing their cultures, I’ve come to the conclusion that these materials do not cause harm.
LEGISLATIVE/JUDICIAL DEVELOPMENTS
I want to see the obscenity doctrine within the United States revisited and overturned on its face by SCOTUS.
I also want to see the broader liberalization of the federal judiciary, with more liberal judges being appointed to serve in district courts, as well as appellate courts.
I want to see the obscenity precedents from Roth, Miller, Hamling, and onward overturned on their face, with a finding that the First Amendment does, in fact, protect speech from being banned as ‘obscene’, and that such a vague, arbitrary, and opinionated legal doctrine is patently and fundamentally antithetical to the very concepts of Free Speech, Privacy, and Due Process, in the same way that ‘Separate but Equal’ was to Equal Protection and Due Process.
The very idea is nauseating and clouds proper judgement.
INDUSTRY CHANGES AND PRACTICES
I want to see a stop to, or at the very least a loss of interest in, the development and implementation of AI-powered CSAM detection, in favor of full trust in and reliance on verified hash databases to proactively scan for, remove, and report instances of CSAM.
Discord, Microsoft, and others shouldn’t be jeopardizing user privacy, and potentially user safety, by trying to automate this. Even with the help of human auditors, this type of approach is heart-wrenching and concerning.
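To illustrate why I see hash matching as the less fraught approach, here is a minimal sketch in Python, assuming a hypothetical plain-text hash list (`verified_hashes.txt`) and upload directory (`./uploads`) that don’t come from any real system. Production systems typically use perceptual hashes such as PhotoDNA rather than raw cryptographic digests, but the core property is the same: a file either matches a verified hash or it doesn’t, with no probability threshold to tune.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_directory(root: Path, verified_hashes: set[str]) -> list[Path]:
    """Return files whose exact digest appears in the verified hash set.

    A match is deterministic: barring a hash collision, the file is
    byte-for-byte identical to known, verified material. There is no
    model score to threshold and no classifier to second-guess.
    """
    return [p for p in root.rglob("*")
            if p.is_file() and sha256_of_file(p) in verified_hashes]

# Hypothetical usage; file names here are illustrative only.
if __name__ == "__main__":
    verified = set(Path("verified_hashes.txt").read_text().split())
    for match in scan_directory(Path("./uploads"), verified):
        print(f"exact match against verified list: {match}")
```

The trade-off is that exact digests miss re-encoded copies (which is why perceptual hashing exists), but in exchange the match itself is deterministic, whereas a classifier outputs a score whose inevitable errors fall on innocent users.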
I would also like to see NCMEC limit its focus to materials which implicate or involve real minors, and not expand its CyberTipline and Industry Hash Sharing initiatives to include fictional content.
FINAL NOTES
I would like to conclude this post with special thanks to @prostasia for existing and functioning the way you all do, to @elliot , @terminus , @Gilian , and everyone else involved with the organization.
You are all a fire of hope that burns within me and everyone else. Everyone who enjoys or indulges in fiction owes a debt of gratitude to the Prostasia Foundation for what they do, for the network of advisors they’ve built, and for the research and advocacy they’re helping to further.