Paedophiles are using AI to create child abuse images: National Crime Agency warns artificial intelligence is being harnessed to make pictures and 'deep fake' videos of real-life victims

I still exist, I’ve just been preoccupied the last few months trying to learn programming.

https://archive.is/i8UNp

Archive link because the Daily Mail suck in general and don’t deserve clicks. (But they’re the only people who’ve done an article on this.)

Paedophiles are using artificial intelligence to generate sickeningly realistic indecent images of children - before sharing them on social media with other perverts.

Abusers have seized upon the latest image-generating software to fulfil their warped fantasies, MailOnline can today disclose.

And in some cases, perverts have gone further, experimenting with ‘deepfake’ technology to paste the faces of real-life youngsters and child actors onto naked bodies created by a computer AI, authorities say.

The revelation has shocked campaigners and prompted calls from child abuse charities for an urgent Government response, with Britain’s FBI, the National Crime Agency (NCA), insisting it is reviewing how new tech is being used by sex predators.

It follows the arrest of a computer programmer who used AI to create ‘truly shocking’ child porn images, in what is believed to be one of the first busts of its kind.

The man was tracked down by police in Spain, with investigators unearthing a ‘huge’ stash of hardcore pictures at his home in Valladolid, about 130 miles north of Madrid.

According to police he had also taken real images of children from the internet, writing them into sick scenarios that an AI image generator would create. He had also downloaded real indecent images, which included babies being raped.

The depravity of the pictures he created appalled even the most experienced detectives, with police saying: ‘The files, which caused a great impact to the researchers because of their extreme harshness, depicted real images of very young girls being raped and using disproportionate organs and sex toys.’

Britain’s NCA told MailOnline it was awake to the threat posed by sex offenders using hi-tech computer software to create vile images.

A spokesman for the force added: 'The amount of child abuse imagery found online is concerning; every year industry detects and reports an increasing number of illegal images. We constantly review the impact that new technologies can have on the child sexual abuse threat.

‘The NCA works closely with partners across law enforcement and wider government, as well as gaining insights from the private sector, to ensure we have the specialist capabilities to continue to detect and investigate AI-generated child abuse images.’

MailOnline understands images are being spread predominantly across Instagram, Facebook and Twitter. Perverts are also setting up groups on instant messaging apps Telegram and WhatsApp to ‘trade’ pictures, with others using TikTok - a social media platform popular with children.

Predators are even abusing Instagram stories to advertise huge online catalogues containing thousands of child sex abuse images, which deviants pay to download.

Campaigners have warned social media giants are not acting quickly enough when suspect accounts are reported.

It comes as MailOnline can today reveal how predators are starting to experiment with ‘deepfake’ software to paste the faces of real children onto the naked bodies of computer-generated characters.

Tech firms insist they have ‘strict’ rules to combat abuse and that they are using new software to hunt out and automatically delete known child abuse images.

But critics say the current measures are not up to scratch when it comes to seeking out computer-generated child abuse images, which are illegal to possess in the UK.

The news has outraged child protection charity the NSPCC, which says social media firms have a ‘moral and legal duty’ to act.

Richard Collard, the NSPCC’s associate head of child safety online policy, said: 'It can be incredibly distressing for parents and children to see their images stolen and adapted by offenders.

‘The negative impact can be just as significant as if the photos were unaltered. Perpetrators of abuse are increasingly becoming more and more technologically savvy which means that the threat of child sex abuse is constantly evolving.’

Mr Collard added: ‘Under the Protection of Children Act, it is illegal for these kinds of images to be made and circulated in the UK. That’s why, regardless of whether these are AI generated, social media companies have a moral and legal responsibility to intervene and crack down on these images being shared on their platforms.’

The Internet Watch Foundation (IWF), which finds, flags, and removes child sex abuse images and videos from the web, was also concerned by the reports.

Its chief executive, Susie Hargreaves, said the organisation had yet to see any deepfake abuse images of children.

But she added: 'Material depicting the sexual abuse of children normalises and perpetuates some of the most harmful kinds of behaviour. This is true even for deepfakes, or other AI-generated imagery.

‘We also know that accidentally viewing child sexual abuse material online can cause lasting damage for the person who stumbled upon it. It’s also illegal in the UK to host this type of material.’

The news comes amid growing calls for laws against deepfake technology following a porn scandal that rocked the world of young, online Twitch influencers.

Multiple young, female Twitch stars were disgusted to discover their images on a deepfake porn website earlier this month, where they were seen to be engaging in sex acts.

They had not consented to their images being used in the footage, nor were they even aware of them.

Terrifyingly, the creator - who has not been publicly named - was able to manipulate their likeness to make it appear as though they had taken part in the filming.

One of the women targeted has now vowed to sue the creator responsible, who has since removed the content.

The creator is said to have scrubbed all traces of their old site from the internet after posting an apology.

The incident has sparked fears among young internet influencers and the wider public about the extent to which the advanced AI technology can be harmful.

Among those who discovered they were on the site was 32-year-old British Twitch star Sweet Anita.

'I literally choose to pass up millions by not going into sex work and some random cheeto-encrusted porn addict solicits my body without my consent instead.

‘Don’t know whether to cry, break stuff or laugh at this point,’ said Sweet Anita, one of the victims.

Ross Anderson, professor of security engineering at the University of Cambridge, said the debate surrounding AI-made indecent images and deepfake pornography was ‘complex’.

In particular, the US and UK are at odds over how their legislation handles offenders found in possession of such images.

‘Artificial images of child sex abuse are a point of contention between the USA and Europe, because the US supreme court struck down an attempt by Congress to make all such images illegal,’ Prof Anderson told MailOnline.

'The judges restricted the law to cases where a child had actually been harmed while the image was created.

‘As a result, if you have a cartoon image of Bart Simpson being raped by his dad’s boss, that will get you jail time in the UK, while in the USA it’s completely legal. It’s not seen as porn but as social commentary.’

The academic, who has carried out research into the UK’s new Online Safety Bill, raised concerns about the focus of authorities.

He claimed too much attention had historically been centred on targeting those who view indecent images instead of tackling the more ‘grimy’ and difficult contact offending.

‘This rapidly becomes a very complex issue,’ he added. ‘The problem then is that this becomes a culture war issue: police have put far too much effort into image offences, because that is easy, and not into contact abuse, because it is hard - that kind of work is grimy and difficult.

‘There has historically been a lot of evidence of CSAM (child sexual abuse material) online because it was a useful hammer for the Treasury to raise funds. But it really has misdirected an awful lot of effort that should have been placed elsewhere.’

Social media giants have insisted they are acting - and are tightening online security measures to prevent indecent images being shared on their platforms.

Meta, which oversees Facebook, Instagram and WhatsApp, says it ‘detects, removes and reports millions’ of images that ‘exploit or endanger children’ every month.

This includes using technology to prevent links from other internet sites containing indecent images from being shared on Meta’s various platforms.

A Meta spokesperson told MailOnline: 'Sharing or soliciting this type of content - including computer generated imagery - is not allowed on our apps, and we report instances of child sexual exploitation to the National Center for Missing & Exploited Children (NCMEC).

'We lead the industry in the development and use of technology to find and remove child sexual exploitation imagery, and we work with law enforcement, child safety experts and industry partners to identify new trends and remove illegal networks.

‘Our work in this area is never done, and we’ll continue to do everything we can to keep this horrific content off our apps.’

The company added it worked with the National Center for Missing and Exploited Children (NCMEC) to flag illegal interactions which are then reported to law enforcement.

1 Like

This is honestly the point of contention I have with this type of material - whether real child abuse imagery was used, or whether a real child’s likeness is being misused by working it into abuse scenarios.

I don’t think AI art of this variety will ever reach a point where it can’t be distinguished from a real photograph; there’s always a tell, whether it’s the composition, the perspective, or the way the AI creates certain features or assets, such as anatomy.

Outside of that, though, I think it’s fine and resources SHOULDN’T be wasted going after people unless - and I say this again - a REAL child is involved.

The genie is out of the bottle - technically it’s been out of the bottle for decades - but now even the most artistically uninclined layperson can satisfy their deepest desires without causing harm to others.

Okay - THIS I just don’t believe.
People need to get their fucking heads out of their asses and learn how to differentiate reality from fiction and stop trying to find ways to validate their discomfort.
The lack of emotional self-control here genuinely concerns me because most people who are exposed to AI-generated VCP are not emotionally distraught by it unless it involves a child they know.

I agree that child sex abuse imagery is rightfully and justifiably criminalized, but I beg to differ here.

There is ZERO evidence which supports this contention. I’d argue that reading up on the history of Ancient Greece and other ancient societal practices does more to “normalize” adult-child sexual relations than anything that any reasonable person could tell is a product of fantasy or fiction.

People often understate how much simply being a product of fiction shapes the way material is interpreted by its audience. People who consume these simulated/virtual/fantasy materials do so knowing that it is wrong and illegal outside of that context, and the overwhelming majority of them are still against actual/real CSEM. That nuance, in and of itself, stands as a bulwark against these fallacious claims of ‘normalization’.

If anything contributes to normalization - as in, actual promotion of CSEM production and distribution - it’s CSEM. Not fantasy. I’d argue that, if these materials can (and I believe they do) cut down on the demand for real-life CSEM, then we ought to take advantage of this and focus on real children being exploited.

Arguments that favor censorship and criminalization of these types of works tend to be ideological rather than evidence-based, bolstered and motivated by legalese and doctrine rather than rational or critical thought.

Pay attention to the wording there; they don’t clarify whether these images themselves constitute ‘child exploitation’ when the minor depicted is not a real, living human being.

6 Likes

I’m of the same consensus. If it’s a choice between these people using AI tech to make whatever victimless stuff they want, with endless possibilities… or going outside and grooming/molesting actual children, I’d much, much rather they do the former.

But the British apparently don’t like that because it makes them uncomfortable.

As usual, they want to have their cake and eat it too.

Honestly, it won’t surprise me if they just make it a crime to possess any Stable Diffusion-like model.

7 Likes

The fact that these people think AI-generated content should be treated the same way as actual CSAM says a lot regarding their (lack of) knowledge and understanding of why CSAM is harmful. It’s not about protecting victims for them, it’s about getting rid of any sexual outlets they consider gross, whether they’re harmful or not.

10 Likes

This is probably the most sensible thing I’ve read in this entire article, though I disagree that police focus shouldn’t be directed at image-based offending at all. I believe it should be more narrowly focused on material which objectively constitutes real CSEM, not material that only appears visually or thematically adjacent but is not.

I guarantee you that, the moment such material becomes legal and the specter of government control is shifted away from fantasy and narrowed towards REAL materials, we will see a drastic reduction in actual CSEM on the clear web, since people will be free to engage with these materials while reconciling them with their moral viewpoints.

It’s no secret that people mold their moral senses out of practical conformism, and that can be leveraged - just look at Japan, Denmark, (formerly) Germany, and the US, where lolicon/shotacon communities are so pro-fiction yet fervently against real CSEM, among non-pedophilic and pedophilic consumers alike.

“but chie, that’s stupid! you have no evidence for it!”
The fact that it’s not even treated as real CSEM by most global NGOs proves my point. Investigators and LEAs who specialize in CSEM-related crimes know of these fantasy-oriented communities and how they operate - hell, Allthefallen and Lolicit flat-out cooperate with US and EU authorities when criminality is made apparent to them.
That culture of like-minded pragmatism is enough to prove my point.

This, unironically.
There are people who seriously think that the reason CSEM is illegal is that it helps law enforcement wrangle up and imprison pedophiles, which in turn fuels a misunderstanding of what pedophilia actually is. They see it as an ‘oh, they’re GOING to offend, so it makes sense that we treat this material as a sort of red-light offense’ type of issue.

But then there are people who think that it will motivate subsequent offenses - a contention with zero conclusive scientific backing. That lack of conclusiveness is usually met with ‘better safe than sorry!’, which I think is horrifically wrong, because being ‘safe’ means putting people in prison who do not deserve to be there, while adding fuel to a fire that, as the academic in @Jigsy’s quote-post says, begins to resemble a culture war more than something actually worth legislating.

7 Likes

There are a lot of ethical issues involved in AI-generated images when they involve the likenesses of real people. Pornographic deepfakes are just one small aspect of it.

Current machine-learning images can be pretty convincing, even if they are still distinguishable from the real thing by someone who knows what to look for. The question might be whether they are convincing to the average person and how much harm that can cause.

2 Likes

As I always say, between my safety and the safety of others:

If it’s possible to discern between a deepfake and real life, then it’s no different than a bad photoshop.

3 Likes

I think the issue of content created based on anyone’s likeness is sort of blown out of proportion. Just think how many times in your life some random person in a store thought you looked identical to someone else they know. This has happened to me many times throughout my childhood and adulthood, even as recently as about 3 months ago. In a world of over 8 billion people, I am sure that no matter how you draw a face, or rely on AI to do it for you, there will be someone who could identify that face as a person they have known. This makes me question: does anyone really own their “likeness”? It gets even more absurd with children, because their facial features change noticeably with age.

5 Likes

Precisely. I feel like there’s always gonna be some plausible deniability when it comes to AI porn. Unless it’s clearly based on somebody famous or on a specific preexisting photo, you can’t really prove it’s based on any particular person.

2 Likes

It seems strange to call someone a real-life victim if they’ve never been victimized IRL and someone just drew a character resembling them doing something lewd.

3 Likes

…is that you Tyciol?

1 Like

I don’t see anything wrong with redirecting attention away from real children.

Supplement child protection by allowing any detours.

Minimize the amount of people we need to focus on capturing.

Focus on those who are abusing children and distributing material.

8 Likes

To the extent that only risk of punishment influences decision making, penalizing for harmless conduct reduces incentive to choose harmless conduct over harmful conduct. It makes sense for legislation to reflect the disparity between what is valued and what is not.

From a risk of punishment perspective, punishing for dolls or generated images is valuing children no more than dolls or generated images. Such does, indeed, appear to invite issues. It also seems reasonable to feel confident that those who choose dolls or generated images in places that have a cartoon law would choose to be harmless anyway.

I like your writing.

If it can be said that a behavior normalizes itself, then it can be said that confining content to fiction normalizes confining content to fiction, and that doing no harm normalizes doing no harm. The activists like to claim that one behavior normalizes another.

I like how you address context. For sure, undressing to shower is not the same as undressing to ride a bus. A roller coaster ride isn’t intended to represent a ride to the store; it’s designed to entertain.

3 Likes

(I was going to send this response to a thread about Stable Diffusion but it was kinda old. This thread is newer and still makes sense)

Glad to see this here because it’s related to why I just created an account. I’m not new to Prostasia but I’m back because I’m getting into Stable Diffusion and I’m a little upset about something.

Just today, I noticed two of my favorite artists on Pixiv have disappeared, their art gone from the site (and I’m unable to find their original username, user ID, etc).

I have no way to contact them to ask why this happened, but they have one thing in common: they were posting images made with Stable Diffusion depicting women with younger characters, in a way that was often only suggestive, and the images were realistic at first glance (though not upon closer inspection).

Is there any indication of the legality of Stable Diffusion outputs like this, in the United States? Let alone the legality of it in my state REDACTED.

I think those two artists got kicked off Pixiv because banks and payment providers are fucking with Pixiv. In fact they are, because they already forced Pixiv to halt FANBOX payments (similar to Patreon) to R18 artists who depict underage characters. However, Pixiv claimed this enforcement would not apply to the Pixiv site itself, where there is no paywall to see art. Despite this, it seems like Pixiv may be taking down AI art like this (with young characters) that looks somewhat realistic.

It’s totally lame because I actually got into generating my own Stable Diffusion images because I was impressed with the images they posted, and right now Stable Diffusion has me hooked (reasonably hooked) after I managed to make it output beautiful images (beautiful to me).

I’ve already been skeptical of the legality of my 3D renders that show a young character with adult females, to the point of self-censoring in Pixiv conversations regarding this topic.

And now, on top of that, I have to be aware of an unknown risk I take by keeping my silly Stable Diffusion outputs.

After seeing images suddenly vanish from Pixiv, I felt like Googling around to find other people like me who are using Stable Diffusion to try to make nice scenes involving human, young-looking characters, but I self-censor again by not entering such a query in Google search, knowing that it could be used against me in the legal system.

I just can’t believe I have to worry about fucking fake images. I also can’t believe that I would be arrested in France for drawing the wrong 2D fake-ass characters.

A few days ago I randomly found some nice furry art on inkbunny and I liked the artist; their style really appealed to me. Well, guess what: they live in France and were arrested because some of their characters were “children”, and it is only thanks to a great lawyer that this person with adorable, positive art was not thrown into prison for having “child rape images”. I think the artist is a woman too, and that hurts me even more.

How can a government throw such a nice person into prison on bullshit charges, while allowing confirmed terrorists to live freely in the country - even surveilling their text messages before said terrorists took hostages in a small cafe and shot people? Yeah, that did happen, and it’s going to happen again sooner or later, but thank GOD they arrested the kind, empathetic artist who likes the color pink and drawing cute sexy things.

3 Likes