Preventing and reducing child sexual abuse material use through on-device technology

There is an EU-funded program to develop an app designed to help CSAM offenders and MAPs avoid accessing CSAM by filtering such content out of search results and blocking its display if it is accessed anyway. The idea behind the project is to further study how and why people consume CSAM and also to help law enforcement reduce actual cases.

There is a very bitter taste to this project, since the technical part is led by the IWF, which also accepts reports of cartoons and even humanoid furry imagery. It is not unlikely that this app will also try to filter out fiction, or somehow report it. When will this “helpful” tool become a mandatory app pre-installed on every device? When will it be developed further to also cover unpopular speech?

5 Likes

Such a thing would only work if it filtered out all adult content, in addition to known-CSEM hashes, but I suppose it helps.
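
For context, “known-hash” filtering is just a lookup against a list of fingerprints of already-identified material. A minimal sketch of the idea (using plain SHA-256 as a stand-in; real hash lists typically use perceptual hashing such as PhotoDNA so that re-encoded or cropped copies still match, and the sample hash below is just the digest of an empty file):

```python
import hashlib

# Hypothetical hash list; real deployments distribute perceptual hashes,
# not plain SHA-256 digests. This entry is the SHA-256 of empty input.
BLOCKED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def should_block(image_bytes: bytes) -> bool:
    """Return True if the content matches a known-bad hash."""
    return hashlib.sha256(image_bytes).hexdigest() in BLOCKED_HASHES

print(should_block(b""))     # True: matches the sample entry above
print(should_block(b"cat"))  # False: unknown content passes through
```

Exact hashing only catches byte-identical copies, which is exactly why it can’t help with new or altered material, let alone tell legal adult content apart from anything else.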

4 Likes

That would defeat the purpose of such an app, no? If I wanted to look at legal porn, I’d have to switch the app off, which would mean my phone is no longer filtering anything out, meaning any CSAM caught in the search would inevitably pop up on my screen anyway. The app is only useful if it can tell the difference. I dunno if I trust an app to do so, especially if it comes with an automatic reporting system.

4 Likes

Depending on how they’re identifying CSAM, the tool should be able to distinguish between that and adult content.

2 Likes

Technically, such a tool is certainly able to distinguish real from fictional and adult from child material. Considering the IWF is involved, however, I am not sure that is wanted.

They might decide to block childish-looking stuff together with real child porn and fictional child porn …

3 Likes

My concern is the blocking of porn stars who simply look too young: legal adults who resemble minors at first glance. If I were to search specifically for petite porn, would this app just block me from seeing any of it because it can’t tell the difference? Telling fiction from reality is one thing; telling apart two similar real things (legal and not legal) is another.

I imagine an AI couldn’t differentiate between, say, an illegal 17yo and a legal 18yo because of how close they are. And would the AI block out and report a 20yo if they looked 16?

2 Likes

In its current implementation, the system only blocks content; it doesn’t report anything. I’d assume there are some false positives and false negatives if it’s truly AI-based, although I don’t know how common they are or which one the system favors.
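
If it is AI-based, that block-only behavior basically comes down to a score and a threshold, and where the threshold sits decides which error the system favors. A rough sketch (the score and threshold values here are made up for illustration):

```python
def filter_decision(score: float, threshold: float = 0.9) -> str:
    """Block-only policy: content is either shown or hidden, nothing is reported.

    `score` is a hypothetical classifier confidence (0..1) that the content
    is CSAM; the threshold is where false positives get traded for false
    negatives.
    """
    return "block" if score >= threshold else "show"

# A lower threshold blocks more legal content (false positives),
# a higher one lets more illegal content through (false negatives).
for t in (0.5, 0.9, 0.99):
    print(t, filter_decision(score=0.7, threshold=t))
```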

2 Likes

Germany has actually addressed this in the EU chat control debate. Since adult actors who could pass for the age ranges you mentioned are not illegal, people who tend to watch young-looking adult actors run a very high risk of being wrongfully targeted.

Source: https://www.bundestag.de/resource/blob/935532/8114aeba9ed4f9e2dcde001a7107062c/Stellungnahme-Steinebach-data.pdf, page 4, last sentence of fourth paragraph

3 Likes

According to experts from German law enforcement and IT institutes, the false positive rate is very high when it comes to unknown (not yet hashed) material. A special task force has been using such an AI tool for five years now, and they say the false positive rate is 10% for unknown imagery and 20% when it comes to recognizing grooming.

With billions of images and text messages being shared each day, you can pretty much guess what would happen.
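
Just to make that concrete, a quick back-of-the-envelope using the 10% / 20% figures. The daily volumes are my own assumptions, and the cited figures don’t specify the denominator, so read this as the pessimistic interpretation where those shares of scanned benign content get flagged:

```python
# Back-of-the-envelope: absolute false flags per day.
# Rates are the ones cited above; volumes are illustrative assumptions.
images_per_day = 1_000_000_000       # assumed number of scanned images
messages_per_day = 5_000_000_000     # assumed number of scanned messages

fp_rate_unknown_images = 0.10        # unknown (not hashed) imagery
fp_rate_grooming = 0.20              # grooming recognition in text

print(f"Wrongly flagged images/day:   {images_per_day * fp_rate_unknown_images:,.0f}")
print(f"Wrongly flagged messages/day: {messages_per_day * fp_rate_grooming:,.0f}")
```

Even if the real denominators are much smaller, the absolute numbers quickly outgrow any realistic review capacity.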

4 Likes

Still, we don’t even know whether it’s using AI and, if so, what AI system it’s using.

Also, recognizing grooming is a lot different from recognizing CSAM (one is text, the other is images).

2 Likes

I mean, even 99.999% accuracy is not something worth implementing.
With the amount of images being shared each day, this would still cause a lot of false positives. Media sharing is also not gonna go down, but increase steadily. Despite popular belief, a big chunk of the human population is still not using the internet on a regular basis.
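
The same arithmetic with a hypothetical 99.999% accuracy (the daily volume is again an assumption, purely for illustration):

```python
accuracy = 0.99999               # hypothetical near-perfect filter
items_per_day = 1_000_000_000    # assumed daily volume, illustration only

errors_per_day = items_per_day * (1 - accuracy)
print(f"{errors_per_day:,.0f} wrong decisions per day")  # ~10,000
```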

On another note though: why is there not even a discussion about broadening this technology to also cover torture imagery? Stuff like this just highlights that it is not about protecting kids. There is never even talk about such media and its existence, but I doubt the internet is void of it.

2 Likes