I came across an EU-funded project which is supposed to help people stop consuming CSAM by installing an app called “Salus”, which monitors the screen and blocks, in real time, any content it considers CSAM.
What I find astonishing is the way this project is being advertised. They seem to be super happy about the fact that it was trained on real CSAM, so it must be super accurate!
Can someone explain to me how this is any different from someone using an AI that was trained on the same stuff? Is that not the go-to reason to ban AI? Why the fuck could they not use artificial images? This shit is dark comedy.
I am 99% sure it is AI. Just AI for detecting, not creating. I personally dislike the use of real images in any case. But as far as I remember, the discussions about bans are almost all about banning AI for the purpose of creation, not detection. The usual reasoning is that detection cannot create anything harmful; it detects harm. No new image comes out of it. The problem with detection is that it can misclassify images, or rather, any detector WILL misclassify images. So you want to decrease misclassification.
In general, an AI is easier to train and more accurate when it is trained on examples of exactly what you are looking for. In this case: the rate of misclassification is generally lower, and the required compute resources are lower, if you use real images. Additionally, if you train the AI on artificial images, it might just end up detecting the artifacts created by AI generation. Though realistically you would want a mix of real and artificial images, since you want to block CSAM involving real children but not the artificial stuff. (I am btw 99% certain this one blocks both, but I have yet to try it.)
Using real images is hence a reasonable thing to do for detection, as it prevents harm rather than causing it. Though no filtering, or at the very least no AI filtering, would be better. The reason is that, as stated above, filtering always has the problem of misclassification: you either block too much or not enough, so your filter will always fail in some way, most likely in multiple ways. An AI filter in particular is problematic because not only does it suffer from misclassification, you also do not know in which direction it is prone to misclassify, and that direction may not even be static. E.g. it could over-block CSAM with white children and also block young-looking adults, while not blocking any CSAM with black children, or similar. AI might therefore actually do more harm than good in detection/filtering due to its unpredictability.
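To make the “block too much or not enough” point concrete, here is a minimal toy sketch (all scores and numbers are made up for illustration, nothing to do with Salus or any real detector) of how any score-based filter just trades one kind of error for the other when you move its decision threshold:

```python
# Toy illustration of the false-positive / false-negative tradeoff in any
# score-based filter. All scores are synthetic, invented for this example.
import random

random.seed(0)

# Hypothetical classifier scores: harmful content tends to score high and
# harmless content low, but the two distributions overlap.
harmless = [random.gauss(0.30, 0.15) for _ in range(1000)]
harmful  = [random.gauss(0.70, 0.15) for _ in range(1000)]

for threshold in (0.3, 0.5, 0.7):
    over_blocked = sum(s >= threshold for s in harmless)  # false positives
    missed       = sum(s < threshold for s in harmful)    # false negatives
    print(f"threshold={threshold:.1f}  "
          f"wrongly blocked: {over_blocked / len(harmless):.1%}  "
          f"missed: {missed / len(harmful):.1%}")
```

Lowering the threshold blocks more of what you want blocked but also more legitimate content; raising it does the opposite. As long as the score distributions overlap, no threshold gets both error rates to zero, which is the core reason any filter fails somewhere.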
Do me a solid and look up the reasons why people argue against AI images. It is almost always the controversial training data and not the actual outcome.
People do not want their data to be used without their consent. I would never consent to some private company using my images. They end up in the training data and who knows where else.
CSAM is forbidden because it can only be produced without (informed) consent. There is no difference to me between someone using it to masturbate and someone using it to develop a system, feeding my images into it as if I were just a worthless piece of data.
“Do me a solid and look up the reasons why people argue against AI images. It is almost always the controversial training data and not the actual outcome.” For general generative AI, yes, but the issue there is copyright. And as I said, that still applies here since it is not your image, and the consent issue applies as well, so again, I oppose this.
However, those who built and support this (like KTW’s Klaus Beier) oppose generative AI CSAM because it “would normalize and is an entry gate to csam” … What I did was explain their point of view and why, even if you follow their (flawed) logic, it is still a problem.