This is really important and I need everyone's attention here. How does this work, and what platforms will implement it (Twitter? Discord?)? Will it target even jokes, ageplay, stories of people's own abuse, or go as far as discussion of lolicon and similar material? Remember that this is an algorithm, and these things usually don't work. PhotoDNA works because it matches images that are identical to known ones, but algorithms like this don't; just look at the situation on YouTube right now.

I need to know what platforms will implement it, because it is harmful to privacy: it gives law enforcement and moderators an excuse to monitor a conversation on the basis of false positives. They claim it was used on Xbox Live for years, but it was never mentioned how well it worked against predators. And since it scans conversations, I'm also fairly sure it violates European law; I don't know about the American one, or Canada's, or the UK's. It's all confusing, and we need to know more and be kept updated on which services will implement it.
To answer your questions:
- It scans internally archived data, such as text messages and emails, and looks for patterns and perceived pedophilic connotations, which it then uses to assign a "probability rating" to someone. It then relies on a human moderator to review the flagged conversation and possibly report you based on your messages (a rough sketch follows this list).
- Any platform can install it by contacting a special email address: [email protected]. Skype will incorporate it, and many other sites likely will too.
- I bet anything even slightly suggestive will be treated with extreme suspicion.
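Here is a minimal sketch of what such a flag-then-review pipeline might look like. Everything in it is hypothetical: the phrase list, the scoring, and the threshold are invented for illustration, since the real system's internals aren't public.

```python
# Hypothetical sketch of a grooming-detection pipeline: score each
# conversation, then escalate high-scoring ones to a human moderator.
# The "model" below is a stand-in phrase list, not the real classifier.
from dataclasses import dataclass

@dataclass
class Conversation:
    participants: tuple[str, str]
    messages: list[str]

# Invented stand-in for a trained pattern model.
SUSPICIOUS_PATTERNS = ["our secret", "don't tell your parents", "how old are you"]

def probability_rating(convo: Conversation) -> float:
    """Assign a crude 0..1 'probability rating' from pattern hits."""
    text = " ".join(convo.messages).lower()
    hits = sum(p in text for p in SUSPICIOUS_PATTERNS)
    return min(1.0, hits / len(SUSPICIOUS_PATTERNS))

REVIEW_THRESHOLD = 0.5  # invented; a real deployment would tune this

def triage(convo: Conversation) -> str:
    """Nothing is acted on automatically; flagged chats go to a human."""
    if probability_rating(convo) >= REVIEW_THRESHOLD:
        return "escalate to human moderator"
    return "no action"
```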
Proceed with due caution.
Quite frankly, a lot of companies already use anti-grooming tools. Provided they keep it to things like Roblox, it should be largely fine.
For platforms the NYTimes got hysterical about, like Discord, it is statistically very improbable that any one person will be groomed (it is unfortunate that it happens, but this is like banning cars to prevent accidents, only with a far lower probability), and it really isn't worth subjecting everyone on a general-purpose platform to this little experiment.
Education would go a long way there, as well as more educational materials from platforms to help warn youngsters about potential dangers. And really young folks aren't supposed to be on Discord to begin with.
The issue I have is if Discord, Twitter, and similar platforms implement it and produce false positives by flagging things that aren't grooming or child exploitation.
Also, are we sure that the chat scanning doesn't violate European law? I remember hearing that filters like that are illegal under EU law. I know I mentioned this in the first post, but it is the most important part.
No one can really use an algorithm to judge an individual on a personal level; context and variables would blur everything. There is no one-trick solution. While having no automated screening at all would be foolish, we won't see a positive impact until the mindsets of those in control change.
Scratch that, it might be bad on Roblox too, due to stereotypes that all “MAPs” are evil.
For instance, take the teenage ones who are into toddlers: they discover these interests (perhaps just in fiction), people freak out and report it to the site, and under FOSTA / a FOSTA 2.0 (it's coming, given the moral panic over "CSAM") / PR risk, the platform will simply ban them and be done with it. Even though they themselves are minors.
Grooming is a problem, and any child-focused service will get a lot of heat if it, as the NYTimes would put it, "looks the other way" at the problem (why not put the blame on the tech sector when government ignores prevention, barely funds police, over-criminalizes, etc., eh?). There might need to be more research into ideas on what to do.
Many platforms use PhotoDNA (and even AI, which is prone to false positives) to detect child pornography, including in file backups (Dropbox), emails (Gmail), etc. This could be considered "chat scanning", but no one ever complains about it, so I assume it would be lawful.
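To make the distinction between image-hash matching and chat scanning concrete, here is a conceptual sketch of the PhotoDNA-style approach. The real PhotoDNA algorithm is proprietary; the hash function, hash values, and distance threshold below are all made up.

```python
# Conceptual sketch of perceptual-hash matching: hash the uploaded file,
# then compare it against a database of hashes of known-illegal images.
# Only near-duplicates of *known* images can match; the scanner never
# interprets new content the way a language classifier would.

KNOWN_HASHES: set[int] = {0x3F2A91C4, 0x7B10E6D2}  # fabricated example values

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")

def perceptual_hash(image_bytes: bytes) -> int:
    # Stand-in for a real perceptual hash; PhotoDNA's is not public.
    return hash(image_bytes) & 0xFFFFFFFF

MATCH_THRESHOLD = 4  # invented tolerance for resized/recompressed copies

def is_known_image(image_bytes: bytes) -> bool:
    h = perceptual_hash(image_bytes)
    return any(hamming_distance(h, k) <= MATCH_THRESHOLD for k in KNOWN_HASHES)
```

The key property is that it can only recognize previously catalogued images, which is why its false-positive behaviour is so different from a grooming classifier's.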
Links?
Change is hard to contextualize, accept, and work towards efficiently for many, sadly. Many are so afraid and irrational that the very idea that something they don't understand could even become real scares them silly. People can't help but be overly defensive because that is all they know, and going beyond that would require humility, contextualization, and application. What are we to do, honestly?
PhotoDNA is a different story, because it doesn't create false positives if the database is flawless, and it was approved under EU law. The chat scanning is a different matter, and we will probably need a lawyer or a judge to review it further. Also, Discord has said in the past that they don't want to put in a chat reader for privacy reasons, and WhatsApp seems loyal to its end-to-end encrypted chat service.
Honestly, we have to wait and see how it works on Skype, and how it blows up in people's faces.
And about the part about MAPs: of all places, Roblox doesn't seem like one of the most appropriate ones for that. Still, the tool shouldn't invade messaging apps, so that people with minor attraction can discuss it with those close to them without fearing being reported. MAPs need psychological consultation and help; having them reported to the police or banned from the server just for speaking with adult friends or relatives goes way too far.
There was a country in Europe (the Netherlands) that actually barred scanning, and I'm fairly sure it got in the way of PhotoDNA as well. There was a NYTimes article on it.
If that was the case there, then I cannot see why it wouldn’t be the case here.
I wouldn't say any hash is "flawless", because the nature of a hash is to condense a large file down into 80 characters or so. There will always be data loss, because the new representation contains less data than the old one. It is possible for two hashes to collide, although I am not too familiar with PhotoDNA. The chances are low enough that it really doesn't matter in practice.
You would almost have to deliberately produce a file specifically to collide with it; the chances are something like one in fifty billion.
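For a sense of scale, here is the back-of-the-envelope arithmetic under that quoted one-in-fifty-billion figure, treating each scanned file as an independent comparison (the scan volume is an invented example, not a real statistic):

```python
# Expected accidental collisions per year at the quoted collision rate.
p_collision = 1 / 50_000_000_000   # quoted per-file collision chance
scans_per_year = 10_000_000_000    # hypothetical: ten billion files scanned

expected_false_matches = p_collision * scans_per_year
print(expected_false_matches)      # 0.2 -- less than one collision a year
```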
Not only that, really, but also drama that occurs elsewhere; say someone discovers things they shouldn't and tries to carry drama from one place to another. It happens. Or Thorn could decide to connect contexts, since they scan for "pedophilic connotations" by doing what are effectively background checks on people, or so it was implied.
I doubt an algorithm can detect context that well, because many of the things a groomer says to a kid are also said between two consenting adults who are sexting.
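A toy example of the problem: a text-only scorer has no idea who is talking. The phrase list and scorer below are invented for illustration; the same line scores identically whether it comes from a predator messaging a child or from two consenting adults.

```python
# Toy demonstration: a pattern scorer sees only the text, not the
# relationship or ages of the speakers, so identical wording in a
# grooming chat and in adult sexting gets the same score.
FLAGGED_PHRASES = ["send me a picture", "keep this between us"]

def suspicion_score(message: str) -> int:
    return sum(p in message.lower() for p in FLAGGED_PHRASES)

msg = "Send me a picture, and keep this between us."
print(suspicion_score(msg))  # 2, regardless of who sent it to whom
```

Disambiguating would require metadata the text alone doesn't carry, such as the participants' ages or relationship.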
Of course, every methodology has flaws, but we cannot make a serious change unless we acknowledge and work around them. There is no one perfect solution to anything, but we can try to begin to understand each other, or at least analyze each other’s thoughts before jumping to conclusions. As much as I would love to take a stand, I’m out of my native element. If only…