Looking for CSAM self-reporting tools

I’m working on an updated version of Prostasia’s Get Help page for CSA survivors, and I’d like to include a list of tools that allow people to report abusive images of themselves to organizations that can get them removed through existing hash lists and CSAM removal programs.

So far the two I’m aware of are NCMEC’s Take It Down in the US and, in the UK, the IWF’s Report Remove (warning: this one actually uploads the photo itself rather than just a hash). Are there any others I should know about?

:woman_facepalming: wtf are they thinking? that seems like a seriously ill-advised idea

They claim it’s to ensure that the content being reported is actually illegal. I get the concern, but the websites that receive the hashes are gonna review it anyway, so it seems unnecessary. Not to mention the fact that they keep images they determine are illegal for 3 years and share them with police, who can store them for even longer.

Or a fishing expedition…

They have done that before. I was told recently that they apparently got consent from the adults depicted as children in the photos they posted during the Childs Play case, at least. Still sketchy at best, and the person didn’t have a source (though they’re someone I trust).

I’m somewhat concerned about NCMEC’s Take It Down not having any verification.

I took a look at it myself, using a placeholder jpeg I made in Gimp (a plain white box with a filled-in black square). It does indeed compute a hash locally in the browser and appears to submit just that (I did not submit my test hash), but it has me concerned about whether they have any plan to verify what the hashed content actually is, absent some post-hoc investigation.
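For what it’s worth, the client-side step presumably looks something like the sketch below. This is only a minimal illustration, assuming the standard browser Web Crypto API and a generic SHA-256 digest; the real tool is believed to use a perceptual hash (something along the lines of PhotoDNA or PDQ), and the endpoint and field names here are made up.

```typescript
// Minimal sketch of a hash-only reporting flow in the browser.
// Assumptions: Web Crypto API is available; SHA-256 stands in for
// whatever perceptual hash the real tool uses; the endpoint is fictional.
async function hashImageLocally(file: File): Promise<string> {
  const bytes = await file.arrayBuffer();                 // image stays on the device
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");                                            // hex-encoded hash string
}

async function submitHashReport(file: File): Promise<void> {
  const hash = await hashImageLocally(file);
  // Only the hash string is transmitted, never the image itself.
  await fetch("https://example.org/hash-report", {        // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ hash }),
  });
}
```

The point is simply that only the hash string ever leaves the device, which is also why whoever receives it has no way of knowing what it is actually a hash of.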

Though I applaud NCMEC’s efforts to empower minors to submit hashes like this, I fear it may not play out the way they want. This type of technology could be employed by bad actors to target adult pornography, or material that is objectively not CSEM.

My assumption is that platforms will use their existing review processes to ensure the image is actually abusive. Honestly, one of the benefits of the “anyone can submit anything” model is that it kinda forces them to review stuff or risk having people get others’ legal posts taken down at random.

Most existing reporting tools already allow malicious targeting of non-illegal content; there’s no reason for those involved to add an extra step to their troll campaigns.

Yeah, but that’s assuming they’ll actually look at it. Not every service provider will actually inspect the offending image; some will just see that PhotoDNA tripped something, assume it was verified CSAM, and take punitive action anyway.
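To make that worry concrete, here’s roughly the difference between the two paths. This is a hedged sketch only; the types and function names are entirely hypothetical and don’t describe any real provider’s API.

```typescript
// Hypothetical illustration of how a provider might handle a hash-list match.
interface HashMatch {
  contentId: string;
  listName: string;          // e.g. a hash list sourced from a reporting tool
  verifiedByHuman: boolean;  // has anyone actually looked at the content?
}

// The worry: treat an unverified match as a verdict.
function handleMatchNaively(match: HashMatch): void {
  removeContent(match.contentId);
  banUploader(match.contentId);              // punitive action, no review
}

// The alternative: treat an unverified match as a signal that triggers review.
function handleMatchWithReview(match: HashMatch): void {
  quarantineContent(match.contentId);        // limit spread while it's pending
  if (match.verifiedByHuman) {
    removeContent(match.contentId);
    banUploader(match.contentId);
  } else {
    enqueueForHumanReview(match.contentId);  // a person confirms before punishment
  }
}

// Placeholder implementations so the sketch stands alone.
function removeContent(id: string): void { console.log(`remove ${id}`); }
function banUploader(id: string): void { console.log(`ban uploader of ${id}`); }
function quarantineContent(id: string): void { console.log(`quarantine ${id}`); }
function enqueueForHumanReview(id: string): void { console.log(`queue ${id} for review`); }
```

Whether the providers plugged into these hash lists actually take the second path is exactly what I’m not confident about.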

This lack of oversight could have the unintended effect of tying up investigations, since NCMEC never sees the original image behind the hash, and it may even undermine law enforcement’s faith in the whole system, further delaying the investigation and prosecution of CSAM-related offenses.

I DO NOT want to see CSAM become trivialized.

I imagine that approach will end up being quickly reconsidered once someone inevitably falsely reports a meme or image of a political figure. At some point or another they will have to start performing reviews or risk seriously undermining the usefulness of their platform.

This service is intended to be used by the victim. If they want charges to be filed, the Take It Down website has information on filing a CyberTip report.

NCMEC was sold on the concept by Meta (Facebook). Meta had earlier partnered with the UK government to help facilitate a way to crack down on the non-consensual sharing of sensitive adult images, and later CSAM.

They reached out to NCMEC to set up something similar. My intuition tells me they designed it without compelling victims to upload their CSAM images to law enforcement, as that approach may be considered legally dubious: even having those images on their phone has been shown to carry a real legal risk, and many minors may not be willing to upload them out of fear that the site they’re uploading to either isn’t legitimate or could be compromised, since data breaches and hacks frequently make the news.

But like… oh god.

That’s assuming the tool will only be used in good faith, exclusively by victims.

The more I think about it, the more uncomfortable I am with the concept.

It just doesn’t seem responsible, and down the line it could contribute to a series of misfires within the system that ultimately undermine the core focus and mission of these scanning regimes. There’s already a great deal of mistrust around scanning.

I think it might be worth sending them an email to see how they’re willing to address this, or whether there are contingencies in place to prevent or mitigate this type of misuse.

Which is why they’re unlikely to file full CyberTip reports, which are not anonymous. Yes, this tool could be hacked as well, but so could literally anything else. There is no solution to that, and honestly, if it does get hacked, better that NCMEC only has hashes in its database rather than original images for criminals to get their hands on and distribute.

It won’t only be used in good faith, but when CSAM is reported (regardless of who reports it), getting the images taken down from a few services is better than the previous options: nothing, or a much more invasive CyberTip report (which I assume a decent number of victims don’t feel comfortable with).

There is, and if this tool is misused and adequate review processes are not implemented, I think that the resulting erroneous moderation actions will be a good indicator of the potential dangers of scanning (which will increase public pressure for human review in any detection process). Ultimately a good thing, imo, while still serving its original purpose of allowing victims to more easily make reports.

I guess I’m just confused as to what negative outcome you’re worried about here. In my mind, the worst case scenario is that companies will initially dish out automatic bans, then realize that human review is necessary when people are getting banned for completely legal images.

If they’re anything like Google, then they could probably outsource this to AI…

afaik YouTube doesn’t even have any human moderation anymore.