Is hentai reported by Google?

TLDR: I may have made a grave error in a Google search in a country where cartoon/anime depictions are illegal. Looking for insight into what danger I face after the fact.

Like other search engines, Google flags uploads of CSAM when the upload’s hash matches one in whichever database(s) they use. It also filters search results to exclude CSAM to some degree.

One example of flagging content: There have been cases where people have been jailed for uploading real photos of CSAM to reverse image search.

If something like loli hentai were uploaded to a reverse image search, would that generate a report? It seems like loli hentai is abundant in image searches with non-banned keywords, so their filtering looks lax. Still, I’m not sure how relevant that is to searches with uploads.

There was a topic last year about PhotoDNA apparently not flagging hentai, but it was unclear if that included characters depicted as underage. I’ve read that groups like the Child Rescue Coalition (US) do keep cartoon hashes in their database, so it’s unclear whether other databases do the same.

Does anybody have info on how Google approaches the flagging of cartoons? I know I should not have risked this (it was a very small number of searches, but I remember a case where someone in my country got tagged for 2 searches, both photos), but I had a pretty reckless addiction that I’m getting counselling for. All I can hope for is that the searched images were obscure enough to not be in a database somewhere.

If it is reported, how likely is it to be followed up? I had a VPN, but its no-log policy is unproven.

Things like cartoons are notoriously difficult to hash, and because the CRC is not a state actor like the NCMEC is, it would be illegal for them to begin hashing images that are believed to be CSAM, as that would require them to possess it in the first place.

I wouldn’t worry about it, though. Google is only required to report illegal CSAM, and cartoons are not illegal in the US, where they’re based, so I don’t think they’d build such a regime around images that are not CSAM.

I know that Microsoft’s PhotoDNA only includes hashes given to them by the NCMEC, as does Google. So if the CRC are adding their own hashes, that’s probably worthy of investigation in its own right, because they may be messing things up.

I have looked at that paper, and again, no.

I don’t doubt that they work with law enforcement, but if it’s not part of the NCMEC’s list of hashes, then it’s likely not actionable unless verified against their list. It’s far more likely that they’re just taking the hash database given to them by the NCMEC, adding their own hashes, and forwarding that to ECPAT, which is both legally and ethically dubious, since many of these images have to be triaged and followed up on by qualified, authorized federal investigators, as identities have to be matched.

Under federal law, no private entity that has not been vetted by the government is allowed to handle child pornography, and to my knowledge, the CRC have not been granted that privilege. I have no doubt that they’ve created their own tools, but I doubt that other companies, let alone Google, have used their tools or hashes to detect and remove CSAM.

Google has developed their own regime to detect and report illegal CSAM and they, like Microsoft and everyone else, use what was provided by the NCMEC.

Also, I don’t trust the CRC any further than I can throw them. They’re on record spreading misinformation about a woman’s daughter’s likeness supposedly being used to market a child sex doll, which we very easily debunked simply by looking at the doll itself and how far back it went.

They’re liars.

You can find all that information here;

Because the likelihood of false positives increases drastically with the simplicity of the image, given how malleable cartoons can be and the fact that artists usually create these images in sets. There’s a reason why this type of technology is restricted to CSAM detection, and why the existing regimes used by both public and private interests do not include such images.

The CRC is playing with fire by doing this, and given how their own tools are not even used as often as they claim, I’d say they’re not worth worrying about.

What you SHOULD be worried about are the tools used by Cybertip.ca.

Because they’ve been trying to triage cartoons/fiction and have even employed tools to search for it, going as far as uploading actual, known CSAM to anime reverse-image-search tools to get their server hosting companies to de-list them.
They’re also a state actor for the Canadian government.

@LegalLoliLover1 hey, do you know which thread that is? It was discussed a while back.

I’ve read that thread, where the tipline uploaded CSAM repeatedly before denying any involvement.

What I’m wondering is whether they’d even be aware of a reverse image search of drawn images. I’m guessing Google doesn’t report every suspicious search, nor do their filters seem to prevent explicit drawn images from appearing in their results.

Google definitely screws with that kind of stuff, but in strange ways.

You yourself mentioned that fictional lolicon can be found on Google by not querying with that term, which is true. If such images were actually being triaged by a group, I wouldn’t expect well-known R34 images that have been around since 2008 to still appear in Google search results for that specific character.

Google will sometimes “suspect” such images as illegal CSAM, but if you do that search again at a later date, you’ll see them return to the results once they’ve been vetted and cleared as legal, despite the keyword “lolicon” being filtered.

I have no doubt Google reports searches involving verified CSAM, though. And if they do, that’s good.
Nobody needs to see an actual child be sexually abused except for those tasked with punishing those involved with its creation.


CRC are notoriously secretive about exactly what their tech does and how or by whom it is used, so good luck finding out whether your cartoon uploads are being scanned by them. Here is an argument I had on Twitter with a CRC executive recently.

I watched that argument; he genuinely had no ground to stand on and relied too heavily on the “normalization” fallacy.

As for their technology, it seems dubious to say the least. It’s not uncommon for cases or charges to be discarded because investigators are not allowed to establish probable cause from the sole use of tools that are sourced from a private, non-state actor, whose inner functions cannot be verified.

Their technology also doesn’t seem to be anywhere near as valid as their claims suggest. They’re not authorized by the Justice Department to triage or identify victims depicted in CSAM, and such actions could potentially be the subject of a perfectly valid legal complaint, especially if they’re trying to lump in image hashes for cartoon child pornography. From a legal standpoint, they might as well be triaging images of child-like adults, since those are on the same level of legality.

I know for a bleeding fact that nobody, let alone major entities like Google, the NCMEC, or even Twitter, uses their technology for hashing CSAM, as such hash databases have to be validated by legitimate STATE or GOVERNMENT actors to be valid in an investigation. I can only imagine how many false positives have had to be sorted out due to their overwhelming incompetence, and even the DoJ themselves have expressed concern over their practices.

I feel as though a proper investigation is warranted. They don’t seem honest, and the fact that they were behind that child sex doll scandal makes me especially wary.


I’m not inclined to think their database is being used by Google, but it’s hard to say.

CRC’s efforts seem to center on law enforcement, namely their Child Protection System, which openly tracks cartoons and stories using their hash system. However, according to their site, this system is restricted to law enforcement. Its hashes seem to be used by police for P2P, social media, and chat services.

They have partnered with tech giants including Google, which may mean they share hashes. However, most of CRC’s partnerships seem to revolve around sharing IP addresses of suspected criminals. There’s little mention of their database being used by these companies, a fact I feel they’d advertise.

Google also seems to focus on their partnership with NCMEC in reporting CSAM. Couple that with the fact that Google’s and NCMEC’s definition of CSAM follows US law (which, unlike CRC’s, excludes drawings), and it points to CRC’s dataset being used for their Child Protection System only.


That’s not true. They’ve allowed other organizations that are not LEAs to peruse their database of hashes before, namely ECPAT and other international organizations; it’s just that those organizations never went public about their findings. I find it odd that the CRC would lie about that, probably to cover their asses for acting like they’re sanctioned to possess and view CSAM for the sake of triaging it. I may file a report or inquiry myself. Something doesn’t seem right if they’re allowed to possess and save CSAM for the purpose of converting it into hashes.

No, they haven’t.

I could not find any evidence supporting any alleged partnerships with tech giants, especially not Google, and many of these companies rely on PhotoDNA and other tools sourced directly from the NCMEC, as required by law.

As for cartoons, they even admit that such things are legal, but still keep tabs on them. Hashing them is probably what they meant. Such a waste of time.

Well, I hope my searches didn’t trigger a report and/or there are more pressing cases to worry about. Not knowing exactly how Google handles this has me a bit terrified.

I think it’s safe to assume that Google doesn’t care about hentai. If you reverse-image-search hentai images, even loli ones, it will flat out guess the name as either “fictional character” or “lolicon”. It’ll sometimes link to websites like Gelbooru or Sankaku Channel, or to none at all even though IQDB links to those sources, while also linking to Rule34, even if it won’t link to that exact image which exists on said booru sites. I think those are kinks in Google’s search algorithms rather than meddling by a middleman or by Google themselves.

You’re good, bro. The only thing that makes me uneasy is the fact that the CRC claims to be hashing images of fiction, despite such content being legal (by their own admission). I’m trying to find the news article where they admit that their technology monitors things that are not criminal, but are still something they prefer to keep an eye on; I know it exists.

It’s just shocking that they’d lump it in like that. I’m just glad that no one really uses their hashes, for obvious reasons. There’s a reason why those exist, and it’s to eradicate ABUSE MATERIAL! Not fiction/fantasy materials. I always blow a gasket at the mere idea of equating a documented example of child sex abuse with a fictional drawing. That’s exactly like comparing a snuff film to a horror movie and claiming they’re ‘just as bad’.

Here’s an article going over their system. They say that, while fictional material is legal by US law, they use its presence in P2P sharing as a flag for potential CSAM collectors.

However, since their software is shared with countries like the UK and Canada, it’s likely this ability is used to arrest people with strictly fictional material.


I’m still going to look into filing a report or some sort of inquiry about it, since it’s the job of the NCMEC and ICAC task force officers to triage and verify CSAM, not that of a private entity.

From that article…

[…] text-based stories about incest and pornographic cartoons that predators show to potential victims to try to normalize sexual assaults.

This pisses me off.

You can groom children for sex using virtually anything, but it’s still a crime regardless of what’s used.


You can groom children with adult pornography. You can groom them without any kind of pornography. Candy, video games, funny stories and warm thoughts.
The SCOTUS went over why the “groomer material” argument was a lame excuse in Ashcroft v. Free Speech Coalition, which was (and still is) a satisfyingly succinct and reasonable take.


Facts. I don’t see anyone blaming candy or puppies for those being used by groomers. I guess this is a radical take, but perhaps: BLAME THE PERSON DOING THE GROOMING???


Okay, now I’m freaking out. This article explains that Google actually does scan for cartoon images.

A user who stored cartoons on his Google Drive was found by Google, which reported it to NCMEC. They are capable of detecting it and will report it.

Christ, I was already freaking out but this seals it. I’m fucked. Oh my god, this is not good. My country aggressively prosecutes this stuff too. No no no no