Is hentai reported by Google?

I don’t know if I believe this story…

Because it’s EXTREMELY unlikely that Google actually scans for cartoons of this nature. I know that the NCMEC does not triage reports that don’t involve real children (they’d be transparent about it if they did), and the laws under which they DO triage such content do not cover fictitious depictions.

Also, I don’t know how a warrant like this could be made public or accessible to a company like Forbes, especially for an investigation where the case hasn’t even led to the arrest of a suspect.

As no charges have been filed, Forbes isn’t publishing his name, but the man identified in the warrant had won several small Midwest art competitions, and one artwork from the 1990s had been mentioned in a major West Coast newspaper.

I’m trying to look up the supposed warrant, but I’m not having any luck with any source. Subpoenas and court orders relating to active or ongoing cases, especially prior to an arrest, are typically issued under the condition that their mere existence is kept secret from the public. I know that, outside of situations like this, most are eventually made public, but this kind doesn’t seem like it would be.

@terminus what do you think of this? I find this concept extremely troubling, but at the same time, I’m extremely skeptical.


The Americans, Australians and British really are determined to censor artwork they hate, huh…

I’d hold off on including the Americans in that statement.

There’s more to this than the article is telling us.

Yeah, no… this story stinks. And I won’t believe a word of it until the author provides some objective proof of their claim that Google actually scans for and files reports on fictitious cartoon/drawn child pornography. Such material is on the same legal footing as adult pornography, and the idea that specific works can be judged objectively to fit the bill of obscene criminality is beyond senseless.

It’s not clear which of those two technologies were used in late 2020 in Kansas, when Google detected “digital art or cartoons depicting children engaged in sexually explicit conduct or engaged in sexual intercourse” within a Drive account, according to the warrant. It goes on to detail the graphic images, which included what appeared to be sexually explicit cartoons of underage boys.

What warrant?? Please provide the warrant in question if it exists. I want to see it. I want to make sure that it exists and that drawings are the focus of the inquiry.

I would even accept a REDACTED warrant, wherein the names and identifying characteristics of the person in question are withheld to protect their privacy. I just want proof that it exists.

As per its legal requirements, Google handed information on what it found, as well as the IP addresses used to access the images, to the National Center for Missing and Exploited Children (NCMEC), which then passed on the findings to the DHS Homeland Security Investigations unit. Investigators used the IP addresses provided by Google to identify the suspect as the alleged owner of the cartoons, and searched his Google account, receiving back information on emails to and from the defendant.

There is no legal requirement for individuals or ESPs to report drawings, cartoons, or other forms of non-criminal virtual child pornography. That reporting duty is set out in 18 USC 2258A, which is very clear in its scope: it is limited to depictions made using real children.

(2) Facts or circumstances.—

(A) Apparent violations.—

The facts or circumstances described in this subparagraph are any facts or circumstances from which there is an apparent violation of section 2251, 2251A, 2252, 2252A, 2252B, or 2260 that involves child pornography.

(B) Imminent violations.—

The facts or circumstances described in this subparagraph are any facts or circumstances which indicate a violation of any of the sections described in subparagraph (A) involving child pornography may be planned or imminent.

As shown here, it’s very clear that drawings are not included.

It appears the suspect may actually be a known artist. As no charges have been filed, Forbes isn’t publishing his name, but the man identified in the warrant had won several small Midwest art competitions, and one artwork from the 1990s had been mentioned in a major West Coast newspaper.

Well I would hope no charges would be filed! Should this warrant exist, it was executed in late 2020 and still no arrest, indictment, arraignment, or docket to be had. It’s safe to assume that no charges would be filed after that long, right??

And even then - we still don’t have a warrant or any proof to go off of!

This may concern some artists who draw or depict nudes. But the law around cartoon imagery is worded so as to provide some protection for anyone sharing animations of children engaging in sexual conduct for the sake of art or science. A prosecutor trying to convict anyone possessing such material would have to prove that the relevant images were “obscene” or lacked “serious literary, artistic, political, or scientific value.”

Right. And those are not determinations a computer system can adequately make or account for. What happens in 5 or 6 years when such materials are unlikely to be considered obscene, even in the most prudish, conservative state or locale?? Just remove them from the hash table??

What about the BILLIONS of artworks that cater to these interests?? Wouldn’t that have an effect on the efficacy of a CSAM scanning regime?? There’s a reason the NCMEC does not include this in its list of reportable criteria.

There are a lot of things wrong with lumping in fictional materials alongside actual CSAM, and I’ve only begun to touch on why that is.
It TRIVIALIZES a regime where the rights of actual, real children are supposed to be priority number 1!

If you’re going to lump in cartoons, you might as well lump in depictions of petite/youthful adults, since legally they’re on identical ground as far as child pornography law is concerned.

Whatever the case, this just tells me I’m fucked. They can detect it and I will hear from police in 2022. There’s no point waiting around if my life will be destroyed so soon.

You’ll be OK then, just like you are now.

I just don’t see how. There are now reports about me, so it’s just a matter of time. How can that be okay?

Because we don’t know if the article is even truthful.
I just pointed out gaping flaws with the article, and, to my knowledge, the author at Forbes has not responded to my requests for proof or clarification.

It’s very likely that it’s bogus.

The author has posted snippets of the warrant:

https://twitter.com/iblametom/status/1473597274089562113

I saw that, I still don’t feel convinced.

They didn’t provide a date or signature. Hell, the whole thing feels like they took an already-existing warrant or affidavit and edited it after the fact to make it seem like they have a story.

FWIW, I think the warrant is probably real. Can’t really say, though. IF it’s real, there are a few possible scenarios that I think MAY have happened to cause this:

A) The material was NOT caught by an automated scan; rather, the user shared the gdrive and got reported by someone, which would explain why the warrant states that “Google personnel reviewed the images prior to submitting the report”

B) Cartoon hashes are mixed in with the NCMEC CSAM hashes

C) Google’s latest machine learning classifiers (AI) detected the image. After the AI triggered, it was sent to an actual human within Google for review, and that person decided to file the report after viewing it, probably judging that cartoon depictions are enough to warrant a CyberTip.

Probably the most plausible, if the report is real. I highly, HIGHLY doubt the NCMEC would include hashes for fiction in their dataset, and since only the NCMEC are allowed to create those hashes, it’s highly unlikely that Google would add things to the filter that are not CSAM.

The least plausible one, considering that Discord, 4chan, Twitter, and other websites that use PhotoDNA and Cloudflare’s CSEM scanners either host such material or implicitly allow it.

I don’t see how a cartoon could have triggered an AI like that, but it is plausible. Google is always trying to improve in this regard, so it could have been an unintended consequence, unless the art was just that good.
Even then, it probably wouldn’t be called a “cartoon”.

Google states that they contribute hashes to NCMEC. Over 1.6 Million hashes so far:
https://transparencyreport.google.com/child-sexual-abuse-material/reporting?hl=en

They must have a liaison with the NCMEC then, unlike the CRC, whose presence in the tech world is tolerated at best.
Otherwise it would be a crime to do this, since only state actors are allowed to handle or work with CSEM for purposes other than reporting and disposal.

When we identify new CSAM we may create a hash of the content and add that to our internal repository. Hashing technology allows us to find previously identified CSAM. We also share hash values with NCMEC so that other providers can access these hashes as well.

Yep. Definitely a liaison.

Sorry, I meant it more as: a small number of cartoon hashes have been mixed in. So most cartoon-based material would go through without triggering anything, but a very small number would be flagged.
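To make that concrete, here’s a rough sketch of how plain hash-list matching behaves. This is purely hypothetical (my own names, a simple SHA-256 lookup rather than a perceptual hash like PhotoDNA, and definitely not Google’s or NCMEC’s actual code): a file only gets flagged if its hash is already on the list, and everything else passes untouched.

```python
import hashlib
from pathlib import Path

# Hypothetical illustration only -- not Google's or NCMEC's actual pipeline.
# A plain SHA-256 lookup against a set of previously distributed hashes:
# only exact matches are flagged; everything else passes through untouched.
KNOWN_HASHES: set[str] = set()  # hex digests a provider has been given

def file_sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def needs_human_review(path: Path) -> bool:
    """True only if this exact file's hash is already on the list."""
    return file_sha256(path) in KNOWN_HASHES
```

So unless a specific image (or a perceptual-hash variant of it) is actually on the list, matching alone never touches it, which is why I think only a handful of cartoon images would ever trip it.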

I still don’t think that’s the case. Including fictional content in such hash databases would risk trivializing the regime, because it would divert focus from abuse material towards things that are simply not that.

If not on purpose, it could have been accidental. Somewhere along the way, these cartoon images were introduced into the pipeline and got hashed. Perhaps they found a large collection of real CSAM (with a few cartoon images mixed in) and decided to hash all of it without bothering to go through every single image. That happens frequently enough in other software pipelines: if the dataset is too large, you just sample a few items and assume the rest are the same…

That said, my curiosity with this case is more about whether proactive scanning detected the user and/or whether AI was used.
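To show what that “sample a few and assume the rest” failure mode looks like, here’s a purely hypothetical sketch (my own names, not anything from an actual hash provider): the whole batch gets hashed, but only a random sample ever gets human eyes on it.

```python
import hashlib
import random
from pathlib import Path

# Hypothetical sketch of bulk ingestion with spot-check review only.
def ingest_batch(files: list[Path], sample_size: int = 10) -> set[str]:
    # The only human check: a small random sample of the batch.
    for path in random.sample(files, min(sample_size, len(files))):
        print(f"queueing for manual review: {path.name}")
    # Every file is hashed and added regardless of what the sample showed,
    # so any mislabeled items in the batch end up in the hash set too.
    return {hashlib.sha256(p.read_bytes()).hexdigest() for p in files}
```

Nothing in that flow stops a few out-of-scope images from ending up in the distributed hash set if they were sitting in the batch.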

That’s assuming such things are even hashed, which currently there exists no hard evidence for outside of a dubious Forbes article.

See, it’s this inconsistency between the content of the story and the citation provided that makes me suspicious. And if it IS true, it points much more towards someone John Doe shared the Drive directory with reporting it, rather than Google scanning for it and finding it that way.

That, and the fact that only the NCMEC can verify and hash known CSAM. I’m very disturbed that, even if Google has a liaison with the NCMEC, such things have less oversight.

I’m still not going to make a determination without clarification from Google themselves, a statement from the NCMEC or the Department of Justice, or the full (or at the very least signed, dated, and verified) warrant in question.

Well, the UK actually collects cartoon images in a database of its own, called CAID, and creates hashes from them. They only scan uploads on websites hosted in the UK and use it in law enforcement.

https://www.cps.gov.uk/legal-guidance/indecent-and-prohibited-images-children

…what an abhorrent and irresponsible way to misuse CSAM detection technology.

Such images are, by every literal definition of CSAM/CSEM, not included, for the simple reason that no real child is depicted. I worry about all the innocent people who will inevitably be harmed by such technology, as well as the actual children whose sexual exploitation will be overlooked or ignored.

If it can be argued that such cartoons count as “child sex abuse material” simply because they appear to depict imagery that would be exploitative or abusive in real life, rather than being exploitative or abusive themselves, then it can also be argued that depictions made with petite/youthful ADULTS who look like they could be children are CSAM as well. Indeed, such materials are already covered under the blanket terminology of “virtual child pornography”, which is distinct from “child pornography” precisely because it does not involve real children.

At that point, we’ve already trivialized the definition.