Image Scanning and the difference between censorship and spyware

I think it should be pretty fucking clear I support mass censorship against CSAM. Anyone who is not a piece of trash would agree with me. If you don’t, kindly fuck off. I’ve said it many times before: a way to automatically destroy any known, detected CSAM on all users’ personal computers and phones is very desirable. Outside of specific government organizations, no one has the right to possess CSAM, so an automatic system installed on all user computers that destroys known CSAM would definitely be desirable.

The line would be drawn if this censorship machine decided to act as spyware. If the technology has the capability not just to detect & destroy, but to detect & report, it becomes spyware, and that is where the line is drawn. As a law-abiding citizen, I absolutely do not feel comfortable being treated like a probationer who has been convicted and given a suspended sentence for a cybercrime. Just because criminals exist does not give the government the right to treat us all like cybercriminal probationers. I am not a probationer; I am a free citizen.

I support mass installation of CSAM detection software on almost all personal computers and phones, but instead of reporting users who have that horrible stuff, it should go no further than removing the likely illegal content from the user’s reach.

  • Mass censorship = prevent from view and/or destroy images that are very likely to be contraband (PhotoDNA detects a match, so the image is likely contraband). If a user stumbles across material that is very likely contraband, the system would simply censor that image out so the user does not see it.

  • Mass surveillance = not only detect likely contraband, but also report the user automatically. If a user stumbles across material, even accidentally, they may still be reported to the authorities automatically. (A sketch of the difference between the two behaviours follows below.)
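To make the distinction concrete, here is a minimal, hypothetical Python sketch of the two behaviours. The names `compute_perceptual_hash`, `KNOWN_HASHES`, and `report_to_authority` are illustrative placeholders, not any real PhotoDNA API (PhotoDNA itself is proprietary and not publicly distributed); the only point is where the reporting step appears.

```python
# Hypothetical sketch only: KNOWN_HASHES stands in for a curated database of
# hashes of known contraband, and compute_perceptual_hash() stands in for a
# PhotoDNA-style perceptual hash (the real algorithm is proprietary).

KNOWN_HASHES: set[str] = set()  # placeholder for the hash database


def compute_perceptual_hash(image_bytes: bytes) -> str:
    """Placeholder for a perceptual hash such as PhotoDNA or PDQ."""
    raise NotImplementedError


def report_to_authority(user_id: str) -> None:
    """Placeholder for an automatic report to a reporting body."""
    raise NotImplementedError


def handle_image_censorship_only(image_bytes: bytes) -> bytes | None:
    """'Mass censorship' model: a match is suppressed locally; nothing leaves the device."""
    if compute_perceptual_hash(image_bytes) in KNOWN_HASHES:
        return None              # blocked from view / destroyed, no report sent
    return image_bytes           # non-matching images pass through untouched


def handle_image_with_reporting(image_bytes: bytes, user_id: str) -> bytes | None:
    """'Mass surveillance' model: the same match, but the user is also reported."""
    if compute_perceptual_hash(image_bytes) in KNOWN_HASHES:
        report_to_authority(user_id)   # this extra step is what turns censorship into spyware
        return None
    return image_bytes
```

The detection logic is identical in both functions; the entire policy difference is the single reporting call.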

We want mass censorship, not surveillance. A world of mass surveillance is a world in which people live in fear. A person with no nefarious goals may live in fear of stumbling across digital contraband and of their own phone snitching on them to the authorities.
Furthermore, surveillance can easily be extended to other things beyond CSAM, so people would live in fear not only of stumbling across digital contraband, but also of asking unpopular questions or making unpopular statements.

  • Mass censorship can be argued to protect both the victim and the general public! Members of the general public often report being deeply disturbed by what they saw and wish they could unsee such horrible material. Victims also wish the images of their abuse could be taken off the internet for good. Mass censorship does not treat the general public like potential criminals, because blocking CSAM from view also protects the well-being of the public.

  • Mass surveillance, in contrast, DOES treat members of the general public like potential criminals. Why would you need to install an automatic reporting system on my phone or computer unless you think I might commit a crime? This does nothing to enhance my well-being or to protect me the way the mass censorship idea does. If someone stumbles across a post containing CSAM, blocking it from view is enough to protect them. A system which automatically reports them treats them as a suspect as opposed to someone in need of protection.

If we are going to enable this level of mass censorship, transparency will absolutely be needed. It’s absolutely important that victimless material like cartoons is not lumped in with images of the abuse of children. We all should support censoring the hell out of CSAM, but we need to make sure these hash databases do not contain obviously victimless material like cartoons.

They inevitably will, as will anything that’s determined to be “obscene”. Such a system would inevitably be abused; it would also be bypassed, so it would do little to prevent the spread of existing material, and it wouldn’t prevent the spread of new material at all.

The resources wasted on this could be used far more effectively.

4 Likes

Controversial opinion. I think victims exaggerate the harms from this to extract more damages when they sue someone for millions. Direct abuse is clearly harmful and CP shouldn’t be created at all, but there is a lot of nonsense floating around about how “viewing CP is no less harmful than someone raping someone with their own two hands”.

The U.S. is useless at providing victims with healthcare to recuperate from the terrible traumas of their youth, but it is good at spending money on throwing people in prison for disproportionate lengths of time. There is also recompense for time lost from being unable to work, or to work as efficiently.

A lot of cases seem to involve an “anonymous person on the internet”, a single person who collects millions from suing random viewers, someone getting hassled by the police when they don’t want to be whenever someone gets arrested (“they will suffer each and every time it is viewed”), someone who was exposed in the media against their will, or angry parents.

Prison contractors make big money from incarcerating people, and anyone who wants to reduce contact crimes is going to have to run up against them. Parole? The registry? Probation? This is easy money for monitoring firms.

This is actually a decent idea worth considering, but one must consider the risk that the government may be tempted to turn this tool into spyware. I’m all for mass installation of software that auto-deletes all known abuse images on every thumb drive, hard drive, or computer it is installed on. But the moment it has the ability to record and send personal information or metadata about its users is where I draw the line. How would we ensure the state does not take this opportunity to engage in mass surveillance?

This is why I have an overall complicated opinion on this idea.

What does the CEO think? @terminus

I think we have a lot of work to do on the accountability and transparency of the CSAM scanning regime before we get anywhere close to the point where we could trust the government that much.

But, we may be getting there. Since CSAM scanning was inadvertently banned by the European E-Privacy Directive, there has been a commitment from the European side to create a public system for CSAM scanning, which would have a lot more checks, balances, and safeguards than the current ad-hoc system does. If the United States (NCMEC) also joins and supports this, then we can start talking about new options for how to use that CSAM scanning infrastructure.

But for now, there is no transparency about how a given image ends up in NCMEC’s database, nor any process to remove it from there if it was added wrongly. That is just a recipe for mistakes and abuse.

2 Likes

I think detect and report would be preferable. You can’t trust a computer not to destroy the wrong data through inevitable false positives. But anyway, I’m skeptical about the effectiveness of something like that. Chances are the real targets of this are precisely the people who will be able to circumvent the system, because they are the ones who will be actively looking for ways to do that.

Have you ever found that anyone from the US has had content such as loli/shota inappropriately flagged? I recently saw one of your posts saying the PhotoDNA software was relatively solid and seemed to be doing the right job, albeit without people being transparent about it.

Great question. We know of at least one case where a simulated, 3D-rendered image is in the NCMEC child abuse hash database. However, it’s hard to say that it’s a mistake, because the definition of child pornography under US law does include simulated images if they are indistinguishable from real ones.

Gotcha. Well, would it be safe to say that although there is a lack of transparency, which is obviously bad, the software is at least doing its job at a decent rate so far?

I highly doubt it. Emotional sentiment isn’t something a binary system is capable of resolving; it would either be an “all-or-nothing” approach to image scanning or something else entirely. The way these image-scanning programs work is that a human being feeds them numerical data derived from known child sex abuse imagery. A similar system has been designed for copyrighted content, but again, it has an objective reference point.

If they really wanted to begin classifying fictional material as illegal abuse imagery, they’d be more than capable of doing so, but a content ID system going up against a plethora of similar-looking media would yield untold amounts of false positives. It’s also simply not what these regimes are designed to detect, which should be a tell for those who think cartoons and other fiction are worth going after, if the scientific data on their harmlessness isn’t enough.
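For what it’s worth, hash matching of this kind only flags images that are already in the reference database (or perceptually very close to one that is). Below is a hypothetical sketch of the matching step using a Hamming-distance threshold between 64-bit perceptual hashes; the hash size and cutoff are illustrative, not PhotoDNA’s actual (non-public) parameters.

```python
# Hypothetical sketch of threshold matching between perceptual hashes.
# Real systems (PhotoDNA, PDQ) use their own hash formats and thresholds.

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two equal-length hashes."""
    return (a ^ b).bit_count()


def matches_known_hash(candidate: int, known_hashes: list[int], max_distance: int = 10) -> bool:
    """An image 'matches' only if its hash is within max_distance bits of a hash
    already in the curated database; there is no judgement about what the image
    depicts, only closeness to a known reference image."""
    return any(hamming_distance(candidate, h) <= max_distance for h in known_hashes)


# A near-duplicate of a known reference image matches; an unrelated image does not.
known = [0xA5A5_A5A5_A5A5_A5A5]
assert matches_known_hash(0xA5A5_A5A5_A5A5_A5A7, known)       # differs by 1 bit
assert not matches_known_hash(0x0123_4567_89AB_CDEF, known)   # unrelated hash
```

Under this design, drawn or fictional material cannot be swept in by the matching step itself; it would only match if someone deliberately added its hash to the reference database, which is exactly why transparency about what goes into that database matters.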

Freedom of speech is a natural consequence of human nature. The free exchange of ideas and information is not meant to be restrained by mere preferences, opinions, etc. regarding a broad range of subject matter; it is incompatible with such restraint.
Child sex abuse imagery is, of course, exempt from this due to the level of intrinsic, objective harm it causes to minors both directly and indirectly.

I’d like to hear more about this specific instance.

I know for a fact that CSAM, as defined by the NCMEC, requires that the abuse of an actual, real-life, identified and identifiable minor be implicated in some way for an image to be hashed into their database.
I’m assuming it was a spliced/morphed image of a child actress’s face, or it was deliberately posed and rendered to resemble actual CSAM, rather than a hyper-realistic render that some analyst assumed to fit the category of illegal material?

Great care and consideration has to be taken when developing a content scanning regime like this. It has to be objective and designed narrowly and specifically to cover actual children who suffered real abuse and trauma. Anything beyond that limited definition would diminish the credibility of such a regime by inviting material that does not meet those criteria. Children need to be protected, not ideas or idealistic depictions.

The words “actual” and “identifiable” need to be taken into consideration with regard to the word “indistinguishable”.

Hello, @terminus
You have to know that NCMEC is not a credible organization, as far as I know. They use myths and stereotypes about human trafficking, which allows them to use scare tactics to exaggerate it, so that people believe human trafficking is something that happens everywhere, all the time.

When people are scared, they are more likely to accept anti-human laws, such as EARN IT, SESTA-FOSTA, SISEA, etc…

In fact, I have caught NCMEC using the same strategies as Exodus Cry, NoFap, NCOSE, Fight-The-New-Drug, and other anti-human organizations that are trying to harm our freedom under the guise of protecting us and “saving” victims.

Are you aware of the “Stranger Danger” hoax? The Stranger Danger hoax was invented to convince everyone that everyone else is dangerous. The goal was for people to believe that anyone they don’t know is dangerous to them and their children.

Fun fact: the sex offender registry rests on the Stranger Danger hoax.

You can read more here: https://www.martyklein.com/ncmec-hypocrisy-stranger-danger/

In my opinion, NCMEC is just a hypocritical organization that cares only about implementing laws based on its own interests, laws that claim to protect people but actually restrict them. NCOSE’s laws are not evidence-based.

According to the so-called “law”, simulated images can count as CSEM. Child sex dolls can also be counted as CSEM.

If these dolls don’t have consciousness or personality, and if the images are not of real people but of drawn ones, then why are they considered CSEM? Where is the exploitation, if the people are fictional images?

These laws have to be changed. These laws only fuel moral panic. There are some organizations, like Collective Shout, that claim Amazon and Etsy support human trafficking, incest, and rape. I’m being serious. The worst thing is that there are people who believe that.

The only reason these organizations exist is because the laws allow them to exist, and that is because most laws are based on stereotypes, myths, and misinformation. I’m surprised that QAnon’s ideology still hasn’t been written into law. But hey! More than 70 far-right evangelical bills are coming in 2021.

I fear for the future of sex work; it doesn’t look bright. Too many restrictions are coming.

This normalization of censorship / surveillance is pointless, and frankly, disgusting. If you can’t begin to figure out why, you are beyond help.

Oh, how the mighty defenders of free speech have fallen. Working at a firm where he spends all his time defending censorship and making excuses.

1 Like