Do you think CSAM could ever be borderline-eradicated from the internet?

I know that, sadly, this scourge of the internet will always exist, at least for the next several hundred years.

I do wonder what technologies could exist in 50-100 years that would make attempted access to CSAM impossible, or at the very least extremely difficult. So far, at least from what I’ve read and heard, the major platforms seem to be doing a better job of cleaning the “open” web areas of their sites than they were a decade ago.

For example, I don’t know whether the idea below will ever be possible, but in 50 years, processing power and the internet of the future could be many orders of magnitude more capable than they are today:

Imagine a future computer with software built in such a way that if a criminal attempted to view or upload CSAM from a CSAM site, the images would simply never be “loaded” onto the user’s screen nor uploaded to any server. Computers could ship with a hash list before they are even sold to consumers, and that list would be regularly updated as part of routine security updates. If the user attempted to load an image whose hash appears on the list, the computer would terminate the download and the user would never see that CSAM.
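Roughly, the idea is a local check that runs before any image is decoded. Here is a minimal sketch of that gate, assuming a hypothetical local blocklist; the SHA-256 choice and the function names are placeholders (real deployments like PhotoDNA use perceptual rather than cryptographic hashes, so that re-encoded copies still match, as comes up later in the thread):

```python
import hashlib

# Hypothetical: in practice this would be a signed, regularly updated
# database shipped with security patches, not a plain set of hex strings.
BLOCKED_HASHES = {
    "placeholder_hex_digest_of_a_known_bad_file",
}

def allow_render(image_bytes: bytes) -> bool:
    """Return False if the image's exact hash is on the local blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() not in BLOCKED_HASHES

# A browser or OS image decoder would call this before displaying
# or uploading anything, silently refusing to load on a match.
if allow_render(b"example image bytes"):
    pass  # decode and display as normal
```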

Unfortunately, hash lists are not effective against new CSAM, so another idea is that the computer could also have a built-in AI powerful enough to recognize illegal CSAM and immediately refuse to load it on the criminal’s screen or let it be uploaded to a server. Even if these ideas are theoretically possible, doing this on today’s internet would slow everything to a crawl, but in 50-100 years, internet and computing power could be many orders of magnitude more powerful than today, and this might have no noticeable impact on users in the far future.
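To make the contrast with the hash list concrete, here is a sketch of such a classifier gate. Everything in it is hypothetical: classify_risk is a stub standing in for whatever on-device model the future might have, and the threshold is arbitrary:

```python
def classify_risk(image_bytes: bytes) -> float:
    """Hypothetical stand-in for an on-device model; returns an
    estimated probability that the image is illegal."""
    return 0.0  # a real system would run a local neural network here

# Arbitrary cutoff; tuning it trades missed matches against
# falsely blocking lawful images.
RISK_THRESHOLD = 0.98

def allow_load(image_bytes: bytes) -> bool:
    # Unlike a hash list, a classifier can flag *new* material it has
    # never seen before, which is exactly what makes its error rate
    # (and who audits it) such a hard problem.
    return classify_risk(image_bytes) < RISK_THRESHOLD
```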

I’m just throwing ideas around; I’m no computer expert, so the ideas above could very well be impossible even with near-infinite computing power. But I’d like to know what technology in 50-100 years could make CSAM so difficult to access online that it would be easier to find offline. Believe it or not, there was a time when CSAM was almost eradicated, shortly before the rise of the internet. I wonder if we could ever see a future where this pedo crap is almost destroyed.

If none of this is possible, then maybe we need to fix the faulty hardware of human brains. Perhaps with designer babies, we could promise new parents that their children will never grow up to have a sexual preference for children. We could also tailor other aspects of the child’s personality (high intelligence, healthy genes, a high hedonic set point), so I don’t see why we couldn’t adjust sexual preference too.

You are not the only person to think of these ideas; Hany Farid, who developed PhotoDNA with Microsoft, is already pushing for some version of this. But with the poor state of transparency and accountability around what goes into these hash databases, it would be a privacy nightmare. We have strong reasons to believe that the NCMEC databases include a lot of material that isn’t illegal, and adding AI into the mix would make them even less accurate. Automated scanning of your computer for this content is essentially a license for the government to perform mass warrantless surveillance.

Remember when Edward Snowden obtained classified documents that he used to reveal illegal surveillance by the US government? If every digital device were constantly scanning for hashes, it would have been child’s play for the government to order NCMEC to include hashes of those classified documents in its CSEM database, so that it could find where they were being shared. And because NCMEC has no public accountability for its actions, it would have gotten away with it.

Once this technology is mainstreamed, you can also be sure that Hollywood will want to include scanning for copyright-infringing content in every digital device. This is one genie that we probably don’t want to let out of the bottle.

As for the eugenics suggestion, I’m not even going to go there. Yes, the answer to reducing abuse is changing people’s behavior. But not like that.


Professor Farid is one of the developers of the PhotoDNA technology that Microsoft and its partners use to scan images that are uploaded by users to the web. Professor Farid’s suggestion was that this scanning technology should be incorporated into all major web browsers.
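PhotoDNA itself is proprietary, but the general mechanism it relies on, perceptual hashing, can be sketched. Unlike a cryptographic hash, a perceptual hash is a short fingerprint that changes only slightly when an image is resized or re-encoded, so “matching” means differing in only a few bits rather than byte-for-byte equality. A toy illustration (not PhotoDNA’s actual algorithm; the fingerprints and threshold are made up):

```python
def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_list(fingerprint: int, hash_list: list[int], max_bits: int = 5) -> bool:
    # max_bits is illustrative: small enough not to match unrelated
    # images, large enough to survive resizing and re-encoding.
    return any(hamming_distance(fingerprint, h) <= max_bits for h in hash_list)

# Fingerprints differing by one bit (e.g. the same image re-encoded) still match:
print(matches_list(0b10110110, [0b10110111]))  # True
```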

Was his suggestion that when PhotoDNA detects an unlawful image in a user’s browser, it will automatically report it to the authorities? Then in that case there would be constitutional issues. Or is it the case that if PhotoDNA detects it in the image cache or elsewhere, the image or video would simply not be loaded and/or be immediately destroyed, and the user would not be reported?

I don’t know a whole lot about NCMEC hashes, but I’m surprised to learn they include lawful content and content that isn’t real CP. I looked at their reporting site, and it says to only report content that violates 2252?

I still find the idea of incorporating a CSEM hash list into browsers and computers (Tor, Chrome, Mozilla, Explorer) attractive, as long as it does not act as an arm of the government for mass surveillance but only censors true CSEM. I hope it can be deployed in an ethical manner in the future.

If they are really lumping in things that don’t violate 2252, there is an issue. And I certainly don’t want this technology used for copyrighted content, since the copyright system as it stands gets heavily abused. But if the tool were powerful enough to make accessing hash-listed CSEM extraordinarily difficult, then it should be on the table. It would be nice if there were safeguards to keep it limited to CSEM only, legal or otherwise.

What do you suggest? Do you think PhotoDNA and its equivalents could, in theory, be incorporated into end users’ browsers and computer systems in an ethical manner?

It says that, but in reality there is no quality control, especially for images that are hosted overseas. NCMEC simply passes reports on to foreign police without checking them whatsoever. More about that here. And the two hash databases that it maintains are both crowdsourced from what internet providers and nonprofits send in; they are not curated by NCMEC (unlike the Internet Watch Foundation’s list, which is of higher quality).

I think it could only be done in an ethical manner under a system of checks and balances that simply doesn’t exist now. NCMEC needs to be held to account, and currently that isn’t happening. They even had us thrown out of a public meeting because they didn’t like us trying to bring light to what they are doing.


Well, this certainly changes my view of NCMEC. I still think they do a lot of good, but there are flaws in how they operate.

If we are going to install PhotoDNA software into Tor, Chrome, or Windows 10 with the goal of blocking all CSEM from view, I’m going to want transparency about what is being blocked, to make sure non-CSEM isn’t being swept in, and I’d probably trust the IWF lists much more.

Do you want a blunt answer or a politically correct answer? The obvious answer is no.

The world a century from now will be a very different one from today’s.

There are a few arguments against the existence of this content, but some go along the lines of victims reliving their trauma when the material circulates. In a century, there may well be technology to delete or dampen that trauma, which would reduce the impetus to censor this content. There are even a few studies now exploring these areas.

And even today, people are already largely fed up with all the civil liberties violations and backdoors politicians try to sneak in under this guise. It is only getting worse as the numbers explode and politicians conjure up ever more restrictive ways to deal with it.

It is not realistic to do this without shutting down the internet, which is certainly an option, but not one that anyone would see as an acceptable cost.

There are too many things which would change in a few decades or a century to reliably predict what would be the case then.

In a century, you may even have habitats or spaceships millions of miles from Earth. Is it possible to closely police all of them? A message to Mars can take as much as 25 minutes one way. There are many ways for societies to diverge here on Earth, let alone ones growing ever more distant.

Even though AI appears omnipotent now (the technology has only just been introduced), it is already hitting the point of diminishing returns, and Moore’s Law, which governed the exponential growth of computing, could be argued to have ended back in the late 2000s.

Scale. They get tens of millions of reports and don’t have the resources to go over them all, so instead of taking the reasonable approach of ignoring some, they push everything on like a train to make sure they don’t miss any.

People also make reports for personal reasons rather than legal ones. For instance, if someone or something disgusts them, they’ll report it. They don’t like someone on Twitter? They’ll report them. They don’t like a certain cartoon or art form? They’ll report them. They don’t like a model on Instagram? They’ll report them.

They feel entitled to have the hotline agree that person X or art form Y is evil and revolting and lock them away, which might work if you were living in a banana republic.

Section 2252 is also vague. You can’t really fathom how vague “if it is obscene” is.

Reports are only going to get more fun now that there is a push to “detect” grooming and everything people don’t like.

You like using the word “ethical”, but none of your ideas or notions are particularly ethical.

Tor exists for the sole purpose of bypassing censorship. Putting a 1984-style government filter list on it would defeat the very point of bypassing censorship in the first place. This isn’t much better than Blumenthal’s rhetoric about encryption that is simultaneously cracked and secure.

You might as well just shut the entire Tor Project down at that point.
