Apple plans to scan US iPhones for child abuse imagery

Not Internet platform, but does relate to tech…

Apple plans to install software on US iPhones to scan for child abuse imagery, according to people briefed on the plan. The move is raising alarm among security researchers, who warn that it could open the door to monitoring the personal devices of millions of people.

Apple detailed the proposed system (known as “neuralMatch”) to some US academics earlier this week, according to two security researchers briefed at a virtual meeting. They said the plan could be publicized more widely as soon as this week.

The automated system proactively alerts a team of human reviewers if it determines that an illegal image has been detected, and the reviewers contact law enforcement if the material is verified. The scheme will initially be deployed only in the United States.

Apple declined to comment.

Part of me is uneasy about the fact that the scanning is done locally, rather than server-side, even though it’s only going to scan images that are synced with iCloud.
I’m also uneasy about the prospect of such a system being expanded to cover material that is not CSAM, such as drawings, CGI, and other fiction. Some transparency on the matter from the NCMEC would be most reassuring, to ensure that only CSAM - child sex abuse material - will be targeted, not harmless artistic expression which does not involve the sexual abuse of children.

I also have issues with relying on AI to ‘predict’ this type of material being sent or received when it has not been logged or manually reviewed by an NCMEC operative.
AI is notoriously inconsistent at identifying images, and I suspect it may never reach the level of accuracy that people want or expect. Even as a supporter of verified CSAM-detection technology, such as PhotoDNA, I find this whole regime very problematic, and it is striking that Apple’s own expert reports on the matter merely outline ‘due diligence’ at reducing the likelihood of a ‘false positive’.

I do not expect this to bode well for Apple. The information they may be after, and the way they seem to be going about it, will not reflect positively on the goal of protecting children from sexual exploitation and abuse, and the sense of ‘being watched’ may only deepen the stigma felt by NOMAPs and other at-risk individuals.
That alone makes me very skeptical of this.


Everyone who is concerned by this, please sign this petition to Apple:

It only has a few signatures because we launched it just today, but if you spread it around it may grow!

For now, I think the intent to scan all images is quite clear; this is just to ease people in and minimise criticism, because they’re only scanning images that have already been freely shared. A good reminder, though, not to upload things you want to stay private to cloud services without true, strong end-to-end encryption.

We’ve been through this countless times before and the same things apply here, this won’t work and will undoubtedly be expanded far beyond its original intention.

This’ll be an interesting way to find out if drawings are part of “illegal” hashes.

There exists a myriad of issues with that, I think, which I don’t think the tech community would be willing to ignore. There’s a reason why smartphones with GPS enabled don’t alert the nearest police officer if someone is speeding. It seems that Silicon Valley is playing with two different realities, one that is more practical and one that is more ideal, and they’re trying to find a middle-ground for issues which they feel may warrant some measures, which I’m not entirely opposed to, yet not in favor of either.

It seems to me that the reason the hash check is done locally, rather than server-side, is that media is encrypted as it gets uploaded to iCloud and the keys to decrypt it are stored locally, so scanning at the server level may not even be possible, given that everything is already encrypted. This is bizarre, though, considering that Apple would then have to acquire those keys to decrypt and manually verify the violation should one be triggered. I’m still unsure whether my own understanding here is really correct.
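To make my mental model concrete, here’s a toy sketch of what I imagine the client-side check looks like. Everything here is hypothetical: the function names and blocklist are made up, and I’m using a cryptographic hash as a stand-in for the perceptual hash a real system would use (Apple’s actual protocol reportedly involves much fancier cryptography on top of this):

```python
import hashlib

def toy_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash like NeuralHash. A real perceptual
    # hash tolerates resizing/compression; this cryptographic one does not.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical blocklist of hashes of known CSAM, shipped to the device.
BLOCKLIST = {toy_hash(b"known-bad-image")}

def scan_before_upload(image_bytes: bytes) -> bool:
    """Return True if the image would be flagged before iCloud sync."""
    return toy_hash(image_bytes) in BLOCKLIST

assert scan_before_upload(b"known-bad-image") is True
assert scan_before_upload(b"vacation-photo") is False
```

The point of doing this on-device is that the comparison happens before (or alongside) upload, so the server never needs the decryption keys just to run the match.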
Does it seem right, @terminus ?

Apple seems concerned that iCloud accounts could be used as repositories for CP/CSAM, since reports of such things could damage their brand. Yet I fail to see how that is any different from when Apple bravely defied the FBI’s attempt to implement an LEA-friendly backdoor into all devices and cloud storage accounts, after the FBI had trouble accessing the contents of a specific iPhone used by one of the perpetrators of the 2015 San Bernardino attack.
Apple received a great deal of praise from privacy and civil rights advocates for that.

Actually, iCloud storage isn’t end-to-end encrypted, so it’s confusing why Apple went straight for the option of client-side scanning.

I’m pretty sure lolicon is included in their definition of “child abuse images”, so all I have to say is: one more item to add to my big pile of reasons to stay away from Apple products.


I don’t think it is, nor do I see a reason why it would be; they would need to add their own hashes for it. This is material that they would have to report to the NCMEC for subsequent legal action, and reporting fiction/fantasy material to them outside the context of grooming could bring legal penalties for the companies or people who do so, since it would be akin to filing bogus reports. The NCMEC even outlines on their site what types of material are to be reported via their CyberTipline, which is anything under 18 USC 2256.


I also base my bet on the fact that various booru sites and imageboards which allow lolicon material, such as 4chan, Gelbooru, etc., also use PhotoDNA and have been known to automatically scan imagery and to ban and report the details of users who attempt to post CSAM. In none of these instances have loli/shota been categorized as bannable.


That is strange… Part of me is deathly afraid of how this debate will turn out.

Obviously, CSAM scanning at the local level is problematic because it opens a Pandora’s box of sorts. It sets a precedent that may enable further encroachment into the private lives of people and the deterioration of consumer privacy barriers, all the while causing unspoken yet palpable damage to the longstanding privacy consensus.

I see value in using CSAM detection methods, like PhotoDNA, to detect, remove, and report known CSAM. That has real application in helping end the sexual exploitation of children, as it helps eliminate harmful material whose existence will only drive up demand for further sexual exploitation and abuse against children.

Yet… I’m not sure how - or why - Apple would feel the need to implement something like this at the local level if it can be done server-side. The fact that such things would only function if media were synced up with iCloud paints an interesting gloss over everything, as if Apple knows the kind of fire they’re playing with here.

Personal computers, smartphones, etc. are, in many ways, extensions of the self. Their contents and how they’re used or interacted with always bear a significant relationship with their users, in much the same way one’s own person, their possessions, or their papers and effects have.
Privacy matters. Without it, people can’t truly be themselves if they believe that their every move, keystroke, or thing their device renders, sees, or hears can be used against them.
People need privacy and freedom to think logically and clearly. To be true to themselves, and to understand themselves. This level of self-actualization in relation to interaction with information and communications technology is, in part, why first-world America has been able to achieve so much. If that sense of personal privacy and security is threatened, undermined, or negatively implicated in any way, we may see the death of rational thinking.

Of course, nobody should be allowed to participate in a market that is contingent on the sexual abuse and exploitation of children. The commoditization of child sexual abuse, the trauma, the suffering, all of it bears an intrinsic link to the material in question.
Legal prohibition on even the mere possession of such materials is justified. The right to privacy does not extend to participation in this market, even at that level.
If a person is caught with CSAM stored locally rather than in the cloud, it hardly matters; they deserve whatever punishment they get. They had to access the Internet in order to obtain it, or they acquired it from someone who obtained it through similar means. And even still - it’s sexual abuse involving real children.

At the end of the day, one core question looms over all of this.

Is sacrificing or risking respect for user privacy like this a valid trade-off in the war against CSAM?
In the end, I don’t think so. I think the benefits of preserving and maintaining user privacy outweigh the benefits of leveraging local drive access as a means to strengthen prohibitions on CSAM. I think it’s quite dangerous to that goal, if anything.
I don’t think it’s a stretch or reach to say that, in the long run, such a trade-off would actually only cause more harm to children and victims of abuse.

Allow me to explain.

If your phone or computer is monitoring the contents of your local storage, scanning images and video for CSAM and running your messages, documents, notes, etc. through an AI designed to detect pedophilic or CSAM-related content, then it’s likely that the contents of a person’s web browser (history, bookmarks, cache) would come under identical scrutiny.

People are already afraid to google information related to child pornography or pedophilia because of the fear of being put on some kind of list or someone, somewhere linking those terms to them.

People who may be having obsessive, undesired pedophilic thoughts, or succumbing to sexual fantasies and feeling guilty or stressed about it, may be in desperate need of information or support from charities or support groups. They would likely feel reluctant to query Google for those resources, and may take matters into their own hands, often inventing maladaptive coping strategies built around sexual repression or denial, which ultimately cause further harm or even increase the risk of committing an offense against a minor.

Those who may be interested in the scientific literature on how adult-child sex may negatively affect a child’s psychology feel reluctant to google it for the same reason, so they may simply settle for what society tells them - often hearsay, conjecture, aggressive stigma, or other conformist rhetoric - rather than an objective, empirically sound account based on facts and data.
The former is, of course, the wrong way to learn about child sexual exploitation. A sizeable portion of the population simply assumes that CSAM is illegal because it’s “icky” or “offensive”, or because they believe it “may cause pedos to go out and abuse children”, rather than out of a legitimate interest in the child victims involved in its production and the necessity of eradicating the market for that abuse. That is indicative of an overall lack of knowledge and understanding regarding pedophilia, pornography, their effects on one another, and how they relate to CSA. It is perhaps also telling of where people’s true concerns lie: not the safety and wellbeing of children, but their personal feelings, or the need to blindly conform to a popular or cultural standard.
These are both bad because, should any specific category of CSAM fail to arouse offense - or should the standards and viewpoints themselves bend and warp to accommodate it - then at that point child sexual exploitation would officially become systemic, a reality that would, of course, be antithetical to the interest in protecting minors from abuse and punishing those who abuse them.
The right of children to be free from sexual abuse and exploitation is not contingent on flimsy norms, personal feelings, or assumed popular ideals; it is absolute. This absolute is justified by the vast amount of empirical data, which consistently shows a causal relationship between adult-child sexual activity and stress, trauma, and psychological and developmental conditions.

A person might argue that such things would never be seen as “inoffensive”, but that’s not really true. The main reason why “child pornography” is its own separate category of speech from “obscenity” is that the obscenity doctrine and its laws were not designed to address sexual exploitation; the doctrine was designed solely to appease prudes and squelch certain types of sexually oriented expression, and the SCOTUS felt that CP warranted its own category.
This is primarily because a child pornographer could evade prosecution if a judge or jury believed the material wasn’t obscene. Children and young teens could be filmed or photographed performing erotic or sexual acts, such as genital-on-genital penetration, lascivious poses, or even oral sex on adults or actors of a similar age, so long as the judge or jury either didn’t find the material “patently offensive” under state law or believed it to have “serious…value”.
They knew that the arbitrary, vague, and superficial standard they carved out in Roth/Miller couldn’t be relied on to actually prevent or punish harm.

All of this is relevant to the need to preserve user privacy because people need to know that they won’t be punished for being curious about controversial issues or subject matter; they need to feel comfortable and confident, but most of all, safe.
Sure, one could argue that all of these truths and facts could be held and maintained alongside client-side CSAM detection regimes, yet such an argument still overlooks the implicit value of true privacy. If it were possible to implement localized client-side CSAM detection regimes without affecting user privacy, I would be all in favor of it.

@terminus I hope you guys know what you’re doing with this.
This is a very delicate matter, and I wouldn’t want to see Prostasia’s stance against Apple’s client-side CSAM detection be used against us, either to discredit the charity and its community of activists and scientists or our arguments and empirical research.


I was thinking about this while I was at a museum today.

  1. That photo of the naked girl running away from the napalm attack - would that qualify? I know people have tried to argue it is CP in the past, and that royally pissed off the woman in the photo.

  2. That Virgin Killer album cover, over which the UK blocked an entire Wikipedia page - and the organization that blocked the page still fiercely claims it is CP.

  3. What happens when someone is browsing a website and a nefarious actor starts posting CP? Does the phone scan the browser cache and then report the person for something beyond their control?

So…

Apple’s NeuralHash algorithm (PDF) – the one it’s using for client-side scanning on the iPhone – has been reverse-engineered.

Turns out it was already in iOS 14.3, and someone noticed:

Early tests show that it can tolerate image resizing and compression, but not cropping or rotations. We also have the first collision: two images that hash to the same value. The next step is to generate innocuous images that NeuralHash classifies as prohibited content.
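The collision finding quoted above is easier to see with a much simpler perceptual hash. Here’s a minimal “average hash” sketch of my own (nothing like NeuralHash’s neural network, but it shares the key property): many inputs deliberately map to the same hash, which is what gives tolerance to brightness and compression changes - and what makes collisions between unrelated images possible.

```python
def avg_hash(pixels):
    """Toy perceptual hash over a tiny grayscale image: one bit per
    pixel, set when the pixel is above the image's mean brightness.
    Real perceptual hashes (aHash, pHash, NeuralHash) are far more
    elaborate, but likewise map many inputs to one hash value."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Tolerance: a uniformly brightened copy keeps the same hash...
original   = [10, 10, 200, 200]
brightened = [60, 60, 250, 250]
assert avg_hash(original) == avg_hash(brightened)

# ...but so does an entirely different image: a collision.
unrelated = [0, 0, 50, 50]
assert avg_hash(original) == avg_hash(unrelated)
```

An attacker who can search for inputs hashing to a target value can craft an innocuous-looking image that matches a blocklisted hash, which is exactly the “generate innocuous images that NeuralHash classifies as prohibited content” step described above.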

This was a bad idea from the start, and Apple never seemed to consider the adversarial context of the system as a whole, and not just the cryptography.

Wow color me surprised…

@terminus

So basically, it is now possible to generate fake collision pictures that match the hash of real CSAM and spam Apple with false positives, or swat people. Apple will have to act quickly, because this is scary and dangerous for every iPhone user. They just got completely fucked by someone with a loli picture - oh, the irony.
