I’ve read many times that companies do not use hash matching for illustrated content. However, Thorn’s ‘Safer’ program allows companies to create their own hash lists and to share those lists with each other, even when the hashed content does not constitute CSAM.
In addition to our matching service, which contains millions of hashes of verified child sexual abuse material (CSAM), Safer also provides self-managed hash lists that each customer can utilize to build internal hash lists (and even opt to share these lists with the Safer community).
[…]
Each customer will have access to self-managed hash lists to address these content types:
CSAM - Hashes of content your team has verified as novel CSAM.
Sexually exploitative (SE) content - Content that may not meet the legal definition of CSAM but is nonetheless sexually exploitative of children.
[…]
Generally speaking, SE content is content that may not meet the legal definition of CSAM but is nonetheless sexually exploitative of children. Although not CSAM, the content may be associated with a known series of CSAM or otherwise used in a nefarious manner. This content should not be reported to NCMEC, but you will likely want to remove it from your platform.
Once identified, you can add SE content hashes to your SE SaferList to match against these hashes to detect other instances of this content and to help enforce your community guidelines.
It seems, then, that hash matching is employed by some tech companies even for SE content, and while the post doesn’t specify exactly what that covers, it’s reasonable to assume it includes illustrations. Additionally, companies can share their lists with each other, with no external oversight.
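To make the mechanism concrete, here is a minimal sketch of what matching an upload against a shared hash list looks like. This is purely illustrative: the hash list and function names are hypothetical, and real systems like Safer use perceptual hashes (which survive re-encoding and resizing), not the plain cryptographic hash shown here, which only matches byte-identical files.

```python
import hashlib

# Hypothetical shared SE hash list (illustrative only). The entry below
# is simply sha256(b"foo") so the example is self-contained.
se_hash_list = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def hash_image(data: bytes) -> str:
    """Return a hex digest identifying the exact byte content."""
    return hashlib.sha256(data).hexdigest()

def matches_se_list(data: bytes) -> bool:
    """Check an upload against the shared hash list."""
    return hash_image(data) in se_hash_list
```

The point is that once a hash is on the list, checking any new upload against it is a constant-time set lookup, which is why sharing lists between companies scales so easily.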
I don’t know whether Google uses Thorn, but even if they don’t, I imagine they use similar technology on their own platform, as they don’t allow such images.
My main question is this: say someone uploads some illustrated material to Google or a similar company. At the time, it passes all filters and isn’t removed. Then the person deletes their Google account. Later, the same image’s hash is added to Google’s filter. Would Google be able to retroactively find the account that had previously uploaded the matching image and take action against its owner?
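Technically, a retroactive lookup is trivial *if* the platform retains upload records keyed by hash; whether Google keeps such records after account deletion is exactly the open question, and nothing here asserts that they do. A hypothetical sketch, with all names invented for illustration:

```python
import hashlib
from collections import defaultdict

# Hypothetical upload log: content hash -> set of account ids.
# Whether a real platform retains records like this (especially after
# account deletion) is the unknown; this only shows the lookup itself
# is easy when the records exist.
upload_log = defaultdict(set)

def record_upload(account_id: str, data: bytes) -> None:
    """Store which account uploaded which content hash."""
    upload_log[hashlib.sha256(data).hexdigest()].add(account_id)

def retroactive_match(newly_listed_hash: str) -> set:
    """Accounts that previously uploaded content with this hash."""
    return upload_log.get(newly_listed_hash, set())

# An image is uploaded today and passes all filters...
record_upload("user-123", b"some illustration bytes")
# ...and its hash is added to the filter list later.
later_added = hashlib.sha256(b"some illustration bytes").hexdigest()
```

So the answer hinges on data-retention policy, not on any technical limitation: a reverse index from hash to uploader makes the retroactive match a single lookup.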