Soooo... I should be OK with Apple's human reviewers visually checking any photo on my phone? Assuming it triggered some flag in their entirely opaque review process? Do you realize that the potential for humans to review innocent people's photographs makes this 100x worse? That in itself would be a reason to avoid their platform. Thanks but no thanks.
I would assert the only reason they pursued it in the first place was PR/optics, since the "optics" of not being able to proactively police what users do with the E2EE services you provide are somewhat of a problem. That said, I think the concept of having your own computer covertly report you to the authorities is a level too dystopian to accept, even from Apple.
I agree the reason they pulled it was probably PR/optics. But given the problems with human review of apps on the App Store, I wouldn't be confident that an underpaid employee somewhere wouldn't just blindly agree with the algorithm.
Going from memory here, but IIRC the deal was that on-device they'd compute a perceptual hash using a known algorithm, and if that was a positive match, they'd send the photo to be checked against a second perceptual hash that wasn't publicly disclosed (to try to mitigate the problem of intentional collisions), and then if both were positive matches, they would have human reviewers in the loop.
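For anyone unfamiliar with perceptual hashes: unlike a cryptographic hash, they're designed so that visually similar images produce nearby hashes, and "matching" means the Hamming distance is below some threshold. Apple's on-device hash (NeuralHash) was a learned, neural-network-based one; the toy average-hash below is just to illustrate the general idea, and the function names and distance threshold are made up:

    # Toy "average hash" perceptual hash -- nothing like NeuralHash, but it
    # shows why matching is fuzzy: similar images give hashes that differ
    # in only a few bits.
    from PIL import Image  # pip install Pillow

    def average_hash(path, size=8):
        # Shrink to size x size grayscale, then emit one bit per pixel:
        # 1 if the pixel is brighter than the mean, else 0.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def matches(h1, h2, max_distance=5):
        # "Close enough" rather than exact equality, so recompression or
        # resizing still matches -- and unrelated images can occasionally
        # collide, hence the second hash and the human review step.
        return hamming(h1, h2) <= max_distance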
Not "if then send" -- if that were the case you could detect that your images were matching and then use the system as an oracle to e.g. detect that the database might contain disfavored political content in addition to whatever they claim it contains.
The system was proposed as part of the upload process to your private encrypted cloud storage, so they already have the files. It was designed so that if there were a sufficient number of hits against an encrypted database, then a party possessing a particular private key would learn the decryption key for the encrypted files, purely as a passive effect of there being matches (rough sketch of the idea below).
Like if I muttered random letters of my password while entering it: you'd eventually passively learn my password just by hanging around while I logged in each morning.
This way any matching is completely undetectable on the user's end, at least until the parties possessing the private key, or the people they share the information with, choose to take some action against the user.
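To make the "passively learn the key" part concrete, the underlying primitive is threshold secret sharing: each uploaded photo's safety voucher carries one share of a per-account key, and only once enough shares from matching photos pile up can the holder reconstruct that key. The sketch below shows just that primitive in isolation (Shamir sharing); Apple's actual construction additionally wrapped the shares so that only vouchers for matching images yield usable shares, and the threshold of 30 here is purely illustrative:

    # Toy Shamir threshold secret sharing: any `threshold` shares recover
    # the secret, fewer reveal nothing about it. NOT Apple's protocol,
    # just the idea of "enough matches passively reveal a key".
    import random

    PRIME = 2**127 - 1  # field modulus for the toy example

    def make_shares(secret, threshold, count):
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        shares = []
        for x in range(1, count + 1):
            y = 0
            for c in reversed(coeffs):   # evaluate the polynomial at x (Horner)
                y = (y * x + c) % PRIME
            shares.append((x, y))
        return shares

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    # One share per uploaded photo; the reviewer side only ends up with a
    # usable share when that photo matched the database.
    account_key = random.randrange(PRIME)
    shares = make_shares(account_key, threshold=30, count=1000)
    assert reconstruct(shares[:30]) == account_key   # 30 matches: key revealed
    assert reconstruct(shares[:29]) != account_key   # 29 matches: still hidden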
It was a lot more advanced and abuse-resistant than people assumed. I really wish people had read how it worked instead of guessing that it was something a lot simpler. There were two different perceptual hashes. If both matched, and the number of positive matches was high enough, Apple would become able to decrypt a thumbnail. Neither the device nor the server was able to independently check for a match, so the device wasn't able to just scan all your files and flag the matches. It was tied into the iCloud upload process.
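A very rough way to see the "neither side checks on its own" part: you can think of each voucher's payload as being encrypted under a key derived from the photo's hash, so the device gets no feedback at all, and the server can only open payloads whose hash appears in its own database. Apple's real protocol was a proper private-set-intersection construction with further protections on top of this; the sketch below is a simplification with made-up names, not their design:

    # Toy "voucher" scheme: the payload (thumbnail + key share) is sealed
    # under a key derived from the photo's perceptual hash. The device
    # never learns whether anything matched; the server can only open
    # vouchers whose hash appears in its own database. Demo-quality
    # crypto, stdlib only -- not Apple's PSI protocol.
    import hashlib, hmac, os

    def derive_key(phash):
        return hashlib.sha256(b"voucher-key|" + phash).digest()

    def _keystream(key, nonce, length):
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def seal(key, payload):
        nonce = os.urandom(16)
        ct = bytes(p ^ k for p, k in zip(payload, _keystream(key, nonce, len(payload))))
        tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag

    def unseal(key, blob):
        nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
        if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
            return None  # wrong key: the voucher stays opaque
        return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

    # Device side: one voucher per upload, no feedback about matching.
    def make_voucher(photo_phash, thumbnail_and_key_share):
        return seal(derive_key(photo_phash), thumbnail_and_key_share)

    # Server side: it holds the database hashes, so it can derive the
    # corresponding keys and try them; only vouchers for database hashes open.
    def try_open(voucher, database_hashes):
        for db_hash in database_hashes:
            payload = unseal(derive_key(db_hash), voucher)
            if payload is not None:
                return payload
        return None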
While this is understandable, the unfortunate issue was that Apple could be coerced into adding to the list images that certain authoritarian governments didn't like. Though imo it's all moot if iCloud Photos aren't end-to-end encrypted anyway.
The fact that it is CSAM makes it an even harder problem to solve. With, e.g., copyright infringement, you could keep some kind of record of why a particular file is in the system, potentially even letting trusted and vetted organizations audit the files; doing that for CSAM would be highly illegal and would defeat the purpose of the system.
This is a great point. How do you audit whether they flagged "Tank Man" photos as criminal if you can't legally inspect the CSAM images the database is supposed to contain? Talk about the thin end of the authoritarian wedge...
“Coerced”? Check some recent news and you'll see that, for Apple, rainbow-washing stops the moment they are held responsible for providing basic censorship-circumvention tools.
I am amazed that people still cling to the hope that one day a corporation will do something nice for them without any hidden motive.
Apple also had human reviewers in the mix. The only reason they pulled it was PR/optics.