It was a lot more advanced and abuse-resistant than people assumed. I really wish people had read how it worked instead of guessing it was something much simpler. There were two different perceptual hashes. Only if both matched, and the number of positive matches crossed a threshold, could Apple decrypt a thumbnail. Neither the device nor the server was able to independently check for a match, so the device couldn't just scan all your files and flag the matches; it was tied into the iCloud upload process.
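For intuition, here's a minimal sketch of that threshold property in Python, using Shamir-style secret sharing: the server accumulates one share per positive match, and below the threshold it mathematically cannot reconstruct the decryption key. This is a toy illustration of the general technique, not Apple's actual protocol (which layered private set intersection on top of threshold sharing); the key size, threshold, and share counts here are made up.

```python
# Toy Shamir secret sharing over a prime field. The "secret" stands in for
# the key that would unlock the thumbnails; each share stands in for one
# positive match voucher. Parameters are illustrative, not Apple's.
import random

PRIME = 2**127 - 1  # Mersenne prime; field is large enough for a 64-bit secret

def make_shares(secret: int, threshold: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def eval_poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, eval_poly(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(2**64)   # hypothetical per-account decryption key
THRESHOLD = 30                  # hypothetical match threshold
shares = make_shares(key, THRESHOLD, 100)

print(recover(shares[:THRESHOLD]) == key)      # True: enough matches
print(recover(shares[:THRESHOLD - 1]) == key)  # False (w.h.p.): one match short
```

The point is that "the server learns nothing until the threshold is reached" isn't a policy promise; with one share too few, every possible key is still equally consistent with what the server holds.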
While this is understandable, the unfortunate issue was that Apple could be coerced into adding to the list images that certain authoritarian governments didn't like. Though imo it's all moot if iCloud Photos aren't end-to-end encrypted anyway.
The fact that it is CSAM makes it an even harder problem to solve. With e.g. copyright infringement, you could keep records of why a particular file is in the system, potentially even letting trusted, vetted organizations audit the files; doing that for CSAM would be highly illegal and would defeat the purpose of the system.
This is a great point. How do you audit whether they flagged "Tank Man" photos as criminal if you can't actually control for CSAM images? Talk about the thin end of the authoritarian wedge...
“Coerced”? Check some recent news: Apple's rainbow-washing stops the moment they might be held responsible for providing basic censorship-circumvention tools.
I am amazed that people still cling to the hope that one day a corporation will do something nice for them without any hidden motive.