Apple's mistake is that they seemingly believe there is pushback because people misunderstand how it works. The reality is more nuanced: People understand exactly how it works, and how it works is that it is turn-key onboard spyware that Apple pinky-swears isn't being misused today.
For example, if the scope/mission expands (e.g. foreign governments), suddenly you've created a dragnet for whatever "badness" is of interest in whatever today's moral panic is (e.g. terrorism after 9/11). Plus, perceptual hashing is by design less precise than traditional cryptographic hashing.
A cryptographic hash + file size combo is unlikely to have a false positive within our lifetime (and it has been used successfully by multiple companies to combat CP). The interesting thing about a perceptual hash is that the closer the source material is to the banned material in terms of actual content (e.g. nudity), the more likely a false positive becomes.
Therefore, if Apple does mess up via a false positive and manually reviews your material, that material is more likely to be sensitive private content (involving consenting adults, not CP), because that is what the perceptual hashes are looking for similarities to.
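To make the contrast concrete, here is a minimal Python sketch of a toy perceptual hash (an 8x8 average hash, not Apple's NeuralHash, which is a learned neural-network embedding) next to a cryptographic fingerprint. The file names are hypothetical; the point is only that visually similar images land a few bits apart under a perceptual hash, while a cryptographic hash treats any changed byte as a completely different input:

    import hashlib
    from PIL import Image  # assumes Pillow is available

    def average_hash(img, size=8):
        # Shrink to an 8x8 grayscale thumbnail; each bit records whether a
        # pixel is brighter than the mean. Similar-looking images -> similar bits.
        small = img.convert("L").resize((size, size))
        pixels = list(small.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        # Number of differing bits; "close enough" is treated as a match.
        return bin(a ^ b).count("1")

    def crypto_fingerprint(path):
        # Cryptographic hash plus exact file size: changes completely if even
        # one byte differs, so near-duplicates never match by accident.
        data = open(path, "rb").read()
        return hashlib.sha256(data).hexdigest(), len(data)

    # Hypothetical usage: a re-saved or lightly cropped copy of the same photo
    # usually stays within a few bits of the original's average hash, while its
    # SHA-256 fingerprint is entirely different.
    # h1 = average_hash(Image.open("photo.jpg"))
    # h2 = average_hash(Image.open("photo_resaved.jpg"))
    # print(hamming(h1, h2))  # typically small, e.g. 0-5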
PS - If you think this concept cannot happen in a Western country, see the UK's internet filters as a textbook example. Originally started to fight CP, and now used to fight a ton of other stuff too with more proposals every year: https://en.wikipedia.org/wiki/Web_blocking_in_the_United_Kin...
About that "manual review"...
Refer to pages 10 and 11 of the technical paper.
>The server then uses the decryption key to decrypt the inner encryption layer and extract the NeuralHash and visual derivatives for the CSAM matches.
This "visual derivative" term shows up repeatedly. To me, the implication seems to be that Apple doesn't look at the actual suspected image before deciding whether to proceed with a report. Instead, I infer that they only verify whether (as the device reports) the image's neuralhash is indeed present in the NCMEC database. If my understanding is correct, their "manual review" process actually provides no protection at all against collisions or erroneous database entries.
Further supporting this, on page 4:
>Apple reviews each report to confirm there is a match
It only refers to a match, not about whether the image appears to be illegal.
This makes perfect sense from Apple's perspective (who would want to be in the business of reviewing reports of probably-illegal images?), but it means that the references to a manual review safeguard would seem to be false reassurance. Maybe I'm misunderstanding the paper.
The visual derivative is a preprocessed version of the picture itself, and it's part of the "safety voucher".
It may be grayscale and resized/normalized in other ways. Only Apple knows exactly what it is.
It's only a matter of time until internet trolls find a way to abuse this. The database is stored on users' devices, so someone downloading it, permuting innocuous images until they match the database, and then spreading them for funzies via some underhanded method (wallpaper download sites?)... is not too far-fetched.
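For what it's worth, here is a naive sketch of that "permute until it matches" idea, written against a toy average hash rather than NeuralHash (a real attack on a learned perceptual hash would more likely use gradient-based adversarial perturbation, but the hill-climbing principle is the same; the threshold, iteration counts, and file names here are made up):

    import random
    from PIL import Image  # assumes Pillow is available

    def average_hash(img, size=8):
        small = img.convert("L").resize((size, size))
        pixels = list(small.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def nudge(img, strength=6, count=200):
        # Brighten a handful of random pixels slightly; visually negligible.
        out = img.convert("RGB")
        px = out.load()
        for _ in range(count):
            x, y = random.randrange(out.width), random.randrange(out.height)
            r, g, b = px[x, y]
            px[x, y] = (min(r + strength, 255), min(g + strength, 255), b)
        return out

    def permute_until_match(img, target_hash, threshold=4, max_tries=50_000):
        # Hill-climb: keep any nudge that moves the perceptual hash closer to
        # the target, stop once it is within the match threshold.
        best, best_dist = img, hamming(average_hash(img), target_hash)
        for _ in range(max_tries):
            cand = nudge(best)
            dist = hamming(average_hash(cand), target_hash)
            if dist <= threshold:
                return cand  # still looks innocuous, hash now "matches"
            if dist < best_dist:
                best, best_dist = cand, dist
        return None

    # Hypothetical usage: 'target' would be a hash lifted from the on-device
    # database, "wallpaper.jpg" an innocuous image to doctor.
    # doctored = permute_until_match(Image.open("wallpaper.jpg"), target)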
End-to-end encryption of messages is comparatively easy, as the devices can handle all of it internally. However, losing your iPhone is one of the main reasons to have an iCloud backup. Require users to manage a private key, and anyone who loses it also loses all their data.
Most people don’t really want end-to-end encryption on consumer backup services, because of the associated risks. If, however, you don’t want unsecured backups, you can handle this manually.
Of course nobody wants the company to actually look at your data, but that’s a separate issue.
The main selling point of Apple is how well integrated the ecosystem is; they could make it super simple to back up the private key across your other devices, like your watch, tablet and laptop.
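As a hedged sketch of what "backing up the private key across your devices" could look like (the names are hypothetical, and a real design such as iCloud Keychain layers hardware-backed keys and recovery escrow on top of this): generate one master backup key, then store only copies of it wrapped under per-device keys, so losing the phone doesn't mean losing the data as long as any enrolled device survives.

    from cryptography.fernet import Fernet  # assumes the 'cryptography' package

    # One master key actually encrypts the backups.
    master_backup_key = Fernet.generate_key()

    # Per-device keys; in a real design these would live in each device's
    # secure hardware and never leave it.
    device_keys = {
        "iphone": Fernet.generate_key(),
        "watch": Fernet.generate_key(),
        "macbook": Fernet.generate_key(),
    }

    # What the cloud stores: one wrapped (encrypted) copy of the master key
    # per enrolled device -- never the master key itself.
    wrapped = {name: Fernet(k).encrypt(master_backup_key)
               for name, k in device_keys.items()}

    # Lose the iPhone? Any other enrolled device can still unwrap the master
    # key and decrypt the backups.
    recovered = Fernet(device_keys["macbook"]).decrypt(wrapped["macbook"])
    assert recovered == master_backup_key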
Apple has gotten a ton of heat over this, and they haven't once mentioned that e2e on iCloud is something they're working on or something this technology would make possible, so can people stop spreading the narrative that this is their goal? It's completely baseless.
Most of the time, Apple seems to think through what bad actors will do with their stuff. Their scale and (perceived) reputation basically requires this.
I don't see any reasonable way here. Either somebody looks at the images - i.e. they have some underpaid, poorly trained grunt looking at horrible abuse images (which are probably illegal to view) all day, with about 3 seconds to make a decision on each - or what they're doing is just a sham: "Did the computer say 'it matches'? Yes it did! Review complete, match confirmed!" I don't see any non-horrible way of doing non-sham reviews here.
People keep trading away privacy for convenience. We won't have any privacy left if we continue down that road. People need to start looking at these tech giants with extreme scepticism, the way we do with companies from China. We need to think about our American companies the exact same way, and stop trusting them to care about our privacy.
> We won't have any privacy left if we continue to go down that road.
If people looked at today's state (phone movement tracking, voice capture, cameras and face ID, etc.) through the mental model of 30 years ago, they would see neither privacy nor freedom. At all.
Sadly, it is not a matter of preserving what we still have; if we want freedom and privacy, we will have to reacquire them. And a lot of the price will have to be paid in blood.
Before this I was thinking about dumping Pixel phones for an iPhone. Now I'm going to try out CalyxOS on my Pixel 2. Once I finally decide on an upgrade to a Pixel 5, I'll probably give GrapheneOS a shot. Hoping that makes a difference; otherwise it will be a dumb phone again.
> who would want to be in the business of reviewing reports of probably-illegal images?
I could think of a couple of companies this could be outsourced to, with strict business and privacy agreements in place, of course, and also conveniently on the lowest end of European minimum wage for the tier 1 reviewers.
> Apple's mistake is that they seemingly believe there is pushback because people misunderstand how it works.
I've read too many contentious-change memos to believe it's a mistake. These things are authored by committee with Legal always present, and everything is written expecting it to leak. It's just a passive way to manufacture consent by convincing at least some segment of people to doubt their own thought processes. I'd call it "gaslighting" if that word wasn't already massively misused. See also: iPhone 4 owners who were "holding it wrong".
Also, there's a "we can't show you the exact CP image that one of yours matched, because that would also be illegal" catch-22 that basically gives them the power to shroud this whole system in legally-obligated secrecy. Secrecy which will no doubt be abused to hide much more.
I do worry this is basically automating and scaling up the whole "idiot working at the mall photo development booth turns family in for child porn because they took pictures of their 3-year-old in the bath" story that shows up every now and then.
I spent about a year working at a retail photo processing lab - I can assure you that what you describe is exceedingly rare.
I saw “questionable” images on probably a weekly basis. Maybe once a month something would come through that I thought warranted bringing my supervisor over to provide a second set of eyes, because I didn’t think it warranted calling the police over but wasn’t entirely sure. An example that comes to mind was a series of photos that appeared to show a young woman bound and gagged to a pipe in a basement area - but she was obviously of age, her eyes in the photos didn’t look fearful, and the rest of the roll showed her free and apparently happy. Finally, I was the person who accepted the roll of film and knew that she was the one who dropped it off. We didn’t call anyone on that, but I figured it wouldn’t hurt to get a second opinion. When she came to pick them up, I did ask her to review them and make sure everything was OK with them.
Twice in about a year we had photos come through that were obviously what’s now called “CSAM”. In both cases I called the police without consulting management, then told them afterward. Also in both cases, the negatives were put into the store’s safe and the owners of the photos were arrested on-site when they came to pick them up.
All of this is to say - photo lab techs see some shit. If they called the police every time there was a picture of a kid in the bath, police brutality would be at an all time low because they’d not have time to respond to anything else. :)
Reading through Daring Fireball's technical discussion of it https://daringfireball.net/2021/08/apple_child_safety_initia... does much to allay my worries, although my natural distrust of authority and the powerful still makes me dislike anything like this - no matter the spin.
Sometimes the slippery slope is not a fallacy, but the natural consequence of seeking more control. This feature can be horribly misused in the future, for example to find people who have sent/received/downloaded Tiananmen pictures, because instant-messaging pictures can be automatically added to your albums.
You can't trust a feature whose only safeguard against future abuse is such a pinky promise.
Recall that Apple devices include an ML accelerator; at some point or another, the next step down this slope will be adding this to the display pipeline.
PhotoDNA and similar systems run on the provider's own machines, not on your own private phone.
Do you know how discovery works in court? I'm guessing not.
Once a hash matches, law enforcement would need a subpoena to access the raw image. If a person were to be arrested, that evidence would be turned over to the defense.
It would be very obvious that a picture of a police officer was not child pornography.
Are you under the impression people are jailed for hash collisions? Because that's not the case at all....
I don't think the suggestion is that a hash of a police uniform will be used to convict of CP but rather the idea that a new law will be passed that - in order to protect the police from 'harassment' - will outlaw taking pictures of the police.
So in that case in discovery the picture of a police officer would be evidence that you broke that hypothetical law. The technology deployed to protect the children opens up the possibility of its deployment for other things later, based on legal requirements of course.
For example, "don't take pictures of copyrighted material" would be something people might want to work on.
I head over to my friend Midev's house with precursor chemicals for VX. You complain that having the precursors to VX lying around is not a good idea. I point out that creating VX is purely hypothetical. Sure, if that happened, it'd be terrible. But, that's not happening.
Apple is deploying a technology with few legitimate uses that makes terrible things not only possible but easy all without the voluntary consent of the device's owner.
Yes, they are. The point is that this change makes this hypothetical too close to being true, so the change should be abandoned. It skews the power balance too much into the hands of a small number of unknown people, so the rest are protesting.
Appealing to the slippery slope is itself a fallacy.
"While this image may be insightful for understanding the character of the fallacy, it represents a misunderstanding of the nature of the causal relations between events. Every causal claim requires a separate argument. Hence, any "slipping" to be found is only in the clumsy thinking of the arguer, who has failed to provide sufficient evidence that one causally explained event can serve as an explanation for another event or for a series of events"
You need to argue things that are actually happening. Appealing to hypotheticals, especially when technically incorrect, serves no one.
What's actually happening is that a mechanism is being introduced that makes it easy to censor anything Apple wants censored. Apple promises to only use it for child pornography.
Apple, however, is still not a sovereign state, and as such must bow to the wishes of actual sovereign states. Sovereign states have proven again and again that they will grab as much power as they can, and doing it insidiously, in the form of a private database of hashes of undesired content, is especially attractive to them.
This is not a hypothetical. For a real-world example, look to the UK, where the nationwide internet blocking system is already used beyond its original scope. Or think of what countries like China will certainly do with such a mechanism. Scope creep, in the form of power grabs by nation states, is a realistic concern based on vast historical experience, not a fallacy.
Apart from the concern of scope creep, there is also the concern of false positives. When deployed at such a scale, there will undoubtedly be perfectly legitimate images being flagged. I'm not happy about my phone containing software that's always vigilant, ready to ruin my life over a false positive.
OK, first off, the slippery slope referred to here is the domino theory, which is a faulty idea of the causality of events. We can say a slippery slope regarding events is likely faulty because, unless the causal chain between any two events is very tightly linked, our understanding of causality is not sufficient to follow it. That is to say, we have a very tight link between a domino falling, hitting another domino, and toppling it, but we do not have a very tight link between one country going communist and another one doing so later. Hence, every causal claim requires a separate argument.
The slippery slopes being discussed here are not about events and the causes that link them, but about the applications of technology and to a certain extent about laws being used beyond their original mandate. This kind of slippery slope is a logical one.
Thus the technology being used for this means it can be used for other things - what kinds of things can it be used for? Are there things that people would like to make illegal, or that are illegal now, that this technology could be used to catch people infringing on under hypothetical future laws? In other words: if we allow this technology on our devices now, are we opening ourselves up to other potential uses of the technology in the future that will hurt us?
If this argument seems the same to you as the domino theory, and you think we must deal with every hypothetical problem only when it actually occurs, I wonder how you are ever able to plan for any eventuality.
I understand your argument, that a court (in a democratic country with a rule of law) would expect that the investigators extract the original image from the accused's phone with the appropriate chain of custody of the evidence.
However, that does not mean that, while this investigation is under way, the accused is free to go about their business.
For CP, or other crimes of violence, that makes sense.
But if laws are passed against peaceful protest, then someone could be held in detention while those photos are investigated. An anonymous photo is now trackable.
Let’s conveniently ignore all the facts that speak against the planned dystopia, and focus on the positives of having all facets of your life constrained and controlled by authorities.
There are plenty of examples where it is fallacious though, often absurdly so. The important distinction is: does A abstractly "lead to" B, or does A directly enable B, but for some will to use it that way.
For one example, it was always absurd to suggest that gay marriage would lead to legalized pedophilia, bestiality, or marriage to objects.
Meanwhile, trolls and staunch intellectuals demanding "enough" evidence think they can gatekeep what's acceptable to notice is already slipping and gaslight others for noticing and worrying, only to find that years later, the Overton window shifted and newer generations were none the wiser, bringing the things that were previously unacceptable into the mainstream with reckless abandon.
(edited for brevity, expanding more on this in a reply...)
The corporation that is the U.S government acts just like one: roadmaps and planning ahead for radical policy changes to occur within longer spans of time to signify progress (or something), and then absolutely losing their shit if an opposing candidate wins & gets in the way of their progress, as we observed these past 5 years.
Acceptance of pedophilia has been set in motion for several years now. One only need look at some of California's SB-145, the ever-expanding reach of public schools teaching sex-ed to kindergartners, and the N number of drag queens sexualizing themselves in front of young children in public libraries and schools with grand applause and media gushing as these facilities move a few steps away from becoming brothels.
(See what I did there? Surely our libraries won't become brothels, that would be absurd! I mean, the chances of that happening are 2nd to none; and if 10 years from now, libraries still aren't brothels, then I'd have been wrong after all, and my playful exaggeration will have expired. A gatekeeper would have done their noble internet duty to inform me just how idiotic my suggestion was from the start, to be sure. The joke's on them for losing 15 minutes crafting the perfect response to my alarmism that will definitely change my mind.)
We had none of this just 3 years ago, but I'm pretty confident I'll be gaslit in response to my highly misinformed and "hateful" comments, as it goes. ;) That's no matter though. I take responsibility for that by standing by what I say, not bending to the winds of outrage.
The only example you mentioned that is attributable to government is sex-ed.
Sex education isn't about teaching you how to have sex, it's "here's all the reasons you need to be very careful with sex".
In my classes I learned about many different STIs and the dangers of unprotected sex. Not once was I taught a Kamasutra position or what to do with my fingers.
For younger kids I assume the curriculum would be more about what kinds of behaviors they need to be careful of and immediately warn other adults about.
I suppose the name is very unfortunate because a lot of people seem to think sex-ed is about getting young people to start having sex, when in fact it has the opposite result and we can see it in statistics.
You're implying a slippery slope fallaciously. Legal protections for transgender people does not directly follow from legalized gay marriage.
Also, that father violated a gag order, and acted against the decisions of the both custodial parent and his child's medical professionals. You're being disingenuous to misrepresent that as "for calling his daughter[sic] she".
It is only a fallacy when it’s not backed up by solid reasoning. There are obviously slippery slopes. And sometimes they aren’t there, even though people claim they are. It’s a shame that some have been conditioned to immediately make the connection between the two.
I'm surprised to hear this falsehood being repeated here. There is nothing inherently fallacious about slippery slope arguments. Like almost all rhetorical tactics they can be used fallaciously. The fact that slippery slope is not inherently fallacious is very well documented in the academic literature on rhetoric.
It is a logical fallacy in that the consequences do not logically follow from the proposition. However that doesn’t say anything about whether they are likely to follow in practice, only that there is no logical proof that they follow.
In this case, the arguments against this mechanism are generally slippery slope arguments - logic doesn’t guarantee the bad outcomes. However, it’s quite reasonable to be concerned that the bad outcomes will happen because of the actors involved.
>isn't even really a logical fallacy. In fact, I can think of a number of circumstances where it turned out to be true.
This represents a very common misunderstanding of fallacies; IIRC it's called the Fallacy Fallacy: fallacies don't give you any information about the truth of the conclusion, they only indicate that the logic that ties that conclusion to the premises is unsound. So "slippery slope fallacy" means "worse things don't logically follow in increments from better things", but that doesn't mean that in any real situation "worse things follow in increments from better things" is untrue.
The most common example of this misunderstanding is probably Occam's Razor, it says nothing about whether things are or aren't more complex in any real situation, only that it's easy to reason about things if you don't add additional aspects that aren't needed.
Pictures are just a pretext. Step #2 will be URLs of pictures that users send each other using the phone's keyboard. Step #3 will quietly expand the definition of a URL to any keyboard input. At that point, Apple will have a god's-eye view into all communications of their users, be it over WhatsApp, Signal, SMS or an online forum. Imagine having a dashboard that shows in real time, with GPS coordinates, how many text messages contain the word "enough"! The same scanner module can detect certain words in phone calls. What dictator wouldn't want that?
If there is ever a showcase of the slippery slope fallacy, this comment would be in it.
Step back for a second and think about whether falling for the ‘but the children’ trap really tells you Apple wants to go in the absurd direction you are pointing to. Think about whether it makes any sense at all that they would want to go that way, think about why they would even let you know, and think about how plausible it is for these steps to lead to each other.
It's not fallacy. There's huge demand for such monitoring and few countries are as competent as China to do it themselves. Apple already works closely with CCP, so it won't face any moral dilemmas. The only obstacle is damage to reputation in the US and Apple is assessing the scope of this damage right now.
I started writing an explanation of how you're mistaken, and then I realised just how quickly you went from Apple using a very common method for identifying CSAM (which Facebook already uses, yet I've not really heard much in the way of complaints about that) to Apple wanting to listen for keywords in phone calls, and just how unlikely it is you'd reasonably think through my response.
Perhaps you're the one who's wrong though? Your response makes no sense other than to grandstand how hasty and unreasonable the OP is being. You could have just omitted it and we'd be no worse off.
A good way to think about any potential law like this is to imagine your worst enemy or most hated politician being given the power to abuse said law, and then think about all the ways they could use it to make your life miserable.
Always think about the worst-case scenario when it comes to privacy, because that will be the end result.
To me, we don't even consider the actual worst-case scenario.
Is there any doubt 50 years from now all this will be on algorithmic autopilot for digital authoritarianism? All societies will have Chinese style monitoring but to an unimaginable degree even to people in China right now.
People outside IT were barely even using the internet 25 years ago. We are at the very start of this and already too far gone. You just have to enjoy these times and what we still have.
It's not a mistake, it's a rhetorical trick - to present somebody opposing your actions as a misinformed idiot who just doesn't know what they're talking about, and once the proper technobabble is deployed, every reasonable person will realize their mistake. And if they don't, they must be a much bigger idiot than we thought; there's no other reasonable conclusion.
And HN's mistake is thinking any of this will matter. Apple only cares about money, always has, always will. And I'm willing to bet they are going to continue beating their revenue and profit record quarter after quarter. When their actions do not affect the one thing they care about, why should they care what anyone says?
I don't agree with Apple's move here either; I'm just explaining that they are not doing anything illegal, no matter how much we may not like it. Of course we can choose not to buy their products, but do you think the average person considers this before deciding on their next phone?
> Apple's mistake is that they seemingly believe there is pushback because people misunderstand how it works.
They don't believe this. The lack-of-understanding trick is often used by governments and corporations because it robs the opposing party of the choice to oppose, as opposition is framed as a lack of understanding instead of a political choice. It's a semi-elaborate way of calling the people who disagree idiots. This is why governments "educate" on topics they don't want people to oppose.
> Fast forward to the present where law enforcement in Germany, Australia and Singapore have used contact-tracing for other purposes.
I don't understand how people were so naive as to install those applications. I did the exact opposite: wiped my phone and bought two cheap ones so I can separate my calling phone (which I leave at home) from my media phone (no connections) and my GPS phone (no connections aside from GPS, and turned off most of the time).
Yeah, this is my reading of the situation too. From a technical point of view, what Apple is doing seems like a reasonable approach to me; what I absolutely do not trust is any claim that it will never ever be expanded to include, say, governments requiring similar tech be applied to all messages before they are encrypted in order to look for dissidents… and then generally at opposition parties. (And of course there are enough competent technologists available for making custom encrypted chat apps that the hostile governments could only achieve their goals by putting the AI in the frame buffer, which opens the door to subliminal attacks against political enemies — targeting the AI with single frames such that the device user doesn’t even notice anything).
But I know I have paradoxical values here:
On the one hand, I regard abusive porn as evil. I want it stopped.
On the other, I have witnessed formerly-legal acts become illegal, and grew up in a place where there are sexual acts you can legally perform yet not legally possess images of. I think the governments (and people in general) follow a sense of disgust-minimisation, even when this is at the expense of harm-minimisation.
And any AI which can genuinely detect a category of image can also be used to generate novel examples that are also in that category without directly harming a real human.
I don’t have a strong grasp on what I expect the future to look like. My idea of what “the singularity” is, is that rather than a single point in the future where all tech gets invented at once, it’s an event horizon beyond which we can no longer make reasonably accurate predictions. I think this horizon is currently 2025-2035, and that tech is changing the landscape of what morality looks like so fast that I can’t see past that.
> A cryptographic hash + file size combo is unlikely to have a false positive within our lifetime
The chance of a "technical" false positive is tiny, but they need to look at all false positives:
If some joker sends you a lewd picture through WhatsApp - which by default saves every picture to your photo library - then you are now on the naughty list.
Good luck trying to explain your way out of that one.
The context there was that a typical CSAM hash like PhotoDNA is NOT a cryptographic hash, but a perceptual one, which DOES have notable false positive issues.
WhatsApp saves all received images to your library? That seems hard to believe. I can't imagine that anyone would appreciate having their personal photo collection cluttered with other people's photos and all of the meme images that people share.
>these features are part of an “important mission” for keeping children safe.
The other mistake is thinking that removing privacy, with long-lasting effects on society [1] and thus attacking the very fabric of a free democratic society that respects human rights, would make children any safer. When they grow up, they might have to live in concentration camps or a China-like regime. That is very safe, but I'm not sure it is very pleasant.
It's about values. It should be absolutely illegal to install any spyware on a personal device without a warrant.
You nailed it. Apple is so flexible with their morals when money is on the line that you know this will be abused by some authoritarian government. They will ask Apple to scan for some content, Apple will say no, the government will ban Apple from selling their products, and suddenly Apple will change their mind. Next thing you know, a huge swath of journalists or dissidents is being disappeared to reeducation camps. Apple is working overtime to wrestle the “most evil company in the world” title from Google and Amazon.
They sacked these government employees because they were allegedly engaged in "anti-national activities" - no courts, no trial, no hearing. Just a ruling that, since you are allegedly found to be engaged in such activities, your employment is terminated and there is no recourse.
https://www.organiser.org/Encyc/2021/8/3/Govt-s-policy-of-No...
Here they have decided not to give passports to protesters, or to give them government jobs, because of "anti-national activities" - again the same trope. This might seem a small issue, but here is a journalist https://thewire.in/rights/kashmiri-journalist-masrat-zahra-p...
whose father was beaten to a pulp because she, being a decorated journalist, had criticized the government. They slapped terrorism charges on her, and the German government gave her asylum, so they did the next best thing: hurt her parents, because why not.
Now, if she hadn't had her passport because she was already a known suspect of "anti-national activities", asylum might have been a little difficult - so that is what they want.
I know 100% that the Indian government, and for that matter the Pakistani government as well, will use this technology in the "national interest" to find protesters and dissidents. They either do not even go to trial or, if they do, the trials are sham ones, so it doesn't matter if they make a show of a trial because the guilty are condemned already.
I see whataboutism ("Google Drive scans! MS scans!") and "here are the technical details of why we can't just look...". The bottom line is that none of that matters. The 99.999% problem is that Apple is spying on you on your. own. device. I don't know how many techies I've had to shush and point that out to. It doesn't matter what kind of hash or algo or "layers of protection" are being used. What they are doing is inherently wrong and totally the antithesis of their privacy mantra. It will also inevitably be used by governments for anti-humanitarian purposes, to imprison people for thought crimes against the state. Unless they (Apple) backpedal on this abomination soon, I'm definitely minimizing my Apple purchases in the near future and moving off the platform entirely long term. Even Google isn't so bold as to put spyware on their devices to police you for the government. Suddenly ads don't seem that bad...
The mistake here is thinking the iPhone is your own device. If they can view anything stored there, remove anything from there, control anything you install and run, and disable it at any moment they wish - do you actually own it, or are you just granted the use of it as long as Apple approves? Apple certainly doesn't ask your permission to do anything on "your own device" - but you have to ask theirs. Is that how ownership works?
Having ditched Apple laptops this year (after 18 years of Mac computers) for Linux (on System76 hw), I've been wondering what it would take to pry the iPhone out of my hands. I think I've found it.
No, Apple fully understands the situation, and in fact, you don't see many Apple fanboys in the comments defending the firm for once because even they know it's wrong.
Apple just communicates this message because that's what they need to say for their business, that's all.
And they can do so because they also know this uproar will disappear like tears in the rain in a short time if they keep smiling. Their bottom line will not be affected because people don't care.
Remember that Bill Gates, whom we used to associate with Satan in the 90s, is now viewed as a hero.
Remember that about 1 American out of 160 works at Amazon, a company with a reputation for treating its employees terribly. Most people hence know somebody impacted by this, but don't stop shopping at Amazon.
Remember you could say you grab them by the pussy and be elected president.
Remember that people hand over their data en masse to someone who said "they trust me, the dumb fucks".
Remember that when somebody risked his life to unveil a massive conspiracy to spy on the entire world, the person has been hunted as a traitor.
Remember that one president could lie about WMD, go to war against the vote of the UN, kill thousands of civilians, spend 600 billion dollars there while poverty was rising at home, and suffer no consequence whatsoever. Most people have forgotten already. Hell, my girlfriend doesn't even remember the name of the guy.
But it's you who has to be monitored all the time for CP. And you will be punished harshly if you behave badly, not just according to the law, but also according to social contracts - both of which powerful people can now escape with money and PR.
It's just a matter of time until someone creates a "ThisChildPornDoesNotExist" site, then sends links to targets to have their lives ruined. I am extremely worried by humans policing the Intertubes, and even more worried by algorithms doing so, because they make the perfect excuse for sloppy checking. See YouTube's DMCA-related takedowns for multiple examples.
Probably so, but smarter malware that grabs fake images from an encrypted server somewhere, plants them on the victim's PC/phone/whatever, then deletes itself, isn't rocket science.
> Apple's mistake is that they seemingly believe there is pushback because people misunderstand how it works. The reality is more nuanced: People understand exactly how it works, and how it works is that it is turn-key onboard spyware that Apple pinky-swears isn't being misused today.
Not entirely. It makes it so that, to achieve a "full" collision, you have to ensure that the sets of data collide both in SHA hash and in length, helping to prevent attacks that rely on appending/prepending/removing data (for example, "length extension attacks" involve manipulation of the hash by appending data).
TL;DR: It is harder to find a collision SHA(B) for SHA(A) if you add the additional constraint that the length of B must match the length of A.
The known collision attacks for the MD-family and SHA-1 all in fact produce collisions with the exact same length. The method used necessarily does this.
Which part? The fact that storing "length" along with a hash is not superfluous?
You can probably find many things which have a SHA hash of "ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb" (infinite things, if we assume arbitrary-sized inputs), but you can only find ONE thing which has that hash and has length 1. I just made it impossible (not just unlikely) for you to find a collision.
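A quick check of that claim (a minimal sketch; the quoted digest is the well-known SHA-256 of the single byte "a", so the length-1 constraint really does pin it down to one input, which can simply be enumerated):

    import hashlib

    def fingerprint(data):
        # Pair the cryptographic hash with the exact byte length; two inputs
        # are treated as identical only if BOTH fields match.
        return hashlib.sha256(data).hexdigest(), len(data)

    digest, size = fingerprint(b"a")
    assert digest == "ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb"
    assert size == 1

    # With the length fixed at 1, the entire input space is 256 values, so a
    # collision is not just unlikely but impossible to find.
    matches = [bytes([i]) for i in range(256)
               if fingerprint(bytes([i])) == (digest, size)]
    assert matches == [b"a"]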
> The known collision attacks for the MD-family and SHA-1 all in fact produce collisions with the exact same length.
Emphasis mine. And note that I did not claim otherwise in my comment.
> Which part? The fact that storing "length" along with a hash is not superfluous?
The part where you make a false claim out of ignorance.
> You can probably find many things which have a SHA hash of "ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb"
No reason I should go looking for such things. You're the one making the false claims, if you have found "many things" with that hash then list them to prove your point, otherwise go away.
> The part where you make a false claim out of ignorance.
Which false claim did I make? I'm still waiting...
> No reason I should go looking for such things. You're the one making the false claims, if you have found "many things" with that hash then list them to prove your point, otherwise go away.
You don't need to look for those things. By definition, you know they exist. I don't need to find or enumerate all primes to know that an infinite number of them exist.
By definition, assuming arbitrarily-sized inputs, there are infinite messages that collide to the same hash value.
But, don't worry... it is clear you have no actual meaningful point to add, so I won't continue this conversation with you any further. Have a nice day.
You are misrepresenting or more likely have simply misunderstood the Pigeonhole Principle. Which I guess makes sense for somebody who didn't understand why length extension matters. It does not prove that any particular output will recur, and what you've got here is one very particular output.
Again, you need actual examples. Not handwaving, not the unwavering yet entirely unjustified certainty that you're correct, you need examples. And you don't have any.
Again, which false claim have I made? Be specific and quote me: you need actual examples, not handwaving.
Until you do that, I'm not pursuing this conversation any further. Have a nice day.
EDIT: Also, if you do want to have a conversation, make sure to stick to HN rules and talk about what is being discussed, rather than about me. Thanks.
I disagree. Google announced the same thing for Google Drive and didn’t receive this level of pushback. If you’re Apple, the only way that’s possible is if you explained it badly (i.e. differently than Google did).
I’m guessing their real priority is to keep unlawful content from ever reaching their servers. You wouldn’t know that from reading their press releases, though.
Why wouldn't they show a pop-up on the phone highlighting which images may be problematic and that a report is being sent to the government? I might change my mind about uploading these things to my iCloud. Can't wait for all the examples of false positives that will surface.
Why wouldn't they delete them directly, rather than send a report to the government?
Why wouldn't they go even further: if a problematic image is detected on a webpage, don't allow it to reach my filesystem either, so that I don't get into trouble if for some reason the police decide to search my laptop.
All relevant questions. There is only one answer: so that when the police knock on your door, you are blissfully unaware.
Quite honestly, this is all too sad. Again the argument about CP that everyone will buy, but which doesn't work in the wild. There are some bad people out there, but if these people are into buying or creating CP, they will rarely upload it to iCloud - especially now, when everyone knows Apple is looking for it. So they will go even deeper into hiding, while everyone else has to live with devices that are spying on them...
The difference is that Google doesn’t advertise itself as a bastion of privacy. Google spying on you is expected. More consumers also pay for iCloud vs Google Drive. Expectations change when you’re paying for a service.
Isn't the difference that with Google Drive, they scan the files you upload to their servers, whereas this latest Apple debacle is about scanning files locally on the device itself?
If Apple is to be believed (and doesn't change anything later) then they're doing the same thing. They're scanning the things that are being uploaded to iCloud. They're just doing it locally instead of doing it after they're on the iCloud servers.
You can literally read any thread on HN or Reddit to see this is the case. Thousands of comments about facial recognition and AI, completely misunderstanding how it works. The EFF article itself was FUD.
The EFF article starts right off claiming a backdoor, which is completely incorrect. Taking hashes of client-side content is not a backdoor, and the EFF should be embarrassed for not understanding that.
This client-side scanning of local files can be easily extended to scan the whole device. That's why the EFF was calling it a backdoor. All it takes is for the right government to put the right pressure on Apple. In a way it's kind of worse than a backdoor, since your phone is actively reporting any suspicious-looking files to the authorities all the time, on its own. The government doesn't need to actively look for it - just add the appropriate hash of whatever it wants discovered to the database and wait.
What's the bad faith? What single thing have I said that is incorrect? It's not breaking E2E encryption. No one is locking people up for political photos. It's not using facial recognition.
I don't like FUD driving discussions. I think we need to have serious discussions about this technology. I think people that don't understand it shouldn't participate....
It's not breaking it, but it is rendering it moot. Why even bother with E2E encryption if you can't trust your endpoint? Would upgrading your exterior door's security protect you from theft if it's your roommate who's stealing from you?
> No one is locking people up for political photos.
Authoritarian dictatorships like China are, and they'd love to use this tool to help them with that.
You have repeatedly misrepresented other people's viewpoints in order to win this argument. Maybe you have completely misunderstood what the discussion is about, but since you're doing it repeatedly and have been called on it multiple times, it is fair for others to assume bad faith.
Take a close look at the way this works. If there are enough hash matches, Apple apparently gets to look at all the pictures on the phone. The description says "threshold reached, all content visible and unlocked". "All", not "Matched".
Matches can be forced from the outside, by sending pictures to the phone. They don't even have to be from the known-bad image set. Using an adversarial system to try to create hash matches might allow generating innocuous pictures which trigger a phone dump.
Expect a toolset for doing that to be developed. Although it may take a while for word of it to leak out. This has applications for entrapment, swatting, political retribution, etc.
Even regardless of whether it's all photos or all matched photos, the issue is that Apple is actively and publicly building adversarial black boxes that run on your device, at the OS level (on iOS AND Macs!), with the sole and explicit goal of snitching on you to the government for possessing information that is deemed wrong.
The actual specifics of what material is currently deemed wrong is irrelevant, because it will be expanded.
This. Is. Not Okay.
This is something you'd expect China to mandate, not a Western company like Apple to advertise as a feature.
I don't see why this is surprising at all. Apple has never cared about the privacy of their Chinese users.
Privacy was never a principle for them, merely a marketing feature for technically illiterate westerners.
> If there are enough hash matches, Apple apparently gets to look at all the pictures on the phone.
Not according to the technical summary PDF they released[1], "Only those images that have a voucher that corresponds to a true CSAM match can have their vouchers’ data decrypted [...] Even if the device-generated inner encryption key for the account is reconstructed based on the above process, the image information inside the safety voucher for non-matches is still protected by the outer layer of encryption"
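For intuition on how a "threshold" can be enforced cryptographically rather than by policy, here is a minimal Shamir secret sharing sketch in Python. Apple's actual construction (which, per its technical summary, combines private set intersection with threshold secret sharing) is more involved; this toy, with made-up parameters, only illustrates why the per-account inner key stays unrecoverable until enough matching vouchers contribute their shares:

    import random

    PRIME = 2**127 - 1  # a Mersenne prime; any prime larger than the secret works

    def make_shares(secret, threshold, count):
        # A random polynomial of degree threshold-1 with the secret as its
        # constant term; each share is one point on the polynomial.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, count + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    account_key = random.randrange(PRIME)                        # the "inner encryption key"
    shares = make_shares(account_key, threshold=10, count=1000)  # one share per voucher

    # Below the threshold the key is information-theoretically hidden; at the
    # threshold it is recovered exactly.
    assert reconstruct(shares[:10]) == account_key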
This is spin by Apple, driven by ignorance or deliberately misleading statements. NCMEC's database contains images that are not CSAM, not illegal and are not even borderline (grey area).
The fact there is a match in NCMEC's database does not mean the content is CSAM. It does not even mean it's illegal. In fact, it doesn't even mean there's a single person in the picture frame.
And no, this is not theoretical. NCMEC's database is this much of a mess today.
In fact, given that there's a threshold at all, I'm pushed into believing that Apple knows the database is faulty.
> NCMEC's database contains images that are not CSAM, not illegal and are not even borderline (grey area).
Got a handy link about this? A quick search doesn't show anything obvious.
> This is spin by Apple, driven by ignorance or deliberately misleading statements.
The claim was that Apple got access to all the pictures on the phone which isn't correct according to their technical summary. Nothing to do with the validity of NCMEC hashes.
I think the problem is that no one will ever be able to validate what's in that database. "I need to see that database so I can ensure that it's all CP" isn't going to go over well.
Anyone can already do that by sending an iOS user an incriminating message via AirDrop or by sharing an album. Unless this search is limited to only photos saved to the personal Photos album, all apps on the iOS device become an evidence-planting attack vector, including the built-in Mail app, third-party apps that can accept image messages or have planted illegal images, and even iMessage.
This is especially important given that NSO Group's Pegasus works through sending multimedia iMessages/WhatsApps with exploits. The former auto-uploads to iCloud, and the latter can be configured to auto-upload all received media to iCloud.
It almost sounds like a feature Apple is building to explicitly support the imprisonment of dissidents through the planting of false child pornography.
> It almost sounds like a feature Apple is building to explicitly support the imprisonment of dissidents through the planting of false child pornography.
But you can already do this without these changes - cf the UK police officer currently under investigation for having CP on her phone that someone else sent her and (she claims) never opened.
I like to think this case will call for reform in these laws.
"Williams was sentenced to 200 hours of unpaid work and placed on the sex offender register for five years, which damages her chances of getting another job.
Williams completed her community service in a charity shop, and having finished the hours the court ordered her to do, returned to volunteer further."
Unfortunately, if our experience with such events teaches us anything, it's that a minority of us will stop using Apple products (or keep not using them), but the rest of the world will quickly forget and carry on as usual.
Worse, if your decision not to use an iPhone makes other people's lives slightly annoying for 2 seconds a week, many will bash you for it.
As a FOSS user with no FB or WhatsApp account, who asks people not to take photos of him with smartphones, I can tell you I regularly pay the price for what I think is doing the right thing. It's just easier to let the abuse go on.
And we have other fights in life. I didn't pick up smoking (pot or cigarettes), stopped drinking alcohol, became vegetarian. They all come with social challenges. And then you have your family, job, health and goals that need your care, attention and time.
At some point you just want to say "screw them", create your own bubble of freedom and let them enjoy their dystopia. Except we kind of live in it, don't we?
I'm usually an upbeat person, but these types of events arrive in a never-ending stream, and today I feel tired.
We are not here by accident. These are symptoms of a public that doesn't care. We have been explaining the consequences of each tech choice for the last 20 years. I've had the talk with my friends and family many times. Very patiently. Very nicely.
Nobody cares. It's just an annoyance for them.
And it requires an enormous amount more energy to live your life this way than not caring. Even more if you have to explain.
Most people believe they are good and conforming. They don't believe anything would happen to them because of what Apple does, and they don't want to be seen as defending abusers' right to privacy.
Acting concerned in discussions would be a start. Choosing what services to use, what products to buy, what media to consume, and not just because it's the easiest option. Limiting their involvement in this, even professionally, if they can.
It doesn't need to be perfect. People have constraints, and life is hard.
But politicians and business people don't care about being good or bad. They just follow the wind.
Now being gay is trendy, well, they are pro gay.
More people care about organic, they sell organic.
Just caring a little, demonstrating a little that we disapprove and want something else, orients their decisions. Because their decisions are not driven by morals; it's just business.
Also, even if you don't do any of this, and I can believe I have to even state this, not teasing or shaming those who do would be an improvement.
Verifiably? Are people still swapping ROMs on XDA Developers like it's 1990? Are folks still having to dump binary blobs from devices to get things to work with non-stock ROMs? Are some people still relying on jailbreaking tools they have to “just trust”? Does the modem on all these SoCs still run its own network-facing machine with full RAM access?
It's non-trivial to unlock, root, and install a custom ROM - unless you are technical, very few people are going to be able to do this.
Also, it's been several years since I used a custom ROM, but I thought rooted devices were not permitted to use the official Google Play Store? (happy to be corrected if I've got this point wrong!)
Rooting and installing a custom ROM are two independent things, you don't need one to have the other.
Rooted devices can use the official Play Store yes, though most people use Aurora Store to access the Play Store because they tend to remove Google Play Services from their phones as well.
The fact that Apple ordered upwards of 90M new iPhones tells me they took all the backlash from techies into consideration and still found it beneficial.
Something tells me that this could be used as marketing, e.g. "protect your children from abuse", "be aware of when pictures of them are taken", and "protect your children from pornography".
A system on my phone that has a list of bad files and a threshold on how many of those files are allowed. If the threshold is reached, Apple can read them. Both the bad-files list and the threshold are controlled by Apple and are explicitly designed to be un-auditable...
Honestly, I would have been fine with Apple scanning all my photos after they are uploaded to iCloud, but this is deeply disturbing.
This is how I feel as well. I don’t use iCloud photos but if I did, sure scan them for CP, I don’t care. Maybe you will catch some bad guys that willingly gave you their photos. But scanning everyone’s phones is beyond creepy and feels like exactly what the fourth amendment is about. It’s British soldiers suddenly having the ability to search EVERY home in colonial America as often as they want.
> Amendment 4. The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
I have moved away from Apple ecosystem, already purchased another phone yesterday. The Constitution is holy to me, it’s all we have protecting us from technological dystopia.
The Constitution applies between the government and you. Apple is not (knowingly) a government agency; they're a private business. Therefore the amendment doesn't apply to their actions. Vote with your wallet instead.
Yes, that’s true, but I assume this program involves working closely with the government. I couldn’t care less about protecting child abusers, but history shows us that it is a very short, maybe non-existent, slope from “protecting the kids” to whatever overstep a government organization wants to take.
I wouldn’t care nearly as much about Apple scanning my phone; it’s about who is pulling the strings they are working under, and in America those actors are (or should be!) bound by the Constitution. I can only imagine the obliteration of rights that will happen in other places. Imagine a Hong Kong protester with this program on their phone.
We’ve got to do everything we can to keep George Orwell’s quote from coming true:
“If you want a picture of the future, imagine a boot stamping on a human face—for ever.”
I’m a cheap bastard, so I got a used Pixel 2 for ~$55. I’m sure people will tell me Android is no better, and I’d be open to hearing that, but they don’t scan your phone as far as I know. I also have the option to install things like CalyxOS or GrapheneOS, I believe.
> Honestly I would have been fine with Apple scanning my all photos after they are uploaded to iCloud
Apple has already been doing exactly this for years. Now they are going to check the photos right before uploading to iCloud, which is actually more privacy-friendly than what they do now. It also lets Apple turn on e2e for iCloud Photos if they want.
I understand the 'what if' and slippery slope arguments, but wow, there is so much misunderstanding in this thread. Apple makes the OS and doesn't need a fancy system to scan all the files on the device if that's what they want to do.
I highly suggest reading this: https://www.apple.com/child-safety/ and the associated PDFs. Apple PR hosed this up by putting 3 distinct features on one page that people seem to be conflating.
> Apple has already been doing exactly this for years.
Is that an assumption made based on that "everyone is doing it" or is there some evidence?
> Apple makes the OS and doesn't need a fancy system to scan all the files on the device if that's what they want to do.
If the goal is to scan all the files on everyone's device, this system is exactly what they need. It's not as if they could upload hashes of every file on every user's phone continuously.
Now that a fully built system for breaking end-to-end encryption is shipped directly in the OS, we're one configuration change away from massive scope creep.
First terrorist content, then "misinformation", then political speech. Apple will be unable to resist government demands to use this preexisting backdoor with a different set of perceptual hashes.
Disabling end to end encryption is always “one configuration change” away, presumably by substituting keys. How closely do you keep track of which keys are used to encrypt Health data - which is end to end encrypted - versus photos - which are not?
If Apple substitutes keys, then that is a detectable event (by jailbroken devices) and that would make news.
This is Apple explicitly announcing they are actively backdooring all iOS and Mac devices, and using your CPU cycles to determine whether you should be reported to the government.
Not really. Key management is done by the SEP, which can’t be introspected. And again, database updates take an iOS update so the back door threat is the exact same.
Then ask yourself why they shipped this scanning on the client-side. This is the first step towards normalizing client-side scanning of encrypted content across the entire device.
Why would Apple want to do this? It doesn’t benefit them at all. Their competitive advantage is having people trust their devices, if not their values.
People are making an extreme claim that Apple went out of their way to implement a fancy system to ruin their own value proposition, and the evidence they have to offer is mere speculation.
Ambiguous retorts like this make you sound intelligent but offer little to the discussion. If the adversary is the USA or China, I have bad news for you: every major democracy has planned encryption regulation which is unimaginably worse than what was announced here.
Actually, the discussion benefits from everyone stating their arguments clearly, even if you don’t benefit. Most of the discussion of this change has been FUD, making it difficult to tease apart actual privacy regression from imagined ones.
Hopefully so they can remove their current ability to decrypt user photos for whatever reason they want. The current state is they can decrypt any user photos on iCloud. Doing client side scanning and this CSAM detection implementation could allow them to remove their ability to decrypt EXCEPT in very specific situations.
It's not true end-to-end encryption since in some cases the content can be decrypted without the user key but it's significantly closer than what they have today.
That being said I don't know if that is their plan or not, but it is a plausible reason to make this change.
If they can decrypt in a “specialized” situation, then they can decrypt in any situation. All that has to be done is to broaden the classifier step by step. Or someone else gets access to the back door. That’s why there can be zero allowed back doors.
The database ships with iOS. Apple can do anything they want in iOS updates. In fact, this was exactly the “back door” the FBI requested Apple use half a decade ago. Per this standard, all Apple end to end products are already backdoored, and nothing new was announced.
Yes, but previously Apple's stance was "no, we won't do that", and so they earned many people's trust. Now they are planning to do exactly that, and so they have broken the trust they earned.
Where specifically did Apple say they will never try to detect for the presence of CSAM in your iCloud Photo Library? In fact, people mistakenly assume that they do already.
I disagree. I think it's the first step towards enabling E2EE on iCloud photos. This system will replace the server-side CSAM scanning they have done for years.
Many foreign countries have also clearly stated that they do not want this (E2EE) to happen and would legislate against it (the UK comes to mind first).
I do believe that you are correct with the idea that this technology was initially developed as a compromise to E2EE. But while E2EE on iCloud was indefinitely shelved, somehow this was not.
And someone at Apple thought this could be repurposed as a privacy win anyway?
The other way I can think of it is if the ultimate goal is to add those checks to iMessage. One could argue the tech would make a lot more sense there (it's mostly E2EE with caveats), and it would certainly catch many more positive hashes.
I think someone at Apple massively misjudged the global implications of this and opened the company to a (literal) world of upcoming legislative hurt.
I read that article and see this new method as a workaround for the FBI complaints, once again allowing E2EE to move forward.
Technology doesn't live in a vacuum. Given the calls from the government for backdoors to encryption, I think it's safe to assume this is Apple getting out in front of what could likely be heavy handed legislation to add actual backdoors like master keys.
But, we'll have to wait and see if Apple starts adding more services to E2EE again. It also may all be moot if legislation gets passed that forces companies to be able to break the encryption for warrants.
> Technology doesn't live in a vacuum. Given the calls from the government for backdoors to encryption, I think it's safe to assume this is Apple getting out in front of what could likely be heavy handed legislation to add actual backdoors like master keys.
I broadly agree, but I cannot foresee a scenario where limiting it to this particular issue (CSAM) would be seen by legislators as a sufficient compromise to allow E2EE to be expanded.
And other countries will have very different interpretations, much less palatable to Apple's values, on what should be checked for and they will have no qualm legislating to require it.
Quoting the NY Times (via Daring Fireball):
> Mr. Neuenschwander dismissed those concerns, saying that safeguards are in place to prevent abuse of the system and that Apple would reject any such demands from a government.
> “We will inform them that we did not build the thing they’re thinking of,” he said.
They can tell themselves that, but it doesn't matter: they did build precisely that.
It's disturbing because of how effective it could be.
It's at OS level, anything you have on your phone can be scanned. Even if an app tries to circumvent it by keeping files encrypted at rest it can scan them in-memory. And since it's all done client-side you'd never know it was happening until it found a match and sent it to Apple.
Everyone, just relax. It's fine. The Apple corporation is just entering your home on a regular basis and rifling through your photo album. This is perfectly normal, because you might have pictures of 15 year old girls. Which you took when you were 16. But you're 18 now, so if you've upset the wrong people you can be locked up for life. Be glad you're not just quietly taken out the back and shot, like in some other countries.
We're more civilised here, you see? We just put you on a list. You can go about the rest of your life. It'll be a bit... shit, but at least you'll get to regret angering the rich and powerful.
That entire memo sounds completely tone-deaf and delusional.
It's shocking to me that they don't even acknowledge the criticisms, and instead fully ignore them. It would be one thing if they said the potential implications are worth it, but they don't even acknowledge them.
...which is ultimately what they are going to try to do: label those who oppose this feature as being supporters of child abuse. It's a classic "if you're not with us, you're against us" argument.
If by "They" you mean Apple, then they did not. As noted by jsnell, and properly attributed in the linked article, it was written by the National Center for Missing and Exploited Children
It's true that if you retweet someone else's message, then you are not the original author of the message. But Apple leadership vetted the letter and passed it around staff, hoping that their employees would find it as "incredibly motivating" as they did.
The Apple VP did say the NCMEC note was incredibly motivating though. And he could have distanced himself from that part. Or told the NCMEC director he'd love to share her note with the staff if she removed the disrespect.
I'm seeing lots of news and pushback about the iCloud scanning stuff, but not a lot of news about the iMessage scanning and sharing stuff.
I'm not a parent, thankfully; but if I were, I'm not sure I'd be entirely comfortable with the idea of other parents seeing my child naked. Even if I don't sign my child's account up for this scanning thing, the recipient could well be, and they won't be warned about that before sending it.
This is an abusive parent's wet dream waiting to happen, and it disgusts me. Minors should be free to explore their sexuality with their peers. It's how we grow as people.
EDIT: Upon further thought; how is this NOT illegal? How else can this be framed than "Apple knowingly and willingly assists in the dissemination of child pornography into the hands of adults"?
The iMessage scanning is just to provide on-device warnings and blurring of photos that are suspected of being inappropriate. Nothing is transmitted off the phone; parents are only given a notification (and only if their child is under 13, and only then if the child sends or replies to nude photos after being warned by the system).
> Apple’s second main new feature is two kinds of notifications based on scanning photos sent or received by iMessage.
> To implement these notifications, Apple will be rolling out an on-device machine learning classifier designed to detect “sexually explicit images”.
> According to Apple, these features will be limited (at launch) to U.S. users under 18 who have been enrolled in a Family Account.
> In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content.
> If the under-13 child still chooses to send the content, they have to accept that the “parent” will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later.
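Pieced together from the quoted description, the flow for the under-13 case might look roughly like the sketch below. Everything here (the Account fields, the boolean stand-in for the on-device classifier, the return strings) is hypothetical and only restates the quoted policy; it is not Apple's implementation.

    from dataclasses import dataclass

    # Hypothetical sketch of the iMessage notification policy as quoted above.
    # The Account fields and the boolean "classifier" result are stand-ins.

    @dataclass
    class Account:
        age: int
        enrolled_in_family_account: bool

    def handle_outgoing_image(account: Account, flagged_as_explicit: bool,
                              child_confirms_send: bool) -> str:
        if not (account.enrolled_in_family_account and account.age < 18):
            return "send normally (feature not active for this account)"
        if not flagged_as_explicit:
            return "send normally (on-device classifier did not flag the image)"
        if account.age >= 13:
            return "warn the user; no parental notification is described"
        # Under-13 case from the quotes: the child is told the parent will be
        # notified if they proceed anyway.
        if child_confirms_send:
            return "send, and notify the parent"
        return "child backs out; nothing is sent and no one is notified"

    print(handle_outgoing_image(Account(age=12, enrolled_in_family_account=True),
                                flagged_as_explicit=True,
                                child_confirms_send=True))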
Yeah, not sure where they're getting that info from. Neither Apple's child safety landing page nor the PDF info sheet they released mentions anything about photos being saved. The EFF says the image will be saved on the child's device for the parent to view later, but I can't find a source for that.
From the EFF page:
>…if the under-13 user accepts the image, the parent is notified and the image is saved to the phone. … once sent or received, the “sexually explicit image” cannot be deleted from the under-13 user’s device.
Let’s not confuse the 2 parts that Apple is implementing;
- one is parental control, it’s upon request, uses ML, it’s “local” (that is, it’s sent to the parent)
- the other one is plain hash matching, which does not “save the children” but rather is “catch the viewer” — exclusively. This has no impact on the abuse of the CSAM subjects because it only matches publicly-known content.
I don’t know why NCMEC is excited since stopping the viewer does not stop the abuse; This does not affect them.
Conspiratorially speaking, it almost feels like things aren't the way they're described and Apple will in fact use ML to detect new content and report that.
The threshold thing doesn’t even make sense otherwise. One known CSAM picture should be enough to trigger the report, but it sounds like they want better accuracy for ML detection.
So you're technically correct, operationally incorrect. When CSAM is detected there is also the possibility of new, unhashed CSAM being found amongst the suspect's other files.
'the other one is plain hash matching, which does not “save the children” but rather is “catch the viewer” — exclusively. This has no impact on the abuse of the CSAM subjects because it only matches publicly-known content.'
Are you claiming victims of child sexual abuse wouldn't care if images of the abuse are circulating freely and being viewed with impunity by pedophiles?
So stop the circulating. This is like trying to stop people from using drugs by searching EVERYONE’S homes and reporting suspicious looking substances.
"So stop the circulating." Good idea. Any ideas how we could do that? Perhaps one thing that would help is to implement a system that hashes circulating images to ensure they aren't known images of child sexual abuse. Just a thought.
A lot of unwanted kids out there that end up in broken homes or on the street. There are also a lot of kids born in abusive families.
Good ideas to stop the circulating would be to increase birth control education and access, increase foster care funding and access, implement common sense policies for adoption, and increase funding for Child Protective Services.
It prevents CSAM from existing in the first place. Much like it would be ridiculous to search everyone's houses to stop drug use, it is far more effective to prevent the causes of drug dependency.
The NCMEC director claimed "many thousands of sexually exploited victimized children will be rescued" because of this system. It can't detect new abuse images if I understand correctly. And don't most arrests happen after a victim talks to another adult? Or someone notices signs? Are many thousands of active child molesters discovered through scanning online photos?
The NCMEC was created by a child abuser[1] and his victim whom he groomed, and they asked Apple to send them pictures of suspected abuse.
Pictures they can’t use as evidence, or use to “discover” evidence due to fourth-amendment protections.
[1]: In his book Tears of Rage, Walsh openly admits being in a relationship with 16-year-old Revé when Walsh was in his 20s and aware of the age of consent being 17 in New York. Critics of the Adam Walsh Act have pointed out that, had he been convicted, Walsh himself would have been subject to sex offender registration under the law which he aggressively promoted.
“But you know, she had this way about her. She had a certain presence. And after a while I just got over how young she was. She was way more sophisticated than anybody in her high school and she always dated older guys.”
NCMEC is clearly lying (for one thing, CSAM users will stop using iPhone/iCloud for CSAM), but the theory is that someone who collects images is likely to (a) have some new, not-yet-indexed images in the collection, which can be investigated, and (b) might be abusing children.
IMO possession of CSAM should be legal but heavily regulated, so that users who don't want to hurt children (probably most of them) cooperate with authorities to investigate producers (child abusers), get connected to mental health care, and are not driven into a criminal underworld full of blackmail.
I'm skeptical most people who download child pornography are active child molesters. Or most active child molesters distribute evidence of their crimes. And wouldn't NCMEC collect new images from the most popular distribution points?
To me it is clear they are bowing to government pressure to add this feature. I cannot imagine any other reason they would go from drawing so much attention to the privacy of their platform in their communication and general PR around the launch of the privacy labels in the App Store, to then making this change, which is a clear violation of their users' privacy.
Cynical me thinks this is some "deal" they made with the DoJ so that the FBI stops suing them so publicly for backdoors. As long as they work with them in ways that give the FBI the info it wants, while claiming to protect user privacy compared to just throwing open the doors, Apple thinks they are saving face?
A lot of people not in government are also fiercely opposed to raping children and are willing to compromise, and understand that Apple can already do whatever they want to your phone, so this specific thing isn't the straw that breaks the camel's back.
All the "what if Apple turns evil one day" applies equally well to iOS without this feature.
I believe a lot of the outrage brewing now against Apple’s CSAM scanning is misdirected, and might actually be hurting the larger cause.
Most ordinary people, especially those who have kids, won’t have a problem with a well-implemented, E2E encryption compatible scheme that is legally limited to only apply to this type of material. If explained the hash collision issue, they’d reasonably point out that Apple does manual review before notifying law enforcement, so this rare eventuality is something they can live with. Meanwhile, in the other camp, many of the vocally outraged fail to understand that no E2E encryption breakage has to be taking place for this feature to work.
What's actually a problem is that earlier, back in 2019, Apple changed their ToS to allow pre-screening of generally any "potentially illegal" content[0]. This should alarm a much wider audience, and for legitimate reasons (the phrasing is clearly unnecessarily broad, and opposing it does not undermine kids' safety in any way), yet no one is talking about it to my knowledge.
Did you learn the same thing—that E2E encryption doesn’t need to be broken for this to work?
The only event in which Apple can gain access to your content is if you happen to have multiple CSAM matches; then they can access only the matching content, and only if a human manually confirms it to be CSAM is any action taken.
The issue is if this type of matching is done for other purposes than CSAM; and unfortunately they gave themselves legal permission to do it back in 2019. That’s what we should object to, not CSAM reporting.
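For what it's worth, the "nothing is readable until several matches accumulate" property described above is the kind of guarantee normally built from threshold secret sharing. Below is a minimal Shamir-style sketch, purely illustrative: the announced design layers this with private set intersection and differs in many details, and none of the numbers here are Apple's.

    import random

    # Minimal Shamir threshold secret sharing sketch. A decryption secret is
    # split so that any `threshold` shares reconstruct it, while fewer shares
    # reveal essentially nothing. The idea: one share is released per matching
    # photo, so below the threshold the server learns nothing.

    PRIME = 2**127 - 1  # a prime field large enough for a toy secret

    def make_shares(secret: int, threshold: int, count: int):
        # Random polynomial of degree threshold-1 with the secret as constant term.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, count + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    secret = 123456789
    shares = make_shares(secret, threshold=3, count=10)  # e.g. one share per match
    print(reconstruct(shares[:3]) == secret)  # True: threshold met
    print(reconstruct(shares[:2]) == secret)  # almost surely False: below threshold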
I didn't say anything about breaking E2E encryption. Anything a human in the middle can review in any event isn't E2E encrypted. Call it something else.
The issue is the hash algorithm is secret. The decryption threshold is secret. The database of forbidden content can't be audited. People claim it includes entirely legal images. And it's a small step from there to scanning local-only files.
> as long as it's strictly for CSAM check purposes
And that's precisely the leaky part of this setup. Nothing about this system's design prevents that from changing on a mere whim or government request.
Next year they could be adding new back-end checks against political dissident activity, Uyghur Muslims, ... and we'd be none the wiser.
I really don't understand why Apple is doing this. What point are they trying to score, and with whom? Do they think implementing spyware in the iPhone would boost sales? Or does it bring revenue from charging government agencies to use this feature? I just don't get it.
Could be throwing a bone to politicians who say E2E encryption protects criminals. Not saying I agree, but I could see this being a compromise on Apple's part.
They can’t be this stupid, right? They should know this has huge security implications. I can’t see how it can be justified for some political brownie point.
This is wholly speculation, but their view could be that they have two bad options to choose from, and they chose the option that didn't involve congressional hearings and public pressure to make changes that could be even more anti-privacy.
Again, pure speculation and really the most generous means of explaining this controversial decision.
Have you paid attention lately? Politicians only want to react to the lowest common denominator. Your average Joe/Joette doesn't know jack about encryption or hashing. So to them this is just tech mumbo jumbo and "thinking of the children", and they simply won't care that it's one step removed from government spying. Once it happens, China has a full backdoor to upload Pooh hashes to ban political dissidents using memes. Next come the FBI and CIA with their own hashes to find "the terrorists", and on and on. This is just a toe in the door for governments to abuse the iPhone and iPhone users, and Apple doesn't seem to care.
Much more than brownie points. Apple et al all check CSAM on the server today. If Apple is wanting to announce further e2ee at the next iPhone event, moving a privacy protecting CSAM check to the client is how to stave off encryption destroying legislation.
There is now, as far as I can tell, a system that can flag any photo that matches a perceptual hash for manual review by Apple or by other parties as required by law. Have any screenshots of code or confidential operational documents? Or photos placing you at private events attended by political dissidents (as defined by authoritarian regimes)? You're one no-code config change away from Apple being able to exfiltrate this content from your device and deliver it to a third party. Engineers at Apple might not even know it is happening, so they can't act as canaries. It's a dangerous backdoor waiting to happen.
This is actually vaguely what I came up with, which is: "no Apple user is a practicing pedo".
Which is a very scary statement for any company to make (the means to get to that point?!). But I am sure some countries see this as a great technology to push for slogans of their own: "no citizen of Nonhomostan is a practicing homosexual!", "no citizen of Oppositionkillerian has any mind-corrupting western banners on their phone!"
This technology will allow large powers to make sweeping statements about data kept on private devices.
Android could take the opposite approach. I.e. someone can write an app that generates CP, so those who want it can have it without any actual children involved.
In the current US legal zeitgeist CP is illegal because it's morally wrong. Yeah, also to protect the children, but an image can be deemed by a judge to be CP even if no real children are involved. Under the current doctrine it's something like "I'll know it when I see it", so such app would most likely be generating illegal images.
My own pet theory is that the new far left have gone full circle and have become as puritanical as those they despised. (Just look at social media talking about what outfits women wear in the olympics, thinking that they show too much skin. It's arguments from the early 1900s come back around.)
It's as if Apple has created the first gun in the world, a gun that can only shoot pedos, and Apple starts clicking the trigger in the face of everybody it can reach to see if the gun fires. Oh, the gun also does frequent updates to its pedo detector software.
As a thought experiment, why not allow us to disable the scanning feature in our iPhone settings?
The entire premise of this system is that the targeted users are ignorant of its existence. If we can be so sure they don't know it exists, then surely having an option to disable it would cause no difference in outcome.
If the people in favor of this system don't like the idea of making it optional, perhaps that shows that the premise behind it is flawed.
I don't understand the claim that this system improves their ability to detect anything. If it's really true that the system only works on data uploaded to icloud then apple could have easily just done all the scanning in the cloud, because icloud is not end-to-end encrypted. In that case, all this feature does is move the computational cost of scanning from Apple's datacenters to Apple's users, saving Apple money and costing users battery life. It doesn't change anything about how effective the scanning can be. Unless I misunderstand how it works.
Respectfully, you've missed what this is. This has nothing to do with data sent up to iCloud; at that point, not only can those photos be checked via hash, they can be further reviewed if there is a positive/high-degree match.
The issue is that this mechanism applies to photos (/data) on the local device that wouldn't otherwise make its way up to iCloud.
Do you have a source that this is being used on all photos on the device?
The EFF posts says it applies to 2 cases: photos being uploaded to iCloud, and photos through iMessage if the account holder is a child and the feature hasn't been disabled.
You're right that only scanning photos uploaded to iCloud is currently pointless. But maybe Apple plans to do semi-end-to-end encryption on iCloud photos in the future, and then this feature would be useful.
I believe it's inaccurate to lump the iMessage and iCloud scanning together. The iMessage scanning tries to classify images as sexually explicit. The iCloud scanning tries to identify images that exist in the CSAM database, accounting for recompression and cropping.
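To make that distinction concrete, here is a classic difference hash, about the simplest kind of perceptual hash. It is only a stand-in to illustrate "visually similar in, similar bits out"; it is not NeuralHash, whose details Apple has not published, and the filenames in the comment are placeholders.

    from PIL import Image  # pip install Pillow

    def dhash(path: str, hash_size: int = 8) -> int:
        # Difference hash: grayscale, shrink, then compare adjacent pixels.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # A JPEG re-save or mild crop usually flips only a few bits, so a small
    # Hamming distance counts as a match -- whereas with a cryptographic hash,
    # any change to the file yields a completely unrelated digest.
    # print(hamming(dhash("original.jpg"), dhash("recompressed.jpg")))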
No, that's what people are afraid will happen, not what's happening now. Apple has been very explicit in saying this only affects uploaded photos. For now.
Maybe because they plan on enabling E2EE for iCloud. Of course that's not necessarily useful given that the set of images yours are being checked against is completely opaque.
Photos in iCloud photos are "encrypted". They are not end-to-end encrypted. Apple retains the keys. Apple can and does decrypt the photos for various purposes including for law enforcement. The encryption poses no obstacle to scanning the photos in the cloud.
The technical summary describes the detection system, not the rest of the product. The rest of iCloud photos is not end-to-end encrypted. If Apple was planning to introduce end-to-end encryption for iCloud photos then they should have announced it at the same time.
If they are using this to introduce something closer to end-to-end encryption then it seems like a clear win for users.
Today photos are not end-to-end encrypted, there is nothing preventing Apple from decrypting your photos if they want (or if they are asked by law enforcement). If a part of this implementation is to make it so only the user keys OR the CSAM keys in the case of a match are able to decrypt the photos then that is a clear step in the right direction over the current system. It's not real end-to-end encryption, but it still prevents Apple from just decrypting your photos without probable cause of a very specific crime.
If that is the case they should have made it a lot clearer in the initial announcement though.
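If it helps, the "only matching photos become decryptable" idea from the comment above can be pictured with per-photo keys, roughly as below. This is purely a conceptual sketch using the cryptography package; the hash values are hypothetical, and the announced design is far more involved (private set intersection plus the match threshold), but the end result is similar: non-matching photos stay opaque to the server.

    import base64, hashlib
    from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

    def key_from_hash(perceptual_hash: str) -> bytes:
        # Derive a Fernet key deterministically from a perceptual hash value.
        digest = hashlib.sha256(perceptual_hash.encode()).digest()
        return base64.urlsafe_b64encode(digest)

    BLOCKLIST = {"hash-of-a-known-image"}  # hypothetical blocklist entries

    def make_voucher(photo: bytes, perceptual_hash: str):
        photo_key = Fernet.generate_key()
        ciphertext = Fernet(photo_key).encrypt(photo)
        # The per-photo key is wrapped under a key derived from the photo's own
        # perceptual hash; the hash itself is never sent in the clear.
        wrapped_key = Fernet(key_from_hash(perceptual_hash)).encrypt(photo_key)
        return ciphertext, wrapped_key

    def server_try_decrypt(ciphertext: bytes, wrapped_key: bytes):
        # The server can only unwrap the photo key if the photo's hash is one
        # it already holds in its blocklist.
        for known_hash in BLOCKLIST:
            try:
                photo_key = Fernet(key_from_hash(known_hash)).decrypt(wrapped_key)
                return Fernet(photo_key).decrypt(ciphertext)
            except InvalidToken:
                continue
        return None  # no match: this photo cannot be read server-side

    ct, wk = make_voucher(b"holiday photo bytes", "some-unrelated-hash")
    print(server_try_decrypt(ct, wk))  # None: hash not on the blocklist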
Yeah. After thinking about it this is probably the missing motivation for building this feature client side. It's a huge blunder to announce/enable this before actually doing the end-to-end encryption for iCloud. It renders the whole thing pointless.
They just have keys for the safety voucher for the suspect images. It's not entirely clear that the payload in the safety voucher is the only copy of the image or if this CSAM system is in addition to what happens today.
What I came to realize on a broader scale (1) is that none of these automated systems provide any disclosure or transparency, neither for true nor for false positives.
In Computer Security we are required to provide responsible disclosure when the privacy of users was (potentially) violated. When law enforcement has a search warrant for my place, I can at least notice that an intrusion happened and in many countries I can request the warrant.
Here, a false positive never gets any of the "Five Ws" answered:
Who used this system (which government agency / company)? Why was I (mistakenly) flagged? Which particular piece of my data triggered the system? When did the system phone home? What is the intended purpose of this system?
And most importantly: What parts of my privacy were violated? Who had access to those, and how was the infringement of my rights remedied?
We have hardware fuses, TrustZones and so on. It must be possible to have an auditable and tamper-proof system that triggers once such a system calls home and then discloses the "breach" to the user after X days (see the sketch below).
If the public had access to that information and to transparency reports about the success rate vs. false positive rate, then society could evaluate whether the system is adequate.
I'd wager many would question the "save the children" argument if they regularly heard about false positives and egregious access to private photos by law enforcement.
Maybe there are barely any false positives; I'd personally support CSAM scanning in that case. As it stands, we won't know.
(1) Other, less invasive systems (YouTube strikes, many social media bans) also share the concerns laid out above.
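To make that tamper-proof disclosure proposal concrete: a delayed-disclosure log could be as simple as a hash chain over every "phone home" event, surfaced to the user after a fixed delay. This is a hypothetical sketch of that idea (field names and the 30-day delay are my own assumptions); nothing like it exists in the announced system.

    import hashlib, json, time

    DISCLOSURE_DELAY = 30 * 24 * 3600  # the "X days" above -- here, 30 days

    class DisclosureLog:
        # Append-only, hash-chained log of every time the scanning system
        # phones home. The chain lets an auditor detect altered or deleted
        # entries; entries become visible to the user once the delay elapses.
        def __init__(self):
            self.entries = []
            self.prev_hash = "0" * 64

        def record(self, event: dict):
            entry = {"time": time.time(), "event": event, "prev": self.prev_hash}
            blob = json.dumps(entry, sort_keys=True).encode()
            self.prev_hash = hashlib.sha256(blob).hexdigest()
            self.entries.append(entry)

        def disclosable(self, now=None):
            now = time.time() if now is None else now
            return [e for e in self.entries
                    if now - e["time"] >= DISCLOSURE_DELAY]

    log = DisclosureLog()
    log.record({"what": "safety voucher uploaded", "to": "example destination"})
    print(log.disclosable())  # empty today; the entry surfaces after 30 days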
Just installing GrapheneOS on a Pixel; writing this from my Librem 14 (got it after the M1, the phoning home of Apple apps, the new lockdown of kernel extensions).
Now that I think about it, is there any good, viable alternative for Windows 10 on a new ThinkPad? One that ideally is able to run MS Office (or at least Excel...)? Any recommendations regarding decent, and ideally user friendly, Linux versions?
I mean, the fact that our world exists on the back of proprietary formats (thus locking us into software) should be a bigger deal.
"It should run microsoft office" is a difficult problem to solve and only serves to deeply entrench the big players.
I'm aware that I've had this same criteria myself for choosing a new device (because: pragmatism) but I've come to just "accept" it.
If I look a bit further away and remove my implicit acceptance of the status quo: I'm actually quite horrified that we're so tied to one company and what _it_ decides to support.
If the next versions of Microsoft Office didn't work on Apple devices at all, what would the world do? Run Windows exclusively out of pragmatism, is my initial assumption.
Yeah, that's why I "choose" Windows and Office 365 for my own company. Now that this is over, and I have a new set of employer-owned hardware for work, I can get rid of that for my private life. Hence Linux and, I guess, LibreOffice. Once I have time I'll also quit Google's Android on my Pixel. I need time for that, though; the good thing is most of Steam works on Linux just fine. I'll do some digging first to see if Linux works fine on my X1 Extreme. I am not motivated enough to solve driver issues...
"We know that the days to come will be filled with the screeching voices of the minority."
Nice one Apple. Hopefully they're not referring to the minority that buy their products as a result of them being marketed with privacy and security as setting them aside from their competitors.
Here is a list [1] of at least some of the 'minority'
I've often wondered why, with the prevalence of Echo, Google Assistant, and engines that recognize what copyrighted song you're using in a YouTube video, they haven't extended the code on our phones to listen for the sound signature of anyone watching a well-known child abuse video. Surely if they use the microphone they can get a lot more positives. Instead of just catching people who watched it on the phone directly, they'll pick up all those who watched it on an old TV or a highly secure Linux rig like Tails (but who inadvertently left their phone near the speaker).
I'm sure the government has tens of thousands of hours of audio and will gladly supply the sound signatures. Think I should submit my idea to Apple??!
Their CEO has a track record of doing just that. Funnily enough the NCMEC started bringing in serious cash after some such statements and the CEO’s compensation is firmly in the six figures.
I was planning to switch to iPhone this year. I no longer am. I do not want any device that would turn me in to police. For any reason. That is just not okay. To start with, because software has bugs. To end with, because principles matter.
Don't do it. I did it last year and bought a top-end mini and a new iPhone. They burned this bridge, though; I guess it's back to Android, using adb to de-Google it and open-source apps to replace the Google apps.
For anyone that has read “The Circle”, this should feel like life imitating art. The antagonist company in that book repeatedly “finds” child pornography on inconvenient people to exert ever-growing control over society.
Isn't this covered by the 4th Amendment? The fact that Apple is doing it and not US.GOV is irrelevant, because the explicit intent is to check for content-deemed-illegal on a personal device and Apple is clearly just a cutout.
Worse - unless the "perceptual hash" tech is better than anyone suspects, most of the images flagged will be personal images of consenting adults.
So Apple - and possibly US.GOV - ends up with a huge collection of intimate photos of many of its users. Plus everyone's porn stash.
I'm finding it very hard to understand why no one at Apple realised this would be a PR disaster.
Let's propose Amazon announced all Alexa devices would start scanning everything it hears for illegal activity. This new feature has the same security semantics as Apple's CSAM detection: on device hashing using a government controlled audio database of known illegal activity, violations are only reported once a threshold is met, etc.
Would you be comfortable with this? Would you be comfortable with a company searching our physical lives in the way Apple intends to search our digital lives? How are these 2 things any different?
I believe we'll just be the testing ground. Apple will no doubt capitulate and use this for China, so they can track down Pooh memes and similar anti-Chinese content as thought crimes against the party.
This reminds me of 'Reflections on Trusting Trust'. The headline is so relevant here
> To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.
The CEOs of the largest corporations in the world have additional human-in-the-loop evaluation steps in the chain before swatting, unlike people like you and me.
> For the CSAM detection this wouldn't do anything at all because you aren't uploading it to iCloud.
The whole point of this change is that they will scan content "before" it hits iCloud (the verbiage says "before", as if it's certain the content is intended to go to iCloud, but that's not a given at all; it sounds like Apple's marketing speak to "assume the sale", so to speak).
CSAM scanning of photos uploaded to iCloud has already been in place for quite some time.
To me, my biggest issue with scanning user data is accountability and transparency;
Who will audit false positive cases, including the reasons for unlocking data?
How will false positive cases be investigated, and by whom? (in case it was intentional data unlocking, e.g. corporate espionage)
How will "victims" of false positive cases be notified?
The part where "threshold reached, all content visible and unlocked" is what worries me. That means, any content can be visible and unlocked to anybody, and the only barrier is some threshold where Apple pinky-swears it would never look unless it is passed. We all know how well it works with NSA wiretapping, and even if Apple right now is sincerely committed to hold their promise, they could decide otherwise - or being forced to - at any point in the future. Once there's a door and there's a key, they key will be used to open the door, and it will be used for any future purpose the key owner wants to.
To be fair, this sounds like something they would consider a strong branding position, and is totally something they would point out constantly? "Folks who want to be sexual predators can buy an Android phone" (a slight modification of Steve Jobs saying "Folks who want porn can buy and [sic] Android phone").
How will this catch anyone? Won’t the people (or definitely the very worst of them) who this feature is intended to catch just stop using iPhone now? So in the end, is the effort really worth it?
I can see all the arguments and concerns people have and the strong opposition from the tech community. What I’m not seeing enough of is a focus on the actual problem Apple is purporting to address (whatever your views may be on their implementation).
If anything I’m seeing some very worrying minimisation and dismissals of the problem of CP, child trafficking and abuse and other privacy and ethics related issues such as counter-terrorism.
What’s the privacy and ethically aware solution to these problems?
“Think of the Children” and “terrorism” are two of the most used excuses for overreach, usually reserved for governments, but apparently for some companies too now. Why is it Apple’s business to stop crime, regardless of severity or type? How does “solving crime” help them sell devices? The more conspiratorial-minded individuals might actually suspect some government collusion in this, because it makes no financial sense otherwise to do this.
Nobody is dismissing CP (or terrorism). Simply put, it’s not Apple’s problem to solve.
> I know it’s been a long day and that many of you probably haven’t slept in 24 hours. We know that the days to come will be filled with the screeching voices of the minority.
> Our commitment to lift up kids who have lived through the most unimaginable abuse and victimizations will be stronger.
Ah, yes, everyone that is vocally concerned about this is either misinformed or irrationally “screeching.” We, of course, are righteous defenders of children. Tighten those blinders, Team Apple!
Here's how it will probably go wrong; because privacy abuses always sit on a slippery slope:
Detect CP
Detect child abuse
Detect signs of child abuse
Detect signs of abuse
Detect signs of violent crime
Detect violent crime
Detect crimes
Detect "crimes"
If people don't believe me, then name one example of a privacy-encroaching policy that didn't expand beyond its original scope of "think of the children" and "terrorism".
The worst part is that they are probably doing it and abusing it and are now seeking a way to normalize their conduct.
Apple is already helping both the FBI and CCP illegally spy on their citizen-subjects. Most iCloud data in the US is stored unencrypted to aid the US national spying, and iCloud data in China is stored on CCP-accessible servers for the same reason.
This is no different. Apple knows which way the wind blows.
The difference is that previously they couldn't be used officially and, at least in principle, three-letter agencies had to go through a legal process for unmasking.
Now there's a constant search warrant on your phone that can be used against you in a court of law.
Just an aside as we all discuss iCloud storage of photos, etc. ...
I don't back up anything to iCloud. Instead, I connect my iPhone to my laptop and run the excellent "iExplorer" utility from Macroplant[1].
This allows me to browse my iPhone like a filesystem and copy off all of the photos/videos to a directory where I can then rsync them to my local fileserver.
You can also dump iMessages and notes and all of those other things, but I just export photos ...
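If anyone wants to script the same workflow, the sync step is just an rsync of the export directory; a tiny sketch follows (the paths are hypothetical placeholders, to be adjusted to wherever iExplorer drops the files and where the fileserver lives).

    import subprocess

    # Hypothetical paths: wherever iExplorer exported the photos, and the
    # rsync destination on the local fileserver.
    EXPORT_DIR = "/Users/me/iphone-export/Photos/"
    DEST = "fileserver:/volume1/backups/iphone-photos/"

    # -a preserves timestamps and permissions; --ignore-existing skips photos
    # that were already copied on a previous run.
    subprocess.run(["rsync", "-a", "--ignore-existing", EXPORT_DIR, DEST],
                   check=True)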
This is basically like Apple putting a video camera in everyone's bedroom, and saying that the camera will only upload video if they detect a criminal's face.
If we pretend for a minute, which may not be hard, that Apple isn't trying to "think of the children", this feature seems like a good step to protect themselves from the legal implications of having CP on their servers. I don't know what all the ramifications would be, but I imagine that if a CP ring was discovered using iCloud as a storage medium, Apple could get in some big trouble.
It's odd that a "private" organization, the "National Center for Missing and Exploited Children" gets access to your photos first, before law enforcement.
Why would we entrust law enforcement to a posse of private citizens (albeit one with a congressional charter)?
What I don’t get is why Apple feels the need to dig through my or anyone’s photos for any reason whatsoever? Like, who is asking for this crap? Is there some legal obligation for Apple to do this? It feels like a “feature” that no one asked for or wants.
Weird that the heads of Apple seem to have redefined 'in true Apple fashion' from integrating well across the hardware and software layers to meaning every part of the organization working well together (engineers and lawyers).
This makes me think: how do we know for sure that Apple can't keylog WhatsApp or other E2E correspondence?
Is it obvious in some open source code like signal or are there other hacks to prove/disprove this as a user?
I suspect currently they do nothing, however in the long run they control everything on their devices so there's no reason to think they can't do the same thing to signal, whatsapp or telegram.
The phone could nag you mercilessly about any disturbing pictures on your phone, without notifying any outside authorities. That would accomplish 50-75% of their goals.
I thought their strategy is similar to Valve's anti-cheat: tell the offenders absolutely nothing up until you snap shut on them.
The consequences of cheating in pub Counter-Strike are obviously less serious, but Valve does let people cheat for a while and then bin people into "ban waves" (last I heard, anyway) to try to throw off the trail and prevent cheaters from figuring out exactly which actions got them in trouble. It's also similar to the concept of hellbanning trolls from a forum. They won't comply with kind messaging, they'll just instantly shed their accounts. So don't let them know they're banned.
There is a whole (even more fraught) conversation about how to get pedophiles in touch with psychiatrists before they commit any crimes, but telling them "pst, I'm onto you" peacefully probably isn't the way.
First of all, I feel that saying "people don't understand" is insulting.
Second, the fact that everyone is now a suspect by default (instead of targeting specific individuals), is a very concerning development.
Just imagine Tim Cook retiring and being replaced by someone like Donald Trump. All the tools are now in place to start misuse.
On the other hand, it seems they did build a great tool, preserving privacy while still scanning Photos locally.
I just feel it’s wrong to use this on all (innocent) people.
The tools are already in place for misuse. iPhones autoupdate by default, so Apple can push new malicious privacy-invading software to your device at any time without your intervention.
Each update, even if done manually, re-enables autoupdate, requiring that you go turn it off.
Apple is really into being able to run whatever code they want on your device without your intervention.
If I browse the internet and accidentally load a page containing images from this CSAM database, are those images automatically saved somewhere on my filesystem on iOS? So if this software were scanning not only iCloud uploads but all images on my phone, would I be reported automatically?
I think it is our role to educate our relatives on this subject: any text that refers to child pornography or similar "save our children" rhetoric should immediately trigger the association "written by a Nazi posing as a sheep to reduce our freedom". Shoot! Godwin's law strikes again!
> Today marks the official public unveiling of Expanded Protections for China, and I wanted to take a moment to thank each and every one of you for all of your hard work over the last few years. We would not have reached this milestone without your tireless dedication and resiliency.
> Keeping China safe is such an important mission. In true Apple fashion, pursuing this goal has required deep cross-functional commitment, spanning Engineering, GA, HI, Legal, Product Marketing, PR and the CCP. What we announced today is the product of this incredible collaboration, one that delivers tools to protect China, but also maintain Apple’s deep commitment to user privacy.
> We’ve seen many positive responses today. We know some people have misunderstandings, and more than a few are worried about the implications, but we will continue to explain and detail the features so people understand what we’ve built. And while a lot of hard work lays ahead to deliver the features in the next few months, I wanted to share this note that we received today from NCMEC. I found it incredibly motivating, and hope that you will as well.
> I am proud to work at Xinjiang with such an amazing team. Thank you!
What will happen if this catches someone like Hunter Biden? Hypothetically speaking, though he's kind of famous on 4chan. Will Apple have an exemption list for the elite? Because this could bring down elected governments if it's actually applied equally.
I honestly don't get the uproar here. This is opt-in, so how is it any more outrageous than existing MDM software that people opt-in to for work reasons which are way more invasive?
If it's opt-in it's entirely useless for its stated goal, because why would the bad guys choose to subject themselves to such scrutiny???!?
So the very fact that it is opt-in means it doesn't even try to "protect the children"; it just uses the children argument to promote the idea of total control, which will be then used for a host of other purposes.
It's not opt-in. Please do more reading on it. Because of a technical "limit", you can currently turn off iCloud Photos and avoid the worst of it, but then you also handicap your device and lose one of the reasons people buy into the Apple platform in the first place.