Apple wants to redefine what it means to violate your privacy. We mustn’t let it (ar.al)
162 points by aral on Aug 9, 2021 | 79 comments



The thing that scares me isn't so much the violation of privacy. It's the idea that some computer algorithm can accuse me of a crime automatically with no evidence and generate an investigation.

Judging by how police respond to these leads, you can end up in jail based on this "evidence." While you wait for a six-month investigation to be completed, you lose your job and get an arrest record. Even if your photo is just a picture of static on a TV that produced a false positive.

It reminds me of the drug dog that indicates the presence of drugs 100% of the time. Probable cause, made to order.


I want to add that they are scanning your property without your permission. The crazy part is that the government can't do this with a warrant or probable cause, but Apple decided that it can and will.

Amazing that Apple destroyed its "privacy" image in minutes.


I really wonder about their motives for this. It really has destroyed their image to the point that I'm thinking about abandoning the platform if nothing changes.

This will not just be about CP. In many countries this will instantly be used to detect any kind of dissent. In the US it will take longer, but eventually I can imagine "misinformation" as determined by some stupid algorithm flagging you for quiet social credit style lists. What constitutes misinformation will change with the political winds.

Hey... maybe we can let people select which reality they wish to be enforced! Maybe Republicans can get flagged if they spread memes about universal health care while Democrats can be flagged for discussing LGBT rights!

The only thing I can imagine is that Apple is facing enormous pressure from behind the scenes to become part of the panopticon. Private companies can't really resist governments.


> This will not just be about CP. In many countries this will instantly be used to detect any kind of dissent. In the US it will take longer, but eventually I can imagine "misinformation" as determined by some stupid algorithm flagging you for quiet social credit style lists. What constitutes misinformation will change with the political winds.

In the US, there are over 70,000 overdose deaths a year[1], and that number has increased dramatically each year.

If Apple is already scanning photos and messages, why not save 70,000+ lives while they're at it by detecting heroin and fentanyl dealers, too?

[1] https://www.cdc.gov/drugoverdose/deaths/index.html


What I wonder is-- are child pornographers really by and large storing these materials on iCloud or on their phones at all?

I think you're correct, this is the thin end of the wedge for something else entirely.


This is the thin end of a wedge even if it isn't intended to be. Rights need to be strictly defined for a reason.

I personally believe that we had an attempted fascist coup in the USA in January of this year. I know a lot of people disagree. If you disagree imagine that the defeated POTUS had been someone to the left of Bernie Sanders and a bunch of militia members wearing "Lenin was Right" shirts and carrying nooses and zip ties had stormed the capitol to stop the vote from being counted. What would you call that? (I'd call it the same thing.)

The next attempted coup will not be so incompetent. That totalitarian regime will inherit everything we are building today.

If you're on the other side of the aisle, imagine that the next attempted coup is indeed a radical leftist. That's by no means impossible. If the economic divide continues to worsen in this country it may not matter if they are "right" or "left" as long as they are passing out torches and pitchforks. I could see a large fraction of the MAGA crowd turning red in the more classic commie sense if they were promised jobs, affordable housing, and... well... someone to make America great again.


Not that anyone reads them, but every cloud storage EULA has a clause describing how they may remove content at will, especially if it is illegal. That's their justification for why this scanning is happening on photos that are being uploaded to iCloud, and not simply all photos on every device.

https://www.apple.com/legal/internet-services/icloud/

> Apple reserves the right at all times to determine whether Content is appropriate and in compliance with this Agreement, and may screen, move, refuse, modify and/or remove Content at any time, without prior notice and in its sole discretion, if such Content is found to be in violation of this Agreement or is otherwise objectionable.


I have no problem with Apple, Google, Microsoft, etc. removing and reporting illegal content from their servers. My issue is with Apple doing it on your device or machine.

When did Apple become law enforcement?


> I want to add that they are scanning your property without your permission.

This is a false statement. Scanning is opt-in.


Allow it or don't buy our product is a fairly disingenuous take on "opt-in".


> Allow it or don't buy our product is a fairly disingenuous take on "opt-in".

That’s not how it works.

The only thing it affects is iCloud Photo Library. There are numerous competitors you can choose from if you want a cloud photo library, and nothing prevents you from just keeping your photos on device or exporting them to a PC or Mac.


Say it louder. Recently I opposed this and was met with "I bet you don't have children."

I do, and I am black and so are they, and I'm quite familiar with the present state of facial recognition tech (and, of course, the justice system generally) when it comes to people who look like us.

This is all unacceptable, and the fact that some of it has already come to pass is no reason to not fix it.


What does any of this have to do with facial recognition tech or race?


I get that questions like these are perhaps meant to be "more precise" about the specific tech in question, but honestly, they do more to display the naivete of the questioner in terms of understanding how this sort of thing plays out in real life, when it's not just tech folks but policymakers and other stakeholders who are involved and have the power to make decisions.

Even if the thing Apple is talking about right now can be distinguished from "facial recognition" on a technical level, it would be much MORE of a mistake to NOT lump them in together if we're trying to bring this debate to the general public, which, of course, we should.

(don't get me started on race..)


> it would be much MORE of a mistake to NOT lump them in together if we're trying to bring this debate to the general public, which, of course, we should.

Frankly this suggests that you think it’s a good idea to mislead the general public. I think that is one of the ways we harm our public discourse. I could be missing a connection between the two that is obvious to you but not to me, in which case I apologize.

I am not naïve about facial recognition. I am not white, and I have known about ML technology since the 90s. Long before it became a known problem it was obvious to me that ML models would end up simply reflecting the biases of the corpus they were trained on, and this would lead to them embodying discrimination of one kind or another, some of which would not be obvious in advance. The perceived ‘neutrality’ of algorithms would be and in fact is invoked to minimize this problem, when of course the problem is not with the algorithms but with what people feed into them.

Please let me know if that doesn’t capture the problem with facial recognition adequately.

So, given that I’m not naïve about facial recognition, I ask again - what is the connection you see here between racially biased facial recognition and Apple’s CSAM countermeasures?


Not at all -- what I'm saying is that tech people end up misleading themselves in terms of likely outcomes by having these discussions and focusing on the hard discrete lines around this or that particular technology.

What very reliably happens is -- the people who make the decisions (who, unfortunately, are very rarely tech people) will lump them in anyway LATER, when it actually matters and is too late.

So your point is technically correct, and simultaneously absolutely does not capture the problem with facial recognition adequately, because it doesn't factor in "if you get people to sign off on Apple's specific thing today, you'll basically be able to sign off on just about anything that sort of looks like it to the layperson tomorrow."


Well, the ‘tech people’ line is an ad hominem. I don’t see anyone doing a great job of knowing the right move in terms of communicating about complex issues to the public. Aren’t you a tech person too?

Who do you see as ‘the people who make the decisions’ in this case? I.e. who do you imagine will lump these together? I could see the FBI saying ‘you did CSAM detection, so surely you can do ‘child predator detection via facial recognition’. Is that what you mean?

As for getting people to sign off on stuff - I think it’s not so obvious what we do and don’t want people to sign off on. Not implementing something like this now could easily mean they are forced to scan in the cloud, and that really would be a slippery slope.

It seems like you are saying even though this thing isn’t bad, we should persuade people into not signing off on it because we want to make sure they don’t later sign off on some facial recognition thing that actually is bad. Is that an accurate enough paraphrase?


No. I am saying "my, or anyone's evaluation of whether or not this thing today is bad is utterly meaningless, because history shows that law enforcement type powers literally never respect limits like these."

Again, your "tech knowledge" will not help you here. The lines you perceive between this "not-bad" thing and a future actually bad thing don't meaningfully exist.

Better to resort to simpler principles, go with the 4th amendment, slightly modified to include the tech companies. If the FBI wants to be in my stuff, they need a warrant, and that's it.


> Again, your "tech knowledge" will not help you here. The lines you perceive between this "not-bad" thing and a future actually bad thing don't meaningfully exist.

Are you saying you don’t understand the technology? That you don’t have knowledge about this subject?

> Better to resort to simpler principles, go with the 4th amendment, slightly modified to include the tech companies. If the FBI wants to be in my stuff, they need a warrant, and that's it.

If you don’t understand the technology how will you know whether it violates the 4th amendment or not?


No, I'm saying stop being a nerd. I do understand the technology just fine -- but that's entirely beside the point. A deep understanding of the technology is not necessary. A shallow understanding is sufficient.

Let's go back to a real case, Kyllo v. US. They used a thermal imaging camera to "look inside" a building; from the unusual heat signature, they correctly presumed a marijuana grow operation, and went in without a warrant.

Doesn't matter though. The court said they needed a warrant because people should be able to presume a level of privacy that the camera violated. Anything that one reasonably believes is private should be protected like that, regardless of whether you are using technology to "look" at it or not. The authorities observed a thing that a reasonable person would think of as private because they treated it that way.

Same applies here, even more so, because Apple has previously guaranteed privacy, and already, this is not privacy on its face.

The only really important gap in knowledge is the one I mentioned before: that this also ends up being a slippery slope.


> No, I'm saying stop being a nerd.

That seems like a pretty empty thing to say at the best of times. It’s not clear how it helps.

> I do understand the technology just fine -- but that's entirely beside the point.

Do you? That remains to be seen. A shallow understanding is only sufficient if it is correct and supports your other ideas.

As for Kyllo v US, that doesn’t obviously explain anything about lumping this in with biased facial rec. We seem to have moved away from that remark.

It’s unclear what you mean when you say Apple has ‘guaranteed’ privacy.

Do you mean, they have said they will only use this technology to check for CSAM, and they won’t use it for anything else?

If so, then I agree. Now that this has been publicized, and Apple has released detailed answers to the privacy concerns, people can reasonably expect it to be used only for the purpose of preventing CSAM being uploaded to iCloud, and not for anything else. That is a publicly documented commitment and seems like it would stand up well in court.

By your reasoning, there is no slippery slope, and this feature is exactly what Apple says it is, because they have made such public commitments. You say it isn’t privacy on its face but why do you think that?

If the system only reports already publicly known CSAM, and is well known to do so, and is part of an opt-in service, how is that a privacy violation?

If on the other hand you are claiming that Apple has made some other blanket ‘guarantee’ of privacy and this new feature contradicts that, I’d be curious to know what guarantee you are referring to.

It’s worth noting that once detected, CSAM must be reported by statute, and other cloud providers report tens of millions of images per year. I don’t know what the status of these reports is in the courts, or whether they have been tested.

I am not a lawyer, but perhaps you are.


I am, but that's beside the point.

Look, you're a huge sucker if you think that the boundaries of the tech and the stated policy today are 1) not fluid and 2) here's the bigger part -- aren't there primarily for the purpose of laying the groundwork for more intrusive spying. That's the "nerd" charge. If you follow the words they are saying and treat those as gospel and limiting, you're a nerd and a sucker.

And to take it further, if I seem paranoid or whatnot -- that's fine; it's better and smarter to be wrong in my direction than it is in the sucker direction, where you can't put the toothpaste back in the proverbial tube.


> I am,

Good.

> but that's beside the point.

Is it? I think legal insight is relevant to what we are discussing.

> Look, you're a huge sucker if you think that the boundaries of the tech and the stated policy today are 1) not fluid

Who would think that?

> and 2) here's the bigger part -- aren't there primarily for the purpose of laying the groundwork for more intrusive spying.

It’s obvious that you think that is the agenda. Calling people who aren’t as convinced as you ‘suckers’ tells us you are sure of yourself, but not much else.

>> That's the "nerd" charge.

Sure, but it’s uninformative. It’s pretty obvious there are people who think what you think, so no news there.

> If you follow the words they are saying and treat those as gospel and limiting, you're a nerd and a sucker.

Agreed, but so what? If you treat the words as gospel you are a fool, but equally if you ignore them altogether you are simply ignorant.

Those aren’t the only options.

> And to take it further, if I seem paranoid or whatnot -- that's fine; it's better and smarter to be wrong in my direction than it is in the sucker direction,

I like this line of reasoning. I agree that it’s often good to take a precautionary position.

However in this case I just think the maximally paranoid position is weak, not just rhetorically, but effectively.

As for ‘intrusive spying’ being the primary purpose. That is an open question. Nobody is denying that law enforcement, and presumably intelligence agencies, want that, and will exploit what they can, in secret if they can get away with it. But they aren’t the only actors here. Is it Apple’s primary purpose? Is it NCMEC’s primary purpose?

That’s why understanding the technology matters.

> where you can't put the toothpaste back in the proverbial tube.

Now who is being a sucker about the boundaries not being fluid? The toothpaste in this case was out before the tube was ever invented.

Privacy technology is not binary, and always exists within a social context.

The state is always going to employ paranoid actors, and the public is right to be concerned about them. Law enforcement is always going to push for more power, and the public is right to want that power checked.

The rest of us, Apple included, operate in a complex and fluid environment. Defaulting to paranoia is like being a stopped clock. You’re right twice a day but you never know what time it really is.


This is why algorithmic and predictive policing is so terrifying. Everyone in the justice system now has a great way to shirk liability for their actions and decisions: they can just defer to the infallible black box system that tells them who is a criminal or not. Want to side step accusations of bias or targeting? They were just going after whoever the black box said was guilty, and computers purportedly are not biased.

The corollary to this shirking of responsibility is how it lets people, and not just law enforcement, justify abuse of whoever the system says is guilty of a crime, because after all, the company that made the black box says that there is a one in a trillion chance of encountering a false positive. Judges will throw the book at defendants because, statistically, the black box system is almost never wrong, and the system says the defendants are monsters.

But the most Kafkaesque part is that people will never get an answer for how these systems determined they were guilty. It's a trade secret, it's part of ongoing investigations, it's critical to national security, or in Apple's case, it's literally illegal to look and find out how their system came to its conclusion, because viewing the data itself is a crime. These systems are often inscrutable ML models, as well, and we all know just how buggy and error-prone computers and software can be.


> The thing that scares me isn't so much the violation of privacy. It's the idea that some computer algorithm can accuse me of a crime automatically with no evidence and generate an investigation.

Nothing about Apple’s proposal involves algorithms accusing people of a crime without evidence.

All actual alerts are done by humans checking the photos.


Predictive policing models require a human in the loop as well. However people tend to trust these algorithms far beyond their reliability without knowing how they work. As I understand it, human reviewers for this program do not see the photo itself, but instead see the hash and make a determination from that.

A positive result may be enough to see all of your devices in evidence bags for the next 6 months and serve as probable cause for warrants. In addition, how hard is it to subpoena Apple for a report on how many times a whistleblower's device has been flagged, then use it as probable cause? Or cherry-pick which reports Apple makes to target undesirables.


> As I understand it, human reviewers for this program do not see the photo itself, but instead see the hash and make a determination from that.

No, this is not correct. Human reviewers see a visual derivative which is separate from the hash. It’s basically a blurred thumbnail - enough to visually confirm that the image is not a false positive, but not enough that the reviewers are constantly exposed to child porn.

Also remember that multiple matches are required to even get to the human review.

The rest of your comment really doesn’t seem to match the system being described. It’s not predictive policing or anything like it, and it is obviously very much against Apple’s interest for it to generate false positives.


> It’s not predictive policing or anything like it, and it is obviously very much against Apple’s interest for it to generate false positives.

It is not predictive policing. However, it's a pretty close cousin: automated policing. I'm also skeptical that a blurred image will be enough to confirm/deny CP. I'm pretty sure that it's a system similar to YouTube's Content ID and will work out in a very similar fashion. Also, they have a very good incentive to err on the side of false positives in order to reduce liability for hosting CP on their servers.

It worked for the DMCA; now law enforcement is trying something similar for CP.


> It is not predictive policing. However, it's a pretty close cousin: automated policing.

It’s not policing. This is a mechanism to detect if people are uploading known child pornography images to Apple’s servers without giving Apple access to your photos. That is the only use case for this system.

Yes, if you try to upload such a collection, a police report may be filed, but only if you do this specific thing and it is verified by humans.

> I'm also skeptical that a blurred image will be enough to confirm/deny CP

It doesn’t have to confirm CP - it only has to confirm that the image matches the known CP from the database.


I'm not buying it. If you can recognize what it is, enough to be able to be sure, you're exposed to it. If not, you're just guessing - and your guess is heavily biased towards confirming what the machine said, because Apple spent $MASSIVE_AMOUNT on this technology, and who are you to question it based on a blurry picture? Also, you'd be saving children, and if you're wrong (which you're likely not - remember, industry-leading AI!) - well, the police surely will find that out very quickly and everything will be fine.

It's not like it's the first time such systems have been built. We have cases of Big Social banning people for random stuff and then saying "it was a technical error" when the noise in the media is strong enough. We have chess channels banned for racist hate speech. Only this time the question is not whether you will be denied the opportunity to post cat pictures on facewitter for a week. It's pretty much the most shameful accusation one can be subjected to in our society. Once the press gets hold of it - and it'd get hold of it the minute the police does - there's no coming back from it for the person affected (well, maybe if they are Hunter Biden, but not otherwise). And all that will hinge on an anonymous drone looking at a blurry picture?


> I'm not buying it. If you can recognize what it is, enough to be able to be sure, you're exposed to it. If not, you're just guessing

That’s a misunderstanding of what is happening. The reviewer doesn’t have to see whether the visual derivative looks like child porn. They only have to see whether it matches the visual derivative of the child porn image that the hash matched.


So they'd just compare two small very blurry thumbnails? Not sure that makes it any better.


It’s not literal blurring - it’s a transform that means you don’t see the actual image, but you do see features and detail that lets you easily tell two source images apart.

The point is that you aren’t looking to see if there is porn in the image. You are looking to see if the image matches the porn image.

It’s a good mechanism.
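
(For readers unfamiliar with the general idea: here is a minimal sketch of perceptual-hash matching against a database of known images, using a simple "average hash." This is only meant to illustrate the match-against-known-images concept; Apple's NeuralHash is a different, unpublished function, and the Pillow dependency, 64-bit hash size, example hash value, and distance threshold below are all invented for illustration.)

    from PIL import Image

    def average_hash(path, size=8):
        """Downscale to size x size grayscale; each bit says whether a pixel is above the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a, b):
        return bin(a ^ b).count("1")

    # Hypothetical database of hashes of known images (value made up).
    KNOWN_HASHES = {0x8F3C66E10A5B9D42}

    def matches_known_image(path, max_distance=5):
        """True if the photo's hash is within max_distance bits of some known hash."""
        h = average_hash(path)
        return any(hamming_distance(h, k) <= max_distance for k in KNOWN_HASHES)

The relevant property is that the system (and the reviewer) only ever asks "is this close to a specific known image?", never "does this look like abuse material?".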


There is a human review (ugh.) as well.

But still, your example is very concerning.


From Apple's announcements, there is a human comparison of the visual hashes. They don't appear to be forcing humans to view the images themselves.


The technical assessments speak about "visual derivatives". I'm guessing they mean a low resolution thumbnail.


They say there is a human review, but fail to explain the parameters of the review.

How many photos is a reviewer expected to review in an hour? What are their incentives and performance metrics? Do they have a 3 party voting system to confirm? For positive human review is there additional scrutiny? Is data anonymized properly? Does law enforcement receive a copy? Is this going to be limited to child exploitation?

Without this info I just have to assume it's some guy in a box expected to review 1000 pictures an hour and gets scrutinized if he doesn't click at least 10% positive. So he's just counting 1 positive 9 negatives.


Remember they said the odds of an account being falsely reported were "1 in a trillion."

We have no math to back that; some have called it BS. However... that would mean that one person could easily review only a few photos a week even if it was way off.
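
(For a sense of the arithmetic: a small per-image false-positive rate combined with a multi-match threshold drives the per-account rate down very quickly. The sketch below uses invented numbers (a 1-in-a-million per-image error rate, a 10,000-photo library, 30 matches required), since Apple has not published the inputs behind its figure, and it assumes errors are independent, which is itself a big assumption.)

    from math import lgamma, log, exp

    def log_binom_pmf(k, n, p):
        """Natural log of P(X = k) for X ~ Binomial(n, p)."""
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))

    def prob_account_flagged(p, n, threshold):
        """P(at least `threshold` false matches in a library of n photos)."""
        return sum(exp(log_binom_pmf(k, n, p)) for k in range(threshold, n + 1))

    # 1-in-a-million per-image error, 10,000 photos, 30 matches required:
    print(prob_account_flagged(p=1e-6, n=10_000, threshold=30))  # vanishingly small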


I would hope they aren’t incentivized to incriminate innocent people.


I would hope so as well. However, my hopes for these policing algorithms have been dashed too many times. The potential for abuse is just too great without effective laws and appropriate procedures. We've seen the results at YouTube, Pasco County Florida, the UK, and Facebook.

There is a reason that a key part of justice is being able to face your accusers. Secret algorithms are being used to abrogate this right. You can't cross examine an algorithm. Expert witnesses to contradict its expertise are too expensive for most defendants. Then the developers of the algorithm can claim with plausible deniability that the algorithm is 1 in a trillion accurate.

It's essentially the new bite mark analysis and polygraph pseudoscience used to elicit false admissions of guilt or convince gullible juries. "We have the hash of your photo George, we KNOW you are a pedophile. Just admit it and you can go home. We'll go easy on you. We promise!"

https://www.forbes.com/sites/nicksibilla/2021/04/26/lawsuit-...


I've been persuaded on this over the past few days.

Initially, I saw the controversy as overblown: It's the exact same content scan that already occurs when uploading images to iCloud; it will still only occur when uploading, and the only change is where it takes place.

I now see that as a reductionist take. Where it takes place does matter. The lines between client and server have been slowly blurred over the past decade to the point where a move like this may seem trivial to many, but ultimately, it is not. It becomes a foothold for so much more, and despite Apple's detailed assurances of all the friction they've installed onto this particular slippery slope, to step onto it at all is a step too far.


I can't understand how anyone - even Apple's usual cheerleaders like Gruber - can justify defending this with a straight face. It's scanning your content on your device, without your consent. Full stop.

Apple's FAQ to try and quell some of the backlash for this just makes it sound even worse in my opinion with gems like this:

_Could governments force Apple to add non-CSAM images to the hash list?_

_Apple will refuse any such demands._

Bullshit. If China "requests" this with threat of banning them from selling the iPhone in China? They'll just say "Apple must operate under the laws of the countries it operates in" and its hands are tied. Which is most likely how this whole thing started.

https://www.apple.com/child-safety/pdf/Expanded_Protections_...

Maybe there will be more of an uproar when this inevitably comes to macOS.


> Maybe there will be more of an uproar when this inevitably comes to macOS.

It's coming to the next macOS release this fall.


It is with your consent. If you don't use iCloud Photos, your files don't get scanned.


Oh right, so if I stop using the thing I paid for, I can avoid giving my consent. I could also just not buy an iPhone for the same effect. (at least until Google gets around to copying Apple)


This isn’t true unless you paid for additional storage in iCloud Photos, which you are under no obligation to do.

There are numerous competing photo storage services with both free and paid tiers.


I know about nothing on this topic, so my only comment is that every time I read something along "please think of the children!" it suddenly raises warning flags.


There are numerous valuable child protection laws and services against which your knee-jerk would be unwarranted. It just happens to be warranted this time around.


True. More generally, "think of the children" is representative of a class of moral imperative arguments. Such arguments are easily abused by policy makers as they are an emotional short-circuit of logical reasoning. Being skeptical of people who make such arguments is healthy.


He didn't say that all measures to protect children should be rejected out of hand. He is just saying that if someone's argument starts and ends with fear-baiting around children, then they probably have ulterior motives.


They professed unfamiliarity with the details of the situation, so they could not have known that the emotional appeal was the full extent of the argument.


Thanks, that's exactly my point.


It's the same excuse the EU are currently using to infringe upon its citizens' privacy and require messaging application providers to install backdoors. It's an appeal to emotion, and since we have to assume that these legislators are intelligent, it's a disgusting overreach.


They are indeed overreaches, but even a valuable child protection law or service might presumably be pushed with an appeal to emotion. It’s better to familiarize yourself with the situation before dismissing it out of hand. While preserving a healthy dose of skepticism, of course.


I agree that one should look into what is being proposed and its implementation, but in both these instances backdoors are being introduced. Once backdoors are in place any government can petition access, malicious actors have attack vectors, and all of this for one proposed quasi-legitimate use-case.


We are in agreement here.


Nice :)


That doesn't mean that "think of the children" shouldn't raise warning flags.

The fact that good child protection laws exist doesn't invalidate the fact that "child protection" is frequently and increasingly used to push through bad laws.


What Apple is doing is the equivalent of the police one day deciding to search everyone’s physical home photo albums just in case there’s a picture of an illegal activity.


Not really, actually.

They only do the scanning if you use iCloud Photos backup. If you use, say, local storage instead of iCloud, you don't get scanned (which would be a more apt comparison to a physical photo book). Also, they aren't scanning for new illegal activity, but images that you somehow obtained of past illegal activity that the government already knows about.


This to me is the most unbelievable aspect of the whole thing.

Apple is basically saying:

"So hey, if you have anything you don't want us to know about, here's how to get around our scans (wink, wink)."

What would be the point of implementing a system and then telling people how to avoid it? I think at some point in the near future they will flip a switch and start scanning private photos.


I see your distinction and it makes sense: if you don't want to be subject to Apple’s CSAM checks, don't upload to iCloud.

I’d say we can do better. We need better tools for individualized data ownership and interoperability.


Will the next NSOGroup Pegasus malware feature a swatting-as-a-service plugin that makes the victim phones self-report?


A lot of people seem to be forgetting/not know about the other aspect of what Apple will be doing, which is scanning iMessage pics sent or received by minors for nudity. If it's detected, their parents will be notified and have the ability to view the pic in question.


> If it's detected, their parents will be notified and have the ability to view the pic in question.

There are some other steps in there. If such a photo is detected the minor is notified that it may be intended to harm them and the subject of the photo may not have consented to sharing it. They are asked if they are sure they want to view the photo.

If they say they are sure and they are between 13 and 17 they are shown a blurred version of the photo. Their parents are not notified.

If they say they are sure and they are under 13 they are told it is their choice but their parents want to make sure they are OK and will be notified so they can check.

If they then elect to view the photo they are shown a blurred version of the photo and their parents are notified.

This is all off by default. Parents have to expressly opt in to it when they set up a child's device with Family Sharing.
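
(If it helps, here is the branching described above written out as a toy function. The structure follows the public description as relayed in the comment above; the function and field names are invented and are not Apple's API.)

    from dataclasses import dataclass

    @dataclass
    class ChildAccount:
        age: int
        feature_enabled: bool  # off by default; a parent opts in via Family Sharing

    def handle_flagged_incoming_photo(child, wants_to_view):
        """Returns whether the photo is shown and whether parents are notified."""
        if not child.feature_enabled:
            # Feature off: no scanning, the photo just displays normally.
            return {"photo_shown": True, "parents_notified": False}
        if not wants_to_view:
            # Child saw the warning and chose not to view the photo.
            return {"photo_shown": False, "parents_notified": False}
        if child.age >= 13:
            # 13-17: may view after the warning; parents are not notified.
            return {"photo_shown": True, "parents_notified": False}
        # Under 13: told parents will be notified; viewing triggers the notice.
        return {"photo_shown": True, "parents_notified": True}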


>I will not write another line of code for their platforms ever again.

Surprised that this is still a thing. Apple has made it very clear in their App Store case that they do not need developers and apps on their platform, and that Apple operating their App Store has been a benefit, or more like a gift, to developers for access to their users.


Apple apparently has the ability to look at pix stored in iCloud. I wonder whose pix they will start looking at first?

The Fappening Part II By Apple

https://en.wikipedia.org/wiki/ICloud_leaks_of_celebrity_phot...

That which CAN happen WILL happen.



Serious question: why did Apple bother making that announcement at all? I can't imagine they're naive enough to think it would be good press for them?

They could have done this quietly without telling anyone, maybe with a vaguely-worded update to the terms of service for the next mandatory iOS update that nobody reads anyways.


Hiding such a massive thing in a quiet update would have generated a much larger shitstorm. I don't know if the internal memo I've read is legit, but by the look of it, Apple made its calculations and decided it was more in its interest to release it this way.


> Serious question: why did Apple bother making that announcement at all? I can't imagine they're naive enough to think it would be good press for them?

Because they are proud of it.

I think for some people, this was a feel good project. I know I'd feel better about working for a company that profits from forced labor[1] and lobbies to water down legislation against forced labor[2] if I was working to help children there.

Also, who wouldn't want to stop CSAM distribution? They probably thought dissent would be shamed and outnumbered.

[1] https://www.theverge.com/2021/5/10/22428899/apple-suppliers-...

[2] https://www.washingtonpost.com/technology/2020/11/20/apple-u...


Exactly. It is rephrasing the definition, redefining what they "really mean" by their intentions.

Hence, Apple Inc. has a very strange definition of what it thinks 'privacy' means.

Always with privacy in mind.™ /s


If you wanna screw someone over and you know they have an iPhone with iCloud backup set up, you can WhatsApp them a pic that matches a CP signature.


This could actually have some very serious consequences. Quote from the apple announcement:

> Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC.

So a match doesn't just get you reported to the authorities, it disables your account. Even if the feds do investigate you and find you innocent due to being swatted or whatever, you still need to file an appeal with Apple and hope they give you back your digital life.

This could also lead to large spam campaigns of CP being sent to random people, either to just fuck with them, or to undermine the system/overwhelm investigators with a bunch of false positives.


> This could also lead to large spam campaigns of CP being sent to random people, either to just fuck with them, or to undermine the system/overwhelm investigators with a bunch of false positives.

If you're a nation-state that wants to sow discord and distrust in another nation's society, what better way than to frame random, respected or important people?

The attacker wouldn't need to worry about being arrested, due to being in a foreign country and most likely associated with foreign intelligence agencies. Those same agencies would have the evidence they could plant on people's devices or cloud storage, and the resources not to get caught.

They wouldn't even have to burn zero days or accounts/numbers to send messages, just look for database dumps of services that the target country uses, and then log in and upload the evidence.

It could be cheap, easy and scalable.


You need to meet the threshold, and they must add those photos to their iCloud Photos library. Remember iCloud Backups are E2E encrypted, iCloud Photo Library is not E2E.


... an innocuous hash-collision one FTW


Digital Swatting


As a back-end engineer I can't understand this outrage.

Apple's approach is less intrusive than Google's and Microsoft's, since they don't touch your photos in iCloud except when you've passed the threshold, at which point Apple workers have the technical ability to decrypt your detected (not regular) photos and manually compare them with images from the database. Also, the iPhone doesn't trigger photo scanning if you don't upload photos to iCloud.

From a technical and privacy standpoint they have the best approach, and it seems people are mad without even understanding what Apple is doing.

Android users never cared, but when the news comes to Apple, everyone loses their shit. I can't believe people are that weird.
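
(The "can't decrypt until the threshold is passed" property mentioned above is described in Apple's technical summary as a threshold secret sharing scheme. Below is a toy Shamir secret sharing sketch, just to show how a key can be made unrecoverable below a threshold; it is not Apple's actual construction, and the prime, threshold, and share counts are arbitrary choices for the demo.)

    import random

    PRIME = 2**127 - 1  # a large prime field for the toy demo

    def make_shares(secret, threshold, num_shares):
        """Split `secret` so any `threshold` shares reconstruct it; fewer reveal nothing."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, num_shares + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = 123456789  # stand-in for the per-account decryption key
    shares = make_shares(key, threshold=30, num_shares=100)
    assert reconstruct(shares[:30]) == key   # at or above threshold: recoverable
    assert reconstruct(shares[:29]) != key   # below threshold: effectively random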



