The deceptive PR behind Apple’s “expanded protections for children” (piotr.is)
753 points by arespredator on Aug 12, 2021 | 571 comments



I have a newborn at home, and like every other parent, we take thousands of pictures and videos of our newest family member. We took pictures of the very first baby bath, so now I have pictures of a naked baby on my phone. Does that mean that pictures of my newborn baby will be uploaded to Apple for further analysis, potentially stored indefinitely, and shared with law enforcement?


Lots of people responding to this seem to not understand how perceptual hashing / PhotoDNA works. It's true that they're not cryptographic hashes, but the false positive rate is vanishingly small. Apple claims it's 1 in a trillion [1], but suppose that you don't believe them. Google and Facebook and Microsoft are all using PhotoDNA (or equivalent perceptual hashing schemes) right now. Have you heard of some massive issue with false positives?

The fact of the matter is that unless you possess a photo that exists in the NCMEC database, your photos simply will not be flagged to Apple. Photos of your own kids won't trigger it, nude photos of adults won't trigger it; only photos of already known CSAM content will trigger it (and even then, Apple requires a specific threshold of matches before a report is triggered).

[1] "The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account." Page 4 of https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


The false positive rate for any given image is not 1 in a trillion. Perceptual hashing just does not work like that. It also suffers from the birthday paradox problem - as the database expands, and the total number of pictures expands, collisions become more likely.

The parent poster does make the mistake of assuming that other pictures of kids will likely cause false positives. Anything could trigger a false positive - especially flesh tones. Like, say, the naughty pictures you've been taking of your (consenting) adult partner. I'm sure Apple's outsourced low-wage-country verification team will enjoy those.


> The false positive rate for any given image is not 1 in a trillion. Perceptual hashing just does not work like that. It also suffers from the birthday paradox problem - as the database expands, and the total number of pictures expands, collisions become more likely.

There was a good article [0] on HN a couple of days ago that touches on the flat-out lie regarding "one in a trillion" and on how poorly thought out PhotoDNA sounds.

[0] https://www.hackerfactor.com/blog/index.php?/archives/929-On...


Apple is not using PhotoDNA, but in any event: this is a good article, but it misconstrues the '1 in a trillion' quote, as is canvassed in the comments on the article itself. According to Apple there is a 1 in a trillion chance of wrongly flagging an account, not a 1 in a trillion false positive rate for individual images, and we know that the steps before an account is reported include multiple flagged images and human review.

The details of those processes haven't been fully disclosed, so it isn't possible to say whether 1 in a trillion is a reasonable estimate or not.


I don't want my private pictures reviewed by humans. What's the probability of that?


"Don't worry, even if the system flags your baby pictures, we'll just hand them to the lowest-bidding subcontractor who will review the pictures of your naked baby (you know, to catch pedophiles!) and then probably not report you to the authorities. If this happens too often we're just going to ban you tho, this process costs money y'know?"


People really don't know about the birthday paradox, do they? It should be required reading any time someone claims a false positive rate is "1 in a trillion."
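
To make the birthday-paradox point concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is made up purely for illustration (hash size, database size, per-pair rate); the only takeaway is how the probabilities scale as the database and the photo counts grow.

  import math

  def p_any_collision(n_items, hash_space):
      # Classic birthday bound: P(at least one collision) among n_items values
      # drawn uniformly from hash_space possibilities.
      return 1.0 - math.exp(-n_items * (n_items - 1) / (2.0 * hash_space))

  def expected_false_matches(user_photos, db_entries, per_pair_rate):
      # Every user photo is compared against every database entry, so even a
      # tiny per-pair false-match rate is multiplied by a huge number of pairs.
      return user_photos * db_entries * per_pair_rate

  # A billion photos hashed into a 64-bit space (illustrative size) already
  # gives roughly a 2.7% chance of at least one collision somewhere:
  print(p_any_collision(1_000_000_000, 2**64))                     # ~0.027

  # 4 billion photos checked against a 1-million-entry database at a
  # "one in a trillion" per-pair rate still yields thousands of expected hits:
  print(expected_false_matches(4_000_000_000, 1_000_000, 1e-12))   # ~4000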


Then where are the news reports or articles of these false positives that would have shown up within the past decade? That's how long these companies have been using PhotoDNA on the server side. And the version of PhotoDNA from ten years ago would probably have been inferior to the version in place now. Is there even a single verifiable report of such a false positive? I feel that with the amount of attention brought to this issue, if there was such a report then it probably would have been brought up by now.

Beyond that, assume that a false positive occurs and the innocent person is taken to court. Why would they have to fear being convicted if they don't actually hold any incriminating evidence? At most, that would become evidence against using perceptual hashing in future court cases.

The issue in that case is the violation of the innocent person's privacy, not that they have a risk of being falsely convicted. The courts would still need admissible evidence, and I don't believe that only having a perceptual hash and a set of legally photographed images clears that bar.

However, it becomes a completely separate issue if the false positives are "coincidentally" used to persecute marginalized groups in other countries where the same set of laws don't apply. But Apple has stated that they have no intention of expanding the system's scope to follow those laws. There isn't any evidence yet that Apple will do such a thing, or that they've already done it in the past. We are free to disbelieve them, but that's what they've stated. We can only hope that they won't change their minds.


> Why would they have to fear being convicted if they don't actually hold any incriminating evidence?

Because going to court is extremely expensive?

Because retaining legal counsel is extremely expensive?

Because your district attorney will likely inflate the charges to coerce you into taking a plea deal, despite your innocence?

Because you don't want all of your private documents, including ones completely unrelated to the alleged crime, entered into the court records?

Because a "jury of your peers" can be convinced to believe just about anything?

Because even if you're acquitted, most people will still believe you're guilty, and treat you as such?

There are a myriad of reasons one shouldn't let the police go on mass criminal fishing expeditions. A lot of innocent people take collateral damage as a direct result of the incentives involved in an adversarial justice system.


Because we know that the US justice system is horribly corrupt and inept and don't care if they get it right, as long as they get a win.

Because a case like this, you'll be paying a lot of bail just for the privilege of defending yourself properly. (Might have to sell/mortgage your house, or your mother's house).

Because you will probably make the news, and the false accusation will be on the internet forever.

Because you will probably get the living shit beat out of you by the arresting police.

Because you will probably get the living shit beat out of you by your fellow inmates.

Because the incompetent people who charged you can't be sued due to qualified immunity, so no skin off their back.

Because your family will have a lot of stress on it and this will affect your spouse, children, siblings and parents in a very negative way, probably for the rest of their lives.

Because you've lost your job in the process.

Because you'll probably never get a decent job again.

Because you'll probably need a lot of psychological counseling assuming you survive this process.


> That's how long these companies have been using PhotoDNA on the server side.

Then where are their published peer reviewed papers demonstrating their false positive rate? The burden of proof is on them and they don't have any verifiable public data at all.


> Then where are the news reports or articles of these false positives that would have shown up within the past decade?... Is there even a single verifiable report of such a false positive? I feel that with the amount of attention brought to this issue, if there was such a report then it probably would have been brought up by now.

Positives get reviewed by humans, at which point false positives are identified and discarded. We would not hear about them, and there would not be any reporting about them in the press.

You might think that this is the system working as intended, but I do not and would never consent to Apple giving my photos to some random anon to look at.


>Positives get reviewed by humans, at which point false positives are identified and discarded.

Let's not forget, this part only happens if you are lucky. You get the wrong photo reviewer, you catch your girlfriend in the wrong type of lighting that makes her look as if she could pass for 17, and you're going to have the police show up at your work and cuff you, demanding to know who this person of interest is. Adult males have been tried for possession of CSAM for having images of well-known, unquestionably adult (well over 18) video stars: https://reason.com/2010/05/03/porn-star-saves-man-from-incom...


But this isn't from automatic scanning. This is from dumb police work - one could actually argue that the scanning approach would have saved the person from this problem.


The privacy violation here is smaller than the one we all used to have to go through to get photos developed at all.


People could and did develop their own photos.


Exactly why people avoided taking naughty pictures with film. Digital, on the other hand, is private. And should remain so.


According to an article I read this morning in the WSJ, you would have to upload about 30 matching images to iCloud, at which point it would be flagged for review, and even then only the flagged images would be shown to the reviewer. This strikes me as not a serious privacy risk.

Article here: https://apple.news/AhVzAY--DT36Oz22W7taL9Q


Tim Apple doesn't have the right to see a single one of my photos, let alone thirty.

How is Apple going to compensate the innocent people?


> you would have to upload about 30 matching images to iCloud

For now.


If your concern was that they might at some future point change their policies and do something objectionable then you should not be using iCloud at all.


"If you're concerned that a bad thing might happen some day, you shouldn't today oppose things that pave the way for that bad thing happening."

"If you're concerned that a hurricane might blow your house over some day, you shouldn't build one today."

"If you're concerned that your child might grow up to be a psychopath, you shouldn't have kids."


Kinda funny how almost every leaker or hacker in the last decade always seems to end up with child pornography in some way, shape, or form once the authorities get to them. Besides which, law enforcement isn't picky about when they pick people up.

>The issue in that case is the violation of the innocent person's privacy, not that they have a risk of being falsely convicted. The courts would still need admissible evidence, and I don't believe that only having a perceptual hash and a set of legally photographed images clears that bar.

So... You admit that this type of scanning is an invasion of privacy, and would likely be a flagrant constitutional violation if done by the Government?

So why is it okie-dokie for the private sector to do this type of systematic check and balance evasion? That's what gets to me.


> Like, say, the naughty pictures you've been taking of your (consenting) adult partner. I'm sure Apple's outsourced low-wage-country verification team will enjoy those.

I'm not sure they'll be able to after looking at CSAM all day...


Depends...

This would be a great job for an actual pedophile.


> The false positive rate for any given image is not 1 in a trillion

What's relevant is the overall false positive rate. If they require 6 matches, for example, it's enough for each match to have a 1% false positive rate in order to get 1 in a trillion overall.


Sadly not, because as the original comment implies, a false positive on a given image might cause it to be uploaded to Apple for manual review.

So if Apple has a "1 in a trillion" (1e-12) chance of flagging a person and they require (my guess) 3 hits to flag someone, then that would mean the chance of each single image being a false positive is (1e-12)^(1/3) = 1e-4. So that means:

Expect 1 in 10,000 images to be uploaded to Apple for potential human verification.


No, manual review only happens after the threshold is reached. Meaning, there is a 1 in a trillion chance that an innocent account reaches the threshold and has any images manually reviewed by Apple.


My guess is that the one in a trillion figure includes the threshold that must be exceeded, requiring multiple false positives, bringing the full chance of your account being flagged to 1 in a trillion.


Correct, the "1 in 1 trillion" does factor in the requirement for multiple images to match. From Apple's technical summary:

"Using another technology called threshold secret sharing, the system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images."

"The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account. This is further mitigated by a manual review process wherein Apple reviews each report to confirm there is a match..."

And when the manual review process sees that the images flagged aren't NCMEC classification A1 (A=prepubescent, 1=sex acts) the flag is cleared.


> Anything could trigger a false positive

Don't you think Apple has independently arrived at that very same possibility? Maybe even thought about it and extensively tested it? Then put a probability on it? And then chosen the threshold such that the overall probability of falsely flagging an account is, indeed, minuscule?


Given what I know about how FAANGs are run, they probably shrugged and wrote down whatever threshold felt right.


> It's true that they're not cryptographic hashes, but the false positive rate is vanishingly small.

What are you talking about? Ask anyone who has worked in this space: false positives abound[1][2], especially when you're looking for fuzzy matches. And you have to look for fuzzy matches, otherwise slight modifications to illegal images would bypass the detection system.

> Photos of your own kids won't trigger it, nude photos of adults won't trigger it;

This is also incorrect. In general, if two images kind of look like one another when you squint, they're going to have similar perceptual hashes. A lot of unrelated things look similar to one another when you squint, and a lot of unrelated things are going to have similar perceptual hashes. And, again, you'll be doing fuzzy matches on these hashes, so you're going to pick up those unrelated things even more so than when you just have hash collisions.

[1] https://news.ycombinator.com/item?id=28091750

[2] https://news.ycombinator.com/item?id=28110159
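
For readers unfamiliar with perceptual hashing, here is a minimal difference-hash (dHash) sketch in Python. This is a classic textbook scheme, not Apple's NeuralHash; it assumes the Pillow library is installed, and the file names are placeholders. The point is simply that the hash is derived from coarse visual structure, so unrelated but visually similar images can land only a few bits apart.

  from PIL import Image  # assumes Pillow is installed

  def dhash(path, size=8):
      # Difference hash: shrink to (size+1) x size grayscale, then record
      # whether each pixel is brighter than its right-hand neighbour.
      img = Image.open(path).convert("L").resize((size + 1, size))
      pixels = list(img.getdata())
      bits = 0
      for row in range(size):
          for col in range(size):
              left = pixels[row * (size + 1) + col]
              right = pixels[row * (size + 1) + col + 1]
              bits = (bits << 1) | (1 if left > right else 0)
      return bits  # a 64-bit fingerprint of coarse image structure

  def hamming(a, b):
      # Number of differing bits; a small distance means "perceptually similar".
      return bin(a ^ b).count("1")

  # Placeholder file names: two unrelated photos with similar composition and
  # flesh tones can end up only a few bits apart, which is the failure mode
  # being described above.
  print(hamming(dhash("photo_a.jpg"), dhash("photo_b.jpg")))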


The 1-in-a-trillion figure is only after factoring in that you would need multiple false positives to trigger the feature. It's not descriptive of the actual false positive rate of the hashing itself.


YouTube's current censorship outrage, with its abusive video takedowns, is proof that perceptual-hash false positives are a massive problem.

Even if the tech has a 1 in a trillion chance, it will happen a lot with billions of people generating thousands of images every year. And of course, there is the risk of the hash database being abused.


Are those YouTube takedowns happening due to perceptual hashes?

I assume they are just standard ML.


Even if this is true (I don't trust Apple's claim), I think this should still be used as a talking point to scare normal, uninformed people about the surveillance tech.

The other side started it first. Look at the government's dubious claims about terrorism prevention. John Walsh, a founder of NCMEC, testified to Congress that millions of children were abducted every year and that America was "littered with mutilated, decapitated, raped, strangled children" (this was and is not true).

If fear mongering about Big Brother throwing normal people in prison for pictures of their children is what it takes to blunt the expansion of the surveillance state I say fair play.


This is all behind a huge, neon-flashing-lights asterisk of "for now."

How long until they try to machine-learn based on that database? The door's open.


Apple previously stored photo backups in their cloud in cleartext. The door was always open. At some point if you are providing your personal images for Apple to store, you have to exercise a modicum of trust in the company. If you don't trust Apple, I suggest you don't use an iPhone.


Apple never had the technology to do offline scans of content directly on your phone. Now they have it. How long will it take until some three- or four-letter agency makes them scan other stuff too? They clearly have the tech to do so, and it's just one "if()" away from checking, e.g., who was the first person with WikiLeaked photos.


Tiananmen square tank man.

The CCP already gets Apple to censor the Taiwanese flag emoji and store all Chinese iCloud user data on CCP-accessible servers in China.

We know Apple presently actively and eagerly cooperates with CCP censors to be permitted to operate and sell in China.

This is tailor-made for CCP abuse.


I don’t even use iCloud but what you suggest is exactly what I’m going to do.

I no longer trust Apple, and I’m going to get rid of my iPhone.


I would argue that in a discussion about privacy, if trust is brought up it’s no longer a discussion about privacy.


Today PhotoDNA, tomorrow NeuralNet smart AI. PhotoDNA is limited to hashes because of the performance constraints of 15 years ago, which makes it limited to known CSAM. However, there is no doubt a push to pick up new CSAM content before it's ever distributed, which will absolutely introduce false positives.


Probability of a false positive for a given image = p

Probability of N false positives (assuming independence) = p^N

Threshold N is chosen by Apple such that p^N < 10^-12, or N log p < -12 log 10, or N > -12 log(10)/log(p) [since log(p) < 0, since p < 1].

ETA: Suppose, just for the sake of the argument, that p = 10^-3 (one false positive in 1000, so really quite bad).

Then log(p) = -3 log(10), so N > -12 log(10)/(-3 log(10)) = 12/3 = 4.

Similarly, if p is one in a million (10^-6), then N would be required to be > 12/6 = 2.

In practice, I'd expect N to be larger than 4, in other words, Apple being very conservative here.

ETA: The above doesn't take into account how many images M you have. The analysis gets more complicated, but N needs to be way larger than 4. I'll think about it some more.
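
A quick Python sketch of this calculation, using the same example numbers as above (purely illustrative); the last function adds the image-count factor M mentioned at the end, under the same independence assumption.

  import math

  def min_threshold(p, target=1e-12):
      # Smallest N with p**N <= target, i.e. N >= log(target) / log(p).
      # round() guards against float noise right at the boundary (e.g. p = 1e-3).
      return math.ceil(round(math.log(target) / math.log(p), 9))

  def binom_tail(m, p, n):
      # P(at least n false positives among m images), X ~ Binomial(m, p),
      # summed in log space so the tiny terms don't underflow.
      total = 0.0
      for k in range(n, m + 1):
          log_pmf = (math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)
                     + k * math.log(p) + (m - k) * math.log1p(-p))
          term = math.exp(log_pmf)
          total += term
          if term < total * 1e-17:  # remaining terms are negligible
              break
      return total

  def min_threshold_account(m, p, target=1e-12):
      # Smallest N so an account holding m images is falsely flagged
      # with probability <= target.
      n = 1
      while binom_tail(m, p, n) > target:
          n += 1
      return n

  print(min_threshold(1e-3))                  # 4, matching the estimate above
  print(min_threshold(1e-6))                  # 2
  print(min_threshold_account(10_000, 1e-3))  # 40 -- way larger than 4 once M is considered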


If the threshold is 4, then 4 photos need to have incorrectly matched, meaning the per-image false positive rate only needs to be 1/10,000 if the average user has 10k images: (1/10000)^4 * 10k is 1 in a trillion.


How do new hashes get added to this database? How do we know that all the hashes are of CSAM? Who is validating it and is there an audit trail? Or can bad actors inject their own hashes into the database and make innocent people get reported as pedophiles?


Apple has stated point-blank that the only source of CSAM content to generate the hash list will be NCMEC and other child safety organizations.

While I fully admit that NCMEC could do a better job with transparency and auditing, they are currently being used by several other platforms right now (Facebook, Google, Microsoft) without issue.

Could bad actors inject hashes of non-CSAM content into the database somehow? Well, even if they could do this, Apple employs human reviewers who must visually confirm that the flagged photo contains actual CSAM before they report the image. If the image does not contain CSAM, Apple is under no legal obligation to report it.

More information here: https://twitter.com/AlexMartin/status/1424703642913935374/ph...


Wait a second.

You’re telling me that you believe a low-wage worker reviewing the worst of the worst of human depravity is going to stand up on a soapbox and defend another nameless and faceless denizen of the Earth, when the crux of the argument is basically this:

“Yeah I know the person tripped the safety threshold for CSAM, but these images aren’t that bad!”

It’s not reasonable to trust the human reviewer. They put themselves at risk by going against the automated system. I’d bet 95% of people in this circumstance would pass the buck to the FBI to make their determination, at which point “your” life is already ruined.


This is a misunderstanding of the system. It's not a classifier, it's a hash. The only images that are going to be seen by this low-wage worker are going to be:

A) Images which are a correct hash match to an image already known to NCMEC or other agencies and already assigned CSAM category A1 (A=prepubescent, 1=sex act);

B) Images which are a hash collision. According to Apple, the likelihood of a collision is 1 in 1 trillion per user account.

This system isn't a child detector strapped to a porn detector, being backed up by a low-wage worker making legal or editorial judgement calls. It's searching for images already known to child safety organisations—and even then only the most unambiguously horrific classification within the set of known images, far far far beyond the point where any ambiguity could possibly reside.


> This is a misunderstanding of the system. It's not a classifier, it's a hash

You know that a perceptual hash has more in common with a classifier than with a cryptographic hash, right?

If they were using a crypto-hash then, yes, your argument could be valid, but with perceptual hashing your picture of your baby in the bath could VERY well generate the same hash as a CSAM image. And nobody believes Apple's "1 in a trillion" number.


I'd be surprised if a baby in the bath—or indeed any legitimately innocent photograph taken by a parent—could ever be a perceptual match to images tagged as "A1" by NCMEC and other agencies. This is the most extreme of the extreme: prepubescent minors actively engaged in sex acts.


I’d be surprised that a YouTube video of white noise would be flagged for a copyright violation, and yet here we are.

Things may work right now in 2021. Things will always be changing. New hashes will be introduced. New code will be introduced. New laws in different countries will be introduced.

Now that Apple has introduced this technology that no other phone manufacturer has, it can and will be changed to decrease privacy and increase government control. If China passes a law that states that the iPhone needs to scan everyone’s phones for anti-government material, do you really think Tim Cook will say no? They already acquiesced to storing iCloud backups unencrypted on Chinese servers.

And before you say that China would never do that, China and Hong Kong are hunting down the people who were videoed booing the Chinese anthem in a shopping mall.


> I’d be surprised that a YouTube video of white noise would be flagged for a copyright violation, and yet here we are.

Speak for yourself, because that didn't surprise me at all. When your corpus of "copyrighted" material is so utterly massive and almost entirely devoid of defined rules or boundaries, this kind of error is inevitable. If anything I'm more surprised that we haven't seen even more of these kinds of matches, like the sound of generic telephone ringtones, recordings of mechanised church bells, or machinery which performs highly uniform tasks, etc.

In the case of A1-categorised CSAM, the quantity of items is many orders of magnitude lower, the degree of technical curation will be orders of magnitude higher, and the thresholds for matches will be narrower. Is there a chance that the A1 corpus will have a few images that should have been classified as A2, B1 or B2? Yes. Is there a chance that it includes pictures of The Statue of Liberty or Westminster Tower? Almost certainly not.


You completely bypassed the point of my post. It doesn't matter what happens now. What matters is the future. If China creates a law saying that this system should work not only for CSAM but also for objectionable or anti-government material, is Apple really going to say no if it means billions of dollars in losses and Apple execs being targeted by the CCP? Of course they won't.


If you think it's Apple's (or Google's, or Microsoft's) role to act as the international human rights arm of the US State Department, then I have some bad news for you.

China can do whatever China wants. If China wanted Apple to scan the iPhones of Chinese residents for pictures of Winnie The Pooh, they could have done that last year. They pass a law, Apple must comply or leave. They don't have a choice.

Of course I don't like it. I think many things China does are awful. But at the end of the day I wouldn't stand for China exporting their morality onto me, and I'm not a hypocrite.


One explanation for this CSAM thing is to muddy the waters about content scanning using an unassailable goal, so that Apple can give the Chinese and Saudi governments what they want and not get the same degree of criticism.


You have a fundamental misunderstanding of perceptual hashes, then. If two images look similar to one another, then they will have similar perceptual hashes. That's the point of perceptual hashing.

They aren't doing simple hash matching, they're doing fuzzy matches on the hashes, so there will be far more false positives than just hash collisions.


All of us can only guess how much "fuzz" is being allowed for in their hash matches. I see no reason to believe Apple phoned in the analysis that leads them to be confident in a false positive rate of 1 in 1 trillion per iCloud account. While we can only guess, they actually know how their algorithm behaves, and I dare say they've validated it against tens of millions, if not hundreds of millions, of real customer images.


You are incredibly naive.

Apple started out saying that they can't decrypt data from anyone's phone. They fought a lawsuit from the FBI over the San Bernardino terrorist's phone. This is one of the reasons why I went all-in on Apple, because they were willing to fight the government over our privacy.

Now, years later, they don't encrypt iCloud backups because the FBI told them not to.

Google used to have humans review all copyright claims on YouTube. Fast forward a few years, and copyright claims are demonstrably approved automatically, and the content creator needs to prove that they didn't violate copyright. Look at the claims over white noise. Google doesn't even care anymore; they just let the copyright claims go through and affect the content creators with no review.

To believe that Apple employees will review CSAM-flagged photos 5 years from now is so incredibly naive, it's actually funny. You can bet they are working on AI that will handle this for them right now in Cupertino.

And then, it will be random chance whether or not we are flagged and labelled as pedophiles, or if a government wants to tag us as "problematic" and wants access to our phones because we are journalists and they want to see our anonymous sources.

It's naive to think that it won't go in this direction.

If you're a pedophile, after this announcement you will delete all the photos off your iPhone and never use it again. After the first few rounds of arrests and cleansing, it will be well known within the pedophile community not to use iPhones. And then the only ones who will be getting their photos scanned will be innocent people. So the entire feature doesn't make sense at all. It's a ruse.


There is a manual review step by Apple.

You have to first get a “significant” number of photos flagged, then they have to pass Apple’s manual review (ie looking at photo thumbnails), and only then does Apple report the account.


I find it incredibly eye-opening how severely this is all being misunderstood, as in how few have actually RTFM, on a site like HN.


It’s eye opening how naive some people on HN are despite decades of evidence that things are always twisted in favor of more government surveillance especially outside of the US.


If the worry is that a government can order Apple to do something, they could do that before or after this system was added just the same.

Again I’m not in favor of this system but I think a lot of the criticism misses the mark.


Apple’s whole marketing scheme was that they protected privacy. That was their market differentiation. So no, it wasn’t like this before, and this is why so many people like myself feel betrayed by Apple. This is what most of the outrage is about. The largest, most profitable company in the world was selling their products saying they believed in privacy, and now they have created a Trojan horse wrapped around CSAM that can be used by governments to spy on whatever is on our phones, and I guess our laptops too. Maybe not today. But in 10 years? Yes, definitely. They have already bent to government pressure since 2016; what would stop them going forward? Nothing. Tim Cook doesn’t want to go to Guantanamo for defying a government order.


If they were going to implement a government scanning tool it could be done much more simply. And they would if legally required, or turn down some of their largest markets.

This is a step too far (on-device vs in cloud) but all of the “but what if the govt…” is ridiculous because that happens anyway if the govt wants it.

Maybe we should change the government.


Your point that it "could have" been done more simply is completely irrelevant.

This "much more simply" mechanism is exactly what was created by Apple. It's done right now, and there's no need to worry about how much more simply they COULD HAVE done it.

It's a fait-accompli. The government wanted it, and now they have it. Now they can scan individual phones for whatever they want.


You keep saying they can scan for whatever they want, but that’s not true today, by Apple’s description (which is all we have to go by, and is what you are mad about).

Yes the government could order them to change the system. They could also order Apple to create the system in the first place without all the indirection, safety vouchers, human review, etc which make it inefficient as a direct surveillance tool.


Since when has the government acted so transparently?

They could simply tell Apple that this is for CSAM so that people would support it, and then change it later after everyone is acclimatized to it and has forgotten about it.

Which is exactly what has happened. And exactly what will happen in a few years.

You sound like a government plant trying to gaslight people into thinking this isn’t a big deal. It is a huge deal, and it’s not as simple as saying “the government could have asked for it much more simply!” This is them asking for it, plain and simple, and Apple delivering on it.


I didn’t say it would be transparent or public. I’m a government plant for saying the government could just tell Apple to scan for whatever they want without messing with the CSAM stuff? I’m saying governments are already capable of ordering this kind of thing, gag orders included, so this specific tech implementation doesn’t really change that. And yes that’s bad and scary and we should probably try to prevent our governments from doing that.

But no, I don’t believe that a government could “fool” Apple by adding non-CSAM images to the database. The review step would catch that.

I don’t like on device scanning in principle and in precedent. I’m just saying this specific tech stack doesn’t seem like it would be useful for your surveillance scenario, and most of your criticisms don’t seem to be based in having read how this system actually works.

I’m AGAINST this system, I just wish the discussion here weren’t so full of misinformation and bad assumptions.


That 1-in-a-trillion rate Apple gave is post-human-review, according to a recent interview


Do you have a link to that interview? That's a really big asterisk on the "one in a trillion" claim if true.


According to Apple's technical summary:

"The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account. This is further mitigated by a manual review process wherein Apple reviews each report to confirm there is a match..."

So no, "one in a trillion" doesn't include manual review.


> Apple Privacy head Erik Neuenschwander addresses concerns about its new systems to detect CSAM

> We want to ensure that the reports that we make to NCMEC are high-value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched. And so the threshold allows us to reach that point where we expect a false reporting rate for review of one in 1 trillion accounts per year.

https://techcrunch.com/2021/08/10/interview-apples-head-of-p...

Leaves some ambiguity, but it sounds like the 1-in-a-trillion figure applies to false reports reaching the government.


Who looks at the photos in that database? How do we know it is a trustworthy source? That it doesn’t contain photos of let’s say activists or other people of interest unrelated to its projected use?


Don't forget staff rotation. A database can live forever. You not only have to trust what is now, but what the future leaders of Apple may do, what all the future image viewers may do and who they are...

This is a Pandora's box of trust. Once you open it, you have to trust in perpetuity.


How do we know the database itself does not have any false positives?


Because it has been vetted by trustworthy FBI agents who don't make any mistakes.


Even if that were the case, this will now make it possible to "swat" or target someone by using spyware to send pics to someone's phone. Obviously not many people have access to this type of spyware (think NSO Group's Pegasus), but if law enforcement really wanted to target someone this way they would be able to do so.

For everyone else getting access to the phone and manually loading the pic would work.

Additionally it's possible to fool AI: https://slazebni.cs.illinois.edu/fall18/lec12_adversarial.pd...


Even assuming this is true, this is just the 'if you're not doing anything wrong, you have nothing to worry about' argument, which has been shown to have numerous flaws and counterarguments.


Explain to me how photos get into the NCMEC database to begin with


I've absolutely no knowledge of how they operate, but it occurs to me that there would be at least two very obvious avenues:

1) During the course of investigation an officer infiltrates a CSAM sharing ring and/or poses as a customer for CSAM. Material is shared with the officer as it would be to an actual consumer of CSAM.

2) When someone is charged with child abuse, possession of child porn, etc, their physical and electronic lives will be methodically and forensically searched for CSAM material. They will likely find material they already know about, but potentially uncover new material and/or new social networks.

Any material acquired would need to be analysed and classified for the purpose of effective prosecution. My understanding is (from other comments made by people on other websites) that images in the NCMEC database are tagged based on the severity of their content and that Apple is only scanning for the most extreme "A1" material.

I wasn't sure what A1 meant so I googled it. According to this[0] PowerPoint presentation, page 22:

  A = prepubescent minor
  B = pubescent minor
  1 = sex act
  2 = "lascivious exhibition"
If you want to ruin your day, the PDF provides very specific—depressingly, grossly specific—definitions for the above.

[0] https://www.prosecutingattorneys.org/wp-content/uploads/Pres...


3) No one really knows and since the content is illegal to possess no one can audit it either.


I think most people don't upload pictures of their kids taking a bath to Facebook? But they more than likely store such pictures on their phones/laptops.


Sure, but none of those images will be a hash match to any material in NCMEC databases.


It's entirely possible for one innocuous picture to look like an illegal picture in that database, and if they do look similar, they'll have similar perceptual hashes. That's the point of perceptual hashing.


Furthermore - and the extent of this was eye opening to me and I'm not the most naive person:

What for parents is "children playing in the pool in the garden - shared on YouTube so grandparents could see it"

- is something that is collected into playlists and shared with "interesting" timestamps in certain circles. There was a story here on HN a couple of years ago about a (for us normal people) totally innocent video reaching a million views or something on YouTube because of this sick and ugly thing.

But now we must ask ourselves: is that video CP?

No, for everyone else.

Yes for these sick bastards.

How do you classify that?

If it is classified, what happens to the parents when they switch from Android to iPhone and backup the photos and videos to "summer memories 2017" in iCloud?


I agree that "children playing in the pool in the garden" can function as CP when viewed by a sick bastard. But just because material can function as CP doesn't turn it into CSAM. It would never be classified under any of the formal CSAM categories (A1, A2, B1, B2) unless the video included lascivious exhibition, e.g. focus on genitals, sexual touching or posing, or an aroused adult.

The definitions are horrifyingly, depressingly, tragically very clear. I did not enjoy reading them.


OK, thanks for the update.

I think we more or less agree.


'Similar' hashes wouldn't do anything; the system hashes perceptually similar images (cropped, resized etc.) to the same hash.


Not all transformations result in the same hash, which is why fuzzy matching is necessary. Some perceptual hashing techniques try to account for common transformations like rotation and scaling, but if you change an image slightly, yet enough, the hashes will be different but still similar.


Apple's system only concerns exact hash matches, and is built so transformed images hash to the same number.


> and is built so transformed images hash to the same number.

That is the goal of all perceptual hashing algorithms.

> Apple's system only concerns exact hash matches

Then it is almost useless. All someone would need to do to evade the system is make adjustments to illegal images that are visually minor but still enough to shift the resulting hash away from the original.

In reality, perceptual hashing systems use fuzzy matching on hashes by using a metric like hamming distance to calculate differences between two or more hashes.
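
As a tiny illustration of the distinction being argued here (a generic sketch, not Apple's implementation; the parameter name is made up):

  def matches(h, db_hashes, max_distance=0):
      # max_distance=0 is exact hash equality; a larger value tolerates a few
      # flipped bits, i.e. the "fuzzy" Hamming-distance matching described above.
      return any(bin(h ^ known).count("1") <= max_distance for known in db_hashes)

  print(matches(0b1011_0010, [0b1011_0011]))                  # False: exact match only
  print(matches(0b1011_0010, [0b1011_0011], max_distance=2))  # True: within 1 bit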


Or here’s an idea: don’t import CSAM into your camera roll in the first place? This is what I don’t get. I’ve never felt compelled to import regular porn into my camera roll–and that stuff is legal. Who the hell is going to be doing that with stuff they know will land them in prison?

I can believe that some people might be stupid enough to believe that a private Facebook group was secure. But who the hell co-mingles their deepest darkest dirtiest secrets with pictures of their family and last night’s dinner?


> Then it is almost useless.

I'm sure you're in a prime position to judge that.

> In reality, perceptual hashing systems use fuzzy matching on hashes by using a metric like hamming distance to calculate differences between two or more hashes.

Again, Apple's NeuralHash purposefully doesn't work like that [0].

[0] https://twitter.com/jonathanmayer/status/1423370142411476993


What if they decide next year to use AI to automatically detect potential violating images?

I hope you won't throw the "slippery slope is a fallacy" fallacy at me.


Slippery slope isn't always a fallacy, but it is the way most people use it. Most people just define the slope however they want to suit their argument. Let me explain.

Does this change to the software code represent a slippery slope of motive or opportunity? Many here have said yes, but in my opinion, no. When software can update itself, every single update is an opportunity for the software to betray you. That risk is already high; the risk profile doesn't increase because a particular change feels slippery slopey to you.

As soon as any closed-source software implements automatic software updates, you're always one malicious update away from the system betraying you. Interim steps are unnecessary. Whether it's Chrome, or Firefox, or Windows, or Android. Heck, even Ubuntu. Any of them could betray you at any time. Potentially trashing their reputation in the process, but that's a mere technicality.

Therefore the slippery slope is the wrong metaphor. The correct metaphor is trust. Does this change lower my trust in Apple? Me personally, no. If anything, Apple's transparency has increased my trust in them. It gives me confidence that Apple won't use the fear of bad PR as an excuse to conceal serious things like this.


The reason that "slippery slope" is almost never a fallacy is that people with intentions to perform actions that are outside the overton window[0] of acceptable behavior will first do a thing that lies just on the edge of the overton window.

It is currently considered completely unacceptable for companies to scan the data on the user's own disk.

If someone wanted to start doing that, they would have to create a plausible and convincing excuse.

Once people accept that excuse and time passes, they can slowly "expand" the territory of this excuse (push the overton window further).

"Protecting the children" is exactly this kind of plausible and convincing excuse.

It needs not be the case that Apple specifically wants to scan user data so they came up with this excuse.

It's simply that, once scanning users data "for the good of humanity" becomes acceptable, _some_ malicious actor will push the overton window further and further.

Imagine in 10 years from now, all operating systems will scan users data to detect potential child porn material. It might even become required by law, or just by "social pressure". Just like it is now almost required by "social pressure" that social media platforms censor discourse and information.

It's then very easy to expand this capability to also detect illegally downloaded music, or adult videos, or whatever deemed unacceptable.

[0]: https://en.wikipedia.org/wiki/Overton_window



I don't agree that what Apple is doing is sufficiently distinct from what Google has already been doing to represent an Overton window shift. To most people, they will see that the corpus of images being targeted is functionally identical to what Google has been doing for Android users for many years and therefore the difference is mere implementation detail. A distinction without a difference. Nothing more.

Highly technical, libertarian-minded people see a big distinction between on-device and off-device scanning. But such people have always eschewed the "managed experience" model of Apple devices.


I have tried to play around with perceptual hashing on an image set consisting of very similar images (flowers) and there were clashes all the time.


> Apple requires a specific threshold of matches before a report is triggered

True, but the wording of that condition was very vague... the threshold could be 1.


Thank you for the paper.

>"...report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts."

OK. They're not trying to tag and bag your images for 'abuse content'.

If you collect your child abuse porn on iCloud, we're going to report you?


So we just believe Apple. We can call it security by infinite-trust. Beats even security by obscurity!


To be clear, it's 1 in 1 trillion per account. 1 in 1 trillion per photo would potentially be a more realistic risk, since some people take tens of thousands of photos.


except for the time that it's 100% for an account...

But the truly sad thing is that there are trillions of instances, photos, moments that will never even see the light of day. And plenty of human children get abused, raped, and murdered every single day.

Child abuse should be a capital crime.


Right, this tech is primarily about fingerprinting existing child porn, hashing that, and trying to prevent dissemination and what they might call "initiation" into trading child porn for the casuals.

Like most privacy invasions these days, it's casting a wide net to put a dent in a problem that will inevitably just route around it, and soon enough it'll just be turned into a copyright cash grab.


I worked with perceptual hashes. The false positive rate makes them unusable. Even when combined with AI they do not work well. They can get you a list of possible matches, but only a human can really tell if the image is the image. Then you have the problem of images not making it onto the list. That was in 2016. Maybe things have changed.


No. The CSAM (Child Sexual Abuse Material) scanning is comparing hashes of photos about to be uploaded to iCloud against a specific set of images at NCMEC (National Center for Missing and Exploited Children) which are specific to missing and exploited children. It is not machine learning models looking for nudes or similar. It is not a generalized screening. If enough matched images are found, the images are flagged for manual verification. If the manual verification confirms that the images match specific images in the NCMEC database, law enforcement is informed.

Be aware that almost all cloud providers screen photos. Facebook reported 20 million images in 2020, Google reported half a million. Dropbox, Box, and many, many others report images. See https://www.missingkids.org/content/dam/missingkids/gethelp/... to see a complete list of companies that screen and report images.

The other thing Apple announced which is completely separate from the CSAM photo scanning is additional parental controls for the Messages app. If a parent opts in for their under-13 children, a machine learning model will look for inappropriate material and warn the child prior to showing the image. The child is also told that their parent will be flagged if the child looks at it anyway. For 13-18 year olds whose parents opted in, the teen is warned first about the content. If the teen continues past the warning the image is shown and no further action is taken. Parents are not flagged for children 13 and over. As I said, this is a parental control for pre-adult kids. It requires opt-in from the parents and has no law enforcement implications.


The correct answer is a well-qualified "maybe." The hashes are fuzzy and AI-generated (a neural network derivation rather than an exact digest), so it's impossible to know what will cause a false positive.


I am not sure the right questions are being asked.

1. Who is adding these photos to NCMEC?
2. How often are these photos added?
3. How many people have access to these photos - both adding and viewing?

Everyone is focused on Apple and no one is looking at NCMEC. If I wanted to plant a Trojan horse, I would point everyone towards Apple and perform all of the dirty work on the NCMEC end of things.


Exactly. An unknown mechanism adds hashes to an NGO's database subject to exactly what conditions?

This initiative makes me extremely leery of black boxes, to the extent that any algorithm between subject and accusation had damned well better be explainable outside the algorithm; else I as a jury member am bound to render a "not guilty" verdict.


Their system needs real images in the training phase, because they are building the system which produces the hashes. There must be someone at Apple to confirm that the correct photos are indeed being flagged, at least in the beginning.

We don't really know how adding new hashes works. Does NCMEC have the whole new algorithm and just drag-and-drop new images? Hopefully it's not like that.


Welcome to "rejected in voir dire".


Comparing hashes reminds me of this announcement from a few years ago that Google had produced a SHA1 collision: https://security.googleblog.com/2017/02/announcing-first-sha...

Can you imagine the chaos of a successful collision matching some explicit material being sent as a prank or targeted attack?


No chaos. The photos would be reported, reviewers would say "that's weird" since the false positive was obviously harmless and the industry would eventually switch to a different hash method while ignoring the false positives generated by the collision. If there were a flood of false positive images being produced the agencies would work faster to come up with a new solution, not perform mass arrests.


Right. Kind of like how copyright violations on YouTube are double checked and the humans say “that’s weird” and deny the request. Or maybe they will just report everything and let the law work everything out. If they’re innocent they have nothing to worry about, right?


I don't understand how most people are still willing to "trust the system" when it's evident that this type of mechanism keeps failing time and time again.

And in the example you gave we are talking about Google, not some early-stage understaffed startup.


I thought the images were encrypted after the hashing was done locally. Reviewers can still view them?


Any local match causes a “safety voucher” to be uploaded along with the encrypted image. The voucher contains a (fragment of a) decryption key. If that fragment is combined with enough of its buddies from other vouchers, Apple gets to decrypt the image.


More precisely, once they have sufficient vouchers, Apple gets to decrypt the contents of the safety vouchers, which contains a low resolution, grayscale copy of the original image. Safety vouchers don't give Apple access to your photo library.


It's worth reading this, which is basically the only good reporting I've seen on this topic: https://daringfireball.net/2021/08/apple_child_safety_initia...

There are legitimate things to be concerned about, but 99% of internet discussion on this topic is junk.


John Gruber is biased because his brand is closely tied to Apple’s brand. Ben Thompson wrote a better review on the topic: https://stratechery.com/2021/apples-mistake/

There’s also the Op-Ed by Matthew Green and Alex Stamos, cyber security researchers: https://www.nytimes.com/2021/08/11/opinion/apple-iphones-pri...


They have a podcast together called Dithering which is pretty good (but not free) - they're friends.

I think John's article is better than Ben's, but they're both worth reading.

Ben takes the view that unencrypted cloud is the better tradeoff - I'm not sure I agree. I'd rather have my stuff e2ee in the cloud. If the legal requirements around CSAM are the blocker then Apple's approach may be a way to thread the needle to get the best of both worlds.


For me it's the worst of both worlds: e2ee has no meaning if the ends are permanently compromised, and there's no local vs cloud separation anymore which you can use to delineate what is under your own control; nothing's under your control.


> there's no local vs cloud separation anymore which you can use to delineate what is under your own control

If uploading a perceptual hash of a photo breaks 'local vs cloud separation', then uploading the whole photo in the clear surely does the same.


The end isn't really compromised with their described implementation.

The only thing sent is the hash and signature and that's only if there are enough matches to pass some threshold.

I don't really view that as 'permanently compromised' - at least not in any way more serious than Apple's current capabilities to compromise a device.

I think e2ee still has meaning here - it'd prevent Apple from being able to see your photo content on their servers.

This is a nuanced issue, I don't think there's an obviously better answer and both outcomes have different risks. [0]

[0]: https://www.lesswrong.com/posts/PeSzc9JTBxhaYRp9b/policy-deb...


Yeah, and as argued in one of the blog posts, that's just a policy decision, not a capability decision, and it's malleable to authoritarian countries' requests.


Yes - and I agree that that's where the risk lies.

Though I'd argue the risk has kind of always lain there, given companies can ship updates to phones. You could maybe argue it'd be harder to legally compel them to do so, but I'm not sure there's much to that.

The modern 'megacorp' centralized software and distribution we have is dependent on policy for the most part.


That's the problem I had with Ben's post - it's always been policy since Apple controls and distributes iOS.


Yeah - the sense I got was he just liked the cleaner cut policy of a hard stop at the phone itself (and he was cool with the tradeoff of unencrypted content on the server).

It does have some advantages - it's easier to argue (see: the disaster that is most of the commentary on this issue).

It also could in theory be easier to argue in court. In the San Bernardino case - it's easier for Apple to decline to assist if assisting requires them to build functionality rather than just grant access.

If the hash detection functionality already exists and a government demands Apple use it for something other than CSAM it may be harder for them to refuse since they can no longer make the argument that they can't currently do it (and can't be compelled to build it).

That said - I think this is mostly just policy all the way down.


I have no idea if this feature existing makes it harder or easier for Apple to refuse. Based on how the feature works, it would still require a special build of iOS just like what the FBI wanted in order to remove the unlock count years ago.

Given the amount of nuance here, I also think it's important to differentiate between the FBI showing up and asking for something and the government passing laws forcing encryption backdoors. The former is what Apple has fought to date, because they can. The latter is much harder to fight, and Apple will most likely have to comply regardless of what features already exist or not (see China/iCloud). The latter is also the most dangerous, since politicians rarely understand technology well enough to do something sensible. It remains to be seen, but Apple could be trying to get in front of long-term law changes with an alternate solution.


Yup, we can agree on that.


> The only thing sent is the hash and signature and that's if there are enough matches to pass some threshold.

Not true. If there are enough matches, someone at Apple will have a look at your pictures. Even if they are innocent.


I think the one thumbnail of the matching hash? Just to make sure there isn't a false positive (they argue one in a trillion, but I don't know if I buy that).

That's only if there are enough matches to trigger the threshold in the first place; otherwise nothing is sent (even if there are matches below that threshold).

Alternatively this is running on all unencrypted photos you have in iCloud and all matches are known immediately. Is that preferable?


> I think the one thumbnail of the matching hash?

So it is sending pictures? That makes your argument quite a bit weaker.

> Is that preferable?

Nope, E2EE without compromises is preferable.


I think the thumbnail is only sent when the threshold is passed and there's a hash match. The reason for that is an extra check to make sure there is no false positive based on the hash match (they claim one in a trillion, but even ignoring that it's probably pretty rare, and strictly better than having everything unencrypted on iCloud anyway).

> Nope, E2EE without compromises is preferable.

Well that's not an option on offer and even that has real tradeoffs - it would result in less CSAM getting detected. Maybe you think that's the acceptable tradeoff, but unless government legislatures also think so it doesn't really matter.

This isn't the clipper chip, this is more about enabling more security and more encryption by default but still handling CSAM.

The CSAM issue is a real problem: https://www.nytimes.com/interactive/2019/09/28/us/child-sex-...


>Well that's not an option on offer and even that has real tradeoffs - it would result in less CSAM getting detected. Maybe you think that's the acceptable tradeoff, but unless government legislatures also think so it doesn't really matter.

It should and can be an option. Who cares what they offer us. Do it yourself.


That's just a separate topic.

If you do it yourself none of this policy stuff matters.


> So it is sending pictures? That makes your argument quite a bit weaker.

Important to note this is only run on images going to iCloud, so they are already being sent.


I really don't understand how you're arguing as if you don't see the bigger picture. Is this a subtle troll?

They are now scanning on the device. Regardless of how limited it is in its current capabilities, those capabilities are only prevented from being expanded by Apple's current policies. The policies enacted by the next incoming exec who isn't beholden to the promises of the previous can easily erode whatever 'guarantees' we've been given when they're being pressured for KPIs or impact or government requests or promotion season or whatever. This has happened time and again. It's been documented.

I really am at a loss how you can even attempt to be fair to Apple. This is a black and white issue. They need to keep scanning for crimes off our devices.

So to answer your question: yes, it is preferable to have them be able to scan all of the unencrypted photos on iCloud. We can encrypt things beforehand if need be. It is lunacy to have crime-detecting software on the device in any fashion because it opens up the possibility for them to do more. The people in positions to ask for these things always want more information, more control. Always.

The above reads like conspiracy theory but over the past couple of decades it has been proven correct. It's honestly infuriating to see people defend what's going on in any way shape or form.


Frankly the distinction seems arbitrary to me.

This is a policy issue in both cases - policy can change (for the worse) in both cases.

The comparison is about unencrypted photos in iCloud or this other method that reveals less user information by running some parts of it client side (only if iCloud photos are enabled) and could allow for e2e encryption on the server.

The argument of "but they could change it to be worse!" applies to any implementation and any policy. That's why the specifics matter imo. Apple controls the OS and distribution, governments control the legislation (which is hopefully correlated with the public interest). The existing 'megacorp' model doesn't have a non-policy defense to this kind of thing so it's always an argument about policy. In this specific implementation I think the policy is fine. That may not hold if they try to use it for something else (at which point it's worth fighting against whatever that bad policy is).

Apple's good solutions to the CSAM problem (which I think thread the needle for a decent compromise) could prevent worse policy from the government later (attempts to ban encryption or require key escrow like in the 90s).

Basically what I said here: https://news.ycombinator.com/item?id=28162418

This implementation as it stands reveals less information about end users and could allow them to enable e2ee for photos on their servers - that's a better outcome than the current state (imo).


They've given you two options:

1. Encrypt everything in the cloud, but hash these items on the device and upload the hashes as well. Also notify us so we can notify law enforcement if you're doing some illegal stuff.

2. Everything is unencrypted in the cloud. No actions are taken on the device. No notifications to authorities.

With option one the sanctity of the device ownership is breached. With option two it's maintained. Maintaining that stark distinction is hugely important for what future actions can be taken in the public eye. Normalizing on device actions that work against the user must be fought at every instance they occur.

Your line of thinking is dangerous because you're ignoring the public perception of a device you own actively working against you. Apple's behavior cannot be allowed to be considered normal.


For #2 actions are taken in the cloud, authorities are contacted, and all of the photos are unencrypted.

Option #1 doesn’t really breach the ‘sanctity of device ownership’ because it only occurs if you’ve enabled iCloud photos on the device.

Option #1 seems better to me in its current implementation. I understand the fear of abusing the hash matching. I just think that’s a separate thing.


>Option #1 doesn’t really breach the ‘sanctity of device ownership’ because it only occurs if you’ve enabled iCloud photos on the device.

I'm done. You'll rationalize anything.


To be fair - I think reasonable people can disagree on this.

I don't think it's a rationalization to point out that it only occurs when the same baseline conditions are met (using the cloud). I think those constraints/specifics matter. I wouldn't be in favor of the policy if they were different (and I'm not even sure I'm in favor of it now).

My personally preferred outcome would be e2ee by default for everything without any of this, but I also understand the concerns of NCMEC and the general tradeoffs/laws around this stuff (and future regulatory risk of CSAM) - and just the general issue of reducing child sexual abuse.


I largely agree with your point fossuser.

I am also in favour of E2E by default for everything, without any device- or cloud-based scanning. However, Apple doesn't want to be caught having developed a service that enables child exploitation. Doing nothing may lead to even more invasive requirements being legally forced on them by governments, so Apple is stuck with a dilemma. Also, let's not forget that Apple should not want child exploitation to occur and therefore should also do something.

The question I have for drenvuk is how else is Apple able to prevent or detect child exploitation and the storage or distribution of content such as this on Apple's services?


They can rely on users reporting it. They are not obligated by law to do this.


For now, but if they don’t do something like this laws can change and that change could make things a lot worse for privacy than this implementation.

I suspect they’re trying to get ahead of that and solve this in the most privacy protecting way possible.


How is anyone OK with Apple employees viewing their private pictures because their algorithm fucked up?


> someone at Apple will have a look at your pictures

No, but at a "visual derivative"


A JPEG is a "visual derivative". It's also a "mathematical fingerprint" of an image.


> The end isn't really compromised with their described implementation.

They've turned your device into a dragnet for content the powers that be don't like. It could be anything. They're not telling you. And you're blindly trusting them to have your interests at heart, to never change their promise. You don't even know these people.

You seriously want to cuddle up with that?


> "They've turned your device into a dragnet for content the powers that be don't like. It could be anything. They're not telling you"

They're pretty explicitly telling us what it's for and what it's not for.

> "And you're blindly trusting them to have your interests at heart, to never change their promise. You don't even know these people."

You should probably get to work building your own phone, along with your own fab, telecoms, networks, - basically the entire stack. There's trust and policy all over the place. In a society with rule of law we depend on it. You think your phone couldn't be owned if you were important enough to be targeted?


Yes. All the fabs and stuff we have now should be devoted to implementing a surveillance state. This must happen. It cannot be any other way.

This is what you sound like. The problem here isn't the tech. It's that Big Tech has deluded society into believing privacy and personal ownership of devices doesn't exist because it would inconvenience Big Tech. Law enforcement echoes it because they were spoiled by the brief period in which they tasted ClearNet, and they don't want to return to having to investigate the old fashioned way.

Every other major industry has increasingly started doing the same thing. It is not okay. We have no right to sell out future generations' privacy. It's cowardly, selfish, and does more harm to them in the long run.


I’m not making statements about the way things ought to be, I’m being pragmatic about the way things are.

We’re trusting a lot of the stack. Apple’s policy as described is reasonable. If you distrust it because of the things they could do, that same logic applies to the entire stack.

The nature of modern software distribution is that the majority of the stuff we use from centralized corporations is governed by policy, not technical capability or controls, and you don’t get to know the details.

You don’t own the OS you use, you don’t own the important parts of your phone.

This can be different. Decentralized applications via protocols are interesting (DeFi blockchain stuff like Audius or other apps on Ethereum). If the UX can get figured out.

Urbit is interesting too - if you want to actually own your stack, use Urbit: https://media.urbit.org/whitepaper.pdf

Outside of decentralized protocols you’re ultimately just trusting policy somewhere. It seems dumb to me to arbitrarily be upset at Apple’s policy here, when the specifics are reasonable (and allow for e2ee).


> Apple’s policy as described is reasonable.

Apple is performing warrantless searches, and somehow people see it as okay because Apple is not the government (even though it functions as a state agency in this regard).


> They're pretty explicitly telling us what it's for and what it's not for.

Nobody should blindly trust Apple. As an organization, they already love secrecy and shadows--what better place to sneak in and test this kind of feature, free from employee ethics and scrutiny?

They've been cooking this up without telling anyone, which is also indicative of how above board they are. Who knows what else they're doing with this now or will do in the future.

The CIA, FBI, MI6, Mossad, FSB, CCP, et al. will use this to learn more about their targets.


One logical conclusion of systems like this is that modifying your device in any "unauthorized" way becomes suspicious because you might be trying to evade CSAM detection. So much for jail-breaking and right to repair!

I think I'd rather have the non-e2ee cloud.


I don't really buy that - you could just turn off iCloud backup and it'd avoid their current implementation.


And you think this will be the ultimate implementation?

Let the devil in, and he'll treat himself to tea and biscuits.


I think it's possible to have nuanced policy in difficult areas where some things are okay and others are not.


Did we learn nothing from the Snowden revelation?


Is it really E2EE if there is an MITM application scanning and reporting everything?????

Seems like the existence of this scanning agent by default makes it not E2EE anymore


Friends can disagree. Everyone has their own biases - good and bad. I think it’s always good to keep people’s biases in mind when reading their work.


I agree, but it doesn't necessarily mean what they say is wrong.

I like that they disagree - the issue doesn't have an obviously correct answer.


Even HN reporting / article linking / comments have been surprisingly low quality, and seem to fulminate and declaim with little interesting conversation and tons of sweeping assertions.

Linked articles and comments have said Apple's brand is now destroyed, and that Apple is somehow committing child porn felonies with this (the logical jumps and twisting required to get to these claims are very far from a strong, plausible interpretation).

How do you scan for CSAM in an E2EE system is the basic question Apple seems to be trying to solve for.

I'd be more worried about the encrypted hash DB being unlockable - is it clear this DOES NOT have anything that could be recreated into an image? I'd actually prefer NOT to have E2EE and have apple scan stuff server side, and keep DB there.


From https://www.hackerfactor.com/blog/index.php?/archives/929-On...

> The laws related to CSAM are very explicit. 18 U.S. Code § 2252 states that knowingly transferring CSAM material is a felony. (The only exception, in 2258A, is when it is reported to NCMEC.) In this case, Apple has a very strong reason to believe they are transferring CSAM material, and they are sending it to Apple -- not NCMEC.

> It does not matter that Apple will then check it and forward it to NCMEC. 18 U.S.C. § 2258A is specific: the data can only be sent to NCMEC. (With 2258A, it is illegal for a service provider to turn over CP photos to the police or the FBI; you can only send it to NCMEC. Then NCMEC will contact the police or FBI.) What Apple has detailed is the intentional distribution (to Apple), collection (at Apple), and access (viewing at Apple) of material that they strongly have reason to believe is CSAM. As it was explained to me by my attorney, that is a felony.

Apple is going to commit child porn felonies according to US law this way. This claim seems actually quite irrefutable.


Ahh - an "irrefutable" claim that apple is committing child porn felonies.

This is sort of what I mean and a perfect example.

People imagine that apple hasn't talked to the actual folks in charge, NCMEC.

People seem to imagine apple doesn't have lawyers?

People go to the most sensationalist least good faith conclusion.

Most mod systems at scale are using similar approaches. Facebook is sending tens of MILLIONS of images to NCMEC; these get flagged by users and/or systems, and in most cases facebook then copies them, checks them through a moderation queue and submits to NCMEC.

Reddit uses sexualization-of-minors flags. In almost all cases, even though folks may have strong reasons to believe some of this flagged content is CSAM, it still gets a manual look. Once they know, they act appropriately.

So the logic of this claim about Apple's late-to-the-party arrival at CSAM scanning is weird.

We are going to find out that instead of trying to charge apple with some kind of child porn charges, NCMEC and politicians are going to be THANKING apple, and may start requiring others with E2EE ideas to follow a similar approach.


Sorry under which of these other moderation regimes does the organisation in question transmit CSAM from a client device to their own servers? To my knowledge Apple is the only one doing so.


Almost every single one.

Facebook checks for potential CSAM when you upload from your client device (sometimes an iphone) to their servers or after it's on their system and a user flags it.

Instagram also check once you upload.

These are all transmissions.

Apple checks if you upload. If you don't upload or attempt to upload to their servers, no check.

All these are to flag potential CSAM. Some do more - nudity in general, harmful content filters etc. Some is auto blocked, some is forwarded for review and report etc.

In almost all cases flagging is part of or connected to uploading to a third parties servers. The flagging for CSAM is not a conviction - some do and some don't do a human review before submission. Most situations where folks can use flagging to hide content get a human review at some level to avoid abuse of the flagging system itself.


> Facebook checks for potential CSAM when you upload from your client device (sometimes an iphone) to their servers or after it's on their system and a user flags it.

That's not the issue according to the linked source.

Instagram transmits all photos and assumes they're not CSAM until flagged - that's safe content. Apple transmits ONLY suspected CSAM - that's the no-no, because they're assuming it is CSAM and you can't transmit CSAM.

You can't knowingly transmit CSAM. Transmitting a photo pre-scan is safe (if you assume any photo is not CSAM); transmitting post-scan is dangerous if you filter for CSAM.


Huh?

Apple uploads all photos to iCloud photos (just as google does). This includes CSAM and not CSAM.

It keeps a counter going of how much stuff is getting flagged as possible CSAM. They don't even get an alert about anything until you hit some thresholds etc. And no one has reviewed anything yet at all, the system is flagging things up as other systems do.

Are you sure (legally) that one can't review flags from a moderation system? That is routine currently. No one is knowingly doing anything. Their system discusses being alerted to possible counts of CSAM.

Is your goal that apple go straight to child porn reports automatically with no human involvement at all? At scale (billions of photos) that's going to be a fair number of false positives with potentially very serious consequences.

The current approach is that images are flagged in various ways, folks look at them (yes, a large number have problems), then next steps are taken. But the flags are treated as possible CSAM.

Please look into all the false positives in youtube screening before you jump from a flag => sure thing. These databases and systems are not perfect.


> Are you sure (legally)

> Is your goal that apple

I'm not a lawyer, and I want apple to do nothing, especially not scan my device.

I'm saying the linked article in discussion says you can't transmit content you KNOW (or suspect) is CSAM. You don't assume that all your customers' content is CSAM, but post-scan, you should assume.

The only legal way to transmit (according to article) is if it's to the government authorities.

I don't know the legal view on the "false positive" suspicion vs legality of transmitting. That's a gamble it seems. I don't have a further opinion on it since IANAL and this is very legal grey area.


Apple is very clear that they don't know anything when photos are uploaded. The system does not even begin to tell them that some may have CSAM until it's had like 30 or so matches. The jump from this type of system (variations are used by everyone else) to some kind of child porn charges is such a reach it's really mind boggling. Especially since the very administrative entities involved are supporting it.

A strong claim (apple committing CSAM felonies) should be supported by reasonably strong support.

Here we have a blog post where they've talked to someone ELSE who (anonymously) has reached some type of legal conclusion. If you follow the QAnon claims in this area (there are lots) they follow a somewhat similar approach - someone heard from someone that something someone did is X crime. It's a weak basis for these legal conclusions.


Apple is attaching a ticket to images as the user uploads to iCloud. If enough of these tickets match CSAM and allow an unlock key to be built, they will be unlocked and checked. It's still the user who has turned on iCloud and uploaded the images.


Correct.

The one odd thing I don't get: it would be a lot EASIER to just scan everything when it's in the cloud itself.

Why go to this trouble to avoid looking at users' photos in the cloud, set these thresholds, etc.? You'd only need to scan on device if for some reason you blocked your own ability to scan in the cloud (ie, for E2E photos - which I don't think users actually want).


Something like all of iCloud getting E2EE would be a big feature and likely only be announced at an event. I agree, if the CSAM on device scanning isn’t followed on by something else, it seems like a lot of work and PR flak for little gain.


Right - the system is actually quite complex to blind apple to something they currently have sitting with their own keys on it on their own servers. I mean, they can (and maybe will) just scan directly?


That’s amusing, but your source is completely wrong about (1) What 18 USC § 2252 says in general (notably, it leaves out the “knowingly” requirement, which is critical given the weight being given to post-auto-flag, pre-verification transfer), (2) What exceptions are in § 2252, and (3) the entire reference to § 2258A, which is a separate reporting requirement, not an exception to § 2252. Really, one should read all of the chapter those sections are part of, but the whole argument is based on either fantasy or distortion of the text.

https://www.law.cornell.edu/uscode/text/18/part-I/chapter-11...


> This claim seems actually quite irrefutable.

I don't think so.

Apple transfers the images to iCloud, yes, but before the threshold of flagged photos is reached, Apple doesn't know that there might be CSAM material among them. When the threshold is exceeded and Apple learns about the potential CSAM, the images have been transferred already. But then Apple does not transfer them any further, and a human reviews not the images themselves, but a "visual derivative" that was in a secure envelope (which by construction can only be unlocked once the threshold is exceeded).


It’s good to know that if I use the “section” glyph in an otherwise completely false analysis born of extending my experience past where I’m competent, readers will quickly repeat whatever it is I said as factual and “quite irrefutable”. The power of one blog post is quite something. You’re the third person I’ve seen quote that very post and go “welp, sure is a smoking gun” despite it being completely, utterly, demonstrably false. It’s make believe. That entire blog post is so incorrect that it’s barely useful as toilet paper, friend.

If the author were correct Facebook and others would have been charged with felonies already. Verification happens in good faith and requires transmission despite the exact letter of the law. The people doing it regularly talk to NCMEC. There are evidentiary and chain of custody procedures to follow that also transmit the material. Guess we should charge computers with felonies!

It’s also fucking hysterical that the armchairs who didn’t know what CSAM stood for a week ago think that one of the most litigiously sensitive systems ever built by a corporation somehow missed criminal legal liability. You know, it’s pretty easy to overlook criminal liability when you’re building a system that is used to establish criminal liability. Totally passes the smell test. Irrefutable indeed.

You guys are off your rockers and should move on to another topic. Seriously. Every day this is discussed is another harsh reminder of (a) how few fucks the industry gives about abuse of children and (b) how everyone here digests blogs and considers themselves authoritative on horrors they’ve never once experienced. Every day this is argued on HN, particularly last night when someone said the privacy situation is “the actual abuse” and “much worse” than the rape of children, is another day I’m ashamed to have chosen this profession and work alongside you. This community welcomes the people who have built surveillance systems for every consumer activity imaginable and kids getting molested is where you draw the privacy line, huh? Can’t look for that? I’m genuinely out. Keep your industry. I’d rather manage a Taco Bell if this is the perspective of this industry, because brother, I’ve seen what they’re trying to tackle and I’m still in therapy.

There is a hypocrisy at the core of the “privacy” argument here that is fundamentally indicting not only this community, but everyone discussing this up to and including EFF. I’ve never been so disappointed in people who I used to look up to and think of as good, smart folks.

Source: I’ve built CSAM handling systems for a FAANG and written the policy for using it. The author has misinterpreted the very law he cites and overlooked two subparagraphs. But that isn’t what you want to hear.


Thanks for coming on here.

You WILL be downvoted.

I'm a parent, and it's also crazy to me how THIS of all things is where folks are going crazy over privacy. Literally, they will build something to track your every mouse move, scan every photo, log and sell all your browsing history and TV watching history (including big ISPs and mfgs).

And this is the thing folks get outraged about? Apple can already scan your stuff on their servers (and should!).

Instead of apple being charged criminally, other companies that do any kind of E2E without this may be required to do something like this. That's my prediction.

We will see if these "irrefutable" claims amount to anything like a child porn charge against apple.


Most of us who really care about the central hypocrisy have been banging our heads bloody about all those other surveillance issues. We're so damned concerned because this is a textbook case of cramming through another brick in the wall of deconstructed privacy, now and into the future. There are plenty of terrible arguments, but the good ones that people keep trying to handwave or ignore are the more pressing ones.


> I'm a parent [...] Apple can already scan your stuff on their servers (and should!).

It's crazy to me how you fail to see that instead of everyone playing cyber cowboys and child predator indians the focus for preventing child abuse should be in real life. Criminalizing content does absolutely nothing for those kids who are abused by someone close to them. And unfortunately the vast majority (~75%) of cases happen like that.

If the stats are even remotely right it's unfathomable how bad prevention and deterrence is. (Over the course of their lifetime, 28% of U.S. youth ages 14 to 17 had been sexually victimized -- https://victimsofcrime.org/child-sexual-abuse-statistics/ )

This shows how bad our aggregated priorities are. War on X so far only made X worse. ¯\_(ツ)_/¯


Actually - sharing and having the photos out there is further and active victimization of the victims and does bother them. So shutting that down is a good use of resources.


Do they get a nice feel good card from NCMEC that states that they have managed to delete this many copies that year? /s

I don't doubt for a minute that victims continue to suffer from the knowledge that there's a lot of traumatizing content on the Internet, and that various services integrating with clearinghouses help.

I also don't doubt that Apple is at least semi-competent and could pull this off in an okay-ish way. But all the goodwill and clout that these megacorps have would have been better spent on advocating for policies that prevent child abuse. (Neglect, physical and sexual.)


Don't you think that Apple has its own lawyers?


Don't you think that telling people now that there will be a "check" at Apple before things get reported to NCMEC could be a PR lie to keep people calm?

They can easily say afterwards that they're "frankly" required to directly report any suspicion to enforcement agencies because "that's the law", and that they didn't know before because that was an oversight.

That would be just the usual PR strategy to "sell" something people don't like: selling it in small batches works best. (If the batches are small enough people often don't even realize what the whole picture looks like. Salami tactics are a tried tool for something like that; used for example in politics day to day).


IMHO, what Apple is doing is not _knowingly_ transferring CSAM material. Very strong reason to believe is not the same as knowing. Of course it's up to courts to decide and IANAL.


Go ahead and click on some google results after searching for child porn and see if that defence holds up.


Can you elaborate how your reply relates to Apple's case and my comment?


Apple isn't looking at the actual image, but a derivative. Presumably their lawyers think this will be sufficient to shield them from accusations of possessing child porn.


"But look, I've re-compressed it with JPEG 80%. It's not THAT picture!".

It would be interesting to hear what a court has to say if a child porn consumer tried to defend themselves with this "argument".


Love 'em or hate 'em, it is hard to believe Apple's lawyers haven't very carefully figured out what kind of derivative image will be useful to catch false positives but not also itself illegal CP. I assume they have in fact had detailed conversations on this exact issue with NCMEC.


Firstly, the NCMEC doesn't make the laws. They can't therefore give any exceptional allowance to Apple.

Secondly, any derivatives that are clear enough to enable a definitive judgment by an Apple employee on whether something's CP or not would be subject to my argument above. Also, just collecting such material is a felony.

I don't see any way around that. Only that promising some checks before stuff gets reported for real is just a PR move to smooth over the first wave of pushback. PR promises aren't truly binding…


Another reminder that many parts of HN have their own biases; they're just different than the biases found on other networks.

Instead of exclusively focusing on the authoritarian slippery slope like it's inevitable, it's worth wondering first: why do the major tech companies show no intention of giving up the server-side PhotoDNA scanning that has already existed for over a decade? CSAM is still considered illegal by half of all the countries in the entire world, for reasons many consider justifiable.

The point of all the detection is so that Apple isn't found liable for hosting CSAM and consequently implicated with financial and legal consequences themselves. And beyond just the realm of law, it's reputational suicide to be denounced as a "safe haven for pedophiles" if it's not possible for law enforcement to tell if CSAM is being stored on third-party servers. Apple was not the best actor to look towards if absolute privacy was one's goal to begin with, because the requests of law enforcement are both reasonable enough to the public and intertwined with regulation from the higher powers anyway. It's the nature of public sentiment surrounding this issue.

Because a third party insisting that user-hosted content is completely impervious to outside actors also means that it is possible for users to hide CSAM from law enforcement using the same service, thus making the service criminally liable for damages under many legal jurisdictions, I was surprised that this debate didn't happen earlier (to the extent it's taking place, at least). The two principles seem fundamentally incompatible.


The encryption on the hash DB has very little to do with recreating images. It is pretty trivial to make sure that recreation is mathematically impossible (there are just not enough bytes, and hash collisions mean there is an essentially unlimited number of false positives).

My own guess is that the encryption is there so that people won't have access to an up-to-date database to test against. People who want to intentionally create false positives could abuse it, and sites that distribute images could alter images to automatically bypass the check. There is also always the "risk" that some security researcher may look at the database and find false positives from the original source and make bad press, as they have done with block lists (who can forget the bonsai tree website that got classified as child porn).
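For the "not enough bytes" point, a quick back-of-the-envelope (the hash size and thumbnail size are assumptions, just to show the orders of magnitude):

  # Even a tiny 64x64 8-bit grayscale thumbnail has vastly more possible
  # values than a 256-bit hash, so each hash value corresponds to an
  # astronomical number of possible images - no way back to "the" image.
  images = 256 ** (64 * 64)   # ~10^9864 possible thumbnails
  hashes = 2 ** 256           # ~10^77 possible hash values
  print(images // hashes)     # images per hash value: still astronomical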


You don’t. That’s the entire point of E2EE, the data transferred is private between you and the recipient party.


99% of internet discussion on this topic is junk.

And how is that?

It seems like the Gruber article follows a common formula for justifying controversial approaches. First, "most of what you hear is junk", then "here's a bunch of technical points everyone gets wrong" (where the wrongness might not change the basic situation), then go over the non-controversial parts, and then finally get to the controversial parts and give the standard "think of the children" explanation. But if you've cleared away all other discussion of the situation, you might make these apologetics sound like new insight.

Is Apple "scanning people's photos"? Basically yes? They're doing it with signatures but that's how any mass surveillance would work. They promise to do this only with CSAM but they previously promised to not scan your phone's data at all.


But some of those technical points are important. Parent comment was concerned that photos of their own kids will get them in trouble - it appears the system was designed explicitly to prevent that.


The Daring Fireball article actually is a little deceptive here. It goes over a bunch of reasons why parents won't get in trouble and gives a further couched justification of the fingerprinting scheme.

The question is whether an ordinary baby photo is likely to collide with one of the CSAM hashes Apple will be scanning for. I don't think Apple can give a definite no here (Edit: how could they guarantee that a system that finds disguised/distorted CSAM won't tag a random baby picture with a similar appearance? And given such a collision, the picture might be looked at by Apple and maybe law enforcement).

Separately, Apple does promise only to scan things going to iCloud for now. But their credibility no longer appears high given they're suddenly scanning users' photos on the users' own machines.

Edited for clarity.


> how could they guarantee that a system that finds disguised/distorted CSAM won't tag a random baby picture with a similar appearance?

Cannot guarantee, but by choosing a sufficiently high threshold, you can make the probability of that happening arbitrarily small. And then you have human review.

> And given such a collision, the picture might be looked at by Apple and maybe law enforcement

No, not "the picture", but a "visual derivative".


> "visual derivative".

Do you have any idea what that means? Because I certainly don't - how could you possibly identify whether an image is CSAM without looking at something which is essentially the same image?

What is a visual derivative? Take that algorithm and run it over some normal images and show me what they look like.

All of this is being aggressively talked around because everyone knows it's not going to stand up to any reasonable scrutiny (e.g. there are plenty of big image datasets out there - does Apple's implementation flag on any of those? Who knows - they're not going to refer to anything specific about how they got "1 in a trillion").


> Do you have any idea what that means?

No, I don't know what that means. Presumably it is some sort of thumbnail, maybe color inverted or something.

> does Apple's implementation flag on any of those? Who knows - they're not going to refer to anything specific about how they got "1 in a trillion"

I assume they've tested NeuralHash on big datasets of innocuous pictures, and gotten some sort of bound on the probability of false positives p, and then chosen N such that p^N << 10^-12, and furthermore imposed some condition on the "distance" between offending images (to ensure some semblance of independence). At least that's what I'd do after thinking about the problem for a minute.
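For example, the crude version of that calculation might look like this (the per-photo false-match rate and library size are made-up numbers, and the union bound assumes rough independence between matches, which is exactly the shaky part):

  from math import comb

  p = 1e-6        # assumed per-photo false-match rate against the DB (made up)
  M = 100_000     # photos in one account's library (made up)

  # Probability of at least N distinct false matches in one library,
  # bounded by C(M, N) * p^N. Pick the smallest N under the 1e-12 target.
  for N in range(1, 40):
      bound = comb(M, N) * p ** N
      if bound < 1e-12:
          print(f"threshold N = {N}, bound ~ {bound:.2e}")
          break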


> I assume they've tested NeuralHash on big datasets of innocuous pictures, and gotten some sort of bound on the probability of false positives p, and then chosen N such that p^N << 10^-12

What's interesting about this faulty argument is that it hinges on an assumption that "innocuous pictures" is a well defined space that you can use for testing and get reliable predictions from.

A neural network does classification by drawing a complex curve between one large set and another large set in a high dimensional feature space. The problem is those features can include, and often do include, incidental things like lighting, subject placement and so forth. This often works in testing because your target data set really does uniquely have feature X. So you can get a result saying your system reliably finds X, but when you go out to the real world, you run into those incidental features.

I don't know exactly how NeuralHash works, but I'd presume it has the same fundamental limitations. It has to find images even after they've been put through easy filters that change every particular pixel, so it's hard to see how it wouldn't flag picture A because it looks like picture B if you squint.


“If it works as designed” is I think where Gruber’s article does its best work: he explains that the design is pretty good, but the if is huge. The slippery slope with this is real, and even though Apple’s chief of privacy has basically said everything everyone is worried about is currently impossible, “currently” could change tomorrow if Apple’s bottom line is threatened.

I think their design is making some really smart trade offs, given the needle they are trying to thread. But it shouldn’t exist at all, in my opinion; it’s too juicy a target for authoritarian and supposedly democratic governments to find out how to squeeze Apple into using this for evil.


The EFF wrote a really shitty hit piece that deliberately confused the parental management function with the matching against hashes of illegal images. Two different things. From there, a bazillion hot takes followed.


The EFF article refers to a "classifier", not just matching hashes.

So, three different things.

I don't know how much you know about them, but this is what the EFF's role is. Privacy can't be curtailed uncritically or unchecked. We don't have a way to guarantee that Apple won't change how this works in the future, that it will never be compromised domestically or internationally, or that children and families won't be harmed by it.

It's an unauditable black box that places one of the highest, most damaging penalties in the US legal system against a bet that it's a perfect system. Working backwards from that, it's easy to see how anything that assumes its own perfection is an impossible barrier for individuals, akin to YouTube's incontestable automated bans. Best case, maybe you lose access to all of your Apple services for life. Worst case, what, your life?

When you take a picture of your penis to send to your doctor and it accidentally syncs to iCloud and trips the CSAM alarms, will you get a warning before police appear? Will there be a whitelist to allow certain people to "opt-out for (national) security reasons" that regular people won't have access to or be able to confirm? How can we know this won't be used against journalists and opponents of those in power, like every other invasive system that purports to provide "authorized governments with technology that helps them combat terror and crime[1]".

Someone's being dumb here, and it's probably the ones who believe that fruit can only be good for them.

[1] https://en.wikipedia.org/wiki/Pegasus_(spyware)


> When you take a picture of your penis to send to your doctor and it accidentally syncs to iCloud and trips the CSAM alarms, will you get a warning before police appear?

You would have to have not one, but N perceptual hash collisions with existing CSAM (where N is chosen such that the overall probability of that happening is vanishingly small). Then, there'd be human review. But no, presumably there won't be a warning.

> Will there be a whitelist to allow certain people to "opt-out for (national) security reasons" that regular people won't have access to or be able to confirm?

Everyone can opt out (for now at least) by disabling iCloud syncing. (You could sync to another cloud service, but chances are that then they're scanned there.)

Beyond that, it would be good if Apple built it verifiably identically across jurisdictions. (If you think that Apple creates malicious iOS updates targeting specific people, then you have more to worry about than this new feature.)

> How can we know this won't be used against journalists and opponents of those in power, like every other invasive system that purports to provide "authorized governments with technology that helps them combat terror and crime[1]".

By ensuring that a) the used hash database is verifiably identical across jurisdictions, and b) notifications go only to that US NGO. Would be nice if Apple could open source that part of the iOS, but unless one could somehow verify that that's what's running on the device, I don't see how that would alleviate the concerns.


That isn't an offer of legal protections or guarantees that the trustworthiness and accuracy of their methods can be verified in court.

It really doesn't matter how they do it now that we know that iOS has vulnerabilities that allow remote monitoring and control of someone's device, to the extent that it created a market for at least one espionage tool that has led to the deaths of innocent people.

I remember when the popular way to shut down small forums, business competitors, or get embarrassing information taken off the web was to anonymously upload CP to it and then report it, repeatedly. With this, what's to stop virtual "SWATing" of Apple customers? Not necessarily just those whose Apple products have been compromised, whose iClouds have been compromised, or who are the victims of hash collisions (see any group of non-CSAM images that CSAM detection flags).

Will Apple analyze all hardware to ensure no innocent person is framed because of an undisclosed vulnerability? What checks are being offered on this notoriously burdensome process on the accused?

>If you think that Apple creates malicious iOS updates targeting specific people, then you have more to worry about than this new feature

Oof. Good point.


You’re making my point. Nobody cares about your penis pictures.

EFF is an advocacy group, and you need to read between the lines what they say because they have a specific set of principles that may or may not align with reality. They published a bad article and took an extreme stance about what could be as opposed to what is.

Parents care about children sending or receiving explicit material. This is for behavioral, moral and liability reasons.

When your 12 year old boy sends a dick pic to his girlfriend, that may be a felony. When your 16 year old daughter sends an explicit picture to her 18 year old crush, that person may be committing a felony by possessing it.


So why doesn't Apple release an iPod Touch or iPhone without a camera? Maybe release child safe versions of their apps that can't send or receive images from unapproved (by parents) parties?

There seem to be an endless number of ways to achieve what they claim without invasively scanning people's private data.

Figure it out.


Yeah I found the EFF's piece to be really disappointing, coming from an organization I'm otherwise aligned with nearly 100% of the time.


EFF today is really not the organisation it was just a few years ago. I don't know whom they hired badly, but the reasoned takedowns have been replaced with hysterical screaming.


> hysterical screaming

Given the political/societal climate, it probably gets them more donations.


> hysterical screaming

Ah, I see you work for Apple, given the language ("the screeching voices of the minority").


Can you quote what you found confusing? Because I didn't see anything that didn't agree with the Apple announcement they linked in the piece.


Two different things which are sold as one package by Apple.

Two different things which are both known to be prone to all kinds of misdetection.


Like this part?

"The Messages feature is specifically only for children in a shared iCloud family account. If you’re an adult, nothing is changing with regard to any photos you send or receive through Messages. And if you’re a parent with children whom the feature could apply to, you’ll need to explicitly opt in to enable the feature. It will not turn on automatically when your devices are updated to iOS 15."


I still don't understand how this is allowed. If the police want to see the photos on my device, then they need to get a warrant to do so. Full stop. This type of active scanning should never be allowed. I hope that someone files a lawsuit over this.


Speculating (IANAL) - it's only when iCloud photos is enabled. I'd guess this is akin to third party hosting the files, I think the rules around that are more complex.


You agreed to the EULA :)


I'm not sure EULAs can effectively bargain away US constitutional protections.


Where does the constitution come into play here? This is a private company scanning content uploaded to its own servers.


...as a prerequisite for avoiding criminal liability for violation of a Federal statute. The transitive property of logic therefore yields that this private entity is acting as a Government proxy. Therefore, Constitutional considerations.

I don't find this unreasonable.


Yes, but so is much in that link, or at least it is very biased. This one is far better:

https://www.hackerfactor.com/blog/index.php?/archives/929-On...


You must be joking. It would be hard to find anyone more biased in favour of Apple than Gruber.


It's also not even wrong in so many ways that it really highlights how far DF has fallen over the years. Really ugly stuff, handwaving about hashing and nary a mention of perceptual hashing and collisions. Not a technology analysis of any sort.


There's still a non-zero chance it triggers a no-knock raid by the police that kills your family or pets.

It happens all the time.


Non-zero being technically true because of the subject matter, but I don’t see how Apple’s system increases the risk of authorities killing family or pets more than server-side scanning.


> more than server-side scanning

False dichotomy. How about they leave people's data alone?


Their neural hashing is new, and they claim it has a one in a trillion collision rate. There are 1.5 trillion images created in the US each year and something like 100 million photos in the comparison database. That's a heck of a lot of collisions. And that's just a single year; Apple will be comparing everyone's back catalog.

A lot of innocent people are going to get caught up in this.


We’ll have to wait and see how good their neural hashing is, but just to clarify the 1 trillion number is the “probability of incorrectly flagging a given account” according to Apple’s white paper.

I think some people think that’s the probability of a picture being incorrectly flagged, which would be more concerning given the 1.5 trillion images created in the US.

Source: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


From Apple's technical summary:

"The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account. This is further mitigated by a manual review process wherein Apple reviews each report to confirm there is a match..."

So it's 1 in 1 trillion per account PRIOR to manual review in which the odds of error get reduced even further.
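Taking that per-account figure at face value, the expected number of wrongly flagged accounts is tiny even at Apple's scale (the account count below is just an order-of-magnitude guess, not an Apple number):

  accounts = 1e9        # rough guess at the number of iCloud Photos accounts
  p_account = 1e-12     # Apple's claimed per-account false-flag probability
  print(accounts * p_account)   # ~0.001 expected wrongly flagged accounts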


How is it that you are going to "wait and see how good their neural hashing is"? Do you think there is going to be any shred of transparency about the operation of this system? It is completely unaccountable - starting with Apple and going on to NCMEC and the FBI.


I think you're wrong about the risk (the paper says per account), but even so you need to compare it to the alternatives.

Photos in iCloud are unencrypted and Apple checks for CSAM on the unencrypted photos server side, they know of all matches.

OR

Photo hashes are checked client side and only if a certain threshold of matches is passed does Apple get notified at all (at which point there's a sanity check for false positive by a person). This would allow all photos on iCloud to be able to be encrypted e2e.

Both only happen when iCloud photo backup is enabled.

The new method reduces the risk.


Gruber practically (no, perhaps actually) worships Apple. He'd welcome Big Brother into his house if it came with an Apple logo, and he'd tell us how we were all wrong for distrusting it. He's not the voice to listen to this time, and you shouldn't trust him to have your best interests at heart.

People are furious with Apple, and there's no reason to discount the completely legitimate concerns they have. This is a slippery slope into hell.

It's a good thing congress is about to start regulating Apple and Google. Maybe our devices can get back to being devices instead of spy tools, chess moves, and protection rackets.

(read: Our devices are supposed to be property. Property is something we fully own that behaves the way we want. It doesn't spy on us. Property is something we can repair. And it certainly is not a machination to fleece the industry by stuffing us into walled and taxed fiefdoms, taking away our control. Discard anything that doesn't behave like property.)

[edit: I've read Gruber's piece on this. It's wishy-washy, kind of like watching a moderate politician dance on the party line. Not the direct condemnation this behavior deserves. Let's not take his wait-and-see approach with Dracula.]


> Gruber practically (no, perhaps actually) worships Apple. He'd welcome Big Brother into his house if it came with an Apple logo, and he'd tell us how we were all wrong for distrusting it.

You mean the same Gruber who described the situation as “justifiably, receiving intense scrutiny from privacy advocates.”? The one who said “this slippery-slope argument is a legitimate concern”?

I'm having a hard time reconciling your pat dismissal with the conclusion of his piece which very clearly rejects the position you're attributing to him as grounds for dismissal:

> But the “if” in “if these features work as described and only as described” is the rub. That “if” is the whole ballgame. If you discard alarmism from critics of this initiative who clearly do not understand how the features work, you’re still left with completely legitimate concerns from trustworthy experts about how the features could be abused or misused in the future.

I mean, sure, know where he's coming from but be careful not to let your own loyalties cause you to make a bad-faith interpretation of a nuanced position on a complex issue.


If iCloud backup works as advertised - it backs up your device.

However, if we consider the slippery slope, under pressure from a shadow government the contents of your phone could have been uploaded to the CIA every day, including live recordings 24 hours a day.


> the contents of your phone could have been uploaded to the CIA every day, including live recordings 24 hours a day.

You’re describing new functionality which would have to be added in many places: in addition to building that service they have to turn off the recording indicators and prompts, coexist with other apps recording, not have recording pause playback in other apps like normal, masking data usage on both the phone and your carrier’s reports, concealing the battery loss and putting in bigger hardware batteries to compensate, etc.

That’s technically possible but there’s no link to this feature - that hypothetical government would need to do the same things either way. It’s similarly not Apple-specific: with that level of control the same thing would happen Android and anything else.


But muh slippery slope.

Phone has functionality to back up its contents. Phone has functionality to record things. No new functionality needed.

Thus, they are backdoors built into the system with the ability to record everything and upload it to an authoritarian regime for the genocide of the human race.

But we all know the real evil here is using a hashing algorithm to check images you upload to their server for known kiddie porn.


Yes, but that has always been the case. It can upload to iCloud, it could also upload to the CIA. What has changed?


So why are people freaking out about a hash scanning system as an invasion of privacy when the grab-anything-from-your-phone feature has been there for 10+ years?


Policy-wise Apple have just said "yes, we are going to use our super-admin powers to push updates to turn your phone against you".

Sure, a suitably powerful authoritarian org could do lots of secret things, but that isn't what happens in real life: in real life you publicly change policy in increments and get everyone to go along with it.

"Apple was actually scanning all users photos for CSAM regardless of iCloud usage due to a bug in the most recent firmware" is a headline that is guaranteed in the future.


> is a headline that is guaranteed in the future

Care to put a date on your guarantee?


Thanks - I couldn't have said it better.


> regulating Apple and Google

this is not strong safety for citizens

source: political history


Yes, if they wind up part of a child porn investigation. Your cloud account gets hacked. Some perv gets your images. He is then arrested and his "collection" added to the hash database... including your family photos.

Context often matters more than the nature of the actual content. Police acquire thousands of images with little hope of ever knowing where they originated. If they are collected by pervs, and could be construed as illegal in the hands of pervs, the images become child porn and can be added to the databases.


It’s worth pointing out that this could happen with any Internet-attached photo storage, and pre-dates Apple’s announcement.

What Apple announced is a new system for reading the existing hash lists of known CSAM images and doing the comparison on the device as part of the iCloud upload, rather than on the server after upload.


> your family photos ... could be construed as illegal in the hands of pervs, the images become child porn and can be added to the databases.

It's not as simple as that. Photos in the NCMEC database are tagged based on the severity of their content. The categories are A1, A2, B1, B2. According to this[0] PowerPoint presentation, page 22:

  A = prepubescent minor
  B = pubescent minor
  1 = sex act
  2 = "lascivious exhibition"
Apple are only searching for images tagged as "A1" by NCMEC and other agencies. This is the most extreme of the extreme. There is a massive gulf between the A1 category and anything you could even remotely conceive of being in anyone's family photos.

[0] https://www.prosecutingattorneys.org/wp-content/uploads/Pres...


A1 is nowhere near the most extreme material with which police must deal. Not even close. This is called child abuse imagery for a reason. Sex acts are literally the entry level for A1.


"Sex act" is specifically what the "1" in A1 refers to. A1 does not cover images of child abuse unless they involve sex acts. Yes, even if these are "more extreme" in other ways.

The definitions are horrifyingly, depressingly, tragically very clear.


Actually, we don't know yet whether you can still access your photos from the web after this update, because of the E2EE-"like" implementation.

The protocol is rather device-specific (while allowing multi-device), so hacking or accessing an iCloud account might not be enough to access the photos. So, things get complicated.


> because of the E2EE-"like" implementation.

Did apple actually say photos would be e2ee or are we just assuming?


Did you read the spec? This is why the scanning happens on the device.


I don't recall them explicitly saying anything was e2ee except iMessage, which is not relevant for this discussion.



If your cloud account gets hacked, you have other things to worry about than someone stealing your child's photos (which for whatever reason are explicit enough for apple to report the detections to authorities), spreading them on a honeypot forum, and you ending up in court where the judge doesn't believe that the minor in the image is your child.


> judge doesn't believe that the minor in the image is your child

Parents can produce, distribute and sell CSAM of their own kids. That's one of the implications they'd face in court.


If you don't choose upload to icloud, no upload to apple at all.

If you do choose icloud upload (most do), they were being uploaded already and stored and may be available to law enforcement.

If you do upload to icloud, NOW they will be screened for matches with "known" images in a database, and if you have more than a threshold number of hits, you may be reported. This will happen on device.

Apple will also scan photos in their cloud system as well from what I can tell (though once on-device scanning is working, less should land in the cloud).

Note that it is HIGHLY likely that google photos / facebook / instagram and others will or are already doing similar scanning and reporting. I've heard millions of reports go in a year.


Aren't the perceptual hashes based on a chunk of the image?

I wonder what the false positive rates are for:

- A random image against the DB of perceptual hashes

- Images of a baby's skin against the DB of perceptual hashes

It seems like the second would necessarily have a higher false positive rate: similar compositions (contains baby's skin) would more likely have similar chunks. Is it just a little higher or several orders of magnitude higher?

I know hash collisions are rare, but I wonder how much less rare they become with perceptual hashes.
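For intuition, here's the simplest possible perceptual hash - an 8x8 average hash, which is nothing like Apple's NeuralHash (and the file names are hypothetical) - just to show why images with similar composition sit closer together in hash space than unrelated ones:

  from PIL import Image

  def average_hash(path, size=8):
      # shrink, grayscale, then threshold each pixel against the mean
      img = Image.open(path).convert("L").resize((size, size))
      pixels = list(img.getdata())
      avg = sum(pixels) / len(pixels)
      bits = [1 if p > avg else 0 for p in pixels]
      return int("".join(map(str, bits)), 2)

  def hamming(a, b):
      return bin(a ^ b).count("1")

  # Photos with similar composition (lots of skin tones, similar framing)
  # tend to land a smaller Hamming distance apart than unrelated photos -
  # that's the sense in which false positives aren't uniformly random.
  d = hamming(average_hash("baby_bath_1.jpg"), average_hash("baby_bath_2.jpg"))
  print(d)  # small distance = "perceptually similar"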


According to

https://rentafounder.com/the-problem-with-perceptual-hashes/

the false-positive rate will likely be high. Given the billions of pictures going through this system, there are likely going to be a lot of false accusations of child porn possession (and such an accusation alone can ruin lives).

HN discussion of that article from a few days ago:

https://news.ycombinator.com/item?id=28091750
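
For intuition, here is a minimal sketch of an "average hash" (aHash) with Hamming-distance matching. This is a deliberately simple toy - not PhotoDNA or Apple's NeuralHash, and the function names and distance threshold are illustrative assumptions - but it shows the property that matters: matching is "hash distance below a threshold", not exact equality, so images with similar coarse structure are supposed to land near each other, and occasionally unrelated ones do too.

    # Toy perceptual hash (aHash) - illustrative only, not PhotoDNA/NeuralHash.
    from PIL import Image  # assumes Pillow is installed

    def average_hash(path, hash_size=8):
        """Downscale to hash_size x hash_size grayscale, threshold at the mean -> 64-bit hash."""
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | int(p >= mean)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def matches(hash_a, hash_b, max_distance=5):
        # Fuzzy by design: "close enough" counts as a match, unlike a crypto hash.
        return hamming(hash_a, hash_b) <= max_distance

A cryptographic hash flips roughly half its bits if a single pixel changes; a perceptual hash is built to do the opposite. That is exactly why crops and recompression still match, and also why a per-image false-positive rate can never be reasoned about like a SHA-256 collision probability.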


This is where the thresholding and manual review come in, but could be a bit scary for sure.


Even with thresholding and manual review, the idea that most iPhone users (who don't have any CP) are going to have a non-zero child porn score in Apple's database is just super creepy.

Just a minor policy change (lower threshold) away from calling millions of people pedophiles.


The threshold is cryptographically enforced. See page 4 of Apple's technical summary.
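
For what it's worth, the mechanism Apple describes for this is a threshold secret sharing scheme (wrapped in private set intersection and per-photo "safety vouchers"). Below is a minimal Shamir-style sketch - a toy with made-up parameters, not Apple's actual construction - just to illustrate what "cryptographically enforced" can mean: with fewer than t shares, the secret is information-theoretically unrecoverable, so nothing can be decrypted below the threshold even if the server wanted to.

    # Toy Shamir secret sharing over a prime field (illustrative, not Apple's protocol).
    import secrets

    PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

    def make_shares(secret, t, n):
        """Split `secret` into n shares; any t of them reconstruct it, t-1 reveal nothing."""
        coeffs = [secret % PRIME] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
        def eval_poly(x):
            acc = 0
            for c in reversed(coeffs):  # Horner evaluation of the random polynomial
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, eval_poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if j != i:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = make_shares(secret=1234567890, t=5, n=30)
    print(reconstruct(shares[:5]) == 1234567890)  # True: threshold met
    print(reconstruct(shares[:4]) == 1234567890)  # False (overwhelmingly likely): below threshold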


It's two factors: both a match on an image hash and an unknown threshold of matches, at which point the data gets sent up. If the threshold is not met then nothing gets notified (even if there is a match). Arguably this is why this approach is better for privacy. Cloud matches would not be able to have this extra threshold (in addition to this model allowing E2EE in the cloud in the future).

I'd also like to know more about the specifics here; my guess is that the threshold value is pretty high (their 'one in a trillion' comment notwithstanding). It's probably targeting large dumps of matching CSAM, which a few stray matches on unrelated images wouldn't reach.


Absolutely - I think this is one of two key questions for me. That is why I put "known" in quotes. It can't be an exact match because it has to handle cropping, rotation, resizing etc.

Images then do get a manual review before a report is made, which is good and may help provide feedback on the algorithms being used.

It's going to be hard, though, for Apple to set the second factor too high - I'd say 5 maybe? It's hard to say you had matches on potential CSAM and ignored them, I'd think.


Disabling iCloud does not remove the scanning system or its database from your phone.


Not syncing your contacts to icloud does not remove the uploading system and its components from your phone.

Disabling iCloud does not remove the uploading system from your phone.

Pressing end recording on a video does not remove the video capture system from your phone.


No, because it only catches registered CSAM. However, if you sent your pictures to relatives etc. and somehow someone who was into CSAM got hold of one of your pictures and later got arrested, your photo in their collection could theoretically be registered in the official archives - then you might have something in your collection of images that matches one of the hashes of a known CSAM image (maybe enough matches to have the police come talk to you).

on edit: later on, of course, this will make a great article in some place like The Atlantic, with a stolid monochromatic picture of your family in the lead-in, and we will all read about it on HN and talk about how this was an obvious problem with the whole system (if it gets posted at the right time and gets enough upvotes).


Eventually, yes. As we move more towards AI and "smart" image recognition, there will eventually be a system that has these false positives on innocent images.

Current CSAM scanning depends on humans to "verify" the imagery, and this is something companies desperately want to get rid of - as do the employees, understandably. Nobody (well, 99.99% of people) wants a job comparing CSAM. It costs companies money in labor costs, and draws them bad PR when those employees inevitably develop permanent or semi-permanent mental health issues from it.

The only reason it hasn't happened yet is because a startup can't just start scanning CSAM. They need the blessing of the feds to do that, which requires political connections, and of course requires competing with companies that already have that blessing - something that politics prevents.

PhotoDNA and current CSAM scanning only gets known CSAM, but not new CSAM. The end goal is to detect CSAM before it's ever even distributed, to be "closer to the victim", rather than just those consuming it.

Even with current PhotoDNA you can generate hash collisions, which flag the image for review, and a real human compares the material. This is of course subject to change for the reasons stated above.

Secondarily, automatic scanning and ID'ing of imagery is how you can easily throw an FBI raid at someone. Apps like Telegram automatically download every image/video in the thread.

On top of that, you can create images that appear different at different resolutions. At one resolution a harmless meme, at another, CSAM. Meaning that you can again throw an FBI raid at someone using simple tricks.


My understanding is that you should not upload these photos to the cloud anyway. The cloud is not your computer, and who knows - maybe Apple engineers are snooping on them, or there could be a hack, and so on. Putting them on the cloud is like sharing them with Apple.


Unless those pictures are also in the NCMEC database, there won’t be a match.*

* As addressed in the comments below, this isn’t entirely true: the hash looks for visually similar picture and there may be false positives.


As far as I understand they use some kind of hash. I suspect their paper on avoiding hash collisions is right next to the Nobel Prize-winning description of the world's first working perpetuum mobile.



They have accounted for the possibility of false positives.


That is not true. If you read what experts who actually do something to stop CP (unlike Apple) say, there are proven false positives.

https://www.hackerfactor.com/blog/index.php?/archives/929-On...


Yes, and you can put a number on the probability of that happening, say a fixed p << 1. And then you can choose the number of required matches before flagging, say N. And then (assuming independence [1]) you have an overall probability of p^N, which you can make arbitrarily small by making N sufficiently large. (I'm pretty sure that's how Apple came up with their "1 in a trillion chance per year".) And then you still have manual review.

[1] you could "help" independence by requiring a certain distance between images you simultaneously flag.
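
As a back-of-the-envelope check on that reasoning (the per-image rate p, the photo count, and independence below are assumptions made up for illustration - Apple has not published these numbers), the account-level probability collapses quickly as the threshold rises:

    # Rough illustration only: assumed per-image false-match rate and library size.
    from math import comb

    def prob_account_flagged(p, photos, threshold):
        """P(at least `threshold` false matches among `photos` independent images)."""
        below = sum(comb(photos, k) * (p ** k) * ((1 - p) ** (photos - k))
                    for k in range(threshold))
        return 1 - below

    for t in (1, 5, 10):
        print(t, prob_account_flagged(p=1e-4, photos=20_000, threshold=t))
    # roughly 0.86 at threshold 1, 0.05 at 5, 5e-5 at 10 (with these made-up inputs)

Whether "1 in a trillion" is actually achievable depends entirely on the real per-image rate and threshold, neither of which is public, plus how independent the matches really are.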


Thanks, I updated my comment.


Absolutely not true. Apple is using a similarity-based hash, so if the NCMEC database contains a picture that's similar to one that you have, it could produce a match even if it's not the same. Apple says this isn't an issue, because a person will look at your picture (yes, a random person somewhere will look at the pictures of your newborn) and judge whether they are pictures of child abuse or not. If this unknown person thinks your picture shows child abuse, you will be reported to NCMEC, and then what happens is unknown - but it would likely result in some legal action against you.


> because a person will look at your picture (yes, a random person somewhere will look at the pictures of your newborn)

No. If the number of matches to known CSAM in your library exceeds a threshold, then a person will look at a "visual derivative" of only those pictures whose perceptual hash matches that of known CSAM.

Note that, if I understand correctly, pictures that Android users sync to Google have already been scanned for some time. Where are all those false positives?


>>the number of matches to known CSAM in your library exceeds a threshold, then a person will look at a "visual derivative" of only those pictures whose perceptual hash matches that of known CSAM

Apple very specifically said that their employees will look at the suspected picture before sending it through to authorities. Where do you see the bit about visual derivatives? What would that even mean or look like?

Also what is this threshold? As others have pointed out, us parents have literally hundreds of pictures of our newborns, toddlers and kids - having to trigger some detector "multiple" times doesn't give me any peace of mind at all.

>>Where are all those false positives?

Google doesn't use perceptual hashing, or at least hasn't said it does.


> Where do you see the bit about visual derivatives?

In Apple's white paper about the proposed feature:

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

See the links at the bottom here for more:

https://www.apple.com/child-safety/

> Also what is this threshold?

The "perceptual hash" is supposed to match a specific image (though possibly cropped, or otherwise altered a bit, such as through a filter), not "toddlers" per se.

> Google doesn't use perceptual hashing, or at least haven't said they do.

I don't know what the other cloud providers are doing, but I'd be very surprised if they use (trivially circumventable) cryptographic hashes.


Where’s your evidence on this?

The NCMEC database and this hashing have been around for like 15 years. I’m curious as to how you know this.


False positives have been found, not because the photos hold any similarities but because the hashes match: https://www.hackerfactor.com/blog/index.php?/archives/929-On...


Ok. Who has been arrested for a picture of a tomato that generated a hash collision?


Literally, Apple said in their own FAQ that they are using a perceptual (similarity-based) hash and that their employees will review images when flagged. If that's not good enough (somehow), then even the New York Times article about it says the same thing. What other evidence do you need?


Keep in mind, this manual review only happens after Apple’s system detects multiple occurrences of matches. Until that point, no human is alerted of matches nor does anyone see how many matches there have been.

In a TechCrunch interview Apple said that they are going after larger targets that are worth NCMEC’s time.


Parents take a lot of photos of their kid. Like, lots.


How many of them include erect adult penises and active participation in sex acts?

Apple's on-device list of hashes only includes images which have been classified "A1" under the CSAM categorisation scale. If any other photographs are accidental hash collisions to these images, it's going to be pretty damn obvious to the human reviewer.


The problem with all of this is that it's all about trust. We're supposed to trust Apple's algorithm to avoid false positives, we're supposed to trust Apple that even with false positives there's some threshold to cross, then supposed to trust Apple that their employees will do a good job verifying the pictures, and then (most importantly) trust that a giant American corporation won't honor secretive state requests to start scanning our phones for other data.

I just don't have that trust. Obviously I think that protecting children is an incredibly important thing to be doing, but I don't trust Apple to be running such a system (I do trust them maybe a tiny bit more than I'd trust Google in this case, but ultimately I'd rather this system didn't exist at all).


I trust that Apple knows that an actual, real world false accusation will make this week's media challenges look like a fleabite.


"Multiple occurrences of matches" could have definitely been an issue for a friend of mine. When they took a picture, they'd often go for the "blitz the subject with the camera" approach, then never ended up deleting all the bad pictures because they had hoarding tendencies.


Good point, thanks, updated my comment.


> Apple is using a similarity based hash, so if the NCMEC database contains a picture that's similar to one that you have, it could produce a match even if it's not the same

It's actually worse than this: if the hashes are similar then they'll get sent for review. Your picture could be a picture of an abstract painting[0] which has no visual similarity to anything in the db, but whose hash, through the quirks of the algorithm, happens to be similar, and it too will be flagged.

[0] The reason I use this example is because someone posted a bunch of abstract art that was flagged by the algo.


Just don't criticize your government in any way. Otherwise they will find anything illegal to arrest you for, from crossing the street on a red light to I don't know what. You will be fine, because there is a legal system and no one can put you in jail for crimes you have not committed. Just look at Julian Assange, or the random Joe in Belarus who was arrested for wearing a red and white hat. The justice system is always on the innocent people's side, without exception.


No, it doesn't work like that.


Yes it does, it uses fuzzy perceptual hashes not crypto hashes.

So if your innocent baby pic looks similar enough to a previously tagged child abuse image then YES, it will flag you and send a copy to the feds.

And before you correct me, the Apple employee will see a picture of your naked baby and hit “forward to NCMEC”, which… upon investigation is actually just the feds


Perceptual hashes are extremely accurate. You might need a twin and, on top of that, a somehow identical environment almost at the pixel level.

Is the news filled with false-positive accusations from PhotoDNA flagging wrong images on Google, Facebook, Instagram etc.?


> Perceptual hashes are extremly accurate.

No, they are not. Ask anyone who has worked in this space[1][2], including myself. False positives are incredibly common.

Two images that kind of look like one another will have similar or the same hashes. That is the point of perceptual hashing.

[1] https://news.ycombinator.com/item?id=28091750

[2] https://news.ycombinator.com/item?id=28110159


IF there are multiple matches, IF it's going to iCloud, THEN a 'derivative image' will be shown for screening and, IF deemed warranted, sent to NCMEC.


From my current understanding, that does occur with Microsoft OneDrive (which is a default in many systems), but not with the hash-matching approach Apple is currently proposing.


No, this will never be caught.

This only catches ownership of illegal photos.


Wait until someone manages to create an image (white noise) that's a hash collision for anything in that database. And then starts spamming random strangers via airdrop.

Enjoy explaining why your mugshot and arrest record had these charges attached to it!

(Actually, in this case the prosecution would probably use the other pictures on the phone that were not detected by the scanning tool as a way to get a guilty plea deal!)


It would have to be a number of pictures that are flagged, and after that threshold is exceeded, they (more precisely, their "visual derivative") are reviewed by a human. So, no mugshot and no arrest record, even if you choose to accept any number of pictures sent from random strangers via airdrop.


Assuming it is possible (I think it is), there is a manual verification process if you have a match. And obviously, the white noise will be rejected, like all pictures that do not look remotely like the original.

But it can be a form of denial of service: saturate the system with hash collisions so that people can't keep up.


So... this is the way I understand it, which the general public will never have the attention span to understand, so it doesn't fucking matter one bit.

LEOs/FBI/every other institution or group that deals with child pornography and abuse have teams that go through a near-infinite amount of pictures and videos of CP/etc.

These are then marked by said people as either - yes, CP/Abuse/etc - or marked false positive.

Once marked as what they're after, they're uploaded to a shared database between all groups involved.

Only what is in these worldwide national databases is what's going to be checked against. Your new pictures of your children will have obviously never made their way to any of these groups as they've never been shared/distributed in any areas of the internet/etc these people work in to track down trafficking rings (well, I'd hope you're not selling pictures of your children to them).

This is the way I understand it. I admit I haven't looked into it that much. If it's anything different from what I've said, then yeah, it's probably fucked. I don't get what people don't understand about checking against a database, though. No, your new pictures of whatever are not in this pre-existing database.


Unlikely, except if you send them to an iPhone which is registered with a "child" account.

Apple uses two different approaches:

1. Some way to try to detect _known_ child pornographic material, but it's fuzzy and there is no guarantee that it doesn't make mistakes like detecting a flower pot as child porn. The chance that your photos get misdetected as _known_ child pornographic material shouldn't be too high - BUT given how many parents have iPhones, it's basically guaranteed to happen from time to time!

2. Some AI child porn detection on child accounts, which is not unlikely to label such innocent photos as child porn.


Even in the child account case it's not sent to Apple - it alerts parent accounts in the family. It's also just nudity generally, more akin to garden variety parental control content filtering.

The child account iMessage thing is really entirely separate from the CSAM related iCloud announcement. It's unfortunate people keep confusing them.


> The worst part is: how do I put my money where my mouth is? Am I going back to using Linux on the desktop (2022 will be the year of Linux on the desktop, remember), debugging wifi drivers and tirelessly trying to make resume-from-suspend work?

Oh come on. Don't make it sound like it's that bad. Wifi has been a solved problem for a long time now, and you can buy Lenovo, System76 or Tuxedo if you want to make sure things work 100% as expected. Don't be that guy.


I don't know why, but Ubuntu still manages to make installing updates a 50/50 chance of breaking the NVIDIA GPU drivers, which then means I have to reinstall them in 800x600 where the "Software Updater" window doesn't fit on screen anymore.

Also, getting full USB3 support on Ubuntu is still a struggle. On Windows and Mac, the same USB camera "just works". On Linux, I need to learn how to download the kernel sources, checkout the correct branch, and recompile uvcvideo with different URB parameters, or else I get random disconnects.

And of course, "apt-get source" will produce the source code for the 4.x kernel that Ubuntu 18 had when I installed it, but they since upgraded it to 5.x so "apt source" is now utterly useless.

If I had to summarize my Linux experience:

"Pain only makes you stronger"


Ever since the famous middle finger salute to Nvidia, anybody who buys their graphics cards to use with Linux hopefully knows what they're doing. Linux doesn't support them at all, and Nvidia only kind of supports Linux.

If I got random disconnects for my webcam in OSX I wouldn't bother with it, I'd just buy a better supported webcam. Maybe that's just me, but I appreciate the people who tirelessly tinker to get support going. Just make sure you send those changes upstream to whatever distro you are using.

As long as nobody tells non-technical people to patch their kernels, we're all good. Any modern Linux desktop from one of the major distributions (Fedora, Ubuntu, ChromeOS) is the most low maintenance computer you can find, but with that level of tinkering you'd soon find yourself on your own.


> I don't know why, but Ubuntu still manages to make installing updates a 50/50 chance of breaking the NVIDIA GPU drivers, which then means I have to reinstall them in 800x600 where the "Software Updater" window doesn't fit on screen anymore.

I cannot dismiss your experience. I think you are telling the truth.

But I was introduced to Ubuntu by a (mildly) enthusiastic 50-year-old electrical engineer back in 2006, so I know it used to work for some ordinary people even back then.

I really don't know how some HN-ers manage to break Ubuntu repeatedly while it just works for non-technical users, but a qualified guess is a combination of exotic hardware and tinkering (a good thing).


> I really don't know how some HN-ers manage to break Ubuntu repeatedly while it just works for non-technical users, but a qualified guess is a combination of exotic hardware and tinkering (a good thing).

It's more than that IMO, it's that we don't give up.

Your average non-technical user will just deal with the camera disconnecting occasionally, maybe swear at it once in a while, or (commonly) just not even notice. If you ask them how Ubuntu is, they will say "Yeah, works fine. No viruses!"

Your average HN'er will be bothered every time it disconnects, to the point where they are recompiling a bleeding-edge kernel and breaking 10 other things, but have a working camera that never disconnects. Then we complain here about Ubuntu being broken and terrible :)

On a tangent – I think it's kind of worrying that Canonical seems to treat Ubuntu on the desktop as a super-low priority – all I hear from them these days is marketing for some kind of server product, or their various kinds of managed Kubernetes offerings, which as far as I can tell go up to $4k/node/year on your own hardware [1]. I wonder how much success they are having with this.

----------

[1] https://assets.ubuntu.com/v1/b5f9ae49-Enterprise_Kubernetes_...


I think it's because most of the engineers behind the code that goes into Ubuntu are paid to work on their own project for their own set of users and use cases. There are very few whose remit is the whole system and the overall user experience. That means the small but high-impact bugs often don't get fixed - things like the GPU drivers being uninstalled on an update (because that isn't the problem of either the apt team or the Nvidia team).


Oh, you actually get a graphical interface to fix it?

I always had to fix it in text mode with elinks, it was a complete nightmare and I switched to Nouveau.

Now I have an AMD-based laptop and it has its own problems (I briefly had a Mac on Intel) - they are all tricky.


You can indeed tell how tired and old the complaint is by the nature of what it complains about.

In 2021 we are instead lumbered with inconsistent support for hidpi displays, lack of DTMF in linphone, and Evolution’s option to disable pc beep on new message being a plugin.

But that was just the last week. Next week will be better and the fight for freedom is indeed an eternal struggle.


> Evolution’s option to disable pc beep on new message being a plugin.

Are you really complaining about GNOME software lacking options, as if that wasn't what GNOME is all about? Please don't blame Linux for problems with one specific DE and its applications.


It's a fair complaint if the DE is officially supported by a popular distribution. Most ordinary users don't make the distinction between OS and DE. To them, a beeping email application is just an unpleasant aspect of the OS.


> In 2021 we are instead lumbered with inconsistent support for hidpi displays, lack of DTMF in linphone, and Evolution’s option to disable pc beep on new message being a plugin.

Those are not related complaints at all. Shifting goalposts much?


I read it as things getting better, problems becoming more minor.


Second this. I've been using Ubuntu MATE on an RPi4 for a couple of weeks now. All went fine. Last week, I suddenly thought: I should connect my printer as well, and expected to have a slightly harder time, just like setting up my printer on my last Ubuntu PC.

Click-click-done. I didn't have a hard time, not even with connecting my printer. I'm almost disappointed a bit, since there's no way I'm a cool computer guy if it's this easy.


> The worst part is: how do I put my money where my mouth is? ..., debugging wifi drivers and tirelessly trying to make resume-from-suspend work?

Coincidentally, this is actually a good idea. Apart from using supported hardware (that others have checked actually works), contributing fixes for hardware that's not officially supported yet and hasn't been tested would benefit everyone in the future!

I remember having to dig through GitHub to find a repository that had the network drivers for my off-brand Chinese/Polish netbook (I'm somewhat poor and/or frugal), and they actually worked and turned a system that would otherwise not have any network connectivity into my daily driver for note taking. Now, the fact that I couldn't automate this lookup process, and that there's nothing out there that lets you check for these drivers more easily (think something along the lines of https://appdb.winehq.org/ but for drivers) or maybe try multiple ones in a row, was disappointing, because things felt needlessly hard. However, actually contributing or using the work of others isn't that much of a problem.

And, since the whole ecosystem is pretty much open, there's nothing actually keeping one from at least trying to address these problems for their particular configuration, apart from needing to learn how to do so. In a sense, working on open source is exactly putting your money where your mouth is, even if it's just opportunity costs.


Yeah... when you have a home-built PC with hardware unsupported by *nix and have invested lots of $$, you're still in for a hard time. First, there's the hardware cost. I'm not throwing my Surface out, and I'm not throwing out my perfectly working desktop components either. But that aside...

Every time I read how the great unwashed should just contribute fixes already to benefit everyone, I despair. I write code. I build my own PCs and I have a rather good knowledge of how to debug, fix, optimise and otherwise maintain all my software and hardware.

I happen to also LIKE my software and hardware.

If I were to start digging through Github repositories and... whatever other unknown resources I don't know the search terms to even search for I'd probably shoot myself.

The point isn't that Windows is good. The point is that *nix assumes everyone's a *nix rocket surgeon and has nothing else in the world to do than spend days getting the blasted thing to deliver an equivalent level of productivity I get from my Microsoft setup.

As much as I acknowledge everyone's right to like what they like, and agree that *nix might offer one or two better things than Windoze there's just no way...


For Linux there is supported hardware from vendors like System76 or similar. You don't expect to run Windows on Chromebooks, Android smartphones or M1 MacBooks without issues, because those vendors only optimize & test for their own OS. But since Linux is such an open platform, you have a large grey area of "vendor didn't test it, YMMV". It may work fine, but you never know if the vendor decided to e.g. ignore standards for reboots and only react to the exact things that Windows does (this also applies to macOS running on other devices, but it's way worse). Most devices have an entry in e.g. the Arch wiki describing whether they work OOTB, work with hacks, or have parts that just don't.

There are devices where the vendor tries to support linux, e.g. Thinkpads, but if you're using Nvidia it's still a pile of hacks in the background.

I haven't needed to contribute any code myself nor use non-upstream drivers, but the process to use non-upstream drivers is usually pretty streamlined in arch (packaged in the aur).

And since we're in hacker news: Programming things close to the hardware can be a great learning experience, maybe not when done under pressure.


That was why I bought an Asus 1215B Netbook with GNU/Linux support out of the box, an Ubuntu distribution actually.

First the wlan stopped working, because Canonical decided to replace the proprietary driver with the open source one; however, since it was still a work in progress, dozens of us had to suck it up with LAN until the open source driver got feature parity.

Then my GPGPU experience got degraded from OpenGL 4.1/hardware video decoding to OpenGL 3.3 with the replacement of the AMD driver (fglrx) with the open source radeon one.

So in the end, it doesn't matter that much buying supposedly supported hardware.


> Every time I read how the great unwashed should just contribute fixes already to benefit everyone, I despair. I write code. I build my own PCs and I have a rather good knowledge of how to debug, fix, optimise and otherwise maintain all my software and hardware.

But I'm not saying that everyone should contribute, or even that everyone can. I know that I'm certainly not experienced enough in systems programming to do that, being more proficient with higher-abstraction-level languages instead.

That said, the fact that there's the possibility of contributing, at least for some people, is better than what Windows and other OSes let you do. Not only that, but if the problem is annoying enough and frustrates enough people, it's likely that there's actually someone amongst all of them who cares enough to fix the problem for everyone.

It might take a few months or years for the fix to land in the mainline, but the end result is still better than looking at a black box and having literally no options to get it to do what you want.

If a new Debian update breaks my GRUB install, it's likely that there will be people on GitHub discussing workarounds and fixes within minutes. If my off-brand hardware doesn't work as I'd like, it's likely that I'll have to do some digging, but the chances of me finding a fix aren't 0 either. If the same happened with Windows drivers, I'm not entirely sure what I could even do, since my hardware setup is something that almost no one actually cares about. In the case of *nix, if I were skilled enough, I could probably dig around the source myself and work on fixing it, as someone luckily had.


> If I were to start digging through Github repositories and... whatever other unknown resources I don't know the search terms to even search for

The package managers and their associated websites tell you exactly where the sources are; you don’t need to search for them yourself. This is how I found some quotes for `fortune` from CentOS that I liked which weren’t present in the macOS version.


What kind of hardware are we talking about here for a self-built PC which isn't supported by Linux?


- Intel Core i9 with 64 GB RAM
- Samsung 500 GB 2.5" SSD
- 2 x Western Digital 4 TB mirrored disks
- NVIDIA GeForce GTX 1650
- Sharkoon 600 Watt power supply
- 2 x Samsung U28R55 28" 4K IPS panels
- Das Keyboard 4 Professional
- S.M.S.L M6 Hi-Fi Audio USB DAC with headphone amplifier
- 2 x Behringer MS40 digital 40-watt stereo near-field monitors
- Sennheiser HD 650 open-back headphones
- ELP 1080P wide-angle webcam
- MANLI omnidirectional condenser microphone


Everything apart from the DAC should work (that may also work, but I'm not sure). The most important factor however, the motherboard, is missing in your post.

The GPU should work on distros like Ubuntu or Pop!_OS by default, but due to Nvidia driver shenanigans you might have to do some manual config on e.g. Debian.


+1. Linux support is pretty good these days... unless you're trying to run it on a Surface Pro or something. I'm just worried that I won't be able to find any alternative to the iPhone. The Librem and Pine phones look promising... it's just not clear how long it will be till they're stable enough to use as daily drivers.


I recently spent a full day trying to debug wifi on my dual-boot desktop with "newish" hardware. The wifi would occasionally drop 100% of packets, for between 5 seconds and 10 minutes, and then go back to normal. I tried installing different drivers and following some askubuntu forum posts, but nothing seemed to work.

I'm not an expert on hardware stuff so I haven't got the knowledge to dig deep and find what caused it. But on Windows this stuff "just works". My impression is that Windows has "solved" wifi.

In the end I just bit the bullet and connected the ethernet cable...


Make sure Windows is shut down completely, not in hibernate. It's not like Windows has "solved" wifi - it created the problem in the first place.


It's not bad at all, in fact: it's great! I just switched from an M1 mac and an Intel MacBook Pro to a Manjaro desktop with Gnome 4. In several ways, I found the Gnome 4 user experience to be better than macOS Big Sur. Even 3 finger swipe gestures with the Apple Trackpad work fine to switch between desktops. And my PS5 DualSense controller? Even the touchpad works, in a Wine/Windows application under Linux of all things!

With proper cooling, the machine is near-quiet on light loads like browsing. The background noise in my house is generally higher than the idle fan noise. It's obviously noisier with higher loads, but that's what you get with a beefy graphics card. (the CPU cooler has a 24 db upper limit).

I also had no issues with Bluetooth or AX WiFi. Resume-after-suspend works solidly too. The only hiccup I had was that my graphics card is too new (Radeon 6700 XT) and that I had to get a newer Manjaro ISO from GitHub, rather than from the main website.


You omitted resume from suspend, which is still a frigging problem. At least some distros finally support UEFI, so your USB key install doesn't freeze on the first screen.

But yeah, wifi and Nvidia are solved - just a black screen and a single-user-mode startup the first time to deactivate the open source drivers for some obscure reason (I understand they prefer them, but why does it crash... might as well just ship the Nvidia ones directly).


Resume from suspend works fine on my Linux laptops and desktops.


A year and a half ago, we tried to convert a well-spec'd Dell laptop from Windows to Ubuntu for a new dev. We couldn't get the wifi to work. The dev now uses a MacBook.


There are tons of github repos for non-mainline wifi chips. Did you check if you could find something for the model you had?


I can't really remember. I think I tried looking for some error message, perhaps in combination with the model, but it simply took too long. It may be fine when you're tinkering with a hobby machine, but at work it's basically me who can support Linux, so it was a no go. Wifi was not a solved problem at that moment, from my POV.


Linux has all kinds of issues, but I haven’t had a single one with drivers in the last (probably over) 7 years, whereas I had a few on Windows.


Until they can manage to make updating software seamless, it will continue to not be viable for mainstream.


Fun fact: We had to switch from Lenovo to something else because wifi did not work reliably in fedora.


Which Lenovo model?


Sounds very odd indeed, since Lenovo support is among the first to hit the kernel. Maybe the model was too new and a bit of patience was required.


My 1215B Asus Netbook with Ubuntu doesn't share that opinion.

It is basically the one surviving device I still bother to run GNU/Linux on bare metal, and it was sold with Linux support from the get go, yet....


Can I upgrade video drivers without being dumped to a command line?


On Ubuntu, Nvidia drivers update seamlessly as part of system updates. On Windows, I have to sign in to an Nvidia account, solve 3 captchas and then manually download half a gig of "driver".


Last time I had to install either AMD or Nvidia drivers there was GUI installers. Or in the case of AMDGPU the driver is already in kernel and is updated automatically via package manager.


You can with AMD cards, presumably Intel too. Applications, including the display server, will continue to use the old drivers until they are restarted of course.


With Ubuntu 18, no. Updating it fails roughly as often as it works. And the error messages are obscure and confusing.

I didn't try Ubuntu 20 because it has known incompatibilities with software that I use.


Ubuntu, PopOS and Manjaro have GUI utilities for driver installations.


Or Dell.


Whoever controls the hash list controls your phone from now on. Period. End of sentence.

Apple has not disclosed who gets to add new hashes to the list of CSAM hashes or what the process is to add new hashes. Do different countries have different hash lists?

Because if the FBI or CIA or CCCP or KSA wants to arrest you, all they need to do is inject the hash of one of your photos into the “list” and you will be flagged.

Based on the nature of the hash, they can’t even tell you which photo is the one that triggered the hash. Instead, they get to arrest you, make an entire copy of your phone, etc.

It's insidious. And it's stupid. That Apple is agreeing to do this is disgusting.

And it doesn’t make sense. If I were a pedophile and I took a new CSAM photo, how long would it take for that specific photo to get on the list? Months? Years? As long as pedophiles know that their phones are being scanned, they won’t use iPhones for their photos. And then it will be only innocent people like me that get scanned for CSAM and potentially getting that used against me in the future.

If they really cared about CSAM, this feature is useless and stupid. All it does is make regular people vulnerable to Big Brother tactics which we know already exist.


There are numerous incorrect statements in your comment.

First: Apple has disclosed who gets to curate the hash list. The answer is NCMEC and other child safety organizations. https://twitter.com/AlexMartin/status/1424703642913935374/ph...

Apple states point-blank that they will refuse any demands to add non-CSAM content to the lists.

Second: Why can't the FBI / CCCP inject a hash into the list? Here's a tweet thread gaming out that scenario: https://twitter.com/pwnallthethings/status/14248736290037022...

The short answer is that at some point an Apple employee must visually review the flagged photo, and confirm that it does represent CSAM content. If it does not, then Apple is under no legal obligation to report it.

Third: You claim that abusers will simply opt not to use iPhones to distribute their CSAM content rendering the feature useless. This is in fact not how things have played out on other platforms like Google and Facebook that do already scan for CSAM. These organizations report on the order of millions of flagged images per year. [1] Clearly the abusers have simply not moved on to a different platform.

[1] https://www.businessinsider.com/facebook-instagram-report-20...


You've done nothing to address OP's concerns. The linked Twitter thread assumes each actor (NCMEC, FBI, Apple) acts in a certain way. There's no "provable" guarantee against an actor acting in bad faith or in a manner inconsistent with certain interpretations of the law (which we've seen routinely with the NSA).

The FBI/NSA can absolutely inject something into the hash list. You're assuming that NCMEC needs to be involved. Or that it would be broadly known to Apple. The reality is that the hash list needs to be updated on a different cadence than iOS itself. So it's likely downloaded rather than baked into the OS build permanently. That means that you can't necessarily rely on an iOS build being signed to know if you have a different hash list from everyone else. Ultimately, a small team at Apple cooperating with a secret court order could release a different hashlist to a select set of devices. There's nothing really stopping that.

Even if Apple didn't comply, we've seen recently how sophisticated cybersecurity companies armed with zero days can manipulate devices easily. If the mechanism for hash lists scanning the device is already built in all it takes is an exploit changing the hashlist and where it reports to which might be much simpler than gaining full access to the device.


It's worse than that. They don't even need to release a different hash list for you; all they need to do is add a few images they know you'll probably have (say, from your Facebook or Instagram posts) to the regular DB to meet the match threshold. The people running the database aren't going to be continually going back through old images to double-check that they're actually CSAM or CSAM-related.


How do you know there aren’t bad actors working at the NCMEC? If I know that adding a hash to a list will get it flagged, and I could conveniently arrest or discredit anyone I wanted, I would certainly send people to work there.

How will Apple know whether a hash is for non-CSAM content? Spoiler alert: they won’t.

And Apple claims it will be reviewed by a human. Sure, just like YouTube copyright claims? Or will it get automated in the near future? And what about in China? Or Saudi Arabia or other countries with less human rights?

The point is that it is completely an easy way to get tagged by a government or bad actors as a pedophile. It’s sickening that Apple would let this “technology” into their products.


If you don’t trust what Apple says about this, why even argue?

Apple could be doing all of this and more without telling.

I agree with you on the point that the concern here is what various governments may mandate, but if we’re going to argue about Apple’s specific implementation you should probably understand it.


There's nothing to argue. I'm incredibly disappointed in Apple and feel betrayed. I went all-in with the Apple ecosystem because I stupidly and naively believed their commitment to privacy.


You don’t seem to have read what Apple has said on the issue so that feels a bit extreme.

And for the record I’m not for this, but my concerns are more about what various governments may start mandating as this capability becomes an option.

If you want on-device and cloud backup of data that isn’t checked for illegal content, I think that change needs to happen at the legislature not the phone store.


I'm telling you what is going to happen in the future. What they write today is meaningless. In 2016 they fought the FBI over decrypting data. Then they decided not to encrypt iCloud backups. In 2016 they didn't mention that they wouldn't encrypt iCloud backups, just like today they won't say that they won't bend to the Chinese government tweaking their algorithm or their hash list in a few years.


The only logical reason for Apple to implement such a complex system is to give them a defensive political tool to allow them to do things like encrypt iCloud backups and photos. “But CSAM” is a common political tool used against such encryption.

I don’t see this move from Apple making any measurable difference in how well a government can scan your device for arbitrary data.

If it happens it was going to happen anyway, your prediction of the future comes true with or without CSAM scanning, if you are allowing for new government orders and legislation.


The difference is that Apple has built the infrastructure for _global_, continuous image scanning. No need to hack individual phones or know who your targets are, all you have to do is get some images added to list. If you think the Chinese (or most likely American) government wouldn't love to be able to find out everyone who had copies of certain images, say from protests or inside camps, you're far too optimistic.

This is Apple creating a mechanism for worldwide surveillance and declaring they'd never compromise because they're good people whom we can trust. Note that they're not making any actual promises that might land them in court if the system is abused; they're just repeating that they're good people who'd never cooperate with authoritarian regimes.


All you have to do is:

- Get your image added to the list

- Order Apple to scan all images not just ones being uploaded to iCloud

- Order Apple to change the reporting threshold or remove the safety voucher system unless you only care about users with many matching images

- Compromise or order Apple to skip the human review and pass the reports directly to your government agency

Really you have to compromise or order Apple to change every step in the pipeline except “compare images to hashes” which is the part an intern could do in a weekend.

I’m sure governments would love to know which users have “undesirable” images on their device. I’m just saying instead of doing the above steps to take advantage of the CSAM system, they could just order Apple to scan for the images they want scanned for. Apple’s choice is the same, comply or risk participation in that market.


> How will Apple know whether a hash is for non-CSAM content? Spoiler alert: they won’t.

As I said, the flagged content is reviewed by an Apple employee before it actually triggers an external report. If the flagged material is not in fact CSAM, it will not be reported.

> And Apple claims it will be reviewed by a human. Sure, just like YouTube copyright claims? Or will it get automated in the near future? And what about in China? Or Saudi Arabia or other countries with less human rights?

First of all, the volume of flagged CSAM content is much, much smaller than the volume of YouTube copyright claims. It's entirely plausible to ensure that a human reviews all flags. Second, Apple is actually constitutionally barred from automating this step entirely. You can thank Neil Gorsuch's decision in United States v. Ackerman for this. [1] The crux is that since NCMEC is a quasi-governmental entity, automatically sending CSAM-matched content to NCMEC without an Apple employee first inspecting the content would constitute an unreasonable search and seizure and would violate the 4th Amendment.

[1] https://library.law.virginia.edu/gorsuchproject/united-state...


> How will Apple know whether a hash is for non-CSAM content? Spoiler alert: they won’t

Spoiler alert: they are building the hashing algorithm, so there is at least someone able to confirm everything at the beginning. They are not just hashing hashes.

The stakes are so high in this game that they must be very careful in review. This is not like YouTube copyright claims.

> And what about in China?

I don't think that China even cares a lot. For some minorities it is already mandatory by law to install certain apps. They have the surveillance already.


By the same logic, how do you know there aren't bad actors working at Apple's software teams? Or your insurance company or bank?


If you think that FAANG and other tech companies aren't filled with spies from around the world, you're incredibly naive.


> Apple states point-blank that they will refuse any demands to add non-CSAM content to the lists.

How would Apple know if non-CSAM was added to the list? Apple does not and cannot curate the list. Apple only receives hashes from NCMEC (and other unnamed government agencies). The government does not allow Apple to verify that this list only contains CSAM. This pledge from Apple is at best misguided, at worst intentionally dishonest. Of course nobody can make Apple add non-CSAM to the list: Apple doesn't maintain the list.


Do you work for Apple or the government? You just sounded like a lawyer talking about similar but unrelated things to win the case.


What on earth? To clear up some of your false statements:

- You need several hash matches to trigger a review

- The reviewer can of course see what triggered the review (the visual derivative)

- The reviewer would see that the matches are not CSAM, and instead of the report being sent on to the NCMEC it would instead start an investigation of why these innocuous images were matched in the first place

- If the CIA or FBI or CCP wanted to arrest you, there are much easier ways than this


Apple is putting a backdoor in the phones that could be used to frame anyone they want. The commenter's claims are valid and, in my opinion, not limited to those.


A couple of gripes there.

Apple basically controls your phone anyway and has done for years, as they can issue patches and OS updates.

Also, you can turn iCloud Photos off - I've never used the thing in spite of owning various Apple devices. I do use Google Photos and doubt they are much different in terms of checking for CSAM.


I can imagine this very well being extended not just to match photos, but also metadata within the photos. And then the list of things it matches against being extended from CP to other things society deems undesirable.

After a 12 hour flight - that was of course delayed - Liam was pretty exhausted, but was looking forward to getting to his hotel in the center of Munich. He got to the front of the queue, and handed his passport over to the customs officer. The officer scanned Liam's passport, took it off the reader, and after 20 seconds asked "You flew from Los Angeles today?". Liam replied "Yes...". The customs officer, with his firm German accent, said "I need to check something with my colleague, wait here.". Not that there was anywhere Liam would go.

The customs officer came back with someone else who was slightly older and clearly more senior. The senior officer said "Come with me please", and led Liam to a room at the side of the customs hall. The officer said "Sit down please", indicating to the chair in front of the desk. The room looked like any other office, with a computer on a desk and chairs either side. The officer sat behind the desk and started typing something on the computer. After a few minutes he said "You are wanted by Interpol".

The customs officer explained to Liam that he had been flagged because a photo he had taken 6 months before included a known terrorist, and so by association Liam had been flagged. Liam asked how they accessed his photos - he is tech savvy and only takes encrypted backups onto his own devices. The customs officer explained that they didn't need to, as this flagging had been done entirely by his device. The customs officer gave the date of the photo, and Liam found it on his device. He had been on holiday with his girlfriend in Paris, and they had taken a selfie. There was someone clearly visible behind them, and the customs officer explained that the facial recognition had identified this person. Due to privacy laws he wasn't able to say (or even see himself) who this person was, only that they were on the highest German terrorist watch list.

From the photo it looked quite obvious that this person was just a passer-by who glanced at the couple just at the moment they were taking a photo. The customs officer took Liam's fingerprints and asked Liam questions about his trip to Paris - typing the answers into the computer - and then the computer decided that Liam could be released. However, the customs officer told Liam that he would be closely monitored while in Germany, and might receive 'check-in' calls from police. He told Liam he must answer them, otherwise a team would be dispatched to intercept him. Liam was then allowed to go on his way. He was only delayed by 45 minutes, but it wasn't a great start to his holiday in Germany. 3 years later when he visited Germany again the same thing happened; at least he knew what to expect this time...

(This is partially based on something that actually happened to me. Nearly a decade ago my passport was stolen, and every time I go to Germany I need to have a fun conversation with customs officers. Every other country I've visited - including the US - lets me through without even mentioning it)


>The worst part is: how do I put my money where my mouth is? Am I going back to using Linux on the desktop (2022 will be the year of Linux on the desktop, remember)

People really need to retire this meme. On the desktop, particularly as a dev environment, Linux is completely fine at this point. I can understand people not wanting to run a custom phone OS, because that really is a ton of work, but for working software developers Fedora, Ubuntu, or whatever mainstream distro is at this point largely hassle-free.


While I don't expect any Linux phone to become "mainstream" any time soon, it would be good if we had at least one "polished" alternative available.

PinePhone is still in beta and according to its own creators "aimed solely at early adopters"[1], while Librem 5 is experiencing supply chain issues with backorder shipping now scheduled to resume in October[2]

There is a version of the Librem 5 which is made in USA and it's in stock and shipping now, but unfortunately outside of my budget[3].

I was also considering getting something like Fairphone and installing an alternative OS but looking at compatibility charts there are some things that may not work with one OS or another.

So, right now I can't have a daily driver that is not iOS or Android, I will hold onto my very old smartphone and hope that things will change in the next year or so. I'm working from home for the foreseeable future so I can wait a bit.

[1] https://pine64.com/product/pinephone-beta-edition-linux-smar...

[2] https://shop.puri.sm/shop/librem-5/

[3] https://shop.puri.sm/shop/librem-5-usa/


The problem will always be the hardware. Until we get some 80s-IBM-style open architecture on mobile, there is no viable Linux for smartphones.

I have 4 phones. None of them support LineageOS.


I recently saw a review of the "Volla Phone" which is running Ubuntu Touch. Seemed much better than the Pinephone but still has issues:

https://youtu.be/neG2Z21epLI


> Seemed much better than the Pinephone

Just curious, how is it better than the Pinephone?


I am mostly going by what I saw in the review. I saw the review of pinephone on Linus’ channel where it seemed very slow. The review of volla phone seemed much better. Maybe I am wrong though as I haven’t owned either.


Pinephone is only slow if the software is not optimized. It's true for most OSes, but have a look at this: https://sr.ht/~mil/Sxmo/.


> Until we get some 80's IBM style architecture on mobile, there is no viable Linux for smartphone.

Not sure what you mean. Pinephone and Librem 5 both have very well documented hardware with first-class support of desktop (!) Linux.


There was one but people wouldn't "put their money where their mouth is".

The Jolla phone was made by ex-MeeGo/Maemo devs from Nokia, but nobody bought them.

Now they're focusing on just the OS and they don't make any full fledged devices: https://jolla.com/


Yes, I knew about them, they're not the only open source phone that died out unfortunately. Hopefully things will change now that privacy has become something even the "average Joe" worries about.


I hate Ubuntu from the bottom of my heart for breaking stuff and changing stuff that used to "just work" all the time, but 99.999% of the time that means "background stuff" that "normal users" never mess around with, and for normal users, a "USB key -> install -> next, next, next -> finish -> reboot" just works.


I used to use Ubuntu for many years, but it became such bloatware. So many things you don't really need.

Packages were sometimes also different compared to vanilla Debian. This caused issues in stability (talking more about the feature set). Some advanced software that worked on equivalent vanilla Debian just did not work.

I might recommend Ubuntu to a very beginner developer, but not to stick with it for a longer time. It will give you headaches. There are also more privacy-friendly distributions.


Are Mac and Windows not also full of an enormous amount of crap that we don't need?


That does sound quite bad, doesn't it? That some Linux distributions are getting closer to them? We have freedom, let's use it. I'm using it for something minimal, like Arch Linux.


I used to run Arch but I got sick of trying to fix my Grub/EFI settings etc. which it would occasionally break.

Freedom to me is being able to ignore the OS' existence, and frankly Ubuntu is better at that than Arch.


I’ve been using Ubuntu and now pop os on a Thinkpad for a couple years now, and I don’t miss Windows (which I used since…well, DOS 6) at all. Quite the opposite. As time goes on, seeing what’s happening with MacOS and Windows, I’m more and more happy that my computer is actually my computer.


Good luck if you have two screens with different DPIs.


Hello, user with two screens with two different DPIs (and two separate resolutions as well, and one of them a giant touch-screen drawing tablet). Nice to meet you, I use Arch btw.

X11's handling of different DPIs is annoying but workable; there's a couple of different possible methods of handling it that have their own pros/cons. Per-monitor scaling is supported, but I personally don't like the way it's handled, so instead I just pick a happy medium scaling that works OK for both monitors. My understanding is that Wayland makes this easier, but I haven't switched over yet because I'm waiting either for GPU prices to drop or for NVidia to figure out whether it's ever going to play nice with Wayland.

There are definitely pain points with Linux, but it's completely serviceable as a workstation computer, the meme is really dead at this point. If you're on a touchscreen device, Gnome's most recent release arguably has comparable if not better touch handling than Windows (admittedly not a high bar to clear, but remarkable considering how bad Linux's touchscreen support used to be). I use a Mac at work so I'll fit in with my coworkers, but outside of work I do not own a single computer with Windows installed on it.

I'm not going to tell everyone to switch to Linux, there are very valid reasons why someone might not want to, including an increased technical burden. That's real, it's just not the giant hurdle that a lot of people seem to think it is. The "year of the Linux desktop" is really out of touch in my experience, modern Linux as a desktop OS is fine; it's perfectly serviceable as a professional environment for a lot of people. I use Linux in part because it makes it easier for me to get an ergonomic setup for drawing tablets, device compatibility, etc...

And at some point I figured out that I don't really care what desktop Linux's market share is, because even <1% still seems to be big enough that the desktop stays usable for professional work and for more complicated device/media setups, which is all I need it to do.


So your answer about different DPIs is that you solve it by using a single DPI setting for both monitors?

And the “year of Linux on the desktop” meme isn’t about it not being a workable desktop or workstation OS. Of course it is and many people use it as such.

But it’s not a mainstream option that puts any kind of competitive pressure on the alternatives. You can’t go into Best Buy and walk out with a linux laptop/desktop that doesn’t have some proprietary layer (Chrome OS, Samsung, Google services) baked in.

So yes Linux works on the desktop, but “everyone should just use Linux” isn’t a realistic answer to issues like this one from Apple.


Per-display scaling works in X11. I happen to use a single DPI setting that's an in-between for both monitors, but that's not a requirement, you can scale per-monitor if you prefer that. Wayland will handle this a bit better, but even in X11 you have options.

I agree that "just use Linux" isn't an answer to privacy concerns in the mainstream, if for no other reason than that the current conversation about Apple is a conversation about phones, a platform where Linux is not currently usable. Purism/Pine phones are exciting, but they're not usable smartphone alternatives yet. Really if we want to talk about Apple's recent changes, I'm not sure any desktop OS is relevant to that conversation.

My frustration here is that a lot of the people who use "year of the Linux desktop" seem to have a really outdated view of what Linux is actually like as a daily driver. HDPI scaling was a huge problem on Linux for a long time. It's a lot better now.

The main barriers for Linux adoption at this point aren't technical. The barriers aren't trying to handle multiple displays, or projectors, or wifi cards, or even getting hardware -- there are multiple companies now that sell good preloaded Linux laptops. That doesn't mean the market share is magically there or that we're going to see a mass migration to Linux any time soon, but the market share is good enough that Linux gets decent support, and stable enough that Linux gives me generally fewer problems than Windows used to (that may have more to do with the quality of Windows degrading though). For a long time that wasn't the case; even until a couple of years ago I would have said that Linux was way behind the curve on touch/tablet support. That gap is pretty much gone now as far as I can see.

That part of the meme that's related to grishka's "sure Linux is a workable desktop, just don't hook any monitors up to it" comment -- that part is dead as far as I'm concerned.


I don't have any issues using multiple displays with different scaling settings on Linux.

Pop! OS, Ubuntu, and Linux Mint are among the distributions that flawlessly accommodate different scaling settings for multiple displays without any special adjustments.

On Wayland, both the GNOME and KDE desktop environments support multiple scaling factors.

For GNOME on Wayland, if you need fractional scaling, you'll need to turn on a setting if your Linux distro doesn't do it for you:

https://www.omgubuntu.co.uk/2019/06/enable-fractional-scalin...
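
If I remember correctly, the setting that article walks through is mutter's experimental fractional-scaling flag. A rough sketch of flipping it from Python (the schema, key, and value here are from memory; double-check the linked article for the exact, current invocation):

    # Hedged sketch: turns on GNOME/mutter's experimental fractional scaling on Wayland.
    # Schema/key/value are assumptions from memory; verify against the article above.
    import subprocess

    subprocess.run(
        [
            "gsettings", "set", "org.gnome.mutter",
            "experimental-features", "['scale-monitor-framebuffer']",
        ],
        check=True,  # raises if gsettings is missing or the schema is unavailable
    )

You typically need to log out and back in afterwards for it to take effect.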


They can retire the meme when any measurable fraction of mainstream users are using Linux without a proprietary software layer provided by a big tech company.


Imagine taking a photo, or having in your gallery a photo, that a dear leader doesn't want to spread. Ten minutes later you hear a knock at your door. That's what I'm most worried about: how is this not creating the infrastructure to ensnare political dissidents?


I am profoundly disappointed that almost all of the discussion is about the minutiae of the implementation, and "Hmm.. Am I ok with the minutiae of Apple's specific implementation at rollout?" And almost nobody is discussing the basic general principle of whether they want their own device to scan itself for contraband, on society's behalf.


But Apple says, "You need several hash matches to trigger a review." See, that makes it OK!


Maybe people realize that’s not a winning strategy and thus keep going back to technical details…


> Am I getting a Pixel and putting GrapheneOS on it like a total nerd? FUCK.

Thanks for depicting people who care about privacy and act on their beliefs as "total nerds", that's an encouraging attitude.


I'm a nerd and don't take it as an insult. Rather as "going full nerd on this" in the sense of giving a lot more importance and effort to a small detail in an already full life.

And even if it was not, let's not get upset for every quip, I don't want to live in a world where bloggers have to ponder every word because they are afraid of offending someone. A bit of spice is ok, the dose makes the poison.


> I don't want to live in a world where bloggers have to ponder every word because they are afraid of offending someone.

I don't find it offensive, I find it stupid to write like that; that's not the same thing. So you write a long piece about how Apple is bad for privacy, only to dismiss the available alternatives at the end because, you know, you don't like them for no particular reason? Who said privacy was going to be easy anyway?


As Stallman said - people are trading privacy and freedom for convenience.


Nerd in 2021 is equivalent to "not casual/mainstream". If you consider sideloading OSes, you are in fact a nerd.


It's not even remotely difficult to install GrapheneOS, so not sure how much of a "total nerd" you need to be to follow simple instructions. Has education dropped that low already?


Unless Apple can demonstrate that the techniques they are using are intrinsically specific to CSAM and to CSAM only--the techniques do not work for any other kinds of photo or text--slippery slope arguments are perfectly valid and cannot be denied.

Apple is a private company and as such its actions amount to vigilantism.


Hi, post author here.

To anyone upset or offended by the Linux/nerd paragraph: please chill, and please forgive my tone.

I am a nerd myself indeed, and what I wanted to convey by this not-as-funny-as-expected paragraph was that "going full nerd" is not a solution. There are ways to protect your privacy that will not be available to less tech-savvy people, and that's a problem. The HN crowd will use Thinkpads with Arch on them, and phones with Graphene or whatever, but most people won't.

Yours, Absolute nerd and lover of desktop Linux since SuSE 6.0


> HN crowd will use Thinkpads with Arch on them, and phones with Graphene or whatever, but most people won't.

Non-nerds can just buy devices with preinstalled Linux and never worry about maintenance or support. I never had any problems with WiFi or suspend on my Librem 15. I expect the same from the Librem 5 smartphone.


I doubt Apple has not thought about the PR & policy consequences of such an iPhone backdoor. For me, it's even more sad to see Apple using the fight against CSAM, a noble cause, as a shield and a way to convince the masses that breaking its promise to protect privacy is OK. "What happens in your iPhone stays on your iPhone [no longer]". There is no court oversight, no laws, it's automated mass surveillance.


There is a great deal of misinformation and confusion on this topic. Here is a good interview with Apple's head of Privacy.

https://techcrunch.com/2021/08/10/interview-apples-head-of-p...


What part exactly do you think people are confused about?


The only information I care about is that Apple is putting a trojan in my phone to check anything I do with it. That's where this is going. This is all messed up; no random BS article can change what's going on.


I used to always get the latest and greatest iPhone, but with the politics and everything that's going on, why would I want to spend more than the absolute minimum on my cellphone? There are plenty of wholesome things to spend money on other than tech.


> The hypothesis that I have is that Apple wishes to distance itself from checking users’ data.

This is the best explanation of the whole situation I have read.


And in the process hand over more user data to tyrants.

Surely they know this will be abused to check user data before it is uploaded to iCloud. All it takes is a willing government.


How is that different from how things work now?


Before, Apple would only scan data already in the cloud.

Now Pandora's box has been opened. They are adding the capability to scan files on iPhones before they hit the cloud.

Any technical or financial excuse they might have used in the past to not scan files locally is now rendered null.

Governments can just say: "You know what? Scan for these arbitrary sets of hashes as well; they're illegal in my jurisdiction, and since you've shown that you can, scan for them regardless of whether the user is sending to iCloud or not."


> Any technical or financial excuse they might have used in the past to not scan files locally is now rendered null.

This just proves that people don't understand much about technology in depth. The capability has been there already, for a very long time.

99% of their work has gone into implementing that perceptual hashing function and their PSI system.

If you want excuses, there will always be more. But they are not reasons that prevent this Pandora's box.


> Capability has been there already, for very long time.

The capability was always there, but it wasn't implemented. Now arguments like excessive battery drain or processor usage, or any other argument they could have come up with, can no longer be used, since they went ahead and implemented software that scans iPhone files.

Perceptual hashing is a mere detail of how they are scanning files TODAY. Same for scanning only files that are about to be sent to iCloud: a mere detail that can be changed at any time and surely will be requested by tyrants around the world.


I think you misunderstood a bit what I said. The capability was already there, and already implemented: for example, the neural nets that categorize your photos by scanning all of them, detecting faces and giving them names, and so on. Even your Files app scans all of your files, and probably collects some metadata for the iCloud sync process.


Good thing this didn't exist in 1776, or I'd be living in Great Britain.


From A Concrete-Security Analysis of the Apple PSI Protocol:

> Taking action to limit CSAM is a laudable step. But its implementation needs some care. Naively done, it requires scanning the photos of all iCloud users. But our photos are personal, recording events, moments and people in our lives. Users expect and desire that these remain private from Apple. Reciprocally, the database of CSAM photos should not be made public or become known to the user. Apple has found a way to detect and report CSAM offenders while respecting these privacy constraints. When the number of user photos that are in the CSAM database exceeds the threshold, the system is able to detect and report this. Yet a user photo that is not in the CSAM database remains invisible to the system, and users do not learn the contents of the CSAM database.

https://www.apple.com/child-safety/pdf/Alternative_Security_...


So Apple is going to take care of positive matches with highly reliable and trained personnel? Just like their highly trained personnel who kept the App Store clean of shitty apps? :')


Apple implemented a backdoor that scans your photos on your device, then alerts Apple and the authorities if there is a match against an un-auditable list of reference photos.

Currently it's been activated for CSAM only and only scans photos backed up to iCloud.

That's the framing I prefer and which much better explains the issue with it.


Apple is not the police. Apple is not an extension of the United States government. There is simply no reason for Apple to enable local scanning for any content whatsoever.


Apple was (is?) part of PRISM


My common sense is tingling, telling me that Apple's eventual move will be one of malicious compliance, finally implementing E2EE in a way that provides them with plausible deniability and users with a much-desired privacy enhancement.


I turned off iCloud Photos tonight. F** Apple. If there is a collision, then it gets manually reviewed by a human... so now my private pictures are on display for someone I've not given permission to see them. Just Say No.


On the internet, almost everyone wants to extract money from kids and their parents, and they try to hook them with different mechanisms. That is also true for Apple, although Apple appeals to the protective instincts of their guardians.

I get why a safe environment is appealing. Parents know that their kids get milked by virtual goods in games or social media and don't know how to protect them from that. I think states are indeed responsible for setting sensible boundaries for the industry to protect minors.

But this cannot lead to subjecting the whole net to it. Age verification is also not possible, so a protected environment is the way to go. The latter is difficult to advertise to developers, because they also know about corporate ambitions to grab market share.

Google isn't even the worst actor; more aggressive corps like Amazon are far more destructive in this field. But it isn't a single corp that is guilty here, so legislation also needs to protect free spaces. While seemingly a contradiction, this is also extremely important for the digital education of future generations, even more so than questionable content in my opinion. Most here might have been subjected to that as kids. Was it as bad as generally assumed? This is a threat that should not be overblown. Parents who feel guilty about neglecting their kids are extremely vulnerable to this line of thinking, even if they don't neglect their kids at all.

Many countries have rules against cartels, but there is a conflict of interest here. No country likes to break up its most successful companies for nothing in an international market. So nobody does.



The main statement on that site once more does not explain why real criminals wouldn't just use non-backdoored software.

Indeed, it just looks like another move in the current crypto-wars.


> Do not implement end-to-end encrypted communications for accounts where a user has indicated they are under 18 years old.

It's repulsive how the NCMEC is pushing to deprive minors of privacy and agency, while simultaneously claiming to advocate for their benefit.


There is something you can do about it: don’t use Apple products


That strategy will last ~15 minutes until Google is doing the same thing.

Then what? I would argue that what Google is doing already is way more privacy-compromising than this.


Google couldn't do it, not effectively anyway, because Android vendors and variants are decentralised. They could do it for the Pixels, but that represents less than half a percent of the market - not to mention that everyone on a Pixel could just move to LineageOS if they didn't like it.

That freedom and decentralisation don't exist on Apple (at least not yet; maybe the DOJ or Congress will regulate them).


There’s nothing OS specific here. Google Photos could ship this on any Android distribution, and they have a billion users.


That's a great argument for a Linux phone or de-googled Android build.


Then don't use Google products either (or don't use for photography). Seems obvious.

"Dumb" phones and "dumb" cameras still exist.


That is not practical for many people who wish to be a part of society anymore. See: rollout of virtual vaccine passports, etc.


You can get a physical one by printing out a QR code, afaik.


Virtual vaccine passports will be forgotten long before they affect even a small fraction of people on this planet.


I really don't see why the scanning would ever be done on the phone instead of on iCloud if it only affects iCloud images.

But I do have guesses why.


That's the crux of it. Why bother with on-device identification, unless one of:

a. Apple intends to E2E encrypt iCloud data.

b. This is intended to extend to all photos on the device in the future.

I'm hoping it's (a), but it's probably (b). And in either case it sets a bad precedent for other companies to follow.

Edit: This also turns every jailbreak into a possible CSAM detection avoidance mechanism, giving the government plausible cover to treat them as serious, criminal actions. Apple would probably love that.


Where is this stance coming from that Apple needs to break E2E crypto to be "able" to "E2E encrypt iCloud data"?

That makes absolutely no sense. There is nowhere such a requirement.

They could just E2E encrypt iCloud data. Period.


It is curious that particular notion keeps getting repeated ad nauseam given it makes zero sense.

It's coming exclusively from Apple fans desperate to give them the benefit of the doubt on this rather epic implosion of Apple's brand when it comes to privacy.

So many people hooked themselves up to the Apple-is-pro-privacy wagon. They invested into the ecosystem across the board. They've been swimming in Apple's products & services pool for years or decades. Apple just went from hero to maybe villain on privacy. So now many of those people that are getting screwed over by Apple are going to be emotional about it, irrational about it, in denial about it.

It won't be used nefariously in the future. It won't be expansive, it'll remain the very limited program they say it is today. It won't be abused around the world by authoritarian governments. One of the greatest potential surveillance tools ever deployed and it won't be rampantly abused by the world's most powerful governments and agencies (all of which are hungry to spy on their citizens and or other nation's citizens). Yeah right.


There is no requirement right now, but you only need to look at what's happening in the US, UK, and EU to see the battle setting up around E2EE. Apple may see this feature as a way to quiet critics of E2EE. Hard to know if it will be enough.

But, I think it's safe to say if Apple did turn on E2EE w/o any provision for things like CSAM, it would help drive legislation that is likely more heavy handed.


Even if that were true, that Apple might accelerate attempted government overreach by encrypting data end to end, the obvious better solution would be to hold the line on the status quo for as long as possible (including legally duking it out with governments around the world to hold the line to the extent they can), not to do what they're doing now.

If they were badly losing on that front (such that all hope were lost so to speak), then they could attempt this new approach at that time, they could offer that up as a middle ground, and their seeming capitulation then would buy them a lot of slack. Instead, they're supposedly pre-capitulating (if you buy into that premise).

Doing what they're doing now, it looks like one of a few possibilities: they're just that stupid (I'm skeptical), they're groveling for points (hey, if we give you this, maybe you'll leave us alone on anti-trust, as our size and quasi-monopolistic position is beneficial to your surveillance aims), or they're already being compelled (as they were with PRISM) and are not publicly able to reveal that.


They could E2E iCloud, of course. Question is whether they could while still staying on the right side of the law.


Is there a law requiring device manufacturers to search (without any warrant!) the devices of all their customers?

How do for example hard drive manufacturers comply?


Even if it is (a), though, what is the point of E2E encryption if your endpoints are compromised? (There is none.) The only purpose at that point would be marketing, which does seem like something Apple would do, but I sincerely doubt that will be the case.


End-to-end encryption sometimes leaking raw content to a "trusted party". What a joke.


This article speculates that that's because Apple is not scanning on iCloud to respect their privacy policy: https://www.hackerfactor.com/blog/index.php?/archives/929-On...

Apple's report count to the NCMEC is really low so it's probably true that they are not scanning on iCloud unless they receive a warrant.


"In order to respect our privacy policy, we need to even more egregiously violate your privacy in a weird lawyer-y way"


The only semi-good reason is that it would enable E2E encryption in the cloud while still allowing detection of CSAM.


Apple is free to enable E2E encryption today, without the backdoor.


Except despite this being repeated over and over… Apple has not said anything about E2E


It's also being repeated over and over that Apple is doing this so they can later do some more evil scanning. They haven't said anything about that either.


Apple almost never talks about features like that until they’re ready, so while you’re correct, it doesn’t mean much.


It's been leaked before that Apple folded under pressure from the FBI not to add iCloud encryption for images.


That would seem to back up the theory that they plan to roll out E2EE and are adding on-device scanning first to enable that.


With all this backlash, now would probably be a tactically good time.


As I said, the design enables this, if Apple chose to do it. It remains to be seen if they will.


The design more plausibly enables total device surveillance than questionable iCloud backups. (I refuse to call a backdoored setup E2EE.)


That’s silly. The design is so narrowly tailored to scan for CSAM that nobody can use it for anything else.


It all depends on what perceptual hashes you use. If Apple can institute a process whereby those are tied to the OS version, but not to the region, then it would be impossible to impose jurisdiction-specific exceptions.


> It all depends on what perceptual hashes you use.

I’m talking about the mechanism as described, not a hypothetical.

> If Apple can institute a process whereby those are tied to the OS version, but not to the region, then it would be impossible to impose jurisdiction-specific exceptions.

As it is the mechanism they have built only works in the US jurisdiction.


> As it is the mechanism they have built only works in the US jurisdiction.

That is very worrisome. How do they expect to withstand political pressure to make the database of "bad hashes" dependent on the jurisdiction, if they've already made the feature itself dependent on the jurisdiction?


Presumably they’ll just say, we built a child abuse prevention mechanism and we have no intention of using it for anything else.


> As it is the mechanism they have built only works in the US jurisdiction.

How do you know? I think it's just implemented in the US. Nothing stops Apple from implementing it elsewhere.


They have said the mechanism is global in both their FAQ and in an interview with TechCrunch. They have no way to target a database to a region or device.


On-device scanning is used for the feature that warns teens (and, if they are 12 or younger, their parents).

The biggest mistake Apple has ever made was to roll out three different features at once and announce them at the same time. This is creating all sorts of confusion.


And CSAM detection as well.


"I don’t care what anything was designed to do. I care about what it can do!" <= Gene Kranz in 'Apollo 13'


"The hypothesis that I have is that Apple wishes to distance itself from checking users' data. They've been fighting with the FBI and the federal government for years, they've been struggling with not reporting CSAM content to the NCMEC, they don't want to be involved in any of this anymore."

However there is close to zero evidence to support this idea. I was just reading something the other day that directly contradicted this; it suggested the relationship has been excellent save for a single, well-publicised dispute over unlocking an iPhone. In other words, the publicly aired dispute was an anomaly, not representative of the underlying relationship.

Even more, unless the pontificator works for Apple or the government, she is not in a good position to summarise the relationship. Plainly put, it is not public information.

What does such baseless speculation achieve. Is it like spreading a meme. I don't get it.

"The worst part is: how do I put my money where my mouth is? Am I going back to using Linux on the desktop (2022 will be the year of Linux on the desktop, remember), debugging wifi drivers and tirelessly trying to make resume-from-suspend work? Am I getting a Pixel and putting GrapheneOS on it like a total nerd? FUCK."

Is having a computer with closed source wifi drivers and proper ACPI support more important than having a computer with an open OS that does not include an intentional backdoor.

Maybe the problem is not how to put your money where your mouth is, it's how to put your mouth where your money is. What does GrapheneOS cost. Maybe this is not about money.

Options like GrapheneOS, even the mere idea of GrapheneOS, i.e., that there can be alternatives to BigTech's offerings, get buried underneath Apple marketing. Much of that marketing Apple gets for free. It comes from people who do not work for Apple.

Bloggers and others who discuss computers can help change that. They can also help Apple sail through any criticism (and they do).


> However there is close to zero evidence to support this idea

But there is. For example, Apple put a lot of effort into this area when they built their hardware security module (HSM), which is basically in every iPhone and iPad.

This module is built in such a way that nobody, not even Apple, can access security keys from the device or reprogram it. A locked iPhone or password vault stays locked or gets wiped. One blog post about this matter: https://blog.cryptographyengineering.com/2016/08/13/is-apple...

How about user tracking? Apple is one of the few companies that has not yet been caught selling data to third parties, or even collecting more than is needed to develop their products.

Our new fancy iCloud system is maybe the biggest evidence? It is maybe the cleverest way to date to enable some kind of E2EE while getting limited info about the content. I highly recommend reading that PSI paper.


If Apple wished to distance itself from checking users' data, then it could stop collecting users' data in its data centers. This is the highest-valued company in the world. It would not put them out of business. Would it even cause any decline in revenue. Let's be honest here. Apple wants user data.

(NB There is no reason to "sell user data". For example, Facebook does not "sell user data". Apple and Facebook provide access to consumers. These consumers are ad targets. In the case of Apple, they are expected to purchase more stuff after they purchase an Apple computer. Apple intends to be an intermediary in those transactions.)


"They've been fighting with the FBI and the federal government for years..."

Besides a single anomaly, what is the other evidence.


Eh, I completely agree that this is a step too far, but the solution is so simple. Stop using Apple devices - luckily I switched from iOS to CalyxOS when my iPhone 7 broke earlier this year. Honestly, it wasn't so bad.


This is throwing the baby (pictures) out with the bathwater. I am, for better or worse, deeply rooted in the Apple tree (phone, laptop, tablet, and, recently, watch); for all its occasionally infuriating and arguably stupidly designed warts, the fact that so many features just disappear into the background and Just Work is something you can hardly say for other ecosystems.


That's the thing: for years I had to tolerate those silly issues from people who are supposed to be the best in the industry. There is still no default calculator installed on the iPad in 2021!

For some people, it's simply not worth it anymore, once the primary commitment is gone.


Any idea why Apple didn’t just implement server side scanning like everyone else?


In this TechCrunch interview, Apple believes it is less invasive since no one can be individually targeted.

The hashes are hard coded into each iOS release which is the same for all iOS devices. The database is not vulnerable to server side changes.

Additionally, FWIW, they do not want to start analyzing entire iCloud photo libraries so this system only analyzes new uploads.

https://techcrunch.com/2021/08/10/interview-apples-head-of-p...


>The hashes are hard coded into each iOS release

Do you have a source on that? Since it is illegal to share those hashes in any way or form. Even people working in photo forensics and at big photo-sharing sites cannot get access to them. I very much doubt Apple can incorporate them into the iOS release without breaking multiple laws. The hashes themselves can easily be reversed into (bad-quality) pictures, so having the hashes equals having child pornography.

Edit:

https://www.hackerfactor.com/blog/index.php?/archives/929-On...


From the interview I linked, Apple Privacy head Erik Neuenschwander said, “The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled.”

Where did you hear that sharing hashes is illegal? How would anybody detect CSAM at scale without those hashes?

Your hackerfactor source states, “In 2014 and 2015, NCMEC stated that they would give MD5 hashes of known CP to service providers for detecting known-bad files.”


The important part about the hackerfactor link is that the author claims he was able to reverse PhotoDNA hashes into images. Of course Apple has their own, different perceptual hashing algorithm NeuralHash which they use, but if the hashes of a similar system can be reversed into images, maybe NeuralHash hashes can be reversed as well. Therein lies the problem.

Edit: i.e. OP isn't talking about MD5 hashes


NCMEC will share MD5 but not the hashes used for perceptual matching.


I think you're forgetting that Apple isn't using PhotoDNA. Just because PhotoDNA hashes can be reversed into images, doesn't mean Apple's perceptual hashes can be reversed into images. I think there's a good chance they can be though.


No, these hashes can’t be reversed to an image. They’re not CSAM and therefore not illegal. That blog is not very good, either from a tech standpoint or a legal one.


They are perceptual hashes of CSAM images, no? Sure, just because they are perceptual hashes doesn't mean they can be reversed into an image, but it seems to be true that PhotoDNA, a similar perceptual hashing algorithm, can be. For this reason, I believe it is possible Apple's perceptual hashes can be reversed as well, though maybe they did it in a different way and it's not possible.


PhotoDNA is Microsoft’s algorithm. Apple’s new one is NeuralHash. Plus the hashes are encrypted and blinded before being stored on the phone. They can’t be reversed.


In order for this system to work, Apple has to be able to compare the CSAM NeuralHash hashes against the NeuralHash hashes of the images to be synced with iCloud. How do they compare the hashes without decrypting?


They compare the encrypted forms of the hashes, the first step on the device and the second on the server.


How does one compare encrypted forms of hashes?


You hash images in the same way, then encrypt the hashes in the same way, and see if the results match.
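
Roughly like this (a deliberately simplified toy, not Apple's actual PSI protocol; the real scheme splits the blinding between device and server with elliptic-curve tricks so neither side ever holds both the key and the raw database, and the key name below is made up):

    import hashlib
    import hmac

    # Hypothetical key; in the real protocol no single party that also sees the
    # photos holds the full ability to unblind the database.
    BLINDING_KEY = b"hypothetical-blinding-secret"

    def blind(perceptual_hash: bytes) -> bytes:
        # Deterministic keyed transform: equal inputs give equal outputs,
        # but raw hash values are never compared or exposed directly.
        return hmac.new(BLINDING_KEY, perceptual_hash, hashlib.sha256).digest()

    known_db = {blind(h) for h in [b"known-hash-1", b"known-hash-2"]}
    candidate = b"known-hash-2"  # hash of a photo about to be uploaded

    print(blind(candidate) in known_db)  # True -> a match, found on blinded values only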


Then you are just comparing hashes. I don't see what you gained by encrypting the hash.


That was a very insightful article from a legal perspective. I strongly recommend others read it to get a more nuanced picture.


> Since it is illegal to share those hashes in any way or form

Source? (The link you provide does not claim that, as far as I could see.)


The article claims that photoDNA is reversible to 26x26 images and claims that the hashes are therefore CP.


Ah, right. But Apple is using NeuralHash, not photoDNA, right? Does that suffer the same problem?


We don't know, but I think there's a good chance it does. It's important to determine before release. Maybe it's possible to encrypt them in such a way that no one can access them despite their being on the device, while still being able to use them for comparison.


Yes, that's what the paper A Concrete-Security Analysis of the Apple PSI Protocol from UC San Diego claims:

> Reciprocally, the database of CSAM photos should not be made public or become known to the user. Apple has found a way to detect and report CSAM offenders while respecting these privacy constraints.

https://www.apple.com/child-safety/pdf/Alternative_Security_...


Potentially to be able to introduce E2E encryption later on.


It's a good question. The only explanation that makes sense is that this now allows them to begin end-to-end encryption of iCloud photos. I see that many commentators are claiming that they could already have started e2e encryption without introducing this "backdoor." While this is true, Apple would then be creating a perfect environment for child abusers to house their CSAM content on Apple's own servers. You can understand why Apple might not want to do that.

This allows Apple to get to what is in their mind the best of both worlds: a truly private cloud for their valued users while not creating a safe haven for child abusers.


Lawyerly word games. The e in e2e is the user, or a device loyal to them; malware on the user's device has always been understood as a subversion of e2e.


This article speculates that that's because Apple is not scanning on iCloud to respect their privacy policy: https://www.hackerfactor.com/blog/index.php?/archives/929-On...

Apple's report count to the NCMEC is really low so it's probably true that they are not scanning on iCloud unless they receive a warrant.


Pessimistically: to allow them to (eventually) scan other content that you don't upload. I think pessimism about Apple's behavior is somewhat warranted at this point.


As covered in other articles, that is exactly what they were doing previously.


No, they did not. That was erroneous reporting by the Telegraph that a lot of outlets copied [0].

The correction:

> This story originally said Apple screens photos when they are uploaded to iCloud, Apple’s cloud storage service. Ms Horvath and Apple’s disclaimer did not mention iCloud, and the company has not specified how it screens material, saying this information could help criminals.

And from the interview with TechCrunch:

> This is an area we’ve been looking at for some time, including current state of the art techniques which mostly involves scanning through entire contents of users’ libraries on cloud services that — as you point out — isn’t something that we’ve ever done; to look through users’ iCloud Photos.

[0] https://www.telegraph.co.uk/technology/2020/01/08/apple-scan...


I'm not so sure. John Gruber's write-up said that Apple has only sent over a couple hundred reports in the last year to the gov't, compared to over 20 million from Facebook. This suggests to me that Apple's scanning wasn't nearly so widespread.


Because no one in their right mind uploads CP to a cloud service, and apparently pedos abuse Facebook’s easy sign up process to bulk upload CP.

Not that it matters when those Facebook sign ups are probably proxied with throwaway emails


This system has zero transparency. I cannot appeal. I cannot check my status. I cannot check this mysterious "counter". I cannot even check an image I have to see if it is flagged. I cannot know who is manually looking at my data, no matter how private it is.

I get a note in my fruit delivery with the name of the person and the time they were at work packing my food. Yet I'm told that I cannot know anything about what is happening with an at least partially automated system that can potentially put me in jail for the next 20 years?


Question:

Would Apple report CSAM matches worldwide to one specific US NGO? That's a bit weird, but ok. Presumably they know which national government agencies to contact.

Opinion:

If Apple can make it so that

a) the list of CSAM hashes is globally the same, independent of the region (ideally verifiably so!), and

b) all the reports go only to that specific US NGO (which presumably doesn't care about pictures of Winnie the Pooh or adult gay sex or dissident pamphlets)

then a lot of potential for political abuse vanishes.


NCMEC is not exactly a normal NGO; it was opened with Reagan present and Congress frequently gives it funding. It also works with other government organizations on a frequent basis.


NCMEC isn't really an NGO; it is an agent of the US government, written into US law.


Except it's not a government agency; it's a private non-profit with strong governmental ties.

https://www.missingkids.org/blog/2020/four-ncmec-myths


Apple said they're only enabling it in the US for now.


What would prevent someone from, for instance, printing off an illegal photo, "borrowing" a disliked co-worker's iCloud-enabled phone, and snapping a picture of the illegal picture with their camera?

On iOS the camera can be accessed before unlocking the phone, and wouldn't this effectively put illegal image(s) in the target's possession without their knowledge?


These illegal photos are not trivial to obtain. Possessing (and here, the printing step necessitates possession) these illegal photos is in and of itself a crime in most relevant jurisdictions.

But OK, let's say that you've found a way to get the photos and you're comfortable with the criminal implications of that. At that point why don't you just hide the printed photos in your coworker's desk? My point is that if you have a disgruntled coworker who's willing to resort to heinous crimes in order to screw you over, there's many different things they could do that are less convoluted.


Yes, the bad actor would need to do non-trivial bad things.

To stay on the topic of iCloud, my point is that it seems quite hard to protect yourself/device from even this most basic attack scenario.


You are right. Easy to frame someone and destroy their life/career.


It takes one minute on Tor to find enough to get anyone thrown in jail; don't make it sound harder than it really is. As for photos vs. printing: taking a photo reports it for you, so you're never involved.


An "interesting" part of this is that, up until now, these photos had little technical exposure. But now that millions of phones can be affected, creating unrelated pictures that purposefully match these hashes becomes a shiny target.


This is untrue. The same photos being used by Apple for this scheme are already in use for CSAM scanning by many major platforms (Google, Facebook, Microsoft).

If someone wanted to generate a false positive image, they could already leverage it on these other platforms.


I'd wager the target size of all iOS devices, with the scanning happening by default right after the photo hits the device, is bigger than the other platforms combined.

Google Photos (+ Google Drive I guess?) is on by default only on Pixel phones I think, and not by default on Samsung phones. For Facebook you have to post the images; for Microsoft I don't think there is any scanning inside Windows itself by default, I guess it's on OneDrive? Those companies are big, but the majority of pictures still go through iOS first and foremost.


yes


This whole mess brought back a memory of when I was 4 to 5 years old (so probably 1971). During a summer vacation we were walking at a harbor in Tuscany with my parents and, as they told me later, I suddenly had to take a dump. Problem was that there was no bathroom nearby; well, there probably was, since the place was filled with restaurants, but we were like a hundred meters from the nearest one, which was incompatible with the sudden need of a baby like I was. So my parents quickly found an area with vegetation behind a building, helped me remove my clothes and sit down, and waited for me to unload all that stuff. Then my father saw me making an expression they later described as priceless, so he quickly shouted at me to wait, then grabbed his Nikon and took a photo of me. That photo later that year won a prize.

Now imagine the same happening today, with my dad shooting the photo using his iPhone, only to trigger a CSAM alert somewhere and probably be investigated for child abuse. Just no thanks. Screw you Apple, and all those who pull your strings into creating this farce.


There will only be an alert if that photo is extremely similar to an image in the NCMEC database, AND there are numerous other such photos on the account that match. The threshold number of matches to trigger an alert is tuned for a 1/trillion chance of false positive.

Furthermore, if you were using say Google Photos to store your images, then you were already subject to this vulnerability.
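
For intuition on how a per-image error rate compounds with a match threshold, here's a rough back-of-the-envelope (the numbers are made up, Apple hasn't published theirs, and it assumes false matches are independent, which critics point out may not hold for correlated image content):

    from math import exp, lgamma, log

    def binom_tail(k: int, n: int, p: float) -> float:
        # P(X >= k) for X ~ Binomial(n, p), summed term by term in log space
        # to avoid overflow from the huge binomial coefficients.
        total = 0.0
        for i in range(k, n + 1):
            log_term = (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                        + i * log(p) + (n - i) * log(1 - p))
            term = exp(log_term)
            total += term
            if term < total * 1e-18:  # remaining tail terms are negligible
                break
        return total

    p = 1e-6         # assumed per-image false-match rate (hypothetical)
    n = 20_000       # photos in a library (hypothetical)
    threshold = 30   # matches required before review (hypothetical)
    print(binom_tail(threshold, n, p))  # astronomically small under these assumptions

Whether the independence assumption survives contact with real photo libraries is exactly what the skeptics are questioning.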


So what if a small circle of people produce their CSAM material by themselves and share it only among themselves? None of the pictures is being uploaded to that database, so either the algorithms are really, really good at recognizing them, or it will require human intervention, that is, scanning all phones one by one, then deciding which picture matches the criteria and writing down the names of the people involved. I can't think of a similar scenario that doesn't imply the total loss of privacy for anyone even remotely linked to one of these people.


If a small circle of people only share not-already-known CSAM content among themselves and never share known CSAM material then there is no way that Apple's scheme can catch them.

However, if at any point one of them screws up and shares a known CSAM image, the entire ring can be caught. This is not actually an implausible scenario. See for example: https://twitter.com/alexstamos/status/1424037132201431045?s=...


We need more people using Linux on the desktop and pouring money into the Linux phone to justify these efforts.

Voting with your wallet matters and works. It is why Apple and Google still do so much marketing and hype about their phones and devices.


This ad seems fitting in the context: https://www.youtube.com/watch?v=tdVzboF2E2Q


> In the world of computer security this technology has a name, it’s called “a backdoor.” A well-documented and well-intended backdoor, but still a backdoor. Installed and enabled by default on millions of devices around the world.

Sorry, but that backdoor has already existed for a long time. It exists in every IoT gadget, smart car, smart speaker, smart home, and other connected device that phones home to its vendor and can receive arbitrary firmware updates. It exists for every app and every piece of desktop software that automatically updates itself in the name of "evergreen software".

This is just the first time someone is publicly making use of the backdoor.


#NotUpdating to iOS15, also #NotUpgrading this time until further notice.


Yep, that's deceptive advertising on privacy and everyone bought into it and walked straight into the reality distortion field.

Another innovative 'gotcha' by Apple. A reminder that they are not your friends.


As much as the tech and security community has concerns and objections to this policy change on the part of Apple, I’m skeptical there will be any notable impact to Apple’s revenue and future sales.


I remember people saying the same thing about Linux on the desktop, yet we have viable alternatives to proprietary OSes.

Yes, someone will have to struggle to get us there, but we will have an alternative if we don't give up.


Now if the government hates you, they can claim they found this on your phone.


When are they going to add this backdoor to macOS?


Dear tech users,

Associating with some of you has become a liability. One may be smart enough to avoid iPhone and Alexa et al. but what to do when one is surrounded by people who willingly expose themselves to nefarious technology?

In short, I don't want pictures of me being hoovered up along with your baby pics from your iPhone.


From the article;

> You could of course say that it’s “a slippery slope” sort of argument, and that we should trust Apple that it won’t use the functionality for anything else. Setting aside the absurdity of trusting a giant, for-profit corporation over a democratically-elected government,

And then later it reads

> and has previously cancelled their plans for iCloud backups encryption under the pressure of FBI.

Isn't the FBI in place because of the democratically elected government? It seems like the for-profit organisation is trying to do the right thing, and the government is stopping them.

This is the fundamental problem with arguments based on "trust" - the government seems to be doing the wrong thing.


I'd like to point out that the government (and by proxy Apple; companies care even less) doesn't give a shit about children. They are advocating a policy of mass infection, they didn't give a crap about children in Flint drinking toxic water, etc. If they cared about kids, they would care a lot about things that physically hurt and kill them. This means we don't have to take their stated reasons for this at all seriously.

Apple, if you care about children, you'll pay more than your legally owed taxes and push for improved access to education, nutrition, and free child care. They're only interested in the avenue that coincidentally dramatically increases their surveillance powers and the powers of the government.

Weird, can't figure that one out.


Apple also has a history of using child labour to build its products, with cases even within the last decade.


In the "Photos" app, in the bottom right corner, there is a "search" icon. When I tap it and enter "beach", I can see photos I've taken on the beach (or in the sea, near the beach).

What does that mean? My (and your) photos are scanned and analyzed. I've heard literally zero noise about this feature - nobody was complaining (at least not loudly enough for me to notice).

So, why the hell is all of this fuss being raised now? Your (and my) photos will be scanned and analyzed AGAIN. Not by humans, by algorithms. In some really rare cases they might be checked by humans, but you 100% will not have trouble with the law if your photos don't contain CSAM.

I have 2 kids and I'm not buying the argument "oh, my library of naked photos of my child - I'm in danger". If you are uploading naked photos of your child to iCloud, it's similar to publishing them. Everything that is uploaded to the Internet will belong to the Internet, and you don't have much control over it. If, for some awkward reason, you have sets of naked photos of your child and you want to save them - never, ever send them to the Internet.

If you think that not-so-experienced users shouldn't have to know about this rule - I'm pretty sure they don't even know (or care) about this "scandal". All of this FUD wave is raised by journalists and echoed on forums like this one.


It turns out people liked it when their phone scanned their photos for 'selfie' or 'beach' for them.

Apparently tagging 'child porn' on your photos for searching isn't the killer feature someone thought it might be.


Yeah :) Also, it's funny that Apple here takes on bigger risks: reputation, trust, all of that noise, and then the risk of false accusations. And for what? To help stop pedophile networks.

“But no, wait, they want to use algorithms to scan my photos, it’s a privacy violation...”

Just wake up.


They can have a full resolution copy of my photo, all 12 million pixels, along with the exact time, location and direction I was facing when I took it... but I draw the line firmly at a hash of it being taken.


So what you’re saying is if Apple had a 5 year plan to help China disappear minorities, they should’ve just kept improving photos search? Maybe this child safety effort isn’t aimed at satisfying some authoritarian wet dream after all!


Given that they classified a photo I took at a pool as 'beach', they have an awfully long way to go. If their disappearing algorithm doesn't improve, they will be disappearing Chinese majorities instead of minorities.


Your photos aren’t being tagged as child porn.


* unless they contain child porn


What are those cases where they might be checked by humans? To determine whether it's an innocent baby bath? If you have naked photos of a partner which happen to hit a statistical match for certain patterns that are similar to CSAM? These aren't far-fetched scenarios; these are exactly the types of photos most likely to be flagged. Are you okay with those photos being passed around Apple's security review team for entertainment? Leaked to the press if you later run for office?

How about in 15 years when your small children aren't small? Is this the magical software that can tell the difference between 18 year old boobs and 17 year old? The danger isn't to child molesters, it's to people who get incorrectly flagged as child molesters and need to fight to prove their innocence.


I’m not even sure if it's a joke or you are serious.

It is a check against existing hashes in a big database of confirmed CSAM. What are the chances that photos of your partner are in that database? If your partner is older than 12 - it's 0%.

Who is taking the bigger risk of being sued over the leakage of the photos, you or Apple?

The last part isn't worth discussing, because the children in that DB are younger than 12.


I've now read up on NeuralHash a bit more, and while I think the idea that this is just a hash is slightly overstated, you're right and my above comment assumed this was a classifier rather than a perceptual hash.


It is very far fetched. The system matches copies of specific photos, not subject matter like “baby taking a bath”.


I really don't get all the hype. This is not a backdoor, as it's called in TFA. It's not Apple "reaching into your device". It is literally checking for specific images and reporting their presence to Apple if found. It's not using AI to analyze your photos or anything like that. It's looking for specific images, and only prior to uploading them to iCloud. It won't even flag your own nasty images, because the hash won't match.

Note: The above assumes we're talking about a typical hash of the data and not an image-analysis "hash" of what it thinks the content is. This is supported by the language they use.

Yes, it's a bit big-brother. But I already assume the authorities can fairly easily get ALL your iCloud data if they ask Apple the right way.

You know what's creepy AF? Having a private conversation and getting Facebook ads the next day relating to the topic. Talk about an acquaintance acting schizophrenic and get ads about medications and treatment for that? Creepy as fuck. And that was on the wife's iPhone - I have Android and didn't get that stuff, but I seem to remember similar incidents where I got ads for stuff we talked about. That's serious voice analysis, not just checking a file hash, and it happens while your phone is in your pocket.


The 'hype' seems to me to be the valid-sounding concern that this tool creates the ability to "look for specific images prior to uploading them to iCloud" on Apple devices, and that while today that capability is exclusively applied to save the children, that tool could later be repurposed by authoritarian regimes or other human rights abusers.

Speaking of what we can assume the authorities can do: we know the FBI cannot compromise an encrypted iPhone, because they attempted to force Apple to do that via court order. From what I can tell, the objections to Apple's expanded protection tooling are similar to the objections to adding backdoors to iPhone encryption so the FBI can break into devices used by criminals. It's great to stop the crime today, but how could this be repurposed tomorrow?


Plus it's just creepy that my phone will be scanning my pictures. I use iCloud Photos now and understand anything I put on someone else's servers is subject to search. With it on my phone, it feels radioactive to me and needs to be disposed of promptly.

It's just icky.


Wait are you saying you get ads based on things you converse about out loud in the real world because your phone is listening to everything in your pocket? You know that is a myth and isn't true, right?


>> Wait are you saying you get ads based on things you converse about out loud in the real world because your phone is listening to everything in your pocket?

YES.

>> You know that is a myth and isn't true, right?

No, it's not. I guarantee it. I don't know if it's built-in phone software or if it's the FB app listening while idle, or something else. Alexa seemed to be doing this too, so that's put away in a box somewhere.

Here's another story. I got an Oculus Quest 2 and had told a co-worker about it and one of the games I play. He was on the fence, so I brought it in and let him try some of the intro/demo apps on it. Having been convinced that this particular implementation of VR was finally good, he decided to order one from Best Buy to pick up after work. He got out his phone and (not sure what app he looked at, probably browser) the first thing that came up was an ad for the Oculus Quest 2.

These are not coincidences - particularly the one I mentioned a few comments up. That's not a topic in our house, but the next day there were the ads.


You're saying one conspiracy is 100% true but another one is laugh-out-loud impossible?


Wait, what conspiracy am I saying is 100% true?

If your phone was listening to everything you do all the time then you'd need a lot of processing to analyze it on-device and exfiltrate the critical pieces of info (which would be pretty obvious when looking at CPU usage and battery life), or you'd be sending an insane amount of data off over the internet all the time.

I'm not saying it's technically impossible - I'm just saying that the ol "Facebook is listening to everything and that's why you get ads for something after you talk about it in real life" is almost certainly a myth with zero actual data to support it other than some anecdotes online.


There is a possible explanation for the schizophrenia example. My wife did a Google search for a particular medication that person had been on and gone off of. That was a couple of days prior to the ads showing up. So good old Google could have been part of that - not sure how Google searches affect Facebook ads - or perhaps it was the pages she visited to read about that medication. It's all just f-ing creepy sometimes, regardless of how it comes about. It sure seems like verbal conversations being overheard are a part of it.


I think the perceived problem is the database of hashes to be matched. If this is a database that can be contributed to by other parties, it can be used to "frame" people for a crime. This can then be used, for example, to get rid of political dissidents. Since there are just hashes in the database, this should be fairly easy.

This is what I THINK people are worried about. I don't have an Apple device so I haven't really fact checked all of this.


I agree with you, but I want to correct you - they are using an image-analysis hash, not a cryptographic hash. However, it doesn't change the logic of your argument. They require multiple positive matches, and they also require a visual derivative of the CSAM images to match.


>> they are using an image analysis hash, not a cryptographic hash. However it doesn’t change the logic of your argument.

Yes, I think it fundamentally changes it. A crypto hash would be looking for specific known CSAM images. Image analysis is error-prone and will lead to problems and false positives. I don't care how good it is; that's actually looking at the user's images and making a judgement.
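
To make the distinction concrete, here's a toy illustration (a generic average-hash sketch, not Apple's NeuralHash, with a synthetic gradient standing in for a photo; requires numpy):

    import hashlib
    import numpy as np

    def average_hash(img: np.ndarray, hash_size: int = 8) -> int:
        # Toy perceptual hash: block-average down to hash_size x hash_size,
        # threshold each block against the overall mean, pack the bits.
        # Assumes both image dimensions are divisible by hash_size.
        h, w = img.shape
        blocks = img.reshape(hash_size, h // hash_size, hash_size, w // hash_size).mean(axis=(1, 3))
        bits = (blocks > blocks.mean()).flatten()
        return int("".join("1" if b else "0" for b in bits), 2)

    img = np.tile(np.arange(64, dtype=float), (64, 1))  # stand-in "photo"
    tweaked = img.copy()
    tweaked[0, 0] += 1.0  # a barely perceptible one-pixel change

    # Cryptographic hash: the tiny change yields a completely different digest.
    print(hashlib.sha256(img.tobytes()).hexdigest()[:16])
    print(hashlib.sha256(tweaked.tobytes()).hexdigest()[:16])

    # Toy perceptual hash: the near-identical images give the same value here.
    print(hex(average_hash(img)), hex(average_hash(tweaked)))

A perceptual hash deliberately survives small edits, which is what makes it useful for matching re-encoded copies, and also what opens the door to false positives that a cryptographic hash wouldn't have.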


> A crypto hash would be looking for specific known CSAM images. Image analysis is error-prone and will lead to problems and false positives.

The hashes they are using are only looking for specific known images.

Frankly, you should look at the technical docs on the mechanism at this point. It’s not the kind of image analysis you think it is.


It's a dam-break moment for Apple on privacy. Until now, people could be unaware of, or wave away, concerns about storing iCloud data exclusively on government servers (because it was China), promoting "antivirus" software on boot (because it was Russia), or not encrypting iCloud backups (because who knows, maybe that's just been a complete oversight for years, even after detailed reporting on how Apple used it to escape arguing about encryption after San Bernardino, and it additionally made the backups super easy to get: just show me a warrant).

Now it's on _your_ phone in _your_ country. No handwaving, no sticking one's head in the sand. The only hopeful argument being posited is that somehow this will "make iCloud more private in the long run".



