Pretty clear-cut for me: iMessage either offers robust end-to-end encryption, or it doesn't. (Intentional backdoors place it in the latter category, of course.)
I want true end-to-end encryption.
People who say things like this rarely also want the hassle that comes with it. Key exchanges, re-keying: all a big PITA. But iMessage (and WhatsApp) do key exchanges facilitated by a trusted broker. If you didn't trust the broker, you would have to do more work when making an initial exchange with a peer and more work if they lost their phone/keys.
iMessage has always been a compromise with subtle rough edges. But we trusted Apple because they talked about privacy and made it clear that their business model meant we should trust them more than competitors. Now, precisely because of how well they secured their devices, they fear regulation and seem to have thought they could further compromise things and people would go along with it.
>We are over a year into a pandemic which involved wide-scale lockdowns. Physical key exchange is a nonstarter for broad adoption.
I think the emphasis is on the -ed in involved. That's a temporary condition which is already resolving in much of the world. You may also be overestimating lockdown compliance among average people.
Yes and no. It's not 'hard', it's just friction that's not present at baseline (when iMessage was designed, and for the most part still today). Sometimes you send messages to someone you haven't ever met in person, or you met but didn't think to exchange keys at the time. And when Dave drops his phone in the lake and buys a new one, he has to sheepishly re-meet up with everyone he interacts with.
If iMessage had been designed to require a brokerless key exchange, its security would be superior (though in this case, Apple's interception software trumps everything). But iMessage would appear to be less convenient than alternatives like WhatsApp (brokered key exchange, re-keying).
It could offer both brokered and brokerless key exchange, distinguishing between messages from parties verified through each method by, I dunno, the color of the text bubble.
The thinking that key exchange needs to be robust and P2P makes alternative secure solutions harder: most of the time, big companies don't want to be caught doing a man-in-the-middle attack, so iMessage or even Facebook is practical enough for it, as long as the key exchange can be verified easily over any other communications channel.
We would be in a much better position right now with email privacy if the version of PGP that doesn't defend against an active attacker had been deployed worldwide.
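Verifying a brokered exchange over another channel can be as lightweight as comparing a short fingerprint derived from both public keys. A minimal sketch in Python (not Signal's or Apple's actual scheme, just the idea):

    # Both clients derive a short "safety number" from the two public keys and
    # the users compare it over any other channel (a call, in person, etc.).
    # A broker mounting a man-in-the-middle attack would make the numbers
    # differ, because each victim would be holding the attacker's key instead.
    import hashlib

    def safety_number(my_pubkey: bytes, peer_pubkey: bytes) -> str:
        material = b"".join(sorted([my_pubkey, peer_pubkey]))  # order-independent
        digest = hashlib.sha256(material).digest()
        return " ".join(f"{b:03d}" for b in digest[:10])  # short, human-comparable

If the two users read out the same number, the broker didn't swap keys on them (assuming the keys displayed are the keys actually in use).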
Literally the first sentence under the article headline
"More than 90 policy and rights groups ask company to abandon plans for scanning phones of adults for images of child sex abuse.".
Maybe YOU weren't talking about CSAM, but everyone else was, including the article author, if you even read it.
This particular thread and your own comments in this thread are about iMessage, with your comment implying it reports sexually explicit images to "big brother".
I pointed out that's not true (that's probably why other people downvoted your comment) and that you're apparently confusing the iMessage stuff with the CSAM scanning stuff which, as I already said, are completely separate from each other.
How do they manage the previews? Skype, for one, uses their servers to get the preview. Vastly different from a client-side preview (which some people wouldn't want either).
If a copy of your messages (even if it is only low-resolution JPGs) gets auto-forwarded to Apple/the FBI under certain circumstances, it is not securely E2E encrypted.
Apple's investigation code executes on the device, which allows them to preserve their definition of end-to-end as long as you plug your ears and ignore where the 'ends' are.
If you used iOS/iMessage, you have always trusted Apple. To be truly 'end to end' you would assume that there's no opportunity for another party to intercept your messages: the only trust would be in the keys you exchanged with your peer.
1. They are a trusted broker for iMessage key exchanges. You didn't do the key exchange; you assumed that when Apple did it on your behalf, you were really communicating with the peer you think you are.
2. They designed the iOS features that you trust to keep iMessage data inaccessible to other untrusted software on your device.
3. They designed the secure enclave and make public statements that they won't compromise it for law enforcement. You trust that their deeds in private match their public statements.
In this case, they wouldn't be "between" the ends, no? They'd be _at_ the ends, just as sending a message on Signal doesn't prevent someone from stealing your phone and reading your messages.
I'm not in agreement with this being ok, but if it truly is on device, it still technically can be E2EE
> but if it truly is on device, it still technically can be E2EE
What do you think this software does when it finds a matching hash entry? Toss a notification to ask you nicely to pop round your nearest FBI office?
What meaning does 'end-to-end' have anymore if this applies? If Apple wrote software on-device to forward a copy of all messages prior to encryption to iCloud for 'backup', would it still be end-to-end? What if they sent it to an AdTech firm to index for interesting terms that match products you should be pitched? The software in this case is still Apple's, running on-device.
E2EE implies it's encrypted between the two parties exchanging messages, which it still is. I'm not suggesting that this isn't an immoral circumvention of the system, but it's still certainly end to end encrypted between the two clients, given that the scanning is performed on device, and not in transit.
That's not what happens with the iMessage feature - that's what happens with the iCloud photo library CSAM scanning feature, and iCloud photo library has never been E2EE.
"The Messages app will add new tools to warn children and their parents when receiving or sending sexually explicit photos.
When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo. As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.
Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages."
There are no visual derivatives, no neural hashes, and nothing is sent to Apple.
I want to ignore most of this article to complain about an inaccuracy that keeps coming up.
> More broadly, they said the change will break end-to-end encryption for iMessage, which Apple has staunchly defended in other contexts.
But... it wouldn't. The iMessage feature doesn't expose the contents of your message to anyone else under any circumstance.
If you're a child under 13 and your parents have opted in to this feature, you get a choice: view naked pictures sent to you and have your parents notified that you chose to, or decline to view them with no notifications of anything. (But once you're 13+, no notifications would occur.)
There are potential issues with this, mostly relating to abusive families being controlling. They'd have to do weird things like forcing their teenaged children to keep making new under-13 accounts to actually take advantage of it like that, though. And none of these issues impact the e2e status of iMessage in any way.
Apple really screwed up PR by launching the iMessage feature alongside the scanning-and-reporting iCloud Photos feature. There's so much confusion out there about this.
(The breaking-e2e aspect does exist with the iCloud Photos scanning... not that it's currently e2e, of course.)
> Apple really screwed up PR by launching the iMessage feature alongside the scanning-and-reporting iCloud Photos feature. There's so much confusion out there about this.
It's possible that they determined that dealing with the backlash in one go would be easier than having two separate blowups. Most consumers have only fuzzy, vague senses of brands based on a poorly-understood game of Telephone linked to each news cycle. This only counts as "one" privacy ding against Apple for most people, whereas two separate news cycles would do much more damage to their reputation around privacy.
Let's also not pretend: some of this confusion is 100% willful on the part of some of the news organizations reporting on this, because it benefits them to conflate the two. We even see it a little from places like the EFF, who appear to believe their ends justify the means.
Honestly, I'm starting to think that this is a deliberate attack by opponents of e2ee who want to prevent iCloud from going dark (right now Apple holds the keys just like every other cloud storage provider, but after implementing these CSAM features they could transition to true e2ee).
The odd thing is that it doesn't benefit the EFF to conflate the two.
The EFF's first article objecting to these features objected to the parental control on the grounds that if, say, I send your 12 or under kid a sex photo and your kid elects to continue after being warned the image might be harmful, and then elects to still continue after receiving a warning that if they do so their parent will be notified and see the image too, it is violating my privacy because I did not consent for the parents to see the sex pictures I'm sending to their kid.
Now their objection seems to be that if I'm, say, a 12 or under gay kid who is not out to my parents, receive a sex image on my phone that might reveal I'm gay, and I elect to view that image after being explicitly told my parents will see it too and confirming that I still want to view it, I might get outed to my parents.
Note that in both these scenarios the child knows their phone has parental controls enabled, has to go through two full screen dialogs that try to discourage them from viewing the image to view it, both of those have rejecting the photo as the highlighted option, and the second explicitly reminds them that if they view the photo their parents will be notified and shown a blurred version of the photo. And note that if they choose to reject the photo rather than view it, there is no parental notification.
For years, privacy advocates (including the EFF) have said when governments wanted to legislate controls server side or at the ISP level to provide a safer net environment for young children that the right approach is to give parents the tools to ensure a safe net environment for their children.
So now we are getting a parental control that goes out of its way to not notify parents or share a blurred photo with them unless the child explicitly chooses to view it knowing that this will happen--and they are objecting to that.
After this it is hard to imagine any parental control system that the EFF would approve of and that is actually even remotely effective. It makes them look like an organization that is just going to raise knee-jerk objections to every proposed solution, no matter how reasonable.
When an organization raises objections to every attempt to solve real problems at some point policy makers stop caring what they have to say, and the EFF has either reached that point or is real close to it. They need to start suggesting alternate solutions to those problems if they want people that matter to listen to them.
There are plenty of comments on HN that claim there are no details of what has been proposed while simultaneously claiming to have read the white paper.
This is rapidly becoming a topic not worth reading more about due to the misinformation and shilling from all sides.
I agree, sadly. This prompted me to unsubscribe from them. In some ways it reminds me of the ACLU. Similar fixation on donations above all else, to the point of dark patterns. I still haven't managed to fully extricate myself from their pleas.
It's not fear, though. Just watch: within the next five years or so, encryption gets backdoored, mass surveillance is fully instated, a social credit score is implemented, and only state-authorized programs may be installed, upon approval, on mobile devices and proprietary operating systems. Soon we will see the Ministry of Truth implemented. For the past couple of years the state has only been after more power. Or you can ignore reality and keep acting like everything is normal and OK. Just as the continuous Middle-Eastern wars are designed to do, inflation will continue to increase to push the middle class and below into abject poverty, and despotism is not very far away after that, all done slowly while dividing and confusing the mass of the public. This is no conspiracy whatsoever, because this is exactly what is happening right this very moment. The most state power can be grabbed today via surveillance programs like those of the NSA/CIA and these false CSAM implementations "for the protection of the children." This is what the state has wanted, in every state, for 2,000 years. Increased government power and surveillance only goes one direction. What has been very sad for me to see over the past 20 years in tech is that if this is done slowly over decades and via confusion, it seems very easy to get the mass of the non-thinking public to agree to illegal search and seizure, and to give up all freedoms and rights under a false guise of security.
"Regardless of popular sanction, war is mass murder. Conscription is slavery, and taxation is robbery. The libertarian in short, is almost completely the child in the fable, pointing out insistently that the emperor has no clothes. Throughout the ages, the emperor has had a series of psuedo-clothes provided for him by the nation's intellectual caste. In past centuries, the intellectuals informed the public that the state or it's rulers were divine. Or at least, clothed in divine authority and therefore what might look to the naive and untutored eye as despotism, mass murder, and theft on a grand scale, was only the divine working it's benign and mysterious ways in the body politic. —Murray Rothbard, For a New Liberty"
I'm in the same boat. Unfortunately, I don't see myself continuing to donate to them, given the terrible job they've done on this topic. I trusted them to bring nuance and clarity (really liked what they did in response to the cash bail discussion in California), but this was too much of a miss for me.
I’m still a fan of the ACLU, even with their spotty record on gay rights, but it might just be a matter of time.
I generally support the ACLU mission, but they have antagonized me ever since I started a recurring donation a few years ago. For reasons, I decided to stop the recurring donation, but ACLU is just like the NY Times and many other places -- easy to sign up, damn near impossible to get them to stop. E-mails, phone calls, nothing. Ultimately I just declared the card stolen and had it reissued with a new number.
On top of that, at some point they must have asked if it was okay to contact me, and I must have inadvertently said yes, and now I get regular text messages from random people across the country asking if I will help out. About half the time it's for something in a state (like Georgia during the election) which is several thousand miles from where I live. No amount of begging has gotten this flow of unsolicited text messages to stop.
No it doesn't. You're thinking about the iCloud Photo Library reporting. The Messages feature literally never sends any of your content to anyone, no matter how much CSAM you're sent.
So, what parts of my phone are actually being scanned by this system? It seems like it's rummaging through any images that touch iCloud, which would include:
- iMessage
- WhatsApp
- Camera
- Matrix (?)
- Any other application that you give 'photo' permission to?
There are two different CSAM-related features. One scans photos before they get uploaded to iCloud; the other looks for indications of nudity, and it only works on minors' phones if their parents have enabled this detection.
To be clear, the content being scanned in messages is not a hash check against a database. It’s a plain old nudity detection algorithm similar to what Facebook has.
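For illustration, a purely on-device check could be as simple as the sketch below; the classifier, threshold, and names here are hypothetical placeholders, not Apple's actual model:

    from dataclasses import dataclass

    @dataclass
    class ScreeningResult:
        blur: bool     # should the client blur the image and show the warning flow?
        score: float   # classifier confidence, never sent off the device

    def nudity_score(image_bytes: bytes) -> float:
        # Placeholder for a local ML model shipped with the OS; returns a
        # probability in [0, 1]. Stubbed out here purely for illustration.
        return 0.0

    def screen_attachment(image_bytes: bytes, threshold: float = 0.9) -> ScreeningResult:
        score = nudity_score(image_bytes)
        return ScreeningResult(blur=score >= threshold, score=score)

The point is that the decision is a local classification, not a lookup against any database, and nothing leaves the device.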
The article seems to cover both sides of the argument fairly and nails the concerns over Apple's on-device scanning:
> If governments had previously asked Apple to analyze people’s photos, the company could have responded that it couldn’t. Now that it has built a system that can, Apple must argue that it won’t.
Seems so, our corporate media is largely comprised of cowards and pay-for-play.
Most media is deeply unprofitable, and it compensates for this by taking payment for ads disguised as organic content (have first hand experience with this that has honestly made me cynical).
Most media won’t bite the hand that literally feeds them.
After seeing all these posts here and not hearing much about it outside of this space, I'm starting to think Apple's cost-benefit analysis was this: the profit from these features very likely convincing parents to spend a little more to get their children an iPhone as their first phone (and probably locking those children into the ecosystem) was greater than the loss from users who would switch to something else, and any blowback would be diminished because "what about the children".
I'm not sure we have an understanding of Apple's intentions. Given the massive backlash and the apparent maturity of their plans this must have been in the works for a while. They may have been under pressure from government actors to develop a program. Or they might see writing on the wall. Either way, I doubt they will ever share the total picture behind this move.
> convincing parents to spend a little more to get their children an iPhone as their first phone
What do you mean here? I don't think this makes things any safer for child _users_ of iPhones, does it? It's scanning for images of child sexual abuse.
(To anyone inclined to respond without reading the context, I'm not suggesting that there's no privacy concern if you're not sharing CP. I'm addressing OP's hypothetical that some parents will see owning an iPhone as safer for their child)
There are two main features that cause concern. The one that has been getting the most attention and noise is the CSAM scanning. That only happens on upload of images to iCloud Photos (currently only in the US). No automatic notification of any law enforcement agency is done, and there is a threshold requirement before Apple reviews the account and then submits a report.
The second feature is iMessage specific. Parents have to enable this on their account on behalf of their children, and it only applies to minors (_probably_ just children under 13, possibly older; I am not clear on that). It checks messages going to (and from?) the child's phone for "unsafe" images (e.g., nudity). If such images are found, _the parent_ is notified. No one else.
I have _more_ concerns about the second feature as it is likely to be abused by controlling parents and is more likely to negatively impact children who are questioning their gender or sexuality. But this is _explicitly_ a feature to protect kids from sharing sexual images of themselves.
Though I still don't see how the parent comment's theory holds: I suspect that the PR backlash would be far smaller/nonexistent for this feature than for the CSAM one, as we already accept that the rights of minors are heavily curtailed.
And the NN totally-offline¹ optional configurable check is a good thing! Heck, I'd've enabled it on my own phone if I had an iPhone; it'd've made me feel that much better about carrying an internet-connected camera around with me wherever I went, if I knew it'd warn me if I accidentally photographed myself naked (or somehow decided to do so deliberately, in a moment of foolishness).
They built two different systems, one that I think is good / that would be a selling point to parents, and one that could be a powerful weapon.
¹: The “notifying parents” thing is a little iffy; I'm not sure what to think about it.
Honestly, this is deeply concerning. Apple shouldn't be in a position to be able to police what photos we take. If I'm a teen and I sext with a partner, does that mean that in the near future, our messages/photos will be flagged and reviewed for "child porn" and forwarded to law enforcement? That's horrifying and incredibly unfair to young people exploring their sexuality in a completely consensual and normal way.
There's nothing wrong with taking pictures of yourself nude and/or sharing these with whoever you want. This goes towards territory that I find deeply uncomfortable.
> If I'm a teen and I sext with a partner, does that mean that in the near future, our messages/photos will be flagged and reviewed for "child porn" and forwarded to law enforcement?
No. The system looks for files that are similar to known illegal content. It is not a general purpose underage nude photograph detector.
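To be concrete about "similar to known illegal content": it's a perceptual-hash lookup against a fixed database, not a classifier. A toy sketch of the matching step (this is not NeuralHash, just the general shape of it):

    # Near-duplicates of catalogued images hash to nearby values; unrelated
    # photos (e.g., a teenager's own pictures) almost never do.
    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def matches_known_set(image_hash: int, known_hashes: set, max_distance: int = 4) -> bool:
        return any(hamming(image_hash, h) <= max_distance for h in known_hashes)

A photo only "matches" if its hash lands within a small distance of a hash already in the database of known images, which is a very different thing from a general nudity detector.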
I think the parent comment is in reference not to the current system, but to potential future functionality, enabled by opening the Pandora's Box of client-side scanning.
Though I agree that underage nudity detection in particular is a little fanciful
Feature 1: CSAM detection. Only on upload of images to iCloud Photos.
Feature 2: nudity detection for minor children controlled via the parents' iCloud account. It is not an “underage” nude photograph detector, but a nude photograph detector. All done on-device. The only person(s) notified are THE PARENTS. This is all opt-in and age-restricted. I do not know whether it will affect teens sexting or only under-13s.
Siri, a few years into the future: Dave, I noticed that when you sent your last text message, my AI determined with 99% probability that you were driving. I'll be notifying the local authorities, and you will be receiving a citation for that. Please note that after the citation has been paid and you have been through rehabilitation, they'll send me an unlock code so I can re-enable your SMS account.
Or:
Hi Dave, I've been noticing that you have been discussing Covid vaccines with your friends. Apple thinks your position on this causes public harm, so we will be adding additional information to each of your messages that discusses this subject. Thank you for being part of our team.
Is there a possibility that this could be some kind of Dual_EC type of backdoor they are trying to insert under the guise of something "positive" -- i.e. CSAM? As others have said elsewhere, it seems so out of the blue like they were pressured from somewhere, which I guess I default to assuming government. After reading "The Hacker and the State" by Ben Buchanan, it made me even more aware of how much depth and penetration there is of the private-government partnership in the US.
iOS is closed source and builds are not reproducible by the public. You have zero control over what is running on the iPhone. It is also extremely challenging to go through each executable and firmware on the phone to probe for backdoors. Sophisticated backdoors won't be apparent even if you had the executable data dump for every iPhone system.
Right, I understand that. I guess my question was more of a general one on whether people think Apple in particular would cooperate with the government in that way? Obviously the San Bernardino shooter case comes to mind as an example of Apple not just folding to the government pressure.
I'm not sure if I'm missing something here, but from what I understand, what Apple is proposing is an enormous privacy boon that seems to be completely misunderstood.
All cloud providers are required to, and do, make these scans.
The proposal provides a way for them to comply with that requirement, but at the same time, allows them to completely lock themselves out of being able to decrypt your data in the cloud, except in this specific case, secured and controlled by your device.
This means they couldn't comply with a nation state's demand to secretly decrypt your data, even if they were legally compelled to, unless they changed how the cryptography at play here works.
- I don't want to be treated like a pedophile for a device that I should, in theory, own because I paid for it.
- I don't want the rules to change in the future for what will be scanned.
- I don't want to support Apple scanning for pictures of Tank Man for the CCP.
- I don't want an untested proprietary algorithm making decisions about me that could alter my life. It's already been shown that you can make innocent pictures that collide with CP hashes. This screams of future abuse.
I see these points. But I guess personally, as someone that benefits financially from technology and the internet, I feel some level of responsibility to ensure that it isn't used to exploit the most vulnerable in society.
There is no requirement to scan, only report when found.
There is NO end to end encryption being rolled out at the same time (so they can still decrypt and hand over your data in the cloud).
> This means they couldn't comply with a nation state's demand to secretly decrypt your data, even if they were legally compelled to, unless they changed how the cryptography at play here works.
Not only is this completely wrong (they can hand over your decrypted cloud data right now), it's wrong even going forward with the assumption you made above about real E2EE (which we don't have...). All they have to do is add a few hashes from that nation state to their catalog and your data is whisked away to Apple for them to do with as they please (which right now is meaningless, since they currently have access to it anyway, but it makes your point about E2EE a lot less compelling).
I didn't realise that there's no requirement to scan, so I'll yield on that.
But it's still possible that Apple are preparing for a scenario where it does become a requirement in the future.
As for the E2EE part, I can't imagine that this wouldn't be launched with E2EE alongside; otherwise there's literally no point whatsoever, they could have just done the scan on iCloud.
As for why this combats 'your data being whisked away', check the technical documentation. What they're doing with Private Set Intersection and Threshold Secret Sharing is clearly meant to make this system unexploitable and anonymous, and to ensure it doesn't leak any metadata whatsoever.
The concern is that Apple is handing governments a new tool to go “give me a list of all users that have this photo”. It could track down dissidents based on this and combined with metadata is probably sufficient to pinpoint who took a particular picture.
Think you shared that picture of police brutality anonymously? Think again.
But what I'm getting at is that this is exactly what Apple is trying to fight.
A government could already coerce Apple into handing over iCloud data.
The cryptography at play here, combining Private Set Intersection and Threshold Secret Sharing, is a clear step toward making it as hard as possible for any institution to abuse it for that reason.
Y'all are starting to make me more sympathetic to Apple. Their biggest misstep was making these announcements simultaneously without foreseeing how many people would (willingly or otherwise) conflate them.
This is not true. There were multiple features announced. iMessage data is not being scanned by most phones. The ONLY CSAM detection that happens is on photos being uploaded to iCloud Photos.
The only phones where iMessage photo scanning happens are those of children under a certain age (maybe 13?) whose parents have opted into child protection, where the phone scans for nude photos and notifies the parents.
People are conflating these two _different_ but _related_ features and their goals and limits.
> The proposal provides a way for them to comply with that requirement, but at the same time, allows them to completely lock themselves out of being able to decrypt your data in the cloud, except in this specific case, secured and controlled by your device.
If the CSAM detection were released with encrypted iCloud backups (or generally E2EE iCloud), then I think I'd probably be entirely less outraged. This narrative would make sense then.
Apple claims the on-device CSAM scanning only occurs on device if the photo in question is uploaded to iCloud. Apple is surely already scanning iCloud photos on their cloud, so without E2EE to iCloud, what's the point?
My ability to control a device I own, with Apple, was already suspect. Apple's use of on-device scanning is a slippery slope: it's not a question of if it will be used for nefarious means, but when. The claim that the scanning is only done for cloud-destination images is suspect and could be changed by gagged government coercion and a minor update.
The feature itself has little going for it in terms of efficacy either, much like the rest of the US's punishment bureaucracy / legal system / policing. Real child predators likely already avoid the cloud. These kinds of crime-fighting arguments are often used by law enforcement to erode privacy rights: they don't actually catch more bad guys, but they do spy on more law-abiding citizens.
Who's to say that in the future, law enforcement won't use this kind of technology for the war on drugs or other failed law enforcement initiatives? Law enforcement often uses violence or terrorism, for example, as justification for its own expansion and more privacy-invading initiatives, but terrorism is a very low threat, and only 4% of crimes in the US are violent (as defined by the FBI). CSAM has been used for decades as justification for the state to nose further into private citizens' business, too. It's certainly an issue, but there have to be better ways to reduce child harm.
You wouldn't need 30 of them or to upload them to iCloud.
If a nation state can demand Apple uses their existing function to scan your phone for political images, then they can likewise demand that 1 hit would be enough, and that there's no requirement for it to be uploaded to iCloud before the scanning can start.
They're combining Private Set Intersection and Threshold Secret Sharing in a way that means one hit isn't enough. They can't even tell how many red flags you have.
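The Threshold Secret Sharing piece is essentially Shamir's scheme: the material needed to decrypt the match vouchers is split into shares, one per match, such that any t of them reconstruct it and fewer reveal nothing. A minimal sketch in Python (parameters and structure illustrative only, not Apple's actual construction):

    import random

    P = 2**127 - 1  # a Mersenne prime used as the field modulus

    def split(secret: int, t: int, n: int) -> list:
        # Degree t-1 polynomial with the secret as the constant term.
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x: int) -> int:
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares: list) -> int:
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(secret=123456789, t=5, n=30)
    assert reconstruct(shares[:5]) == 123456789   # any 5 shares recover the key
    # reconstruct(shares[:4]) yields an unrelated value: below the threshold,
    # the shares carry no information about the key.

Below the threshold the shares are useless, and as I understand it the design also injects synthetic vouchers so the server can't even count how many real matches exist.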
That's not how it works. They would need to have Apple update iOS with the new hash, and then the photo would have to be uploaded to iCloud.
If they change the system to just start scanning local photos globally, well, there's really nothing stopping them from pushing out an iOS update today that does the same thing.
> All cloud providers are required to, and do, make these scans.
No scanning is required by law. In fact, the law states you don't have to go out of your way to invade privacy to scan. So yes, you and tons of other people are missing a big piece in this.
https://news.ycombinator.com/item?id=28230248
https://news.ycombinator.com/item?id=28232068
https://news.ycombinator.com/item?id=28231094