
> According to docs and endless reporting on iMessage, the messages are end-to-end encrypted in transit.

According to my rather brief examination, you can't verify key fingerprints in iMessage. That means Apple can insert a MitM against a targeted subject at any moment, and you would never know it is there. See:

You <-- end-to-end-encryption --> Apple MitM <-- end-to-end-encryption --> Your buddy

You have _zero_ control over it, and the only thing keeping you secure and private is Apple's pinky promise.
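
To make that concrete, here is a minimal sketch (Python with PyNaCl, names and setup invented for illustration, nothing to do with Apple's actual protocol or key formats) of why unverifiable keys matter: if the key directory hands each side a key the relay controls, both legs are genuinely "end-to-end encrypted", neither end can tell, and the relay still reads every message.

    # hypothetical example; requires: pip install pynacl
    from nacl.public import PrivateKey, Box

    alice, bob = PrivateKey.generate(), PrivateKey.generate()
    mitm_a, mitm_b = PrivateKey.generate(), PrivateKey.generate()  # relay's keys

    # The key directory gives Alice the relay's key instead of Bob's;
    # with no fingerprint to check, she has no way to notice.
    alice_box = Box(alice, mitm_a.public_key)   # Alice -> "Bob" (really the relay)
    relay_in  = Box(mitm_a, alice.public_key)   # relay decrypts Alice's leg
    relay_out = Box(mitm_b, bob.public_key)     # relay re-encrypts towards Bob
    bob_box   = Box(bob, mitm_b.public_key)     # Bob <- "Alice" (really the relay)

    ct = alice_box.encrypt(b"see you at 6")
    seen_by_relay = relay_in.decrypt(ct)        # the relay reads the plaintext
    print(bob_box.decrypt(relay_out.encrypt(seen_by_relay)))  # Bob sees nothing odd

Both hops really are encrypted; the problem is who chose the keys.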

> The same is true for any app running on iOS/Android.

Umm, no. There are such things as open source, verifiable builds, and, yes, decentralized messaging protocols. You can take an open source client (for example, from a reputable source like F-droid) and connect it to a server totally unrelated to the client's developer. You can run an encryption protocol where you actually exchange public keys, verify your key fingerprints, and confirm the identity of your chat partner. That's what people really concerned with privacy and security do. Others are satisfied with a promise that sounds good enough.
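
For what it's worth, the fingerprint comparison those protocols rely on is conceptually tiny. A rough sketch of the general idea (hash choice and display format are made up here, not any particular app's "safety number" scheme):

    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        # hash the peer's public key and show a short, human-comparable digest
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

    # Each party displays the fingerprint of the key it believes belongs to the
    # other, and they compare over a separate channel (in person, a call, etc.).
    # A substituted key yields a different fingerprint, so a silent MitM fails.
    print(fingerprint(b"...peer public key bytes..."))

iMessage simply never shows you anything like this, so there is nothing to compare.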




> the only thing keeping you secure and private is Apple's pinky promise

Indeed, that's what I meant in the last paragraph. It also means an open source, verified client, with fully secure encryption, is still powerless against the OS capturing keystrokes. You have to rely on that pinky promise either way unless you control the hardware.


That's true, and that's why people who are _really_ paranoid buy specially purged phones and flash their own builds of an OS. A friend of mine had a rather successful business selling people such phones.


Moving the goalposts? This was about the usability of sync/search; now that that's addressed, you've brought up the paranoid case of Apple registering an additional device (which only affects messages from that point forward, not the past, mind you, and it is an active attack that can leave a trace if the sending party is watching the traffic it originates). Sure, there are challenges, and the whole point of designing a secure system is to balance various aspects of the system, including usability and even politics, so that in practice you maximize security for everyone. Apple's choice seems to have been a good one, and it deserves credit for bringing this much security to the masses at precisely zero cognitive burden to the user. Compare it to email, or to other popular chat services, for example.

But yea, it is true that you can always prove something cannot be done once you tighten up enough mutually exclusive constraints. The question is how well that maps to the real world and whether some of those constraints can be loosened.


Where do you see me moving the goalposts? First, I started from the obvious truth that Apple wasn't the first service to advertise itself as 'secure'. Then I addressed the popular technofetish about e2ee: people kinda like to feel secure, but are rarely ready to accept all the strings that come attached to real security & privacy.

You see, e2ee is only meaningful against a service provider that you have reasons not to trust. But if you don't trust Apple, then you should not trust it all the way down to the bottom: and at the bottom we see that the iMessage apps give the user zero control over said e2ee. You don't really know who decrypts your messages on the other end: your chat partner or a MitM proxy.

If you trust Apple not to run such a proxy, you might as well trust them not to snoop on your chats and just let them store your messages unencrypted on Apple's servers, saving you a lot of trouble worrying about your keys, devices, etc.


> Then I addressed the popular technofetish about e2ee: people kinda like to feel secure, but are rarely ready to accept all the strings that come attached to real security & privacy.

Yea, but most people don't think about security in terms of black and white, and neither should they. There is no such thing as "real security & privacy", and it's completely disingenuous to suggest that you've found it when it involves trusting you or your company as a third party in place of, say, Apple.


> If you trust Apple not to run such a proxy, you might as well trust them not to snoop on your chats and just let them store your messages unencrypted on Apple's servers, saving you a lot of trouble worrying about your keys, devices, etc.

As a universal statement, this is far too simplistic a take on a system's security and trust. Security without a notion of a threat model is quite meaningless. There is a large spectrum between, on one hand, trusting Apple that the binary they serve me is not actively malicious and by and large does what it says it does, and that they are not actively presenting someone else's key to my chat partners, and, on the other hand, trusting them with my unencrypted data on their servers. At the very least, the latter would not be safe under subpoena, a data leak, or a rogue employee, for instance. Plus, in practice, if they presented a malicious binary to everyone or substituted keys, someone would likely notice at Apple's scale. If I am such an interesting target that they decide to attack me specifically, I have bigger worries, since I am trusting the OS and hardware anyway (and even then, there is hopefully some level of forward secrecy). In fact, to me, and to the vast majority of people, a random exploit in their OS or physical theft of the phone carries a higher risk than Apple directly attacking them.

So, no, I fully reject that iMessage's security is substantially equivalent to that of, say, Facebook Messenger (even if it were run by Apple). I posit the delta is almost as large as that between HTTPS with a Let's Encrypt cert and plain HTTP. And yes, there are no doubt use cases that iMessage is ill-suited for; that doesn't mean we should just give up on it for the other 99%.



