Hacker News new | past | comments | ask | show | jobs | submit login

It is indeed. The pattern that OP apparently doesn't like is that the cryptographic material needed to actually verify the other party is somewhat buried, and as a result surely few people do this.

As critiques of Signal go, it is perhaps not entirely unfounded, but I'm not sure it'd be my first - or even second - qualm.




I don't necessarily agree with their decision to put it behind a button that isn't obvious, but I can see why they made that decision. Every additional step required to set up a chat makes actually getting to a chat more tedious.

All of the encrypted messaging apps have to deal with the fundamental problem of deferring trust verification to something else, somewhere else, and that work has to go somewhere, sometime. It seems Signal has decided that (1) it shouldn't be centralized, like through Keybase, and (2) verification of identity on first contact is less of a risk than MITM, and the fact that verification is possible and some people do it is enough of a general deterrent: it'll stop anyone who wants to stay stealthy.


> Every additional step required to set up a chat ...

Nobody was arguing that it should get in your way. Again, see the alternative app I mentioned as an example of how it can be done: in Threema it also doesn't get in your way, and you also have to go into the user's profile to do the verification, but the status is displayed while you're chatting rather than being just something ignored in a back menu for super nerds.


This is indeed what I meant. Of course it's encrypted by default, but unless you verify the keys, you're trusting the server to be honest, which defeats the point of end-to-end encryption.

Because you can verify keys, the chance of the operator getting caught inserting interception keys is pretty decent (even if only a small percentage of users does the verification), so they're quite likely to remain honest, and anyone who might hack a Signal server would also think twice about whether the gain is worth triggering this indicator of compromise.
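The deterrence argument can be made concrete with a back-of-the-envelope calculation (the numbers below are my own illustrative assumptions, not figures from the thread): if a fraction p of conversations get verified, an attacker who intercepts n conversations gets caught with probability 1 - (1 - p)^n.

```python
def detection_probability(p: float, n: int) -> float:
    """Chance that at least one of n intercepted conversations is verified,
    assuming each conversation is independently verified with probability p."""
    return 1 - (1 - p) ** n

# Even if only 2% of conversations are ever verified, mass interception
# of 100 conversations is more likely than not to be noticed:
print(round(detection_probability(0.02, 100), 2))  # ~0.87
```

This is why a small minority of verifying users can still protect everyone: interception at any scale becomes risky for the server operator.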


You are trusting the server to be honest, but maybe not when you think.

In Signal's design, participants have a long-term identity key, and the thing you're verifying is essentially just the combination of your long-term identity key with the other party's long-term identity key, deterministically ordered so that you both see the exact same value. They call this the "Safety Number". So, e.g., if your identity can be summarised as A4 and Jim's identity is C6, Signal will show you the value A4C6 as the "Safety Number" for Jim, while Jim also sees A4C6 as the "Safety Number" for lucgommans.
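A minimal sketch of that idea (this is not Signal's actual algorithm - the real derivation uses thousands of iterations of SHA-512 over a versioned encoding of the identity key and user ID, and the function names and parameters here are illustrative):

```python
import hashlib

def fingerprint(identity_key: bytes, user_id: bytes, iterations: int = 5200) -> str:
    """Reduce one party's long-term identity key to a 30-digit fingerprint
    via iterated hashing (simplified stand-in for Signal's scheme)."""
    digest = identity_key + user_id
    for _ in range(iterations):
        digest = hashlib.sha512(digest + identity_key).digest()
    # Turn the digest into six 5-digit groups -> 30 decimal digits.
    groups = []
    for i in range(6):
        chunk = digest[i * 5:(i + 1) * 5]
        groups.append(f"{int.from_bytes(chunk, 'big') % 100000:05d}")
    return "".join(groups)

def safety_number(my_key: bytes, my_id: bytes,
                  their_key: bytes, their_id: bytes) -> str:
    """Both clients sort the two fingerprints before concatenating,
    so each side independently computes the identical 60-digit value."""
    halves = sorted([fingerprint(my_key, my_id), fingerprint(their_key, their_id)])
    return halves[0] + halves[1]
```

The deterministic ordering is the key point: you and Jim feed the same two keys in (in either order) and get the same displayed number, which is what makes reading it side by side meaningful.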

This value is calculated by your client (and Jim's client). The server could present you with a new long-term identity key for Jim (because Jim's phone dropped dead and he bought a new one, or because the Secret Police want to intercept messages for Jim), but this triggers your client to warn you that the Safety Number changed and you need to decide whether this is still Jim.

The Safety Number isn't calculated per message, call, or conversation, because it's made from these long-term keys.

The Signal UI reflects the reality that the only way to be sure Jim is seeing the same Safety Number as you is to physically meet up and compare. I think it has pretty nice affordances for that scenario: you can scan a QR code from another Signal user to verify them.

It's tempting to think: "But I could just read my Safety Number out on this call, and then we'd verify that way." Signal won't prevent you from attempting this, but it isn't necessarily safe, so Signal doesn't encourage it either. Could a nation-state adversary fake up the "verification" step in a voice call? Maybe. Would you notice if they tried? Maybe. Best to sidestep the maybes entirely: if your threat model requires it, actually perform verification of the Safety Number in person.


I'm confused, what is it that you're trying to say? None of it adds up to "you are trusting the server" and all of it (except the part saying that you're necessarily trusting the server) is exactly my understanding.


It's unclear to me whether you understood when you are being given potentially untrustworthy information by the server.

Some of the other designs you described try to have users verify the ephemeral end-to-end encryption keys. If Signal did that, then obviously each call, text conversation, or whatever would have new keys, and trust wouldn't carry over from one to the next. But Signal's long-term identity key ties everything together. The Safety Number is about ensuring you really have Bob's long-term identity key (and Bob has yours), rather than being about this particular call, conversation, etc.



