The Messaging Layer Security (MLS) Protocol (ietf.org)
134 points by Sami_Lehtinen on Nov 15, 2020 | 26 comments



I find the most interesting discussions surrounding this protocol are about whether it should support deniability / repudiation of messages. The Off-The-Record messaging people--including Ian Goldberg specifically--were unhappy that this was a non-goal, and the reasoning on the mailing list at the time was frustrating (that deniability can be used by "terrorists" to dodge investigations); but I have also seen better arguments from them on their issue tracker (about power imbalances... though this argument was also controversial).

https://mailarchive.ietf.org/arch/msg/mls/ZJ4e78obXSdYWnxmsN...

https://github.com/mlswg/mls-architecture/issues/50


Just because deniability is a non-goal for them doesn't mean we can't add it to a variant of MLS in the future.

The Signal Protocol achieved deniability by using a three-way Diffie-Hellman handshake instead of digital signatures: https://signal.org/docs/specifications/x3dh/#deniability
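
Roughly, the trick looks like this (a minimal sketch in Python using the cryptography package; the key names loosely follow the X3DH spec, and one-time prekeys, the prekey signature, and the double ratchet are all omitted):

    # Deniability via Diffie-Hellman instead of signatures: the session key is
    # derived from three DH outputs binding both identities, and either side can
    # compute it, so neither can later prove who authored the transcript.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    alice_ik, alice_ek = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    bob_ik, bob_spk = X25519PrivateKey.generate(), X25519PrivateKey.generate()

    def kdf(secret: bytes) -> bytes:
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=b"\x00" * 32,
                    info=b"x3dh-sketch").derive(secret)

    # Alice: DH(IK_A, SPK_B) || DH(EK_A, IK_B) || DH(EK_A, SPK_B)
    alice_key = kdf(alice_ik.exchange(bob_spk.public_key())
                    + alice_ek.exchange(bob_ik.public_key())
                    + alice_ek.exchange(bob_spk.public_key()))

    # Bob computes the same three shared secrets with his private keys.
    bob_key = kdf(bob_spk.exchange(alice_ik.public_key())
                  + bob_ik.exchange(alice_ek.public_key())
                  + bob_spk.exchange(alice_ek.public_key()))

    assert alice_key == bob_key  # both hold the key, so either could forge with it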

I think we can do something similar. The Matrix folks are already working on a decentralized MLS, after all.


I don’t think deniability is an interesting property, honestly. I’d be much more interested in a protocol that protects against insider attacks (e.g. there is no transcript consistency: you can send some messages to some participants and different messages to others).


+1. What even is the use case for plausible deniability? The way I heard it: if your conversational partner turns out to be from the FBI and you want to blow the whistle on something, plausible deniability lets you later deny that you said something to the agent. But who is the judge going to believe: the agent who claims message X was sent to them, or the user who says the agent must have forged message X? Heck, if the government is that suspicious of someone, they can also order the ISP to set up a tap and see that the right number of bytes were sent at the right time. The feature requires extra complexity in security protocols, and it might give people a false sense that they can just deny anything afterwards when that is not the case.

Checking Wikipedia to make sure I understand the point correctly: it claims deniability is more about denying that encryption was used at all (good luck with that one in an encrypted chat), or that there can be multiple plaintexts (so you can later come up with a completely different chat log, lying in court and claiming the other party forged everything, which has the same issue as above).

Transcript consistency, on the other hand, seems to me like an expected feature for any post-2010 multi-party encryption protocol. Its absence can be practically exploited to change the conversation shown to a third participant (taking a "yes" of mine from one place and pasting it elsewhere while withholding the original reply...).
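
One common way to get that property, as a toy sketch (an illustration of the idea only, not how MLS or any particular protocol implements it): every message commits to a running hash of the conversation so far, so a reply cut from one transcript no longer verifies in another.

    # Toy transcript-consistency check: each message is bound to a running hash
    # of everything before it, so transplanting a "yes" under a different
    # question yields a transcript hash the other participants will reject.
    import hashlib

    def chain(prev: bytes, sender: str, message: str) -> bytes:
        return hashlib.sha256(prev + sender.encode() + message.encode()).digest()

    h = b"\x00" * 32                            # agreed starting value for the group
    h = chain(h, "alice", "shall we ship it?")
    h = chain(h, "bob", "yes")                  # this "yes" only verifies after that question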


The main advantage of deniability is that it makes the encrypted messaging model more similar to the non-encrypted messaging model. Without encryption, if Alice sends Bob a message, a third party who later gets access to Bob's device (but who was not present at the time the message was sent) cannot know whether Alice really sent that message, or it was a forgery from Bob. With encryption but without deniability, a third party can prove that Alice sent that message, even years after the fact. With deniability, the original property is restored: anyone who can read a message can also forge it.
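
A toy illustration of that last sentence, with a shared-key MAC standing in for what deniable protocols derive from the handshake (not MLS or Signal code):

    # With a symmetric MAC, Alice and Bob hold the same key, so a valid tag only
    # proves the message came from someone in the conversation, not from Alice.
    import hmac, hashlib

    shared_key = b"k" * 32                  # in practice derived from the key exchange

    def tag(message: bytes) -> bytes:
        return hmac.new(shared_key, message, hashlib.sha256).digest()

    alice_tag = tag(b"meet at noon")        # Alice authenticates her message...
    bob_forgery = tag(b"meet at noon")      # ...but Bob can produce the identical tag.
    assert hmac.compare_digest(alice_tag, bob_forgery)
    # Contrast with a digital signature, which only Alice's private key can produce.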

Even more important, a signed message without deniability keeps its signature even when it's moved outside its original context. If malware copies all the data from Bob's device, whoever controls it can ask for a ransom by threatening to reveal the signed message and prove it came from Alice; with deniability, they cannot prove that this message was not a forgery, either by Bob or by whoever revealed it.

That is, the interesting case is not when one of the parties is an "FBI agent"; deniability does nothing in that case, since the "FBI agent" can be assumed to not forge the message (and thus, by exclusion, it can only have come from the other party).

> Heck, if the government is that suspicious of someone they can also order the ISP to setup a tap and see that the right number of bytes at the right time were sent.

Setting aside that, unlike non-deniable encryption, this is not retroactive (a government cannot order an ISP to set up a tap in the past), the number of bytes sent reveals nothing about the message other than its size (and if padding is used, not even that).


> a third party [...] cannot know whether Alice really sent that message, or it was a forgery from Bob

But realistically, how often does Bob create and store forgeries of all of Alice's messages to protect her just in case his device gets hacked? This seems even less likely than the blackmail scenario proposed in a sibling thread and your second paragraph.

I'm not saying "let's not add features that are not absolutely essential" (I'm one of the people in favor of client-side hashing, which other security people seem to reject for having too small a benefit), but rather that I've seen the amount of complexity it adds, and I just can't think of scenarios plausible enough to be worth the potential false sense of security and the added complexity in an already complex protocol. TLS is simpler, and we see how many ways that got broken. (Come to think of it, it is curious that I can name various old TLS/SSL flaws but not any in previous versions of OTR, Axolotl/Signal/Wire, etc. Protocol versions, that is; not implementations. But perhaps they're just publicized less.)

> unlike non-deniable encryption, this is not retroactive (a government cannot order a ISP to setup a tap in the past)

There are much simpler solutions to prevent retroactive proving. Short of just throwing away the chat log, the receiving client could discard the signature after verification. That maintains plausible deniability for the stored messages, because no signature of Alice's remains, and it adds literally zero complexity to the protocol. That's just an idea I came up with on the spot, so perhaps it's really stupid for some reason, but why don't we just do that?
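
Sketched out, that receiving side could look something like this (hypothetical helper, Ed25519 via the Python cryptography package):

    # "Verify, then drop the signature": the stored log keeps only the plaintext,
    # so nothing in it can later prove authorship to a third party.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    stored_log = []                                   # what actually gets persisted

    def receive(sender_public_key, message: bytes, signature: bytes) -> None:
        sender_public_key.verify(signature, message)  # raises InvalidSignature if forged
        stored_log.append(message)                    # keep the text, discard the proof

    # Demo with a throwaway key pair standing in for Alice.
    alice = Ed25519PrivateKey.generate()
    msg = b"the report is attached"
    receive(alice.public_key(), msg, alice.sign(msg))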

> if padding is used, not even that

This is about minor details now, but padding is at most as large as the block size, and modern modes like GCM don't use padding at all. The protocol would need to mix in random garbage and limit the rate at which you can send bytes to prevent time and size correlation, both of which can otherwise be independently matched against the chat log. Unless specifically thwarted with a sizeable amount of noise, traffic correlation is going to be possible. Correct me if I'm wrong, but I would not expect any general-purpose chat protocol in 2020 to consider this within its scope.
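
For what it's worth, the size side of this is simple enough to sketch (a toy bucket-padding scheme, not something any particular protocol mandates); timing would still need cover traffic or rate limiting on top:

    # Toy length hiding: round every plaintext up to the next bucket size before
    # encryption, so on-the-wire lengths only reveal the bucket, not the size.
    BUCKETS = [256, 1024, 4096, 16384]

    def pad_to_bucket(plaintext: bytes) -> bytes:
        for size in BUCKETS:
            if len(plaintext) + 2 <= size:            # 2 bytes encode the true length
                prefix = len(plaintext).to_bytes(2, "big")
                return prefix + plaintext + b"\x00" * (size - len(plaintext) - 2)
        raise ValueError("message too large for the largest bucket")

    def unpad(padded: bytes) -> bytes:
        true_len = int.from_bytes(padded[:2], "big")
        return padded[2:2 + true_len]

    assert unpad(pad_to_bucket(b"hello")) == b"hello"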


This is only true if your conversational partner is a government agent.

If it’s just a blackmailer or con artist, deniability is very useful.


> [for] a blackmailer or con artist, deniability is very useful

Fair enough, but for any legitimate purpose?


You changed the meaning in an unhelpful way.

If you are being blackmailed by someone, you would find it useful to be able to repudiate the kompromat they are using.


I misunderstood what you meant; I wasn't intentionally changing your meaning. Sorry for that.

Indeed, being able to deny that you said something someone is blackmailing you about is useful. But you can deny it anyway. Have you ever heard of a criminal blackmailing someone by saying "and I have your cryptographic signature to prove it"? The person the criminal wants to prove it to would also have to be convinced that this public key is indeed yours. It's not an impossible scenario, but it does seem contrived.


No - I haven’t heard of that situation.

However, I know of people being afraid of being blackmailed over things like text conversations, and whether or not it came down to a signature, plausible deniability was definitely a factor in how seriously they took the threat.


Deniability is in the news now with the Hunter Biden emails. If you steal a bunch of emails, they’re only useful as blackmail if they’re not deniable. If there’s deniability, the would-be blackmail victim can just say that the emails are fake and it’s he-said/she-said. It lowers the incentives for stealing emails.


For a layman, does this solve the end-to-end encrypted group chat problem?


It solves the scalability issues of current generation key distribution algorithms.

In my opinion the best article about it is: https://blog.trailofbits.com/2019/08/06/better-encrypted-gro...
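
To make the scaling point concrete, a rough back-of-the-envelope comparison (assumed costs, not measurements): pairwise Signal-style sessions need on the order of n public-key encryptions per sender key update, while a tree-based scheme like MLS's TreeKEM needs roughly log2(n).

    # Back-of-the-envelope: public-key encryptions per key update,
    # pairwise sessions vs. a binary-tree scheme.
    import math

    for n in (10, 100, 1_000, 10_000, 50_000):
        pairwise = n - 1                      # re-send fresh key material to everyone
        tree = math.ceil(math.log2(n))        # encrypt to ~log2(n) nodes on the path
        print(f"group of {n:>6}: pairwise ~ {pairwise:>6}, tree-based ~ {tree:>2}")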


> Establishing keys to provide such protections is challenging for group chat settings, in which more than two clients need to agree on a key but may not be online at the same time. In this document, we specify a key establishment protocol that provides efficient asynchronous group key establishment with forward secrecy and post-compromise security for groups in size ranging from two to thousands.


You would think this problem was already solved, since it’s a similar problem when navy ships from different countries need to communicate: they need to exchange keys too, and ships can be added to or removed from the group.

For example, joining an (n,n)-signature scheme without a trusted third party.


Now increase that by orders of magnitude and you need a new protocol for efficiency reasons. Things don’t always scale well.


It also solves the post-compromise eavesdropping problem.


Oddly, I do work reasoning about security protocols, and the notation in this still seems opaque. The Trail of Bits article has some standard notation in its cartoons, but both seem to jump between levels of abstraction. Does anyone pay to have these things described clearly?


I have heard it does not support decentralized services as it requires a central entity?


There is a discussion on client-ordered state updates, but the implementation is outside the protocol's scope. There is no requirement for a centralized service so long as you find an agreeable conflict-resolution strategy, one that works from a user's perspective as well.


Some Wire and Matrix people are working towards a decentralized MLS proposal, last I heard.


yup, we’re working on Decentralised MLS for Matrix. Wire have something going on too.


I’m not sure how exactly to read this, but I’m curious how it differs from the Signal protocol.


Signal doesn’t really address group chat, or it does, but in some very limited ways that won’t scale past a lot of participants (depending on which group chat protocol you use among the different ones that have been implemented on top of the Signal protocol). Now, if you ask me why you would want end-to-end encryption in a group with hundreds, thousands, or more users, I have no answer for you. IMO group chats past 30 users should revert to public group chats instead of giving you a false sense of privacy.


> IMO group chats past 30 users should revert to public group chats instead of giving you a false sense of privacy.

Whether or not that's a "false" sense of privacy depends on your threat model.

Is your primary threat "What if someone hacks the communication server and leaks our conversations?" (This is something I worry about a lot.)

Is your primary threat "What if someone in our chat is a spy and we're planning crimes?" (This is something I don't worry about, but others might.)

In the first case, E2EE for up to 1000 participants in a group still makes sense.

In the second case, every additional participant is an additional liability for government subversion.



