
With the recent dust-up around ProtonMail's "end-to-end encryption", it's hard for me to believe that the hammer isn't going to come down on Matrix in a similar way. Nobody has seriously turned their attention to the compromised-homeserver issue, even though that's the only threat model under which e2ee makes sense. Expecting users in hundred-person group chats to manually verify each individual user's keys is nonsensical. And even a single unverified key means that that user's homeserver operator could be listening in. I feel like Matrix should move more towards emphasizing how easy it is to self-host and maintain control of your own Matrix data, making e2ee a non-issue.



> I feel like Matrix should move more towards emphasizing how easy it is to self-host and maintain control of your own Matrix data, making e2ee a non-issue.

Matrix is way ahead of you. There's ongoing work to build P2P clients [0]. That is, each phone/device can be its own home server. The latest update was in May [1].

[0]: https://matrix.org/blog/2020/06/02/introducing-p-2-p-matrix

[1]: https://matrix.org/blog/2021/05/06/introducing-the-pinecone-...


That sounds very promising! Having two mobile devices with 5+ radios (2G, 3G, 4G, wifi, Bluetooth, NFC, ...) within 5 cm of each other, yet unable to talk directly to each other in a user-friendly way, is such a fail!

Like really, my Palm TX could do that via its infrared port!

So any effort to break this stalemate makes me very happy. :)


I have tried for a long time now to understand the reason for this design limitation (no easy, efficient peer-to-peer on phones).

I have come to the conclusion that the inability to communicate is not a fail but intended.

It seems like security and business interests are standing in the way, not primarily technical hurdles.

Mesh networking and service discovery are issues where technical solutions exist. But their application is slowed or blocked by network operators and phone/OS manufacturers to enforce a central authority and paying subscribers.

Privacy concerns are often used to explain these decisions, but I see those as mere excuses. Mesh node identifiers could just be randomly regenerated periodically or handled anonymously. Also firewall rules could easily ensure only authorized services can communicate.
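
To make the point about rotating identifiers concrete, here is a minimal sketch (Python; the announce() callback is a stand-in for whatever radio/mesh layer would broadcast the ID, nothing here is an existing API) of a node regenerating a random, locally-administered identifier on a schedule, much like the MAC randomization phones already do when scanning for WiFi:

    import secrets
    import time

    ROTATION_SECONDS = 15 * 60  # assumed rotation period, purely illustrative

    def random_node_id() -> str:
        """Generate a random 48-bit identifier formatted like a MAC address.

        Setting the locally-administered bit and clearing the multicast bit
        keeps it out of the globally assigned address space.
        """
        raw = bytearray(secrets.token_bytes(6))
        raw[0] = (raw[0] | 0x02) & 0xFE
        return ":".join(f"{b:02x}" for b in raw)

    def run_node(announce):
        # `announce` is a placeholder for the mesh layer; with a fresh ID
        # every rotation period, observers can't track the device by ID alone.
        while True:
            announce(random_node_id())
            time.sleep(ROTATION_SECONDS)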

It was interesting to observe how quickly similar P2P features were enabled in the fight against Covid-19 (contact tracing via BLE advertisement packets).

At the same time it is harder than ever to use, for example, Android's WiFi or Bluetooth in an app-controlled manner. Apparently only the central authority, and neither the owner of the device nor independent app developers, is supposed to actually use the device's capabilities.

Quite frustrating.

I have been thinking about building a generic case/USB gadget to enable free communication, but such a solution would have many drawbacks versus using the internal radios.


Privacy concerns are real, but possibly misguided. Mesh networking will obviously reveal much more to people around you than the centralised model. But in return you are shielded from centralised infrastructure. It's a legitimate trade-off, and I can imagine people preferring one way or the other. But instead of talking about it as a trade-off, we tend to reject ideas because one of the aspects would regress.


I also think it's intended by the people actually paying for development of the phone hardware and OS. It also doesn't help that iOS is fully proprietary and that even the open parts of Android are effectively developed by a privileged group of Google engineers with zero community input on project direction...

The involved parties want to sell you cloud services (hello Google!), to have you use up mobile data & calls/SMS (hello operators!), and ideally to have you throw the phone away after a year or two (hello manufacturers).

And all they need to do is build a device/OS combo that needlessly peddles data via the cloud somewhere on the internet, has no data card slot, and can't talk directly to a similar device on the table next to it...


It still does not solve key distribution.

Matrix P2P removes the homeservers, but clients are still talking to each other through some medium that can impersonate them.

There's really no magic solution to this: you need either a trusted third party (such as certificate authorities, which are expensive) or a web of trust (impractical for users).


Matrix's existing E2EE has identity verification as effectively a one- or two-hop web of trust using QR codes or emoji comparisons. It's not impractical for users, because everyone is familiar these days with scanning a QR code as a mechanism for logging into an app (cf. Discord, WhatsApp, Signal etc). So as long as you scan the people whose identity you care about, you're sorted, and it's much easier for normal users than the clunky old PGP signing parties of years gone by (plus it doesn't leak as much metadata, given it's just one or two hops of trust).
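
For anyone curious how an emoji comparison can stand in for comparing key fingerprints, here is a simplified sketch of the bits-to-emoji mapping. It is not the real Matrix implementation: the actual SAS method derives its bytes with HKDF-SHA256 over the verification's shared secret (with spec-defined info strings) and uses a fixed table of 64 named emoji; the table and KDF below are placeholders.

    import hashlib
    import hmac

    # Stand-in table; the Matrix spec defines 64 specific emoji with names.
    EMOJI = ["dog", "cat", "lion", "horse", "unicorn", "pig", "elephant", "rabbit"] * 8

    def short_auth_symbols(shared_secret: bytes, info: bytes) -> list:
        """Map a shared secret to 7 symbols (42 bits, 6 bits per symbol).

        Both devices run this over the same secret; if the 7 symbols match,
        no man-in-the-middle swapped keys during the exchange.
        """
        # Placeholder KDF; Matrix really uses HKDF-SHA256 with info strings
        # binding both devices and the verification transaction.
        digest = hmac.new(shared_secret, info, hashlib.sha256).digest()
        bits = int.from_bytes(digest[:6], "big")  # 48 bits, only 42 are used
        return [EMOJI[(bits >> (42 - 6 * i)) & 0x3F] for i in range(7)]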


Can we get an arewep2pyet.org? Maybe the wait would be more bearable ;)

Or if you prefer threats: "If I die from covid before matrix p2p is a thing, my ghost will haunt you for evar!".


excellent idea - have registered :)


For me it looks very weird and defeats the whole purpose of Matrix. I mean: it was designed from the ground up as a federated network, similar to e-mail. Turning it into a completely different topology will not end well. If they want a P2P client, they should design it from scratch. Otherwise it'll end up as a bunch of hacks glued together with tape.


It wasn't designed exclusively as a federated network, tbh - we've been planning for P2P since pretty much day one; https://matrix.org/~matthew/2016-12-22%20Matrix%20Balancing%... is a talk I gave in 2015, when Matrix was less than a year old, on our plans to go P2P.

Personally, I think it's really nice that in P2P Matrix precisely the same client-server API is used as for normal Matrix, so all the work that goes into the Matrix client and E2EE etc is preserved without any changes at all. Instead, it just talks to a server running on localhost, which then replicates traffic around over the P2P overlay (in future using store-and-forward servers if the target server is offline). It's really not a bunch of hacks, thanks to Matrix being architected to make the replication transport & protocol pluggable from the outset.
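
To illustrate the "same client-server API, just pointed at localhost" point, here is a minimal sketch using plain HTTP against the standard Matrix client-server endpoints; the port, user, password and room ID are placeholders, and error handling is omitted:

    import requests
    from urllib.parse import quote

    # In P2P Matrix the homeserver runs on the device itself, so an ordinary
    # Matrix client simply addresses the client-server API at localhost.
    BASE = "http://localhost:8008/_matrix/client/v3"

    # Log in exactly as you would against a remote homeserver.
    resp = requests.post(f"{BASE}/login", json={
        "type": "m.login.password",
        "identifier": {"type": "m.id.user", "user": "alice"},
        "password": "correct horse battery staple",
    })
    token = resp.json()["access_token"]

    # Send a message; the local server then replicates it over the P2P overlay.
    room_id = "!someroom:example.org"  # placeholder room ID
    requests.put(
        f"{BASE}/rooms/{quote(room_id, safe='')}/send/m.room.message/txn1",
        headers={"Authorization": f"Bearer {token}"},
        json={"msgtype": "m.text", "body": "hello over the overlay"},
    )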


Thanks, that's reassuring. I really like Matrix and don't want it to collapse because of a lack of vision.


Realistically, when would you be in a hundred-person encrypted group? Mostly this is the case when you're a member of some kind of organization, and there are ideas for how to solve this case without pairwise verifying all participants (e.g. by delegating trust through a single trusted person such as the CEO, reducing the number of verifications necessary from N(N-1)/2 to N-1). Even without this, fully verified E2EE is still feasible and useful for smaller groups.
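
To put rough numbers on that (a back-of-the-envelope calculation, nothing Matrix-specific):

    N = 100
    pairwise = N * (N - 1) // 2   # every pair verifies each other: 4950
    delegated = N - 1             # everyone verifies one trusted person: 99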

And even if you own the homeserver, you still want E2EE since you don't want the data to rest in plaintext server-side.

However, there is work currently being done to make it feasible for every node to also be its own homeserver, via P2P Matrix (https://matrix.org/blog/2020/06/02/introducing-p-2-p-matrix).


At uni, it's common to have WhatsApp groups for classes, and those tend to be encrypted groups of 100 to 200 people.


How important is it for these kinds of groups to be E2E encrypted, though? If you're sending a message to 100 people then you probably ought to consider it de facto public even if only the intended recipients receive it.


Partly "middlingly important" because yes, it's kinda effectively public.

But also: why the heck wouldn't you want to encrypt it? The existence of leaks doesn't make basic prevention useless.


How many of those have verified all the public keys? If you never do verification, e2ee is basically meaningless.


"You can fool some people sometimes, but you can't fool all the people all the time."

If you don't verify the keys, e2ee is basically meaningless against targeted surveillance. As long as some fraction of people verify keys, it is still effective against mass indiscriminate surveillance.


how is e2e better against mass indiscriminate surveillance than just normal TLS? The only time when e2e is meaningfully different than https is when the server you're talking to (i.e. your personal matrix homeserver) is compromised. In that case, aren't you already in the realm of targeted surveillance?


Some homeservers are larger than others (e.g. matrix.org). They don't all need to be compromised to enable mass surveillance. It also depends on where TLS is terminated. If you're running a homeserver on AWS or something behind their load balancer, there's a difference.

Generally, I'd argue that E2EE provides defense in depth against "unknown unknowns" if server infrastructure is compromised by any means. Although I do acknowledge it adds one more level of complexity, and often another 3rd party dependency (presuming you're not going to roll your own crypto), so it's not a strict positive.


> The only time when e2e is meaningfully different than https is when the server you're talking to (i.e. your personal matrix homeserver) is compromised.

Only if everyone's running their own personal homeserver, which seems pretty unlikely for regular people. You could've said the same thing about email (it's not meaningfully different unless your personal email server is compromised), but in reality the NSA ran mass surveillance on gmail and picked up a lot of data that way.


Serious question, if a surveillance organization had control of a certificate authority trusted by your client, would that allow them access to traffic whose security relied on a certificate from that authority?


By that logic the vast majority of users of whatsapp, signal and most other e2ee protocols/apps use it in a useless way, right? Most people I know who use these apps (even the security-conscious ones) never verified the key.


Signal tells you outright when someone's key has changed, though. It's usually pretty trivial to establish that the original conversation is authentic when you're just talking with people you know in real life (where an impersonation attempt would likely fail for numerous reasons), and you can assume that device is still theirs until their key changes.


There is still a risk that someone is running a MITM attack. The initial conversation would be authentic, but the key belongs to someone else who is just forwarding the messages. Your communications would no longer be private and they could switch from passive eavesdropping to impersonation at any point without changing the key.


Most people rotate their Signal keys every time they rotate their phone hardware (which is inexplicably often for some people, apparently), because keys are just auto-accepted everywhere, so there is no real incentive to bother moving them. In larger groups there's always someone.

It isn't helped by the fact that the backup process is a bit obscure and doesn't work across operating systems. For the select few who care, verifying keys is effective against attackers who aren't Signal themselves, Google, or in control of the Play Store. Just make sure to keep an eye out for that key-changed warning; it's easy to miss.


It used to tell you. Does it still?


Yes


Yes, pretty much.


> Matrix should move more towards emphasizing how easy it is to self-host

They are already moving beyond that - towards P2P Matrix, with a network mixing traditional servers and individual nodes that collapse client and server: https://matrix.org/blog/2021/05/06/introducing-the-pinecone-...


What dustup around end-to-end encryption? I'm only familiar with ProtonMail's IP logging controversy. I was unaware it impacted E2E at all?


I'm pretty sure OP is just misinformed. That, or they're talking about that one huge hole in PM's PGP implementation years and years ago.


Right, the dust-up was "PM advertises their service as secure and end-to-end encrypted, but in practice that doesn't mean much because user access is non-anonymous and they're still a central metadata honeypot for any law enforcement agency to log accesses to" (compared to e.g. Signal which AIUI is designed as an anonymous dropbox so that Signal can't correlate between user access to their servers and individual senders)


Did you find further information? Would've been nice for the GP to point to a source or news item.


Even if it's not currently practical for 100-person group chats, verifying each other's keys is quite practical for 1-on-1 chats or other small groups of, say, 10 people or so, and you get a great benefit from doing so! Two normal users could use any Matrix server, including the huge matrix.org one, and communicate privately in a direct chat with peace of mind.

Of course, I think self-hosting and decentralizing the servers is very important and good work too!


>I feel like Matrix should move more towards emphasizing how easy it is to self-host and maintain control of your own Matrix data

The problem is, it isn't easy at all. Running a homeserver requires gigabytes of RAM and significant CPU performance to work at usable speeds. It just isn't feasible to run it on some dirt-cheap VPS; you need a beefy machine.


You'll be thrilled to know this pretty much isn't true anymore, even with Synapse. My instance is running on a cheap Hetzner VPS with a not very powerful CPU and is currently using about 700M RSS and not much CPU. And I'm in a lot of rooms, some of them quite large and high in traffic.

I'm also not even using Synapse workers at all, just a monolithic instance. Splitting the setup into workers would buy me an additional speedup if things got overly slow.


Yes, unfortunately Synapse is possibly one of the worst apps I've ever sysadmin'd, and I'm not even the primary sysadmin for my homeserver. I still use it, regularly, but we're all really anxiously anticipating the release of Dendrite. I'm not the biggest fan of the protocol either, or the UI/UX of Riot. I think Matrix is a good idea, but there's a lot of historical baggage and I think the world probably needs one more "throw everything away but learn the lessons of the past" cycle before we get something really, truly good in the chat space.


Why does it require that much? I run a Jabber server, and it barely uses any resources on my VPS. Unless you mean for a significant number of users?


If you want to actually use the federation feature, you will probably join some big channels. And that means pulling a huge amount of data from other homeservers.

Disclaimer: I tested it some time ago with Synapse. Now I see there is also new homeserver software, Dendrite. It is possible that it is an order of magnitude less resource-hungry, though I wouldn't count on that.


One of our primary goals for Synapse last quarter was to make it possible for a new homeserver to join Matrix HQ (a large, public room) in under 1 GB of RAM. And we did it. https://matrix.org/blog/2021/06/15/synapse-1-36-0-released

It's not the slimmest beast, in part because Synapse needs to scale up to country-scale deployments, but its ability to scale down has significantly improved over the past year.

That said, Dendrite and Conduit (https://gitlab.com/famedly/conduit/) are exciting projects which will optimize for different operational contexts.


How and why is that different from a client being in the room? I'm pretty sure that neither desktop clients nor especially phone clients are supposed to take gigabytes of memory to listen to chats.


I'm not sure when you last tested, but my Synapse instance (v1.42) is currently using around 180MB RSS. I'm in a few rooms with 500+ users and multiple with 50+ users.


> Expecting users in hundred-person group chats to manually verify each individual user's keys is nonsensical.

Maybe this could be helped if Matrix clients supported OpenPGP encryption which allows for key signing/web of trust? Some XMPP clients already have OpenPGP support, it would be nice to one day be able to send encrypted messages to Matrix users.
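
A rough sketch of what that could look like today, using python-gnupg to armor the message body before handing it to whatever Matrix client library you use (the recipient fingerprint and the send call are placeholders; no Matrix client currently does this for you):

    import gnupg

    gpg = gnupg.GPG()  # uses the local GnuPG keyring, and with it the web of trust

    def pgp_encrypt_body(plaintext, recipient_fingerprint):
        """Encrypt a message body to a recipient's OpenPGP key."""
        enc = gpg.encrypt(plaintext, recipient_fingerprint)
        if not enc.ok:
            raise RuntimeError("encryption failed: " + enc.status)
        return str(enc)  # ASCII-armored ciphertext, fine to send as an m.text body

    armored = pgp_encrypt_body("meet at noon", "0123456789ABCDEF")  # placeholder fingerprint
    # client.send_text(room_id, armored)  # placeholder: any Matrix client send call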


The possibility to perform manual or semi-automated (QR code) key verification is still useful.

1. Those who care about encryption are able to verify their keys.

2. The service can't implement mass-surveillance in secret if there are at least two random users who verify their keys.



