Building end-to-end security for Messenger (fb.com)
289 points by contact9879 on Dec 7, 2023 | 385 comments




"Why we’re bringing E2EE to Messenger

... “

Okay, can someone give a good guess as to what's the real reason to do this? Not the feel-good BS - what's the business case? How is this gonna make them money?

It seems this just makes them lose access to a ton of data to mine for advertisement. I chat with a friend on IG about something and I immediately get ads for it. It's a bit creepy, but I feel it's working the way they'd want it to (never bring up watches, you will get watch ads for the next 6 months)

Are they bleeding a lot of users to Signal/Telegram b/c they lack encryption? (my impression is only nerds care about encryption)

Are they getting harassed by requests from law enforcement?

Are they in hot water b/c of child porn?

Do they need plausible deniability?

I don't really get why they're rolling this out. Like, what's their angle? Seems like something users don't care too much about, and they lose a ton of valuable data.


Maybe they're doing it because it's the right thing to do, and because they'd like people to trust them.

Also: https://techcrunch.com/2023/07/11/teen-and-mom-plead-guilty-...

They must be able to do good targeted advertising without message contents, with public likes and other data on scrolling behavior, especially as AI tools improve. Maybe having this data is more trouble than it's worth. Data is a liability as well as an asset.


> Maybe they're doing it because it's the right thing to do, and because they'd like people to trust them.

I worked for FB briefly. It was enough to convince me that the "it's the right thing to do" is definitely not relevant to this question.


Yeah, I wasn't under the impression that respecting the rights of customers was in their top 10 priorities.


Customers are the ones paying for ads. Users are just a resource.


The few guys in charge of security engineering don't have to share the values of the whole company.


That's kind of what I thought before I saw inside.

What actually happens is the reverse. 99% of engineers can have amazing values that you share, but they do not ultimately make the decisions. The board, Zuck, and the $ do.


Nonsense -- they do as long as Mark, and his chosen exec team, control whether they work there or not. Anything else is a pretty lie people tell themselves because they like the paycheck.


There were two parts to what you responded to. In terms of brand risk into the future, the second part could reasonably stand, no?


They sell ads. E2E encryption doesn't hurt that and it also appeals to the trust. So why not?


It does hurt because you can’t deliver targeted ads based on message content.


I don't think they need the exact content to sell ads.

1. sell ads based on the message itself, get crushed in the media; 2. encrypt messages but sell ads based on profile and metadata, get good publicity. I think they are doing option 2.

Messaging is just part of the platform. My guess is that they want to forgo this part and concentrate on others


All you do is have the end devices build the ad profiles and send them back to FB every once in a while.


Seconded


- It's hard to imagine a project getting signed off for just being "nice" - especially when it hurts their own business interests.

- I don't really see it making sense as a PR move to build trust. I think outside of the tech sector they are doing fine on that front. The vast majority of people use their Bytedance, Meta, Tencent, etc. apps and aren't considering their encryptedness.

- I don't think this announcement will get any substantial press coverage

- It could be preemptive so that they don't get bad PR when they end up being "complicit" in getting people sent to jail for abortions (in the US) or being gay (in some African countries) or whatever


> It could be preemptive so that they don't get bad PR when they end up being "complicit" in getting people sent to jail for abortions (in the US) or being gay (in some African countries) or whatever

Exactly this. With the recent laws passed they see how their altruistic "save the children" partnerships with law enforcement could be twisted for causes that aren't as popular everywhere.

Also, it costs money to serve all those warrants.


> Maybe they're doing it because it's the right thing to do, and because they'd like people to trust them.

If that is true, good. But it'll take a great many "right things" before I would trust anything Zuck owns.


“People just submitted it. I don’t know why. They ‘trust me.’ Dumb fucks.”


OP’s question rejects the premise of your justification. Facebook doesn’t have much of a track record of “doing the right thing” for its users.


It's the "right" thing to do only in the eyes of a small population of tech workers. Most people do not care, and ad customers would be very upset if this degrades targeting.


I contest both opinions:

1. More and more people care about this, e.g. journalists, politicians, etc. Apple has been talking about this a lot (some of it admittedly marketing for Messages), so their customers are already somewhat aware of private messaging.

2. It may not degrade ad targeting that much. I imagine doomscrolling does it way more: you engaged with this post, you ignored that one, and so on.


For 2, I will say anecdotally that I have never in my life bought something directly from an ad until the last 2 years on Instagram. It actually found things directly useful to me that I did not know about beforehand (I did still go through an hour of research or so, but was amazed at the algorithm's discovery capability).


I think end-to-end encryption should be the minimum requirement for any private/direct messaging in any chat application. For group chats and larger I don't think it's as necessary, since the likelihood that the conversations will be leaked is much higher; reasonable encryption for those is fine. I do think it's entirely possible to have a conversation that sounds incriminating out of context and in fact is not even remotely relevant. If my shitposting conversations from my teens were taken out of context and shown in a courtroom, I'd be facing several life sentences in an asylum.


I think you're right insofar as they are trying to reposition themselves as more trustworthy. I think they see the writing on the walls.

But at the end of the day, their ultimate end is to make more money. If they do the right thing, it isn't out of some altruistic motive. It's because they think that doing so will make them more money.


Zuck: I have over 4,000 emails, pictures, addresses, SNS

[Redacted Friend's Name]: What? How'd you manage that one?

Zuck: People just submitted it.

Zuck: I don't know why.

Zuck: They "trust me"

Zuck: Dumb fucks.

https://www.esquire.com/uk/latest-news/a19490586/mark-zucker...


I maintain that that was an intelligent commentary on human nature, and that it has been misconstrued.

He was saying “I could be anyone” not “I can’t be trusted.”


What a refreshing perspective. I also trust them to act altruistically.


Some trust is indeed refreshing. But don't be naive.

With Zuckerberg's and FB's track record, their business decisions ought not to be trusted.


I guess the /s was needed.


Ha, good one


> Maybe [Facebook is] doing it because it's the right thing to do

This is the most autistic thing I have read on the internet this year, second to a personal friend sperging out thinking he was going to wife up the first girl he met at a party.


Encryption is a great argument against messenger-interop regulations like the EU is planning.

https://www.eff.org/de/deeplinks/2022/04/eu-digital-markets-...


Why would it? Diffie-Hellman key exchange is a thing.
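
For the unfamiliar: textbook DH lets two parties agree on a shared secret over an untrusted channel, so the interop servers in the middle never need to handle key material. A toy sketch (toy parameters only, nothing like a real implementation, which would use X25519 or similar plus authentication):

    import secrets

    # Toy Diffie-Hellman: two clients on *different* servers can still agree
    # on a shared secret, because only the public values A and B cross the wire.
    p = 4294967291          # a small public prime modulus (2**32 - 5)
    g = 5                   # public generator

    a = secrets.randbelow(p - 2) + 1    # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1    # Bob's private exponent

    A = pow(g, a, p)        # Alice publishes A via her server
    B = pow(g, b, p)        # Bob publishes B via his server

    assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same secret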


I'm not sure what Diffie-Hellman has to do with anything here, but yeah, there's no reason encryption would prevent interoperability as long as all clients are using the same protocol (which they would have to do anyway in order to be interoperable).


Encryption makes it practically impossible to transform messages between different protocols, since the ciphertext contains not only the text content of the message but also formatting and attributes (e.g. `reply_to`). Even if the format were the same, E2EE algorithms also differ between protocols, and you can't re-encrypt the message for other protocols server-side.


I thought the opposite, at least as a first thought: roughly two or three years ago, Facebook announced their intent to integrate their messengers - so that you could send a message from your FB inbox to WhatsApp, from WhatsApp to Instagram. And since WhatsApp has E2EE as a major part of its marketing, I'd think adding it to FB and IG rather than removing it from WA would be the way to go.

(Though of course it's not REALLY end-to-end in practice: it harasses you to back up your messages all the freaking time, and when I say "never", as I ALWAYS do, it asks again in 2 weeks. I assume once they're backed up on Meta's servers, there goes the encryption. So that's a parlor trick and they STILL get that data, as I assume at least 80% back up anyway and the rest are mostly worn down by the constant prompting.)


That's because a single org controls all three messengers, so it can develop them to converge to the same message format and the same encryption mechanism. At the same time, Signal or XMPP will use a different format and a different mechanism, making them incompatible with messages from Meta, unless a client with a private key re-encrypts them.


Except this change seems to be triggering a reversal of that integration: https://help.instagram.com/654906392080948


WhatsApp doesn't backup to Meta servers. It only supports Google Drive on Android and iCloud on iOS.

You can also optionally encrypt the backups.


But then why do they not take no for an answer, keep nagging about it, and interpret "never" as "not in the next two weeks, but ask again, please!" if they don't have an interest in having these messages there?

(And no, "it's to help YOU, the hapless user!" is of course never the right answer. Corporations never do things for users without an interest of their own.)


That's why we need internet standards for IM. Sadly, not many people seem to care about whether their messenger is XMPP-compatible or not.


An argument, not a valid argument.


"How can we be sure a 3rd party implements the encryption properly" is the counter-argument.

How would you refute that? Trust users to check that some code is the same on both devices? What would prevent a bad actor from MITMing the whole thing from the start?


> What would prevent a bad actor from MITMing

It's not man-in-the-middle, it's man-on-the-end. If your chat app wants to spy on you, there is nothing you can do, but at least it becomes obvious and easy to analyze because it's client-side code. It's not a counter-argument to interoperability. You need to trust both sides, the same way the web works.


Hmm, but it’s OK to trust that web browsers implement TLS properly? And your router isn’t MITMing you? Or your SSH app exfiltrating all your server information? Why is this different?


> And your router isn’t MITMing you

Can it do so if the encryption and key management is at the client?

> Or your SSH app exfiltrating all your server information

That's a small niche, and most services don't expose SSH to the public.

> OK to trust that web browsers implement TLS properly

hmm, you may have a point, maybe they'll ensure that only whitelisted browsers can access it, like Chrome with DRM for HTML. Only purpose is public safety. /s


Pretty simple actually, it either decrypts successfully or it's not implemented correctly. Same way push notifications work.


FWIU, this[1] was decrypting iMessages successfully. But it was also storing all your iMessages in a server-side database accessible to the server (instead of being E2E encrypted like iMessage is supposed to be) and leaking the authentication token to access the iMessages over unencrypted HTTP.

[1] https://arstechnica.com/gadgets/2023/11/nothings-imessage-ap...


So is key management, key exchange and discovery, revocation, etc.

That stuff is very hard to get right within a single app.

Now do it across mutually-antagonistic companies with incentives to not cooperate.


Interoperability and encryption do not contradict each other. OMEMO is such a federated encryption protocol for example.


Since they control the client, is it possible that the "ad profiling" can still take place on the client, after the message is received and decrypted for visualisation?

The E2EE only means the message is not readable "in transit" (as in after it leaves a Facebook client)


1. Meta can read the metadata perfectly well (who communicates with whom and when), which is enough for ads.

2. Meta doesn't want to be able to read messages, since it's a PR nightmare when doing so. Case: ordered to do so by a government agency. People could switch to Signal.

3. Data isn't readable "in transit" anyway, since it's encrypted with HTTPS. Only Facebook servers could read it if they wanted.


As long as they control the client any kind of government order is still a problem for them.

However it does make it a bit more difficult for them to spy on a conversation, which is arguably a good thing.


What use is the metadata for ads?


If two of your recipients interacted with a certain ad, there's a chance you have similar interests.

Combine this with the frequency of your chatting, your location (at least based on IP), and the other little bits of stuff users give about themselves, and Meta doesn't really need to know specifically what the contents of your messages are.

In the mass of their users, an informed smart guess is more than enough.


Example: I went to the dentist, and the clinic messaged me a confirmation via WhatsApp. The next day, I got several ads for orthodontic braces.


You make a post looking for a plumber, you spend time chatting with people whose profiles say they are plumbers - you are interested in plumbers.


Top of my head:

* Who are you messaging most (best friends, family). If they like stuff, you might like the same stuff.

* When are you messaging people (awake time > profiling)

* Messaging companies (obvious, what are you into)


Also: Where are you messaging

You clicked an ad about Product X, you're messaging your friend B from a store that sells Product X

-> Serve ads about Product X to B.


They control the client, so they can do whatever they want. They can take the plaintext, encrypt it with my key, encrypt it with their key, concatenate the two, send it to FB, split off their "copy" and decrypt it to do whatever with, and send my "copy" on to the recipient.
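
A rough sketch of what such a dishonest client could look like (hypothetical; Fernet just stands in for whatever session keys the real app uses, and it needs the `cryptography` package):

    from cryptography.fernet import Fernet

    # Hypothetical dishonest client: "E2EE" on the wire, plaintext for FB anyway.
    recipient_key = Fernet(Fernet.generate_key())  # shared with the recipient
    fb_key = Fernet(Fernet.generate_key())         # known only to the server

    def send(plaintext: bytes) -> bytes:
        for_recipient = recipient_key.encrypt(plaintext)  # the honest half
        for_server = fb_key.encrypt(plaintext)            # the treacherous copy
        # The server splits off for_server, decrypts it at will,
        # and relays for_recipient untouched.
        return for_recipient + b"." + for_server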

e2e isn't a tech issue, it's a trust issue. Do you* trust FB?

* You in general.


>1. Confidentiality in transit

As opposed to the prior step, "0. Analysis During Composition", in which the Messenger client does all the metadata analysis/collection while you are typing, and already knows all the tags it's going to assign to you for Meta before the message is encrypted.

Sure, third parties won't be able to see your message. But you did give Meta permission to analyse your content prior to posting.

This anti-pattern is all over Meta's products. You can see it in use when you type an update in Facebook using a browser - just try to leave your comment un-posted, or close the page, etc. Every single keystroke prompts Meta's analysis - which is completed when you press "Post" (prior to encryption/transfer ..)

So this is some slick positioning on the part of Meta's technical PR managers ..


> is it possible that the "ad profiling" can still take place on the client

I believe this is the future in a GDPR world. The server sends a list to the client of 1000 ads, and the client decides which to show based on all the data available locally and a big local neural network model to decide which you're most likely to click.
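
A crude sketch of how that could work (every name here is made up, not any real Meta or GDPR-mandated API):

    # Crude sketch of on-device ad selection; all names are hypothetical.
    # The server ships many candidates, the choice happens locally, and the
    # targeting signals never have to leave the device.
    class LocalModel:
        def predict_ctr(self, ad, signals):
            # Toy stand-in for a local neural net: count keyword overlaps.
            return len(set(ad["keywords"]) & set(signals))

    def pick_ad(candidate_ads, signals, model=LocalModel()):
        # Show the ad the local model thinks you're most likely to click.
        return max(candidate_ads,
                   key=lambda ad: model.predict_ctr(ad, signals))

    ads = [{"id": 1, "keywords": ["watches"]},
           {"id": 2, "keywords": ["plumbing"]}]
    print(pick_ad(ads, signals=["watches", "hiking"]))  # -> ad 1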


IIUC the Brave browser is already experimenting with this model. They promise[0] "privacy-preserving" ads to users AND targeting to advertisers:

"...when a Brave Ad is matched to you, it is done on your own device, by your own device, inside Brave itself. Your personal data never leaves your own device."

The mechanism is very similar to what you describe.

[0] https://support.brave.com/hc/en-us/articles/360026361072-Bra...


The problem is that the 'secret sauce' of ad targeting is that model that decides what you're most likely to click... Ad networks really don't want that model outside their data centers...

Alas, the GDPR might force a rethink on that when it gets enforced with teeth.


Exactly this; that's why for both Messenger and WhatsApp the APIs are very closed and protected so that nobody can make any third party clients.

E2EE is great but does not help at all if you don't want Facebook to read your messages and profile you based on their content / who you talk to etc.


No, this is not what E2EE means at all. E2EE means the message is not readable in transit nor is it measured, scanned, sampled, copied, exported, or modified in any way without explicit action taken to do so by one of the legitimate parties to the conversation.

If the client just leaks the plaintext or leaks any information about the plaintext that encryption is supposed to protect then the encryption scheme cannot be described as "end to end".


I agree. The "end" isn't the network interface; it's the user interface.


The client dictates what ads are shown. FB knows what ads are shown to whom. FB can now deduce what topics people are talking about. Technically, convo info has leaked. If someone is getting served ads for Trump, they probably like Trump. If they are getting ads for Biden, they probably like Biden. Etc.


Yes, so that would violate the end to end principle. If the client downloaded all of the possible ads and the selection was totally local, and interaction with any of them was a user choice I think that could still be fairly described as E2E though. Or ads were fetched by private information retrieval.


“What self interested, selfish reason do these terrible people have to this ostensibly good thing?” - paraphrasing your question.

Answer - Message content wasn’t used for advertising. I believe it had been tried at some point and found to be sort of useless. But people like you won’t believe that, so end to end encryption might help build trust and increase engagement.


That's only a valid paraphrase if you think prioritising making money and value for shareholders above all else makes you "terrible people".

Personally I doubt that any more than low single digit percentages of people care at all about E2EE. Even me, as a tech person, I don't care about it, and I actively avoid Signal because of the inconveniences that E2EE causes.

This has been a very big effort to implement, and FB no longer deploys those kinds of resources on vague whims. I think most likely something to do with regulations, and not wanting to be on the hook for user message content, but it's just wild guessing really.


A security conscious person should assume that whatever can be exploited will be exploited - especially when dealing with actors that are economically incentivized to do creepy things.


We still have to trust that Zuck is going to do E2E correctly. I just don't have that trust in him. The Messenger app is doing the encrypting, and I don't trust that FB isn't doing it in a way that lets them get the message too.


E2E encryption can be a negotiating chip with gov agencies: "we turn off encryption this time, but you'll forget about our shady ads business".


Firstly, E2E can't be turned off on a dime. WhatsApp E2E has never been turned off since it was turned on.

Secondly, please educate yourself about what the government actually thinks about the ad business you’ve described as “shady”. Even if it was “shady” to show ads based on preferences and never reveal or sell those preferences to a third party … the elected representatives in government really like having social media ads as an option in elections.

Here, read this so you can learn how government actually influences social media to do their bidding - https://knightcolumbia.org/blog/jawboned

> The senator’s office told Katie that they really wanted to ban that practice but knew they would never get it through the Senate since so many campaigns relied on the tools for their elections. So instead, they said they were going to pressure tech companies like ours to ban the use of the tool in the hopes that if one of us did so, the others would as well. Although we did not stop using Custom Audiences entirely, Facebook and other platforms did dramatically reduce the targeting options for political advertisers.


> show ads based on preferences and never reveal or sell those preferences to a third party

That's not how modern ad markets work. Those preferences are indeed revealed to third parties, specifically ad exchanges and DSPs, as part of the bidstream data.

Now, you say, those bidstream data contain no PII! Except that de-anonymizing those data is absolutely key to targeting, and is widely practiced.

Recently in the news: "Patternz", an Israeli spy-tech company, for years hoovered up and stored all the bidstream data across 87 ad exchanges and SSPs including Google, Yahoo, MoPub, AdColony, and OpenX, de-anonymized them, and claims to have profiles on billions of users including their location history, home address, interests, information about 'people nearby', 'co-workers' and 'family members'. (See also: https://pbs.twimg.com/media/F-5bA6QW8AAyfSK.jpg )

Please stop spreading dangerous misinformation about the threat programmatic advertising poses to our privacy and national security. Your extremely sensitive data are being passed around willy-nilly and this will not change until RTB is outlawed.


> claims


Soon it will be illegal (in the EU) to offer non-encrypted chats that aren't scanned for child porn. So either encrypt or start scanning.


Ylva Johansson, the EU commissioner who proposed that (apparently failing [1]) law, has used Meta’s model behaviour in reporting CSAM material to NCMEC & EU authorities as the justification for why that law should exist.

Considering it was Meta’s policy to scan even when not mandated, it seems like an internal shift in attitude.

[1] https://fortune.com/europe/2023/10/26/eu-chat-control-csam-e...


This makes the most sense to me.


All large competing messengers have E2E encryption. Today it is what customers expect, and if they are to stay competitive, Meta must roll it out.


Given that E2EE messengers usually require being run on a smartphone as the primary device, my guess is that they are trying to push the last remaining web-only users to their messenger app.

I'm one of them and I don't like this.


The end-to-end encryption also works on the web. I’ve used it and it’s excellent. You need to use a PIN to access your past messages from their backup HSMs, but other than that it’s completely transparent.


If I understand the parent comment right, this was an argument against ProtonMail's End-to-End Encrypted Webmail 5+ years ago.

The argument being that some assurances typically associated with E2EE (that "even we can't see what you're doing") are shakier without a disinterested third party serving the application to the user. If you have some target user `Mr. X`, and you operate the distribution of your app `Y`, you could theoretically serve them a malicious app that sidesteps E2EE. And since it's just a web app: the blast radius is much smaller than if you were to go through the whole update process with Google or Apple and have it distributed to all users.


Do you know if E2EE also works on the web without having to install the app? That would be novel.


Yes. It does.


??? FB Messenger is available on facebook.com ?


Yes, and my guess is that they are planning on removing the standalone messenger from the web version. You'll probably need to have the FB Messenger app installed on a smartphone device in order to use E2EE. That would make it impossible to write messages on the web version (i.e. facebook.com) without having an app installed. I currently do not have the app installed and am able to write messages on the pure web version of FB on desktop. My guess is that they are enabling E2EE to get the last remaining desktop-only-and-website-only messenger users to install the app. Hope that cleared it up.


According to the article, they went through a lot of trouble to make it work in web browsers. It would be odd to drop it after doing that.


Again, my point is not that FB Messenger will stop working in the web browser altogether. My point is that FB Messenger will stop working in the web browser if you don't have the FB Messenger app installed on your smart phone as the primary device.


OA mentions bringing E2EE to web clients


In a way that works well on low-power mobile devices?

Most people I know using FB Messenger do so on desktop via facebook.com and the app on mobile. I don't see them removing the former any time soon, but if the web-only version still exists for mobile users, perhaps that will go.


You can't use the web version on mobile, it tells you to install the app.


Or if you have to use desktop mode in your browser...


WhatsApp (also by Meta!) supports E2E encryption on the web app.


Future interoperability with WhatsApp.


Yes, that was explicitly stated in an interview[1] a while back. Quoting from the specific section:

> “Okay, well, WhatsApp — we have this very strong commitment to encryption. So if we’re going to interop, then we’re either going to make the others encrypted, or we’re going to have to decrypt WhatsApp.” And it’s like, “Alright, we’re not going to decrypt WhatsApp, so we’re going to go down the path of encrypting everything else,” which we’re making good progress on. But that basically has just meant completely rewriting Messenger and Instagram direct from scratch.

1: https://www.theverge.com/23889057/mark-zuckerberg-meta-ai-el...


Surely they could still mine the data you send and receive, because their app decrypts it for you and displays it for you.

So they could still be sending their servers data like "likes watches" without technically breaking the encryption.


They need to show the regulators they're doing something about the absurd level of data mongering they do as their quintessential business model.


What's the total cost of encryption engineering / BAU? I assume the "Facebook cares about my privacy" goodwill from unknowing users will be worth more, but building a "secure" public reputation has to start somewhere.


They can easily identify what and who you're talking to from message metadata, which is usually not encrypted. They can cooperate with government agencies this way. You don't need to know the exact content of a message; you just need to know who someone is talking to and when.


Well, it's encrypted in transit, and maybe encrypted in storage on their backend. But when the text, after being decrypted, appears in their text views and web pages, I don't think there's anything stopping them from tagging every single word, gleaning lots of data/metadata from there, and sending it home to do their magic without associating it with your identity. I thought that was a given. They also have not touched upon it, except maybe in the "Logging limitations" part - that section read like hogwash to me.

A kind of fatigue is setting in when it comes to Fb messenger and Instagram. They have already bloated these apps and they can’t really add any other gimmicks. So they are trying the “other” gimmick now.

My take, or guess, is that they are doing it because they really have nothing else to do.


> How is this gonna make them money?

It's about them not losing customers to the competition that does offer E2EE.


"we cannot give access to the user's messages as we do not have them"


Complying with warrants and other requests has a cost. By claiming not to have access to them, they can save money. I think they or some other advertisers have used the actual messages before, but concluded it was too noisy to be worth it.


Given the timing this was decided, the answer was probably "if we can't see it, it can't cause a scandal for us and can't be regulated".


I mean, it's a proprietary platform. It can't even be guaranteed the data isn't tampered with - same as WhatsApp.

It's a PR move to _say_ they did it.


Win more users and therefore more metadata by buzzwording.


Half of their customers might end up in jail with the latest Supreme Court ruling on abortion.

Nearly 1/4 of women have had an abortion in their lifetime. And there are also co-conspirators like husbands, Uber drivers, nurses, and doctors.


> Half of their customers

I have no side to take in this discussion, but just wanted to point out that the USA is not the only country that exists. I know it sometimes seems that way on Hacker News, but I promise you that there is a big wide world out there that has nothing to do with the Supreme Court :)


The topic is about a large US tech company though.


The USA makes up only about 8-10% (250M) of total Facebook users (3B) based on a quick search I just did.

They're not even the largest user base by country, which is apparently India.


But they are the most valuable cohort in terms of revenue https://www.statista.com/statistics/251328/facebooks-average...


How the hell does Facebook generate $56 of revenue per US user per quarter? Are selling ads and selling personal information really that lucrative?


What is the share of revenue from US? I would guess it's not thaaat far from 50%. The median income in the US is like 20x of India, so presumably ad views from the US ought to be quite a lot more valuable. I would guess EU + US is the vast majority of revenue.


That doesn't matter if we're talking about % of users that could be affected by some local-US law.


It's published, but last I recall US users are worth about 4x the revenue per user of Europe and >10x that of Asia.


Are you talking about individual users or the whole aggregate?

Because if the former, then users from Asia likely far outnumber users from the USA.


The post they're replying to says "half of their customers", which implies 100% of their customers are in America, which is obviously completely wrong.


It also seems to assume that all women in the USA have had an abortion???


Facebook might be a US tech company, but the US isn't their largest user base, that crown currently goes to India according to Statista: https://www.statista.com/statistics/268136/top-15-countries-...


The raw number of users matters much less for FB. The number of users who can be monetized matters much more, and the CPC for US users is always much higher than in other countries.


At this point, with the impact they have on the global stage and the fact that they will only pay their taxes where they want, it's a bit irrelevant to keep this frame of thought.


If their entire user base is American, and exactly half are women, and every single one of those women have had an abortion in the last year, and they all live in states where it's illegal, yeah, half of their customers might end up in jail.


My guess is deniability, just like Apple. Apple wanted to make CSAM detection work and make the iPhone essentially a law-enforcement weapon, but when their users hit back, they just made iCloud E2EE. With the number of child predators on FB, I am guessing that Meta wants to wash their hands of responsibility.


Their history approach is interesting, supporting key rotations as well.

However, metadata is still unencrypted, same as on WhatsApp. Meta knows who you talk to and when - this is juicy enough for both ad targeting and government surveillance.


Also, I believe they create an ID on-device for media, so they can identify when (known) images are going back and forth. I don't know why they couldn't use this for targeting even though "the data is encrypted".


I think this is a next step we must demand after everyone gets on board with E2E messaging.

Metadata is still data!


I would say that people currently have a major misunderstanding about which is more important.

Let's imagine a situation where all the messages from Meta's platforms are leaked. In one scenario, message content is plaintext, but senders, receivers, timestamps and locations are encrypted (on top of app usage behaviour).

In the other scenario, all the contents are encrypted, but the metadata is public.

In the latter, we would know to whom everyone, at any time, in any location, at what interval, has talked.

Which is more dangerous or damaging?


As a thought experiment, I'm interested in people listing metadata that fits the legal definition and teasing out types that the public would probably not think of as metadata.

I’ll start first off the top of my head:

- The (real) identity of you and every person you talk to

- The time of the messages

- The location they were sent from

- The specific device used to send them

- A sentiment analysis: were the messages positive? Negative? Depressed? Anxious? Sarcastic?

- A description of the pictures that were sent (for example by an on-device AI model)

- A transcript of any voice memos/videos


Raised elsewhere in this thread: hashes of media sent and received, maybe perceptual hashes.

Read receipts.

"User is typing" indicators.


> perceptual hashes

And if you're sending media they have on record, that means they can look up the exact same media and still have it qualify as metadata.


Exactly. This is pure marketing. "Normal" people do not know the difference, and there is a greater chance that they stay in Meta apps instead of switching.


I assume users must still be able to send messages from different devices, just by entering their login data into a new Messenger client.

According to their paper, they are doing client fan-out:

"Messenger uses this "client-fanout" approach for transmitting messages to multiple devices, where the Messenger client transmits a single message `N` number of times to `N` number of different devices. Each message is individually encrypted using the established pairwise encryption session with each device."

This means, the system is only as secure as its client registration protocol. They don't write a lot about it:

"At registration time, a Messenger client transmits its public Identity Key, public Signed Pre Key (with its signature), and a batch of public One-Time Pre Keys to the server. The Messenger server stores these public keys associated with the user's device specific identifier. This facilitates offline session establishment between two devices when one device is offline."

If I interpret this correctly, the server can, at any time it desires, silently add new clients. Those devices will receive all messages directed at that user and will be able to decrypt them.
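
Concretely, I read the fan-out as something like this (my sketch; Fernet just stands in for the pairwise Signal sessions):

    from cryptography.fernet import Fernet

    # My sketch of client fan-out; Fernet stands in for the pairwise Signal
    # sessions. The device list comes from the *server*, so a silently added
    # device would simply show up as one more entry in this loop.
    def fan_out(plaintext: bytes, device_sessions: dict) -> dict:
        # One independently encrypted copy per registered device.
        return {device_id: session.encrypt(plaintext)
                for device_id, session in device_sessions.items()}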

I guess that's in line with their bla-bla about setting user expectations:

"Our focus is on determining the appropriate boundaries, ensuring that we remain true to our commitments, setting the correct user expectations, and avoiding creating meaningful privacy risks, while still ensuring that the product retains its usefulness to our users."

Don't forget, their commitments are making profit and exploiting user data.


This sounds similar to what Apple's iMessage does as well. Ultimately, if the user cannot check which devices their client is sending messages to, then yes, the central server can tell clients to establish a pair with a hostile device the central server controls.


> Typically, E2EE messaging services rely on local storage and encryption keys to secure encrypted messages. Messenger, however, has a long history of storing people’s messages for them so that they can access them whenever they need without having to store them locally. That’s why we’ve designed a server-based solution where encrypted messages can be stored on Meta’s servers while only being readable using encryption keys under the user’s control.

I remember Telegram's founder saying they don't use E2EE because you can't store messages with full E2EE, which is obviously BS because Matrix does it, and now Facebook too.

Now they say there is no "elegant" solution. https://telegram.org/faq#q-why-not-just-make-all-chats-39sec...


It’s also obviously BS since the end-to-end encrypted data in their secret chats passes through their server tier. They could simply save it!

The fact that they go to such lengths to convince us that they are doing us a favor with their insecure-by-default approach has always rubbed me the wrong way.


But they don't store secret chats, meaning you can't restore them on other devices, and that's their supposed problem with E2EE.


Yeah -- they created a product offering that supports their weird worldview.

It's perfectly legit to store end-to-end encrypted data in its encrypted form and then secure the key material in some manner not visible to the cloud service provider. The Telegram folks have tried their darndest to convince us that this isn't really an option, and so therefore they must go insecure-by-default, even though they also pitch themselves as a bastion of secure messaging.

It's always rubbed me the wrong way, since their claims are so obviously false. Which makes me assume they either don't know their domain well enough to be trusted to do any end-to-end encryption properly, or they have some hidden agenda. Neither of those make me want to treat any part of the Telegram chat experience as secure.


They transmit the messages twice: once to relay to recipients via E2EE (Signal), and once to the storage backend using a different E2EE approach (Labyrinth).


With their proprietary client, it does not matter how secure the protocol is. There's always a risk of a bad update or total compromise. And of course ads need to be targeted.


Exactly. E2EE is just "transit encryption" when clients are not open source/ audited/ trusted. And FB cannot be trusted (I'm not going to list instances by which they gained my distrust here).

Encrypting metadata is also really hard. See the Matrix (and XMPP) community for detailed discussions on why that is.

I use and advise others to use E2EE tools that are open source, audited, and popular.

A false sense of safety is worse than understood unsafety.


That's it, right - they get the goodwill of "we value privacy, look, we gave you E2EE", and yet they still get to use your data and serve you targeted ads. Creeps.


Exactly. The protest against Reddit's ban on third party clients needs to be more widespread. We want FOSS clients for all IM, discussion and social media platforms.


Same with WhatsApp and Signal.


Well, Signal isn't taking personal data or targeting people with ads, so no. Signal AFAIK can't access your data, and it's open source, so I assume that has been proven.


Last I checked, Signal requires and uses your phone number.


Actually I think you’re right - there is one data point. Pretty impressive


I thought Signal is open source?


Also their builds are fully reproducible on Android.


Is there a guide on how one can check that the Play Store version matches the source code?



US spooks can get Apple or Google to deliver altered apps to targets, if nothing else.


Source?


The App Store and Play Store are not open source, so you trust the distribution mechanism, is what I think the parent wanted to say.


But you don't need to install them via their store. Also, you can always check the hash of the binary.


I'm not sure about that. But true, in that case only the fact that it's not open source is still in the way of me giving it my "baseline safe" approval. :)


It's "source available". They make changes to their server code, run those modified servers for a year or so, and then release the source.


The server code isn't relevant in this case, you want the client code to be secure.


I can never get excited about E2EE... It's not because it isn't important; it's because over the years I've had 2 phones die in my hands, 2 family members have lost phones (one of which is sitting at the bottom of an ocean and is clearly unrecoverable), and phones are consumables that change every few years.

I see there's some effort here on history sharing. Does that effort allow recovery of a chat history after an unrecoverable death of a primary phone? That's (honestly) the only usability thing I care about when it comes to E2EE.


The solution to this is to encrypt the cloud backup with a key derived from a password that the user remembers and can enter into a new phone. The password has to be strong because the cloud provider has the encrypted data and has unlimited time to do a brute force attack. Unfortunately strong passwords are hard to remember and users hate them.

But there is a trick to prevent brute forcing of a weak and easy to remember password like your four digit phone unlock PIN. The trick is to have a secure element chip in the datacenter, with storage encrypted by a private key that can't be extracted from the chip. The chip stores an encryption key that unlocks your backup, but it can't be extracted from the chip unless you present it with the right weak password. The chip's firmware rate limits and caps the number of attempts to unlock the backup, and if too many attempts are made it ultimately erases the key and your backup is permanently lost. So you're protected against brute forcing even with a weak four digit unlock code. If you know the unlock code from your old phone and enter it into your new phone, the secure element validates it and releases the key so you can restore the backup.

Obviously you have to trust the manufacturer of the secure element for this to work, but it's probably a good compromise for most users because losing your backups when your phone dies is quite bad. I know Google's Android backup uses this method and I believe iCloud does as well. It seems like Messenger supports this too but you have to choose your own PIN because Messenger doesn't have access to your main phone unlock code.
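
In pseudocode, the secure element's role is roughly this (my sketch, not any vendor's actual firmware; the attempt limit of 10 is made up):

    # Sketch of the secure element's logic; the attempt limit is invented.
    class SecureElement:
        def __init__(self, pin_hash: bytes, wrapped_backup_key: bytes):
            self.pin_hash = pin_hash
            self.key = wrapped_backup_key
            self.attempts_left = 10

        def unlock(self, pin_guess_hash: bytes) -> bytes:
            if self.key is None:
                raise PermissionError("backup key permanently erased")
            if pin_guess_hash != self.pin_hash:
                self.attempts_left -= 1        # rate-limit / cap brute force
                if self.attempts_left == 0:
                    self.key = None            # too many tries: erase forever
                raise PermissionError("wrong PIN")
            self.attempts_left = 10            # correct PIN: reset the counter
            return self.key                    # release the key to the new phone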


Personally, I am not attached to chat history and am totally fine with losing it. I've never backed up any chat history, and to be honest I feel weird about having 10 years old chats on my messenger account.

I might screenshot some important messages, but that's about it.


I am with you. Lost phones aside, I still have all my older phones. Every time I start with a new phone, I use WhatsApp from scratch without restoring history/etc. I don't back it up at all. It has not been a problem.


> The trick is to have a secure element chip in the datacenter, with storage encrypted by a private key that can't be extracted from the chip.

Ah yes, a HSM. Thus transferring the foundational "trust me bro" from Apple to someone like Thales Group.

Let's hope the HSM supports secret backup well enough to protect against server failure, and yet not so well as to allow the unlock attempt limiter to be bypassed.


> Let's hope the HSM supports secret backup well enough

Secrets backup for HSMs from a certain vendor—that you may or may not have named in your comment—I've worked with is actually the easy part. You just make copies of it and all the key data and check it into a git repo, because all of that data is protected by an HSM secret. Distributing that HSM secret among several HSMs for redundancy is also pretty easy.

The hard part is all the administration around it, specifically around custody of the smart cards that contain chunks of the HSM secret: where are they protected, where are the backups of the cards, who has access to them, coordinating sufficient card custodians to meet quorum, etc. You need to meet quorum to provision HSMs with the same secret.

The real "trust me" part of this is arguably less that the vendor backdoored the HSMs, and more that Apple pays the vendor support contracts (that hardware eventually fails) and maintains the knowledge continuity for the teams responsible for administering these HSMs as people join or leave those teams over time.

For what it's worth, this is pretty much why you don't see HSMs used often at less mature companies.



I backup my (encrypted) Signal conversations and sync them to some computers with Syncthing. I think it covers the "phone is at the bottom of the ocean" case.

To be fair I haven't done any disaster recovery yet, so it might not work that well...


That's not something you can ask of a casual user though. Not saying I'm one, but I already can't recommend Signal to random friends and family who are not tech-savvy, for reasons like this one.


Yep, that's the old convenience vs. privacy dilemma...


How do you back up Signal stuff? Is it included in local "iTunes" backups?


Apparently it is not possible to automatically backup your message on iOS. (I run Android.)

Source: https://support.signal.org/hc/en-us/articles/360007059752-Ba...


OK, what about on Android? Is it more flexible on that side?


You can export all your FB data easily anytime, in a way that is also easy for casual users.


Yes, the storage back-chains epoch secrets specifically for this purpose of restoring history.
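
i.e. (my guess at the construction) each epoch's secret can derive the previous epoch's but not the next, so a new device given only the latest secret can walk backwards through all of history:

    import hashlib

    # Guess at the construction: secret[n-1] = KDF(secret[n]), so the latest
    # secret unlocks every earlier epoch, while an old (possibly compromised)
    # secret reveals nothing about newer ones.
    def previous_epoch_secret(secret: bytes) -> bytes:
        return hashlib.sha256(b"epoch-back-chain" + secret).digest()

    latest = bytes(32)                            # epoch N secret, from recovery
    epoch_n_1 = previous_epoch_secret(latest)     # unlocks epoch N-1
    epoch_n_2 = previous_epoch_secret(epoch_n_1)  # unlocks epoch N-2, etc.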


The white paper's "Server-Side Message Storage" section links to a Google Doc, labelled as "draft" and with no public access. Should that point to https://engineering.fb.com/wp-content/uploads/2023/12/TheLab...? Pretty poor review.


The real link is https://engineering.fb.com/wp-content/uploads/2023/12/TheLab....

I also stumbled over that; only the link from the other whitepaper is broken, the one on the parent page works.


Can I just say that I am a little surprised that their engineering blog is hosted on WordPress?


As someone who runs an enterprise WordPress host, Facebook aren't that surprising - loads of large organisations use WordPress either within their marketing department, or as their "second CMS" (AEM is very often the primary). We're still seeing adoption growing too. The ones that might surprise you are banks and other financial institutions :)

Ultimately, WordPress is as secure as any other piece of software, but the ecosystem is so large and varied that there’s a low bar for many add-on plugins. A lot of enterprises build their own plugins for that reason, rather than using the full power of the ecosystem.

(Disclaimer: I’m also a member of the WordPress security team, but not speaking on behalf of them.)


As a Meta employee, I consider this a victory against NIH :)


Why? AFAIK all Microsoft blogs are hosted on WordPress too. Everyone uses WordPress.


WordPress is pretty big in enterprise blog-like sites, with their WordPress VIP offering.


Meta gets a lot of flak for privacy, but at the same time, they end-to-end encrypt the majority of communication happening globally (WhatsApp + Messenger), at cost to the company, with no obligation to do so.


> at cost to the company

If by "cost" you mean Meta being in the business of siphoning user behaviour, Meta controls the E in E2E a.k.a the apps, so it's a matter of trusting them to not do covert on-device analysis + result exfiltration.


Plenty of people have reverse-engineered the apps and found no evidence of this. They use the same protocol as Signal under the hood.

Many cybersecurity engineers passionate about this stuff have worked for Meta. They, too, would have blown the whistle at some point.


The E2E protocol is immaterial, it's about what the endpoint app does and which telemetry it reports.

Not saying they do it, again it's about trust in the context of:

- Meta (the company, not the employees) having a bad track record

- Meta's business model being what it is (building profiles and selling tooling around that), creating tension with privacy matters


Their track record with E2EE is pretty great, given that they opt to disable WhatsApp in countries that ban E2EE instead of disabling E2EE.


No company is obligated to do anything. Such lack of obligation is not sufficient reason to praise companies that do the bare minimum to keep user data safe. Sure, they aren't obligated, but how on earth does that matter?


E2EE certainly is not the "bare minimum", TLS is. Maybe encryption at rest, but even that's debatable.


If E2EE is “the bare minimum”, how are there so many successful and thriving companies who don’t do it? And why are you even on HN, which doesn’t do it?


Good for Meta and their user base! It's great to see Big Tech follow suit. We've been doing this for a decade already, as only end-to-end encryption can truly protect data.

Plus, it's going to help with fighting bills like the Online Safety Bill and Chat Control when huge corporations join us; so bottom line: great news!


Possibly good for Meta (though if they were forced to do this by law, that means they did not want to do it themselves, which by definition makes it "not good for Meta").

Certainly not good for their user base, as (as many pointed out) it's not safe if the clients are all closed source. This promotes a false sense of security, which is worse than an understood lack of security.


Agreed, should have phrased this more carefully!


What Big Tech really wants is metadata. Metadata (and which images are sent) isn't encrypted. So this is E2EE minus what Meta wants to see. If one cares about privacy, one cares about metadata. Access to metadata equals poor privacy; this is fluff encryption at best.


Absolutely right, and for that reason most tech-savvy people will still not trust Meta with their data. But that they start encrypting end-to-end is a good thing, regardless.


I wish they didn't do this.

They already have an end-to-end encrypted messaging application: it's called WhatsApp. I have seen so many people (and have myself been) bitten by WhatsApp's E2E implementation: messages lost because your phone was barely online and you "read" the message but didn't fully receive it, leaving you to awkwardly ask people to re-send things. Plus the constant need to back up your messages, because if you don't, you can lose access to them forever. Plenty of my family have lost messages/images that were sent to them and were important to them.

I'd rather not deal with this. Sometimes I want all my messages to be stored on a big company's servers. They should at least give people the option to choose.


Apparently in FB Messenger, conversations will still be stored in the cloud, albeit encrypted:

> Messenger has always allowed clients to operate off of a small stored local cache, relying on a server-side database for their message history. Neither WhatsApp nor Secret Conversations operated in this manner, and we didn’t want all users to have to rely on a device-side storage system. Instead, we designed an entirely new encrypted storage system called Labyrinth, with ciphertexts uploaded to our servers and loaded on-demand by clients, while operating in a multi-device manner and supporting key rotation when clients are removed.
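
So the servers only ever hold opaque blobs; conceptually something like this (a rough sketch, NOT the actual Labyrinth scheme; Fernet stands in for its crypto):

    from cryptography.fernet import Fernet

    # Rough conceptual sketch of encrypted server-side history.
    storage_key = Fernet(Fernet.generate_key())  # held on-device, never uploaded
    server_db = {}                               # what Meta's servers would hold

    def upload(msg_id: str, plaintext: bytes):
        server_db[msg_id] = storage_key.encrypt(plaintext)   # opaque ciphertext

    def load_on_demand(msg_id: str) -> bytes:
        return storage_key.decrypt(server_db[msg_id])        # decrypt client-side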


The big company does not want to store your messages, as they need to deal with Chinese, Turkish, Saudi, and other users whose messages someone wants to read. If there is "nothing to read" (let's forget about metadata), then governments and such should abuse the system less.


Doesn't matter; if they want to keep operating in such countries, they've got several options to choose from, among them providing a backdoor or disabling E2EE.


Sometimes I feel like these are the “plastic-free bag” solution[0] designed to make the market embrace the “tried and true” old way.

[0] There was this chip bag from a major chip manufacturer and it proudly claimed the bag was plastic free (might even have been compostable), but it was the thinnest, loudest, crinkliest bag you’d ever heard. It seemed like the chip manufacturer board meeting went like this: “The people want us to cut out plastic! For the environment or something. Don’t they know how much easier and more profitable plastic bags are?? You know what? If they want plastic-free we’ll give them plastic-free... We’ll make them regret even asking...”

That bag didn’t last long before it vanished never to be seen again.


Honestly, this seems like the sort of thing that they just didn't consider when testing. Maybe they even saw some 'Bag is unusually loud' notes and thought 'How bad could it possibly be?' and greenlit it. Feels more like incompetent bureaucracy (which I'm sure we all understand) than something malicious.


Much more likely they had a strict time limit (internally or externally imposed) on what they considered biodegradable. If you make it stronger but thinner, bacteria can break it up much faster which leads to faster breakdown. By comparison the bag being loud seems pretty irrelevant if it's just not possible to make a thicker bag degrade faster.

Also, many biodegradable plastics are more brittle than more common plastics. They only way to keep the bag flexible in that case is to make it thinner.


Any sufficiently advanced incompetence is indistinguishable from malice.


I’m not trying to prove that there was malice in the case of the chip bag, I’m just using it as a way to explain the tactic.


> We are beginning to upgrade people’s personal conversations on Messenger to use end-to-end encryption (E2EE) by default

The first line of the article suggests that it's an option


> They should at least give people the option to choose.

Messenger does. You can have a normal chat and a private, end-to-end encrypted chat at the same time with the same person, both completely separate.


Possibly related: Meta is removing cross-platform chats between Instagram and Messenger [1].

[1]: https://news.ycombinator.com/item?id=38528306


Interesting, I thought the plan was to finally make all three Meta messengers interoperable with end-to-end encryption.

I wonder what changed. Maybe testing showed that people actually prefer them to be separate?


That was an insane idea that was only put forward as a hedge against someone breaking them up.


IMO maybe it's because of EU regulations.


What are the implications?


Daily active users are going to grow on both platforms.

Let's consider you have Instagram users on one side and Facebook users on the other.

As long as you have at least one contact using only Facebook (like your parents), then you have to be active on both platforms in order to talk to your contacts.

If the two platforms were unified, then you could be active only on Instagram, for example.


Or you would leave one of them.


> Message contents are authentically and securely transmitted between your devices and those of the people you’re talking to. This is, perhaps, the primary goal of E2EE, and is where much E2EE research and design work is targeted, such as the Signal protocol we use in our products (such as WhatsApp, Messenger, and Instagram Direct), or the IETF’s Messaging Layer Security protocol, which we helped to design and was recently standardized.

Will Messenger eventually use IETF MLS?


I imagine somewhere in the planning stages is complying with the DMA by adopting MLS and allowing interoperability between WhatsApp and FB/Insta Messenger and other services.


Messenger/Facebook and Instagram interoperability is apparently being discontinued: https://help.instagram.com/654906392080948


oh wow, so will E2EE apply to Instagram too? or just Messenger?

Edit: nvm just read the last paragraph of the post


By now we know none of this means anything if a notification is triggered with the message content.


That's not true. Both iOS and Android support sending encrypted notifications that are decrypted on-device.

How do you think Signal notifications work?


Note that Signal only uses push notifications to wake the app up. Then it directly connects to the Signal service to receive messages.


We are talking about WhatsApp, no?


It's strange that your comment is not at the top.

I looked at the whitepaper and they didn't even mention it.

The reason e2e encryption got enabled is to make people feel safe.


This comes at the same time as they've announced they're getting rid of encrypting their outgoing emails with PGP (if you had added your public key, of course)!

I was always very impressed by this-- every service that sends emails should support this. Even banks don't!


This is because hackers were using this to lock people out of their own accounts. They would add a PGP key and then the user could no longer read any emails from FB to recover their account. There are maybe alternative solutions, but it's not a bad reason to remove it IMO.


This seems like a pretty dumb reason.

If they can set the PGP key they can also change the email. If the account recovery team allows access to recently removed emails as part of the recovery process then it should also allow contacting those addresses without a recently added PGP key.

Logically adding a PGP key is equivalent to changing the email, the previous person can't access the messages anymore. If the recovery process handles these cases differently it is a flaw in the process.


Regular people talk about illegal things on Messenger all the time, and if law enforcement gets addicted to sweet data requests, users will flee elsewhere.

E2E messaging is a better product. Offering it free (at a loss) makes it hard to beat.

What's more, some major govts may be prompted to ban E2E, so they keep their data and kill Signal and others, while they are the 'good guys'.

FB can probably create a 99% accurate ad profile on you with just metadata, likes and tracking you on the web. If not they can push local profiling models on your phone.

With all that said, I still think it is a genuinely good thing for humanity as it is now, and I am cautiously optimistic.


I don't know how Meta will benefit from this (perhaps they are protecting themselves from upcoming regulations in the EU). The important question is who owns the key; if they own the key, this means nothing in terms of protection against their own usage.

Even if they don't have the key, they don't even care about the messages themselves anymore: 1. it's a risky business, and regulations might hit you hard; 2. metadata is good enough for them; 3. they own the client, so they can extract more than enough useful data in the end.

E2EE is an important marketing trick nowadays; most users see it as if it makes them completely anonymous to companies like Meta. After all, ads are their only source of revenue. They will do whatever it takes to satisfy advertisers, not the users.



Just to be clear though, Messenger is still closed-source, so this all still gets lumped into the "source: trust us" bucket, no?


I think practically the best thing you can have is independent audits. Ideally multiple of them, over time. This is the same for open source and proprietary stuff. Otherwise, even if the code is not malicious and not backdoored, there's still no guarantee that it's not accidentally buggy.

That doesn't prevent a malicious update from coming around and just sending the entire database wherever, but nothing stops that from, say, Element, if you're not actively vetting the updates. The best you can really do is hope that nobody compromises it (or that if somebody does, it gets caught as early as possible). Thankfully it seems like outright compromises to this degree are rare (as far as we know), whether the software is open source or closed source.

Basically imo it's a mixed bag. I don't see any obvious way to push the status quo vastly far forward because there's no way to really prove, especially to non-technical users who aren't cryptographers and programmers, that the software is 1. secure 2. doing what it says.


There's no process for verifying that a particular binary is built from known source code, or that the source code lacks any sneaky back doors.

The gold standard is and probably always will be analyzing the binary itself, with disassemblers, debuggers, etc.


Or reproducible builds to prove that the app I downloaded from the Apple walled garden matches the one I built myself from this known-good source code.


It's not even possible to extract the executable without jailbreaking


That’s a good point, but in theory only one researcher needs to confirm that an executable from a jailbroken app contains a build that’s consistent (or inconsistent!) with the published source code. We don’t all need to do it.


In theory they can force apple to serve a backdoored version to a particular region/person, which means one confirmation from a random security researcher isn't enough.


You wouldn't actually need to extract the executable for reproducible builds to be useful.

You could also just have the ability for your phone to reliably tell you the hash of the executable, without giving you the executable itself.
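
A minimal sketch of what that could look like, assuming a hypothetical OS API that reports the installed binary's hash (no such API exists today, which is the whole problem); names are illustrative:

    // Compare a platform-reported app hash against one computed from
    // your own reproducible build. Assumes Node >= 18 with @types/node.
    import { createHash } from "node:crypto";
    import { readFile } from "node:fs/promises";

    async function sha256Hex(path: string): Promise<string> {
      const bytes = await readFile(path);
      return createHash("sha256").update(bytes).digest("hex");
    }

    // reportedHash would come from the hypothetical trusted OS API.
    async function verifyBuild(localBuildPath: string, reportedHash: string) {
      const localHash = await sha256Hex(localBuildPath);
      console.log(localHash === reportedHash
        ? "Installed app matches the reproducible build."
        : "Mismatch: installed binary differs from the known-good source.");
    }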


Why would you trust the hash function if you don't trust the rest of the platform?


This assumes you trust e.g. Apple (to a certain degree, e.g. to have their hardware provide legitimate hashes, but not, e.g., to just run a messaging service), but you want to avoid also having to trust Meta.

More generically: you might trust that a company can do the Right Thing now (or at most points in time), but you avoid having to trust that they always do the right thing at all points in time.

See how Apple famously could refuse to give law enforcement access to people's phones, because Apple deliberately designed their systems in such a way to remove that ability from their future selves.

Similarly, a company that doesn't keep any logs, can't be forced in the future to divulge those logs.

They can be forced to start keeping logs, and then be forced to divulge those. But that doesn't work retroactively, and it's still one extra hoop for the forcing party to jump through. And perhaps you can even set up matters such that adding this vulnerability can't easily be done in private.


Ah, yes, I was mixing up iMessage and Messenger here (too much messaging encryption news these days!) – for the case of trusting your OS and hardware vendor, but not a third-party messenger's vendor, reproducible builds would indeed be advantageous.

It's a real shame the app store does not allow for reproducible builds.


Pretty much all security is trust based. Any product you pick you trust the vendor not to fuck up or be corrupted.

You can argue we should then just use open source. OK, but is it that much of a security guarantee? Open source products are of limited functionality. It is great for web servers and frameworks, stuff developers care about; when it comes to feature-rich client applications the track record is not nearly as good. Also, who is going to pay the server costs?


If there are no security audits by disinterested third parties, then it can clearly not be trusted. If they want trust and can provide it, they likely would have done this. Have they?


Sure but you then have to trust the auditors. It all ends up in trust.


Sure, trust in a third party that has a good reputation versus the company itself that says "trust us". Big difference.


It's slightly better, because lying about this would be securities fraud for the company.


My personal security audit is to look at what a company returns when compelled to do so by law enforcement.

A passing grade is a block of garbled encrypted mess for which they have no unlock key.


Yes this is 0% trustworthy


So the client basically transmits the messages twice: once to relay to recipients via E2EE à la Signal, which specifically prevents the decryption of historical messages (forward secrecy), and once to the storage backend using a different E2EE approach that allows the recovery of history (Labyrinth, via epoch segmentation and back-chaining of secrets).
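
For anyone wondering what "back-chaining of secrets" might mean mechanically, here's a toy sketch (heavily simplified relative to what the Labyrinth whitepaper actually specifies; all names are illustrative): when an epoch rotates, the old epoch key is wrapped under the new one and stored server-side, so a device holding only the newest key can walk backwards through history, while an old key reveals nothing about newer epochs.

    // Toy back-chaining sketch in TypeScript (Node >= 18), NOT the real
    // Labyrinth construction.
    import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

    type WrappedKey = { iv: Buffer; ct: Buffer; tag: Buffer };

    // Encrypt ("wrap") the old epoch key under the new epoch key.
    function wrap(oldKey: Buffer, newKey: Buffer): WrappedKey {
      const iv = randomBytes(12);
      const c = createCipheriv("aes-256-gcm", newKey, iv);
      const ct = Buffer.concat([c.update(oldKey), c.final()]);
      return { iv, ct, tag: c.getAuthTag() };
    }

    function unwrap(w: WrappedKey, newKey: Buffer): Buffer {
      const d = createDecipheriv("aes-256-gcm", newKey, w.iv);
      d.setAuthTag(w.tag);
      return Buffer.concat([d.update(w.ct), d.final()]);
    }

    const epoch1 = randomBytes(32);
    const epoch2 = randomBytes(32);     // fresh key each epoch
    const link = wrap(epoch1, epoch2);  // stored server-side, opaque to the server

    // A device given only epoch2 can still recover epoch1, and thus history:
    const recovered = unwrap(link, epoch2);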


WhatsApp is end-to-end encrypted, but all messages have to be stored locally, which is a problem on cheap smartphones. Telegram can work on a cheap smartphone by storing most messages server-side, but it's not end-to-end encrypted by default. Would be great to get the best of both worlds with Messenger, and see the other messaging apps follow suit.


>but all messages have to be stored locally, which is a problem on cheap smartphones

Why is that a problem for cheap phones?


Because if you send and/or receive a lot of messages with large attachments (pictures and videos), it will eat up a lot of your storage (it can be gigabytes), and if you use a smartphone with only 64 GB of storage, that can quickly become an issue, and then you have to decide what to delete.


That's a non-issue for most cheap phones from the last few years, even sub-200 Euro phones, as most ship with at least 128GB as base storage. My OnePlus 3T came with 128 GB of base storage. In 2016! 7 years ago.

Hell, on Amazon right now I can find a brand new 99 Euro 'Chinese Brand' Android phone with 128GB of storage and 8GB of RAM. Granted, I wouldn't recommend anyone actually go and buy that one, but it shows even if you're tight on cash and need a phone with lots of storage you can get it even on rock bottom prices.

Apple is the only one left that shortchanges you in 2023 with 64GB base storage even on 500+ Euro phones, but that's an Apple-only problem, not a smartphone problem.

Still, I'd much rather pay a bit extra for more storage on a phone to keep my encrypted messages locally than in the cloud of some shady app like Telegram that's "FREE" and yet needs to finance its massive cloud bills somehow.


iPhones have had 128GB base storage since last year. 2021 was the last time a 64GB phone was made by Apple.


Only if you choose to ignore the iPhone SE, the cheapest iPhone currently on sale by Apple.


I had end-to-end security on Facebook Messenger about ten years ago, when I was able to connect with Pidgin (iirc) and use the OTR plugin.

That end-to-end security also didn't rely on Facebook.

Not sure how this could work nowadays; I closed my Facebook profile many years ago.


What's interesting to me is that they never quite explicitly claim that they (Meta) can't access the contents of the "E2EE" messages. All the weasely language they use around this makes me think that they in fact can do so.


How can a user like me actually verify this is the case? All we see before and after are encrypted byte streams? Do we have access to our key pair?


Progress that is highly welcome, to be sure. But what about Trust?

In a world of continuous data breaches, exfiltrations, malicious Three-Letter Agencies, and incompetent, decades-behind-the-curve legislators, trust is a paramount factor. Trust that a given communication system's first allegiance is to the interlocutors(A), trust that my data won't be subjected to surveillance capitalism, trust that it won't be strip-mined to reflect advertising right back at me for useless shit I don't want, trust that I can speak my mind without needing to continuously look over my shoulder with one eye peeled for power-center goons sicced on me by partisan logic.

If you want me to trust your system, show me the complete source code, show me the disinterested third-party security reviews, show me what can happen at the ends, show me that it's not compromised by secrecy laws, along the full chain of custody.

A problem for our age, one step at a time ..

A) See? Even this assumption is a cardinal mistake!

Edit: Trust, but verify. Trust needs to be earned.


> If you want me to trust your system, show me the complete source code

I think this is a bit extreme and not really plausible for something like messenger at its scale.


>If you want me to trust your system, show me the complete source code

Sounds like you set yourself up to never trust anything.

I mean, do you fly on airplanes without having inspected the flight code? Or do you put your money in a bank without having inspected the accounting code?


I don't think it's necessary for everyone to fully review the complete source code themselves. But having it available for applications at a serious enough scale would allow the community at large to prove the vendor's claims about secure encryption. And at Facebook's scale, I would be satisfied that I'd hear about it if the encryption turned out to be a lie.


Airlines and banks do have to prove compliance with formal standards via audit to operate. These audits often require revealing some code to regulators under NDA. So our trust in them stands on sturdier ground compared to the offerings from the big tech companies.

I don't need to see the code for it to fail my audit though:

- Phone number attached to real identity is required

- Metadata is not e2e

- Contact list is not e2e


Even if you did, how would you know that the airline or bank is running a binary generated from that code? Would you also need to check the compiler? How do you know which compiler was used?


These examples do not work. If a plane failed, everyone would know it immediately. If money disappeared from my account, at least I would know it immediately. The problem with privacy is that if it's broken, I wouldn't know it. So we have to spend our whole lives in blind trust, and it's insufferable.


> But what about Trust

I finally took off my pink-tinted glasses when I noticed they restored old deleted messages once they released the new Messenger, the one we have today, which replaced the old Facebook chat feature.

Probably it was for my "convenience", but what do I know /s.


> Our aim is to ensure that everyone’s personal messages on Messenger can only be accessed by the sender and the intended recipients

Don't forget the authorities in the mass surveillance industry.


This has to do with the industry-wide push to RCS over SMS, to help with the third-party doctrine. And in FB's case, to unify the US (Messenger) with rest-of-the-world (WhatsApp) audiences.


I don't understand how conceptually they can do e2ee history of chat.

If any mobile phone that logs in can query and read the history, why can't the server? What's the trick?



So practically it means just logging in on a different device is not enough to get history; you have another login mechanism for that? (A recovery code?)


I am curious why it was very difficult for Messenger to implement E2EE by default. iMessage has been doing it* (with a giant asterisk, because there are flaws in the way Apple markets its E2E on iMessage, but in general it has something there). The implementation they're describing on the blog seems very similar to iMessage (keys stored on a server so that syncing is possible).

Aside from iMessage, they pretty much have most of this working in WhatsApp from the perspective of the user. The challenges they've mentioned seem like they've mostly been solved in WhatsApp? I could be totally naive here though.


Enabling E2EE has historically required usability tradeoffs (no multi-device, no backups, chat transcripts are stored on device only, etc). It took them this long to make it seamless.


But WhatsApp has had it for a while, no? Or at least I think it's a solved problem looking at both WhatsApp and iMessage


Yah, I'm constantly amazed by software that can restore my message history after I lose my private key.


Maybe read the papers then?


I promise, tomorrow when I have a little time, I will venture into the labyrinth and see what is to be learned.


If the keys are stored on the server, how is that a private message?


Protonmail holds your key encrypted on the server. The key can be encrypted and decrypted with a PIN or a password.


Does Proton (or Meta) see that PIN or password, or is it client-side only (as far as we know)?


Client side. The client downloads the encrypted key backup and decrypts it using user-entered pin/password.
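
A rough sketch of that flow, assuming PBKDF2 + AES-GCM via WebCrypto (the actual Proton/Meta parameters and formats will differ; everything here is illustrative):

    // The server only ever stores the key *encrypted*; the decryption key
    // is derived client-side from the user's password or PIN.
    // Works in browsers and Node >= 18 (global crypto.subtle).
    async function decryptKeyBackup(
      password: string,
      salt: Uint8Array,           // stored alongside the backup
      iv: Uint8Array,             // likewise
      encryptedKey: ArrayBuffer   // what the server actually holds
    ): Promise<ArrayBuffer> {
      const material = await crypto.subtle.importKey(
        "raw", new TextEncoder().encode(password), "PBKDF2", false, ["deriveKey"]
      );
      const kek = await crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-256" },
        material,
        { name: "AES-GCM", length: 256 },
        false,
        ["decrypt"]
      );
      // The plaintext key only ever exists in client memory.
      return crypto.subtle.decrypt({ name: "AES-GCM", iv }, kek, encryptedKey);
    }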


Will Facebook continue to support mbasic for messaging? This is one of the things I fear will be dropped when end-to-end security is added.


But the metadata still shows who is communicating with whom, right?


There is literally no security standard, papers, or explanations that Facebook could offer that would compel me to trust anything they say or make. It may as well be a messenger app branded by the CIA telling me about its new privacy features.


Because WhatsApp is their product they’ve messed up the least.


The input field is not E2EE, the text rendering is not E2EE. Plenty of opportunity to look at your fine messages in plain text.

Impressive theater though.


Doesn't this mean a bunch of features that run on the server have to be removed, like searching message history - which I use all the time?


Usually yes, but check out their Labyrinth implementation which might mean the answer for most features is no.


I just read the white paper, and I haven't seen anything about search, which would require some kind of homomorphic encryption, but I think that's pretty much an unsolved problem: for example, to be able to search message content in Proton Mail, you need to enable the creation of a local search index — it's not done server-side, which means all the messages have to be downloaded locally to be indexed.
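
For illustration, the local-index approach boils down to something like this (a toy sketch; real implementations would also encrypt the index at rest):

    // Client-side search over E2EE content: decrypt locally, then build
    // a simple inverted index. The server only ever sees ciphertext.
    type MessageId = string;

    class LocalSearchIndex {
      private index = new Map<string, Set<MessageId>>();

      add(id: MessageId, plaintext: string): void {
        for (const token of plaintext.toLowerCase().split(/\W+/)) {
          if (!token) continue;
          if (!this.index.has(token)) this.index.set(token, new Set());
          this.index.get(token)!.add(id);
        }
      }

      search(term: string): MessageId[] {
        return [...(this.index.get(term.toLowerCase()) ?? [])];
      }
    }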


Search works fine on WhatsApp, not sure whether it's client / server though.


Because everything is stored locally.


By default yes but you can always disable it


E2EE won't be disable-able


You meant EE2E, a more favorable privacy form than E2EE.


What?


ahhh, that's why it forced me to set a PIN! I wonder what percentage of people will forget theirs


If it's a 6-digit PIN, computers are going to find it easy to break.


This can somewhat plausibly be done in a relatively secure way using HSMs: https://engineering.fb.com/2021/09/10/security/whatsapp-e2ee...

That approach puts a lot of trust in the HSM vendor, though.
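
The gist of the HSM approach in the linked post, reduced to a toy sketch (real HSM firmware is nothing this simple; the names and the attempt limit are illustrative): the HSM releases the wrapped key only for a correct PIN proof and locks the record after too many failures, which is what makes a 6-digit PIN survivable.

    class HsmRecord {
      private attempts = 0;
      constructor(
        private readonly pinHash: string,       // output of a memory-hard KDF
        private readonly wrappedKey: Uint8Array,
        private readonly maxAttempts = 10
      ) {}

      tryUnlock(candidatePinHash: string): Uint8Array | null {
        if (this.attempts >= this.maxAttempts) {
          throw new Error("Record permanently locked");
        }
        if (candidatePinHash !== this.pinHash) {
          this.attempts += 1;  // counter lives inside the HSM, not on the server
          return null;
        }
        this.attempts = 0;
        return this.wrappedKey; // client unwraps the real key with this
      }
    }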


This is great. I feel like E2EE has slowly fallen out of focus in recent years as the tech has stabilized, but important developments like this and the MLS standardization still continue to happen.

One specific area where I'd love to see more focus and attention is the web as a platform for E2EE applications. Currently, because of the inherent problems related to application delivery and trust relationships on the web, every step forward in E2EE adoption is a step away from webapps being first-class citizens -- even as PWAs keep becoming more viable for a wider range of use-cases otherwise. Even though an increasing number of companies maintain web implementations of their E2EE apps, these are always the fallback option when nothing "better" is available; the tech to make E2EE secure in webapps doesn't exist yet, but companies also have unrelated incentives to push users to native apps. There are no serious efforts to remedy the situation and develop tech that would make it possible to deliver secure E2EE through the web.

The post mentions a couple of relevant goals:

> 3. Control over endpoints

> 8. Third-party scrutiny

They also mention the Code Verify extension[1], which may seem like a solution, but does not stand up to scrutiny: it only notifies the user of unexpected changes in the app, but does not prevent them. The detection logic it implements also seems trivially bypassable, and in more ways than one. Even if it did sufficiently enforce application integrity, an extension like Code Verify is unlikely to ever become widely-adopted enough to make a dent. And of course it's not even available in all browsers on all host platforms.

There are also other similar extensions that suffer from similar shortcomings.

Browser vendors could solve the problem by providing APIs that allow the kind of integrity enforcement needed, akin to SRI[2], but that would mean you first have to agree on a standard and then implement it consistently everywhere and then webapps could slowly start adopting it. And because of past failures like HPKP[3], browser vendors would probably be hesitant to even start considering anything like it.

I believe a solution is possible using only the currently available web APIs, however, and for the past few months I've been prototyping something that's now at a stage where I can call it functional. The general idea is that using service worker APIs and a little bit of cryptography, a server and a client application can mutually agree to harden the application instance in a way that the server can no longer push new updates to it. After that, the client application can be inspected manually with no risk of it changing unannounced, and new versions of the app can be delivered in a controlled way. While my prototype is nowhere near production-grade at this point, it's nearing a stage where I'll be able to publish it for public scrutiny and fully validate the concept. Until then I'll be implementing tests and examples, documenting the API and threat model, and smoothing out the rough parts of the code.

If anyone's interested in collaborating on this or just hearing more details, feel free to reach out. I'd love some early feedback before going fully public.

[1] https://engineering.fb.com/2022/03/10/security/code-verify/

[2] https://developer.mozilla.org/en-US/docs/Web/Security/Subres...

[3] https://developer.mozilla.org/en-US/docs/Glossary/HPKP


I've actually been thinking quite a bit about this very issue. As it stands, it's not really possible to do E2E encryption on the web in a secure way, since the server can always just silently update the client side code for a particular user to steal any encrypted data. I'm kind of curious about what you're doing with service workers to lock the server out of being able to update its own client side application. That sounds almost like a bug.

My ideal solution to this problem would be Web Bundles[1] signed by the server's TLS key[2], combined with Binary Transparency[3] to make targeted attacks impossible to hide (and maybe independent Static Analysis[4] to make attacks impossible to carry out in the first place), but work on many of those standards seems to have died out in the last few years.

[1]: https://wpack-wg.github.io/bundled-responses/draft-ietf-wpac...

[2]: https://datatracker.ietf.org/doc/html/draft-yasskin-http-ori...

[3]: https://datatracker.ietf.org/doc/html/draft-yasskin-http-ori...

[4]: https://datatracker.ietf.org/doc/html/draft-yasskin-http-ori...


I've looked at web bundles and a variety of other solutions myself, but the service worker approach feels like a winner so far. There's no magic, nor any bug being abused, but the client does have to trust the server to behave nicely during initial setup. After the initial setup is done, the client never has to trust the server again as long as the browser's local storage isn't purged manually; so if the server is compromised after the initial setup, the compromised server cannot compromise established clients. It's not perfect, there's still the need for initial point-in-time trust, but it's still a significant improvement on the standard way of serving webapps where a server can compromise any client at any time.

The way it works is the server returns a unique service worker script every time, and the script file itself contains an AES key. The user trusts the server not to store this key and the server never sees it again. This AES key is then used to encrypt all persisted local state and sign all cached source files. If the server replaces the service worker, the key is lost and local state cannot be accessed. If the server somehow replaces a source file, its integrity check will fail and the webapp will refuse to load it. If the server manages to skip the service worker and serve a malicious file directly (e.g. because the user did Shift+F5), the malicious file won't have access to any local state because the service worker will refuse to give it access. The server can destroy all local state and then serve a malicious application, but the user will immediately notice, hopefully before interacting with the app, because suddenly all their data is gone.
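
Condensed into code, the integrity-check part might look roughly like this (my guess at the mechanics based only on the description above; WebCrypto HMAC stands in for "signing", and all names are illustrative):

    // Inside the service worker: refuse to serve any cached source file
    // whose MAC doesn't verify against the install-time secret.
    const EMBEDDED_SECRET = "<unique-per-install, never re-sent>";

    async function hmacKey(): Promise<CryptoKey> {
      return crypto.subtle.importKey(
        "raw", new TextEncoder().encode(EMBEDDED_SECRET),
        { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"]
      );
    }

    async function verifiedResponse(cached: Response, mac: ArrayBuffer): Promise<Response> {
      const body = await cached.clone().arrayBuffer();
      const ok = await crypto.subtle.verify("HMAC", await hmacKey(), mac, body);
      if (!ok) throw new Error("Tampered source file; refusing to load");
      return cached;
    }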


That's really clever! Fixes the "silently" part at least, though given that most applications typically require frequent updates and that this doesn't prevent targeted attacks, I'm not sure how useful it is in practice, at least for mainstream applications.

Signed web bundles with binary transparency and independent review would be far superior, if they actually existed. (Which sadly, they don't right now.)


Thanks! Automatic updates are still possible; you can implement a code signing-based flow on top of this, or fetch hashes from GitHub releases, or anything, really. Attacks are only possible during setup, and targeting at that point in time is difficult because the client won't have authenticated yet. Anything else (attacks that rely on clearing the local state) can be mitigated using careful UI design.


The big problem with transparency logs is that they can't prevent attacks in real time because of the merge delay. You'll only find out afterwards if you've been attacked. It significantly raises the bar for an attack, but can't stop one from happening.


This is HORSE SHIT, and I will tell you why:

Was the source code released? Can I build Facebook Messenger with a private CI/CD pipeline running on my own silicon? If not, there is no encryption worth a damn. You have to be very, very stupid to believe otherwise.

What Facebook is doing here is deploying a massive honeypot.


Can you still see the metadata (sender, receiver, and date/time)?


can FB be trusted? Is there a catch?


How annoyed I am by these attempts to name-squat a whole category of technology (or other things) ... "Messenger". How idiotic on one hand and how insidious on the other hand that naming is. Just like MS "Teams".


> 1. Only the sender and recipients of an E2EE message can see its contents.

> 2. Nobody (not even Meta) should be able to forge messages to appear to have been sent from someone they weren’t.

From a business perspective, it makes perfect sense. Users want security and Meta don't want to be responsible for the data communicated.

One question I have is how Meta will comply with UK law on E2E [1]:

> Meta has been a leading industry player in the fight to tackle child sexual abuse. For over a decade, Meta has utilised hash matching technologies to enable it to detect child sexual abuse material being shared on its platforms. This has made it one of the leaders in detecting and reporting online child sexual abuse, providing law enforcement with leads to safeguard children and arrest child sex offenders.

> However, Meta and other companies are now planning to implement E2EE, without similar technologies in place, across their messaging platforms such as Facebook Messenger and Instagram Direct Messages. The roll out of E2EE is likely to happen later this year. The National Center for Missing and Exploited Children (NCMEC) estimate up to 70% of Meta referrals could be lost following the roll-out of end-to-end encryption.

Firstly, I would like to know what the UK government does with all of these referrals. Given the current state of UK policing, I predict it's almost nothing. Local government was itself complicit in child exploitation [2].

It appears this E2E implementation may have some form of backdoor anyway [1]:

> The Safety Tech Challenge Fund is a UK government funded challenge programme that first ran from 2021 to 2022. The fund was designed to support the development of proof-of-concept tools to detect child sexual abuse material across E2EE environments, whilst upholding user privacy. The fund demonstrated that it would be technically feasible.

> It is recognised though that each and every online social media platform and service is different, and therefore solutions will need to be tailored. Therefore, companies such as Meta should utilise their vast expertise and engineering resources and build on the outputs of this fund and develop solutions for their individual platforms/services.

> In addition, some of the UK’s leading cryptographers have written an academic paper outlining a variety of techniques that could be used as part of any potential solution in E2EE to provide both user privacy and security, while protecting child safety and enabling law enforcement action.

"The fund demonstrated that it would be technically feasible" - you can't leak information from a message without leaking information. The same method used to detect harmful content could also reveal information about the messages. For example, if an E2E message was delivered with hashes of images inside the message, it could also be used to detect political decent memes.

[1] https://www.gov.uk/government/publications/end-to-end-encryp...

[2] https://en.wikipedia.org/wiki/Rotherham_child_sexual_exploit...


Where do I find my private key?


well, first, which one? and second, somewhere in your Android or iOS keystore


Link?


Android developer docs and the whitepapers linked in this post


My account


Dear Facebook. Still. I don't trust you.


More discussion (different URL, similar content) of "Launching Default End-to-End Encryption on Messenger"[0] (59 points, 3 hours ago, 55 comments)

[0]: https://news.ycombinator.com/item?id=38551993


Thanks - although that one was posted earlier, the OP here seems to have the more informative article, so I guess we'll merge those comments hither.


Here's a hard problem that I would like the world's highest paid people to solve:

Give us an app where everything, including metadata, is end-to-end encrypted and works at "web scale". No one other than the people in the conversation can know who is talking to who and when. Then figure out a way to pay the bills to run the infrastructure and pay the people working on the app and also make a profit. (A novel idea: Maybe charge people to use the app?)


An even harder problem is getting the average user to care about things like end-to-end encryption, let alone getting them to pay for it. That's why services like Signal are not in the majority.


Give it a decade or two. The amount of things we store digitally, and the risk of exposing or losing them, rises day by day, and with it the number of people who have been scammed, had their crypto stolen, or had a bully share their explicit images, etc., not to speak of scandals like Meta's. Not long ago people didn't care about their homes being energy efficient either.


The everything-encrypted part is a hard one, but it should be doable so long as the incentives are right. For example, the money needs to come from users paying for the product and for keeping it fully encrypted. It needs to be clear that not only is there no incentive to sell data or mine for ads, but that doing so is a disincentive because it would actually lead to customer loss.

If I could raise enough money to replace my dayjob, I would totally do this. I'm guessing it's not doable, but I put together a quick Google Form to gauge interest. If anyone would be interested, please submit the form[1]

[1] https://docs.google.com/forms/d/e/1FAIpQLSe1sl5MI1Mxna6RBTXB...


Doesn't Signal do all of this, minus the profit?

> Maybe charge people to use the app?

Who will pay for a service that's offered for free by at least 3 major messaging apps?


I pay for Signal. I used to pay for WhatsApp, before it was acquired. Not many users are like me, granted.


I'm not aware of any way to use Signal anonymously in any meaningful way, but there might be one.


Anonymity isn't necessary. e.g. Briar isn't anonymous (you still need to trust your contacts), but it's essentially impossible for third-parties to track who is talking to who and when.


Use it via a VPN and with a telephone number that is not associated with your identity.

This is what I do.


If I had to be anonymous, I wouldn't even bother with any phone number. In fact I wouldn't even bother with Signal or WhatsApp. I'd find an app or service that is E2EE and doesn't require a phone number at all.

Because what’s even the general use case of using Signal and WhatsApp (in their current forms)?

Because then it would just be weird talking to friends and family. And for those few people you want to be totally anonymous with? Use the apps and services that are already anonymous.

Would it be awesome if Signal added that feature? Yes. But even then I would still use dedicated anonymous apps.


That's pseudonymity, not anonymity.

Signal doesn't allow creating multiple identities for different contexts (without using multiple phones).

And depending on your personal risk profile (e.g. journalists protecting their sources), phone numbers are extremely difficult to acquire anonymously in many countries. In most EU countries, you need to show photo ID to register prepaid SIM cards these days.


You don’t need to acquire a phone number from the country you’re in. Any phone number works on Signal. There’s no long distance.

In EU countries you can easily and anonymously get a US number via the internet that works fine on Signal. There is no requirement that the SIM card in your phone be the number you use to register on Signal, or that the number even be a GSM number.

My Signal number is a Google Voice number, for example. Anyone with access to a US GSM number can create these; many are for sale for anonymous payment methods.

It’s easy to use Signal anonymously in any country that has mostly uncensored internet.


> In EU countries you can easily and anonymously get a US number that works fine on Signal.

How? And even if that's feasible (for many people I'd argue it's too big a hurdle) it still only gets you pseudonymity.

There are some interesting encrypted messaging ideas floating around that allow you to have per-contact or even per-conversation keys or multiple identities for different contexts; Signal is just not doing much in that regard.


It is completely anonymous, there is no link from the telephone number used to register to the person using it.


Yes, but you only have one number/identifier that links you across all contexts. That’s a pseudonym, which is a different thing from being anonymous.

You’d need multiple phones and a phone number per context to be actually anonymous. That’s just not practicable for many people, I’d say.


Switch to using Session instead.


A non-free messenger app seems pretty challenging because

* there’s lots of free competition, including Signal, which is free and decent.

* you’d probably need to open source it if you want to convince anyone it was trustworthy

* what if you want to message somebody that doesn’t want to pay?

Of course maybe a huge company could work this out… but it seems fundamentally very hard


Briar gets most of the way. And Cwtch gets all the way (except maybe paying the people working on it).

Any sort of project that actively tries to minimize data collection will not be profitable. At least not in this current economic system. They will have to be non-profit (Signal, etc), or run by volunteers (Cwtch, Briar).


Who wants this? All the effort for a few thousand users paying 20 USD/year doesn't make much profit. The world is happy with the encryption WhatsApp provides, and would have been happy even if it wasn't there.


Also iMessage. 135M users in the US.


It also isn't really encrypted. The really juicy bits (the metadata) are available for Apple to read as it sees fit.


It's not possible to have a "web scale" system that end-to-end encrypts metadata, because part of the metadata is knowing the recipient.

If the network or server has no idea where to deliver an encrypted message, then the only way to guarantee delivery is to send all the messages to all the users to unpack to see which ones are relevant to them, which fails "web scale"


What if all inboxes were public, but only decryptable by the recipient? Recipient can then poll for new messages.
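
Sketched out, the model would be something like this (illustrative names; assume tryDecrypt hides an ECDH + AEAD scheme underneath):

    // Every message is posted to one shared public log, encrypted to its
    // recipient; clients poll the log and trial-decrypt everything,
    // keeping only what opens. Nobody learns the routing.
    type SealedMessage = { blob: Uint8Array };

    async function pollInbox(
      log: SealedMessage[],
      tryDecrypt: (m: SealedMessage) => Promise<string | null>
    ): Promise<string[]> {
      const mine: string[] = [];
      for (const msg of log) {
        const plaintext = await tryDecrypt(msg); // null if not addressed to us
        if (plaintext !== null) mine.push(plaintext);
      }
      return mine;
    }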


Only the recipient would be able to receive/send messages from that inbox. So you can easily match an inbox to its recipient.


You wouldn't send via your inbox. And anyone would be able to download any inbox, the data would just be useless without the key.

There might be problems with the proposed model, but they aren't the problems you suggested.

(As an individual, you wouldn't want to download just your own inbox. But to obfuscate, you can download a random subset of inboxes that often-enough includes your own.)


And that's not something I want to have to do: download random inboxes on a limited phone data plan. As a polled service, how many times do you have to poll a minute? If I'm instant messaging, I could be polling multiple times a second. How many more inboxes would I have to poll at the same time to obfuscate my actual inbox?

And if you want to be anonymous you can't filter by the last message you've received, so every time you poll your inbox, you're re-downloading the last X amount of messages. So if I have multiple active group chats, that would really start to add up.

Videos and images, like on every other messaging service, would have to be anonymized the same way. Even 10x noise polling for a 10MB video would be too much data on phones, and probably not enough to be anonymous. How about obfuscating the sender? Would a sender be uploading 10x trash messages, not just text but also video, into every inbox?


If you have such a limited data plan, perhaps you shouldn't communicate with videos?


Even if I don't, any inbox chosen to randomly download could have videos.


Good point. Though you can probably use 'blocks' (think like hard-disk blocks) instead of complete mail boxes. You download random blocks that also contain blocks from your mailbox.


You can't just download random blocks until you get your data, though; you might never download some blocks. So you need some kind of index of blocks. And your client can't generate that, because the server stores all the data. So it's a server-side index.

If you only download the index for specific users, that's no different than an inbox: if you pull the index for an inbox and don't pull all the associated blocks (including videos), that's obviously not your box.

The other alternative is downloading the entire index for every single block, which sounds even worse than just downloading random complete inboxes with videos. Especially if the blocks are going to be filled with trash inboxes filled with trash data to obfuscate the sender. Even my own blocks would get trash data including videos that I have to download to pretend it's real.


Fountain codes and other tricks might help.

I.e. any block could be useful for multiple inboxes and messages. See also how Freenet used to do it. https://en.wikipedia.org/wiki/Freenet


Freenet, as a distributed data store, just makes the issue worse. Now I'm expected to host images and videos that other people send. Also, a distributed data store doesn't free you from downloading random inboxes either. Someone just needs to be running enough instances to be able to track and identify you if you're only downloading your inbox.

That's why Tor makes you bounce between multiple nodes: to decrease the chance you get only nodes that belong to a tracker. Actually, not sure why I didn't think of Tor as something that already fulfills all those anonymity requirements. It's also an example of the drawbacks of being completely anonymous: that network is extremely slow and will definitely not scale to anything mainstream.


This is very hard to get right.

Some Bitcoin SPV clients have tried solving an almost equivalent problem, but the obvious approach does not work for various subtle reasons: https://eprint.iacr.org/2014/763.pdf


Oh, it's definitely not easy to get this right properly. I just wanted to point out that things aren't as clear-cut impossible as the comment suggested.


Y'all are struggling so hard to describe newsgroups :)

Check out alt.cryptography (I think) if you can.


How would the servers know where to route the poll requests without metadata? They wouldn't.


This is doable and has been done. Briar is an example. The Tor network is used for transport and there is no central server. Even the clients don't know who is talking to whom; everything is done in terms of the cryptographic identity number. That means that even the users don't know for sure who they are talking to; as part of the introduction to someone, you have to give the name you will know them by.

The Briar way of doing things actually solves an important E2EE usability issue. Since the cryptographic identity is the only identity, it is much harder for the user to end up using the system without a verified identity.


The #1 thing that sounds impossible is: how do you tie a payment to an account anonymously? Unless you only take anonymous payments, you end up putting a credit card number on every account.

>No one other than the people in the conversation can know who is talking to who and when.

This also means the infrastructure cannot block messages. Or tell the difference between spam and legit messages. Effectively you can DDoS a client with messages.


> This also means the infrastructure cannot block messages. Or tell the difference between spam and legit messages. Effectively you can DDoS a client with messages.

Not necessarily. Eg as a simple model, assume that you 'send' a message by publishing it to something like a usenet group (and perhaps that group charges you a fraction of a cent for doing so).

There's no denial-of-service for a client that receives a lot of messages, distributed or otherwise, but outsiders still don't see who is sending what to whom.


I'd love something like the Matrix [0] data model (JSON messages aggregated in an eventually-consistent chatroom CRDT) transmitted over something like simplex for metadata resistance.

[0] https://matrix.org [1] https://simplex.chat/


It may not check all of your boxes, but the waku protocol and applications’ use of it have been evolving since 2018:

https://waku.org/

https://github.com/waku-org/nwaku



It seems really straightforward to suggest that the better way to solve this is with standard protocols and self-hosting, but I do realise that's quite hand wavy and often not very accessible.

SMTP is an example of this succeeding, as problematic as that protocol is.


> Veilid is a peer-to-peer network and application framework released by the Cult of the Dead Cow on August 11, 2023, at DEF CON 31. Described by its authors as "like Tor, but for apps", it is written in Rust, and runs on Linux, macOS, Windows, Android, iOS, and in-browser WASM. VeilidChat is a secure messaging application built on Veilid.

https://en.wikipedia.org/wiki/Veilid

> Veilid is an open-source, peer-to-peer, mobile-first, networked application framework.

> The framework is conceptually similar to IPFS and Tor, but faster and designed from the ground-up to provide all services over a privately routed network.

> The framework enables development of fully-distributed applications without a 'blockchain' or a 'transactional layer' at their base.

> The framework can be included as part of user-facing applications or run as a 'headless node' for power users who wish to help build the network.

https://veilid.com/

https://gitlab.com/veilid/veilid

https://veilid.com/discord

https://youtube.com/watch?v=Kb1lKscAMDQ

> VeilidChat is a chat application written for the Veilid distributed application platform. It has a familiar and simple interface and is designed for private, and secure person-to-person communications.

https://veilid.com/chat/

https://gitlab.com/veilid/veilidchat

Previously:

Veilid is an open-source, P2P, mobile-first, networked application framework

180 points 4 months ago 71 comments

https://news.ycombinator.com/item?id=37118124

Cult of the Dead Cow wants to save internet privacy with new encryption protocol

141 points 4 months ago 79 comments

https://news.ycombinator.com/item?id=37018404

https://gizmodo.com/cult-of-the-dead-cow-launches-veilid-enc...


Facebook and privacy are two opposite poles!


There’s probably an anti-competition angle here: with end-to-end encryption, you need a trusted central authority to distribute the public keys, which would make interoperability across different messaging services more difficult (impossible?), something that regulators have been trying to make happen.

If I owned the largest messaging networks, I would enable end-to-end encryption by default too.


If I wanted secure chat, I'd rather use Telegram.

I will trust FB only if metadata is also encrypted, their code is audited by a trusted 3rd party organization, and they can prove they are running only the audited code in production. And even then I would have some doubts.


Telegram conversations are not end-to-end encrypted. If you do opt in to their end-to-end encrypted chat feature, it only works on a 1:1 basis, and it uses a dodgy protocol. The owner of Telegram, the company, and its employees are based in some of the most undemocratic police states in the world, where you have no privacy rights. To call Telegram a secure or private messaging service is laughable.


>The owner of telegram , company and employees are based in some of the most un-democratic police states in the world

Maybe. I don't live in that country. If perfect secrecy isn't achievable, I'd rather not let domestic actors peek at my data.

>uses a dodgy protocol

What do you mean by dodgy protocol?


I knew a guy that used deep packet inspection in a service[1]. He told me that back in ~2016, when using WhatsApp, the text was encrypted but nothing else was. If you sent a picture, it was in clear text.

I don't know if this is still true, but because of this, I have serious doubts about anything security related coming out of FB.

[1] Being intentionally vague here as to not dox him.


WhatsApp was/is heavily scrutinized, and it's fairly easy to sniff your own network traffic. It's unlikely that "some guy" and no one else discovered that WhatsApp was not encrypting content.


Is it true E2EE, where the data at rest on the Facebook server remains encrypted and the keys are held only by the sender/receiver? I didn't think so.


What? That's exactly what this is.


Meanwhile too many Facebook fans are downvoting me. This should age well.


Note that most E2E encryption services still let the service read your messages. Apple, Facebook, etc. can still read your messages.

This is because encryption needs keys, but when you talk to your friend over these services, at no point did you personally exchange any keys. This means that you've decided to let the service generate and keep the keys for you.

It’s like getting a deposit box at the bank but instead of keeping your key, you give your key to the bank to keep.

What would stop the bank from opening your box would be its own ethics policy and the quality and depth of their internal processes. They are not physically prevented from bypassing them however.

It's still a W with these rudimentary E2E chat implementations, because your ISP and government might find it a lot more difficult to read, but it's not exactly super strong security.


Apple says that iMessage uses asymmetric encryption, the private key of which is stored only on device. You can choose to believe they're lying, I guess, but what they are describing is indeed possible.

https://support.apple.com/guide/security/imessage-security-o...


Until now, Apple hasn't offered any way of actually comparing which keys you are encrypting to, though. In other words, there was nothing technically stopping Apple (or somebody with the power to compel them) from adding additional recipient keys to your account that you or your contacts would not be aware of.

This is about to change with contact key verification and transparency, though.


Asymmetric encryption doesn’t change anything. Apple is still the one telling your client whose key to encrypt the data for.

A BIG part of successful encryption is key exchange.

A successful SSL/TLS man-in-the-middle attack exchanges false asymmetric keys.


Read the whitepapers: the client side generates the keys and only transmits the public keys to the server. E2EE is truly end-to-end (as in client-to-client). Meta has no access to the content of your messages; the same has been true for WhatsApp.
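
The pattern being described is straightforward to sketch with WebCrypto (the curve choice and names are illustrative, not what Meta actually uses):

    // Generate the keypair on-device and export *only* the public half
    // for upload. Works in browsers and Node >= 18.
    async function makeIdentity() {
      const pair = await crypto.subtle.generateKey(
        { name: "ECDH", namedCurve: "P-256" },
        false,                       // private key marked non-extractable
        ["deriveKey", "deriveBits"]
      );
      const publicJwk = await crypto.subtle.exportKey("jwk", pair.publicKey);
      return { pair, publicJwk };    // only publicJwk ever leaves the device
    }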


I never said the keys are sent to any servers.

The keys are still generated and kept using software they wrote.

Second, they also control who they trade the keys between.

This is contrasted to some chat apps (which are painful asf to use) where you have to manually exchange keys, meaning you have to engage with the party you want to talk to and so you can confirm who you are really encrypting messages for. It’s physically impossible to be given the wrong person’s key because you personally had to get them.


> They keys are still generated and kept using software they wrote.

This is a prerequisite for forward secrecy, which is arguably much more relevant.

> It’s physically impossible to be given the wrong person’s key because you personally had to get them.

Does that matter at all if the (in your threat model non-trustworthy) software just exfiltrates all messages?

If you don't trust your encryption software, it's game over (unless it encrypts everything fully deterministically and you regularly audit its outputs).


Well, these apps don't even let you verify the keys if you wanted to, so you can't tell if you're being man-in-the-middle'd.

Some people said they are finally adding key transparency features to let you do that, but it should have been there since the start. Something a lot of people already use called SSH literally has had that since forever. It’s like basic 101 cryptography if you design an encrypted protocol that isn’t using a trusted third party for key verification (like certificate authorities in TLS/SSL).

If you implement ANY encrypted protocol, key verification is extremely important. If you aren’t verifying keys are possessed only by your recipient, you cannot verify who can read your message.


WhatsApp has always allowed key verification (at least since they've supported encryption), as far as I remember.

> It’s like basic 101 cryptography if you design an encrypted protocol that isn’t using a trusted third party for key verification (like certificate authorities in TLS/SSL).

SSH/TOFU is one model, PKI is another. Both have their respective merits, especially when combining PKI with certificate transparency.


You can compare verification codes on WhatsApp and (starting ~next week) iMessage [1]. And both are in the process of establishing a mechanism similar to Certificate Transparency for x509 certificates [1][2].

[1] https://security.apple.com/blog/imessage-contact-key-verific...

[2] https://engineering.fb.com/2023/04/13/security/whatsapp-key-...


The keys could be generated on the devices and never leave them.

Of course, if the software is not open source, that's hard to police. But abstracting away the key generation from the user doesn't mean that the services can read your messages.


I don't think this is actually true. As far as I understand it, Apple is generating an asymmetric key on the client and not sending the private key in an unencrypted form at all. I am a bit fuzzy on the specific details for iCloud, but IIRC they basically have your devices verify each other, and then the key exchange process happens between them. Authentication uses SRP challenges instead of traditional password authentication, which means the password itself can be used with a KDF for keychain recovery, since Apple never sees it (not in hashed form, not over TLS, not during signup, etc.)

Even with that, there are still two concerns:

- Compromised clients: either malicious updates or buggy code.

- Security design issues: for example, if iCloud can just surreptitiously add keys to your account, it can trivially make the security moot. (I am aware this is currently an issue, though apparently it is finally going to be resolved somehow.) At least this one is tamper-evident though.

I hope I'm not misrepresenting reality here. Either way, software with reasonably strong E2EE guarantees that still has decent user experience is more possible than it ever has been. The last remaining problem is account recovery, and Apple has a bit of a leg up on this one since a lot of Apple users will have multiple devices that they can use as a backup, even if they lose a device and forget their password simultaneously.


It doesn't really matter how Apple does it, because the rules of cryptography are set in stone.

- To ensure that your message is unreadable, you must correctly encrypt the data symmetrically or asymmetrically with a key. Well, we can assume Apple or Meta can do this properly.

- Second, as the sender or recipient, you MUST verify the authenticity of the key, whether you are using asymmetric or symmetric encryption.

In TLS/SSL, key verification is handled by third parties called certificate authorities.

In SSH, key verification is handled by comparing the key signature that the SSH client displays.

Most of these services right now do neither (trusted third party nor display of a key), therefore the keys cannot be verified at all. (That said, some people said they are doing what SSH does soon.)

I’m happy Apple is doing those things to exchange your own key between your own devices. This is already way better than most services. However, that problem is orthogonal to the problem of key exchange between you and a recipient.


TOFU (Trust On First Use, e.g. what SSH does) is already the de facto standard. The only difference is that the warning is less annoying in Signal/iMessage (soon)/etc. Matrix and Signal also offer out-of-band verification, but since compromising TOFU requires actively compromising a user before the key exchange (and it's tamper-evident), it's not really a very big concern for the vast majority of communication.
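
TOFU is simple enough to fit in a few lines; a minimal sketch (this is essentially the "safety number changed" banner, with illustrative names):

    // Pin a contact's key fingerprint on first contact; warn if it changes.
    const pins = new Map<string, string>(); // contactId -> key fingerprint

    function checkKey(contactId: string, fingerprint: string): "new" | "ok" | "changed" {
      const pinned = pins.get(contactId);
      if (pinned === undefined) {
        pins.set(contactId, fingerprint); // trust on first use
        return "new";
      }
      return pinned === fingerprint ? "ok" : "changed"; // surface a warning
    }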


I know the argument is somewhat moot because they're all closed source. But with WhatsApp and Apple shipping key transparency, this isn't necessarily true anymore. You can verify that the keys that were given to your contacts are the keys that were generated on your device without needing to meet in person.


WhatsApp shipped key transparency before Apple. I assume FB will follow.



