Deploying key transparency at WhatsApp (fb.com)
204 points by snowboarder63 on April 13, 2023 | 202 comments



Something I've often pondered: what about push notifications? Aren't their contents floating around in your phone in plain text?

I've thought about it since I used to use https://www.pushbullet.com/ (a great app, by the way) to show my mobile push notifications on my desktop. Really convenient, but a bit worrisome that an app can read these notifications in the first place!

I'm aware that you can change the notifications to not show their contents, which may resolve this issue, but (a) I imagine they're still being _transmitted_ in full, and (b) most users don't do that, of course.

Anyone know the answer to this?


This is usually done by sending a push message with no content that just prompts the app to fetch messages whenever new ones are available. Of course, if the app then proceeds to post the message contents to a notification in plaintext, well then... But you can adjust the settings for what to display in the notification, and of course just disable the notifications, so no other app will be able to listen.


For that, the app must be in the foreground. How does WhatsApp manage to always be in the foreground? My app always gets killed by Android.


On Android, push notifications are handled by a service defined in your application's manifest. The system launches the service when a notification is received, and calls its onMessageReceived() function which you customize to display the notification's layout, add actions to tap events, etc.

https://firebase.google.com/docs/cloud-messaging/android/rec...
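To make the mechanics concrete, here's a minimal Kotlin sketch of the "content-free push, then fetch" pattern (class name, payload key, and the fetchAndDecrypt() helper are made up for illustration; this is not WhatsApp's actual code, and the service would also need to be declared in the manifest):

    import android.app.NotificationChannel
    import android.app.NotificationManager
    import android.content.Context
    import androidx.core.app.NotificationCompat
    import com.google.firebase.messaging.FirebaseMessagingService
    import com.google.firebase.messaging.RemoteMessage

    // Hypothetical service, registered in AndroidManifest.xml.
    class MessagingService : FirebaseMessagingService() {

        // Called by the system when a push arrives; for data-only
        // messages this fires even when the app is in the background.
        override fun onMessageReceived(message: RemoteMessage) {
            // The push itself carries no plaintext, just a hint of what to fetch.
            val conversationId = message.data["conversation_id"] ?: return

            // Placeholder: fetch + decrypt over the app's own E2EE channel,
            // so Google's servers never see the message content.
            val body = fetchAndDecrypt(conversationId)
            showNotification(body)
        }

        private fun fetchAndDecrypt(conversationId: String): String {
            return "decrypted message for $conversationId" // stub
        }

        private fun showNotification(body: String) {
            val manager = getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
            manager.createNotificationChannel(
                NotificationChannel("msgs", "Messages", NotificationManager.IMPORTANCE_HIGH)
            )
            val notification = NotificationCompat.Builder(this, "msgs")
                .setSmallIcon(android.R.drawable.ic_dialog_email)
                .setContentText(body) // only here does plaintext reach a notification
                .build()
            manager.notify(conversationId.hashCode(), notification)
        }
    }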


Apparently `onMessageReceived()` is called only when the app is in the foreground.


I spent a day of my life at work over this annoying confusion. https://firebase.google.com/docs/cloud-messaging/android/rec...

It is counterintuitive, feels arbitrary, and logical thinking will cause you to misread the chart they provide. Whether that method is called depends on the payload of the notification, as well as on whether you've properly registered the types of notifications. This is helpfully spread between the Android docs and the Firebase docs. The exact differences between the notifications that will trigger this (while still being compatible with iOS) are left as an exercise for the reader.


It's not that bad if you completely ignore Firebase and override onHandleIntent. Now you can choose what to do with the message without caring about the difference between "data" and "notification" messages or whether your app is in the foreground.


Can you elaborate on this a bit? Is it possible to use a non-Firebase push notification service on Android that properly works in the background? Or does the app still need to use FCM, but you can override the client-side handler somehow?


Both are possible. The source code to the Firebase push messaging service is available on Maven and GitHub, so you can re-implement it. Otherwise, you subclass the same service but override `onHandleIntent` instead of `onMessageReceived`, and avoid the code path that immediately displays the notification for 'notification' messages when the app is in the background.
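For what it's worth, a hedged sketch of what this looks like (the exact name of the overridable entry point has varied across firebase-messaging versions; the commenter calls it `onHandleIntent`, some versions expose it as `handleIntent(Intent)`, so treat the signature and the payload key as assumptions, not verified against any particular release):

    import android.content.Intent
    import com.google.firebase.messaging.FirebaseMessagingService

    class RawMessagingService : FirebaseMessagingService() {

        // Intercepts every incoming push before Firebase decides whether to
        // auto-display a "notification" message (background) or hand a "data"
        // message to onMessageReceived(). Overridability is version-dependent;
        // verify against your firebase-messaging release.
        override fun handleIntent(intent: Intent) {
            val extras = intent.extras ?: return

            // Hypothetical payload key; route all messages through the app's
            // own code path regardless of foreground/background state.
            val conversationId = extras.getString("conversation_id") ?: return
            // ... fetch, decrypt, and display via your own logic ...

            // Deliberately NOT calling super.handleIntent(intent) skips
            // Firebase's built-in auto-display behavior.
        }
    }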


By having a permanent/persistent notification. For example, KeePass2Android does it pretty well. (Usually when the battery saver kicks in, it kills these semi-foreground things.)

When the phone starts, the user has to first start the app though (and I found this to be true for Signal, Skype, and probably for WhatsApp too).


WhatsApp doesn't have a persistent notification though.

> when the phone starts the user has to first start the app though

Just tested this, and it is not the case with WhatsApp. I am on a Samsung, and I am not sure if WhatsApp gets any special treatment.


ah, I probably misunderstood the context. KeePass2Android does this to prevent Android from putting it to "sleep", which leads to the in-memory unlocked DB getting unloaded.


It used to be that you had to send a silent push to wake the app which would display a local notification. The iOS push API got updated so that now you can send an encrypted payload and modify the notification before displaying: https://developer.apple.com/documentation/usernotifications/...


Added in iOS 10 - that is 2016.


TobTobXX is correct about how to do push notifs without transmitting the message through Google or Apple servers. This lets the app fetch the message through its own channels (and you should be able to inspect the push payload coming from Google to verify this).

Once the messaging app has downloaded the message securely, the "end" in "end-to-end" is satisfied and what you do with your notification is your decision. If you elect to explicitly grant Pushbullet permission to read your notifs and forward them to your desktop, then that's your choice I guess.


I haven't used the app you mentioned, but KDE Connect and a notification log app I use can both do this.

I think maybe you forget that they first need to be granted notification access before they can read notifications, which is a very high-level permission in Android. You can't even grant it directly in the normal permissions menu, and there are pop-up warnings. (Well, then there is the question of how many 0-days there are in Android.)


On the web, there is the Web Push protocol, and a number of extensions that help out.

One of the extensions is Message Encryption, which keeps the message secure at least during transit (but could do more!). Quite important, really, because most browsers are configured to use push services provided by the browser maker, which are public and highly available. Browsers then ask the push service for their messages.

https://datatracker.ietf.org/doc/html/rfc8291

I rather doubt Signal would have worked with PushBullet but now that it's no longer an SMS client too it's less interesting to think about.


I was a Pushbullet user years ago, but it became expensive and bloated (they included a chat service?)

For Android users, there's a free and more flexible alternative: Join https://joaoapps.com/join/

I've been using Join's API for years to send a notification to my phone with a simple request when my programs finish running.

You can also exchange files between your devices using your Google Drive account, and many other features.


As far as I know you can also have an encrypted payload in the notification, which only the app can decrypt.

I'm not sure that payload can be decrypted by an app without unlocking the phone or launching the app though.
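On the "without unlocking the phone" part: with the Android Keystore this depends on how the key was generated (e.g. a key created with setUnlockedDeviceRequired(true) is unavailable while the device is locked). A minimal decryption sketch, assuming an AES key already stored under a made-up alias and a payload laid out as a 12-byte IV followed by the ciphertext:

    import java.security.KeyStore
    import javax.crypto.Cipher
    import javax.crypto.SecretKey
    import javax.crypto.spec.GCMParameterSpec

    fun decryptPushPayload(payload: ByteArray): ByteArray {
        // Load the app's key from the Android Keystore ("push_key" is a
        // hypothetical alias created elsewhere with KeyGenParameterSpec).
        val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
        val key = keyStore.getKey("push_key", null) as SecretKey

        // Assumed layout: 12-byte GCM nonce, then the ciphertext + tag.
        val iv = payload.copyOfRange(0, 12)
        val ciphertext = payload.copyOfRange(12, payload.size)

        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
        return cipher.doFinal(ciphertext) // throws if the payload was tampered with
    }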


KDE Connect is a good open-source alternative to Pushbullet.


iOS and Android allow apps to process push notifications before displaying them.


WhatsApp announced that they’re adding key transparency to enhance end-to-end encryption verification within their suite of apps. There’s a blog post on the engineering blog going into the details, including the announcement that the core logic for managing an auditable key directory is being open-sourced on GitHub: https://github.com/facebook/akd


I can’t really put my finger on why, but this comment looks like it came straight from ChatGPT.

Did it?


It's phrased as if it's an answer to a question, which it would be if it were the result of a prompt to ChatGPT. Of course, a person could deliberately emulate that style as well, making this an unreliable way to determine whether a comment was written by a bot.


I did not use ChatGPT to write this - Me lol


Sounds like something ChatGPT would say


> Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

https://news.ycombinator.com/newsguidelines.html


Not to me, even after re-reading it. ChatGPT often sounds too formal but this isn’t.


Could be. I saw one on Reddit the other day, checked the user's history, and the comments all had that uncanny valley feel to them.


Cut this crap out. This type of questioning all the time will not be good for HN.


Using ChatGPT to make a comment isn't necessarily a bad thing. I'd say the curiosity about how the comment was written is a good thing, as long as it's not a criticism.


Sure, but it's still pretty rude because it's usually baseless. And as you said, it doesn't even really matter. If the comment is bad, just downvote it. If it's good enough, why even bring it up?


"Don't ask rude questions", usually said by those with something to hide. Obviously there's a current novelty factor with cgpt answers, and i for one am happy people 'challenge' them. If the comment isn't what you like, ignore it?


This is starting witch hunts for AI-generated comments, attempting to discredit the comment by its format, not its content. This violates the goodwill we all share in conversation on HN.


Why would someone even bother using a chatbot to write a comment? Just writing it would be a lot quicker.

There isn't really anything to gain here with that either.


“Someone” wouldn’t. A bot would. Which was implied (albeit maybe not obvious) in my question.


But a current-gen bot doesn't go look around for forums to leave comments on by itself. There must be a human that set that up through the API.

My question is why, what's the point? Even if it does happen sometimes, in this case it was not so.


The problem is not technical. FB could write anything; the security of the system is as weak as its weakest link.

The problem here is way, way behind the computer.

https://xkcd.com/538/

The weakest link here is that Facebook has to respect US laws.

They don't have a choice there.

So, if US law permits or requests in some way interception of communications, or that operators have to report certain activities, then your right to secrecy is done.

Of course, a random user won't have their dog food or gardening communications intercepted, but once you trigger certain patterns, welcome to the new "user trials / feature flags / beta".

I'm not saying this specifically about WhatsApp; it's valid for any US-based app

-> and broadly any app whose founders may eventually be arrested by the US (as the US has a lot of extraterritorial power).

(Think about how easy it would be to decrypt a Mega.nz file in a real-life scenario: one push of code on one URL to send back the key part after the # sign, and done. Or to activate new trials in Google Chrome, or to push a Play Store update to individual users, etc...)

I'd be really surprised if Zuck took responsibility and ended up in jail because he refused to execute a legal request regarding an imminent terrorist attack (risking criminal liability and being charged with helping the criminals; well, there's a plus: that's more time to spend in the Metaverse).

The most likely scenario is that the US government is very powerful and capable of enforcing laws in its own country, and that you have to respect those laws if you want your company to continue.

Same with China.


You're describing exactly the problem that key transparency helps to solve.

With this rolled out, the WhatsApp app itself will be able to detect, by default without any manual verification, if FB attempts to MITM the connection.

While this doesn't make it technically impossible for Facebook to modify the app and servers, it does make it organizationally almost impossible to do so secretly. Such a move would require the involvement of numerous individuals across multiple teams and would be noticeable to security researchers through changes to the app.

This approach is taking off in a bunch of similar problem spaces (web PKI, code signing, etc.), so it's very exciting to see it applied here.

Randomly, and somewhat weirdly, Facebook actually offered one of the first Certificate Transparency monitoring tools, which made it possible to monitor all certificates issued for your domain using a very similar approach: https://www.facebook.com/notes/3497286220327506/
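To give a feel for the client-side check this enables, here's a toy Merkle inclusion proof verifier in Kotlin (a conceptual sketch only: the real design is built on SEEMless/Parakeet, per the links elsewhere in the thread, and uses VRFs and more machinery than this bare construction):

    import java.security.MessageDigest

    fun sha256(vararg parts: ByteArray): ByteArray =
        MessageDigest.getInstance("SHA-256").run {
            parts.forEach { update(it) }
            digest()
        }

    // One step of the proof: a sibling hash plus which side it sits on.
    class ProofStep(val sibling: ByteArray, val siblingOnLeft: Boolean)

    // Recompute the root from a leaf (hash of user id + public key) and
    // compare it against the root published in the auditable directory.
    fun verifyInclusion(
        userId: ByteArray,
        publicKey: ByteArray,
        proof: List<ProofStep>,
        publishedRoot: ByteArray
    ): Boolean {
        var node = sha256(userId, publicKey)
        for (step in proof) {
            node = if (step.siblingOnLeft) sha256(step.sibling, node)
                   else sha256(node, step.sibling)
        }
        // A mismatch means the server handed this client a key that is not
        // in the directory that auditors and everyone else can see.
        return node.contentEquals(publishedRoot)
    }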


Not really?

I don't see what prevents the app from pushing a decoded copy of the conversation?

Even a variant of Skype was caught doing exactly that (we only know about it because they left the server that had the raw logs completely open).

And still, Skype is "very secure/encrypted/blablabla", which is true, but only within the borders of local regulations.

https://web.archive.org/web/20090210230204/http://www.inform...

The end comment/advice from the US part is even a bit funny: "travelers should assume that all communications are monitored."


You're making my point: some Chinese Skype variant did this, back in 2009, and got caught.

There's just no way, in real life, for Facebook to add what you're describing to one of the most prominent messaging apps in the world without somebody noticing.

I'm not here to tell you that your WhatsApp messages are perfectly secure. If the CIA wants to read your messages they'll probably just hit you with the wrench instead of some FB exec. But I do think that transparency logs are deeply under-appreciated for their ability to make undetected mass-surveillance dramatically more challenging.


> There's just no way, in real life, for Facebook to add what you're describing to one of the most prominent messaging apps in the world without somebody noticing.

That assumes somebody is digging through each update and the thousands of classes. FFS, the OG Facebook app was already blowing past the limits of Android in 2013 [1], and the current WhatsApp app isn't much better - just look at the current APK file:

    2023-04-12 11:38:58 .....      2578508      1171345  classes.dex
    2023-04-12 11:39:04 .....     13312588      6020223  classes2.dex
    2023-04-12 11:39:08 .....      7671448      3310145  classes3.dex
    2023-04-12 11:39:08 .....      2118352       945166  classes4.dex
That's ~25MB of Dalvik code (around 11MB as compressed in the APK), probably double that if you restore it to Java class files and triple to quadruple that in Java source files. It's impossible to audit that there is no routine pushing keys to, say, the usual analytics backend they use - and to make it worse, according to APKMirror, they push updates every few days [2].

Although my biggest question is... it's a fucking messenger app. Why does it produce a larger binary than a full-blown Linux kernel?!

[1] https://engineering.fb.com/2013/03/04/android/under-the-hood...

[2] https://www.apkmirror.com/uploads/?appcategory=whatsapp


>Although my biggest question is... it's a fucking messenger app. Why does it produce a larger binary content than a full-blown Linux kernel?!

Because it does so much more than messaging. Also, UI code is generally very verbose.


Also, conversely, the kernel doesn't do that much. Most of the Linux kernel source consists of device drivers, which compile to modules rather than get bundled into vmlinuz. Many of these modules are also rarely built if ever. The kernel itself really is a pretty small fraction of the complete software bundle that makes a Linux system functional.


I know. But still, that amount of compiled code is insane.


> There's just no way, in real life, for Facebook to add what you're describing to one of the most prominent messaging apps in the world without somebody noticing

Your point moved from "key transparency is the defense" to "someone will notice". But if your defense is the hope of "someone noticing", you're in for a big surprise. Sometimes things go unnoticed. Look no further than OpenSSL: open source, used by billions, deployed by companies worth as much as small countries, and yet nobody noticed Heartbleed for years.

So I'll remain very cynical that some development flag targeting a handful of people in an app like WhatsApp, and then removed, will be noticeable enough to count as a strong defense.


I think you are trying to say "it's never 100% secure", and the parent agrees with you. The parent is just saying "this is making it more secure (but not 100% secure)".


The trick is to push a modified version only to the few clients you want to attack. Use it sparingly and you won't get caught.


Or just hack the phone of those few clients with another attack vector. Doesn't mean that security is entirely useless. It depends on the threat model.


And most of all, do not forget the log files on an open server (that was their mistake; otherwise it would have been fine, I think).


There are also tons of ways to exfiltrate data through known channels in ways that are difficult for security researchers to distinguish from otherwise secure app analytics code.

A crash/exception logging system, say, might appear to researchers to anonymize data, but it would be very possible for code to be written that happens to raise a mundane exception when specific users or geofences see specific words on screen, in a way where that list of users/geofences/words could be controlled by non-technical teams. The log message itself doesn't even need to carry sensitive data; its existence alone, when the trigger conditions are known, can be used to carry out a highly targeted attack.
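As a toy Kotlin illustration of that trigger pattern (every name here is invented):

    object Telemetry {
        // Imagine this arriving via an ordinary remote-config mechanism
        // that non-engineering teams can edit.
        var flaggedTerms: Set<String> = emptySet()

        fun onMessageRendered(text: String) {
            if (flaggedTerms.any { text.contains(it, ignoreCase = true) }) {
                // Looks like a mundane, anonymized error report; the message
                // content is never transmitted. The report's mere existence,
                // correlated with the known trigger list, is the 1-bit leak.
                reportHandledException(IllegalStateException("render fallback path 2"))
            }
        }

        private fun reportHandledException(e: Exception) {
            println("telemetry: ${e.message}") // stand-in for any analytics upload
        }
    }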

Even open-source systems can be vulnerable to this: see e.g. https://github.com/signalapp/Signal-iOS/blob/eaed4da06347a3a... and consider the ways it might be possible for a small group of people at Signal to cause a specific set of messages to be seen as corrupt without raising any flags to the community auditing the code.

Of course, lack of visibility into runtime errors can lead to vulnerabilities as well. I don't think the solution is for us as a community to advocate for removing all error analytics in distributed systems. But we can't ever forget that: all analytics surfaces are attack surfaces.


Exactly. Without Zuck opening the protocol and sanctioning the use of open-source clients, it is not meaningful.


Somehow I think this is still possible. The engineers behind WhatsApp seem to be very talented, and they may be able to convince Zuck that an open client would increase trust in Meta's brands and increase usage (which can then be used to promote other Meta products).

If they keep the server-side closed, it's totally fair I think.


Or open source alternatives will pick up their work, and the WhatsApp engineers will probably be happy about it.


This solves a real issue. Certificate Transparency for SSL certificates, introduced years ago by Google, surfaced a lot of misissued certificates and fixed a large hole in the whole system. Of course, this is for WhatsApp and the impact is smaller, but still. Congrats to the Meta team.


Because of my handle, I would appear to be biased, but here is a popular Indian news article about WhatsApp.

https://scroll.in/article/1044425/how-a-cross-border-love-st...

The fact is, the Indian govt has long been able to intercept WhatsApp, and failing that, they force group admins to get "registered":

https://thenextweb.com/news/kashmirs-police-want-people-to-r...

Local intel suggests malware gets installed when admins go to the station for "registration". Remember, this was before all the state-sponsored malware came to light a few years ago, so locally we have known this for quite some time now.


> So, if US law permits or requests in some way interception of communications, or that operators have to report certain activities, then your right to secrecy is done.

Yep. FISA Section 702 allows that, but supposedly only if you're not in the US and not a US citizen. Will an American get caught up in the net? Maybe? Oh, and it doesn't require a warrant. It's set to expire at the end of this year, but they've been known to renew it. https://www.eff.org/702-spying


Actually, this is exactly what Pavel Durov (Mark Zuckerberg’s counterpart and founder of the Russian Facebook, VKontakte) did when Russian authorities asked him to reveal who helped organize the Maidan protests in 2013/2014:

https://globalvoices.org/2014/02/22/pro-maidan-video-goes-vi...

https://www.documentcloud.org/documents/1146789-durov1

(Besides these guys I mean: https://www.ndi.org/eurasia/ukraine)

And he just posted the middle finger on his site.

Pretty soon he started receiving the standard “tax evader” treatment (i.e. offices being ransacked, veiled personal threats etc.), his shareholders pushed him out and he and his brother fled the country and started Telegram.

Pavel is a true libertarian who’s stood for his beliefs against his own government, and lost control of his company as a result. Unlike Moxie Marlinspike (founder of Signal, whose protocol WhatsApp also uses), who claims he is an “anarchist”, Pavel walked the walk.

When he started Telegram, he pissed off his US investors, who got heat for Telegram being used by ISIS to communicate. The investors were pissed off that they never made a profit on Telegram and were mentally associated with helping ISIS. Although Pavel eventually did take action: https://www.wsj.com/articles/telegram-app-tackles-islamic-st...

Pavel also claims his team was approached by the CIA multiple times and they successfully resisted it. Telegram offices are nowhere to be found in Dubai: https://m.youtube.com/watch?v=Pg8mWJUM7x4

That is how you run a free speech absolutist social network that governments all want to control. Telegram is probably the most secure and trusted centralized social network (with Signal a distant second).

But that is insane. We don’t have to trust Pavel or Moxie to be our “last line of defense.” Why do we rely on giant, centralized corporations to host all our private conversations?

This was my response to Moxie’s critique of Web3 and decentralization:

https://community.intercoin.app/t/web3-moxie-signal-telegram...


So why the fuck is Telegram not e2e encrypted by default, and why are group chats not e2e encrypted?

Not to mention that even when chats are e2e encrypted, they are encrypted using their proprietary algorithm?


As mentioned in the linked article, E2EE group chats are more or less impractical due to the identity verification problem. This initiative is intended to help with that. I will also point out that large group chats are impractical due to the simple fact that not everyone will know everyone else. So someone can just leak the messages.

The Telegram method of dealing with this is obviously not the only way, but it is a legitimate way.

>Not to mention that even when chats are e2e encrypted, they are encrypted using their proprietary algorithm?

The algorithm is public. It is a straightforward application of well known primitives. It is hardly proprietary.


> The algorithm is public. It is a straightforward application of well known primitives. It is hardly proprietary.

Note that its predecessor was very much not that (e.g. https://words.filippo.io/dispatches/telegram-ecdh/ was a vulnerability in it, and it stuck to some weird choices of crypto primitives/key sizes for a pretty long time). This colors my expectations about the current version slightly.

I personally know nothing about the current protocol used (MTProto 2.0), and a few minutes of googling surfaced https://eprint.iacr.org/2022/595.pdf, https://eprint.iacr.org/2023/469 and https://arxiv.org/abs/2012.03141, which I'd need to read in a reasonable amount of detail to have an opinion on MTProto 2.0.


That’s like Mehdi Hasan nitpicking small factual inaccuracies in the Twitter files last week and ignoring the main discussion with Matt Taibbi about government censorship around the world.

Look, if people want to encrypt their chats on Telegram, they start a secret chat. That’s how it should be. Why should it be the default? Because you think people are idiots?

If I make a secret chat on Telegram, I trust it more than a default chat on Signal. Both are good, but one company is much harder to pressure than another.

And this is all a moot point - like arguing which homeless person is richer. If you want real privacy and control — simply communicate without using the infrastructure and software provided by centralized corporations!


> Look, if people want to encrypt their chats on Telegram, they start a secret chat. That’s how it should be. Why should it be the default? Because you think people are idiots?

Because everyone is an idiot once in a while (just after waking up, when drunk, when stressed, when sick, ...). Also, because the very presence of a secret chat is something that can be observed and can be enough to raise suspicion.


I know this is a bit of a cop-out, but even writing in a non-secret chat and having Telegram know, then totally deleting a message on Telegram with no visual trace to the counterpart, is less worrying for me than doing the same on the “e2e encrypted” WhatsApp, which shows “Message deleted” and, if I fail to do it in time, prevents me from deleting the message at all. Telegram lets me delete everyone’s messages and even the entire chat anytime. That shows where their head is at.

That said, you are right that not-on-by-default-for-everyone makes the encrypted chats more suspicious.

I have to say that I have a nuanced view on encryption, which isn’t matching the orthodoxy on HN:

https://community.qbix.com/t/balancing-privacy-and-accountab...


If I understand your proposed world correctly (I understand it as morally equivalent to the escrow of ~all keys with k-of-n split across some well-chosen entities/people), I expect a person holding that view to support encryption-by-default even more strongly, because in a world that looks that way (and that way actually works as described) there is no apparent downside to that. I'm curious whether you disagree with any part of this.

OT: Do you have anything more concrete written on the choice of holders of escrow shares (so that they can be trusted to actually follow the audit rules)?


Thanks for reading through what I wrote and grokking it! It means a lot to me. Now we can discuss it.

Yes, for all private interactions / conversations I support encryption, provided it can be decrypted in the way I said. Obviously there is room for innovation to make it harder and harder to do bulk decryption without a proper reason and audit trail. And make sure somehow that the cameras can prove they aren’t sending unencrypted video or that encryption keys are secure. It’s hard to prove a negative, but possible if verifiers can search the entire signal. Those innovations are part technological and part societal… but the underlying technology (like blockchain) has to exist first. Has anyone built it yet?

Now having said that, I don’t think encryption keys should be that hard to get for conversations within a corporation, and probably should be nonexistent for public servants on duty. Today we have the opposite … NATO promises to Gorbachev are secret, Normandy format talks were closed for years, Ukraine-Russia negotiations were behind closed doors, we don’t know why they all failed. And regular people have to go to war because of their failure. I think if the government wants to know where my $600 goes I should be able to know where trillions go.


I've read the post and I don't understand exactly which scenario they're trying to solve.

I think it's aimed at solving the problem of someone impersonating the WhatsApp server and responding with corrupted public keys (but then this person could also impersonate the key repository server?).

It doesn't, however, protect users against WhatsApp cooperating with states to introduce spying devices / intermediaries in your conversation. Does it?


So it does indeed guarantee that the server is acting in a trustworthy manner, thanks to the public auditing scenario. We will shortly be making our audit logs publicly available to show that the verified crypto proof the client performs does indeed match the publicly available records. The academic works SEEMless (https://eprint.iacr.org/2018/607) and Parakeet (https://www.ndss-symposium.org/ndss-paper/parakeet-practical...) jointly outline how this all works from a technical perspective.

While we do maintain the directory, we are held to an honest standard by our audit logs. Should any auditor find invalid records, they can publicly hold us accountable.
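For intuition about what an auditor checks, here's a toy hash-chain model in Kotlin (this is NOT the actual AKD/Parakeet construction, which uses proper append-only Merkle proofs; it just conveys why rewriting history is detectable):

    import java.security.MessageDigest

    fun sha256(vararg parts: ByteArray): ByteArray =
        MessageDigest.getInstance("SHA-256").run {
            parts.forEach { update(it) }
            digest()
        }

    // Each published epoch commits to the previous root, so changing any
    // past entry changes every subsequent root.
    class Epoch(val number: Long, val directoryDigest: ByteArray, val root: ByteArray)

    // An auditor replays the published log and checks every link.
    fun auditLog(epochs: List<Epoch>): Boolean {
        var previousRoot = ByteArray(32) // genesis value
        for (epoch in epochs) {
            val expected = sha256(previousRoot, epoch.directoryDigest)
            if (!expected.contentEquals(epoch.root)) {
                println("epoch ${epoch.number}: root mismatch - history was rewritten")
                return false
            }
            previousRoot = epoch.root
        }
        return true
    }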


I think in certain countries there are easier ways to clone numbers and also to generate links to switch a WhatsApp account to another phone. Attackers will then hijack the account and send out SOS status updates and messages to contacts asking for money. Public key verification can help two parties authenticate manually, so to speak.


When I left Canada after my PhD, I stopped paying for my pre-paid phone plan for my Canadian number. A couple of months later, someone tried to scam my cousin in Pakistan, using Whatsapp. The scammer

1. Acquired my Canadian number.

2. Identified it as "my" account, and downloaded a picture of my social media to use as a Whatsapp display picture.

3. Somehow, identified my cousin as a potential target. Not sure how they did it.

4. Located roughly where my cousin lived. Perhaps via social media check-ins or some other way.

5. Asked my cousin to meet in a park not too far from his house, because I was in trouble and needed money.

Thankfully, my cousin did not fall for the scam and contacted me via other means to verify.


In the scenario you're describing, what would let WhatsApp know it actually shouldn't register that new device in the public key repository?

Either WhatsApp knows the phone is hijacking the account to a new number/device, in which case it should simply disable it, or it doesn't, and then it will treat it exactly like a normal one.


Well, the public key would refresh and the other side could see that. I think WhatsApp already sends public key refreshes anyway.


I occasionally verify my security barcode with friends, and so far have never found any MITMs.

Has anyone ever found barcodes that mismatch, indicating a MITM?

If my understanding is correct, then for this to happen, either someone must be impersonating WhatsApp's server (which involves faking an HTTPS cert), or WhatsApp themselves must be running an evil server. Both of those are quite a high bar, even for a state-sponsored attack.


Some technical experts at GCHQ have publicly suggested using "ghost users" (a kind of server-forced MITM) as a way to wiretap encrypted messaging [0]. The proposal is slightly different from a traditional two-party MITM, since it involves adding additional users to group chats, but the basic idea is similar: legally compel messaging operators (e.g., WhatsApp) to participate in wiretapping. Presumably there are other clandestine agencies who might try to do something similar by illegally compromising service providers.

Key transparency makes standard MITM much more detectable, and so it will significantly dissuade agencies and private actors from investing in those capabilities. It obviously doesn't solve all problems (governments will still be able to hack, and hypothetically even to mandate client changes that disable transparency), but it removes a piece of low-hanging fruit and signals to governments that it isn't worth trying to exploit protocol weaknesses.

[0] https://www.lawfareblog.com/principles-more-informed-excepti...


> Key transparency makes standard MITM much more detectable, and so it will significantly dissuade agencies and private actors from investing in those capabilities.

Just the private actors. Agencies would just spend more on it, or investigate side channels that achieve the same ends.


> Agencies would just spend more on it

That's the whole point of security: you try to make it too costly for your opponent to do it. That's why you have a threat model: to decide how much is "too costly".


Sorry, I didn't communicate my point well.

The agencies, the ones you'd care about, spare no expense.

If your threat model includes agencies, making an operation expensive isn't your goal because money might as well be infinite. So you target a different resource:

Time.

It would've taken more time to convince Apple to let the FBI into the San Bernardino shooter's phone than it took for the FBI to use a vendor with a crack for that device and OS. Hence.

---

I'm not disagreeing with the value. I'm merely pointing out that if your goal is to tamper with attack economics, you need to target resources that are finite for the adversary. With many state actors, that resource is time.


Agencies have different interests that are much more complex. In particular, the United States government does not want to cause a global iCloud or WhatsApp outage because they were trying to spy on a few potential terrorists. They don’t want to spend a year in FISA court trying to make Meta alter their platform. They don’t want a whistleblower software engineer blowing their operation up. They don’t want half the world to ban US technology companies because they clumsily got caught adding backdoors. Even if none of that happens, they don’t want to risk their precious access getting broken because someone pushes a software update or a new security feature.

Updates like key transparency don’t perfectly prevent all those things, but they make it less useful to invest in capabilities that might now be incompatible with them, or might get detected because of this feature. They also signify that the organization is hostile to the sorts of exploits that might enable surveillance, and that it’s probably better not to engage with them.

Lastly, government agencies do not have infinite money.


Matt, we had this conversation in person at a BSIMM conference years ago when we were talking about how best to focus energy during threat modeling exercises. And while your position has become more nuanced, it still reconciles with our original agreement that time is the finite resource.

Unless I'm missing something specific? I imagine the reason why an agency would avoid said hostile battles is specifically to preserve time or perhaps to also buy time. (Being noisy is a great way to lose time quickly)

Agree to disagree on the money component, though. Maybe my comment is best clarified as "infinite from the perspective of [defender]"


Got it, thanks for the explanation :)


Because of people like you, we'd at least find out soon if someone does hack Facebook and starts tapping some percentage of conversations. It's like people that read terms of services, or use custom email addresses per service to learn of breaches before the company even knows it themselves. Keep it up :)

For me, I also just like the idea of not having to trust the servers. It's probably fine; as with IRC, MSN, old unencrypted WhatsApp, and Facebook's homegrown chat (etc.), I don't think there have been major breaches either. Doesn't mean it's not nice to have better options and exercise them, at least in my opinion. (Not that I use WhatsApp, but I casually apply this principle on Signal/Threema/Wire, just whenever I meet someone irl whose key changed.)

I wonder if Facebook keeps stats on how many people do the key verification. It would be such a shame if the servers knew whom to MITM and whom not to.


WhatsApp claims to be private and secure, but they insist on slurping up your entire address book before they'll let you use the app (or to be more specific, you can't do things like name a contact or start a group chat without allowing them to upload your entire address book).

Is that data encrypted in such a way that Meta can't sell or make the data available to third-parties who might use it in nefarious ways? Or do they deposit that data directly into their bank account?

I'm in the US, but the owner of the company I work for lives in Europe and refuses to use anything other than WhatsApp to communicate, so it's a tremendous pain in the ass. I wish Apple had the concept of multiple address books so I could choose a limited or even fake dataset to hand over in cases like this.


No. It’s because the metadata is enough for the three-letter agencies. They are perfectly happy knowing who’s in your contacts, who you’re texting and when, and who’s in your counterparty’s contact book, without knowing exactly what you’re talking about.


> I'm in the US, but the owner of the company I work for lives in Europe and refuses to use anything other than WhatsApp to communicate

I feel your pain. It doesn't even make that much sense anymore: So many people in Europe have Signal these days…


If a company wants to dictate the software that its employees use, then it's the company's moral responsibility to provide hardware to run that software on. Not a chance in hell that I'm installing random spyware on my personal machines just to send an IM to my boss.


I also feel your pain. In your specific case, could you deny WhatsApp access to your contacts and still use it with your employer? Of course, that does not prevent your colleagues from uploading their contacts...


Yeah, this is what I do. I mean I know my data is being collected all over the place through other people clicking “yes, take all my data” on this and many other services and apps, but I still don’t like doing it to other people.

So what this means is that when the owner starts a group chat and adds people I haven’t chatted with in WhatsApp before, those third parties show up as just numbers, and I can’t label them even once I learn who they are.

Also, if the owner says “let’s hop on WhatsApp; start a chat and invite (other co-worker)” I have to say that I can’t start group calls, and explaining why gets a lot of eye rolling. And maybe the eye rolling is justified? Trying to maintain privacy is tiring, and I’m probably accomplishing exactly nothing by refusing to let WhatsApp have my address book. They can probably tell me what’s in it with 95% accuracy already, so it does feel a bit like making a stand to prove a point, except literally nobody cares. It would be like if I had a mustache and chose sustainably-produced mustache wax instead of the regular kind, and it turns out the sustainable-mustache-wax company is a subsidiary of Nestle anyway; that’s about the level of impact I’m having here by refusing to press certain buttons inside WhatsApp.


This thread by Matthew Green explains a bit better why this is significant: https://twitter.com/matthew_d_green/status/16465167988405166...



If I'm understanding correctly... this Merkle tree should give clues as to how many users WhatsApp has (unless they insert fake users - but either way, it gives a bound). It also allows me to see how often a user re-registers (i.e. sets up a new phone).

Currently, I can see that only if I have a conversation with a user, but with this public log, I can see the same info for the whole world?


If we can guess when a key was added in relation to another, it may even be possible to deduce who initiated a conversation with whom?


Why not open source the client?


Yeah, the more of these (admittedly excellent) security features they add, the more glaring a hole the lack of an open-source client seems by comparison.

Like, key transparency only helps in a situation where Facebook's servers are compromised, right? That's the most obvious way a man in the middle attack could happen. But if you're worried about an attacker who can compromise Facebook servers, why not also worry about whether that same attacker can compromise the WhatsApp client app, or even compromise Facebook itself at an organizational level?

I don't want to downplay this change too much; it's a genuinely useful security measure that I don't think I've seen any other messaging app implement. But at this point, adding further defenses against external attackers is starting to feel like layering more and more complicated locking mechanisms onto a vault door that's made out of glass.


I think there are two different things: a) The developers of WhatsApp cannot decide to make it open source, but they can decide to write genuinely nice security features. b) Those in power don't want to open source it.


Parent company: Facebook


Open source is of very limited usefulness in terms of ensuring security. Unless you're building the client from the source, you have no assurance that the binary you're using corresponds exactly with the source.


That can be fixed with reproducible builds to ensure the source matches the binary, and binary transparency to ensure the version of the app you're running is the same one everyone else is running.

Open source code is a prerequisite to either of those being useful though.


But that's the whole point: open source gives you the opportunity to build it from source if it matters to you.


Which is why I said it's of limited usefulness. It's useless for people who don't have the ability to build from source, which is most people.


Yes, but that is why I said that it is actually very useful to people who need the security. Most people don't think they do, but those who actually do can ask for help to build from source (which is not that hard).


> that is why I said that it is actually very useful to people who need the security

How is it useful to people who need the security but don't have the skills?

> those who actually do can ask help to build from source (which is not that hard).

It's not hard for people like us, but for the majority of people, it's simply not going to happen. No normal person is even going to consider building from source as a possibility, let alone ask for help to do it.

And who would they ask? A huge number of people don't have a suitable nerd friend, and they're not going to follow online instructions to do it. It's too intimidating and scary.

Besides, don't you think it's a bit much to expect everyone -- regardless of skill level -- to build everything from source just in case the binaries aren't built from the same source?

This is all why OSS does not, all by itself, do much to address security issues.


I think we're talking past each other.

I think it's good to have open source clients, because it makes it easier to audit them. If you get your Signal client from F-droid, and competent people can compare the F-droid binary with the open, audited sources, then it's easier for you to trust the binary.

Of course competent people can reverse-engineer a proprietary binary, but that seems harder than having fairly reproducible builds.

Of course many people are not competent to make the audit by themselves, and therefore they need to trust someone.

And of course, OSS does not all by itself address security issues.

Still I am convinced that it helps.

> It's useless for people who don't have the ability to build from source, which is most people.

I strongly disagree with that. If you have an open source client and a reproducible build, then many competent people can audit the binary you provide on some store. Then most people can benefit from those third-party audits.


This.

Meta probably doesn't make any money from WhatsApp anyway, so it doesn't really hurt them to open source the WhatsApp mobile apps.


It's about power. If they DON'T open source it now, they keep the option to do so later with very little effort. They don't make money now, but maybe circumstances change? No one knows the future, and we don't know their plans.

But if they do open source it, then it will take a lot of hard work to revert it.

In other words, it's a company and this is their asset.


I see your point, but Signal is out there and it does pretty much exactly what WhatsApp does.

It's just that people are too lazy to switch, because they don't care about their client being open source.


I support your point on open source. At the same time, it has to be said that Signal doesn't do "pretty much exactly" what WhatsApp does. On my Android it's quite sluggish, whereas WhatsApp is very smooth.


Works very well on my Android, on a not-very-powerful phone...


WhatsApp users who sync their contacts provide Meta with a social graph annotated with phone numbers.


If you don't think Meta makes money from the WhatsApp user contact network and linking that data to their other data sets, you are most certainly mistaken. There's tons of money in it, even now, before they start adding new privacy compromising features.


Because then users could just disable anti-features.

See https://faq.whatsapp.com/1217634902127718/

They are trying to make "unofficial" synonymous with harmful. Sure, the article doesn't outright say that all unofficial apps are harmful. But it definitely doesn't make any suggestion that a third-party app from a trusted developer could be OK.


They can't hide all the backdoor stuff that way. Closed source is clearly the much better and obvious choice for a privacy-invasive platform.


In the end, what matters most is who is operating the project and which laws they have to follow.

One point to keep in mind is that almost all open-source projects don't really have transparent builds, and even the "transparent" builds are rarely done truly transparently (using public compilers, etc.), but more behind the curtains.

Plus, even with an open-source app that had perfectly transparent builds (which is not the case from what I've seen), the app publisher can find ways to push targeted updates (via app stores), feature flags, betas, or settings to very specific users if compelled to do so.

And there is always a potential excuse that store builds don't match open-source code, because the stores are re-signing the apps (and changing the checksums).

So it's more about who you decide to trust, unless you build the client yourself, which is an extreme outlier.

Most Signal users I know (even very sensitive users) have iOS; they don't build the code themselves, they don't review the code, etc.

They just press Install (and I understand them, I would do the same).


The thing is that if the client is open source, you can build it yourself if you need to.

Most Signal users don't need to do that. But sensitive users can. I think it matters.


The choice matters. That's what open source is all about, a choice and giving the users some power.

I still can't believe people feel the need to justify Facebook's actions, even after their horrible track record and continuous violations of user privacy and trust.


I am not sure if they justify Facebook's actions, or if they just don't care.


Privacy is about granular controls provided to the user. Closed source doesn't even offer you an option. It's either "Use it how we like" or "Get out".

If open-source clients aren't that big of a deal or that big of a privacy win, as you've explained, then maybe there shouldn't be a need to justify the decisions of a company like Facebook either.


Unfortunately this provides zero guarantees of privacy unless I trust Meta's deployed WhatsApp client to handle my E2E decrypted messages with care and privacy.

Seems clear we cannot trust Meta, given they've been fined more than a billion Euros for lax data protection. Meta have set aside two billion Euros to cover potential fines in 2023.


Well, you have to establish a chain of trust to use ANYTHING. By using WhatsApp, you trust:

1. FB won't push code that relays your messages somewhere

2. Google won't backdoor your Android

3. Some networking driver does its job

4. The broadband hardware manufacturer hasn't introduced a hardware backdoor

5. The government won't ask things of FB

Sure, there are tons more points. But the good thing is, everything outside this circle has a really hard time penetrating your privacy.


I guess the point of the parent is that if the client was open source, one could audit it.

But you're right, you need to trust something at some point (be it your hardware, at the lowest level).


This is great to see, congrats to the Meta team!

It might be lost to history, but Keybase has had Key Transparency running in production since 2014:

https://keybase.io/_/api/1.0/merkle/root.json?seqno=799


Can this public log be used outside WhatsApp?

I.e., could a third-party app use it as a way for two people to verify each other's identity securely, knowing only the other person's telephone number?


That would actually be a better application than what they're doing here. In your scenario, you're trusting that Facebook's and that third-party app's servers aren't compromised. What Facebook is publishing here is Facebook verifying that Facebook isn't compromised or compelled.

Anyway this idea is out there and most typically done with PGP, using the standard tooling to sign whatever info (or key) you want people to be able to verify regardless of what platform you're on.


> What Facebook is publishing here is Facebook verifying that Facebook isn't compromised or compelled.

I don't think so. Anyone could run a third-party audit record, right? At least I thought that was the whole point of it...

See https://news.ycombinator.com/item?id=35555910


I think we're having the same argument in two places :). I would say let's centralize to here https://news.ycombinator.com/item?id=35564592 as I already replied there before seeing this reply.


Their promises are worthless unless all client-side software is open source and available for inspection with reproducible builds. It is unfortunate that WhatsApp managed to get significant network effects so fast (especially in Europe), meaning that many people feel they "have to" use it. If you "have to" use it:

0: encourage people to switch to Signal, Session, Briar, Matrix, etc.

1: you can use Matrix bridges and similar things so you don't have to have Facebook's spyware on your phone. Examples:

-beeper.com (pulls together many messaging platforms: WhatsApp, Twitter, Slack, IRC, Instagram, Telegram...) (can self-host)

-element.io operates another Matrix-bridge-for-hire (can self-host)

-texts.com does something similar, but I don't know the details

-probably more I don't know about :-)


I wonder if this can be used to replace the fairly broken GPG keyservers.


The OpenPGP SKS keyservers are broken because they are append only. So script kiddies can do stuff like signing a particular key zillions of times or swamping the servers with zillions of bogus OpenPGP identities. Afterwards there is no way to fix things.

The system discussed in the article (Parakeet) is also append only. So it would be vulnerable to the same sort of attacks. The difference is that it can eventually expire old entries in a reasonable way to free up resources. So no help against signature attacks but possibly of help against the resource usage of bogus identities. The bogus identities would still exist though.

I think there might be merit to the overall idea of having a semi-trusted entity in charge of the system and then making it so that others can judge that what that entity is doing is reasonable. Still a problem if the entity goes rogue and you have to replace it. I suppose that is a problem in the Whatsapp case as well.


I think this is interesting since Apple is also turning key management over to the users. Identity Key management seems like the next step in the App Store dance.


> Identity Key management seems like the next step in the App Store dance.

Can you expand on what you mean by this?


Warning: this is all my opinion, and one should use their own judgement.

Apple's whole identity system is basically Kerberos+LDAP. This is how Apple deals with identity and authentication. Apple has made recent announcements about handing this management to the user. This is how Apple will open up the App Store. There will be a status quo set-up and an open set-up. The status quo will be for those who couldn't give a flying flip about all the freedom people claim they are missing out on. The open set-up will let the identity owner share their keys with anyone they want. Apple can use all sorts of alerts and lock down all of the data held by Apple, and the user can create and share keys with whomever they want without providing unintended access to data held by Apple. Facebook users will create keys to use with their iOS device, and that will be the point at which FB can start to collect data. The only thing this is based on is my sysadmining Macs since about 1989.


GPG key management should have been built into vCard and CardDAV! It's such a missed opportunity!


Yet SMS-based authentication is still a thing. As if SIM duplication and fake base stations weren't a thing.

Yikes.


That's why you can enable a two-step verification PIN. You're asked to provide this PIN after installing WhatsApp on a new device. https://faq.whatsapp.com/1095301557782068


FYI, it also asks for the PIN from time to time on the same device. Just a small inconvenience.


It's a reminder, right? Signal has the same feature.


What do you suggest as an alternative, taking into account their 2 billion users?


Using one of the standard authenticator apps would be much better, and at least as feasible as SMS.


The authenticator apps are just as weak as SMS against phishing, which is a gazillion times more common than SIM swaps.

The authenticator apps cause loss of credentials more often because people don't back them up and then drop their phones in the toilet. The thing that makes SMS weak to SIM swaps is also the thing that keeps a mountain of people from losing access to their accounts.


Sorry, that counts as paid work in my book. I'm sure you understand.


Ah ok just criticisms come free ;)


Correct, and valuable answers command balanced compensation.


Meanwhile, WhatsApp likely still coaxes people into enabling cloud backup, e.g. to Google Drive

- which completely bypasses end-to-end encryption by putting plaintext chat history into the hands of Google et al.

(Not a WhatsApp user myself anymore, so I don't know, sorry; if someone can confirm this in the comments, that would be nice.

But it kept nagging me to enable cloud backup long after they had already been advertising "encryption".)


Why not use the builtin E2EE backups? https://faq.whatsapp.com/490592613091019


This is silly. You’re just taking their word for it:

”When they do match this confirms a secure, end-to-end encrypted conversation.” - Says Meta.

This doesn’t matter. Why would we trust the word of a giant corporation that has lied to us repeatedly and abused every single loophole they can, to safeguard our private chats?

Lies and deceit just in the last few years:

WhatsApp messages: https://www.businessinsider.com/facebook-reads-whatsapp-mess...

Audio: https://www.bloomberg.com/news/articles/2019-08-13/facebook-...

Video: https://medium.com/macoclock/apples-ios-14-catches-facebook-...

Metadata: https://www.propublica.org/article/how-facebook-undermines-p...

They could easily ship any version of a client to anyone at any time. The answer is just to opt out of Big Tech and communicate without using clients and infrastructure of giant centralized corporations! Use open source alternatives.

Capitalism and the profit motive will always pit them against their own users, trying to extract value from the data for their shareholders.


They are going to start providing public access to the audit log. It's basically Certificate Transparency for WhatsApp keys.

https://news.ycombinator.com/item?id=35555910


You have to trust their clients, since they are not open source. I agree with that.

Still, this project in particular seems nice. Why couldn't Signal deploy a similar thing, for instance?


Because it's a very marginal improvement. It's Facebook verifying that Facebook's other server is providing you the right key; I'm not holding my breath for Signal to start doing the same. I'd honestly hope they'd instead focus on usernames, message editing, markdown, or any of the other features it's still missing compared to Wire or Element. (In Signal's defense, it's the most stable of the three so that's why I've got my family using Signal, but I wish for Wire's, or even better, Telegram's feature set on a daily basis.)


> It's Facebook verifying that Facebook's other server is providing you the right key

Again, no :-). The third-party audit record is there such that... well... a third-party server can do that verification.

For WhatsApp it may be marginal because the client is proprietary (hence you can't audit it and verify that it actually uses the feature), but the Signal client is open source. So you could actually see that your Signal client checks the keys using third-party servers. That's something, I think.


> well... a third-party server can do that verification.

Sure, but your phone isn't asking said third party what the results of their verification are. It's asking Facebook.

It can get hacked. It can lie. If you're looking to protect against a malicious server by protecting the key exchange better, you don't achieve that goal by asking the same party for the same information again via a different protocol and hoping it answers differently. It increases attack cost because now the attacker has to fool both systems, but I don't know by how much, honestly. The main cost will be getting into their infrastructure undetected in the first place. Sending discrepant responses to differing IP addresses seems relatively easy beyond that point.


Right.

> It increases the attack cost because now the attacker has to fool both systems, but I don't know by how much, honestly.

That's a fair question, but it may be better than you think, right? At least it seems pretty easy to increase that cost later on if they want to (e.g., by securing their audit server or by having the clients check third-party servers). The technology is here now.


Genuinely curious: how do I know Whatsapp isn't keeping a copy of my private key somewhere?


The same way you'd know that Signal isn't keeping a copy of your private key somewhere - people can inspect the clients. Yes, the app could use some sort of really sneaky way of leaking the key such that nobody ever finds it. It would also be possible for the developers to distribute a very specific version of the client that does this only for a very specific region and device combo to minimize the likelihood that this is caught. But it is the sort of thing that would both be unimaginably bad for PR and challenging to keep under wraps.


Is WhatsApp client open source?


No. But plenty of people are good at reading decompiled code. I'm more experienced with Android reverse engineering, so somebody else would need to comment on iOS. But Dalvik bytecode isn't too far from what people are used to looking at, and the decompilers back up to Java are quite good. The native libs are the only real challenge, and even then there are tons of people out there who have no problem throwing Ghidra at a binary and understanding it effectively.

Publishing the source online doesn't fundamentally change the way that interested parties can inspect the client. And the reversing process has the nice property of actually guaranteeing that the thing you are looking at is what runs on the device.


Don't they obfuscate the code though? Decompiling APKs makes sense but they obfuscate it on purpose.

Also, WhatsApp's T&C forbid you from doing it anyway.


Obfuscation is annoying at best; it doesn't do much otherwise. Someone has to follow what happens to the private key through the control-flow graph anyway. (And if some function is named innocent_crypto_method2, it still needs to be "audited" anyway.)

If the key ends up used for signing and authenticating messages and for nothing else, then it's sort of safe to say it's not leaked. (Sure, there might be some other part of the code that reads it indirectly, but that part will likely not be named leak_priv_key() either :))

The only thing that would help is open source + trusted reproducible builds.
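The nice thing is that the check reproducible builds enable is dead simple: build the client yourself from the published source and compare it byte-for-byte with what the store ships. A sketch (the file paths are hypothetical):

    import hashlib

    def sha256_file(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical artifacts: one built locally from source, one downloaded.
    local = sha256_file("whatsapp-built-from-source.apk")
    store = sha256_file("whatsapp-from-play-store.apk")
    print("reproducible" if local == store else "builds differ")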


no


So... how can we access the private certificates on our devices? Without direct access to those, this offers not even a partial privacy safeguard.

Furthermore, if Facebook issued the private keys to us for every contact, then they hold the private and public key of every contact pair we've ever made, and can therefore decrypt every message we've ever sent. What proof do we have that this is not the case?


It's likely not end-to-end encryption if you need to trust somebody. There should be a zero-trust policy in place.


Props to the whatsapp team for improving the machinery!

One question I have is how the protocol handles forks, or in other words, what prevents the centralized service from presenting different views to different users?

To me the problem screams: you need a consensus protocol. Without one, you're just hoping that users might be able to detect malicious forks by gossiping.

If there’s a non-cryptocurrency usecase for blockchains, this is it.


> If there’s a non-cryptocurrency usecase for blockchains, this is it.

They just explain in their blog post how they solve it.

There is no non-cryptocurrency use-case for blockchains, period.


A distributed blockchain would actually prevent the malicious server issue. It would theoretically be better.

The practical problem is that you don't want to consume a ton of battery juice, so this just isn't feasible on mobile. (PoS wouldn't help, because you'd still have to keep up with two billion users doing key changes.)

Keybase ran into the same issue; there's a GitHub ticket somewhere where this was discussed. There was some blockchain-lite scheme they were hoping to implement, but that never happened; I'm not sure if it just fell off their radar or has issues of its own, and I don't remember what its properties would be or whether it would genuinely have distributed the trust for key exchanges. (Keybase was already publishing Merkle root hashes on a chain, but the client didn't verify anything, so it didn't help anyone. That's just like WhatsApp here publishing the keys on a second system that is still their own server, controlled by a single party.)


> A distributed blockchain would actually prevent the malicious server issue. It would theoretically be better.

No it would not: the third-party audit record is there to prevent the malicious server issue, while being infinitely more practical than blockchain.

Blockchain enables crypto-currencies. For the rest, we have more practical solutions already.


> the third-party audit record is there to prevent the malicious server issue

Third party? Isn't Facebook running the show, i.e., running their own servers that do the key-log recording and publishing? If so, I've misread the news (I looked into the details but could have missed something, of course).

I 100% agree with you that this is infinitely more practical (and achieves essentially the same level of assurance if the third party is in another jurisdiction), if that's indeed what they're doing.


In this case it may be Facebook running the show. But I was answering to the "this is a use-case for blockchain" point.

It is not a use-case for blockchain: it is (much) easier to get one or more third-party servers to host the repository.

Blockchain is a pretty cool technology, but let's face it: it does not solve anything (other than cryptocurrencies) that we don't already know how to solve in a more practical way.


It would totally be doable on mobile. See what Celo does, for example: it's a combination of a BFT consensus (so that as a mobile user you only have to verify a few signatures and a Merkle tree membership proof) and zero-knowledge proofs to quickly verify epoch transitions.
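In that model a light client only has to check that a supermajority of a known validator set signed the latest epoch header, something like this sketch (Ed25519 via the Python cryptography package; illustrative, not Celo's actual scheme):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def quorum_signed(header: bytes,
                      sigs: dict[str, bytes],
                      validators: dict[str, Ed25519PublicKey]) -> bool:
        # Accept the header iff more than 2/3 of the known validators signed it.
        valid = 0
        for vid, sig in sigs.items():
            pub = validators.get(vid)
            if pub is None:
                continue
            try:
                pub.verify(sig, header)   # raises InvalidSignature on a bad sig
                valid += 1
            except InvalidSignature:
                pass
        return 3 * valid > 2 * len(validators)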


It wasn't clear to me from reading the article what the specific usability improvement is here. So you still have to compare the ridiculously long number once? With each and every device that anyone you want to message with owns? What does it mean if you don't?

It used to be relatively clear that you had to verify everyone and maintain that verification. This seems potentially more complicated...


My understanding is that with this, your client can verify the key automatically. If you don't trust that verification, then you have to do the manual check.


Is the weakest link here iCloud backups?

Afaik, the backups are encrypted, but Apple holds the key and is the one doing the encryption.



A past thread that may be of interest: https://news.ycombinator.com/item?id=25685446


One thing I do not understand: how does web.whatsapp.com work even if my phone is turned off? The key must be somehow transported via Facebook servers to my PC.


Each device has a key, and the message is sent multiple times, encrypted with different keys.

> WhatsApp multi-device uses a client-fanout approach, where the WhatsApp client sending the message encrypts and transmits it N number of times to N number of different devices — those in the sender and receiver’s device lists.

https://engineering.fb.com/2021/07/14/security/whatsapp-mult...
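So the fanout itself is just a loop over device keys. A toy sketch using libsodium sealed boxes via PyNaCl (the real protocol runs Signal-style pairwise sessions, not sealed boxes):

    from nacl.public import PublicKey, SealedBox

    def fanout_encrypt(plaintext: bytes,
                       device_pubkeys: list[PublicKey]) -> list[bytes]:
        # N recipient devices -> N independent ciphertexts of the same message.
        return [SealedBox(pk).encrypt(plaintext) for pk in device_pubkeys]

Each device can only decrypt the copy addressed to its own key, which is why unlinking one device doesn't affect the others.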


WhatsApp Web generates a key which your phone signs, and then senders encrypt messages for all those keys.

Presumably this means that the sending device knows how many devices the message will be sent to.


If the private subkey is stored on their servers, then that means their servers are one of the "ends" in "end to end encryption", and they can read all your messages.

Like putting a screen door on a submarine.


It’s stored in a local session on your computer. You won’t be able to start a new session.

So no, not their servers.


Same way as how Signal desktop works with the phone being turned off.

As another commenter noted, they just generate another key, and when someone sends you a message they have to encrypt it twice (once for each key).


Signal exchanges keys in the QR code. It's not that you generate a random new key and people just start encrypting for that key without verification.

(I don't use WhatsApp so I don't know if they do the same.)


Is this a thing now? Never used to work for me, always got an error about needing my phone to be on.


Yes, this is part of their multi-device support, which was in beta at the end of 2021 and then rolled out to everyone some time later.

https://www.makeuseof.com/use-whatsapp-multiple-devices/


Read https://en.wikipedia.org/wiki/OMEMO. I'm guessing it's built on the same idea.


Good question. Have you confirmed it works when your phone is off? I haven't tried.


Just tested myself and you're right. Seems like very bad behavior. Someone with temporary access to your phone could set up WhatsApp Web to transfer the key to a PC they control, then remove WhatsApp Web so there are no longer any devices listed in "linked devices", and still maintain a copy of the key. Doesn't seem like doing this forces any change in device keys.

edit: Maybe I'm missing something in how the web device is provisioned (maybe it's treated like a group chat with multiple keys?), but I don't see how it could decrypt messages intended for my phone without just getting a copy of the key.


It’s not a copy of the key; it’s a separate key.


If it is a different key, wouldn't there be a "keys have changed" notification in my friend's chat window when I add a new WhatsApp Web login? If the keys haven't changed, WhatsApp Web must be capable of decrypting messages meant for my device, no?

edit: Is there documentation somewhere? It makes no sense to me that my friend is encrypting with the same public key (before, during, and after WhatsApp Web is provisioned), but somehow it is decrypted on a new device with a different key.


Your primary key vouches for that secondary key. Therefore, the “keys have changed” dialog doesn't pop up, because the new key's legitimacy is verified.
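In other words it's a short signature chain: contacts already trust the primary identity key, and that key signs the companion device's public key. A hedged sketch with Ed25519 (illustrative, not WhatsApp's actual wire format):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # The phone's long-term identity key (the one contacts already verified).
    primary = Ed25519PrivateKey.generate()

    # The companion (web) device generates its own keypair; the private half
    # never leaves the companion.
    companion = Ed25519PrivateKey.generate()
    companion_pub = companion.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    # The phone vouches for the companion by signing its public key.
    vouch = primary.sign(companion_pub)

    # A contact who trusts the primary key accepts the companion key without
    # a "keys have changed" warning:
    try:
        primary.public_key().verify(vouch, companion_pub)
        print("companion key accepted")
    except InvalidSignature:
        print("reject: companion key not vouched for")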


Ah, I see. Found it described on pp. 5-9 here too: https://www.whatsapp.com/security/WhatsApp-Security-Whitepap...


The privacy policy alone on WhatsApp is what is keeping everyone I know off of it. It's egregious. FB is 100% snooping on what is being discussed, and if they are now moving to full client-to-client encryption on ALL chats (not just the ones you explicitly choose), it's only because they're losing market share to alternatives that are more trustworthy.


Whatsapp has been fully e2e for years, maybe since 2016? It uses the Signal protocol.


> It uses the Signal protocol.

Is there any evidence for this? Like any independent audits like Signal and Telegram have for their protocols?


It was done by Signal themselves: https://signal.org/blog/whatsapp-complete/

The client is closed, so it can't be independently verified outside of sniffing traffic or decompilation, I assume.


As someone unfamiliar with their privacy policy, can you explain what’s egregious about it?


Remember when they tried to force everyone to accept the new terms or they would disable access after a certain date? Then they walked it back after people started using alternative platforms.

https://www.livemint.com/technology/apps/whatsapp-to-make-ne...


The privacy policy was never "walked back"; they went ahead with it anyway after the controversy (on HN and other platforms) died down.

Here's a newer article showing the metadata being sent to Facebook (presumably for advertising purposes); it includes your phone number, device ID, and location.

https://www.androidauthority.com/whatsapp-privacy-1189873/


> FB is 100% snooping on what is being discussed

Wrong. It has been end-to-end encrypted since 2016. The whole TOS thing was about metadata.

Still I like Signal better, but that's not a reason to be wrong about WhatsApp :-).


Does this in any way enhance protection of a citizen from government or other powerful actors?


But why would anyone use Whatsapp after the Onavo stuff?


I use WhatsApp to talk to friends in other nations who only use WhatsApp. I don't consider it to be a secure communications channel.


Care to elaborate? All I can find is that Meta bought a VPN company under that name.

Was it forced into Whatsapp somehow?


Billions of people using it creates a strong network effect...


Sure, network effects are hard to overcome; my point is that Zuckerberg bought it as a competition-killing move, so they're not likely to put any quality control into it.


I guess you are entitled to your own opinion (which I disagree with, though I recommend using Signal because the clients are open source).

IMO the answer to your original question is: lock-in effect. People are generally too lazy to move to an alternative, and on top of that, if they do decide to move to one, they don't necessarily know which one to choose. And many privacy advocates don't help by being (again, IMO) too extreme: "Do NOT use Signal, you MUST use <your complicated ultra-secure alternative here>".



