WhatsApp's Signal Protocol integration is now complete (whispersystems.org)
1055 points by piquadrat on April 5, 2016 | 376 comments



This is really excellent. A few thoughts:

1) They seem to have replaced TLS/SSL between client and server with "Noise Pipes". Based on a couple of minutes of Googling, this seems to be a brand-new one-man protocol from Trevor Perrin (the same guy who did Axolotl, on which Signal is based). At least, I'd never heard of it. I wonder if this is the first inkling of a post-TLS future?

http://noiseprotocol.org/noise.html

2) It's a shame to see key words be killed off by internationalisation concerns. 12 words seems so much more friendly, at least to English speakers, than a 50 digit number. In practice I doubt any non-trivial number of people will ever compare codes by reading out such a number. I hope further research here can develop better replacements for encoding short binary strings in i18n friendly ways (perhaps with icons instead of specific words? if you don't speak a common language with your chat partner then the app is useless anyway).

3) What's the next step? My feeling is that the next step is securing the build and distribution pipeline. WhatsApp could partner with security firms around the world, like Kaspersky Lab in Moscow, perhaps one in Germany and another in Iran, to make it harder for the software to be forcibly backdoored by a single decision of a single government representative. This would require splitting the RSA signing keys used by the app stores. I have some code in my inbox that claims it can do this (it's written by some academics and I obtained it after a bit of a runaround) but I never found the time to play with it.
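
As a rough illustration of the splitting idea (this is plain Shamir secret sharing over a prime field, not threshold RSA, and not the academic code I mentioned), something like the following lets any k of n parties jointly reconstruct a secret while fewer than k learn nothing:

    import secrets

    PRIME = 2**127 - 1  # illustrative field size, big enough for a 16-byte secret

    def split(secret, n, k):
        # Random polynomial of degree k-1 with the secret as the constant term.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return total

    shares = split(123456789, n=5, k=3)
    assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice

A real deployment would use a proper threshold signature scheme so the signing key is never reassembled in one place.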

Of course, getting a bunch of security firms to sign off on every update, no matter how trivial that update is, might prove politically difficult inside Facebook. If mobile platforms supported in-app sandboxing better then the app could slowly be refactored to be more like Chrome, where the base layer doesn't trust the upper layers. Those upper layers wouldn't have access to key material and could then be updated more freely than the higher privileged components.


> They seem to have replaced TLS/SSL between client and server with "Noise Pipes".

WhatsApp was already using a custom protocol instead of TLS. We worked with them to transition over to Noise Pipes, which has some advantages over what they were doing before. Also, we've renamed Axolotl to Signal Protocol: https://whispersystems.org/blog/signal-inside-and-out/


It is killing me that you didn't rename Signal to Axolotl.


If user growth and mainstream adoption are your goals then calling the product 'Signal' is the better choice by leaps and bounds.

Axolotl is cool in a techy/underground sort of way (if you're into that) but completely misses the boat on being 1) Easy to spell, 2) easy to pronounce, 3) easy to understand.

Mainstream users would just say "wtf" and move on to whatsapp/telegram/facebook messenger.


Why?

"Axolotl" at least has seriously pronunciation issues so I am glad it is not used "user-side".


For my part: because "Axolotl" is one of the most widely name-dropped terms in hipster cryptography, and because it's been adopted by other projects, and because it's distinctive, and because they basically own the term. In a stroke, everyone doing secure key ratchets would have been using their product.

It's also just a cool name.


I always liked "Axolotol" because axolotols are a type of salamander with the amazing ability to regenerate parts of their bodies (including their brains!), and the Axolotol protocol, like OTR, is "self-healing", meaning it's capable of recovering from a compromised session key (https://whispersystems.org/blog/advanced-ratcheting/).


And you're spelling it wrong which pretty much validates the renaming.


Axolotls are also cool because they're not really a species in their own right --- they're actually just the juvenile form of a salamander. But they can breed while in their juvenile form! It's as if tadpoles could lay eggs without having to turn into frogs first.

In fact, axolotls only metamorphose into salamanders if triggered by the external environment. If you don't stimulate them in the right way, they stay axolotls. For a long time they were thought of as two different types of creature completely.

There's a story of one of the very early researchers looking at his vivarium one day and discovering that it was now full of salamanders rather than the axolotls he was expecting. I would love to have heard the conversation...


Basically like a Pokemon.


I thought it was a misspelling of axlotl from dune, so I had to google it. Frank Herbert must have taken his inspiration from the salamander as well, given what the axlotl tanks were for.


Thanks and I agree about that!


Because words coming from Nahuatl are cool! (coyotl, mesquitl, tomatl, ahuacatl, etc.) See more from https://en.wikipedia.org/wiki/List_of_English_words_from_ind...

They’re distinctively spelled, don’t collide with existing search terms, often have available domains, etc. Most importantly, they anticipated the web 2.0 trend of ending words with two consonants in a row. ;)

Finally, just look at this guy: https://upload.wikimedia.org/wikipedia/commons/f/f6/AxolotlB...


Linguistic tangent: the common suffix on those words is interesting. Is it required for Nahuatl nouns, like the Latin -t suffix? I notice that not all the other example words on the linked page-section have it, but a large majority do.


t͡ɬ is a phoneme in Nahuatl [0] (Voiceless alveolar lateral affricate [1] if you care to hear it), which explains why many Nahuatl words have sequences of "tl". I don't know if there's a specific reason why these words end with that sound (it doesn't have to be a suffix).

Edit: yes, it's a suffix (for Classical Nahuatl): "Non-possessed nouns take a suffix called the absolutive. This suffix takes the form -tl after vowels (ā-tl, "water") and -tli after consonants [...]" ([2]).

0: https://en.wikipedia.org/wiki/Nahuatl#Phonology

1: https://en.wikipedia.org/wiki/Voiceless_alveolar_lateral_aff...

2: https://en.wikipedia.org/wiki/Classical_Nahuatl_grammar


> Is it required for Nahuatl nouns, like the Latin -t suffix?

I don't understand what you're thinking of here? Here are some Latin nouns, all in nominative case:

    nauta   (first declension)
    puer    (second)
    gladius (second)
    malum   (second)
    lex     (third)
    limen   (third)
    tempus  (third)
    virtus  (third)
    civitas (third)
    cornu   (fourth)
    manus   (fourth)
    res     (fifth)
I gave the nominative case, but in fact none of those nouns has any form ending in -t. The only -t suffix I know of in Latin is for third-person singular verbs.


Ah, sorry, yes, I meant the verbs. Originally I was going to make a comparison to nouns—nouns in Lojban, which have a -j suffix.


I'm confused and am not sure if you're thinking of something else or if I'm quite misinformed about Lojban. I don't think lojban has suffixes at all (prefixes, on the other hand). For -j specifically, the only words in lojban which are permitted to end in j are names, but there it's not obligatory. Names are required to end in a consonant, but there are at least a few that are more common than j, I think.


FWIW `-j` is the plural suffix for all nouns and adjectives in Esperanto (in the accusative the plural form takes an additional `-n`), but I don't know whether the OP had that in mind or something completely different.


> Because words coming from Nahuatl are cool! (coyotl, mesquitl, tomatl, ahuacatl, etc.)

> Most importantly, they anticipated the web 2.0 trend of ending words with two consonants in a row. ;)

But those words (coyote, mesquite, tomato, and avocado, unless I seriously miss my guess) all end in a vowel.


> But those words (coyote, mesquite, tomato, and avocado, unless I seriously miss my guess) all end in a vowel.

Because they were adopted by Spanish speakers. English products using these names could either adopt the Spanish form, or as the GP suggests follow the 2.0 trend and recover the original form, publishing their webpage in the East Timor .tl domain.


But it had a significantly better signal-to-noise (SCNR) ratio when looking for specifications, code and docs. The original "axolotl" term comes from a completely different domain, so it was easy to distinguish and filter for.

"Signal" in IT usually means UNIX inter-process communication.


Kids would love that name because they know it as that cool looking salamander with gills (or whatever they are).


Confirming this: my daughter was thrilled that there was something in my field actually called "axolotl".


This thread is making me want to reread Dune.


For me the biggest thing is - how do we have any idea that WhatsApp is actually using this wonderful crypto tech under the hood? For example my WhatsApp is claiming my messages to my friend are encrypted, whereas his phone is claiming they are not. This is not exactly conducive to trust!


Mine did that for a while, then it resolved itself. Maybe it takes some time for all instances in a group to agree they are encrypted; I also think at least one of my groups marked itself encrypted after everyone in it had posted at least one comment since yesterday.


What's the incentive of not using TLS?


It's easy to shoot yourself in the foot with TLS (see: OpenSSL). Also, TLS has roots in a time where we knew much less in terms of crypto; as time went on and flaws were discovered, SSL/TLS was patched all around, meaning it has become much harder to implement correctly.

Noise starts from a clean slate with modern cryptographic knowledge and modern primitives. Much easier to understand and replicate, much harder to shoot yourself in the foot with.

TLS brings modularity and the ability to evolve, much needed in a protocol operating at the scale of HTTP. In WhatsApp's case, WhatsApp controls both the server and the client; it is much easier to transition between versions because all the pieces are under its control. When you no longer need what TLS brings, it makes sense to discard it.

As another example: Tarsnap (https://www.tarsnap.com/) uses spiped (https://www.tarsnap.com/spiped.html), a very simple yet powerful mechanism to build an encrypted channel. Its protocol and proof fit in ~100 lines (https://github.com/Tarsnap/spiped/blob/master/README). When you don't need all the jazz provided by TLS (and when you're lucky enough to be able to pre-share keys, which helps a lot) then a simple protocol is good.
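
As a toy sketch of the pre-shared-key point (this is not spiped's actual protocol, which layers its own handshake and ephemeral DH on top of the shared key), an authenticated channel over a PSK can be very small:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    # The key is shared out of band, e.g. copied to both hosts like spiped's keyfile.
    psk = ChaCha20Poly1305.generate_key()

    def seal(key, plaintext):
        nonce = os.urandom(12)                     # fresh nonce per message
        return nonce + ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)

    def open_box(key, blob):
        return ChaCha20Poly1305(key).decrypt(blob[:12], blob[12:], None)

    assert open_box(psk, seal(psk, b"hello over the pipe")) == b"hello over the pipe"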


It looks like Noise leaves an implementer with more than enough rope to hang themselves with - for example, it allows unauthenticated Diffie-Hellman, it appears to permit user-selected handshake sequences, has 16(!) different official handshake sequences with subtly different properties, each of which allow payloads to be attached to any message in the handshake process that in turn have different levels of reduced security, etc.


You're describing Noise. WhatsApp uses Noise Pipes, which is one of many instances of the Noise framework.

You're right that it's not a good idea for generalists to pick up Noise and go to town with it. Noise isn't misuse-proof.


> You're right that it's not a good idea for generalists to pick up Noise and go to town with it. Noise isn't misuse-proof.

Nor is SSL. Couldn't an opinionated SSL library (supporting exactly one protocol version, one cipher only, etc) provide the same simplicity of use while remaining interoperable and reusing an already-validated protocol?


Simplicity of use isn't the only benefit they get from Noise Pipes; they also get increased security, in a couple of important ways.

It's not the choice I would have made, but I am not as good as Moxie or Trevor.


FYI there is an RFC (currently a draft) for a suite of parameters considered secure for some time, i.e. some kind of LTS: https://tools.ietf.org/html/draft-gutmann-tls-lts-02

The idea is more or less what you want, hardcode some known-good configuration for servers that can't or won't be upgraded every other month.


That appears to not fully interoperate with regular TLS implementations, and does not require the use of the most popular secure ciphersuite ECDHE_RSA_AESGCM. If they need to make changes to make it secure, define an extension and say this profile makes the extension mandatory (like MAC-then-encrypt), but don't just change the protocol.


This is less a practical draft and more a political statement about the author's dislike for TLS 1.3. In particular, it makes extensive modifications to the TLS protocol itself (it doesn't merely propose a limited set of safe parameters).


> It's easy to shoot yourself in the foot with TLS

I would argue it's easier to shoot yourself in the face trying to re-implement/re-design something like TLS.


The mind behind Noise (Trevor Perrin) is one of the minds behind Axolotl (actually Noise can be seen as a generalization of the Axolotl's cross-signing of DH keys, see this post: https://whispersystems.org/blog/simplifying-otr-deniability/) so I would likely be comfortable with it. Although it is completely true that it's been far less used and studied than TLS.


The OP's point is that it is like TLS, except with a novelty effect. Personally, I prefer that everyone standardize than use a hundred different crypto protocols, none of which receives adequate scrutiny. TLS1.3, with encrypt-then-MAC and zero-RTT setup, can't get here fast enough.


About 6 round trips, which gives you about a 30s latency on EDGE networks. Doing a key exchange once, retaining it and then doing future communications with just 1 round trip is significantly faster.


Isn't a full TLS handshake 6 messages (thus 3 round trips)? Also, with false start and resumption, TLS can typically achieve 1 round trip, right? (I'm also confused by 30s latency for 6 RTT, but maybe I'm too focused on the US market, where EDGE latency would be more like 500ms each way.)

Maybe not all of WhatsApp's platforms support the latest TLS improvements though, thus it's easier to roll their own?


I'm curious about this too. Short of some rigid technical requirement, it's hard to imagine why you'd want to reinvent the wheel or forgo a protocol which is so widely available and studied. (I've not read the Noise whitepaper though; I'm sure it must have some benefits compared to TLS.)


Thanks.

As an aside, what's with Nietzsche? He seems to crop up in your screenshots quite frequently.


"The individual has always had to struggle to keep from being overwhelmed by the tribe. If you try it, you will be lonely often, and sometimes frightened. But no price is too high to pay for the privilege of owning yourself."

(just a guess)


Is there any documentation on the old custom protocol?


What exactly is wrong with TLS? Why the switch?


As a protocol, it's quite old, complex, and has a huge surface area for attacks. E.g., the Wikipedia page (https://en.wikipedia.org/wiki/Transport_Layer_Security#Attac...) lists over a dozen major attacks against TLS/SSL. Its only advantage is ubiquity; it's the standard in every browser and web server.


Moxie, what do you say to this?

https://twitter.com/JZdziarski/status/717399098563891200

Is that true? Are the messages decrypted server-side for iOS users?


This post illustrates how silly it is to link to a tweet, and not just paste the text of the message directly. This tweet has been deleted.

And for anyone else that hates visiting these social networks, Moxie's reply was, "I'm sure, but it'd be to everyone's benefit for you to verify."



> What's the next step?

Imho one thing that the Signal project suffers from -- and that a lot of open-source projects suffer from -- is poor documentation. They need to document their protocol better, to make it easier for third parties to integrate with their system.

Signal still lacks a working desktop client (no, the one that you can use if you're running Chrome doesn't count), and I'm sure tons of people would be eager to do stuff like provide integration for Pidgin if only there was better documentation.


+1 for documentation. The XMPP community is trying to get "OMEMO"[0,1] rolled out, which is an XMPP adaptation of Axolotl, but fails due to missing specs[2].

[0] https://conversations.im/omemo/

[1] https://conversations.im/xeps/multi-end.html

[2] http://mail.jabber.org/pipermail/standards/2015-November/030...


I agree about the documentation and a number of other things, but why doesn't the Chromium extension count as a desktop client? You don't need to use Chromium for anything else if you don't want.


I suppose the main reason is that it links to your existing Signal installation on iOS/Android, and does not work standalone.


I keep wondering if package managers (be it Google Play, apt or dnf) are in need of a solution like Certificate Transparency. Like Certificate Transparency, it would not necessarily prevent a backdoor(/certificate) from being pushed to selected users, but it would guarantee detectability, at least after the fact. If the package as a whole is included in the log, it would also allow BinDiff-style reverse engineering of any backdoor. Application developers could scan the log for unauthorized builds of their software (i.e. key compromises), and I imagine there are some things you can do with gossip protocols for end-users as well (i.e.: how popular is this build? Am I the only one running it?).

This would solve part of the trust issue both for proprietary and open source software (I mean, who's actually using or verifying reproducible builds?). It would also be a huge problem for intelligence agencies or other adversaries who want to do this undetected.
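
For anyone unfamiliar with how such logs give you detectability: a minimal sketch of the underlying Merkle-tree idea (simplified padding rule, not RFC 6962's exact construction) looks like this. The log operator publishes the root; anyone holding a leaf plus an inclusion path can verify a given package build really is in the log, and developers can scan the leaves for builds they never signed:

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        # Hash the leaves, then pair-and-hash upward; duplicate the last node on odd levels.
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    # Hypothetical log entries: one per signed package build.
    entries = [b"whatsapp-2.16.1 sha256=...", b"signal-3.30.1 sha256=..."]
    print(merkle_root(entries).hex())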


It would be interesting but I suspect ultimately futile.

Certificate transparency works because detected problems are actionable: the bad certificates can be revoked and, if necessary, so can the entire CA that was compromised.

There is no way to revoke an app store, as there's only two of them that matter. So if the transparency system revealed subversion .... then what? There'd be a big uproar for a few days, but nobody would know who the target was, or what the bad app was doing, and there'd be no action that could be taken.


I think the actual implications wouldn't be too different in production.

Those are the main scenarios where Binary Transparency might be of use:

1. An application developer's private key gets compromised (either through a breach or an inside job). Binary Transparency would allow the developer to monitor the log server for any unauthorized binaries and, once detected, investigate the source of the attack, rotate keys and possibly blacklist the affected binary (not sure if a mechanism like that exists at the moment, but it would make sense with Binary Transparency).

2. The application developer is coerced into signing a binary including a backdoor. Binary Transparency would guarantee that there's a public record of a modified binary being released. This alone would probably make any adversary think twice about doing this, especially if they want to stay unnoticed. In addition to that, gossip protocols and dedicated auditors could allow users to detect odd releases even if the developers were gagged. This might very well result in the app being effectively revoked because users stop trusting it.

Both scenarios would at the very least allow the backdoor to be reverse-engineered.

Ultimately, I'm afraid you're probably right and the limitations and complexity of this approach probably won't make it viable for a long time. Maybe the thinking will change if we see some high-profile cases where apps are backdoored - that's what got the ball rolling for Certificate Transparency, after all.


Yes, binary transparency for package managers is badly needed, especially because this vulnerability has been widely publicized for a while now with no real fix.


> 12 words seems so much more friendly, at least to English speakers

I have a feeling that English speakers are the minority of WhatsApp users.

> if you don't speak a common language with your chat partner then the app is useless anyway

They do speak a common language, it's usually just not English.


And the language of the clients might not be the same. I might write in English to a German friend, but my WhatsApp is localized to Danish and his to German.

How would whatsapp know what language to present the words in?


It could have a language picker. It's not like you're stuck with just one language; the underlying fingerprint could be converted into any language that has an appropriate dictionary constructed for it, so all you would have to do is ensure both parties have picked the same language and you can then compare the fingerprint easily.
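
A minimal sketch of what that could look like (the word lists and key format here are made up; a real design would need carefully curated 256-entry dictionaries per language): the same underlying fingerprint bytes index into whichever dictionary both parties picked.

    import hashlib

    # Hypothetical word lists; only the first few entries are spelled out.
    WORDLISTS = {
        "en": ["bread", "cloud", "tiller", "house"] + ["en%03d" % i for i in range(252)],
        "de": ["Brot", "Wolke", "Pinne", "Haus"] + ["de%03d" % i for i in range(252)],
    }

    def fingerprint_words(identity_key, lang, n_words=12):
        # Same bytes in, different surface form out -- both sides just need the same lang.
        digest = hashlib.sha256(identity_key).digest()
        words = WORDLISTS[lang]
        return [words[b] for b in digest[:n_words]]

    print(fingerprint_words(b"example-identity-key", "en"))
    print(fingerprint_words(b"example-identity-key", "de"))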


That could get confusing.

"Ok, your pattern is Bread Cloud Tiller..." "Scheiße! We're being MITMed! My pattern is Brot Wolke Pinne..."


If I were to design this system, I'd make the language picker very prominent and have a very clear explanation saying to make sure both parties have the same language selected before comparing the words.

Alternatively, the party whose code is being confirmed would just select the language, and the other party's UI could automatically update to show the code in the chosen language. The first party would still have to pick a language both people understand but (as long as both parties have a working internet connection) there wouldn't be a sync issue.


I would just use numbers.


I might be misunderstanding the situation, but isn't emoji universal? Would it be ill-fitted here?


It could be a selectable option.


>> if you don't speak a common language with your chat partner then the app is useless anyway

> They do speak a common language, it's usually just not English.

I think you misunderstood what the original poster was getting at. Here's the full parenthetical: "(perhaps with icons instead of specific words? if you don't speak a common language with your chat partner then the app is useless anyway)". So replace the words that you're supposed to read with little pictures. Then it doesn't matter what language the people speak as long as they're speaking the same language, since they'll be "reading" the images. You'll see a little house icon, and read "house" or "casa" or "jia" or whatever.


I'm hoping they deploy some form of key transparency, e.g. CONIKS or CONAME, the version Yahoo and Google are working on for e2e that already has a production-ish grade server and client[0]. Either that or give the community a practical reason why it can't be used as is, and then we can start working on alternatives.

[0] https://github.com/yahoo/conam


I think he already made a strong point why CONIKS style transparency isn't all that great for end-users. See this mailing list thread that includes the author of CONIKS talking about it with moxie before CONIKS was published:

https://moderncrypto.org/mail-archive/messaging/2014/000226....

https://moderncrypto.org/mail-archive/messaging/2014/000234....

Basically, users change their identity key in practice so often that a 3rd party can't be the source of truth on whether a MITM has happened. Only the user can audit that, and most users are not equipped to do so. So, towards the goal of universal e2e messaging, it's not clear that key transparency is a real win over the simplicity of trust-on-first-use.


So I think CONAME solves 90% of this. It's set up in such a way that third party verifiers only have to check that the update counter for each key is monotonic. Users can check just that, for the current epoch, the write counter is correct and their key is there. So only a user themselves should see a warning. (And warnings can be disabled by default ... as long as the attacker doesn't know whether the user has them on or not, it isn't an issue.)

In contrast, in CONIKS, a user had to check every single epoch. Which was impractical and meant you relied on third party auditors.

This doesn't deal with the restore-from-backup issue. I think given the FBI/Apple phone issue though, maybe it's worth reevaluating the risk vs. reward here. Especially if user-visible warnings are opt-in.


Oh, that's really interesting. I've never read CONAME before, so I'll check it out. It sounds like it's definitely worth evaluating more.

I wonder if things like multi-device support or deriving a private key from a passphrase (mini-lock style) will eventually make user identity far less likely to change. Then it might be safer to complain louder when you're in the situation that someone's identity has changed.


> 2) It's a shame to see key words be killed off by internationalisation concerns. 12 words seems so much more friendly, at least to English speakers, than a 50 digit number. In practice I doubt any non-trivial numbers of people will ever compare codes by reading out such a number. I hope further research here can develop better replacements for encoding short binary strings in i18n friendly ways (perhaps with icons instead of specific words? if you don't speak a common language with your chat partner then the app is useless anyway).

There's a QR code representation as well that can be scanned to verify.


A wild idea: a string of distinctive emojis (distinctive: I mean, not allowing emojis that are too close, like too similar face expressions where only smaller details differ)

They're unified, they're mostly language-agnostic, and there is a shitload of them, so the representation would be much more compact than a hexadecimal or base64-encoded string.
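
The compactness claim is easy to check: the bigger the symbol alphabet, the fewer symbols you have to read out for the same fingerprint strength (the 200-bit length below is purely illustrative):

    import math

    fingerprint_bits = 200  # illustrative fingerprint length
    for name, alphabet_size in [("decimal digits", 10), ("hex digits", 16),
                                ("wordlist words", 2048), ("distinct emoji", 1024)]:
        symbols = math.ceil(fingerprint_bits / math.log2(alphabet_size))
        print("%s: %d symbols" % (name, symbols))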


Cryptography software built with emojis would probably the final straw to me burning down my career, destroying every computing device I own, buying a shrimp boat, and living out my days at sea with a Lieutenant Dan stand-in by my side, wistfully remembering the good times when the computing world made sense as Bud Light finds its way down my jaded, cynical throat.

Put it on a .io domain for bonus points.


Hey, hey, if that nonsense appeared in my XMPP client I'd really feel the urge to do the same. But aren't we talking about modern popular messaging apps here?

If those only had to be international, I'd seriously propose key fingerprints based on animated meme-filled "gifs". Because that's what the common audience understands (or at least that's my impression). Not some weird-looking strings of meaningless digits and letters.


I feel like there are a lot of security issues with QR code representations, namely that the information is even less transparent than hexadecimal fingerprints, and therefore it's not really a fail-safe against MITM attacks.


I'm not sure about this. If they can change what QR code you are seeing they can change the words as well.


My hope is that the next step for WhatsApp is to add editing of arbitrary received messages in the client app. While the protocol provides plausible deniability of messages in the cryptographic sense, it's hard for someone who sent an incriminating message to argue that their layperson recipient modified it, when the app doesn't make that easy to do.


Reproducible builds help with 3) as well. There was just an announcement re that a few days ago: https://whispersystems.org/blog/reproducible-android/


Noise is really interesting. I'd hoped from the name that it might incorporate some of the ideas behind Dust[1], which attempts to hide even that there is a Dust session going on, and what the public and private keys of the recipients are. No such luck — still, it looks very interesting.

[1] http://freehaven.net/anonbib/cache/wileydust.pdf


What's interesting about Noise?


It's an expert-authored modern framework for designing fast cryptographic transport protocols that have (depending on the options you select) forward secrecy, peer authentication, and resistance to key compromise.

The notion of a cryptographic protocol framework, rather than a complicated negotiation scheme for a kitchen-sink protocol, is also somewhat novel.
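
To make the "pattern" idea concrete, here's a stripped-down sketch of the shape of Noise's simplest handshake (the NN pattern: ephemeral keys only, no authentication). This is not the Noise protocol itself; it omits the handshake hash, prologue, payload encryption during the handshake, and so on:

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    init_e = X25519PrivateKey.generate()   # -> e : initiator sends an ephemeral key
    resp_e = X25519PrivateKey.generate()   # <- e, ee : responder replies with its ephemeral

    # Both sides compute the same DH output ("ee" in Noise notation).
    shared_i = init_e.exchange(resp_e.public_key())
    shared_r = resp_e.exchange(init_e.public_key())
    assert shared_i == shared_r

    # Split the DH output into one transport key per direction.
    okm = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
               info=b"illustrative-nn").derive(shared_i)
    send_key, recv_key = okm[:32], okm[32:]

Patterns that mix in static keys (such as XX and IK, which as I understand it are what Noise Pipes combines) add authentication by DH-ing the parties' long-term keys into the same derivation.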


> It's a shame to see key words be killed off by internationalisation concerns.

I think it's actually a solid decision from WhatsApp since the majority of their users are from non English speaking countries[0].

[0]: http://www.statista.com/statistics/291540/mobile-internet-us...


The question isn't really what country they're from though. How many of those users can't speak English?


Quite a lot.

One example: WhatsApp is one of the primary messaging services used in Brazil, with something like half the population using it. The vast majority of which don't speak any languages other than Portuguese.

When the courts in Brazil shut down WhatsApp for 48 hours, there was a huge outcry because it is THE primary means of communication for people, especially to talk to family who have gone abroad.


Why on Earth would someone from a liberal democracy want to partner with a security firm based in Iran?


Perhaps because said liberal democracies are known to spy on that someone more than Iran, North Korea and China combined.


That's an absolutely ridiculous and ignorant statement.


Downvote me all you want, but it doesn't make you right. It just goes to show your own lack of knowledge regarding the intelligence efforts in support of social control in totalitarian states.


To keep said liberal democracy honest?


Iran, a brutal theocratic dictatorship is going to keep a liberal democracy honest? You've got to be kidding me...


> It's a shame to see key words be killed off by internationalisation concerns [....] I hope further research here can develop better replacements for encoding short binary strings in i18n friendly ways

I rather like the urbit way of encoding numbers. I can't remember the exact details but it's something like: There are 256 unique three letter words (all nonsense, but deliberately picked to be possible to pronounce). Each word is mapped to a byte and then a blob of binary data becomes a nonsense word.

So for example "tasfyn-partyv" instead of "1242B24A".
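
A toy version of that idea (the syllable tables here are made up; Urbit's real encoding uses fixed prefix/suffix word lists and some extra scrambling this skips):

    CONSONANTS = "bdfghlmnprstvz"
    VOWELS = "aeiouy"

    # 14 * 6 * 14 = 1176 pronounceable CVC syllables; keep the first 256, one per byte value.
    SYLLABLES = [a + b + c for a in CONSONANTS for b in VOWELS for c in CONSONANTS][:256]

    def to_words(data):
        return "-".join(SYLLABLES[byte] for byte in data)

    print(to_words(bytes.fromhex("1242B24A")))   # four bytes -> four nonsense syllables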


This idea isn't unique to Urbit; it dates back to at least S/Key.


This sounds a bit like babble print, which is a more complex type of encoding mechanism.[0]

The first time I encountered that was in SILC, where it was used to make the key fingerprint more or less pronounceable. It's interesting that the encoding scheme never really got wider attention, although it is available in both OpenSSH and OpenSSL.

Similar ideas come and go, and get rediscovered - so maybe now is the time for wider acceptance.

A bit of googling provided me with a nice starting point for implementations, so babble print has certainly been recognised in some circles.[1]

0: http://bohwaz.net/archives/web/Bubble_Babble.html

1: https://github.com/eur0pa/bubblepy


> deliberately picked to be possible to pronounce

That only works in English, again.


It seems like there could be a lowest-common-denominator set of phonemes that such a system could be built on, with translations into symbol groups for different languages. As long as those symbol groups are relatable by two people who speak the same language, that might be sufficient?


Or just offer different sets of phonemes for different languages (and allow easy switching). You don't need any commonality across languages.

Of course that's not helpful when two people who don't speak a common language are trying to establish communication, but it probably isn't their biggest problem.


>> "lowest-common-denominator set of phonemes"

They already spent a bunch of time and effort finding these phonemes to build Esperanto, right?


No, see this criticism of Esperanto phonemes: http://www.xibalba.demon.co.uk/jbr/ranto/#b

Its choices are large, irregular, unclearly defined and basically Eastern Polish.


Esperanto was designed to be partially understandable by people across Europe by lifting words from various languages liberally.

Lojban is designed to be well-defined on phonemes, but that still only works when knowing its (rather simple) pronunciation rules.


That would probably still be a very eurocentric set of phonemes, so probably would not really be suitable for the modern multilingual world, I would think.


Still western-focused, but Oren Tirosh's mnemonic encoding [1] project is pretty close as well.

[1] http://web.archive.org/web/20090918202746/http://tothink.com...


Similarly, proquint.


Could someone explain the benefit of the noise protocol? Some of us are not well-versed in the problems or attack vectors of modern crypto.


I'd be interested as well. I'm building a chat system myself and I'm currently using AES (it's web based and zero knowledge, so I need a JavaScript implementation).


If you don't understand the difference between a block cipher and a secure transport you probably shouldn't be writing cryptographic software.


> What's the next step?

Is there any work on encrypting group chat sessions?


Please read the articles. It already says that those are encrypted, as well as voice calls. Apparently once everyone is upgraded, nothing will be unencrypted. Their white paper also describes the protocols used in quite a lot of detail.


I was asking because I have a notification today in one of my chats saying that e2ee was enabled. I didn't see the same notification in any group chats. Outside of that notification, it's not clear to me when things are encrypted.

Maybe the next step is a more obvious indicator.


I just checked this with a group chat I'm in, and it said that it was not yet enabled because one person has not yet updated.


After a few weeks, non-updating users will get a stern warning that Whatsapp will stop working until they update. Whatsapp does this with every version. They expire every few months.


Doesn't everyone in the chat have to have the new version of the app before it's all e2e?


I guess? Hence the indicator. It would be nice to know if someone switched phones and e2ee turned off, for example.


> Once a client recognizes a contact as being fully e2e capable, it will not permit transmitting plaintext to that contact

They do this to prevent degradation attacks.
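
In client pseudocode the rule is basically a one-way latch (a hypothetical sketch, not WhatsApp's actual implementation): once a contact has ever been seen as e2e-capable, plaintext sends to them are refused, so an attacker can't force a downgrade by stripping the capability advertisement later.

    e2e_capable = {}   # contact_id -> True, persisted locally (illustrative in-memory dict)

    def on_capability_advertised(contact_id, supports_e2e):
        if supports_e2e:
            e2e_capable[contact_id] = True   # one-way latch: a later "no" is ignored

    def choose_transport(contact_id, have_session):
        if e2e_capable.get(contact_id):
            if not have_session:
                raise RuntimeError("refusing plaintext fallback for an e2e-capable contact")
            return "signal-session"
        return "legacy-plaintext"

    on_capability_advertised("alice", True)
    on_capability_advertised("alice", False)              # attacker strips the capability...
    print(choose_transport("alice", have_session=True))   # ...but e2e is still required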


Once your number has e2e support, you can't switch to a phone without e2e. Notifications on key changes are switched off by default, but I hope they still warn you in the chat history. We'll see.


It doesn't seem to say anything about the Web client, though.


no proof without open source


The open-source alternative is the Signal app made by Whisper Systems, the same people helping with WhatsApp from TFA, available for Android and iOS: https://whispersystems.org/

Don't know what kind of funding they've been having, but for an open-source app, it's pretty polished.


Matrix uses the Signal protocol for encryption as well, if you want a fully free implementation of the same tech.

https://matrix.org/git/olm/


thanks! could i use it to connect to WhatsApp?


WhatsApp uses a proprietary (fork of XMPP) messaging protocol and does not interoperate with anyone else (intentionally), like Hangouts and Skype, and unlike Matrix which is federated, and has gateways for IRC and XMPP already available.


You could inspect the packets yourself to see that they are encrypted.


skype is encrypted too, and backdoored. don't forget whatsapp is owned by facebook.


how do i know they aren't sending the chat log in a side channel, encrypted?


Using an encrypted but potentially compromised channel is worse than a plain-text one only when you assume the encryption is not in fact compromised, which you shouldn't do.

So I fail to see the problem. It still likely protects you from nosy neighbors or nasty non tech savvy competitors, even if it doesn't protect you from state level actors or from Facebook itself.

The above, in addition to the fact that open source is neither required for auditing, nor a guarantee that proper auditing occurs (see OpenSSL having vulnerabilities for years before anybody revealed them to the public).


What the article fails to mention:

1) I would assume Facebook still gets unencrypted access to my address book for use with their shadow profiles

2) We have zero control over what key the client encrypts the messages for. Is it only the other peer's phone? Or is it for the peer's phone plus Facebook for analysis of the messages?

Especially 2) is of some concern to me (against 1 I can't protect myself anyway, because if people add me to their address books I'm screwed regardless). From that perspective, I'm still inclined to trust Apple's iMessage a bit more, especially after recent events.

The only safe solution right now is to compile your own signal client and use that, of course at the cost of reach because nobody else is on Signal.

WhatsApp might be a good compromise at least for only semi-important messages: The probability that any of your contacts has WhatsApp is much, much higher than the probability of them having Signal running. On the other hand, whatever you're sending over WhatsApp is likely going to be used by FB (and then possibly handed out to governments and/or stolen by attackers).


I think this is a reasonable analysis. I would refine it this way (examples are only for illustrative purposes):

Tier 1 secure messengers: all possible tradeoffs in favor of security made; use for worst-case adversaries:

- Signal/TextSecure
- Pond
- PGP†
- OTR

Tier 2 secure messengers: serious secure messaging protocols that make some tradeoffs in favor of adoption and usability; use for normal messages of low sensitivity:

- WhatsApp
- iMessage

Tier 3 secure messengers: secure messaging protocols with flaws, limited cryptanalysis, only content-controlled browser clients, &c

- Redacted to avoid flamewars.

Everything else: insecure; use only to bootstrap conversations to a Tier 2 protocol (knowing that if you're a state-level target, you're exposed even after you "upgrade")

- Google Talk
- Facebook Messenger
- AIM

I think WhatsApp would like to be a tier 1 secure messenger (they'd be the first mainstream tier 1 secure messenger!) and they have a shot at being that, but they're probably some years away from it.

(I know PGP and OTR are cryptographically limited compared to Signal Protocol, but from an OPSEC perspective they're still a tier above iMessage.)


I would place Signal in the same tier as WhatsApp here (Tier 2). They both upload your contacts to an intermediate server that gives you their public key, which you can optionally verify afterwards.

This means a malicious server could both store away your contacts and try to MITM you, risking detection if you do verify your fingerprint with the other party.

A messenger that really did "all possible tradeoffs in favor of security" would force you to verify the recipient, either in person or via a web of trust. I think not doing that is a perfectly acceptable tradeoff to protect 1 billion users from passive mass surveillance.

These are all UI issues though, the Signal protocol seems solid, and I'll be glad to see it used everywhere. Hopefully we will get clients that you could recommend to a journalist or lawyer and have confidence that they will be able to use it for secure authenticated communication.


That's something that's at least somewhat true of OTR and PGP too, in their normal use, and in all three cases if you're serious about OPSEC you can completely mitigate the problem. So in my evaluation, Signal's a tier 1 option, and WhatsApp is tier 2.

Reasonable people can disagree, of course.

I hope it's obvious that, since OTR is in tier 1, these tiers aren't an analysis of how much I like different messengers. :)


> That's something that's at least somewhat true of OTR and PGP too

Well, OTR is just the protocol, implementations vary in how well they make you authenticate your conversation partner. And PGP will whine at you a lot if you haven't marked a key as trusted. But somewhat agreed. A bad UI can ruin the security of otherwise solid crypto.

I just don't see how the security of Signal and WhatsApp are different. Assuming for a moment (very optimistically) that the user verifies the fingerprint and enables the alert for changed keys, they are using the same protocol with the same guarantees.

Are you factoring in some level of trust in their ability to write a secure client or run secure servers?


It's probably only a half-step up, but Signal's client (and server?) are open source. You could compile your own to avoid problems with the binary blob being different. Maybe you could run your own server to dish out keys, but that may be a stretch.


> I would place Signal in the same tier as WhatsApp here (Tier 2). They both upload your contacts to an intermediate server that gives you their public key, which you can optionally verify afterwards.

The last time this came up, inspired by a post by pbsd, I submitted a detailed write-up of a protocol Signal could use to avoid requiring users to submit their contact information: https://news.ycombinator.com/item?id=11289223

It's a solved problem.

> A messenger that really did "all possible tradeoffs in favor of security" would force you to verify the recipient, either in person or via a web of trust. I think not doing that is a perfectly acceptable tradeoff to protect 1 billion users from passive mass surveillance.

Except it introduces another point of security failure. I would prefer to use a system like SDSI/SPKI's friend-of-a-friend naming, with a contact-sharing mechanism so that I could, say, know that I was talking to 'Sheila's Dad' or 'My Boss's Jim's Phone' (presented in a somewhat more user-friendly way, maybe 'My Boss → Jim → Phone' or something).


Does signal still rely on Google's marketplace and services being installed?


Yes


Don't forget Ricochet†, it only does synchronous communication but it does solve the problem of leaking meta-data. All the other clients except Pond leak meta-data.

https://ricochet.im/


Ricochet appears to rely on Tor's encryption, with an additional custom RSA handshake. That's two added levels of "nope" for me, but other people might feel differently.


I can see that but I think the self authenticating nature of Tor Onion Services and therefore bypassing bgp, dns and CA weaknesses is worth something†.

Maybe in the future when prop224†† is implemented the encryption will be more solid.

† https://media.ccc.de/v/32c3-7322-tor_onion_services_more_use...

†† https://gitweb.torproject.org/user/asn/torspec.git/tree/prop...


Stipulate that Tor's encryption is modernized and drastically improved. I still don't think it's a good idea to build a messaging application directly on top of that, for some of the same reasons that it isn't a good idea to simply run a messaging application on top of TLS or Nacl. The service model and security requirements for a simple transport are different from those of a messenger.

That's what's so exciting about the WhatsApp announcement. WhatsApp is by all accounts a pretty great messaging application, and it doesn't just have decent encryption now; it has best in class encryption specifically designed to protect a messaging application, designed by experts who thought about this problem for a long time.


(I've updated the parent about bypassing bgp/dns etc. before I saw your reply)

The nice thing about using the onion address (transport layer) is that you have mandatory e2e authentication with only one id that solves multiple real world problems with bgp/dns/tls.

How would you propose to go further from current state-of-the-art WhatsApp to stop leaking meta-data? I know Ricochet is open to use a stronger encryption layer on top of Tor †.

https://github.com/ricochet-im/ricochet/issues/72


> - Redacted to avoid flamewars.

Aww... shucks.


My Twitter mentions are noisy enough already this week. :)


I'm assuming Telegram.


Signal is pretty simple. Even my relatively technologically illiterate mother uses Signal.


What are WhatsApp's tradeoffs in favor of adoption and usability, compared to Signal?


> whatever you're sending over WhatsApp is likely going to be used by FB

The article links to the technical white paper[0] which explains why your points are invalid.

> I'm still inclined to trust apple's iMessage a bit more

Do you have any proof why iMessage is more secure or is that statement also baseless?

[0]: https://www.whatsapp.com/security/WhatsApp-Security-Whitepap...


The white paper doesn't say anything about additional public keys messages are encrypted for.

It states that:

- The private keys remain on the devices (which is good, but they don't need them to read your messages)

- The messages can't be read by WhatsApp (they don't say anything about their parent company Facebook or governments though).

If you read all the whitepapers and alert messages, you can totally read them in a way that allows Facebook to see, store and analyze the clear-text of every message you're sending, without anybody lying anywhere.


That's actually a good explanation of your point. Thanks.


A white paper is not an implementation. Whatsapp is owned by Facebook and required to increase their bottom line. They are not a charity.


Assuming Facebook is lying isn't a very good technical argument. There are other ways for them to make profit than doing the shady things you are implicitly accusing them of doing without any base.


They spent billions on WhatsApp. I'm open to suggestions - how would they profit on it if all the data that passed through it was opaque to them? Perhaps they are just catering to a recent strong interest by the public in encryption.


> From that perspective, I'm still inclined to trust apple's iMessage a bit more especially after recent events.

// edit: got my answer here: https://news.ycombinator.com/item?id=11432629

I'm curious, is that because of what they did in the FBI case, or for technical reasons? IIRC iMessage would allow Apple to add public keys which they (or the FBI/$ADVERSARY) control as a sort-of backdoor as well. I can't say that I've been keeping track of Facebook's position or history on this topic, so it might or might not be fair to say that Apple deserves more trust here, but on a technical level there's not much of a difference.


The unfortunate reality too is that average users will never go to the extra trouble of authenticating keys themselves. I'm also more likely to trust a company like Apple or Google with key management than a "trusted third party" (simply because they're bigger companies, with more valuable brands to protect, and resources to throw at the problem).

So, it feels like for the average consumer, a product like iMessage ticks all the necessary boxes. While WhatsApp adding features like the ability to compare your keys via a QR code looks cool, I have to wonder how many users will actually take advantage (and how would they even know if that number or QR code really represents what they think it does?)


Manual authentication via QR (or comparing the digits) works on top of the server-side authentication. It does what iMessage does, but you also have the option to actually compare the fingerprints if you are so inclined.


The good thing about verification is that if only a few people do it, it already provides a benefit.


> From that perspective, I'm still inclined to trust apple's iMessage a bit more especially after recent events.

I'm surprised by this statement. After recent events I'd say the iMessage protocol is a weird ad-hoc construction that failed to follow basic modern crypto constructions like authenticated encryption and forward secrecy. I don't expect anything like that from the Signal protocol.


The one thing iMessage and WhatsApp have in common is that they both end-to-end encrypt without giving the user control over the keys being used.

So ignoring actual protocol and implementation flaws (I agree with you that WhisperSystems will probably be ahead of Apple there), both rely on the key management being done in a trustworthy manner.

And this is where I trust Apple more than Facebook, especially in light of the recent FBI events. If Apple says they are not maliciously injecting fake public keys for the purpose of surveillance and marketing, I tend to believe Apple.

If Facebook is silent about this, I assume they are doing it.


WhatsApp allows you to verify keys. Apple doesn't. This makes a pretty big difference.

Also, iMessage's crypto protocol is cryptographically broken. It's been patched together in ways that prevent the obvious attack, but don't actually fix the underlying issue. Doing so involves replacing the protocol. In the meantime, those patches are not foolproof and someone may get around them.


> WhatsApp allows you to verify keys. Apple doesn't. This makes a pretty big difference.

Does it? It allows me to verify a key. It doesn't give me any ability to control what other keys it's working with.

This only helps me to make sure that the message I just got, I actually got from the person it's claiming to be. It doesn't give me any other protection though.

I agree about the broken iMessage crypto and I hope to see Apple upgrade the protocol in a future OS release. With their quick uptake of new OSes, a fix could propagate relatively quickly.


This is insanely cool. For everyone (very rightfully) worried about the PATRIOT act, the NSA, etc., etc. this is absolutely huge.

One BILLION people just got their messages encrypted. Facebook was under no obligation to do this; for the vast majority of tech history the messages sent by 99% of people were very insecure, and when the tech giants responsible were asked their response was "ehh". This suddenly cuts into a huge portion of that. Pretty much everyone with a mobile phone in Europe or South America, as well as large parts of Asia, suddenly now has completely encrypted messages.

Kudos to Whatsapp for this fantastic move.

On a related note, if anyone is inspired by this announcement to start using encryption in other parts of their life, I have a handful of Keybase invites available (to bypass the 25k+ waiting list.) Keybase's security depends on lots of people tracking other people, so only ask for one if you'll track other people. I see too many people that make an account and nobody ever tracks them/they don't track anyone. My email's in my profile.


> For everyone worried about the PATRIOT act, the NSA

Does this actually protect you? If data goes through US FB servers, don't the NSA have access?


To the metadata and ciphertext, sure. Still better than metadata AND data.


Where are the keys generated & stored? Is the APP openly audited?


WhatsApp have published further details for users[1], as well as a technical whitepaper[2] explaining the implementation. There's also a blog post[3].

[1]: https://www.whatsapp.com/security/

[2]: https://www.whatsapp.com/security/WhatsApp-Security-Whitepap...

[3]: https://blog.whatsapp.com/10000618/End-to-end-encryption


That last blog post is clearly written by Jan Koum, as it talks about his past in the Soviet Union. But his name doesn't appear anywhere on the blog post and if you didn't know that odd bit of trivia, it'd be completely confusing - who the heck is talking? Some random employee?

They need to add the name and job title of the blog post's author to the bottom.


It's now signed "Jan and Brian."


That's great. I wonder if it's because they saw my post or if they realised the issue themselves.


Neither Open Whisper Systems nor the Signal protocol is mentioned in this blog post, but maybe this would have been too much of an advertisement for a competitor.

To be fair: whatsapp mentioned it at the other two links.


Is there any reasonable way to verify that end-to-end encryption is actually being used, and used correctly? From a user's point of view the app looks and works exactly the same as before, except for the addition of a QR code which could be doing anything.


Define 'reasonable'.

Essentially, no. You have to trust that the app is really doing what it claims to be doing.

If it were open source, would it be different? Maybe. Signal Messenger is open source and the Android build is reproducible. However, the way you reproduce it is to run a Docker image, so that isn't really meaningful unless you audit the code that's used to build it. And then audit the code of the app itself. Fortunately, the Android app is written in Java, so you can easily do static analysis to narrow down what code needs to be audited. Unfortunately, it also includes native code that can reach into arbitrary parts of the heap if it were to try and steal keys. So you'd have to audit all of that. And then keep up with all the changes. Which you aren't going to do.

Even if you did all of that, at some point the naughty thought will occur to you that all your effort can be undone by an operating system update that contains more than it claims to. And then you'll give up.

In practice, it's impossible to really verify anything about how our modern computers work: there are too many people and companies who can silently access key material or just fool us in other ways.

So it's best to understand this work in context - this is not about allowing you to use WhatsApp if your threat model is that WhatsApp is entirely malicious. It's about discouraging governments from going to WhatsApp and saying "you need to give us all your data" because it creates the same situation Apple has: WhatsApp would need to write new software to undo the encryption, and that can be legally argued to be "compelled speech", and thus it can fall outside the legal powers granted under wiretap orders.


But can we still be sure of our privacy if we combine two or more such semi-secure systems? Tor does such a thing with the Onion Routing protocol by using intermediaries.


No. There is no way to be completely sure of your privacy in any context when using modern computers, it simply isn't a problem that can be solved with technical tricks.

Consider all the work I listed above. Imagine you actually did it. Great. But ... you're using WhatsApp to communicate with someone else. Did they do the same work? No? What if their device received a different binary to yours?

You can't even solve this with clever auditing frameworks because then you're just shifting the trust to whoever writes and distributes the auditing tools.

That's why it's so important to understand that you cannot solve political problems with cryptography. At most, you can reconfigure the set of people you need to trust (hopefully by making it smaller). But if those people can be forced to act by a political entity, then no crypto will save you.


What's more, my client says "You aren't secure because X needs to upgrade WhatsApp", but the other party is seeing "you are secure", complete with fingerprint and everything.

I wonder which of the two devices is lying.


I hit the same thing - I think my client just had a stale view of what version the person I was talking to was running. I did a refresh of the Favorites list on iOS and it sorted it out.


thanks for the hint. that solved it for me too.

what I'm wondering: My WhatsApp tells me it can't encrypt because the other person uses an outdated version. But the other person gets told the chat is encrypted. What is the truth then? Is it still doing crypto but my UI is denying it? Or is it not encrypted and the other person has a false sense of security? That's at least a bit strange...


Same issue and this resolved it. Thanks for posting that.


Create a new domain, new site, send a link through whatsapp, see if it's pinged by their servers. That would be my first test.


i'm with you on this. claims of e2e encryption are easy to make and without open source, impossible to prove.


Without whatsapp being open source, how do we know for sure that Facebook is not somehow storing or reading our messages?

As good as this sounds on paper, I hesitate to trust Facebook to transmit my data without wanting to peek a bit. I currently use both Whatsapp and Signal and will probably continue to do the same unless there is a way for users to verify Facebook doesn't keep a copy.


Closed source software isn't impenetrable. The idea that you need source code to evaluate security claims is mostly a meme from the 1990s.


Taking the example of Skype, the hardening/on-the-fly decryption techniques used in the binary made the reverse engineering very difficult [0]. Difficult to reliably audit such software. Don't know about Whatsapp though.

[0] http://www.oklabs.net/skype-reverse-engineering-the-long-jou...


That was true in the case of Skype (which was eventually reversed), but it is not true here.


Maybe it is feasible, but at the very least I would wait for someone to reverse engineer it and publicly publish its findings. I do not have the skills to do that.

Moreover, if reverse engineering is so easy, why not open-source it from the beginning?


If you don't have the skills to do basic verification of a non-obfuscated binary, you don't have the skills to verify an encrypted messaging protocol implementation from source either: the latter task is harder than the former!

I think the misconception some people here have about the necessity of source code is born out of the idea that a cryptographic backdoor would look something like a mysterious HTTP POST of your key or plaintext to some random endpoint (that POST, by the way, would be trivial to spot in the binary; you wouldn't even need to read assembly).

But real cryptographic backdoors can be extremely difficult to spot. A cryptographic algorithm that uses signatures, for instance, can be fatally compromised by breaking signatures (see: TLS). An injected cryptographic flaw that breaks signatures can be as simple as biasing a single-digit number of bits in a nonce; a bias can be as subtle as generating one less byte of randomness than the protocol requires.
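
To make the "one less byte of randomness" point concrete, here's a hypothetical sketch (nothing to do with WhatsApp's actual code) of how innocuous such a flaw can look in source or in a disassembly:

    # Hypothetical injected flaw: the nonce is supposed to be 16 random bytes, but
    # the last byte is silently fixed, shrinking the search space by a factor of 256.
    # In signature schemes, even tiny nonce biases can leak the private key over
    # enough observations.
    import os

    NONCE_LEN = 16

    def honest_nonce():
        return os.urandom(NONCE_LEN)

    def backdoored_nonce():
        return os.urandom(NONCE_LEN - 1) + b"\x00"  # one byte short of full randomness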


> If you don't have the skills to do basic verification of a non-obfuscated binary, you don't have the skills to verify an encrypted messaging protocol implementation from source either: the latter task is harder than the former!

Those aren't quite the same skill though. Some folks could have the skills to verify protocols from source, but not the skills to work with a non-obfuscated binary. Task 'A' being harder than task 'B' doesn't mean that everyone who can do 'A' (the harder task) is capable of 'B' (the easier one). Nor does the inverse follow at all.

If we admit that doing basic (non-messaging protocol impl) verification on a binary is difficult and doing messaging protocol impl verification is also difficult, it seems reasonable to presume that doing both will take more time, work, and as a result, allow for more errors in verification.

Essentially, verifying impls without source code is more difficult/time consuming/error prone than verifying ones with source code.

Of course, they could give someone the source code to verify without making it open source. But that requires that one trust this other party, selected by folks who have an interest in their protocol being reported as secure (whether it is or not).

The goal when folks are looking for freely available source code is to eliminate some of those needs for trust (by allowing a greater number of verifiers) and eliminate some of the conflict of interest (by removing some control the interested party has).

Sure, closed source bits that promise to be good and that we can (potentially) look at are OK. But having source code for them is still better.


I think the argument (one I'm not expert enough to make) is that the source may or may not be helpful to someone who is competent enough to validate an encrypted messaging application, but it is not what you need to verify.

You must verify the binary because you cannot trust the source, so it is a basic skill of anyone who has the competency to validate an encrypted message application.

Now, it's possible that the source, along with reproducible builds, makes verifying the binary easier for someone with the necessary skills, but even with those things, they still have to verify the binary.


Yep. I don't think source is a bad thing! It's good to have, but it isn't the predicate people on this thread were saying it was.


You don't need the source, but it makes it a hell of a lot easier.


Only if builds are fully reproducible, which is rarely true. Otherwise, the source can make it harder, by lying to you.


Don't tell me that it's easier to RE an entire multi-megabyte messenger app than it is to read the source code. Assembly can lie to you as well. There are all sorts of ways to trick IDA and friends.


Then demand reproducible builds from software with security claims?


Can I have serious cryptanalytic audits first? Because virtually nothing has that. At least I trust what Signal Protocol is trying to do!


One does not preclude the other. For instance, the current Signal implementation is almost certainly prone to remote code execution.

How does the Signal project handle reports of potential vulnerabilities? I haven't seen any security contact information on the OpenWhisperSystems site.


how's that boot taste?


I realize you've gotten hammered by downvotes already, but this comment crosses the additional (and much worse) line of personal attack. Please don't do that on HN, regardless of who you're disagreeing with or how wrong they may be.


Question for me is still around iCloud backups. Per http://www.popsci.com/whatsapp-now-encrypts-all-messaging-fo...:

> However, WhatsApp on iOS still backs up chat logs to iCloud, and despite any effort by Facebook, those could be given to a law enforcement agency. It's not known whether the backups are encrypted, but we've reached out to Open Whisper Systems and will update with any new information.

Apple stores iMessage backups unencrypted and hands them out when given a lawful request (per https://thehackernews.com/2016/01/apple-icloud-imessages.htm...). WhatsApp needs to store encrypted backups to prevent this attack.


Does this include file transfers, if so, how?

Curious because when sending a .webm video from an Android device to an iOS device, the video file was transcoded on WhatsApp's servers and then delivered to the iOS device as H264/mp4 (since iOS cannot play .webm files).

This should no longer be working.


The whitepaper (https://www.whatsapp.com/security/WhatsApp-Security-Whitepap...) claims that attachments of any type are encrypted.


It definitely should include file transfers, according to their blog post: https://blog.whatsapp.com/10000618/end-to-end-encryption.

Are you sure encryption was active, both devices on the latest version? Is there a chance that the conversion was done on either of the devices?

Otherwise, yeah, something isn't right.


Conversion is done by the devices I think.


Excellent! 3 questions:

- What if the government forces WhatsApp to write and push a targeted software update in order to compromise the end-to-end encryption (I'm of course thinking of the FBI vs Apple case)? Is there a way for the user to be notified?

- Does WhatsApp Auto Backup encrypt messages before sending them to Google Drive or iCloud?

- Would it be possible for WhatsApp Web to rely on backend servers storing an encrypted version of messages, instead of relying on a connection to the user's phone, and still be able to perform keyword search over the encrypted messages with something like github.com/strikeout/mylar?


> - Does WhatsApp Auto Backup encrypt messages before sending them to Google Drive or iCloud?

So you make a backup and lose the phone. What about the key? Is that gone too? Without it, an encrypted backup is useless. How do you back up the key? You and I might manage to do this, but what about grandma and all those people without anybody close who knows about this?


You could back up the key like every other account password: save it in your password manager, print it, whatever. This isn't really an argument.

When Android backs up Wi-Fi passwords, those are encrypted with your Google password, of which Google only has a hash. Okay, it's easily interceptable the next time you log in, if Google has a warrant. But it's still an option.


The decryption key you're talking about can be a simple password. For symmetric encryption (e.g. AES-256), even a simple password would be secure enough, while something like a 24-character password would be unbreakable.


No, not really. Passwords are terribly low entropy, and so having a common password is not a solution, at least without using some sort of KDF or such.
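
To make that concrete, here's a rough sketch of the kind of key derivation meant (function name, salt handling, and parameters are illustrative, not anything WhatsApp actually uses): stretch the password with PBKDF2 and a random salt before using it as a backup encryption key.

    # Illustrative sketch only: stretch a user password into a 256-bit backup key.
    # The salt and iteration count are example values, not anything WhatsApp specifies.
    import hashlib, os

    def derive_backup_key(password, salt=None):
        salt = salt or os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=32)
        return key, salt  # the salt is stored alongside the encrypted backup

Even with a KDF, a weak password stays brute-forceable; the KDF just raises the cost per guess.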


I came here apprehensive because you need UI support for this to work, but reading the article I was pleasantly surprised to see that they implemented all the verification and other bits to make this reasonably visibly secure.

Great job from everyone, I'm glad WhatsApp has done this. I look forward to these features on my device.


Great news. I'm just wondering why Facebook/Zuck is doing this. Is he fearing the competition–all the other E2E messengers out there?

I'm asking because I could imagine that Whatsapp might get banned in some countries soon (as recently happened in Brazil) and thus, lose market share.


I'd imagine Zuck is going along with it because WhatsApp announced that they were going to do this before the acquisition closed.

Zuck is way too savvy to publicly reverse that announcement once the deal closed. Imagine how bad that would look...


I also wonder how Facebook will monetize WhatsApp with e2e now.


They don't need to. They just need to stop it becoming a full competitor.

Facebook's asset is the social network, your contacts. WhatsApp was a threat because they were getting a social network to match. That could then have been acquired (by Google), or a more social platform could have been built on top of it. By owning it, Facebook can keep WhatsApp limited to a messaging platform, removing the risk and limiting the damage to Facebook itself. The next step is to encourage WhatsApp users onto Facebook for group-messaging-type services.


They'll monetize Facebook with the phone numbers they get from WhatsApp users.


That's kind of what I was fearing. Would love to see an official response from Moxie on the meta and contact data handling of WhatsApp's implementation.


This is presumably why Facebook bought it in the first place, though. They don't need to see anyone's message content for this. They get all the info they need the first time you use it; confirming phone number 123-456-7890 belongs to person@domain.com, so they can tie together your facebook usage with whatever customer surveys, browsing etc you've done. I don't see this as remotely altered by the recent encryption work, other than the latter will entice more people to use the app.


And I think that's really the messaging that needs to have some PR light shed upon it. Facebook has access to the meta and contact data (confirmation, Moxie?). For all the hoopla over WhatsApp E2E crypto, users shouldn't forget that they may be handing over their phone number, contact data, and other metadata to the largest ad-tech platform with a real-name policy.


I'm looking at libaxolotl-c. I'm a little bit disturbed about perfect forward/future secrecy. Perfect forward secrecy ensures that a session key cannot be compromised if a long-term key is compromised in the future. With something like OTR, even if a session key is compromised at n, the session keys at n-1 and n+1 will not be compromised. Here, we supposedly get perfect forward/future secrecy.

If I take a look at Axolotl, in the scenario where Alice sends a message to Bob while Bob is offline:

    (1), (2)
    MK = HMAC-HASH(CKs, "0")                                  // (3)
    msg = Enc(HKs, Ns || PNs || DHRs) || Enc(MK, plaintext)
    Ns = Ns + 1
    CKs = HMAC-HASH(CKs, "1")                                 // (4)
    return msg

We can see that Alice re-uses CKs to get a new symmetric key. So if an attacker gets CKs(n), he could easily compute CKs(n+1). CKs is not a long-term key, but we cannot honestly call this _perfect_ future secrecy... One more thing: if I remember correctly, according to the definition of perfect forward secrecy, an implementation must NOT re-use a previous session key to derive a new one...

Am I wrong?

(1) Quoted from https://github.com/trevp/axolotl/wiki

(2) see session_cipher_get_or_create_message_keys (https://github.com/WhisperSystems/libaxolotl-c/blob/0640b5ac...)

(3) I think we should read MK = HKDF(HMAC-HASH(CKs, 0x00)); see ratchet_chain_key_get_message_keys (https://github.com/WhisperSystems/libaxolotl-c/blob/0640b5ac...)

(4) I think we should read CKs = HMAC-HASH(CKs, 0x02); see ratchet_chain_key_create_next (https://github.com/WhisperSystems/libaxolotl-c/blob/0640b5ac...)
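
To make the concern concrete, here is a rough sketch of the symmetric chain as I read it (simplified; the seed constants are illustrative): compromise of CKs at step n lets you run the chain forward, but the one-way HMAC stops you from running it backward, and the separate DH ratchet step eventually replaces the chain entirely.

    # Simplified sketch of the symmetric-key ratchet (my reading, not the real code;
    # seed bytes are illustrative). Knowing ck_n lets an attacker compute ck_{n+1},
    # ck_{n+2}, ..., but not ck_{n-1}; the DH ratchet (not shown) later swaps in a
    # fresh chain key and cuts off the forward walk.
    import hmac, hashlib

    def message_key(ck):
        return hmac.new(ck, b"\x01", hashlib.sha256).digest()

    def next_chain_key(ck):
        return hmac.new(ck, b"\x02", hashlib.sha256).digest()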


"Perfect forward secrecy" requires synchronous key exchange. The compromise that signal protocol makes is for forward secrecy to "eventually repair" itself while in the meanwhile a limited number of messages are potentially vulnerable. That is one of the novel feature of the protocol and it is what allows for async communication without some central server doing all the key mgmt (central key mgmt doesn't have this problem because it's actually synchronous).

I am not a cryptographer.


"Eventual forward secrecy"?


Perfect forward secrecy is actually the opposite of what you mentioned.

"In cryptography, forward secrecy is a property of secure communication protocols in which compromise of long-term keys does not compromise past session keys."

https://www.wikiwand.com/en/Forward_secrecy

If someone were to compromise your key and they had a packet log of all your communication then PFS, which Signal has, guarantees that they wouldn't be able to derive previous keys from the current key to decrypt previous messages from the packet log that came before the key compromise.

The thing you're talking about can be resolved by revoking compromised keys but knowing when to revoke those keys is a whole other problem that hasn't been solved by anyone to my knowledge...


Does this mean that WhatsApp can talk to Signal Private Messenger app?


This is the first question that came to my mind. What about the signal apps? I've been trying to get everyone in my vicinity to migrate from whatsapp to signal. It would be more than neat, if I could choose to use signal, while still being able to talk to whatsapp users.


On a protocol level, probably yes. On a practical level, probably hell no. Connecting to one another must be very nontrivial, given how different the actual wire protocols are.


No. Signal would probably like to add federation as a feature, but they have not done so (yet?). So even if WhatsApp were willing to do it, which I doubt, there would still be a technical issue.

Actor is a messenger that seems to focus on federation, and they want to use the Signal Protocol as well. So maybe they will develop software for that. However, there is still the question of whether WhatsApp would want to do that.


It does support federation - for a while there Cyanogenmod ran their own Signal (then called TextSecure) server and built it into the Cyanogenmod source.

It supported federation with Signal. But they removed it because maintaining the server was a hassle, the code was outdated, and everyone agreed that if you wanted that level of security, you should just install Signal yourself.


So, yeah, they had federation.

That seemed to be a bit hacky, not proper federation; it was not designed to scale. Maybe I'm wrong, I would like to hear from somebody who knows more.


I would really, really like to hear more about federation in Signal too. They used to mention that it was planned, but that was a while ago now.


Okay, first off: This is great. The most popular messaging app finally gets the security it needed. And we've just rolled out E2E to 1b 'monthly active users'.

However, I have always wondered one thing about WhatsApp: How does it generate any kind of meaningful revenue? Apparently they've ditched the old $1 subscription model [0], and even that was so loosely enforced that I have never paid a single cent for WhatsApp in my life--and never will (got it while it was free on the iOS App Store and now have a 'Lifetime' subscription, if they don't change those at some point). And even back then, maybe half of their 900m monthly active users [1] were iOS users who paid only once, and the rest may have dodged the fee in various ways. I have a really hard time believing the revenues so gained could ever actually cover the cost of R&D (especially for so many platforms) and infrastructure (which should be huge, given the amount of data they shift).

Now they say they want customers to use WhatsApp as a platform, the way Facebook Messenger is doing it, but I'm not seeing any of those features implemented anywhere. I always assumed there was some heavy data analysis going on behind the scenes--which would have been fair, I guess, since we're neither being shown ads nor really paying. Facebook's involvement added to that conviction. Now that they're encrypting everything (which, again, is wonderful), they can't analyze what is really, really interesting data anymore (keywords, etc.). And it's not like there was a public outcry for them to take this step--I would guess that not many end users actually appreciate the importance of E2E encryption.

So the question remains: How are they making money? You still have metadata (I presume), but then again, how do they use this data to make money if they can't always match it to a Facebook profile (where they can show you ads), and also, does this data really provide such a big improvement over all the data collected by Facebook and Facebook messenger? It just seems strange to me that WhatsApp apparently does not want to make any money.

Does anyone have any insight on this? What am I missing?

[0]: http://www.cnet.com/news/whatsapp-kills-1-subscription-fee/ [1]: http://qz.com/495419/whatsapp-has-900-million-monthly-active...


You raise some interesting points.

FB shelled out $19B for WhatsApp, and clearly the contents of these messages were a major factor in that valuation, especially once the subscription fee was phased out.

Now they're voluntarily encrypting. The only explanation I can come up with is that they are realizing that they will not be able to keep the contents of these messages for themselves in the face of motivated nation-state actors, and so they're killing that golden goose rather than share it. This seems pretty extreme and makes me wonder what is really going on behind the scenes.

My assumption is that there is a contingency to monetize metadata (something I'm not seeing discussed here too much), but I can't help but wonder if FB is now looking at the $19B as a huge overpayment.


One possible way they can benefit is to generate metadata (e.g. keywords used in messages) against a Whatsapp profile that they have linked to facebook accounts. They would then unleash their ad platform to users across the web where facebook ads are used.


I figure this isn't the question others want to discuss, but I too wonder the same. I think I used WhatsApp for a short period of time, until the point they asked me for money. Since then I haven't looked back. Now I am left wondering where their revenue comes from.


It probably doesn't need to generate any revenue (in dollars; perhaps it does when measured in power). Facebook does not really want it to be profitable, at least not as long as the communication is c2c. Facebook is only profitable because of its b2c parts!


They get your phone number from WhatsApp, which they can then use to connect customer surveys you've done in the past, which have phone numbers, names, and email addresses. Even absent some of this info they can often discover who you are.


That's great news - secure-by-default is a huge thing, since it makes encrypted communication more normal. If most of your real-time communication is encrypted, then when and to whom you used encrypted communication isn't leaking valuable information.

The next step is some kind of noise injection into the metadata. There are almost certainly ways to look at who is chatting with whom, and when. It'd be fantastic to automatically generate realistic-looking traffic to hide the normal stuff within. Plus, you'd be adding deniability to any communication you're having.

There are likely some pretty severe battery usage issues with it. If you offload the metadata fuzzing to a proxy server of some sort, then you're adding a vector to filter out that fuzz. It might be too big of a technical tradeoff to be worthwhile.
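
As a rough sketch of what client-side noise injection could look like (purely hypothetical; send_dummy is a made-up hook, not any messenger's API): send dummy messages at randomized intervals so the timing of real conversations no longer stands out.

    # Hypothetical sketch of client-side cover traffic. Dummy messages are sent at
    # exponentially distributed intervals, so an observer of encrypted traffic has
    # a harder time telling real conversations apart from background noise.
    import random, time

    def cover_traffic_loop(send_dummy, mean_interval_s=600):
        while True:
            time.sleep(random.expovariate(1.0 / mean_interval_s))
            send_dummy()  # should look identical on the wire to a real message

The battery concern above is exactly this loop: the radio has to wake up for every dummy send.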


We're using noise to hide metadata in the Vuvuzela messaging system [1]. Using noise to hide metadata is efficient and gives good privacy (better than Tor-based approaches). We've formalized the privacy guarantee using differential privacy and are now working to make the system more user-friendly and easier to deploy.

[1] https://github.com/davidlazar/vuvuzela


Great, now I really hope Telegram folks realise how much they already lost for not having open sourced their servers. Here in Brazil they've lost a huge amount of community power to Actor, which doesn't even have encryption.

Now, it's a shame Whatsapp doesn't have an open source client as well. As much as I appreciate encryption, it still looks like we're not going anywhere with a closed sourced program that is a pain in the arse to run on Linux.


Telegram secret chats are device specific which means you can't recover them if you get a new phone. How can Whatsapp recover encrypted chats on iCloud?


Whatsapp's own post:

https://blog.whatsapp.com/10000618/End-to-end-encryption

I'm grateful WhatsApp itself finally made a public statement about this, but I would hope they go a step further and integrate this change into their Privacy Policy as well.

Then they would be at least somewhat legally committed to using end-to-end encryption for the foreseeable future. I'd have a little more trust in them that they aren't just going to drop the E2E encryption for various individuals with just a phone call from government officials.


moxie: Does this mean that both Facebook and Signal servers are unable to see the plaintext?

(I would assume so, but I would like to have it confirmed. From someone who actually knows what he's talking about.)


Correct. The only people who can read a message (or hear a voice call) are their intended recipients.


Would it be possible for Facebook to introduce a backdoor later without breaking the Signal protocol, or alternatively "forking" it while keeping compatibility between their clients?


The client has access to messages in clear text. A backdoor could easily deliver those messages to a third party, yes.

This is not something E2E protocols can protect you from. You'd have to audit every piece of firmware and software on your device to verify that's not happening.


If he does not answer, does that mean the WhatsApp or Signal servers can see the message?


This is the first thing that came to mind when I read this, having clarification from them would be reassuring.


Kind of weird, but I got the message claiming my chats were e2e encrypted, yet when testing it with a friend, his said no such thing, and his client claimed mine was out of date and our messages were NOT encrypted, despite there being a lock on my side.

https://imgur.com/a/pgJsH

This is kind of worrying. I'm sure it's not malicious but I have literally no idea if things are encrypted right now.


There was another comment in this thread that said one of you might have stale data. If you kill the app and try again it should update.

https://news.ycombinator.com/item?id=11432356


We went as far as killing the app, rebooting our phones, etc.



Does anyone know how the WhatsApp backup to Google Drive is encrypted? If so, how can it be decrypted so easily from a new phone with the same number? Clearly either WhatsApp or Google has to store a key - or am I missing something here?


I'm not sure, but I think WhatsApp stores your key and sends it to your phone so you can decrypt the backup you downloaded from Google. But you can disable backups...


Much appreciated! The girls behind me on the train were already freaking out about the message that popped up in their chats. I guess they don't really understand the value :)


OT but https://whispersystems.org/workworkwork/ was great to read. I would definitely apply if I was in the market for a job.


Is it still the case that verifying a user's text identity does not verify their voice identity and vice versa?

IMO it would be very nice if calling someone and verifying the short code would confirm their text identity as well and if, once someone's text identity is verified, if voice calls to that person were protected by the verified text identity.

(IIRC the reason that Signal does not work this way is that texts use Axolotl whereas voice uses ZRTP and the key material is completely independent.)


WhatsApp uses a shared identity across text and voice, so Signal Protocol is used to secure both connections.

We've long planned to do the same thing in Signal, but WhatsApp is ahead of Signal here. Axolotl is now called Signal Protocol, btw.


Speaking of voice calls, are there near-to-medium-term plans to do multi-party voice calls in Signal, or would such calls be too awkward for various reasons?


Does anyone know if WhatsApp still stores your master key? I assume you can still reset your password.

If so, kind of makes you wonder what you really buy by adding the axolotl protocol. Thoughts?


Very cool, but I wonder when it'll be possible to key Signal off of something other than my phone number, and when it will be possible to support multiple accounts (e.g. phone numbers) on a single device. Also, I wonder when it'll be possible for me to successfully reset Signal registration.

And when I'll be able to share my certifications of contacts with other contacts …


How does this compare with Telegram?


By default, Telegram stores a plaintext copy of every message you've ever sent or received on their servers. WhatsApp does end to end encryption using the Signal Protocol by default, and doesn't store anything server side.


When you say Telegram servers store plaintext "by default", does that imply this is not also true of their "Secret Chat" feature? That mode appears to behave as if it's exchanging keys and doing end-to-end encryption... (I am aware of the Telegram flaws you and others have pointed out).


"Secret Chat" uses Telegram's (flawed) E2E protocol, so the server would only see ciphertext. A "normal" chat is stored in plaintext.

This is also why normal chats work in multi-device environments, but secret chats don't. Unlike iMessage (and I assume Signal - haven't looked at the actual protocol), they don't do anything fancy like making the sender encrypt messages with multiple public keys (one for each device the recipient owns).
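
For what it's worth, the per-device fan-out idea looks roughly like this (a hedged sketch, not iMessage's or Signal's actual API; aead_encrypt and pk_encrypt are placeholders): encrypt the message once under a random content key, then wrap that key separately for each of the recipient's device keys.

    # Hedged sketch of multi-device fan-out. The message body is encrypted once;
    # only the small content key is encrypted per device, so each device can
    # decrypt independently and the server never needs the plaintext.
    import os

    def fan_out(plaintext, device_pubkeys, aead_encrypt, pk_encrypt):
        content_key = os.urandom(32)
        ciphertext = aead_encrypt(content_key, plaintext)
        wrapped = {dev: pk_encrypt(pk, content_key)
                   for dev, pk in device_pubkeys.items()}
        return ciphertext, wrapped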


If you know something that Telegram doesn't know, maybe you should contact them and claim your $200,000 worth of bitcoins: https://telegram.org/crypto_contest. Easier to "talk".



Does this mean WhatsApp does peer-to-peer message transfer, or do messages still pass through WhatsApp servers?


There is no mobile messaging app that does peer-to-peer, as far as I know. WhatsApp, Telegram, Signal, Actor, and Threema all go through a server.


Why?


Because it's very hard to make reliable peer-to-peer apps. Phones are not always easily reachable at the same IP address. They move around, they are sometimes offline, sometimes they have high latency, the network drops many packets, and they have limited storage and computational power.

To deliver a message, both phones need to be online at the same time. For group messages, you need to be very clever to get it working when only some of the people are online. Key distribution is also extremely difficult.

So: lots of problems for very little gain.

What would be smart is a federated architecture, so people can host their own servers; that would solve PART of the problems pure peer-to-peer would solve. Actor is developing in this direction and wants to get it working with the Signal Protocol; Signal itself would probably do it as well if they had the manpower.


Because of NATs and (restrictive) firewalls.


Don't forget that Telegram uses custom in-house encryption and says "trust us, it's good". Telegram's encryption can't be verified.


As long as the clients are open-source and the encryption is end to end, can't it really be verified?

Whatever the server, if the client encryption is reliable, data can't be read on the server side.


  if the client encryption is reliable
It's not[1].

[1]: https://eprint.iacr.org/2015/1177.pdf


It is not, or was not? Did this issue get rectified by Telegram?


No, they still use MTProto and not an AEAD construction.


I was referring to the padding attack. Did they patch this?

And are there any properties of MTProto that make it infeasible to replace AES-IGE in a later revision of the protocol?


The problem isn't IGE. It's that they're using SHA1 (not HMAC-SHA1) in a "MAC and Encrypt" construction.
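
For contrast, an encrypt-then-MAC layout (a generic sketch, not MTProto's or Signal's actual code; encrypt/decrypt are placeholders) authenticates the ciphertext with a keyed HMAC, so tampering is rejected before any decryption or padding handling happens:

    # Generic encrypt-then-MAC sketch: MAC the ciphertext with a separate HMAC key
    # and verify it in constant time before decrypting. MAC-and-encrypt with a
    # plain (unkeyed) hash over the plaintext provides neither of these guarantees.
    import hmac, hashlib

    def seal(enc_key, mac_key, encrypt, plaintext):
        ct = encrypt(enc_key, plaintext)
        tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
        return ct + tag

    def unseal(enc_key, mac_key, decrypt, blob):
        ct, tag = blob[:-32], blob[-32:]
        expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("bad MAC")
        return decrypt(enc_key, ct)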


It's still their own crypto. Even if you have the code, there could be a mathematical weakness that is unknown to anybody outside the company. The protocol has not been studied much.


Has anyone offered to fund an audit of the code/crypto/workflow? If not, it'd be nice if Moxie would spec out a public request for funds via some sort of crowdfunding, with a means to post public offers to audit the code based on the funds raised.


Moxie has asked them multiple times, but they don't seem to care.


Oh, interesting; not a huge surprise. Though I guess I wasn't clear: I was talking about auditing the open source stuff, not Facebook.

Basically, crowdfunded audits of open source crypto tech have been done before; I'd be up for helping with this if it made sense, since currently my sec dev skills are not that useful:

https://www.indiegogo.com/projects/the-truecrypt-audit#/

--

Might be worth funding an exploitable-bug bounty too.


The custom in-house encryption is a protocol that, if you have the skill to break it, will get you $200,000 in bitcoins: https://telegram.org/crypto_contest


> Telegram backer, Pavel Durov, will give $200,000 in BTC to the first person to break Telegram' encrypted protocol. Starting today, each day Paul (+79112317383) will be sending a message containing a secret email address to Nick (+79218944725). In order to prove that Telegram crypto was indeed deciphered and claim your prize, send an email to the secret email address from Paul’s message.

That is a fairly ridiculous challenge. Your average crypto professional is not going to have access to the communications channels between the two phones. The NSA or CIA or other government group might, but not really anyone else. Putting this up as a challenge is just silly.


Cool. So FB/WhatsApp can't even decrypt messages themselves?


If they actually do what they say they do, then yes. There is no evidence that they are lying, so for now it's probably safe to assume that they cannot read your messages.


No, it is not safe to assume that. Edward Snowden already proved that we cannot trust large US corporations.


That's what I said. You have to trust them. As far as we know, they cannot read it. It's entirely possible that's not true, but it's far more likely that it is.

If they work together with the NSA, then it's more likely that they deploy bad updates to specific individuals rather than a backdoor to everybody. That would be too easy to detect, and it would be a marketing disaster.


No, you said it is safe to assume they can't read your messages, which is the exact opposite of what Snowden said. What you personally think is true, likely, etc. is irrelevant. Facebook could be compelled to release private keys/data to the US government and not tell anyone, and you'd be none the wiser.


He never said that ;)


Yes, he did, elsewhere on this thread.


Looks like they feel threatened by Telegram growth, which means Telegram is doing quite well.


Telegram doesn't do end-to-end crypto by default, and uses home-baked crypto which has many weaknesses[1]. I like it as a platform (for their Bot API, Channels and Groups), but I don't like their security mentality.

[1]: https://eprint.iacr.org/2015/1177.pdf


All they need to do to supplant Telegram (at the very least for my use-case) is to have a decent desktop app, preferably native. Their current web interface (is that still active?) is a UX nightmare. Of course it's "for the sake of encryption" but it comes at the cost of adoption.


Also Telegram has built-in bot support, which is a big plus in my book.


I am not seeing the information about encryption they mention in any of my chat details on the iOS client. Is this part Android-only, or did simply none of my contacts upgrade yet? I have version 2.16.1

edit: After a while it now shows up with certain contacts for me


I got a message in my stream right after I sent a message, saying 'the messages in this conversation are now protected by end to end encryption'. Clicking on it takes you to this page: https://www.whatsapp.com/security/?l=en - I'm using 2.16.1


Still not seeing it myself. I wonder if this is being rolled out in chunks.


FTA:

> This includes chats, group chats, attachments, voice notes, and voice calls across Android, iPhone, Windows Phone, Nokia S40, Nokia S60, Blackberry, and BB10.

> Before all users have updated to the latest version of the software for their platform, there will still be some plaintext on the network. To make this transition as clear as possible, WhatsApp clients notify users when their chats become end to end encrypted. Starting today, users will see a notice in their conversation screen as their individual and group chats become end to end encrypted. Additionally, the encryption status of any chat is visible under that chat's preferences screen.

I imagine all of the end-points in the conversation, including you, your friends, and any bots or loggers (I don't know how these things work), will need to be able to handle end-to-end encryption before they switch, or I'd imagine those who are left out will just see a bunch of garbage.


    This includes chats, group chats, attachments, voice notes, and voice calls across Android, iPhone, Windows Phone, Nokia S40, Nokia S60, Blackberry, and BB10.

    Users running the most recent versions of WhatsApp on any platform now get full end to end encryption for every message they send and every WhatsApp call they make when communicating with each other.
It likely requires both ends to have the latest version before e2ee is enabled.


According to their blog post, it's available on a plethora of platforms

> This includes chats, group chats, attachments, voice notes, and voice calls across Android, iPhone, Windows Phone, Nokia S40, Nokia S60, Blackberry, and BB10.
>
> As of today, the integration is fully complete.

I haven't seen the notification either, yet, though.


Cool! Though am I the only one wondering how this fits into FB's plans? I suppose they still get all contacts and know how frequently I contact them, which builds very valuable info. (Something MSN Messenger never leveraged, sigh.) I'm still very hesitant about ever trusting anything to FB.

Apparently by default there's no notification of changed keys: "WhatsApp users can opt in to a preference which notifies them every time the security code for a contact changes". I guess the question is, does this preference get sent to the server or exposed any other way? If it doesn't then it might be too risky to use this as an attack channel.


One thing that's odd about this is that it's phone only, so they need your phone number. It would be a million times better if it were an Android app you could sideload onto any device and use anonymously, exchanging your contact details anonymously. It's hard to see how WhatsApp can legally/technically defeat a request for `which contacts does this person communicate with`. The same goes for the Signal app too. Am I missing something here? Clearly none of the protocols require phone numbers, so why do the implementations?


So exactly how do group chats work in Signal Protocol?


How is the WhatsApp contact list data handled? Is it completely encrypted too or does Facebook have access to it?


Isn't the contact list how WhatsApp knows how to encrypt messages and who to send them to?


Yes, but Wickr keeps it encrypted - salted


How does this affect web.whatsapp.com?


So does this seriously mean that, if verified, the FBI/NSA or anyone else would have a tough time reading your messages? I just find it hard to believe that if Facebook owns WhatsApp, they wouldn't create some form of backdoor.

Total security layman speaking here.


I assume old versions can still connect, and therefore there are legacy unencrypted modes. Is there any protection against downgrade attacks?


Would it be possible for WhatsApp to be compelled to provide both parties' private keys (or the state representing them), and any other data required, such that messages could be captured and decoded, spoofed, etc.?


What is the incentive for Facebook to add encryption here to WhatsApp? Don't they want to mine every piece of data about every person that they can? I can't imagine there is that much customer demand?


Facebook Messenger is the thing where everything is mined and you'll get all the AI bots etc.

WhatsApp instead is their chat app for users who don't like Facebook. WhatsApp has way more users than Facebook Messenger. And now they have the PR win and don't even have to deal with wiretapping requests anymore.


I'd really like the option to send SMS to non-whatsapp users with a GUI signaling that this is happening (and an option to disallow this). Balancing multiple text clients sucks.


What about system-level notifications? They go via Apple/Google and include message content. If those are not encrypted, then this sort of defeats the purpose...


Just out of curiosity, this update isn't just magically available for current versions. Users need to actually update their apps in order to use the new protocol, yes?


Signal's stuff is all GPL'd (AFAIK). Does this mean that WhatsApp's clients (and whatever else would apply) are also released under the GPL?


I doubt it. The protocol is well defined, and a single developer could probably reimplement it in a reasonable amount of time. I guess that is what WhatsApp did.


Plus, between them, Moxie and Trevor probably own the copyright and can just provide them with a copy under different terms.


Also, all URIs sent on WhatsApp now open with https!


Mainstream cryptography is the new snake oil elixir.

Just replace cure-all-diseases secret ingredient from mysterious land with unbreakable secure algorithm that takes trillions of years to crack.

The algorithm is only secure under specific circumstances. The implementation might not be secure, the hardware it runs on can be tampered with, and the advertised security may only be the best-case scenario, since some protocols degrade encryption during handshakes... and then you can start knocking several orders of magnitude off the claimed security.


> As of today, the integration is fully complete. Users running the most recent versions of WhatsApp on any platform now get full end to end encryption for every message they send and every WhatsApp call they make when communicating with each other

Emphasizing "running the most recent versions of WhatsApp", does this mean the fancy new protocol can still be downgraded to the old one by an older client? I'd save the fanfare for a few more months yet


Nope. From the article: "Once a client recognizes a contact as being fully e2e capable, it will not permit transmitting plaintext to that contact, even if that contact were to downgrade to a version of the software that is not fully e2e capable. This prevents the server or a network attacker from being able to perform a downgrade attack."
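
In other words, the client keeps a sticky per-contact flag. A minimal sketch of that logic (illustrative only, not WhatsApp's actual code; send_encrypted and send_plaintext are placeholders):

    # Once a contact has ever completed an e2e session, the flag is set and never
    # cleared, so the client refuses to fall back to plaintext for them even if
    # the server later claims the contact downgraded.
    e2e_capable = set()

    def on_session_established(contact_id):
        e2e_capable.add(contact_id)  # persisted locally in a real client

    def send(contact_id, msg, send_encrypted, send_plaintext):
        if contact_id in e2e_capable:
            send_encrypted(contact_id, msg)
        else:
            send_plaintext(contact_id, msg)  # only for never-upgraded contacts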


Anyone else reading the headline in Emperor Palpatine's voice?


Is it open source? I can't find the source code anywhere.



correct. moxie's link below is not to the whatsapp source.


Signal is a protocol, not an app!? Sweet Jesus, yes!


What next? Will they open-source the client?


Of course not. They can't put backdoor code in open source. Some people would build from source, resulting in the corporation not being able to access their messages when needed. Not really acceptable to the main shareholders.


According to the WhatsApp security whitepaper, the SRTP master key is sent over the network. Even inside an encrypted payload, this breaks PFS...


absolutely no proof it is e2e encrypted without the source.


We detached this subthread from https://news.ycombinator.com/item?id=11432031 and marked it off-topic.


I'm sorry, but that's not how computer software works. Anyone who knows what a basic block is can straightforwardly verify the claims WhatsApp is making.


How would we go about verifying Whatsapp's claims? It seems to me that they can easily be lying and we have no way to check. Also, they can just make a client that simply copies the unencrypted text displayed on-screen and leaks it to wherever they want.


you don't know how e2e security works, do you? without the source, it is not verifiable. it could be making a copy and sending it in an encrypted side-channel for example to their overlords.


Very angry open source advocate, I understand how frustrating it must be that the current safest mainstream messaging protocol is not open source, but once again: that's not how computer software works. WhatsApp isn't even obfuscated. There are tens of thousands of people who read "closed source" software as a hobby.

Later:

Also? That's not what the term "side channel" means.


do you know even with source, backdoors can be hidden? open source is a must to even begin to work to verify any claims of security.


Your comment here is incoherent. I agree: source can easily lie. How does that make it the gold standard for verification? The reverse-engineering of the actual shipped binary (for instance, the ARM code lifted to LLVM) seems like a safer analysis target. But, what do I know.


>The reverse-engineering of the actual shipped binary (for instance, the ARM code lifted to LLVM) seems like a safer analysis target.

Reverse engineering binaries is only permitted under the DMCA for the purpose of interoperability, AFAIK.[1] If so, that would make reverse engineering to verify stated security claims illegal in the United States. I'll just assume I'm wrong here for the sake of argument though.

Reverse engineering sounds much less friendly than providing actual source code. It's much easier for me to compare changes from version to version with source control, commit comments, and diffs. Then all I need is a reproducible build to verify that the output matches the binary being shipped.

Do you have some reverse engineering tools you would suggest? Obviously the ones I'm familiar with are not as good as yours. Links to those would be greatly appreciated.

[1]https://www.eff.org/issues/coders/reverse-engineering-faq


The LoC DMCA rules explicitly exempt security research. It's possible the EFF needs to update this page.


I'm actually in agreement with what you are saying about viewing the shipped binary, but source is a must also. ideally there is a reproducible build to verify the source created the binary.


You keep saying "source is a must" but you have yet to explain why that is the case.


fair enough! https://www.schneier.com/blog/archives/2016/03/possible_gove...

i think this is why source is a must. if a user compiled and installed the app themselves, and hypothetically had the entire stack above it be similarly open, then it would prevent the kind of attack mentioned.

do you agree?

if the source is closed anywhere in the stack, or pushed out in a walled garden as it is currently, then it allows the vulnerability mentioned.


I understand what you're saying. I don't think source is a bad thing! Source is good.

But I think you're a little confused here.

The cryptographic building blocks of the new WhatsApp protocol are available in source code. You can get source for the Signal Protocol (fka Axolotl). You can get source for the Noise framework. WhatsApp borrowed these tools from a very open secure messaging project.

You're unhappy that the source code for the fully assembled messaging product WhatsApp isn't available. I understand that, too.

But you've overplayed your hand by arguing that not releasing WhatsApp source is fatal to its security. Even if WhatsApp had done that, the overwhelming majority of its users (in fact: basically all of them) would be installing binaries. If WhatsApp is so evil that they've backdoored their product, it is "supervillain monologuing for an hour while the hero escapes"-grade stupid to leave that backdoor in their source code. Consider that carefully when you trust binaries from open source companies, by the way!

In reality, source code does very little to resolve the government backdoor problem. If you want to ensure that your secure messenger hasn't been backdoored, reverse engineer and/or instrument the build you're actually running.

Nobody does this, of course. But while lots of people glance at source code for crypto applications, I think they're pretty much just kidding themselves. Having the source code makes them feel safer, but it doesn't actually make them safer.


> Nobody does this, of course.

Every Linux distribution out there compiles everything in their repos from source. If you use AUR packages on Arch, you are building almost everything locally from source yourself. You can pull build scripts from Launchpad, the SUSE OBS, or almost every distro's package repository (they all have automated build systems for everything, and you can do it all yourself).

Oh, and there is a little distro called Gentoo some people use where you literally build everything from source yourself.

So nobody builds open source software from source, besides the millions of people who do it all the time. Sure, it is a tiny fraction of total users, but there is a dramatic difference between "nobody" and "somebody", just like there is a dramatic difference in confidence between "we're using this secure crypto, trust us, it's there! it's not just something else masquerading as secure crypto that we have backdoors all over!" versus "here is the source, go build it yourself if you don't trust us".

The value is not in everyone building from source themselves, just as my confidence in my software does not come from auditing every line myself. It comes from the general freedom of information about the security. By offering the information so that I could test it myself, a project earns inherently more trust from me, even if I never take advantage of that source availability. Even if nobody ever finds exploits in the disclosed code, there is still a trust that a proprietary vendor can never have, and which I implicitly give free software projects, because it is much more precarious to backdoor transparent software than a black box; just making the box transparent at all adds value.


I'm not interested in the argument about your freedoms and I don't much care about things that work only for people running Gentoo. That stuff is a game. When security really counts, it's used by people who don't build their software.

That's what happened when Snowden started talking to Greenwald. Unfortunately, because our field is completely incoherent about security, instead of using a secure messenger, they used Cryptocat. Oh, the source code for Cryptocat was easy to get at, by the way! Didn't do much to keep those messages safe.


> cryptographic building blocks of the new WhatsApp protocol are available in source code

Is there any way to verify that they actually use (an implementation of) this in their builds? There's only one client, and I presume we can't make a third-party client to show it's interoperable with their implementation of the Axolotl code.


>If WhatsApp is so evil that they've backdoored their product, it is ... stupid to leave that backdoor in their source code.

Reproducible builds[1] are the solution to that problem. Not everyone has to build the source if a) building the source produces the exact same binaries every time, and b) the source can be built by a trustworthy party to verify that it matches the build being distributed. This way, you only need one good build cop to warn the rest of the population about bad binaries.

Signal for Android supports reproducible builds[2] so it seems entirely possible an open source WhatsApp client could as well.

[1]https://reproducible-builds.org/

[2]https://github.com/WhisperSystems/Signal-Android/wiki/Reprod...
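
The "one good build cop" check boils down to something like this (a sketch; the file paths are hypothetical, and in practice the signed META-INF portions have to be excluded from the comparison, as the Signal wiki describes):

    # Hash the APK built locally from source and the APK distributed to users;
    # with a truly reproducible build, any mismatch is grounds to sound the alarm.
    import hashlib

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    local = sha256("app/build/outputs/apk/app-release-unsigned.apk")  # hypothetical path
    distributed = sha256("from-play-store.apk")                       # hypothetical path
    assert local == distributed, "distributed binary does not match the audited source build"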


Just in case you don't know, tptacek is a world-class security/cryptography expert who happens to hang around and share his wisdom here. You appear to be assuming he's someone thinking one or more levels lower than you. I think it would be smarter for you to assume that he fully understands your argument but is thinking one or more levels higher than you.


agreed, i had no idea. i am humbled.


What's worse is that I am nowhere close to the best crypto security person on HN, and (unlike me) the best crypto people here don't brag about it.


Could you share some links about Snowden & Cryptocat story you mentioned above?


Here is one reference: https://en.wikipedia.org/wiki/Cryptocat. Additional googling will show more.


Well I asked because googling showed me very little. Basically Greenwald and Snowden used Cryptocat during some online interview.


Is there something more you're looking for?


iOS is not completely open source, so how do you know that Apple isn't scraping your screen or the iOS keyboard isn't sending your characters?

Also, Apple's push messaging subsystem gets a copy of (most of) the message too, so Apple could be doing evil things there too!

OH NOES!


i agree! the full stack needs to be opened.


What about their hardware?


yes, the hardware too. you're reinforcing my claims, not negating them.


I can't reply to your other comment for some reason, so I'm replying to this one.

>If WhatsApp is so evil that they've backdoored their product

Unless they want to go the way of Lavabit, every company is evil when kindly asked to be. (Well, maybe it helps when you have the weight of Apple, but that wasn't even an NSL, if we heard about it.)

>it is "supervillain monologuing for an hour while the hero escapes"-grade stupid to leave that backdoor in their source code.

And if they don't, a reproducible build process will cause someone to notice the binary doesn't match up and sound the alarm, somewhat similar to how a warrant canary works even if not everyone is checking it. (Sure, the company could always go back on their open source promise for other reasons, but at least everyone is left with a last known-likely-good inspectable and forkable version.)


> I can't reply to your other comment for some reason...

Try clicking on the timestamp for the comment when you have this trouble. A reply box should appear.


You have made this point seven or eight times that I have seen and it is bordering on cultish repetition. It is definitely spam at the least. We heard you. You're wrong, you became personal in your wrongness and accused a guy who does security for a living of not understanding E2E (do you?), and it's probably time for you to take a break.

I implore you to do so. Post haste.


i apologize. i'll try to be more sensible in my posts. it is just really frustrating sometimes.


> open source is a must to even begin to work to verify any claims of security.

I think this is losing sight of what source code actually is. It's the easier to read and easier to edit way to define what a person wants the computer to do. But, it isn't actually the exact instructions the computer executes.


Yeah.... I am pretty sure he DOES know how e2e security works.


That's not necessarily true. I haven't spent much time analyzing it yet, but your assertion seems dubious. Please correct me if I'm mistaken, but can't you verify end-to-end encryption without viewing the source of the program?

They're using the Signal protocol, which has been well-vetted.

What I've seen so far in Wireshark looks good, but I am not a crypto expert. I'm in the process of reading and trying to understand the whitepaper[0] now.

I doubt OpenWhisperSystems would condone the use of WhatsApp without verifying the app uses e2e.

[0] https://www.whatsapp.com/security/WhatsApp-Security-Whitepap...

EDIT: Spelling.


without the source, it is not verifiable. it could be making a copy and sending it in an encrypted side-channel for example to their overlords.


You can definitely verify the code without the source. It may often be much harder, but it is not un-verifiable. Please stop spreading misinformation like this.

With all the wonderful reverse-engineering tools, disassemblers, debuggers, etc. available, and the numerous massive communities that use them, it seems pretty out there that you could seriously think such a thing...


It's much easier to verify the source of a program once, and then verify the future changes to the source code with the aid of a diff. With a binary, a change can result in a very different binary that takes a lot of effort comparable to the original verification all over again.


skype is encrypted and backdoored. how can you know the same isn't true of whatsapp?


People have verified that Skype is NOT secure[0]. Also Skype has not claimed to use end to end encryption.

[0] https://www.eff.org/deeplinks/2014/11/scorecard-update-we-ca...


Obfuscation would be a better term here. They've made it hard to reverse-engineer, but certainly not impossible.


Well, people were able to figure out skype was not secure without the source.


> without the source, it is not verifiable

And would you also claim that without the source, you can't find security problems?


That is not absolute. The binaries are the absolute truth.


Can you stop spamming your comment into every single thread? If you have a smart comment post it at the top level.

A smart comment would be a detailed analysis of the pros and cons of open source in terms of verification. You are barely more than a troll.


I should use this as a strong argument for the rest of my no-please-not-another-instant-messenger-i-stay-with-whatsapp-contacts:

When WhatsApp is implementing the Signal Protocol, why should one use WhatsApp in the first place?[1]

I hope this good advertisement gives signal another round of traction.

[1] Ok, you've got me. The less techie contacts will swing the feature-bat, and the others already use Signal.


"Normal" people will not have their choice of messenger swayed by any argument that uses the word "protocol".


Point taken. If I talk to "normal" people, I wouldn't use the word "protocol" either (or at least not as a conversation starter). Rephrased for normal people, it's more like

"When whatsapp take the fundamental core from signal..."


Can Signal be used to contact people on WhatsApp?


As far as I know: no. WhatsApp just implemented the (open source) Signal Protocol, formerly known as Axolotl, with the help of Open Whisper Systems. The rest of the IM "ecosystem" (like push notifications, contact recognition, server infrastructure, etc.) is left untouched.



