WhatsApp Rolls Out End-To-End Encryption to Its Over 1B Users (eff.org)
330 points by randomname2 on April 8, 2016 | 226 comments



It's great that WhatsApp can't see my cat pictures anymore. But there are two privacy and free speech issues that remain unaddressed.

Firstly, METADATA. They know who I contact, when I contact them, and how frequently. So people could derive information about me based on who I talk to.

Secondly, they can ban me.

Perhaps the EFF needs to add more criteria to its secure messaging scorecard. https://www.eff.org/secure-messaging-scorecard

But on the whole, a positive move.


I think the biggest privacy / free speech issue is sustainability. Or the OpenSSL / OpenPGP problem.

What does a feasible monetization / business model look like, built around software that is privacy-focused and customer-centric?

Because it's all well and good when you have a massive social data-mining company to bankroll your development, but what happens if that disappears / priorities change / the US government leans hard on FB?

Sustainability is important in code development too.

PS: In no way is this intended to diminish the efforts of the team. Hats off to everyone at WhatsApp for implementing this, FB for supporting it, and EFF for their general awesomeness!


When the client is a phone and the service is rapid messaging, how do you even begin to solve the metadata problem?

Even with opaque routing the phone company can correlate activity times.


There's never a single right solution, but if traffic analysis were the issue, endpoints could introduce latency, use Tor, sync/mix traffic with other endpoints via P2P, inject fake traffic, etc. - though if the core system leaks metadata, this would be pointless.


Messaging apps are often used on the go, with limited bandwidth, and are highly interactive. I would be annoyed to wait more than 0.5 sec for my message to be delivered.

Maybe an "ultra-secure" mode could be created, and if you enable that for a chat, traffic could be routed through Tor. And you would have a separate list of "ultra-secure" contacts, which are not based on phone numbers.


Special routing for the ultra-secure messages highlights that they were requested to be ultra-secure and makes correlation easier (by reducing the candidate pool).


Right, but it would be like the ISP knowing that you use Tor. Useful, but not that much if you do it constantly.

This ultra-secure mode, for example, could send a 1 KB packet every 15 minutes, whether you are talking with someone or not. Something would also need to be done to incoming packets so that they don't stand out. I'm sure there is a suitable protocol out there. This would increase latency and reduce capacity, but this is why it's called ultra-secure; some inconveniences are expected.
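
To make this concrete, here's a rough sketch (in Python, purely illustrative) of what such a constant-rate scheme could look like; the send() callback is hypothetical and assumed to encrypt, so padded messages and filler are indistinguishable on the wire:

    import os
    import time

    PACKET_BYTES = 1024
    INTERVAL_SECONDS = 15 * 60

    def cover_traffic_loop(send, outbox):
        # Every interval, emit exactly one fixed-size packet: a real
        # (padded) message if one is queued, random filler otherwise.
        # Since send() is assumed to encrypt, an observer sees the same
        # traffic pattern whether or not you are talking to anyone.
        while True:
            payload = outbox.pop(0) if outbox else os.urandom(PACKET_BYTES)
            send(payload[:PACKET_BYTES].ljust(PACKET_BYTES, b"\0"))
            time.sleep(INTERVAL_SECONDS)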


You could use a decentralised network such as Bitcoin to send messages.

I would also implement Bloom filters to lower my network usage and confuse the nodes. https://en.wikipedia.org/wiki/Bloom_filter

I would choose random nodes to cause a bit more noise.

And finally my messages would have some sort of Bloom filter derived destination address. So as to confuse it all a bit more.
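
For anyone unfamiliar: a Bloom filter is a probabilistic set where membership tests can return false positives but never false negatives, which is exactly the ambiguity being relied on here. A minimal sketch, not any particular messenger's implementation:

    import hashlib

    class BloomFilter:
        def __init__(self, m=1024, k=4):
            self.m, self.k, self.bits = m, k, 0

        def _positions(self, item):
            # k hash positions in an m-bit array, derived from SHA-256
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            # True may be a false positive; False is always correct
            return all(self.bits >> pos & 1 for pos in self._positions(item))

A node routing on might_contain() can't distinguish intended recipients from accidental matches, which is where the confusion described above comes from.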


If you are merely confusing the metadata, it's irresponsible to tell people that your app does something useful to resist metadata discovery.

It doesn't necessarily make it useless to implement the obfuscation, but short of an implementation that guarantees resistance, it's likely to be harmful to tell people that it resists metadata discovery.


The open source part covers both those problems. If WhatsApp respected user freedom, users could fork it and add metadata encryption to the system, and the nature of forking and redistributing would create a network of similar apps.

It doesn't stop WhatsApp proper from being non-federated (unlike XMPP or Matrix), so maybe that does deserve its own criterion.


Agree. I'm reluctant to be critical here, given how much those involved have managed to get done, but from what I'm able to tell, how low-priority this issue appears to be is very concerning.

A second issue that is pressing to me, given the volume of users, is that a commitment needs to be made to execute an audit of the code/crypto/workflow/etc. I'd start a campaign to fund this, but such a campaign would be much more likely to succeed if it were officially acknowledged and, more importantly, if resources were provided internally to enable it.


True, forgot about that.

Even if we could see the code, it might be possible for Google or Apple to send certain groups of people modified software updates?


Apps that force you to give them (and everyone else) your phone numbers for .. ummm ... "contact discovery" and yet talk about privacy are a bit of a contradiction.

There's no plausible reason why apps like WhatsApp (and Signal) couldn't use e-mail addresses for this, or at least provide them as an alternative. It's even problematic for people who change phone numbers, have multiple phones, want to use desktop clients, etc.

My opinion: if it requires a phone number, it's not really interested in privacy. Move on.


I feel there is no relation to privacy here. It's also the practical choice. Emails can be easily created by people pretending to be someone else. If you have someone's number on your phone, there is a good chance you have communicated with them before and have thus done some authentication. You can keep creating new email addresses to spam people. That becomes very difficult with numbers.


Sure there is. My phone number is private information. And it can trivially be traced back to me (no possibility of anonymity).

I don't necessarily want to share it with everyone I want to talk to. Or every company that makes a chat app.

I'm pretty sure Snowden didn't start out by giving his phone number around...

Plus every successful chat app before WhatsApp (ICQ, XMPP apps, MS Messenger, Google Chat, Facebook Messenger) was fine without this requirement.

Is the hypothesis that it is now impossible to have a successful chat app without it being tied to a phone number?

We are not discussing a convenience "option" the user can skip but rather a requirement for these apps to work.


WhatsApp is a mobile messaging app. It's a drop-in replacement for SMS. There would be absolutely no reason for anyone to use WhatsApp if they were just another chat app with their own contact discovery that's not based on phone numbers. Their market share would be minuscule.

No one is forcing you to use a messaging app that uses your phone number as an identifier to contact people you don't want seeing your phone number.


No one is forcing anyone to make claims about privacy either.

1) New social networks tend to replace old social networks: I'm not forced to use it but, for example, I can't go back to XMPP because others don't use it anymore. Meaning the alternative is not to communicate.

2) If everyone, including open-source developers (e.g. Signal), is doing this now and no one is developing alternatives that run on modern platforms and are easy to use in a non-error-prone way without phone numbers, what exactly are the options, even disregarding social network effects?

3) I doubt it's fully clear to many people what can be inferred from metadata, namely that it's trivial to prove who they spoke to and when. E.g. if it had been trivial for the NSA to know that one of its contractors was chatting with reporters, it would've been game over for Snowden. In courts, circumstantial evidence like this can still be (and has been) used to imply guilt. Implying privacy under these circumstances seems unwise.

4) WhatsApp is mobile, fast, reliable, and easy to use; most of their competition at the time it launched, and even now, can't match that. This includes FB Messenger, Skype, Google, MS Messenger, etc. I'm not sure it is proven that their success is entirely due to phone numbers. I've seen lots of people exchange phone numbers just for WhatsApp.


> 1) New social networks tend to replace old social networks: I'm not forced to use it but, for example, I can't go back to XMPP because others don't use it anymore. Meaning the alternative is not to communicate.

Right, but this is a mobile messaging app. There are still hundreds of desktop (or: not primarily mobile) messaging apps out there, many of which are very popular, and they're not going away any time soon. Phone numbers are how mobile phones are usually identified, and to ask a mobile messaging app to use something else for identification is, to put it bluntly, quite silly.

> 2) If everyone, including open-source developers (e.g. Signal), is doing this now and no one is developing alternatives that run on modern platforms and are easy to use in a non-error-prone way without phone numbers, what exactly are the options, even disregarding social network effects?

There's nothing in the protocol that forces anyone to use phone numbers as identifiers. The current users of the Signal protocol just happen to be mobile messaging apps, where it makes sense to use phone numbers as identifiers.

> 3) I doubt it's fully clear to many people what can be inferred from metadata, namely that it's trivial to prove who they spoke to and when. E.g. if it had been trivial for the NSA to know that one of its contractors was chatting with reporters, it would've been game over for Snowden. In courts, circumstantial evidence like this can still be (and has been) used to imply guilt. Implying privacy under these circumstances seems unwise.

Now you're talking about metadata, and that's a different topic entirely. Merely using a pseudonym instead of your phone number as an identifier is going to do exactly nothing to prevent any of the things you mentioned. If you want to hide your metadata, you should be looking at something like vuvuzela[1]. No one claimed that WhatsApp is doing any of that.

> 4) WhatsApp is mobile, fast, reliable, and easy to use; most of their competition at the time it launched, and even now, can't match that. This includes FB Messenger, Skype, Google, MS Messenger, etc. I'm not sure it is proven that their success is entirely due to phone numbers. I've seen lots of people exchange phone numbers just for WhatsApp.

I'd argue that they wouldn't have been able to reach critical mass without phone number-based contact discovery. That they were the only ones who did it right at the time doesn't mean the two aren't related.

[1]: https://github.com/davidlazar/vuvuzela


> Phone numbers are how mobile phones are usually identified, and to ask a mobile messaging app to use something else for identification is, to put it bluntly, quite silly.

I disagree it's silly.

> There's nothing in the protocol that forces anyone to use phone numbers as identifier

I'm aware. But there's also no alternative to phone number now. So my point stands until this changes.

> Merely using a pseudonym instead of your phone number as an identifier is going to do exactly nothing to prevent any of the things you mentioned

Yes, it does, if the pseudonym cannot be connected to me. That requires other security/privacy measures, but phone numbers prevent that.

> No one claimed that WhatsApp is doing any of that.

If you say something is great for privacy but don't include the asterisks, it's misleading.

> I'd argue that they wouldn't have been able to reach critical mass without phone number-based contact discovery.

No one has proven it either way. I can't be certain they would've been successful; you can't be certain they wouldn't. But I do submit as evidence of my hypothesis that several social networks were successful on mobile without it (e.g. Twitter, Instagram and Facebook).


> But I do submit as evidence of my hypothesis that several social networks were successful on mobile without it (e.g. Twitter, Instagram and Facebook).

All of those examples use phone numbers as (at the very least) a secondary contact discovery mechanism on mobile. All but Instagram had existing user bases when they went into the mobile market. None of them are primarily a messaging app.

Do you have an example of a popular mobile messaging app without phone number-based contact discovery?


Do you have an example of a popular mobile messaging app other than WhatsApp?


WeChat has over 650 million users, mostly in China. Quoting Wikipedia: "WeChat allows people to add friends by a variety of methods, including searching by username or phone number, adding from phone or email contacts [...]"


Good one.


The trade-off here is to make it fit seamlessly into the place of SMS. The only thing WhatsApp or Signal KNOWS you have for all of your SMS contacts is the phone number.

It's very possible to make a chat app that doesn't use this number, but then contact discovery becomes a harder problem, and people are less likely to use the app if it requires them to recreate their whole contact list.
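
One middle ground sometimes suggested is querying the server with hashed identifiers rather than raw numbers. A sketch of the idea (illustrative only; the function name is made up):

    import hashlib

    def discovery_tokens(phone_numbers):
        # Send opaque tokens instead of raw numbers. Caveat: phone
        # numbers have so little entropy that a server can brute-force
        # plain hashes, so this shows the mechanism, not a real fix.
        return [hashlib.sha256(n.encode()).hexdigest() for n in phone_numbers]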


EFF should stop with this silly scorecard. I hate the thing because it's inaccurate and incoherent (arguments I've made ad nauseam elsewhere on HN), but on this thread you can see another good reason: it makes EFF the ref, and crowds always try to work the ref.

So whatever "score" WhatsApp gets, it's the wrong score, because: not open source; because: runs on iPhones; because: metadata; because: Facebook is evil, &c.


But the point the EFF makes is very important. If the code is not open source, we just cannot verify the security of the application. All applications must be considered unsafe unless we can review the code. We can't review WhatsApp's source code, and therefore WhatsApp is to be considered unsafe.

So yeah, WhatsApp has made a probably positive move, but it is still largely unsafe.


But if the code is open source, we _can_ verify its security?

Heartbleed and Shellshock, the two most significant vulnerabilities found in heavily used open source software, were found by vulnerability testing and not code inspection. So while being open source is a nice-to-have attribute for a piece of software, that's as far as it goes. Painting open source as being a magical wand that wishes away all our security troubles is completely out of order.
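
For reference, the black-box vulnerability testing meant here can be as simple as throwing malformed input at a target and watching for crashes, no source required. A toy sketch (real fuzzers add input mutation and coverage feedback):

    import random

    def fuzz(parse, rounds=10000):
        # Feed random byte strings to a parser and collect exceptions.
        crashes = []
        for _ in range(rounds):
            data = bytes(random.randrange(256)
                         for _ in range(random.randrange(1, 64)))
            try:
                parse(data)
            except Exception as exc:
                crashes.append((data, repr(exc)))
        return crashes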

Edit: I'll go further. It's become dismayingly apparent that very little systematic code review of open source software in order to secure it is actually taking place. It now seems quite possible that the most thorough investigations of software vulnerability, via code analysis or any other techniques, are carried out by those wishing to exploit them. They are well funded and highly motivated. Looked at in that light, the balance may well tip towards open source actually increasing the likelihood of software vulnerabilities being exploited maliciously.

The open source community has a long way to go if it's going to clearly demonstrate that its model is advantageous, and complacent pronouncements of its assumed superiority like this aren't going to achieve that.


A verification of security is always based on an attacker model. There is no general security.

The actual point is not security, but trust and certainty. Being Open Source does not change whether some messaging app is secure or not. It changes our knowledge about the code. It also makes me have a little bit more trust in some company for revealing their stuff.

Even if WhatsApp open-sources the clients, there is still the problem of US jurisdiction. The NSA can force them to silently include a backdoor or send everything to NSA servers.

Trust is a difficult thing.


> But if the code is open source, we _can_ verify its security?

No - the code could be "open source" but unreadable. But if it's not open-source there's definitely nothing we can do.

> Heartbleed and Shellshock, the two most significant vulnerabilities found in heavily used open source software, were found by vulnerability testing and not code inspection. So while being open source is a nice-to-have attribute for a piece of software, that's as far as it goes

a) afl-fuzz and the like require access to the source code.

b) Those were the vulnerabilities that made it into production. They tell you nothing about what proportion of potential vulnerabilities were stopped by it being open source.


Fuzzers do not require source code. That is absolutely false. Afl-fuzz does, but you badly overplayed your hand by adding "and the like". Fuzzing proprietary closed source protocols by instrumenting closed-source binaries is a bread-and-butter software security project that virtually any application security consultant can do effectively.


And even afl has qemu-mode, although I have no idea how well that works.


> there's definitely nothing we can do.

This is very much not true. Understanding what any executable does is possible without the source code. In fact, if you are looking at the binary, you know exactly what you are dealing with, and don't have the doubt about what the build chain actually does with the source.

> potential vulnerabilities were stopped by it being open source.

This presumably comes from "many eyes make all bugs shallow". But operationally this is not true, because there isn't really, except for rare circumstances, any useful review.


Thanks for the correction regarding afl-fuzz, I wasn't aware of that. Unfortunately I can't update my post to fix that anymore.

I think this doesn't fundamentally change my point though. afl-fuzz was used to find Shellshock, but that vulnerability had been in bash for decades. If it had been found and closed within months of being introduced I'd be cheering the advantages of open source with the best of them, but that's decades during which a bad actor could theoretically have found and exploited that vulnerability with impunity using code analysis. That's exactly what I mean by the balance of advantages versus disadvantages of open source being tipped the wrong way right now.

I'm no enemy of open source, far from it; I'm just arguing for an open and honest assessment of the situation. If open source is really going to be a genuine security advantage, there's an awful lot of work to be done to make it so. It's not going to happen spontaneously.


> I'm no enemy of open source, far from it; I'm just arguing for an open and honest assessment of the situation. If open source is really going to be a genuine security advantage, there's an awful lot of work to be done to make it so. It's not going to happen spontaneously.

I don't think you've shown anything about relative vulnerability rates. I agree that there are massive problems with even major open-source projects, the general state of software security is terrible, and we have a lot of work to do on that front (starting with moving to memory-safe languages post-haste). But none of those things contradicts that open-source software is much more secure than closed-source software.


You haven't yourself empirically established that open source software is more secure, so you should be less smug about the logic you're using here.

In reality, software security is a function of the amount of expert attention that has been paid to a given piece of software. Popular open source software attracts a certain, stochastic, significant amount of attention, but money buys a more reliable amount of attention. There is insecure open source and secure open source, insecure closed source and secure closed source. Open source is a red herring.

There's a reason why, for instance, Firefox had Bleichenbacher's E=3 vulnerability, while IE didn't.


Note that Heartbleed was not discovered via fuzzing.


"The open source community has a long way to go if it's going to clearly demonstrate that it's model is advantageous"

How is it less safe than not having the source, though? Absent the source, how can you trust that the once trustworthy, if secret, code hasn't been nobbled via hackers, a court order, etc.?

The model of security through obscurity is what's broken, and that's what we've got here. You have to trust Facebook today, and forever and trust anyone else with the technical or legal power to silently violate its security; something that's much harder if the code is available for analysis.


It's not less safe to have the source code. Source code is a good thing. All things being equal, I prefer open source components too. But all things are not equal.

In this case, if you're ideologically attached to open source, you're in luck: the cryptographic components WhatsApp uses are open source, and there's another good messenger that uses them: Signal, by Open Whisper Systems.


The problem with WhatsApp is that IT is not open source, so you only have Facebook's say-so that they're using the Signal code, you have to have faith that they've implemented it securely, and you have to trust that, as an American company, the current or future government won't silently compel them to introduce a weakness, such as having the phone, when it gets the message `sing`, send an encrypted copy of the last n messages back to Facebook. I'd be more comfortable using the compilable, open source Signal app, because I at least have the opportunity to see what the code is doing, should I somehow manage to find an unsabotaged version of clang, gcc, VS, etc.


People need to stop writing this exact comment. Even if you're right about open source and security, you can't be right about it for this reason, because this comment is false.

You are not in fact stuck with Facebook's "say-so" about what WhatsApp is doing. Obviously, if you have the executable binary, you can straightforwardly validate its functionality with a disassembler/decompiler. There are thousands of people that do this professionally.

There might be some other reason why it's important that WhatsApp's code be published, but it can't possibly be this one.


Has anyone actually done this? I lack the skill to, but for a piece of software this important, surely someone is working on it? And will it be repeated for every update?

Personally, if I were the NSA or GCHQ, I think it would be a wicked smart plan to release this and have it be actually secure, have it audited, gain trust, let people use it for a while, then release an update that shoved all the messages in memory to a server somewhere once people let their guard down. Implausible, yes, but that's the issue with trust. And I'm not sure releasing the clean source would really help this. The only thing would be to hold off on updating the app until a trusted party had audited the updated binary. More paranoia: we would also have to audit the specific binary in the app store of every country with weird laws (e.g. the UK's RIPA) to make sure we didn't get a nice local flavour of poison.


If WhatsApp releases the code, how do you know that's the code they run their service on? Open source does _nothing_ with respect to the trust issue with third party services like WhatsApp or Facebook. It's completely orthogonal.

I was wrong about Heartbleed; it was found through fuzzing - decades after the vulnerability was introduced. That's decades during which source code analysis could have been used by bad actors to find the same vulnerability and exploit it.

Opening the code doesn't by itself eliminate the vulnerabilities. What it does do is fire the starting pistol on a race between black hats and white hats to find and either exploit or close the vulnerabilities. It's very much a two-edged sword. So what we need is confidence that the white hats are winning that race.


"Opening the code doesn't by itself eliminate the vulnerabilities. What it does do is fire the starting pistol on a race between black hats and white hats to find and either exploit or close the vulnerabilities."

The race is more dependent on motivation than whether the code is open or closed. Often, blackhats are financially motivated (whether they themselves monetize the vulnerabilities directly - or they are hired-by/paid-for a 3rd party) - they factor in their return on investment (ROI). Open source merely makes their job easier but given enough motivation, they can and will find vulnerabilities in binaries, remote services, etc almost as easily.

One problem is that many people believe in "many eyes make all bugs shallow". Numerous examples prove that assumption false. Is it because there is more financial interest in closed systems? Or is it because in open systems people automatically assume that because it's open, someone else must have vetted it? (e.g. "if I'm interested in it, and thousands/millions are too, then it's highly likely some expert better than me already looked at it")


If the source is open you can verify/test both ways; if it isn't, you only have the external testing option.

The code being publicly available does not guarantee security, but it makes insecurity easier to find, which on balance indirectly increases security.


"Rogues are very keen in their profession, and know already much more than we can teach them" said locksmither Alfred Charles Hobbs, who in 1851 demonstrated to the public how state-of-the-art locks could be picked, in response to concerns that exposing security flaws in the design of locks could make them more vulnerable to criminals. This quote is not less true today when applied to blackhats. Open source doesn't mean secure, but security trough obscurity is not a good security model.


> The open source community has a long way to go if it's going to clearly demonstrate that its model is advantageous

Some in the open source community tout the "many eyes make all bugs shallow" argument, but in the free software community we do not assert this. Part of security is transparency, and this means being able to read the source code: freedom 1. Free software is a prerequisite for secure software that a user can trust.


I understand and respect the tenets of your religion, but I do not practice that religion myself.


Be civil. Don't say things you wouldn't say in a face-to-face conversation. Avoid gratuitous negativity. - Hacker News Guidelines

If you are opening up for incivility, I doubt there is an end to what people will start calling each other. Do you really want to go down that road?


Thanks for being condescending.


If it helps you understand where I'm coming from, while I do in fact admire the FSF's ethic of free software, more so than I admire "open source", as a practitioner in the field of cryptographic security I find FSF-style preaching about security to be intensely condescending as well.

More here:

https://news.ycombinator.com/edit?id=11455588


The linked comment has NOTHING to do with what the FSF has said about software security.


If you want to compare the track records of closed-source software with open source, we have IE6 and Flash. We have Windows 98 and XP. We have every snake-oil product ever sold that the producer knew wasn't safe but sold anyway. Heartbleed and Shellshock were big bugs, but they did not create the concept of botnets, nor were they so successful that they created a market where before there wasn't one.


So basically the theory here is that if you look back to software written in the mid-to-late 1990s, you tend to find insecure software.


If you want to be like that, all of them were written in the mid-to-late 1990s or before:

Bash was released in 1989, and its development started some time before that.

OpenSSL project was founded in 1998.

Windows XP was initially developed in the late 1990s and released in 2001.

Macromedia Flash 1.0 was released in 1996 and initially developed a couple of years before.

So should we only count software vulnerabilities that were created after 2010? 2016? What proof would satisfy you?


I'm really not sure what point you're trying to make here, because obviously both Bash and OpenSSL are poster children for the perils of legacy insecure software, and both are still in wide use --- far wider than Windows XP!


Are you confused by the parent argument that Heartbleed and Shellshock are proof that open source has the same likelihood of software vulnerabilities being exploited maliciously as closed software?

Or are you confused by your own argument that software written in the mid-to-late 1990s tends to be insecure, and thus we should disregard that some software has been more exploited historically than others?


Neither. I think age has a lot to do with whether software is or isn't secure. Software written in the 1990s (or before) is unlikely to be secure. I think open/closed source has very little to do with security. Some open source software is secure, some isn't, and the same goes for closed-source software.


When the source code has a whitelist of usernames for which encryption should be disabled, put there under court order, you'll see the difference between external verification vs source code inspection.


I really think that's what most people arguing against WhatsApp believe: that a backdoor in a secure messaging application would take the form of intelligible source code expressing directly that backdoor.

Obviously, a WhatsApp backdoor would not in fact take that form.


What would it look like?


A binary patch.


Could be a number of things. Some might be visible only during build/packaging as you say, others visible in code.


Why would anyone patch the source code when they can just as easily patch binaries?


"Open source" isn't quite the right term, but it's on the right track.

What you definitely don't need is permission to redistribute, modify, etc. the software. Those are important for user freedom, but for the goal of verifying that a particular app on a walled-garden app store does what it claims to do, those don't help you. You just need access to source code.

What you do need, whether or not you have source, is an understanding of what the software does. Sure, you can probably do this by disassembling the software. But if you can disassemble the software and understand it, there is nothing more that having the actual source would tell you. If the company is expecting that people will disassemble it to verify it, they might as well release source.

What you also need, given either source or a binary, is assurance that everyone else is running the same binary as you have. Given source, that probably requires reproducible builds and a documented and reasonable build chain. Given a binary, that requires some distribution mechanism that ensures that everyone in fact gets the same binary.
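
As a sketch of that verification step, assuming reproducible builds and hypothetical file names, anyone could compare their own build against the distributed artifact:

    import hashlib

    def sha256_file(path):
        # Stream the file so large binaries don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # Meaningful only if the build is bit-for-bit reproducible:
    assert sha256_file("my-build.apk") == sha256_file("store-download.apk")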


> But if the code is open source, we _can_ verify its security?

But if the code is not open source, we _cannot_ verify its security?

The ability alone is already good. Not sufficient, but necessary.


Why do you believe that we _cannot_ verify its security?


I mean closed sources are .. closed sources (did I say something stupid?). Unless you consider black-box testing or things I don't know about.


The original source code is unavailable. But the assembly code that actually implements the program is not, and there are a variety of techniques reversers use to lift assembly into higher-level representations --- not that you'd need to in this case.


Do you understand the concept of "necessary, but not sufficient"?


Yes I do, but that's not what the post I was responding to was saying. However, I have not seen any arguments that convince me of its necessity. Furthermore, we're talking about third-party services. What code they open source and what code they actually run on their servers are two different questions. Open source doesn't solve this problem whichever way you slice it.


See this thread for tptacek's views on the topic:

https://news.ycombinator.com/item?id=11432047

In short, "sources do not guarantee anything, and it's better to inspect the binary directly".


I don't think that open source is considered a requirement by the professional security community.

Which makes sense, we've known for a very long time that you can't trust the source, so you must verify the binary one way or the other.


And one way to increase confidence in the binary is to compile it yourself.


Twice, using 2 different compilers. That you have also audited.

Or you could observe what the original binary actually does, which you need to do anyway.


Exactly what I would expect to hear from an NSA stooge!


At which point you might as well just let someone else compile it and look at their binary.


If you have the source then you can see what it does.

If you only have the binary you have no idea what it does. You can observe what it does today, but what use is that if tomorrow it receives a short message from Facebook which tells it to email back the encrypted, compressed copy of your messages it's been quietly keeping for the last few hours/days/weeks?

The argument "what about this or that open source project with bugs in it" is silly. It's an argument for more scrutiny of code, not for blindly trusting third-party code where you have no idea if they can be trusted today or in the future, or can be compelled to spy on you, or to introduce weaknesses deliberately into their products. Open source mitigates several of these risks.


> If you have the source then you can see what it does.

That is literally false. We have known since the '70s that you cannot know what an application will do from the source. I can't believe I'm actually having to reference "Reflections on Trusting Trust", but it is the entire point of one of the most seminal talks in our industry.

To defeat that style of attack you literally need two different compilers; I didn't pick that out of a hat, it's also from the literature.

> If you only have the binary you have no idea what it does. You can observe what it does today, but what use is that if tomorrow it receives a short message from Facebook

Figuring out what a binary does without source is a very common task in our industry. I'm told there are people who do it for fun, and I know people that do it for money.

Regardless of whether you have the source or not, you have to verify the binary's behavior, both because of intentional trusting-trust attacks and because of flaws that are non-obvious from the source.

I'm sure the source is helpful, and source with repeatable builds is even better, but it's not a requirement, nor does it seem to be a very high priority for the people who do this sort of work professionally.

>"what about this or that open source project with bugs in it"

I am certainly not making that argument.


Yeah, it's all old news. I regularly use several different compilers, so that's not the shock you suspect it might be. Also, for your paranoia to be really effective, you must assume the disassemblers are also in on the act.

You need the source to have even a crack at being sure there's no foul play. Trusting the current behaviour of an app now is no guarantee. Anyone could write an app which, when it receives a certain message at a certain time, subtly changes its behaviour. If you had the source this wouldn't be possible (ok, it would be possible if someone's got at the app and all major compilers and disassemblers going back years and wants to blow that little secret on this one exploit); you'd have a chance of seeing the logic.

If you don't have the source then you're no better off because you have all those risks plus you have no idea whether or not someone has good intentions but a poor testing regime, or a bad actor, or any number of other problems.

People who do security work successfully seem to be unanimous that you don't do your own security, and that you use open source solutions so you can see that people are doing what they say they are doing. And this is done in the name of reducing risk, by reducing the number of people you have to trust.


Could you point out some of these "people who do security work"? I should like to meet one someday.


> If you have the source then you can see what it does.

What if it exploits a compiler bug to do something different than it looks? What if the compiler recognizes the code and instruments it with its backdoor? Sure, GCC probably doesn't, but what if the code was meant for Windows and only builds under MSVC, which you can't audit? What if the code exploits an undocumented “feature” in Intel x86 processors? What if it uses a backdoor in Intel's randomness hardware? Modern Intel processors have many undocumented features and essentially can do whatever the hell they want with your code. The NSA almost certainly has a backdoor in Intel's RDRAND[1]. x86(_64) is impossible to trust; even VIA CPUs may be compromised (though they lack the “management engine” of Intel and AMD—if I were to 100% audit a crypto-related program that only runs on x86, I would use a VIA system and intercept any instructions that use the hardware RNG).

Point is, open source makes some things vastly easier, but you _need_ to audit the binary anyway. That's the _only_ way to trust it. You _cannot_ trust the source code. You cannot even trust a machine other than POWER, RISC-V, or SuperH/JCore.

If you're not going to be paranoid as all hell, auditing crypto is pointless.

1. http://arstechnica.com/security/2013/12/we-cannot-trust-inte...


You compile with various compilers and you decompile them with different decompilers. This is trivial today.

Randomness hardware? There are already perfectly good PRNG routines available as used by currently available strong encryption.

I understand the criticisms of just relying on the fact that something is open source but the alternative - closed source - has exactly 100% of those problems, plus a whole other lot of new ones too.


"Perfectly good PRNG routines"? No, that is not at all true. Serious cryptographic software uses the OS's random number generator. There are not in fact "perfectly good PRNG routines" ready to take off the shelf and plug into crypto software.


> Randomness hardware? There are already perfectly good PRNG routines available as used by currently available strong encryption.

Yeah. How do you seed those with a good source of entropy? That's what RDRAND etc are for.


> it is still largely unsafe

It's not unsafe, you just don't know if it's safe or not. That doesn't make it unsafe.

Even if the code were open source and there were some guarantee that WhatsApp was using that exact code during compilation, you still would not know if it's safe. Just like most other people, you are relying on people experienced with such things to confirm that they are doing things in a "safe" way. Thus for all practical matters, it really doesn't matter if it's open source or not.

There is always going to be an unknown here, and open source does not reduce that much risk. In fact, it's probably better to look at how WhatsApp, the software, behaves (i.e. what it does on the wire) rather than the code. Sure there could be backdoors, but that is the risk you take when you use any pre-compiled software -- open source doesn't change that.

All we are really left with then is trust.


> Even if the code were open source and there were some guarantee that WhatsApp was using that exact code during compilation, you still would not know if it's safe.

Speak for yourself

> Just like most other people, you are relying on people experienced with such things to confirm that they are doing things in a "safe" way. Thus for all practical matters, it really doesn't matter if it's open source or not.

If you rely on the experts then it still matters to you whether being open-source means more or better experts.

> In fact, it's probably better to look at how WhatsApp, the software, behaves (i.e. what it does on the wire) rather than the code.

It is provably impossible to tell whether the RNG has been backdoored that way.

> Sure there could be backdoors, but that is the risk you take when you use any pre-compiled software -- open source doesn't change that.

Huh? So you require reproducible builds and/or build from source yourself. You absolutely don't trust that some random binary is what it claims to be.


You may not realize it, but you have proven my point.

At what point does this all become totally impractical? Should you also manufacture your own chips as well to make sure there are no backdoors at the hardware level? How about for people who have no time for any of this (99.99999999% of the world)?

At some point, you need to trust somebody in this chain of dependencies.

People who are beating their drums about open source are in essence saying: trust no one. But in the real world, that is not realistic.


> Should you also manufacture your own chips as well to make sure there are no backdoors at the hardware level? How about for people who have no time for any of this (99.99999999% of the world)?

You should ensure that your supply deal allows you to audit the manufacturing, yes. Maybe you don't actually have the time or money to do so, but if they don't offer that capability that's a giant red flag.

> At some point, you need to trust somebody in this chain of dependencies.

You never trust any single party unconditionally. I mean sure, even if you get an audit done, your audit agency could be crooked - a sufficiently large conspiracy can defeat anything a single person can do by themselves. But you don't want to put yourself in the position where one rogue company - or even one rogue employee at that company - could entirely compromise your security. If that level of security is good enough, why even bother using encryption at all?


And if you don't trust FB then it is ... unsafe :)


Well if your point is that safe or unsafe are always subjective conclusions, then of course I agree with you. If the safe/unsafe determination is being declared as an objective conclusion, then I don't.


Just wondering what the EFF has to say about other apps like Telegram, since its code is open source [1]?

[1] - https://telegram.org/apps#source-code


Telegram wouldn't fare very well. It has weak home-grown encryption and doesn't enable it by default.


They've got two scores for Telegram on their scorecard[0]: "Telegram" scores poorly (4 out of 7), "Telegram (secret chats)" scores perfectly (7 out of 7). The quality of the encryption algorithm itself doesn't factor into it, though Telegram gets a check for having a recent audit.

Even if the crypto was good, the cognitive load of having to decide whether you want a chat to be secret or not makes it a bad choice IMO, especially if it comes with a downside.

[0] https://www.eff.org/secure-messaging-scorecard


7/7 being a score that no practicing crypto engineer would be likely to come up with for that application.


If that's the stance, the EFF is basically with Stallman that there shouldn't be any proprietary software, and that view is hard to agree with.

> All applications must be considered unsafe unless we can review the code.

Let's face it: I like to give people the liberty to keep their code secret, along with the responsibility to hire people to do proper security auditing. Curious whitehats are welcome to attack systems which have a bounty program. While obfuscation is never a good idea in security, the paywall between Facebook and the public keeps large exploits away. Attacks that come directly from behavior testing require some cleverness and dedication (of course there are those really dumb XSS everywhere). So in this case, obfuscation by hiding the source code is one way. However, you can argue that someone could lose a work laptop and have the source code stolen (let's just assume some startup whose git repo is clonable entirely onto the laptop).


You mixed some straw-man objections ("Facebook is evil") with one that's important and legitimate.

People are right to be cautious with closed source crypto. It has a long and sordid history of turning out to be snake oil, terribly implemented, or outright backdoored.

Given the people involved, though, and all the scrutiny the Axolotl protocol has already withstood, WhatsApp looks trustworthy.

It would be better if the source code were available for inspection, and even better if it had a deterministic build so that people could verify that what's installed on their phone was actually built from that code.

Rolling out usable end-to-end encryption to a billion people is still a huge achievement and we owe Moxie, Trevor Perrin, and the whole WhatsApp and Open Whisper Systems teams our thanks.


The real victory for privacy here is that a tool used by the masses has given them end-to-end encryption, and that alone is a game changer:

- This enables network effects for private communication, i.e. it's harder for privacy to be breached by an insecure partner

- One can avoid being singled out for using an obscure tool to hide their communication

Sure, it doesn't hit all the marks the EFF would desire, but it's undeniably a very positive achievement for privacy.

The only thing missing is a canary to signal some sort of back door, since it's not open source. Then again, diverting users away from their app is not in their interests.


Please provide a link to a "scientific" but easily understandable comparison of crypto communication software that educates users about what is important to understand about this complicated issue. It would be great if you could motivate the crypto elite of the world to contribute to such a comparison site. Thanks!


I think the problem is that crypto is a devil-in-the-details issue. I'll cast around and see if I can dig up a survey of the general divergences in crypto schemes, but one chink in an implementation sinks the ship.

Haven't dug into it, but I guess another interesting question would be "Are various crypto solutions using the same underlying base libraries (and versions) or each rewriting their own implementation of a new system?"


Here's one for TextSecure: https://eprint.iacr.org/2014/904.pdf They claim doing the equivalent for WhatsApp is future work, although the paper is a year or two old.


No. Just because you want it to exist or think it needs to exist, doesn't mean it actually exists.


> No. Just because you want it to exist or think it needs to exist, doesn't mean it actually exists.

I understand the parent believes that the EFF are doing this already. I.e. suggesting that if the comment he's responding to doesn't like the EFF's approach they're welcome to create their own.

No thoughts either way but I'm pretty sure you misread them.


I don't follow what you mean. Because security can be improved for a product, means it shouldn't be pointed out? Sounds like that's more of a reason for a scorecard to exist?


What then is good for the masses?


I like the idea of a three-tier threat model:

Tier 1: potentially inconvenient but maximally secure, in an OPSEC sense (PGP, for instance, though I would personally put Signal here too). Use carefully for security against state-level adversaries.

Tier 2: convenient mainstream applications with strong security but some tradeoffs. Your default choice, security against criminal adversaries. I'd put WhatsApp and iMessage here (but not many others).

Tier 3: unencrypted or dubiously encrypted mainstream applications, unsafe against any adversary, but potentially useful as an endpoint to (cautiously) bootstrap communications. Google Talk and AIM would go here for me.


PGP doesn't have plausible deniability. It actually has "you can prove to anyone else what I said to you", if you sign anything. That's dangerous, so it shouldn't be on tier 1.

(Or at least that's how it used to be.)


Most secure messengers don't have deniability.


So because most of them implement security wrong, we should just accept that as normal? I feel like I'm misunderstanding your point.


I definitely do not think adopting a new, unverified cryptosystem simply to get better "deniability" is a good tradeoff. I see the value, but it's not the most important thing to get right in a secure messenger.


> I definitely do not think adopting a new, unverified cryptosystem simply to get better "deniability" is a good tradeoff.

I agree, but I don't see how that's relevant. This could be added to PGP without "adopting a new, unverified cryptosystem".


Is plausible deniability something you want in your communication? Just curious, as this is something I haven't heard before. It seems like non-repudiation would be what you want in your comms.


See https://en.wikipedia.org/wiki/Off-the-Record_Messaging

Sometimes you want plausible deniability, sometimes you want to sign something for all the world to be able to verify.
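
The core trick, roughly: authenticate with a shared-key MAC instead of a signature. Both sides hold the key, so a transcript convinces your peer but proves nothing to a third party, since either of you could have forged it. A sketch with an illustrative key, not OTR's actual key schedule:

    import hmac
    import hashlib

    key = b"shared session key (illustrative)"
    msg = b"meet at noon"

    # The recipient can verify this tag -- and could equally well have
    # produced it, which is what makes the exchange deniable.
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()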


Got it, thanks. I'd heard of OTR, but never really looked into what it meant. Makes sense now.


In addition to tiers, I think there are dimensions. Mainly:

1: security against targeted surveillance from criminals or state-level adversaries. (= spying targets).

2: security against mass surveillance. (= finding targets)

I think it's possible to have a Tier 2 system that is Tier 1 against mass surveillance. Individual connections don't have to be proofed against a state-level actor targeting individuals. It's enough that the whole system works in a way that makes mass surveillance too expensive.

In fact, I think this kind of security is enough to ease most concerns against government surveillance.


Right, the key is to make it so expensive that the obsession with mass surveillance becomes economically unhealthy.


Signal "runs on iPhones" which you've deemed to be a bad thing for some reason, so surely that should be tier 2 too. In fact, most things "run on iPhones" so it's kind of an odd argument. What's terrible about running on iPhones?


He was holding that up as a popular strawman argument, among the rest in that list.


You misunderstood the point I was making.


Ah yeah, it went totally over my head.


IMHO it really depends on what you are doing:

- Chatting with friends? = Whatsapp

- International drug deals? != Whatsapp


Goal should be mass adoption of universal turnkey state-level OPSEC for everyday use.


They do this for power and legitimacy. Ranking gives them power/imprimatur, and presumably they want to use this to affect and effect the direction of encryption.


I have absolutely zero evidence for this, but I'd bet large sums of lint from my pocket that this article was discussed by a Facebook PR person and an EFF person.

It's the kind of thing that happens all the time in the industry, but it's precisely the kind of reason I find the EFF to not be worth sending money to.


>runs on iPhones

Anyone but Thomas and this would be dead


It's a weak strawman, though. He isn't literally saying that, but is trying to discount those who complain about closed source by comparing them to people who are apparently anti-iPhone. Then again, maybe that's exactly what you mean and you'd be right -- normally that sort of tired strawman would be dead.

It really is incredible to see tptacek discounting the closed source, completely unaudited nature of this product. A company is saying "we have total end-to-end encryption...trust us", and to believe that, you have to trust the binary that is on your device, trust that it is managing keys properly (there is usually a fine line between usability and security, and usability usually wins), and trust that they don't misuse your messages when they pass through their server. They control every aspect of the conversation, and are fully capable of completely undermining your security at any moment (if not already). In practice it's not terribly unlike the widely discredited JavaScript encryption approaches.


What's incredible to me is that it's 2016 and people on one of the largest programming communities on the Internet are still acting like conventionally compiled iOS applications are impenetrable, but C source code for new cryptosystems is somehow instantly validated by the crowd.

It's like IDA Pro and OpenSSL never happened.


If you understand and verify the binary, you don't have to worry about what might happen to your messages on the company's servers.


[flagged]


This argument has appeared throughout this discussion, yet I know of approximately zero cases of this happening (validation/comfort through disassembly).

I am curious where you looked for these cases, that you saw none.

I assure you that this happens quite regularly. And what is wrong with IDA Pro? Why would it need to advance?

This notion that security products are vetted by analyzing the binary code is absurd yet it happens on a regular basis.


It has been dramatically progressing, but you'd have to look around yourself to know if it's beyond what you're thinking of.


You posted your "edit" about 20 minutes after 'sp332 correctly answered your question, and your edit is both flagrantly incorrect and rude.


[flagged]


That wasn't a scare quote, and I don't think name-calling enhanced your argument much.


tptacek writes a lot of pretty positive stuff about iPhone security.

It's an example of a topic people will raise to object to the EFF scorecard, not an argument being made against the iPhone.


Woosh.


This is quite a cavalier recommendation for proprietary unaudited (for the public at least) spyware that uploads your phone book to a company participating in PRISM.


WhatsApp doesn't upload your phone book. All contact info resolution is done locally.


How does the app figure out that another telephone number has a WhatsApp account? How does it fetch profile information for that telephone number without communicating the desire to fetch it to WhatsApp? Perhaps WhatsApp ships their entire profile photo and telephone number database to every phone?


There are two things here: a) the app can take a snapshot of your phone book, upload it, associate it with your own WhatsApp account, and then keep the remote version synchronised; and b) the app can periodically perform disparate requests to the central directory server to query the profile information for each number. As far as I know, WhatsApp uses approach b. These two may seem similar but in approach b, WhatsApp the service wouldn't learn of any contact removals. WhatsApp the app would, of course, but the service wouldn't. Now, with some simple filtering and statistics, WhatsApp the service can deduce your phone book with some certainty, but I believe most would agree that it is materially different from actually uploading your phone book.
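
A sketch of the difference, with a hypothetical Directory class standing in for WhatsApp's server:

    class Directory:
        def __init__(self, profiles):
            self.profiles = profiles   # number -> profile, server side
            self.books = {}            # books retained under approach (a)

        def store_book(self, user, numbers):
            # Approach (a): the server keeps a synchronised copy of the
            # whole phone book and sees later changes, removals included.
            self.books[user] = set(numbers)

        def query_profile(self, number):
            # Approach (b): the server answers one lookup at a time; it
            # observes each query but never receives the book as a unit,
            # and never learns about client-side deletions.
            return self.profiles.get(number)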


> Now, with some simple filtering and statistics, WhatsApp the service can deduce your phone book with some certainty, but I believe most would agree that it is materially different from actually uploading your phone book.

No, not really. Both implementations give WhatsApp a complete list of all my contacts. The fact that one of those implementations might not be 100% accurate because of deleted contacts changes nothing - if you're someone who's worried about a third party having your contact list, it doesn't make sense to think "but hey, they don't know I removed $FOO last week, so their list isn't 100% accurate!".

Both implementations mean you'll have to take their word on what they will and won't do with the data, so there's really not much of a practical difference.


The word "participating" there implies intention, which is a lie. Care to re-phrase?


> The PRISM program collects stored internet communications based on demands made to internet companies such as Google Inc. under Section 702 of the FISA Amendments Act of 2008 to turn over any data that match court-approved search terms

I can't imagine it's a simple task for a large company to unintentionally comply with a court demand, there needed to be some wilful choice between attempting to fight it or rolling over


So if you're using literally any messaging app on Android, you're using software by a company cooperating with PRISM?


Yes, and consequently the platform should not (and in practice does not) receive declarations of high security in its default state. Ignoring PRISM, this is true of every major smart phone platform as they all have remote install and update abilities that excepting Android aren't generally straightforward to disable. Argumentum ad populum is not an excuse for doing something even when you know it's wrong.


Doesn't appear to mention metadata at all.

While metadata is somewhat tangential to the actual encryption, it's still a vital part of a truly secure messaging platform - who we talk to reveals quite a lot about us.

I'm not sure how solvable this is without sacrificing the usability that makes WhatsApp as nice to use as it is, and I certainly don't want to take away from how great it is that they've done this - but it is important not to lose sight of the fact that encrypting the contents of your messages is only one part of the puzzle.


It's the same problem with Signal actually ;) All the communications of the official clients are sent to some "official" Signal Amazon servers (see https://github.com/WhisperSystems/Signal-Android/blob/master...).

Doing end-to-end encryption is nice, but I really think that having a decentralized AND standard architecture is also very important.


Agreed. I'm reluctant to be critical, given how much those involved have managed to get done, but from what I'm able to tell, it is very concerning how low-priority this issue appears to be.

A second issue that is pressing to me, given the volume of users, is that in my opinion a commitment to execute an audit of the code/crypto/workflow/etc. needs to be made. I'd start a campaign to fund this, but such a campaign would be much more likely to succeed if it were officially acknowledged and, more importantly, if resources were provided internally to enable it.


Too bad 8/10 of your contacts have automatic backups to iCloud or Google Drive enabled. Kind of defeats the idea of "end-to-end". More like end-to-end-to-cloud.


Apps can control the content that they back up to Google; I'm going to guess Apple has something similar (sketch after the link).

http://developer.android.com/training/backup/autosyncapi.htm...
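
On Android the control is declarative: the manifest points at a backup-rules resource that excludes files from Auto Backup. A minimal sketch - the file names here are hypothetical, not WhatsApp's actual layout:

  <!-- AndroidManifest.xml: opt in to Auto Backup but point at rules -->
  <application
      android:allowBackup="true"
      android:fullBackupContent="@xml/backup_rules">
  </application>

  <!-- res/xml/backup_rules.xml: keep message store and keys out of the cloud -->
  <full-backup-content>
      <exclude domain="database" path="messages.db"/>
      <exclude domain="sharedpref" path="identity_keys.xml"/>
  </full-backup-content>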


That's nice. What I really want to know is, can Mark Zuckerberg read my messages? Do WhatsApp servers have access to the private keys needed to decrypt my communications? If the answer to those questions is "yes", then it's great that we are now protected from most cybercriminals, but the NSA is probably monitoring our messages. If the answer to those questions is "no", I may actually decide to start using WhatsApp.


They cannot read your messages. The entire idea behind this is to give them a claim of technical infeasibility if they are served with a warrant. However, if there is (or will be) some sort of back door to turn off encryption without the user's knowledge, that's another matter, which really would require the code being open source to rule out.


How will it being open source guarantee you that the app you downloaded from the platform's store is using the exact same code?


Jailbroken iPhones, or Androids with an accessible filesystem, would give access to the installed executable.

Having this, WhatsApp would be subject to having their app disassembled and checked against the source. Sure, they could modify the executable, but that would be a PR liability for them.


You can compile the binary yourself and match the SHA1 against the binary downloaded from the store. It seems quite trivial.
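
Something like this, say - though as the reply below points out, the naive version rarely works in practice:

  import hashlib

  def sha1_of(path):
      # Stream the file so large APKs don't need to fit in memory.
      h = hashlib.sha1()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 16), b""):
              h.update(chunk)
      return h.hexdigest()

  # Hypothetical file names; compare your own build against the store's.
  print(sha1_of("whatsapp-local-build.apk") == sha1_of("whatsapp-from-store.apk"))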


It's not that easy. Due to toolchain and platform differences, there is no guarantee that your compiler will produce the same binary as the official distribution. This is why deterministic, reproducible builds are a growing area of interest right now.


The EFF recommending closed-source clients? Erm, they just dropped in my esteem.


They deducted points because it's not open source. And they are more applauding the fact that WhatsApp - probably the most widely used messaging app currently - has adopted strong end-to-end encryption, something which other clients have been loath to do.

This is a win. To disregard everything that WhatsApp and Signal have accomplished because WhatsApp isn't open source is silly.


> This is a win. To disregard everything that WhatsApp and Signal have accomplished because WhatsApp isn't open source is silly.

Is it? If the next Snowden uses WhatsApp on the basis of this recommendation, and it turns out (say) the NSA has backdoored their RNG and is scanning all messages sent over WhatsApp, that person is going to find themselves jailed or maybe executed. You can't say "it's secure except for not being open source"; the stakes are too high for that.


No, but blindly trusting it is silly. Even OpenSSL had the Heartbleed bug, which persisted for years without most people realizing it. All it takes is one bug for the entire thing to be useless.


And Heartbleed is also an example of open source not being totally secure. It was a bug that persisted for years before it was found - and OpenSSL is open source.

It's just as foolish to blindly trust OSS. There will always be holes - the main point of OSS is not to combat these, as they will exist regardless. Rather, it is so one might know exactly what one is installing/using, without having to trust the corporation behind it.


No, it's foolish to trust something that hasn't been independently reviewed. How can the EFF recommend something that hasn't even been subjected to an independent security audit?


The goal is not to be perfect, but to kickstart encryption adoption by a large, non-technical audience, I believe.

Sure, that's no excuse for potentially bad crypto, but it's worth it if this gets proper infosec within the public's reach in the end. I'm confident this is a first step toward having trustable encryption "in the real world", even if it's another client/company providing it later. Call me an optimist :)


> one bug

Everything can have bugs. The problem with this software is that it's a centralized single point of failure. Only a proper federated protocol can be resistant to subversion by business, government, or other interests.


They explicitly stated that it is closed source and that Signal should be preferred.


They gave it a deduction in their rating and said to prefer open source.


Open source does not mean your binary is secure. There are too many counter-examples of this fallacy to even bother enumerating. These widely-distributed binaries will be torn apart and inspected instruction by instruction. That is the only way we will know for sure, and you can bet people will be watching closely going forward. In this case I think lack of open source is not an issue given the reality of the distribution system.


> Open source does not mean your binary is secure.

Open source is necessary - not sufficient - for security. Other work has to be done. With closed binaries you can't even begin that work.


Is the backend for Signal's messaging server open source? If so, they should open-source the build script too.



But WhatsApp requires a phone number, and requires that the recipient (of your message) have your number in their contact list (or at least that you have their number in yours). Once your number (and your contacts') has been leaked to WhatsApp, enough metadata has been leaked to make communication risky.

Why doesn't WhatsApp allow anonymous communication? I should be able to create ephemeral WhatsApp "IDs", and anyone who knows my "ID" should be able to communicate with me anonymously and securely, no strings attached.


This is a bit weird. I don't believe WhatsApp are lying but there's absolutely no proof they're not.

I could release a closed source app with a bunch of padlocks in it and copy/paste their white paper and have exactly the same level of proof of security. Would I get a 7/10 from EFF?


If you had the credibility of a billion dollar company behind you saying that you're not making it up, then probably.


I didn’t know credibility was measured in dollars.


Reputational damage is higher for entities with more money. Therefore their credibility of not doing something that causes reputational damage is higher.


> Reputational damage is higher for entities with more money.

In what world? Big business figured out a long time ago that most of the time their bad actions won't be noticed by most people. The rest can often be fixed with some inexpensive spin and PR. Their reputation is only damaged when a scandal is very large and in the right place and time to be noticed.

A "billion dollar company" necessarily has a strong profit motive. It's also subject to the codes and regulations of the country in which they operate. The former damages their credibility (they will do what is profitable, not what is moral), and the latter makes communication platforms suspect (government involvement).

Of course, we don't have to speculate - this is Facebook. They are not only part of PRISM, but their entire business is based on surveillance. Not only do they have zero credibility for respecting privacy, they are actively hostile.


You're disputing that it causes reputational damage, not that reputational damage scales with money.

I'm not saying that they'd lose reputation from "not respecting privacy". That's not considered that bad by the general public. I'm saying that they'd lose reputation from "claiming to release a feature while lying through their teeth about it".


How is VW's credibility after the emissions cheating scandal?



Because Facebook is known so well for their respect of privacy.


They're known for not lying through their teeth about what features they've implemented, like most large companies.


There are a lot of generalizations there. Do you have an example of an audit showing that they have never misled their users or misrepresented their product and motivations?


No, and the burden of proof would be on you if you're claiming it's likely that they're lying.

Edit: my broad claim is basically that companies won't lie about what their products do, if their claims are specific. In this case, they released a whitepaper with technical details.

If a company makes a broad, unspecific claim, it's possible for it to be wrong or misleading without the implication of a deliberate lie by the company. In this scenario, it's not possible for it to be wrong without a direct decision to lie, and therefore I think the reputational damage would be great if exposed - and it would be easily exposed by analysing network traffic before long.

What I'm claiming in the specific here is that "the system implemented matches what the whitepaper says". I wouldn't put it beyond Facebook to backdoor it in a way that's hard to figure out (with the backdoor included in the whitepaper, a la Dual_EC_DRBG), but I'd consider that to be unlikely (firstly because there are actual humans behind it at the end of the day, and I'd hope that they would feel that "claim to release encryption but put a backdoor in" is morally worse than just not releasing encryption at all, and secondly because Whisper Systems is involved). But all of that could arise in an open source system as well. What I have high confidence in is that the system matches the whitepaper. (There's also the possibility of an implementation problem that preserves plausible deniability for them, which I also consider unlikely.)

All in all, reducing their grade by 1 point to account for additional risk in closed source seems reasonable to me.


It's important to point out that all the Whisper Systems code is open source (https://github.com/whispersystems/). So if you have concerns, go read their code. Some of the best minds in security have, and they've come away with good things to say. There's a desktop version of Signal coming, which I'd personally be inclined to use over WhatsApp, but this is still a fantastic move.


It's an improvement, but proper end-to-end encryption on unsafe devices is about as useful as seat belts on an airplane.


So it protects you from bad turbulence but in a catastrophe you are still screwed.


I am still trying to wrap my head around privacy in the modern age, and this triggered something for me - this is the end of the privacy-at-a-distance problem.

There is a large body of law around making distance communication private ("secure in one's papers" I think is the phrase from American law - not allowing people to steam open your letters, etc.)

This move - in which I include the inevitable "PGP emails using WhatsApp-collected public/private keys" - seems destined to end the problem: two hundred years of law, one code release.

I'm fairly sure an email app will be next, now that they are building a base of secure keys.

Edit: it's now the purview of regulation to require me to keep / hand over private conversations, as pre-Snowden, and that seems a good thing. It forces surveillance to be active and open once again.


I am so cynical (or is that realistic?) these days, that I would not trust the encryption on WhatsApp as far as I can throw a large, adult saltwater crocodile.


They implemented a known & vetted encryption protocol with expert consultation from the outside.

It's one of the most widely distributed apps in the world & thus likely to have lots of people looking at it.

If you don't trust it, is there any possible encryption scheme you would trust?


I think the general trust issues people have are

- metadata / contact lists in the hand of Facebook

- a proprietary binary that _claims_ to use said encryption schemes

Scenarios that you could come up with:

The next version of WhatsApp sends unencrypted data again.

WhatsApp encrypts for your recipient just fine, but also encrypts the same message for the great Facebook skeleton key.

Basically trust is a bigger problem than you acknowledge here, I think. If you trust the encryption scheme, even the specific encryption implementation, then you still need to trust the (binary, closed) application. Ignoring the metadata issue completely for now.
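
To make the second scenario concrete, here is a deliberately hypothetical sketch (nothing suggests WhatsApp actually does this; it only shows how little code a closed binary would need). Key names are invented; PyNaCl sealed boxes stand in for the real protocol:

  from nacl.public import PrivateKey, SealedBox

  contact_pub = PrivateKey.generate().public_key   # your contact's real key
  skeleton_pub = PrivateKey.generate().public_key  # a hidden extra recipient

  def send(plaintext):
      for_contact = SealedBox(contact_pub).encrypt(plaintext)
      # The extra ciphertext below would never be shown in any UI.
      for_skeleton = SealedBox(skeleton_pub).encrypt(plaintext)
      return for_contact, for_skeleton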


> The next version of WhatsApp sends unencrypted data again.
> WhatsApp encrypts for your recipient just fine, but also encrypts the same message for the great Facebook skeleton key.

Both of these can and would show up in an analysis of the code.


web.whatsapp.com still works, so clearly it's possible for something outside my phone to gain access to my phone-generated keys. That doesn't seem backdoorable to me /s.


WhatsApp Web communicates directly with your phone. You have to authorize the session from within the app. Communication between your browser and the app is end-to-end encrypted.
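
Roughly this model, as I understand it - the names and API below are my own sketch, not WhatsApp's code. The browser only ever holds a pairing secret from the QR scan, and everything is relayed through the phone:

  from nacl.secret import SecretBox
  from nacl.utils import random

  pairing_key = random(SecretBox.KEY_SIZE)  # established during the QR scan
  browser_channel = SecretBox(pairing_key)  # symmetric browser <-> phone leg

  def browser_send(plaintext):
      to_phone = browser_channel.encrypt(plaintext)  # browser -> phone
      on_phone = browser_channel.decrypt(to_phone)   # phone recovers it...
      return signal_session_encrypt(on_phone)        # ...and re-encrypts e2e

  def signal_session_encrypt(msg):
      # Stand-in for the phone's existing Signal-protocol session with the
      # recipient; the ratchet state never leaves the phone.
      return msg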


I hadn't thought about re-encrypting the messages for end-to-end communication with the browser, so it's a plus that the phone doesn't have to give away its keys. But there exists a mechanism for the phone to retransmit the messages, and because we can't see the source, we have no idea whether there's a way for WhatsApp to trigger this remotely and thereby create a back door.


The existence of this feature has no implications on the question of whether or not there's a backdoor in the client. The current implementation requires approval by the client. A hypothetical backdoor could be anywhere, and you wouldn't know about it unless you check the bytecode.


This is quite a big win for privacy. If you use Whisper, Tor or most of the other privacy-minded communication mediums, you are in a small minority, so you stand out. Because a very large part of the population is using WhatsApp, this allows you to communicate privately without standing out.


What happens when I log on to WhatsApp Web? How do they send my private key from my phone to my web browser?


They do not. They create an e2e connection between your phone and your browser. Everything goes through the phone.


To post a follow-up question: what happens when you lose your phone, or simply move to a new phone? Presumably you've lost all of your private keys (I mean... obviously they must not upload them, or this would all be a farce), message history, etc., right?


Your message history is lost unless you back it up yourself. Your contacts receive a message stating that you changed keys once you send them a message or they send you one. I'd presume any messages that were sent before you activated the new key, but not yet received, cannot be decrypted any more.

I do not know how this works with push messages, though; it used to be the case that those could only be sent as-is, so the server would need the key.


Well, the phone number is the identifier. What would happen is that you get a new phone and identify via your phone number. It would generate new keys, but your contacts are still around. Your contacts all get notifications (if they have turned them on) that you have new keys.

I am not sure about chat history; I think it's moved to the cloud if you have that activated.

I'm not 100% sure about any of this, but I think that is how it works.

If you want something that is not bound to your phone number, Threema is pretty nice. But if you lose your phone and have no backup, it's gone.


WhatsApp allows you to (optionally) backup your chat history to iCloud or Google Drive. Moving to a new device involves restoring that backup. Your contacts will also have to re-verify your fingerprint.

This, of course, is also a way for adversaries to get to your chat history. The backups are not encrypted using a key under your control, so law enforcement could force WhatsApp, Google or Apple to hand over any backups.


That is not extremely secure.


It cannot be secure, because the webpage has access to the chat contents and is running code that came from WhatsApp. WhatsApp could change the code so it logs the contents from your phone back to them, and you would have no way to know unless you logged and checked the JavaScript that comes with the page every time you open it.

Similarly, you have no way of knowing that the client is not sending your keys to the WhatsApp servers. You just have to trust them.


Plus, for many users, a number of Chrome extensions have access to the data on that web page.


Has the WhatsApp code been audited by trusted third parties? I know it's not quite as good as it being open source, but if we had people we trust audit it, that seems like a good step. Also, disassembly and teardown.

I think something this big needs people to really really scrutinise it.


How does the WhatsApp encryption model differ from Apple's iMessage encryption model?

- In iMessage, Apple handles key distribution, so if I'm in your contacts, I know the keys for all of your Apple devices. (I'm guessing the private key stays on the device, but I'm not sure).

- iMessage seems to provide no way of verifying someone's key fingerprint.

- On the other hand, WhatsApp seems to force you and your contacts to meet at a Starbucks so you can distribute and sign each other's public keys. Interesting.

What other differences are there?

(to make this easier, let's assume that both companies implemented the system the way they claim they did)


There's really not much of a difference, except that WhatsApp allows you to verify the fingerprint and will notify you if your chat partner's public key changes.

WhatsApp doesn't force to you meet your contacts, and there's no "signing each other's public keys" involved at all.

There are two levels of authentication or verification - with iMessage, you only get the first one:

- The first is WhatsApp telling you "Hey, this is +555 0100. Their public key is 12345. Once upon a time, I sent an SMS to that number, and the device with this key was able to read a code in that message. Looks like the owner to me. Good luck! PS: I might be lying."

- (Optional) You compare the fingerprint by meeting in person or communicating through some other secure channel. This will ensure that WhatsApp is not lying to you¹, and that you're actually encrypting messages with a key belonging to the recipient (and vice-versa).

¹ Unless your client has a backdoor, of course.


> whatsApp seems to force you and your contacts to meet at a Starbucks so you can distribute and sign each other's public keys.

The out-of-band meeting compares a QR code or a 60-digit public-key-derived number, both of which are generated by WhatsApp. There is no ad hoc in-person signing involved. From what I can determine from their whitepaper, there is no private key involvement at all - just concatenation or hashing of the public keys.

Furthermore:

  When either user scans the other’s QR code, the keys are
  compared to ensure that what is in the QR code matches
  the Identity Key as retrieved from the server.
Note 'as retrieved from the server'. The public keys are not distributed directly from Alice to Bob but always through Mallory. You can say for certain that the public key that Mallory's app tells you that you hold for Alice is the same as the one that Mallory presents for Alice, but that's it.

That wouldn't be a problem if the subsequent communication was in a different channel, like sending an e-mail after retrieving a PGP key from a key-server; the distributing party isn't in the communication channel and can't interfere. But WhatsApp is centralised for both key distribution AND communication.
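
For reference, the shape of such a public-key-only fingerprint is roughly this - the exact concatenation, digest and truncation WhatsApp uses are not reproduced here, only the idea that no private keys are involved:

  import hashlib

  def fingerprint(my_identity_pub, their_identity_pub):
      # Sort so both parties derive the same value regardless of direction.
      combined = b"".join(sorted([my_identity_pub, their_identity_pub]))
      digest = hashlib.sha512(combined).digest()
      # Render as a human-comparable decimal string (WhatsApp shows 60 digits).
      return "".join(str(b % 10) for b in digest)[:60]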


The app automatically distributes public keys, to my knowledge, and the QR code is merely a validation mechanism to ensure that no MITM is substituting keys.

The EFF seems to think the QR code thing is the bee's knees. In practice, I would wager less than 0.1% of users will make use of that functionality. It's more of a placebo.


If there is widespread snooping, even a small percentage of users verifying their keys will expose it.
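
Back-of-envelope, assuming verifications are independent: if a fraction p of conversations get verified, a MITM campaign against n conversations goes entirely undetected with probability (1 - p)^n:

  p = 0.001  # the pessimistic "less than 0.1% verify" figure from upthread
  for n in (100, 1_000, 10_000):
      print(n, (1 - p) ** n)
  # 100    -> ~0.905    (small, targeted campaigns likely slip through)
  # 1000   -> ~0.368
  # 10000  -> ~0.000045 (mass snooping is almost certain to be caught)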


If there was widespread snooping, it would imply either WhatsApp collaboration or compromise, in which case the app could show whatever they wanted it to show (or they could just upload all of the private keys, which they may very well do in any case).

However, I was talking about the QR code versus the widely known fingerprint. The EFF seems to think the QR code is a huge improvement, but I don't think it will see any usage at all over the fingerprint; instead it provides the illusion of security, a la "look, there's this complex thing... and some people must be validating it... so I'll just trust it."


Fantastic news - gets us closer to most Internet traffic being encrypted.

I have seen complaints that metadata is not hidden - that is, there is a record of who you communicate with.

I might have an unpopular opinion here, but I don't think that leaving the metadata unhidden is in general such a bad thing. I am happy having my communications secure while having whom I communicate with be potentially public knowledge. Fair compromise.

For whistleblowers, protecting metadata is important, so use something else.


Garden path questions:

What's the easiest way to get a copy of your own WhatsApp private key from your phone?

What's the easiest way to get a copy of your friend's WhatsApp private key from their phone?

What if the phone is rooted, or you can root it?

What if they won't hand you the phone?

What if they are on your specially built wifi?

What if you have a fake cell tower?

What if you have a real cell tower?

What if you have a different make/model of phone?

for fun: s/phone/debian laptop/g


I get the sense that the EFF didn't even talk to any of the parties involved before posting their review, and given how much weight they carry in the community, it's unclear to me why they didn't.


Why did some people get "you are end-to-end encrypted with your friend" while that same friend got "the connection is not end-to-end encrypted with your friend"?


Because one end in this "end-to-end" is you, and the other is Facebook's government-facing data storage.


Is there a way to build and deploy WhatsApp from source?


What? How can you give it 6/7 stars if you don't even know what holes are in the code?


Perhaps by knowing that the distributed binaries will be disassembled and inspected? The same way it would have to be examined even if there was a GitHub repo out there claiming to be the code used?


You've made this claim multiple times now. Do you really, honestly believe that disassembled binary code sees the same scrutiny, or earns the same confidence, that open source code does? Have you ever analyzed disassembled code?

It is non-trivial. No one is going to disassemble this and say "Yup, it passes". That doesn't happen.


As a matter of fact, I am certain they will be disassembled and examined - for the bounty, for the PR and notoriety if you can find a bug or be the one to star in the "<big company> screwed up or lied to us" story of the week. In general, for lots of reasons that seem to motivate a select crowd who have the skills to actually pull this off.

If you have the source code, it makes the disassembly and examination of what is distributed a lot easier, but it is not a necessary prerequisite.


I hope I can get my friends to switch to Actor IM [1] or some other open source solution that doesn't suck. In the end, all these chat systems turn into crap full of ads even if they aren't spying on you.

[1] https://actor.im/


Seems centralized and doesn't rely on a standard. If I were you, I would forget this solution as well.


You can set up your own server. They want to develop a federated protocol, so they are the only ones that have actually stated a goal to develop something like that. OWS would probably do it as well if they had the time and money.

They already have the best clients in the business; if they manage to get the crypto and the federation right, it would be fantastic. They also have some support for using their web and desktop clients with email. I wish the Actor guys the best of luck.


"They want to develop a federated protocol". There is already one, it's a IETF standard, deployed accross the globe and have already hundreds of clients and several serious servers : XMPP.


I just restated what they said; if you are so convinced that XMPP would solve all their problems, then I suggest you go to them.

I doubt that they don't know it already [1], and I also think they have put a lot of thought into whether to spend their money on developing everything again.

[1] https://github.com/actorapp/actor-platform

Actor Messaging platform, modern replacement for Jabber/ejabberd


What else is there with a decent client that is open source?


On Android you have Conversations (https://conversations.im/), based on XMPP: good-looking, with nice battery optimizations and support for E2E encryption. And you have the XMPP federation behind it.



