ProtonMail's encrypted email is now available to all (engadget.com)
163 points by rdl on March 17, 2016 | 108 comments



Some bad signs:

1. Hosted in Switzerland is advertised as a security feature. The point of e2e is that the servers are untrusted. If you need a "good jurisdiction" for your servers, it means they must be trusted. That's a problem, because sadly there are no good jurisdictions in today's world, and your jurisdiction doesn't help you if your servers are hacked.

2. It's webmail. That means the security of whatever e2e they're doing is based on the security of SSL. If you break SSL, you can break whatever e2e they're doing, and that means your e2e security is only as secure as the CA system.

3. Their 'threat model' documentation leads with: "From a high level, our premise is that a service like the now-defunct Lavabit does add value, despite some inherent weaknesses. We designed ProtonMail around many of the same principles..." Given what happened with Lavabit (the eventual compromise of everyone's email despite shutting down the service), considering it to have been "valuable" should really be off the table at this point: http://www.thoughtcrime.org/blog/lavabit-critique/

I can't find much in the way of technical documentation for ProtonMail, but from what little is available, it does seem that (much like Lavabit), the service is built on the premise of "won't" read your mail rather than "can't" read your mail.


Even with e2e you have availability concerns, so picking a "good" jurisdiction makes sense. The rest of your criticisms are completely valid -- webmail is never going to approach the security of true e2e client crypto.

The best possible system for webmail would be a browser extension or, in some far-future world, some kind of built-in browser security extensions. Failing that, there's probably value in "best effort" secure webmail if users are explicitly aware of the limitations. The problem is that even developers aren't really aware of the limitations (since they vary with the threats and per user), so this is probably not a realistic goal.


Would you care to comment on Cyph [1]? They claim that they are able to approximate client-side programs that provide e2e, but in a plugin-free browser environment [2]. I know tptacek doesn't like it but he wasn't willing to go into a detailed account of why.

[1] https://cyph.com

[2] https://docs.google.com/document/d/1XVh4ALXhbfxi70QSUY-xHcla...


Hah. I had some time to think about that interaction and concluded (after a chat with a few peers at AppSec Cali) that he and I simply differ in mentality. I understand where he's coming from about the risks posed by that level of flexibility in the execution environment, but I'm also more prone to push for the idea that making encryption more convenient for all justifies taking some risks in terms of the execution environment so long as we can (to the best of our knowledge) sufficiently mitigate them. That and browser sandboxing helps a ton more than having someone break an app running in userspace.

His objection might be at a more visceral level, which I understand but steadfastly disagree with.


You guys seem to remember more about "this interaction" than I do. Someone want to bring me up to speed?


There was a thread a couple months back where I brought up WebSign (Cyph's TOFU in-browser code signing layer) in response to a story about MEGAChat, which you generally disapproved of but didn't go into detail on why. There was also some related back-and-forth between you and eganist.

My impression was that you hadn't necessarily had the chance to fully digest the architecture due to time constraints on your end, and were coming more from a position of "the premise of this is scary and it's almost definitely broken in some way" than of having identified a specific flaw. Also, as eganist noted here, you may have held a different opinion than us on the drawbacks of the flexible execution environment (despite Cyph's partial mitigations of TypeScript for static typing and asm.js for pseudo-native crypto) vs the benefits of the well-vetted sandbox.


Oh wow, is Cyph the one with the elaborate caching scheme where they invalidate the certificate once a week to prevent the package, once downloaded, from ever reloading itself off the website?


Yep, that's it. (To clarify, the signed package is freshly downloaded each time; it's just the root of the origin of the application that's pinned via "HPKP suicide" invalidation.)
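For context on the scheme being described: HPKP pinning is driven by the Public-Key-Pins response header (RFC 7469). A hypothetical example with invented placeholder pin values, using a one-week max-age to match the weekly invalidation mentioned above:

```http
Public-Key-Pins: pin-sha256="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
                 pin-sha256="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=";
                 max-age=604800; includeSubDomains
```

Once a returning client has cached the pin, rotating to the backup key and destroying the old one means even a CA-misissued certificate can't be used against that client until max-age (here 604800 seconds, one week) expires; the cost is that losing the keys bricks the origin for pinned users, which is exactly the availability risk raised elsewhere in this thread.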


Look, it's clever. I mean it. I'm not saying the people that came up with that insane scheme were dumb to do that. It would be a good research project!

But I can't imagine relying on that janky set of side effects for my own personal safety, and I can't recommend that anyone else do that either.

If you want to use continuity (aka 'TOFU') for security and you insist on getting your crypto in the form of a browser JavaScript application --- which is a bad idea for other reasons that I hope will become clearer later this year --- then you should just use an installable, non-auto-updatable Chrome application. People shouldn't play games with message crypto.

I really don't think doing message crypto in browser JavaScript is going to work out. Or, if it does, it will work out only after 5-10 years of semiregular hair-on-fire security emergencies that the world's most repressive governments will tend to know about a year or two before the rest of us do.


Fair enough! I'm comfortable with it in terms of security given that HPKP is at least a security feature that makes certain guarantees, but I fully acknowledge that:

1. A future update to browser implementations and/or Web standards could hypothetically break this in terms of availability, DoSing/bricking Cyph for all of our users; and

2. If the server were to violate users' trust by not actually deleting the old TLS private keys, it wouldn't be as outwardly apparent as if it had instead needed to serve malicious code to the client, which is at least a little gross.

Ultimately, I think the value of this is in making sure that initial user onboarding is as smooth and easy as possible (without being totally insecure), with the next step on our end being to offer an optional browser extension to fortify the scheme in a less experimental manner.

which is a bad idea for other reasons that I hope will become clearer later this year

Well, that sounds rather ominous... Are you referring to something specific, and if so is it something you would be at liberty to share (either here publicly or out of band)?


> 1. A future update to browser implementations and/or Web standards

For some reason I hadn't really considered that all modern browsers are self-updating, so in a standard setup all web apps are fundamentally insecure. All it takes is for an agency to steal a code-signing certificate (or get one somehow) and all efforts are defeated.

While bootstrapping based off of GPG signatures has its own problems (how do you verify the install ISO if you can't know that you have a correct version of GPG/PGP on hand, and you can't (and should not) trust the CA system, so SSL/TLS is out?) -- at least you can narrow things down a lot. All of the same issues and more apply to all other software. Mozilla can backdoor your Firefox, Google can backdoor your Chrome, Microsoft/Apple can backdoor your IE/Safari and your kernel... and of course Debian can backdoor your kernel too - but with an open system, it's much easier to control updates -- even if it might be beyond most people to actually verify each and every update themselves. (And to be fair, someone could steal Debian's signing keys at least as easily as they could steal Microsoft's -- and in the case of a targeted attack, I find it unlikely that a kernel backdoor in a signed Debian/RedHat/etc. update sent to only a handful of machines would be discovered.)

[ed: I guess the reasons to put less trust in browsers than in other OS updates might be: a) it adds a second set of things to worry about (the browser runtime, etc.) beyond the kernel and other userspace, and b) browsers are very frequently updated, so the opportunity to install a backdoor comes up quite often (assuming one wants to install it subversively; sending a custom update to a handful of clients at the same time as a general update is out).]


That isn't exactly a Web-app-specific problem. If you have native software running malicious code with user-level access (whether it's in your browser or any other random update from your package manager), you should consider the whole machine compromised.


> which is a bad idea for other reasons that I hope will become clearer later this year

You're speaking with two people involved with the project, so if you're able/comfortable with talking in more depth about it, my PGP ID: 0x4D4C724C4BFB3E3F


Cyph cofounder here; thanks for the mention! It'd be great to hear thoughts from Moxie or anyone else on WebSign.

By the way, I think this was the doc you meant to link to for [2]: http://cyph.team/websigndoc.


Speaking of e2e browser extensions, does anyone know what the status of Google's "End-to-End" Chrome extension[0] is? They started the project in June 2014 and the development pace has been somewhat slow.

I don't know if this is just a couple of devs' 20% project or if it has dedicated resources, or what.

[0] https://github.com/google/end-to-end


1. You still need a good jurisdiction to avoid being compelled by public or secret courts to damage your users. Switzerland is pretty reasonable about its privacy stance and LI (lawful intercept) requests. The flip side is that, outside Five Eyes, the TAREX teams can target them along with whatever TAO wants to throw in. No restrictions. They had better know strong INFOSEC.

2. The webmail complaint is probably legit. That does in nation-states' targets over time. It might be a compromise, though, to raise the baseline for those that absolutely refuse to use other stuff.

3. Most of the new services have this problem. It's like they have more enthusiasm or business savvy than actual INFOSEC skill. There are many schemes proven in the field or in pentesting that they can copy. Yet they're inspired by the one that failed? (Rolls eyes)

I'm just using GPG over MyKolab for now. Keep them untrusted but in a jurisdiction with lower availability risks from takedowns. I encourage you to keep up your own great work with the addition of cross-platform desktop stuff. Maybe people like me will finally get off GPG for something less horrible to use but that scares the NSA just as much. ;)


Disclosure Note: I'm with ProtonMail. Please note that I don't officially speak for the company. But I'm a crypto guy and this is Hacker News, so...

1. While historically advertising a hosting location was a bit of a red flag for snake oil, the Snowden disclosures changed things for SaaS providers. Jurisdictional arbitrage is indeed a security feature of the service. I think you're missing the point a bit in that it goes much beyond the physical servers. Simply locating servers in Switzerland doesn't provide much protection for users if you're an American company with US bank accounts. For example, choosing to run on German or Irish AWS servers doesn't really buy you much. But ProtonMail not only has all of its servers located in Swiss datacenters, it also:

1) Is a Swiss corporation fully under the jurisdiction of Swiss law (which also means it operates under strict customer data anti-retention requirements)

2) Holds its funds in a Swiss bank

3) Has corporate officers that reside in Switzerland

4) Offers .ch e-mail addresses and a .ch web interface that cannot be taken control of through US courts and that are resolved through Swiss DNS servers

5) Is using a non-US (Swiss) Certificate Authority (QuoVadis) for its certificates.

2. This is true. However, it's true about every web service. It is also true about any software that is either distributed over the web/TLS or has security updates distributed over the web/TLS. It also includes any software that runs on platforms that have patches distributed over TLS. That is nearly everything. It's no more difficult to insert malicious code into web apps than it is to insert it into mobile phone apps, desktop apps, or operating system patches when working with a compromised trusted TLS connection. While some may say that non-web apps have code signing or application signing keys, the fact is that most of either the signing or verification keys for those application code signature schemes are distributed over TLS. There are devices out there with trusted hardware and embedded keys, and some groups are starting to make use of proper TPMs. But high-quality trusted platforms are beyond the reach of most consumers and developers. I know of no platform that would catch the insertion of malicious code by a determined third party with the ability to compromise a TLS session in the release and/or development cycle. If web apps are faulted and everyone is using a web app (GitHub) to develop, well then, everyone is essentially equally compromised. The same goes for distributing updates or public keys for validation of code signatures.

3. The company did, in the early days, see Lavabit as an inspiration. But our systems are very different. Our systems are "can't read your mail", not "promise not to read your mail". Proton has no need to avert its eyes. There is no "plaintext in and plaintext out". There is no transmission of the private key decryption passwords back to the server. I don't think your comparison holds.

The way Proton works is that the encryption is done in the browser. A non-encrypted private key is not stored on, or ever sent to, the Proton servers. An OpenPGP keypair is generated in the browser by the user. The public key is sent to the servers and stored in a database. The private key is encrypted in the client's browser, with a passphrase the client enters, using the open-source OpenPGP.js library. That encrypted private key (encrypted with a password that is never sent back to the Proton servers) is sent to the Proton servers and stored in a database. When a user logs in, their encrypted OpenPGP key is sent down to their browser along with their public-key-encrypted e-mail. Their web browser then decrypts their private key and uses it to decrypt their email on the local computer. We never have to avert our eyes from their passphrase because it never traverses our systems. The decryption is done locally.

Obviously, it would be better for us not to store the private key at all (even though it's strongly encrypted). But that's just not practical for a webmail application.
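As a rough sketch of the flow just described (stdlib Python only; the toy XOR keystream below stands in for the AES/OpenPGP encryption used in practice, and all names are invented for illustration):

```python
import hashlib
import os
import secrets

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Client-side KDF; the passphrase itself never leaves the "browser".
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream standing in for real symmetric encryption (AES).
    stream = b"".join(
        hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        for i in range((len(data) // 32) + 1)
    )
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR with the same keystream undoes itself

# Client side: generate keypair, encrypt the private key under the passphrase.
private_key = secrets.token_bytes(64)   # stands in for an OpenPGP private key
salt = os.urandom(16)
kek = derive_key(b"correct horse battery staple", salt)
encrypted_blob = toy_encrypt(kek, private_key)

# Server side: stores only the salt and the encrypted blob, never the passphrase.
server_db = {"salt": salt, "blob": encrypted_blob}

# Later login: client re-derives the key locally and decrypts locally.
kek_again = derive_key(b"correct horse battery staple", server_db["salt"])
recovered = toy_decrypt(kek_again, server_db["blob"])
assert recovered == private_key
```

The point of the structure is that everything the server holds is ciphertext; only the client-side derivation step can turn it back into a usable key.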

So, I'm sure everyone is wondering: What would a TLS based ProtonMail compromise look like? Well, modified code (nefarious javascript) would be injected into the TLS stream that would send the user's decrypted private key and/or private key pass phrase to a third party or otherwise expose it (as invalid packets, etc) in the stream. Or, carefully selected cryptographic primitives would be inserted into the software. I contend that these are the same vulnerabilities someone faces downloading GnuPG from its distribution sites, downloading Firefox/Chrome/IE, or even applying Windows/Linux updates.

And, I'm also sure people are wondering: What would a US Government compromise of the ProtonMail servers look like? Well, I'll leave the details out on how the US Government might get their hands on Swiss domiciled servers... maybe something like the time they cut through a datacenter wall at MIT in the middle of the night to get the early PGP code. Let's just assume they have the servers. They'd first have to break the disk encryption. Once they broke the disk encryption, they'd have to break into the database. Once they did that, they'd basically have a bunch of AES encrypted keys that they'd have to run password guessing attacks on.
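That last step is the crux: with only the encrypted blobs in hand, the attacker is reduced to offline passphrase guessing, where every candidate costs a full KDF run. A toy illustration (the salt, iteration count, and wordlist are all invented; ProtonMail's real parameters may differ):

```python
import hashlib

# The stolen artifact: a key encrypted under a PBKDF2-derived value.
# Here we model just the derived value the attacker must reproduce.
salt = b"\x00" * 16          # invented; real salts are random per user
iterations = 100_000
target = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, iterations)

# Offline guessing: every candidate passphrase costs a full KDF run,
# so throughput is bounded by the iteration count.
wordlist = [b"password", b"123456", b"letmein", b"hunter2"]
cracked = next(
    (w for w in wordlist
     if hashlib.pbkdf2_hmac("sha256", w, salt, iterations) == target),
    None,
)
```

Weak passphrases in a wordlist fall quickly; a long random passphrase pushes the search far beyond practicality, which is why the passphrase strength is the real security parameter in this scenario.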

ProtonMail is trying to bring cryptography to the general population to protect the basic human right of privacy. We are doing webmail the best possible way webmail can be done because webmail is the primary method of communicating for the vast majority of people today. We are not going to get our grandmas to sit in Faraday cages and work on trusted platforms with one time pads that they exchanged at their church groups and yoga classes.

There is no security through obscurity in what we're doing. Honestly, I'd be honored if you took a look. We use Signal internally. I've read through the code and was very impressed. I couldn't find a single flaw that I could exploit. Perhaps, you'd even be willing to do a consulting engagement to audit us?


> 2. This is true. However, it's true about every web service. It is also true about any software that is either distributed over the web/TLS or has security updates distributed over the web/TLS. While some may say that non-web apps have code signing or application signing keys, the fact is that most of either the signing or verification keys for those application code signature schemes are distributed over TLS.

If you install Debian you have to make sure that the ISO was not compromised. You can do that by calling a third party and comparing the published checksums, or any other out-of-band solution. Once this base of trust is established you are good to go. In fact, updates are fetched over plain HTTP because all the updates are signed using GPG.

Now compare with any "web crypto": every time the user loads the page he starts back from zero. An attacker can MITM at any time and inject a trivial `form.onsubmit=sendCredentials`, and your user is compromised. There is no way to verify out-of-band that the distributed JavaScript is coming from ProtonMail, and even if there were, it would have to be done on every page load.


> If you install Debian you have to make sure that the ISO was not compromised. You can do that by calling a third party and comparing the published checksums, or any other out-of-band solution. Once this base of trust is established you are good to go. In fact, updates are fetched over plain HTTP because all the updates are signed using GPG.

It's hard to take this seriously - do you really call a third party every time you rebuild a Dockerfile and run an update with your package manager? 99.9% of developers deploying to production do not. If anything, the best guard we have is for Debian (as in your example) to realize their keys have been compromised (probably by someone else suffering from such an attack) and alert their users; this has an inherent delay.

> Now compare with any "web crypto": every time the user loads the page he starts back from zero. An attacker can MITM at any time and inject a trivial `form.onsubmit=sendCredentials`, and your user is compromised. There is no way to verify out-of-band that the distributed JavaScript is coming from ProtonMail, and even if there were, it would have to be done on every page load.

As I wrote above, your concept of verification is quite unrealistic - but even so, I would say it's fair to say the danger of an SSL cert for "web crypto" being compromised is definitely greater than for the desktop, since even Dockerfiles are rebuilt far less often than web pages get reloaded, so a much greater number of users would be affected before they became aware.

I tend to think ProtonMail is a good thing -- it's obviously not the best choice if you need as many security guarantees as possible, but for the general population this is a strong improvement over sell-all-your-data-to-advertisers Google Mail.


There's really no comparison between the two scenarios. Hacking the MIT server or the distros to replace the GPG code is an incredibly noisy attack, one which will leave an evidence trail the size of Utah, home state of the NSA. It's a risky attack because you don't have selectors (you don't know to whom to provide the trojanized code), so you will necessarily infect more targets than necessary. There are also multiple manual and automatic tripwires, MD5 hashes, code signatures, etc. that must be circumvented, and triggering any of them will blow the whole thing open and lead to public outcry. It's also currently illegal and leaves binary forensic traces on the infected machines. On the other hand, forcing an email provider to collaborate is standard legal practice (Lavabit), you have a perfect selector (the email address of the target), there is no tripwire, and getting the key will permit you to decrypt all past and future communication. Once the user closes the browser, the evidence is gone. Easy as pie.

So while in principle the threat model is similar, the practicalities of the two situations are vastly different, questioning the whole "we want to provide practical security" mantra.


> There's really no comparison between the two scenarios. Hacking the MIT server or distros to replace the GPG code is an incredibly noisy attack..

What percentage of the packages you use in a production deployment are GPG-signed? The minority, I think.

> On the other hand, forcing an email provider to collaborate is standard legal practice (Lavabit), you have a perfect selector (the email address of the target), there is no tripwire, and getting the key will permit you to decrypt all past and future communication. Once the user closes the browser, the evidence is gone. Easy as pie.

This is totally irrelevant to the current conversation. As discussed, ProtonMail uses client-side crypto with PGP-encrypted messages -- meaning the only thing they store is your encrypted keys.

Yes, if ProtonMail could be forced to serve passphrase-stealing JavaScript then your encrypted keys would be vulnerable - but so would you be if Debian were forced to serve a keylogger as a kernel module. BTW, I do agree that SSL is a much higher risk factor than a Linux system with a local mail server using PGP - but I don't see a better alternative than something like ProtonMail for the majority of consumers.


>Yes, if ProtonMail could be forced to serve passphrase-stealing JavaScript then your encrypted keys would be vulnerable - but so would you be if Debian were forced to serve a keylogger as a kernel module.

That's exactly what I'm saying, the situations are not remotely comparable, reasons as stated.


There is no need to modify the ISO image on the server's disk, just as there is no need to modify the ProtonMail source code on disk (both of which would make it more likely to get caught). The NSA is known to have the ability to modify an ISO image as it is downloaded in a single selected TLS stream. So, if TLS is compromised, an attack on a Linux distro is the same as an attack on ProtonMail.


I don't know why you are talking about Docker, but even for Docker, I have their public GPG key set up in my Puppet installation code. Docker is installed with APT, using the same signature mechanism. Docker in turn now does container signature verification. There is a chain there.

If an attacker wants to compromise that chain he has to be present from the start and re-sign the packages through MITM all the time. If I switch Internet connections and fetch an update, I will get a signature mismatch.

Now compare to ProtonMail's webmail. At any point, if an attacker is able to MITM SSL then the user is compromised. Game over. The client won't even have the chance to see a signature mismatch and take appropriate action after the fact.

I'm not saying that ProtonMail is a bad idea, but "web crypto" definitely is in my book. It doesn't mean you can't implement another client for the desktop like you did for Android and iOS. Distributing the software and the data on different channels really makes the attacks more difficult.


> It's hard to take this seriously - do you really call a third party every time you rebuild a Dockerfile and run an update with your package manager?

It's best practice for all these kinds of develop/deploy processes to (automatically) verify GPG signatures. If you verified your initial ISO, then you know (to a certain extent) that you have a known-good GPG binary. There are ways to attack the web of trust, and each time you add/update a trusted key there are issues -- but compare this to the number of shifty CAs all browsers trust out of the box. Any single one of them is enough to trick the client.

Attack scenario: disrupt client access to the Internet. Send what looks like the webmail page. On the user entering the passphrase, log it, then replay the login details to the actual webmail service (getting the private key). If you're a normal attacker: download the email and decrypt it. If you're a state agent: get the encrypted email from intercept logs (this assumes TLS is broken -- let's hope it isn't -- or that there is some way to intercept the non-TLS traffic, e.g. between the load balancer and disk storage).

How would this compare to the air-gapped laptop used for traditional email? You would need to physically attack the laptop - not just have access to the ISP. That's the difference between economical (targeted or not) mass surveillance and "boots on the ground".

How does this compare to traditional non-airgapped GPG-encrypted mail: in order to compromise a client that is updated via GPG-signed updates, you'd have to get a signing key, or implant one -- not simply bully any old CA out of hundreds into giving you one. Then you'd have to trigger an update somehow, or intercept one. Given the above, if you can own the client's net access this shouldn't be too hard. But the client needs to install those updates. With a web service, the client just needs to access the app.


> I contend that these are the same vulnerabilities someone faces downloading GnuPG from it's distribution sites, downloading Firefox/Chrome/IE, or even applying Windows/Linux updates.

If you had in mind the typical Windows user who downloads programs from the internet without any security check, then you're right, but the typical Linux user doesn't do this and instead installs software via the package system provided by the OS.

I.e. installations and updates in reasonably secure Linux distributions (which AFAIK is the majority, including at least Debian) are signed through a key that's been installed at OS installation time. Which means that at least in the way you have worded it your statement is not correct. There is of course the initial OS image that you need to find some other way to trust (but this is the same in your case), and there are the package maintainers who download the source of these programs; hopefully the latter will take their job seriously and verify the authenticity of the source in some way (download from/to various places, find hashes done by others, check PGP signatures on source files provided by program authors, ask for hashes by talking to authors directly, inspect source diffs). A package maintainer working with an upstream author can ensure that users will get uncompromised programs, whereas you're dependent on an uncompromised CA and thus have an additional vulnerability risk.


But, where did you get the public key that you're using to verify the signatures? Personally, I got that public key because it was included in an ISO image that I downloaded over TLS from a website. So, if that TLS session was compromised, a fake public key could have been inserted into the ISO. Or the copy of GnuPG in that ISO could have been modified not to complain when the signature of certain pieces of code did not validate.

Ideally, you should be able to use the WoT (Web of Trust) to validate the public key that comes with the ISO (validating that the copy of GnuPG, glibc, or the kernel has not been modified to mess with signature validation is much harder). But have you ever looked at the signatures on the public keys that most distributions use to validate their software releases? There are usually no signatures at all. The keys have no WoT links at all. They don't even use it. One example of this is Red Hat Linux.

I actually wrote a document that describes how to do Strong Software Distribution with PGP (https://cryptnet.net/fdp/crypto/strong_distro.html). Unfortunately, no one really does it.


> But, where did you get that public key that you're using to verify the signatures? Personally, I got that public key because it was included in an ISO image that I downloaded over TLS from website.

Which is why I said "There is of course the initial OS image that you need to find some other way to trust (but this is the same in your case)". I explicitly wrote my post to point out that ProtonMail has one additional, independent vulnerability risk.

You were talking about the GnuPG or browser install, and I said that it would be wrong to claim that this is as vulnerable as getting the library from a webmail service if you're talking about decent OSes. This view is not invalidated by the risk of the initial OS install, since that's a different risk: you run the OS installation once every few years or so, but you check email every day. And if your OS is compromised, then you have lost in any case and might as well use Yahoo/Hotmail/Gmail/Facebook/postcards.

For the OS install you can also take precautions when getting the image (e.g. download or verify the image over Tor, compare the hash with results from search engines or other people); in fact, Debian does provide PGP signatures for their OS images[1].

[1] http://cdimage.debian.org/debian-cd/8.3.0/multi-arch/iso-cd/
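A minimal sketch of that kind of out-of-band hash check (Python stdlib; `verify_image` is an invented helper name, and the published hex digest is whatever you obtained over Tor, from friends, or from several independent sources):

```python
import hashlib
import hmac

def sha512_of(path: str) -> str:
    """Stream a file through SHA-512 without loading it all into memory."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path: str, published_hex: str) -> bool:
    # published_hex comes from an out-of-band channel; a constant-time
    # compare is cheap insurance even though timing hardly matters here.
    return hmac.compare_digest(sha512_of(path), published_hex.strip().lower())
```

The hash only tells you the download was not tampered with in transit; pairing it with the PGP signature on the SHA512SUMS file (as in the gpg transcript below) is what ties it back to the Debian signing key.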

> But, have you ever looked at the signatures on the public keys that most distributions use to validate their software releases?

Yes, I'm in fact always doing exactly that. Of course people who don't have a PGP implementation yet can't do it as easily. They can still take different precautions. Chicken and egg is not an absolute dilemma.

Also, Debian creates the above signature with a key that is signed by people I actually have a trust path to:

    $ gpg SHA512SUMS.sign 
    gpg: Signature made Sun Jan 24 18:08:46 2016 GMT using RSA key ID 6294BE9B
    gpg: Good signature from "Debian CD signing key <debian-cd@lists.debian.org>"

    $ gpg --list-sigs 6294BE9B | grep ^sig | cut -c14- | sort -u -k 2
    C542CD59 2011-01-05  Adam D. Barratt <adam@adam-barratt.org.uk>
    6294BE9B 2011-01-05  Debian CD signing key <debian-cd@lists.debian.org>
    A40F862E 2011-01-05  Neil McGovern <neil@halon.org.uk>
    63C7CC90 2011-01-05  Simon McVittie <smcv@pseudorandom.co.uk>
    3442684E 2011-01-05  Steve McIntyre <steve@einval.com>
    1B3045CE 2011-01-07  Colin Tuckley <colin@tuckley.org>
    95861109 2011-01-23  Ben Hutchings (DOB: 1977-01-11)
    30B94B5C 2011-02-08  [User ID not found]
    9011A5AE 2011-05-02  [User ID not found]
    53CD659C 2011-05-09  [User ID not found]
    8143B682 2012-08-26  Neil Williams (Debian) <codehelp@debian.org>
    0125D5C0 2012-08-31  Philip Hands <phil@hands.com>
    046F070A 2013-04-22  [User ID not found]
    5DF05C03 2013-06-11  [User ID not found]
    C778A4AB 2013-07-06  [User ID not found]
    10038D31 2013-07-23  [User ID not found]

Yes, I'm sure there will be users who won't know or care how to check this, and I guess some will still benefit from using ProtonMail. I just don't want the existing difference swept under the table.

Your remaining advantage could be that the Swiss CA that you're using provides more security than the potentially non-Swiss CA used for the website a "normal" user (who doesn't do further checks) is downloading his OS image from. At that point a risk calculation using probabilities would need to be done to see which solution looks more secure.


1. It's too bad that Switzerland has a mutual legal assistance treaty relationship with the United States which requires it to hand over any information legally available to their local authorities to the requesting government.


1. Pwn the servers. 2. Replace the nice in-browser crypto JavaScript with a bad version that captures and logs the passphrase and any emails you write. 3. Collect your bounty whenever a user logs in.

In order not to have to wait for users to log in to the now compromised service and capture their keys, don't actually encrypt newly incoming email anymore.

So once I compromise the service I therefore get immediate access to all newly arriving email 100% of the time, and to any already stored user's email if the user ever logs in again, all without breaking/brute-forcing any crypto.


Hi,

First, thank you for your work. Everybody who implements strong crypto for the masses should be commended.

1) Can I use my own GPG key, even if only for GPG users outside of ProtonMail?

2) Do you support access to Smartcards (NFC) or OpenKeychain (it supports Smartcards)?

3) Can I use TOTP/HOTP/U2F for app and/or web login, specifically U2F over NFC on Android (maybe Bluetooth LE on iOS)?

4) Is your HTTPS connection pinned?

5) Do you live in Switzerland, and if so, would you speak at a hackerspace about ProtonMail's crypto?


I have a simple, maybe stupid question. Why is it necessary to have two passcodes for ProtonMail? Wouldn't it theoretically provide the same level of security if the password used to access the email were also used to encrypt the messages?


I haven't read the code, but this is how it would usually work: the first password is required for the encrypted private keys to be sent to you, and the second is used to decrypt those keys -- so one is hashed and goes to the server, and the other is used client-side to decrypt the keys sent back in the response.
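A minimal sketch of that split (hypothetical, not ProtonMail's actual code): derive two independent secrets from the two passwords, so the server only ever sees the login-derived one.

```python
import hashlib
import os

def derive(password: bytes, salt: bytes) -> bytes:
    # PBKDF2 stands in here for whatever KDF the real service uses
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

login_salt, mailbox_salt = os.urandom(16), os.urandom(16)

auth_token = derive(b"login-password", login_salt)       # goes to the server
mailbox_key = derive(b"mailbox-password", mailbox_salt)  # stays client-side

# The server stores the private key encrypted under mailbox_key, so it can
# authenticate you without ever being able to decrypt your mailbox.
assert auth_token != mailbox_key
```

The server compromise concern from upthread still applies, of course, since the server also ships the JavaScript that performs this derivation.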


> "it does seem that (much like Lavabit), the service is built on the premise of "won't" read your mail rather than "can't" read your mail."

Can you help me understand this comment? I get that the usual problem with webmail is that it's "plaintext in, plaintext out"... So you just have to trust the server operator that they're actually encrypting anything, doing it well, not compromised, etc.

But if that's the problem, why isn't the answer to have client side code handling the encryption? From skimming the page source and reading https://protonmail.com/security-details, that's exactly what Protonmail does.

First you use a username/password to authenticate with ProtonMail - that's cleartext in/out - this only determines which account you are accessing. Once you're authenticated, your browser runs an open source AngularJS application, which asks you for a separate, client-side-only password. Once you've entered it, your browser only sends/receives ciphertext.

I know you by reputation and I'm a big fan of your work... so I assume I'm missing something. Am I misunderstanding the problem? What am I missing here?


If you have an installable client that performs the encryption, yes. But if the encryption is driven by Javascript that is or can be loaded when you hit the website, then no: the application is only as secure as the HTTP connection you rely on to deliver the code to you every time you hit the site.


that's totally fair, but you are dependent on transport layer security implementation no matter what.


There are secure channels for delivering installable client software that allows protection of content even without transport layer security, and you only need to do it correctly during installation and intermittent updates, not at every access.


what are the secure channels that don't require an uncompromised server and uncompromised client? I can't think of any.

It's a good point that you only have to trust both environments at install and update... but you can do the same thing for javascript in a browser: set a long cache time, and browsers will use their local copy until the remote is updated. Use a module loader pattern, and you can compare md5 hashes of the local and remote libraries before sensitive data is handled.

I'm not saying it would foil a dedicated attacker - nothing would, anyway - but it would be good enough for protonmail's use case: protecting normal people against mass surveillance.


The benefit of installable client software is that the encryption software and the relaying party (your e-mail provider) are completely unrelated and separate. So if I use GMail and encrypt my messages to Bob with GnuPG, then no matter what GMail does with the encrypted message, it either gets delivered securely (and is verifiable by Bob as having been written by me) or not. Any adversary would have to compromise the source of my operating system's GnuPG version (or Bob's), and have access to the messages themselves.

The browser model explicitly trusts the remote server to provide whatever JavaScript code it sees fit, so if your e2e webmail provider is compromised, you get a backdoored crypto library that works as far as your browser and you can tell.

It all comes down to trust of course, but the separation of concerns adds a very real layer of security, and I trust Debian and its maintainers (and the experts who scrutinize these things) more than a single commercial e-mail provider.


> what are the secure channels that don't require an uncompromised server and uncompromised client? I can't think of any.

Assuming you trust your hardware: I could hand you a bootable CD.

In practice, there's a bootstrap problem: the general way is to trade the insecure CA system (it will always be insecure; there are too many trusted parties, namely all the CAs) for a web of trust and/or widely distributed hashes of ISO images/install files.

While your CD-writing software could surely, in theory, be backdoored so that it compromises all bootable ISOs (all bootloaders?) that contain a gpg executable, that is something quite different from having a rogue CA sign a cert, allowing the replacement of JS code on a single connection.

Ultimately, one can only trust people, web-of-trust is an attempt at widening that network geographically with the help of technology. It's not perfect, but I think it's hard to argue that it's not better and more flexible than the CA system.

[ed: Note that getting an iso file over an insecure channel; bittorrent, ftp, http -- isn't a problem if you can verify a signature. Now, you still need to have a way to trust the signature (by way of well-known, trusted, cross-signed keys). But compromising the server and the transport now becomes a denial of service, not a compromise of the client. And again, yes there are some bootstrap issues here too.]
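As a concrete sketch of that last point (illustrative only; a real workflow would first verify a gpg signature on the checksums file itself): comparing a downloaded image against a digest obtained out of band turns a tampered download into a detectable failure rather than a compromise.

```python
import hashlib

def sha256_file(path: str) -> str:
    # Stream the file so arbitrarily large ISOs fit in constant memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path: str, expected_hex: str) -> bool:
    # expected_hex would come from a signed SHA256SUMS file whose signing
    # key you already trust: that is the out-of-band bootstrap step
    return sha256_file(path) == expected_hex
```

If the digests don't match, the worst case is a refused install, i.e. denial of service, not silent client compromise.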


The problem is that they're trusting the integrity of that client-side code to their servers on each use, which is in practice not meaningfully different from a traditional non-E2EE email service.

The only difference in attack pattern, in the event of a server compromise, would be that the attacker would have to send you a line of JS to steal your password/key before reading your emails.


So—run your own server with your own root CA?


How (if at all) does this change if you only use their iOS/Android app to access their service?


Is there a "better" free alternative webmail?


You should consider all of them insecure and big-time snoops if they're free and/or webmail. If webmail, the best you can do is use a paid service with anti-snooping terms of service and preferably favorable local laws. That just reduces the number of parties that will attack you. Anything further requires using something like PGP/GPG on top so they can't read your mail. Strong endpoint security, too.


Thanks for taking the time to reply. Non-webmail may not be accessible from everywhere, and paid email entails a money exchange which may be traceable. Lavabit seems to have been secure enough for Snowden (for a while).

So the question remains: Yes Free and/or Webmail are bad - which one is the LEAST bad from a privacy & security perspective?


There's a good list here that should allow you to decide for yourself:

http://prxbx.com/email/


That is a good list.


I like mailbox.org

They support pgp encryption.


It's really nice to have this kind of privacy, but I don't understand how people can use a service like this while they don't provide the ability to make local (encrypted) backups.

If their servers disappear for whatever reason (legal issues or hardware problems), you end up with 0 e-mails. Nothing.

Tutanota.de has the same problem, no backup feature.


If you care enough about privacy to sign up for protonmail, why not just learn how to use pgp? It's not perfect, but it works. And... oh goodness, I'm looking at this [support page][1]; If you send encrypted mail to non-protonmail addresses, the recipients have to open the message in a web browser and enter in a password to read it. So instead of exchanging public keys, you have to send them a password, probably over an unencrypted channel. I guess it's easier than asking them to generate a keypair. But if your recipient is too lazy to set up pgp, they're probably going to be just as annoyed at having to open your message in a browser and enter in a password, right? It seems like an awkward compromise.

[1]: https://protonmail.com/support/knowledge-base/encrypt-for-ou...


Even Phil Zimmermann, PGP's creator, says it's too hard to use:

“I hardly ever run PGP. When people send me PGP encrypted mail I have to go through a lot of trouble to decrypt it. If it’s coming from a stranger, I’ll say please re-send this in plain text, which probably raises their eyebrows.“

http://www.forbes.com/sites/parmyolson/2013/08/09/e-mails-bi...

I tried the ProtonMail password protection feature today (which is optional; you can just send plain text email to non ProtonMail accounts if you wish – this is the default, but you can click a lock to password-protect any outgoing message).

The feature is well done, and I can see people using it to exchange secure mail, particularly if the recipient does not have a ProtonMail account. It has a built-in 'send secure reply' feature once you've entered the password as the recipient. This allows you to read the message and reply in-browser without returning to your email client, and without having a ProtonMail account, so both the outgoing message and the reply stay more secure than regular email throughout your conversation.

This could be helpful for a wide range of uses, including support email that involves transferring login info and logs.

The read and reply flow looks like this to the logged-out recipient:

- Email with “view secure message” prompt: http://d.pr/i/1cwZ8

- Click “View secure message”: http://d.pr/i/13GlI

- Enter password to view message: http://d.pr/i/11Gg9

- Click to reply in same tab: http://d.pr/i/WWWB

You still have to communicate the password securely somehow, but it's a much more user-friendly approach than PGP.

I'm not saying it's a replacement for https://securedrop.org/, but it still has value.


"PGP is hard to use" is more memetic than accurate. What makes PGP hard is that it has a million options, and its vocal users (and detractors) seem insistent on availing themselves of as many of them as possible.

In reality, 80% of PGP's value (which is more value than you'll get out of any webmail system), you can get with three command lines:

    gpg -sear recipient@addr document.txt
Encrypt and sign a document, ASCII armored for inclusion in a 7-bit email, to a specific person or persons

    gpg -a --export my@addr
Dump your public key for out-of-band transmission to the peer you're exchanging messages with

    gpg document.txt.asc
Decrypt a message sent to you by someone else.

No keyservers. No subkeys. No crazy formatting. No exotic key types. Send a message to someone, read a message back from them.

You don't get forward secrecy (unless you manually rotate keys). You don't get peer validation unless you do it manually. You don't get real time chat. You can't encrypt a videoconference. On the other hand: you won't be the subject of someone's amusing blog post a year from now, either, because PGP used this way has been reliable for something like 15 years running.

Everyone I know who actually uses PGP, like for real, more than a couple times a year, uses it pretty much this way.


If those commands are so simple, why are they not built into chrome? Honest question. It's where I actually write my emails.


If a browser provides an OpenPGP API, then how would I know whether or not a website uses it properly? What is to stop a compromised webmail provider from serving me a web page that claims to use this API to encrypt my message before sending it, but also simply sends along the plain text captured from the mail composition page?

You are typing your sensitive message right there in the browser, in the web page provided by them, with JavaScript provided by them, which can simply send whatever you type directly to their (compromised) server.

This means OpenPGP in the browser is technically feasible, but very insecure because a third party can access the data before it is encrypted. To make a browser provided OpenPGP API work, it would have to provide the input area where you type your confidential message in a way that makes it clear it is the browser that provides it, and the web page JavaScript cannot access it before encryption. This is of course exactly what stand-alone e-mail clients do, so why implement it in the browser?


Welp, you successfully dissuaded me from ever looking into pgp.


Hopefully, then, they've dissuaded you from all browser-based crypto.


> 80% of PGP's value ... you can get with three command lines:

While I completely agree that those commands are all that is needed most of the time, it's this part that unfortunately makes pgp/gpg so unpopular:

> command lines

Even on HN I find people who hate the command line (even though they use it). The problem is the complete lack of front-end wrappers around those commands in every other email tool.

The Enigmail plugin for Thunderbird is surprisingly nice. It even does a decent job at key management (including the web of trust), and it made the "80% of PGP's value" completely transparent. If you have the key for an address, emails are automagically piped through gpg, and it handles decryption for you similarly (with gpg-agent support for less passphrase typing). Of course, that was only useful for the handful of us who used Thunderbird; with Mozilla dropping the project, it's questionable whether this has a useful future.

Almost all of the complexity with pgp/gpg can be hidden behind a nice GUI and opportunistic[1] enabling of the crypto. Unfortunately development has been focused on walled gardens and tying people to artificial dependencies[2] instead of writing useful client-installed software.

[1] Fallback to plaintext that people use currently is necessary until a critical mass of keys have been shared.

[2] why write a client app when you can make it hard for people to switch to other services by tying them to your domain name? /sigh/


Often it is even simpler than that. For encrypted e-mail using an e-mail client (such as Thunderbird with Enigmail) for addressees with known PGP-keys, hitting the 'encrypt' button and entering your passphrase is all that is needed once all is set up. Decrypting means you get prompted to enter your passphrase, and that's all there is to it.


Apart from the discussion surrounding the crypto side of all this, one thing that ProtonMail has really succeeded in doing is making encrypted mail finally an accessible thing to a non-technically inclined user.

I've spent well over a year trying to get my team to properly gpg encrypt sensitive files only to constantly find them sending plain unencrypted files over slack, dropbox and even simply gmail.

At least now, finally, they've found ProtonMail easy to use and are choosing to send files this way.

I think the key, unfortunate reality is that until the end user doesn't have to think about encryption, they aren't going to bother using it. Bringing usability to encryption is another huge part of the problem that needs to be considered as it is basically a guarantee that the typical user will take the path of least resistance.


I'll say this, a service like this is great from the network effect alone. While I might not think it's secure, it'll allow me to set up a traditional email-client+gpg and send encrypted email to people. And it'll provide a way for people to migrate to secure email, at a later point.

Much like I could for a brief moment actually talk to people over XMPP until Facebook and Google cut off the limited support they had for it. Most of those I talked to used the horrible web interface of Facebook/Google talk -- but I could use something that made sense. And I could even talk securely to some people on Facebook, as OTR worked fine over Facebook XMPP chat (while obviously breaking the web interface).


These guys got attacked recently by a DD4BC outfit and paid the ransom. I wouldn't trust them with a shopping list, let alone my email.


Do you have any sources for this?

I haven't heard about this before and would like to know more if possible.


It's mentioned in the article linked here.


> The app is a good example that even if the government forces US companies like Apple to create backdoors, users will still have communication options that the government can't crack. If you're interested, it's now available for Android, iOS or the web.

That is immensely misleading. If some adversary has a backdoor in the OS/platform, then in many cases no amount of encryption will save you.


If you input your encryption passphrase on a keylogged device, your entire encryption userspace falls apart.


not if you do your crypto on a dongle (such as a yubikey), at which point they also need physical access to get the private key.

It's perfectly viable for services like protonmail to allow 2FA via a dongle which would not be compromised permanently by a totally pwned userspace.


If you input your emails on a keylogging device, your emails are compromised.


Folks interested in rolling their own, checkout http://github.com/jakeogh/gpgmda

It's rough around the edges (and elsewhere) but works for me.


I remember when people used to flock to HushMail as the preferred encrypted e-mail provider. Where are they now?


HushMail... wow, that's some history right there. Anyways, they got in bed with feds, and that was pretty much the end of that..

http://www.wired.com/2007/11/encrypted-e-mai


"Even we cannot read your e-mails!"

Exactly what those guys said 10 years ago, and exactly what Proton is saying now.

What is different?


The difference is that they are lying. Hushmail encryption is not end-to-end. https://en.wikipedia.org/wiki/Hushmail


You're correct. It can't possibly be end-to-end, since they have the keys, and they can provide webmail. They use OpenPGP but they give themselves the private key! This is dishonest, in my view.


There are lots of drawbacks to an SMTP based service vs. something built on a newer protocol with client-side crypto everywhere, but email has a huge installed base. This becomes a question of "good" vs. "perfect", but it might also be a question of "too dangerous to use" vs. "replacement" -- depends on threats, alternatives, and quality of implementation.

I know one of the people at Proton Mail pretty well, and have met the others. Of all the companies doing secure email right now, they seem like the best.


I was curious to read about any proposed successor protocols there are for SMTP that offer end-to-end encryption. It looks like Dark Mail might be one of the forerunners in this regard?


A few critical questions:

1) How are spam filters implemented if they can't access the content?

2) How is metadata protected?

3) Also, can I use POP/IMAP and store my mail locally, or must I trust them with my data?


Regarding 3, no, there is no protocol to store mail locally. You have to use webmail or one of their applications.

https://protonmail.com/support/knowledge-base/imap-smtp-and-...


It's odd. Seems like a no-brainer to allow power users access over IMAPS; just put the encrypted private gpg key in an IMAP folder, and let the users mirror their encrypted mail.


1) It seems that mail is stored encrypted, but on arrival it would still be scannable, just as it would be on send. Unless the people sending to you or receiving from you have a key, there's not much that can be done about that.


The fundamental problem with e2e encrypted email, outside of the many technical limitations, is that the encryption is only as strong as the weakest link. Every member of an email chain must support the e2e encryption in order for it to work. One unencrypted sender/receiver is sufficient to break the whole model.

So if you want encrypted email, you can only communicate with other people using the same encrypted email stack.

What's the point of that? If you're paranoid enough to the point that you're using encrypted email and you can convince all your correspondents to use it as well, then you have far better options available to you than email.

It really makes no sense to me... It's 2016, there are thousands of ways to communicate online, many of them encrypted. Why force the use of email?

We would all be far better off if we just assumed that all email is public. If you need private communication, don't use email.


The way Protonmail and co try to solve this problem is by sending the non-protonmail recipient a notification email, with "click here to read". From https://protonmail.com/security-details :

"We support sending encrypted communication to non-ProtonMail users via symmetric encryption. When you send an encrypted message to a non-ProtonMail user, they receive a link which loads the encrypted message onto their browser, which they can decrypt using a passphrase that you have shared with them. You can also send unencrypted messages to Gmail, Yahoo, Outlook and others, just like regular email."


At that point, you might as well send people invites over email to switch to using a completely different messaging system that is more secure.


Can you search your emails? If so, how is the search index stored? Is it local or stored at ProtonMail?


In theory, there's nothing stopping you from creating a JavaScript indexer that updates the index whenever the user logs in and reads (thus decrypts) an email. The updated index is compressed, encrypted, and sent back to the server.

You could have multiple megabyte-sized indexes, for example each covering a certain timespan. The default search action would be to search, say, the preloaded index for the last month, and the user would expect an "All time" search to take a bit longer as more indexes are downloaded. (Patent pending)
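A toy version of such an indexer (in Python rather than JavaScript, purely illustrative): build an inverted index client-side, compress it, and treat the compressed blob as the thing to encrypt and upload.

```python
import json
import zlib
from collections import defaultdict

def build_index(messages):
    # messages: {message_id: decrypted body}; result: {word: [message_ids]}
    index = defaultdict(list)
    for msg_id, body in messages.items():
        for word in set(body.lower().split()):
            index[word].append(msg_id)
    return dict(index)

messages = {1: "quarterly report attached", 2: "report deadline moved"}
blob = zlib.compress(json.dumps(build_index(messages)).encode())

# In the scheme above, `blob` would be encrypted client-side before upload;
# searching means downloading, decrypting, and querying it locally.
restored = json.loads(zlib.decompress(blob))
assert sorted(restored["report"]) == [1, 2]
```

The server only ever stores the opaque blob, so search works without giving it plaintext.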


Yes you can, but only by Subject (at least that's how it seems to me after some tests)


It's webmail. Everything is stored on their servers.


It can't be searched on the server. The article says "The app is encrypted end-to-end and, like Apple's iPhone, can't even be accessed by the company itself."


Everything available in the app is available in the web interface (I am a ProtonMail user).


The strong encryption makes it impossible for the company to comply with government demands for data. And since ProtonMail and its servers are located in Switzerland, there's nothing that US authorities can do to shut it down. The company gained a lot of publicity, much of it bad, when a leaked document revealed the app was a preferred choice for ISIS terrorists.

Couldn't any company host their servers outside of jurisdiction and bypass US laws? I wonder how US gov't plans to work around this.


They'll do what they did to Wikileaks. The USG phones the Visa and MasterCard CEOs and threatens the shit out of them personally. Suddenly you lose the ability to take credit card payments.

You can turn to Bitcoin, but that's like trying to quench the hunger of a million people with one faucet.

Plus if you are an American then it doesn't matter where your servers are hosted. The men in black suits are going to be knocking on your door.


> The USG phone Visa and MasterCard CEOs

Don't forget Amazon. Never forget Amazon:

http://www.theguardian.com/media/2010/dec/01/wikileaks-websi...


Amazon's pretty corrupt. Apart from Wikileaks they've also decided to abuse their position and not compete by refusing to sell Apple TV or Chromecasts. Just get a 404. Their service reps deny everything and say it's just a temporary stock issue or that they "lack the contracts to sell such products". Scummy, and I'm very reluctantly cancelling prime over it, and I'm a customer of 13 years.


Can you explain the background to this story? This is because they only want to sell their solution?


http://www.theverge.com/2015/10/2/9439281/amazon-ban-apple-t...

Their supposed explanation (though several CSRs denied this) is that people want Amazon Prime Video (heh ok), and Chromecast/AppleTV doesn't "support it". Hence they are refusing to sell these devices.

In reality it seems far more like a pathetic death throe of the folks running FireTV or a response to the (most likely?) low uptake of the poor Prime Video offering.

The thing is, they are so entrenched (I spend a lot on Amazon, and Prime is a main reason) that they can really hurt products by refusing to carry them. Ordering via someone else is a pain. (I used jet.com to buy Chromecasts and they're so, so far from Amazon.)


I wonder if they'd ever go after corporate principals in ways calculated to annoy. I'd be fine with losing AWS access, but if they made it so I couldn't buy books/toilet paper from Amazon because I worked for/etc a company they hated, it'd be really annoying.


I think the US government would make it very difficult for this company to operate in the US and sell their services there (maybe blocking transactions?).

But with bitcoins...


Except... They can demand injection of backdoored code.


Here's a deep dive into ProtonMail's claims on Wired (disclosure: I wrote it) www.wired.com/2015/10/mr-robot-uses-protonmail-still-isnt-fully-secure/


> Our primary datacenter is located under 1000 meters of granite rock in a heavily guarded bunker which can survive a nuclear attack.

Would they seriously have done this on purpose?


Switzerland has a lot of bunkers; it's probably cheap to get one, and it's a cool but gimmicky feature.


Is this service open source? How can we know it does what it says it does?



Open source really is irrelevant. We don't know what code they deployed - it could be an open source project (we hope so), it might be an open source project with some minor changes - an NSA-approved backdoor for example, etc. We don't have any guarantee, so far as I'm aware, of what they're running, much less that it does what it claims to do.



> How can we know it does what it says it does?

How does being open source help with that?


Open source makes it easier for security experts to review the code and determine whether it meets its security claims. If it is closed source, then there is no guarantee that the code the experts review is the same code that is used by the service. It is also at the company's discretion whether to allow a security audit at all, and then which auditors to allow or exclude.

But if the whole of the client software is open source, then it also eliminates the need to trust binaries provided by the vendor (which can be MITM) because you can build the software yourself or use a version audited by an entity that you do trust.


That, plus the fact that if they have it in their privacy policy and on their website, then we can call them out on it if they insert a backdoor or remove the encryption. We can't say the same about WhatsApp and its end-to-end encryption, because they never even publicly admitted to using it. How can we ever hold WhatsApp responsible for not using end-to-end encryption then?

We can do that with Protonmail.



