Not that it matters. They could silently replace it with a backdoored script and your browser would never tell you it happened.
And to preempt the ProtonMail rep who is probably going to respond to this comment, I know that you can run the web app on localhost. But that doesn't mean that users who don't are any more secure.
The idea is that a long data-uri containing hashes and a small loader function are bookmarked. The loader won't load the corresponding javascript unless the hashes match. The user only needs to verify the javascript once, then they can rely on their bookmark containing the hashes. If the server were to swap out the javascript, the bookmark would fail to load it.
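Roughly, the loader half of that bookmark might look like this (purely illustrative: the URL and expected hash are placeholders, and it assumes the server allows the script to be fetched with CORS):

    // Sketch of a hash-checking loader kept in a bookmarked data:/javascript: URI.
    // The expected hash is the one the user verified once; everything else is untrusted.
    (async () => {
      const EXPECTED_SHA256 = 'ba7816bf...'; // placeholder: hex digest the user verified
      const resp = await fetch('https://mail.example.com/app.js'); // hypothetical URL
      const bytes = await resp.arrayBuffer();
      const digest = await crypto.subtle.digest('SHA-256', bytes);
      const hex = [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, '0'))
        .join('');
      if (hex !== EXPECTED_SHA256) {
        alert('Script hash mismatch -- refusing to load');
        return;
      }
      // Only runs the script the user originally verified; a swapped-out app.js fails above.
      const script = document.createElement('script');
      script.textContent = new TextDecoder().decode(bytes);
      document.head.appendChild(script);
    })();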
Doesn't SRI require the hashes to be in the __loaded__ html? I believe parent is referring to a page which is the same, but has been compromised on the server side, meaning you can't trust the html, even if the server is who it says it is.
Right, but what I'm saying is that you don't need the loader. Just have a bookmarklet with html that contains script tags with SRI. The loader is just another step you need to trust.
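Something along these lines, assuming the bookmark is a small saved HTML page (the hash value and URL are placeholders), with the browser's own SRI check doing the work:

    <!-- Sketch of the bookmarked page: the integrity attribute makes the browser
         refuse to execute app.js if the server swaps it out. Requires the host to
         serve the script with CORS headers. -->
    <!DOCTYPE html>
    <html>
      <body>
        <script src="https://mail.example.com/app.js"
                integrity="sha384-...placeholder-hash..."
                crossorigin="anonymous"></script>
      </body>
    </html>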
This (rightfully) comes up every time some browser-based encryption tool is posted. It seems like the desire for such tools isn’t going to go away. Is anyone working on solutions for making distribution of JavaScript applications more secure?
There’s a range of assurances you could try to provide, e.g. signatures from the author (or even 3rd parties), prompting for updates, etc. It would likely require support from browsers.
At one point I investigated using service workers to intercept subsequent app updates to check signatures, but there was no way to prevent the service worker itself from being replaced (probably because it would be easy for a site to permanently “brick” itself in users' browsers).
> At one point I investigated using service workers to intercept subsequent app updates to check signatures, but there was no way to prevent the service worker itself from being replaced (probably because it would be easy for a site to permanently “brick” itself in users' browsers).
Cyph nailed this back in the AppCache days, later migrating to service workers (and ultimately got a patent for the solution), but the implementation relied on a protocol (HPKP) that has since been canned entirely by one browser.
This was "solved" by an online crypto tool (can't remember which) that basically did HPKP suicide every 30 minutes and had a service worker that loaded cached assets on failure. So the browser would pin to a key that was deleted within minutes, and then all subsequent requests would only go through the service worker until the pin expired.
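The service-worker half of that trick is roughly this (an illustrative sketch; the HPKP-suicide part is server-side header rotation, not shown):

    // sw.js -- serve cached assets when the network is unreachable (e.g. because the
    // pinned key has been destroyed). Illustrative; asset list and cache name are made up.
    const CACHE = 'pinned-app-v1';

    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(['/', '/app.js']))
      );
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        // Try the network first; if the expired pin blocks it, fall back to the copy
        // that was cached while the pin was still valid.
        fetch(event.request).catch(() => caches.match(event.request))
      );
    });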
As far as I'm aware that means their approach no longer works (across browsers), and, though it was a fantastic idea, it did require you to trust that they really were throwing their keys away (a trust model slightly weaker than TOFU).
We (Cyph) have been pretty disappointed in the Chrome team's decision to kill HPKP.
Paraphrasing, but IIRC the reasoning pretty much boiled down to "it's a pain to maintain and Expect-CT is kind of similar anyway" — which I think is a really weak justification for harming end user security and breaking established APIs that people depend on in production. Fingers crossed that Firefox keeps it alive!
That said, it doesn't entirely break WebSign in Chrome, just weakens it a bit further below strict TOFU. https://www.cyph.com/websign goes into detail, but WebSign has some client-side logic to validate its own hash against a signed whitelist. The major downsides to relying on this are:
1. It depends on a caching layer, not a security feature. This means that any guarantees are potentially out the window if a browser vendor decides to do something crazy for performance reasons or whatever.
2. It opens up an attack vector where it can be forcibly unpinned by filling up the user's disk and making the browser evict the cached WebSign instance.
All in all I think it's still basically fine, but shipping an optional browser extension for hardening WebSign is now a higher priority because of this.
There was a site posted on HN a while back that had an interesting take on a solution to this: they had a service worker that checked github.com for the latest version of the app code and of itself (along with standard subresource integrity, of course). That description doesn't do the system justice; from memory it seemed like a pretty sound system as long as your public repo remains uncompromised.
Unfortunately can't remember the name of the website nor exactly what it did...
A browser add-on that is manually installed (which I believe would stop any potential insecure/unintended automatic update) could check a digital signature embedded in a formatted comment inside the JS file. That is relatively easy to implement, but you will also want some sort of PKI for key distribution and revocation.
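The check itself is straightforward; something like this (a sketch, with a made-up comment format, assuming the add-on ships or fetches the pinned public key):

    // Verify a detached signature carried in a trailing comment of the JS file,
    // e.g.  //# signature=<base64>  (format is made up for illustration).
    async function verifyEmbeddedSignature(scriptText, publicKey /* CryptoKey from the add-on's PKI */) {
      const match = scriptText.match(/\/\/# signature=([A-Za-z0-9+\/=]+)\s*$/);
      if (!match) return false;
      const signature = Uint8Array.from(atob(match[1]), (c) => c.charCodeAt(0));
      const signedBody = scriptText.slice(0, match.index); // everything before the comment
      return crypto.subtle.verify(
        { name: 'ECDSA', hash: 'SHA-256' },   // assumes a P-256 key and raw (r||s) signature
        publicKey,
        signature,
        new TextEncoder().encode(signedBody)
      );
    }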
The idea came to my mind that people could create websites on domains of the form [some SHA hash].example.com and reference from the root HTML page a file with the name [the same SHA hash].js. This could trigger a special mode in the browser where it checks that the JavaScript file hashes to the given value and then refuses to load any other scripts.
The bootstrap JavaScript file could contain the code needed to download more files, and to download a digitally signed list of file hashes, which it could check against a hardcoded public key. Also the browser would have to remember a flag for that domain to require this same bootstrapping process every time, to stop downgrade attacks.
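The signed-hash-list part of that bootstrap could be sketched like this (illustrative endpoints and key; the per-file hash check then works like the loader ideas above):

    // Fetch a signed manifest of file hashes and verify it against a key that is
    // hardcoded in the [hash].js bootstrap file itself. Endpoints/key are placeholders.
    const HARDCODED_PUBLIC_KEY_JWK = { kty: 'EC', crv: 'P-256', x: '...', y: '...' }; // placeholder

    async function fetchVerifiedManifest() {
      const key = await crypto.subtle.importKey(
        'jwk', HARDCODED_PUBLIC_KEY_JWK,
        { name: 'ECDSA', namedCurve: 'P-256' }, false, ['verify']);

      const manifestBytes = await (await fetch('/manifest.json')).arrayBuffer();
      const signature = await (await fetch('/manifest.sig')).arrayBuffer();

      const ok = await crypto.subtle.verify(
        { name: 'ECDSA', hash: 'SHA-256' }, key, signature, manifestBytes);
      if (!ok) throw new Error('manifest signature invalid -- refusing to load');

      // e.g. { "app.js": "<sha-256 hex>", ... }; later downloads are checked against it.
      return JSON.parse(new TextDecoder().decode(manifestBytes));
    }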
- Hack the servers hosting OpenPGPjs
- Hack the browser to inject or replace content across domains, sandboxes, other security barriers
It's a subtle difference, but delivering applications dynamically via web browsing is much more precarious than natively installed applications.
Another way to think of it is if your entire Linux OS were actually just web apps with GUIs. Every time you run 'bash', it was actually downloaded from a remote server. And every time you used bash, and it used some plug-in which was hosted on some other site, that plug-in could be compromised, and could be trying to attack your OS, which if successful, would compromise your entire host.
That doesn't happen right now because all the apps sit on your host, aren't constantly re-acquired, aren't constantly subject to potential 3rd party attacks over a wide surface area. Though this does sort-of happen with programming language package managers like npm, pip and so on. But you can pin those versions and hashes if you're paranoid, which I don't think you can do with a browser.
You don't have to hack google to replace chrome. You just have to hack a trusted cert provider and DNS and/or BGP. Those two things are not unheard of.
They're using key pinning, so you would have to obtain one of Google's keys and use one of Google's authorized CAs. And second, I imagine the auto-update process uses a separate (non-TLS) certificate to verify the signed binaries. Either way you have to hack Google.
It would actually be much easier to just find a vuln in Chrome that can break out of sandbox and get root.
(If you're a high-enough value target and your adversary is the US Government. But in _that_ case you've probably already lost - you might just not know it yet. I wonder if Snowden uses Chrome or lets it autoupdate?)
That's a good question. I don't think the government cares enough about Snowden to do that. Snowden's damage has been done. It can't be reversed. IMO, they'd be much more interested in preemptively shutting down all future would-be Snowdens.
1. Most platforms these days require a signature with a key issued by the platform. In the browser you have HTTPS but that doesn’t help if the server is compromised.
2. It’s easier to target individuals (thus evading detection) if you’re serving the code to users directly (which I think is also the case with Chrome, but not Mac App Store, Linux package managers, etc)
3. Some platforms even do some amount of auditing before including software in their repositories.
It appears they were actually just hosting a web app and users were sending their decryption password to Hushmail. I don't see a reference to backdooring the Java client, though obviously since they delivered it, they could do that too. https://www.wired.com/2007/11/encrypted-e-mai/
Back when I was 15-16, I added a simple system() call in PGP to send the plaintext message out via sendmail before encrypting it, as an experiment on my local system, and was surprised that a one-line change would completely compromise the system. I remember telling Phil Zimmerman this at a conference; I think he said something like "well, that is a real asshole move". There's nothing to prevent all of these "secure password checker" tools from sending out your plaintext password in JSON requests as well. Furthermore, you can send fake "invalid password" messages and trick a user into typing all of his old passwords and permutations while he/she tries to "remember" the right password. One of my first root exploits for OpenBSD was taking over the console, killing login, and presenting a fake login prompt that piped its output to a real one.
No, because the provider would be in the position to change the SRI hashes. Sub Resource Integrity protects you against malicious CDNs and so on, but needs a non-compromised HTML page to provide correct hashes.
You could however probably provide a signed entry point via a webextension or similar, and an audit trail via a trusted distribution platform like addons.mozilla.org. Are there apps which use a mechanism like this?
> No, because the provider would be in the position to change the SRI hashes. Sub Resource Integrity protects you against malicious CDNs and so on, but needs a non-compromised HTML page to provide correct hashes.
Can you not trust the originating site to serve non-compromised HTML if using HSTS and a trusted local certificate store (eliminating MITM as an attack vector)?
Not if the originating site is the potential attacker, which was the initial scenario: ProtonMail sending you a page that leaks your decrypted mail to them.
There was a way to make a permanent site installation on most browsers using HTML5 appcache, which even the web host couldn't update, but that API is deprecated in favor of the service worker's Cache API.
It's not clear to me that the Cache API offers the same level of security guarantee.
This solves a different problem. Say actor A makes a website and uses CDNs provided by actor B. This protects users from actor B changing their scripts, but not from a malicious actor A. In this case I'm presuming ProtonMail is actor A.
I don't know if Protonmail do this or not, but presumably you could supply the javascript part of your web application as a browser plugin, which would render it immune to attacks on the server.
To be fair, what you said is only tangentially related to the posted article anyway. It is about a security audit of the OpenPGPjs library (what bastawhiz commented about), not how ProtonMail implements it (what you commented about).
> Please, that's not a verified claim[0], and you shouldn't trust any VPN service that isn't operated by you in the first place.
The co-founders of ProtonMail were caught providing multiple inaccurate statements about their business practices in that thread, and couldn't deny any of the facts stated by the co-founder of PIA[1].
Which part of the world should any service be provided from to be trustworthy? Let me rephrase, which services are known to have never cooperated with any agency, nor ever being hacked by them?
1. You are the user in question who is a co-founder of PIA.
2. You are a direct competitor to Proton*
3. "Messengers", especially ones in the position of being the founder of a competitor, have significant bias.
4. You have a fiduciary reason to want them to fail.
5. User protonmail has noted significant harassment regarding this issue.
I'm inclined to distrust both of you. I find that your arguments might have merit. But I also see you as a digital aggressor. I don't particularly like either.
It's neither religion, nor politics. It's tech. You don't need to believe in anything you cannot verify yourself. I have checked most of the statements provided by the co-founder of PIA, and found none of them to be false, even if he sometimes crossed the line of a civil discussion.
I'm sure the poster was addressing "Focus on the facts. Not the messenger."
As was listed, there are plenty of good reasons to learn about the messenger, just like how looking up my comment history will show my propensity for calling this argument out. You know my ulterior motives and where I'm coming from.
Those types of comments (yours and the general back-and-forth with ProtonMail) make the VPN industry look like it's full of sharks. It's hurting all of you. It makes you look unprofessional. Before seeing this I had a favourable impression of PIA, but not anymore.
EDIT: I'm sure the competition is intense and I'm not sure I would be able to rise above it myself but I think you need to be aware of what it looks like.
I agree that people should read the linked comments.
I did not find the evidence to be nearly as clear-cut or damning as OP seems to think after reading through it.
Most claims are also put forward by a co-founder of Private Internet Access. A direct competitor.
ProtonVPN is no competitor to Private Internet Access in terms of the size and the number of users. If you were a co-founder of PIA, would you risk your reputation by publicly providing false accusations against a company 1/100 of the size of yours?
That's the problem. I haven't found any of his statements that were even half-untrue. I even discovered a conference held in Lithuania in 2017 where one of the speakers was presented as the head of B2B sales at Tesonet, working on Oxylabs[1]. It is very unlikely that ProtonMail was not aware of who it was partnering with on a free VPN service.
Furthermore, after the co-founder of PIA pointed out that the CEO of Tesonet is the director of ProtonVPN UAB, the company was renamed multiple times in two months[1], with its director now hidden from public view.
Finally, the IP blocks, which belonged to Tesonet and were used by ProtonVPN just a few months ago – despite the co-founders of ProtonMail publicly denying any technical partnership between the two[1] – now belong to ProtonVPN[2].
> These stories were first fabricated by Private Internet Access, a competitor who has been feeling pressure from ProtonVPN lately.
This is a lie. Private Internet Access is probably the largest paid VPN provider in the world, and ProtonVPN (by Tesonet?) belongs to a short list of free VPN providers, such as Onavo VPN by Facebook[1] and Hola VPN by Luminati[2], most of which are subsidized by data mining companies. These are two completely different markets.
> We used the same legal address and nominee directors as our local partners because we still did not have our own office yet. For contractual reasons, these moves took some time. For example, ProtonLabs Skopje, our newest entity, only moved in November 2017.
ProtonVPN UAB was founded in July 2016 and was still operated from Tesonet HQ in June 2018, when this fact was made public by the co-founder of PIA. The current ProtonVPN legal address in Vilnius, Lithuania can be used by any company that agrees to pay for one workplace without any long-term obligations[3]. This means that ProtonVPN might as well still be operating from Tesonet HQ.
> ProtonVPN/ProtonMail does not, and has never used any IPs or servers from Tesonet (this can be publicly verified)
This is a lie. ProtonMail admitted to using Tesonet IPs when presented with Whois results in June 2018[4]. Those IP blocks were later assigned to ProtonVPN.
> Proton does not share any employees (or company directors) with Tesonet. This is also a verifiable fact.
This is a lie. It is no longer possible to verify who the director of ProtonVPN is, because the company made the public record unavailable after changing its name multiple times in the last two months[5]. The last public record listed the CEO of Tesonet as the director of ProtonVPN[6], which was still true in early June 2018, when the co-founder of PIA made the fact public.
> There is little actual evidence that Tesonet does data-mining (in any case we have never used infrastructure from them).
This is a lie. There is plenty of actual evidence that Tesonet is running a data mining company called Oxylabs[7][8], which sells access to "10+ Million Mobile IPs in Every Country and Every City in the World".
> We used Tesonet as a local partner before we had an official Lithuanian subsidiary, and rented office space from them. We don't share employees, infrastructure, etc. We have had a similar temporary arrangements with local companies when we opened offices in other jurisdictions where we didn't have an official presence yet.
This type of arrangement is common in the startup world.
The section from the "About" page of Tesonet (26 Apr 2018)[1], which got removed soon after that HN thread:
"For the latest project, Tesonet is working together with an international brand from Switzerland to create a security product that helps users protect their network traffic. As part of this technical partnership, we are collaborating on datacenter and network infrastructure that can easily supply 10 Gbps worth of bandwidth to users around the world. The product is developed using the latest authentication encryption methods and the best practices in the security world."
I strongly resent the implied concept that being "East European" by itself could be used as a valid argument to doubt the quality or integrity of a service.
Tesonet denies to its customers[1] that it is running both a VPN service (NordVPN) and a data mining service (Oxylabs) from its HQ in Vilnius, Lithuania, even though both of these facts can be easily verified by anyone with an internet connection[2][3].
Cloud<everything> should not really be trusted for anything that requires privacy (unless you encrypt the data locally first using GPG or something similar).
Eastern Europe, from which ProtonVPN is operated as a legal entity without the knowledge of its users, is an entirely different jurisdiction from Switzerland in terms of privacy and data retention laws.
>Eastern Europe, from which ProtonVPN is operated as a legal entity without the knowledge of its users, is an entirely different jurisdiction from Switzerland in terms of privacy and data retention laws.
There is nothing wrong with data mining itself. It's a completely neutral technology. You are just throwing shade with link flooding (those who read the links find out that they don't credibly confirm what you say).
Tesonet provides all kinds of services, like hosting, software development and cybersecurity, for its customers.
> There is nothing wrong with data mining itself. It's completely neutral technology.
Tesonet's Oxylabs offers "10+ Million Mobile IPs in Every Country and Every City in the World"[1], which might explain why ProtonVPN, whose Android app is signed by Tesonet[2], is a free service. This is how Luminati, Tesonet's largest competitor in Residential Proxies, operates: it provides a free VPN service, Hola VPN, and then connects its users into a botnet[3], which is used for data mining operations.
It turns out that Luminati Networks Ltd sued UAB Tesonet over patent infringement in "Large-scale web data extraction products and services with residential proxy network (oxylabs.io)"[1] in July 2018.
> The only limitations come from the platform itself (JavaScript/web), which do not allow for side channel resistance or reliable constant time operations. Overall however this is an exceptional library for JavaScript cryptography.
How would this compare to something like WebCrypto, which I assume would be implemented in a way that would allow for side channel resistance etc.? It does seem surprising that we don't have something like a browser API version of libsodium in widespread use already.
You are confusing crypto primitives with a high-level spec like OpenPGP. OpenPGPjs used WebCrypto and node crypto libraries when available for primitives. You still need a library for the OpenPGP stuff.
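For context, the primitive layer WebCrypto gives you looks like this (illustrative; the OpenPGP packet format, armoring, key handling, etc. are what a library such as OpenPGPjs has to add on top):

    // A bare WebCrypto primitive: AES-256-GCM encryption of a string.
    async function encryptBlock(plaintext) {
      const key = await crypto.subtle.generateKey(
        { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt(
        { name: 'AES-GCM', iv },
        key,
        new TextEncoder().encode(plaintext)
      );
      return { key, iv, ciphertext }; // none of this is OpenPGP yet -- no packets, no armor
    }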
I think you're right to pick up on this "side channel resistance or reliable constant time operations" wording, actually. If the OpenPGPjs library is using WebCrypto for the primitives, then what are the non-constant time operations and JavaScript-specific side channels that have security implications? Such a claim should really be accompanied by a specific threat model.
Is the supposed threat actor a MitM that can use the timing of the packets your browser sends to work out when you stopped typing your email and when the email was sent to the server, allowing them to calculate the time taken by the encryption operation and thus infer something about the plaintext of the email?
Alternatively, is the threat actor someone running JavaScript code in another tab of the same browser, who can infer how much CPU the browser is using at any given time, with enough accuracy to reveal bits of the private key?
Perhaps they are imagining an attacker who could do both, and it would be very interesting to see a practical attack along these lines, but I still think that a decent WebCrypto implementation should make it close to impossible for an attacker to extract any useful information unless the user is sending billions of emails through the ProtonMail web client.
I also think exploiting it would be extremely difficult. IIRC, it was NIST ECC curves which are hard to make constant time and do not have WebCrypto primitives. We are still going to see what we can do to address this.
Yupp. My account at a particular website was terminated. They pointed to their TOS, where "anonymous" addresses are not allowed. I wasn't even given the chance to keep the account and change the email to an "acceptable" one.
Why do they say it's anonymous? I use Proton; it's encrypted but not anonymous (unless you used the hash option someone else posted; I didn't know that was a feature).
Not yet, at around a year's worth of usage. I use a custom .space address, which I thought would bring its own issues but hasn't. I have SPF, DKIM, and DMARC set up with it as well.
> You are confusing crypto primitives with a high-level spec like OpenPGP. OpenPGPjs used WebCrypto and node crypto libraries when available for primitives. You still need a library for the OpenPGP stuff.
Does OpenPGPjs use WebCrypto to create keys which are not extractable? That's the big win here if you can make it impossible for a compromised client to leak keys which were used before/after the compromise.
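For reference, this is what requesting a non-extractable key looks like at the WebCrypto level (a sketch; whether OpenPGPjs actually asks for this is exactly the open question):

    // extractable = false: exportKey() on the private key will reject, so page
    // JavaScript (even if later compromised) cannot read the raw key material.
    const keyPair = await crypto.subtle.generateKey(
      {
        name: 'RSA-OAEP',
        modulusLength: 2048,
        publicExponent: new Uint8Array([1, 0, 1]),
        hash: 'SHA-256',
      },
      false,                  // non-extractable
      ['encrypt', 'decrypt']
    );
    // The resulting CryptoKey can be persisted in IndexedDB and reused across sessions.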
This also means you can't use another computer, and that your key is lost if you clear browser data. Unless you do backups, but I doubt that is standard procedure for ProtonMail users.
That's true assuming that the browser doesn't offer any way to manage that using e.g. Chrome/Firefox Sync.
What PGP really needs is a modern security model so you'd have many device keys registered to an identity rather than requiring the risk of spreading copies around. I think I have IIRC 8 GPG subkeys currently (6 of them being Yubikeys) and every aspect of that toolchain is unacceptable in the modern era.
I've got the same setup with subkeys per Yubikey (though I had to rotate due to Infineon).
What do you mean by "device keys"? Something like forward secrecy keys for initial session setup, as used by e.g. Signal? This could be done with some effort... actually the developers of Sequoia, a Rust OpenPGP library, are already working on making this use case easier.
Another set of patches circulating on the ML adds support for TPM-bound keys, which are non-extractable.
Thanks to the great folks at PARAGONIE, our open source platform (i.e. you can actually tell the code is always the same, and you can host it yourself) also just passed an independent security audit:
How am I stealing the spotlight from them? They are on the front page, whereas mine is just a comment that's relevant to it. They still have the spotlight, the link is still there and my comment only adds to the number of comments on the story.
If anything, the comments saying that they shouldn't be trusted, etc. harm them more than my comment.
Actually my comment should be: I don't think being audited by a third party firm is newsworthy, this is us being audited and we didn't post it.
PR is hard. Replies like the one you got are a hint that you need to learn a lot more than just "This right here that I thought was not really newsworthy is totally newsworthy."
;)
(Chin up and all that. This is not intended to be in any way hostile.)