Is HTTP 2.0 finalized and approved? If not, could we push for it to support PFS by default with ECDHE, which seems to add only 15 percent overhead [1]? That seems like a small price to pay for forward secrecy: with every session encrypted under a fresh ephemeral key, compromising the server's long-term key no longer exposes past traffic.
HTTP 2.0/SPDY already makes TLS mandatory, no? So why not make PFS mandatory, too? I'd rather we do it now than wait for HTTP 3.0, and it might force a lot more companies to adopt it by default as they move to HTTP 2.0 (companies such as Microsoft [2]).
I wrote this post before the latest of Snowden's revelations and Bruce Schneier's comments on them[1].
"Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can."
I'd rather have some smart people take a thorough look at the curves we use before we make ECDHE mandatory. If we need to choose between computationally heavy DHE and possibly backdoored ECDHE, I'm afraid many companies will still pick ECDHE.
Be careful with traditional Diffie-Hellman, which in practice also has problems: if your server software doesn't let you specify your own parameters, it's probably using 1024 bit parameters. All versions of Apache are guilty of this[1], as are (at least the versions I checked) Dovecot and Postfix. I would not trust 1024 bit DH in the face of an adversary like the NSA. It would be interesting to check how XMPP server software handles DH parameters.
Yes, Prosody does allow it, but you're right that the documentation is vague - it's because it's a bit awkward at the moment.
We're planning to release 0.9.1 on Monday to address this issue (or you can grab one of our nightlies at https://prosody.im/nightly/0.9/ (build 160+) ).
Should have docs up in the next couple of days, but for now it should suffice to say that you can simply add a 'dhparam' field to your existing 'ssl' option in your config file that is a path to a DH parameters file created with something like:
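The command itself got cut off above, but generating a DH parameters file with OpenSSL would look something like the following. The 2048-bit size and the file paths here are my own illustrative choices, not necessarily what the Prosody developer had in mind:

```shell
# Generate a fresh 2048-bit DH parameters file
# (this can take a minute or more on slower machines).
openssl dhparam -out /etc/prosody/certs/dh-2048.pem 2048

# Then point the 'dhparam' field of the existing 'ssl' option in
# prosody.cfg.lua at it (key/certificate paths are placeholders):
#   ssl = { key = "/path/to/key.pem", certificate = "/path/to/cert.pem",
#           dhparam = "/etc/prosody/certs/dh-2048.pem" }
```

2048 bits is the commonly recommended minimum given the doubts about 1024-bit DH expressed above.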
> Note that ECDH and DH are different authentication mechanisms: these require special certificates and offer no forward-secrecy.
I contacted the author about this, but I don't think this is correct.
The OpenSSL ciphers documentation[1] says "DH" is simply all suites using Diffie–Hellman, not necessarily authenticated DH, which is "aDH". I actually couldn't check if it does include aDH since `openssl ciphers -v 'aDH'` tells me I don't have any aDH ciphers!
Unfortunately there's no documentation explaining the difference between EDH (ephemeral DH?) and DHE. Are they synonyms? I'm assuming DHE is ephemeral, since using a cipher string with DHE will get you Perfect Forward Secrecy "points" on an SSL Labs test[2]. (Run the test! Secure your web servers! You can get at least a B rating easily enough.)
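One way to probe the EDH/DHE question on your own machine (assuming a reasonably recent OpenSSL; the exact suite list varies by version and build) is to expand both shorthands and compare them:

```shell
# Expand each shorthand into its suite list and compare.
# If EDH is just an older alias for DHE, the two lists should match.
openssl ciphers 'EDH' | tr ':' '\n' | sort > /tmp/edh-suites.txt
openssl ciphers 'DHE' | tr ':' '\n' | sort > /tmp/dhe-suites.txt
diff /tmp/edh-suites.txt /tmp/dhe-suites.txt && echo "EDH and DHE select the same suites"

# The authenticated-DH selector mentioned above; this may simply
# error out if no aDH suites are compiled into your OpenSSL.
openssl ciphers -v 'aDH'
```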
The strings and documentation don't make it easy to distinguish, but the docs do explicitly distinguish between aDH and ADH. I think "DH" includes any and every cipher suite that might use DH, but there's a good chance that a suite like TLS_DH_DSS_WITH_AES_256_GCM_SHA384, which uses "DH", actually falls under the "aDH" umbrella (the certificate carries DH keys). (Credit to the original author of the article for spotting that one via RFC 5246.)
I clarified that I mean to refer to those with OpenSSL names that start with "ECDH-" and "DH-", which is not the same as what the shorthand "DH" gives you. As far as I can tell, EDH is an older synonym for DHE.
I like the SSL Labs test and I hope this sort of testing becomes more common for TLS usage beyond HTTPS. However, I think a B is still cause for concern. You can only obtain a B if you have a) SSLv2 enabled (which is broken), b) cipher suites with <128-bit symmetric keys enabled, c) an RSA private key shorter than 1024 bits, or d) no BEAST mitigation. I think only d) is a valid excuse under some circumstances.
On the other hand, a 1024 bit RSA key with SSLv3 and only RC4 will give you an A, but I would not call this very secure anymore.
I indeed managed to get a B without trying to mitigate BEAST (d). The trouble seems to be that while RC4 isn't vulnerable to BEAST, it's undesirable because of its other weaknesses.
Since BEAST was fixed in TLS 1.1, I think you can require 1.1+ and get an A, but the test suggests you might break things for a significant chunk of users.
Even if you get a B because you're vulnerable to BEAST, if you prioritise 1.1+ ciphers, you'll still fail to get an A but you'll mitigate against it. It looks like Qualys themselves have a post on this, actually: https://community.qualys.com/blogs/securitylabs/2011/10/17/m...
In my opinion, it's irresponsible of Qualys to put so much emphasis on BEAST. It's been mitigated in every major browser and most other clients. RC4 is not a better alternative (as its shortcomings can't be overcome unilaterally in client implementations), and it's dangerous to give it so much weight in the scoring.
First of all, I think it's perfectly reasonable to not get an A if your security is not perfect. Thus, BEAST does not carry "so much weight". For a lot of weight, look at SSL 2 or insecure renegotiation -- if you have those enabled you get an F.
I know that everyone assumes BEAST has been addressed, but that's not actually the case. Safari remains vulnerable. BEAST is still exploitable (against Safari), because the Java same-origin policy bypass used in the original attack remains, in a slightly different form; the fix appears to have been incomplete.
But you won't hear about this, because people have moved on to the next exciting thing and no one bothers to check.
There is one positive change and that is that the Java plug-in does not appear to have access to httpOnly cookies, making BEAST less interesting. Still, other cookies, HTTP Basic Authentication, and URL-based sessions remain vulnerable.
Some might say that supporting TLS 1.1+ deals with the problem. It might, but it might not, because most browsers are susceptible to protocol downgrade attacks. An active MITM might be able to force TLS 1.0 and then exploit BEAST. I have so far determined that all browsers downgrade connections in case of failure. I am soon to test if they have some sort of rollback defense.
I am well aware that RC4 is not a good cipher, and it pains me to continue to indirectly encourage people to use it. I am sorry I cannot move at a faster pace, but researching these things takes considerable effort, and thus time.
It's actually pretty likely that next week, after I finish with some further tests, SSL Labs will stop penalizing for the BEAST vulnerability.
First off, let me say thanks for an awesome tool. Just because I disagree with the preference of RC4 over AES-CBC doesn't mean I don't think SSL Labs is incredibly useful (I use it all the time to test TLS configs).
In my opinion, if being vulnerable to BEAST means you aren't 'perfect' and capped at a B, then allowing RC4 should have the same effect, as there are very real attacks against TLS's implementation of RC4 as well.
I look forward to the day when we can shut off TLS 1.0 altogether... We disabled SSLv3 a month ago (once it dropped below 1% of our traffic). Alas, TLS 1.0 is still almost 2/3rds of our traffic. The good news is, with the release of Chrome 29 Stable, we've seen a _huge_ spike in TLS 1.2 traffic.
I'm pretty sure it was SecurityMetrics that I recall failing a site for PCI compliance last year due to a lack of server-side BEAST mitigation (using cipher suite order to prefer RC4).
I don't think it's only Qualys. I think it's most of the industry.
Yes, I agree. To get an A these days, a site should at the very least support TLS 1.2, and use a key stronger than 1024 bits. That's where we're heading.
Handy tip: if you want to run the ssllabs.com server test against a non-HTTP SSL service that runs on a different port on your host, you can temporarily add this rule to iptables so that any connections from SSL Labs on port 443 get redirected to it:
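The rule itself didn't survive in the comment; a typical NAT redirect for this would look something like the following, taking the XMPP legacy-SSL port 5223 mentioned further down as the example target:

```shell
# Redirect incoming TCP port 443 to the local legacy-SSL service on
# port 5223 so the SSL Labs scanner can reach it. Requires root.
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 5223

# Remove the rule again once the scan is finished:
iptables -t nat -D PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 5223
```

As written this redirects all port-443 traffic, not just SSL Labs'; you could narrow it with a `-s` source match if you know the scanner's address range.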
In this particular case, I turned on legacy SSL in my XMPP server's (Prosody) configuration so that an SSL-on-connect service existed on port 5223.
Of course, in the results that SSL Labs displays, you'll get some strange information since it's expecting HTTP, but the majority of the information is useful.
SIMPLE [1], mostly. There is also IMPP [2], an XMPP competitor which has a bunch of RFCs behind it, but never gained any implementations, as far as I know.
Alternatives such as MSN's, AOL's and Skype's IM protocols are shoddily defined, cruft-ridden, ad-hoc protocols that people have mostly been able to reverse engineer, but they are not useful except for interoperability with the existing networks.
SIMPLE has not been deployed to any great extent, that I know of. The benefit of XMPP is that it's not just well defined (a bunch of RFCs with a working group behind them) but also a proven protocol with years of production use under its belt. The problem with the IM protocol landscape is entirely political: everyone wants to own the network, nobody wants to be compatible.
"The best cipher offered is 128-bit AES. So far, this has been the only client that doesn’t support 256-bit encryption that I’ve seen."
"Surprisingly AES128 takes priority over AES256 here."
"Surprisingly AES128 is first, followed by 3DES and only then AES256."
"128 bit AES/Camellia is preferred over those with 256 bit, but at least RC4 is at the very bottom here."
Etc...
In my opinion, preferring AES128 over AES256 is a feature. AES128 is more than sufficient in terms of cryptographic strength, it's faster, and it isn't susceptible to the key schedule weakness that the larger key sizes have.
The key schedule weakness in AES256 that I know of only applies to related-key attacks, which don't arise in TLS as far as I know. If there's anything else I'm missing, I'd like to know.
I don't have a specific concern with AES128 (compared to the concern I have about the usage of (EXP-)DES and RC4). It merely surprised me that some clients prefer AES128 over AES256 and I think it's important for server operators to know that there are clients that will stop working when disabling all <256 bit ciphers.
Personally, I don't find speed a concern when using IM; it's not full-disk encryption or a large download where speed is noticeable.
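For what it's worth, the speed difference being discussed here is easy to measure locally; the numbers depend entirely on your CPU and OpenSSL build:

```shell
# Benchmark AES-128 vs AES-256 in CBC mode. AES-128 does 10 rounds
# per block versus 14 for AES-256, so expect it to be somewhat faster
# in software. Each run takes ~20s with defaults; on OpenSSL 1.1.0+
# you can add "-seconds 1" for a quicker pass.
openssl speed -evp aes-128-cbc
openssl speed -evp aes-256-cbc
```

On CPUs with AES-NI, both are fast enough that, as noted above, the bulk cipher is unlikely to be the bottleneck for IM traffic either way.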
All Java installations support only up to 128-bit AES by default. Oracle calls that "strong encryption". You can upgrade to "unlimited strength" by changing some policy files, after which 256-bit AES will be possible.
I don't know if that's what you're seeing, but there's definitely a class of clients that do not support 256-bit AES.
I'll admit that my concerns are mostly theoretical (and you're absolutely right that _only_ offering AES-128 is a bad idea).
The related key attacks don't apply in TLS (but still if the majority of the known attacks only apply to the higher key sizes, that sets my tin-foil a'tingling), and as far as speed goes, the bulk encryption cipher is never going to be the bottleneck, no matter the key size.
It's also a matter of the right tool for the job. Why get out the 2 lb sledgehammer when the claw hammer will pound that nail in just fine? They'll both get the job done, and to a lay person they might both look like the 'right' tool. The claw hammer is obviously the more 'elegant' tool, though.
What about Moxie's TextSecure? I think it's going to be integrated into CyanogenMod ROMs soon, so I'd like to see an evaluation of that, too. An evaluation of Surespot would also be nice, even though it doesn't use PFS.
EDIT: Links
[1] - http://vincent.bernat.im/en/blog/2011-ssl-perfect-forward-se...
[2] - http://news.netcraft.com/archives/2013/06/25/ssl-intercepted...