Announcing SSL Labs Grading Changes for 2017 (qualys.com)
122 points by _vvdf on Nov 16, 2016 | 47 comments



All good changes.

I'd also like to see the Handshake Simulation improved by splitting ancient clients out into their own section. Call it Obsolete or Legacy. Populate it with all the clients that don't support ECDHE or TLS 1.2. With that change, people won't feel pressured to support IE6 on Windows XP or other such combos. In the real world, a big majority of TLS 1.0 traffic is either 1) unknown web crawlers that don't matter, or 2) zombies running on ancient hacked machines looking to replicate. It's mostly unwanted traffic.

You might say it does no harm to enable connections to IE6 on Windows XP, but I can think of two reasons why it does: First, enabling more ciphersuites in OpenSSL (or other libraries) increases your attack surface, and as for me, I'm way more interested in reducing my attack surface than in talking to a computer that was last updated in 2001. Second, these XP machines will hang around the Internet so long as they're useful. We cancel the driving privileges of drunk and senile people, and we should do the same at the protocol level for XP machines.

And anyway, it's incoherent to mark an IE6/XP connection failure in red (which means bad) while simultaneously saying that SSL3 is bad. Some cruft has built up over the years, and it needs to be periodically removed.
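
For what it's worth, the server-side pruning is simple. A minimal nginx sketch of the kind of trimming described above (protocol and cipher choices here are illustrative, not a vetted recommendation):

    # Drop SSL3/TLS 1.0/1.1 and non-ECDHE suites entirely; ancient
    # clients simply fail to connect instead of negotiating something weak.
    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers on;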


I like how informal systems like this drive feature development in reverse proxies. Nginx has great support for automated OCSP stapling now, but I wonder if that would have come to pass without some group saying it is important and giving it a (somewhat arbitrary) rating.
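
For reference, the nginx side of that is just a few directives (paths here are hypothetical):

    # Enable OCSP stapling; nginx fetches and caches responses itself.
    ssl_stapling on;
    ssl_stapling_verify on;
    # Chain used to verify the fetched OCSP response:
    ssl_trusted_certificate /etc/nginx/certs/chain.pem;
    resolver 8.8.8.8;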


It would be nice to have an SSL Labs beta site that lets you test your site(s) against the upcoming grading rules.


Indeed, a couple of people have asked for that already. I will try to make that happen. Edit: The development site is already used for the testing of new versions before they are deployed to production, but with the new grading perhaps we can make the testing period longer and tell people about it. I believe the grading is currently the same on both servers.


Sounds really good!! :)



And what would be really really rad would be open-sourcing the toolkit that makes these checks possible -- I'd like to plug something like that into periodic checks in nagios or something.


If your servers are all public, you can use the (also free) SSL Labs API: https://www.ssllabs.com/projects/ssllabs-apis/
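
As a rough sketch (not an official client; the endpoint and field names are as the API docs above describe them, so double-check before depending on this), a periodic grade check might look like:

    import json, time, urllib.request

    API = "https://api.ssllabs.com/api/v2/analyze"

    def get_grades(host):
        # fromCache=on returns recent cached results instead of forcing
        # a fresh (slow) assessment; all=done includes endpoint details.
        url = f"{API}?host={host}&fromCache=on&all=done"
        while True:
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
            if data["status"] == "READY":
                return [ep.get("grade") for ep in data["endpoints"]]
            if data["status"] == "ERROR":
                raise RuntimeError(data.get("statusMessage"))
            time.sleep(30)  # assessments take minutes; poll politely

    print(get_grades("example.com"))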


Sweet -- next best thing -- thanks!


Not quite as feature rich or opinionated, but SSLyze is decent and not difficult to modify for your use case:

https://github.com/nabla-c0d3/sslyze


I already run this once a day using this tool: https://github.com/ssllabs/ssllabs-scan
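
For anyone wanting to do the same, a hypothetical crontab entry (flag names are from memory of the tool's README; check `ssllabs-scan --help` for your version):

    # Daily grade check at 04:00; --usecache avoids forcing fresh scans.
    0 4 * * * /usr/local/bin/ssllabs-scan --grade --usecache example.com >> /var/log/ssllabs-grade.log 2>&1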


Thanks for the link! I'm not sure it does things differently, though. I'll have to check it again later.


I think different tests are being run based on the version numbers: the dev test is v1.25.2, whereas the production version is v1.24.4


Everyone should note that they intend the following change:

> HSTS preloading required for A+

So, if this vanity metric is important to you, you may want to start thinking about getting yourself added: https://hstspreload.appspot.com
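
For nginx users, the header the preload list looks for is something like this (a sketch; confirm the current minimum max-age on the submission site):

    # HSTS for a year, covering subdomains, with the preload token.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;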


The preload pre-conditions are what stop us from considering it:

- Must be on the root domain.

- Must redirect HTTP to HTTPS.

We refuse to serve content over HTTP, and thus don't redirect broken clients. Doesn't make sense to penalize people for not being on the list.


What's the feeling on them preferring 256-bit AES over 128-bit AES?

I was under the impression that there is no theoretical or practical justification for it.



How is this attack meaningful with the random AES keys generated by the TLS KDF?


Huh? That attack works with random keys. Are you confusing it with a related-key attack?


Look again. I think the meaningful attacks here are against public key algorithms (and, I suppose, maybe, MAC algorithms). See, for instance, the hypothesized 2^64 ECC attack.


The 2^64 attack there is for 128-bit ECC, which is not hypothetical---it's a standard Pollard rho attack. That is, the entire purpose of rho is to find a "key collision" aP + bQ = cP + dQ, from which we can immediately derive the secret key.

The point there is that batching speeds up finding the first AES key by a factor of m, m being the number of keys being attacked, but it doesn't really help find the first ECC key---although it does slightly reduce the cost of finding the second, third, etc. So for example, the effective multi-key security of AES-128, given 2^32 different keys (read: sessions), is 2^96.

See for example [1] for a formalization of multi-key security in the symmetric encryption setting.

[1] https://eprint.iacr.org/2015/101


I think this is the essential paragraph:

> What the attacker hopes to find inside the AES attack is a key collision. This means that a key guessed by the attack matches a key chosen by a user. Any particular guessed key has chance only 1/2^128 of matching any particular user key, but the attack ends up merging costs across a batch of 2^40 user keys, amplifying the effectiveness of each guess by a factor 2^40

If the attacker has collected 2^40 instances of some known plaintext, such as "GET / HTTP/1.1\r\n", encrypted under 2^40 unknown AES-128 keys (and it doesn't matter how those keys were generated), then the attacker can generate a key randomly, encrypt the known plaintext under that key, and see if it matches any of the target ciphertexts. The chance of guessing one of the 2^40 keys successfully this way is 1/2^(128-40) = 1/2^88. This means that attacking one of the keys requires far less than the 2^128 operations that one might expect would be required to attack AES-128. DJB argues that this makes AES-128 ill-matched with elliptic curves that provide a 128-bit security level, thus motivating a cipher with 256-bit keys instead, such as AES-256 or ChaCha20.
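
A toy sketch of that batch effect, shrunk from 128-bit to 20-bit keys so it terminates quickly (the "cipher" here is a stand-in hash, purely illustrative):

    import hashlib, os

    KEY_BITS = 20                 # stand-in for 128; keeps the demo fast
    NUM_TARGETS = 2 ** 8          # stand-in for the 2^40 user keys above
    PLAINTEXT = b"GET / HTTP/1.1\r\n"

    def toy_encrypt(key, pt):
        # Stand-in for a block cipher: hash of key || plaintext.
        return hashlib.sha256(key.to_bytes(4, "big") + pt).digest()[:8]

    # Each "user" picks a random key; the attacker sees only the
    # ciphertexts of one known plaintext under each key.
    keys = [int.from_bytes(os.urandom(3), "big") % 2**KEY_BITS
            for _ in range(NUM_TARGETS)]
    targets = {toy_encrypt(k, PLAINTEXT) for k in keys}

    # One guess is checked against ALL targets at once, so the expected
    # work to hit *some* key is 2^KEY_BITS / NUM_TARGETS, not 2^KEY_BITS.
    guess = 0
    while toy_encrypt(guess, PLAINTEXT) not in targets:
        guess += 1
    print(f"hit a key after {guess} guesses; "
          f"expected ~{2**KEY_BITS // NUM_TARGETS}")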

In practice, the first 32 bits of the GCM nonce used by TLS are derived during the handshake, so the attacker has to do 2^32 more computations (since there are 2^32 different ways a known plaintext could be encrypted with a given key). So perhaps AES-128 is actually fine with TLS. But the point still stands that there is a justification for preferring AES-256 over AES-128.


My understanding is that's the basic, very impractical illustration of the attack that the article opens with, so that the reader groks how batch attacks work. But most of the article is about the surprising ways in which public key algorithms take that attack into the realm of practicality.


DJB's point is summarised by the last point from the slides:

"Bottom line: 128-bit AES keys are not comparable in security to 255-bit elliptic-curve keys. Is 2²⁵⁵−19 big enough? Yes. Is 128-bit AES safe? Unclear."


DJB has espoused this point repeatedly - it's more than a motivator for the rest of the article.


I guess we can agree to disagree about whether this attack is an illustration of how treacherous it is to reason about key sizes or a practical attack on TLS.


From memory, there is a related-key attack on AES256 with complexity 2^99. So in that regard AES128 is more secure than AES256.


Someone who knows more correct me if I'm wrong, but my recollection is that the AES256/192 related-key attack presented by Alex Biryukov and Dmitry Khovratovich (assuming that's what you're referring to) was seriously, seriously theoretical. Like, for real. Performing it required somehow getting the owner of the targeted key (K1) to use a specific derivation algorithm to derive 3 other keys from K1, then getting the owner to perform encryption/decryption operations of the attacker's choosing using all 4 keys on up to 2^99.5 16-byte blocks, THEN the attacker needs ridiculous amounts of storage to finish; I can't remember exactly how big, but multiple exabytes big.

I know how sometimes weaknesses are found that seem impossible at first but can somehow be improved, but unless something fundamental has changed in the years since that attack did not appear to have any actual significance beyond academics (which is why no one in the industry worried about it). Beyond any efficiency requirements is the basic least-common denominator actor issue: if an attacker can get a key owner (person or machine) to perform that level of arbitrary action, there are much much easier ways to subvert the entire system.

As a symmetric block cipher, I suppose being in the habit of AES-256 usage is at least some gesture towards future attacks by scalable general-purpose quantum computers should they appear, since Grover's means it'd still be decent while 128 would drop to 64. Though given the general usage of asymmetric ciphers for the initial exchange, I don't know if it actually matters at all in the specific instance of the web. But whatever its benefits or downsides, I don't think the related-key attack makes it worse than AES-128.


I didn't mean to suggest that A256 was worse than A128 (though I see how it would read that way). I was just trying to point out that things are more complicated than comparing two numbers.

That being said, I don't know if I'd agree with anyone suggesting that A256 provides a meaningful increase in practical security over A128. As you said, the main difference is post-quantum and you can't just drop A256 in and say you are ready for post-quantum as you need a more holistic post-quantum solution anyway.


The related key attack is only really relevant if you're using the AES internals to construct, say, a hash function.

With random keys (generated via /dev/urandom), you don't need to worry about that.


That's exactly what I mean by the lack of theoretical justification for their grading.


I should have fixed that a long time ago and I apologise that I haven't yet. But I will! At the moment, sites with "too strong" security (e.g., 4096-bit RSA, 4096-bit DH, etc) are still rewarded for it, but they shouldn't be.


I think it's a bit silly to demand 256 bit security from your bulk cipher but only 128 bit security from your key exchange.

Quantum does affect this, but obviously the implications of quantum are way bigger than your block cipher choice.


Talking about "128 bits of security" is meaningless.

DJB pointed out, correctly, that generic attacks against ciphers with 128-bit keys are distressingly close to being practical. Generic attacks against 256-bit ECDHE are wildly impractical. If you want to make it take comparable effort to break the key exchange as it takes to break the cipher, the cipher key needs to be bigger.


> DJB pointed out, correctly, that generic attacks against ciphers with 128-bit keys are distressingly close to being practical

No, he didn't.

And 256-bit ECDHE has 128 bits of security.


Look at his paper "Understanding Brute Force".

> And 256-bit ECDHE has 128 bits of security.

Can you give that a meaningful definition? I suspect you mean that it takes ~2^128 operations to break a single target ECDHE exchange. If so, that is both true and completely irrelevant to anything I said.

The key point here is that it does not take anywhere near 2^128 operations before you can decrypt a single AES-128 block out of a large corpus of captured ciphertexts. For that type of attack, 128 bit ciphers offer vastly less security than ECDHE on a 256-bit field.


If you're talking about batch attacks: they're a completely theoretical kind of attack, and they don't apply to normal usages of AES. Sure, TLS makes it possible to theorize about them. Let me try:

Imagine that the NSA has collected 2^40 sessions using AES-128 on the internet. Let's imagine that they only want to steal the session IDs being sent (if they want to eavesdrop on more data, it gets worse). From Burp, I get that a simple GET request to Gmail is 2319 bytes, which is around 144 AES blocks. Let's say 100.

So in total, the NSA would have to store 2^40 * 128 * 100 bits, let's say around 2 petabytes.

Also, I forgot: let's imagine that the first block of each of these sessions' first message is always the same thing. A GET request to the same address. We got extra lucky!

Now they have to perform around 2^88 AES operations before they can hope to start finding keys. (That's more than all of the computing power of Bitcoin, btw.) And for each of these AES operations they also have to check their 2-petabyte corpus for a match (I hope you have a fast way to do that).

Let's imagine that the NSA wants to do fewer AES computations. They could have stored 2^50 sessions instead! And now they only have to perform 2^78 AES operations.

For this they would have to store around 2 exabytes. In 2010 the whole traffic of the internet was estimated to be around 21 exabytes per month. So if the NSA were to record sessions for a month, they would need to store 10% of the internet, hoping that those sessions are all GET requests to Gmail.
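
Spelling out the storage arithmetic above (a quick sanity check, using the same assumptions):

    # 100 AES blocks of 16 bytes kept per recorded session.
    BYTES_PER_SESSION = 100 * 16
    for sessions in (2**40, 2**50):
        total_bytes = sessions * BYTES_PER_SESSION
        print(f"2^{sessions.bit_length() - 1} sessions -> "
              f"{total_bytes / 1e15:,.1f} PB")
    # -> ~1.8 PB and ~1,800 PB (1.8 EB), matching the estimates above.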

Now we can theorize about the evolution of storage space, and computing power, and... quantum computers. In which case ECDHE will fall before AES-128 does.


> From Burp, I get that a simple GET request to Gmail is 2319 bytes, which is around 144 AES blocks. Let's say 100.

> So in total, the NSA would have to store 2^40 * 128 * 100 bits, let's say around 2 petabytes.

2 petabytes is no big deal for the NSA. Also, why would they store anywhere near this much per session? They can estimate which part of the stream matters and store that part if they want to.

> And for each of these AES operations they also have to check their 2-petabyte corpus for a match (I hope you have a fast way to do that)

Bloom filter plus a large hash table?

The point is that crypto ought to be configured to be secure, with a large margin of error, against an adversary who controls the entire world's computational capacity and is willing to use highly theoretical attacks because achieving this level of security is not particularly difficult. AES-256 satisfies this criterion, as does ECDHE until a quantum computer shows up. AES-128 does not satisfy this criterion even though the existing attacks are barely practical even for a nation-state adversary.

256-bit keys may not even be quite good enough against a batch attack done with a quantum computer because a full Grover's algorithm run against the entire batch runs in ~sqrt(2^bits / batch size). If you assume a batch size of 2^96 (to give a nice margin of error) and you want a work factor of 2^128 for the adversary, that gives 352 bits.
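
Spelling out that last calculation:

    sqrt(2^b / 2^96) = 2^128
      => 2^(b - 96)  = 2^256
      => b = 352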

ECDHE is, of course, completely dead once someone builds a quantum computer.


I think you're overestimating both the practicality of these attacks and the operations at the NSA. It just does not make sense to mount such an attack, especially when there are other vulnerabilities that are easier to find and allow much more efficient targeting. A debate about AES-128 vs AES-256 is useless when people are still using non-authenticated encryption, non-cryptographic PRNGs, etc...


Why does the grade not depend on presence of Certificate Transparency Signed Certificate Timestamp for the certificate, either embedded in the certificate or provided by TLS extension?

Google Chrome requires SCTs for all EV certificates and for all certificates issued by Symantec-owned CAs, and will require them for all certificates issued after October 2017.

Currently the Qualys tester checks for Certificate Transparency, but the grade does not depend on it.
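
For what it's worth, one rough way to check for embedded SCTs from the command line (OpenSSL 1.1.0+ prints a "CT Precertificate SCTs" extension when present):

    openssl s_client -connect example.com:443 </dev/null 2>/dev/null \
      | openssl x509 -noout -text \
      | grep -A 5 "CT Precertificate SCTs"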


It may just be that they're waiting on more consensus from browser/OS makers and will take a stronger look at this next year as Chrome's (and perhaps Firefox/Safari/IE's by that point) general SCT deadline approaches. That said, honest question: are certificate authentication questions really within the strict scope of their focus on SSL? The items they grade are pretty bread-and-butter technical compliance and best-practices stuff, within the direct control of a single party (the server owner) and independent of any others. It's pretty unambiguous, despite any difficult choices that may arise from old infrastructure/client issues. Further, in all this SSL Labs has grown to fill an important, previously unfilled niche in evaluation.

By their nature, though, CA evaluation and choices are a lot mushier and involve more politics and human factors (witness the recent debates surrounding certificate transparency and how it interacts with internal service usage, vis-a-vis redaction etc). And fundamentally, there are also a lot of powerful, interested actors deeply involved already. Authentication is just a separate domain from the crypto itself, and involves a different set of hairy issues.


That would grade how well your site contributes to the ecosystem, not so much how safe your particular site is.


With respect, I disagree that anyone should make it easier to get an A+ rating. That really should be reserved for the best of the best, where a site implements all current best practices and avoids all current weaknesses.

An A+ grade should be hard to get, and really mean something.


No, it shouldn't be hard to get and it already does mean something.

You seem to think of grades as in 'honorifics' or something. "The best of the crop are assigned an A+ to distinguish them from the rest".

But that's not what we have here. SSL Labs grades aren't "rare". They're supposed to be a direct result of your TLS deployment practices. Given that everyone should follow best practices, everyone should aim for (and get) an A+ in an ideal world.


No, sites would be better off spending that extra effort on non-cryptographic security like protecting their DNS registration, internal spearphishing, etc.


IMO, full cryptographic security should be the baseline.

Yes, you need the other things too, but until you've got full current cryptographic security, you're simply not done in that area.


What's an example of something they could add that would make A+ harder to get, and also add meaningful security?



