Downgrade Attack on TLS 1.3 and Vulnerabilities in Major TLS Libraries (nccgroup.trust)
227 points by pentestercrab on Feb 8, 2019 | 67 comments



Just commenting since I saw and recognized the name next to BearSSL... Thomas Pornin is an absolute treasure, and anyone interested in entry-level crypto and beyond should read through his StackOverflow responses. Many of his answers distill complex topics into more digestible pieces.


Link to his StackOverflow answers sorted by votes: https://stackoverflow.com/users/254279/thomas-pornin?tab=ans...


Thomas Pornin actually posted a lot of answers on multiple sites on the StackExchange network, under two different accounts:

https://stackexchange.com/users/92852/thomas-pornin

https://stackexchange.com/users/969353/tom-leek


Thanks for this; StackExchange was actually what I was thinking of when I wrote StackOverflow. Luckily, he posts there a lot as well. I think my favorite answer is the one below: it's the first simplified explanation of how TLS works I can recall that still retains real technical detail. A gift for sure...

https://security.stackexchange.com/questions/20803/how-does-...


It's worth noting that on security.stackexchange.com, The Bear and his alter ego are No. 1 and No. 2 by reputation, respectively.


Given the number of TLS vulnerabilities, I don't really understand why we're not just replacing it completely. From what I've read, a lot of the issues stem from the complexity of the standard itself, and we could do much better now that so much more is known about good crypto practice. Google has already pushed HTTP 2 and now 3, thanks to controlling very sizeable chunks of both the browsers and the sites. Why not also push a much better designed crypto standard in one of those efforts?


Sure, this is the standard anti-agility argument: "Oh, we understand yesterday's mistakes now, so just throw away everything built before today and start fresh, then there will be no mistakes." It's like CADT but with cryptography.

If you have the luxury of greenfield development, you are welcome to try this. There's a pretty good chance you'll screw up badly, but regardless, when more opportunities for mistakes are discovered tomorrow, you'll be vulnerable and won't have a greenfield any more. You get to learn basically the same lessons about agility everybody else did, the same way everybody else learned them. Brilliant.

The Web is not a greenfield development. Google may have "pushed HTTP 2" but you can still connect to their sites using HTTP 1.1 because _of course you can_. So that first step, where you throw everything that already exists away, is immediately the end of your whole strategy for the Web or more or less any public Internet service.

You might be thinking: "OK, old crap stuff would be affected, but my new shiny things would be fine." And you're almost right. But you have to really operate a scorched-earth policy; the new shiny things _must not_ interoperate with the old crap at all. And that's a deal breaker in practice on the Internet. If you say "Well, if we can't do shiny I guess we'll do the old thing" then you lose immediately. That's the thrust of their TLS 1.3 example: both client and server want to talk TLS 1.3, which isn't vulnerable, but the attacker abuses the fact that they're willing to talk TLS 1.2 instead.

If you don't want to do RSA kex in your own system where you control all servers and clients, don't do RSA kex. I commend this; it's good sense. You can use the exact OpenSSL version described as vulnerable in this article, switch off RSA key exchange entirely at both ends, and the vulnerability vanishes. But alas, even "almighty" Google does not control all servers and clients on the Web.
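
For a concrete picture, here's roughly what "switch off RSA key exchange at both ends" looks like on an OpenSSL-backed stack, sketched with Python's ssl module (the certificate paths are made up; OpenSSL's C API and most httpds expose the same cipher-string knob):

    import ssl

    # A server context that refuses the RSA key exchange outright.
    # Certificates can still be RSA; only kRSA (RSA encrypting the
    # premaster secret, which the Bleichenbacher oracle needs) goes.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")  # hypothetical paths
    ctx.set_ciphers("DEFAULT:!kRSA")  # OpenSSL cipher-string syntax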


> It's like CADT but with cryptography.

TLS is 25 years old now. CADT is not a fair criticism when more than the lifetime of a whole teenager has elapsed. At some point building a second system is worth it.

> You get to learn basically the same lessons about agility everybody else did, the same way everybody else learned them. Brilliant.

We would also get to throw away a bunch of stuff we already know was a very bad idea. There has to be a limit somewhere to that tradeoff. At some point starting fresh allows you to throw away enough crap that you come out ahead.

> If you say "Well, if we can't do shiny I guess we'll do the old thing" then you lose immediately. That's the thrust of their TLS 1.3 example: both client and server want to talk TLS 1.3, which isn't vulnerable, but the attacker abuses the fact that they're willing to talk TLS 1.2 instead.

There is no difference there between changing from 1.2 to 1.3 and from 1.2 to ShinyNewStuff. My point is that, from what I've read, we've learned enough that we could design something better than TLS if we started fresh, so we could do TLS1.2->ShinyNewStuff instead of TLS1.2->TLS1.3 and get some advantages. The downgrade attacks are present in both cases until you eventually discontinue 1.2.

> If you don't want to do RSA kex in your own system where you control all servers and clients, don't do RSA kex.

One of the things we've learned over the last 25 years is that having optionally insecure ways to use security standards is a very bad idea. So the fact that it's merely feasible to configure TLS well, rather than TLS being secure by default, is part of the problem, not the solution. That's the kind of thing we could potentially fix.


> TLS is 25 years old now.

The on-the-wire protocol bears some faint resemblance, but the actual technology in SSL 2.0 and in TLS 1.3 is utterly different.

The original Bell phone system, where you had to talk to an operator and say "Give me 4235 please", and a modern iPhone are also utterly different - but every step along the way was achieved by backwards compatibility and that meant some compromises. So the iPhone still has "phone numbers" even though you probably rarely use them for anything.

That's all that was going on in TLS. The "start fresh" you were originally asking for involves _throwing away_ that compatibility to get "some advantages". I explained why TLS doesn't do that. The feeling that surely starting over would help is _exactly_ CADT even if you're not a teenager.

You absolutely can have those "some advantages" if you don't want backwards compatibility. But it's not obvious why you'd rewrite TLS rather than just use TLS in this case and refuse to downgrade below TLS 1.3. What happens when TLS 1.4 has even more "advantages"?


> but every step along the way was achieved by backwards compatibility and that meant some compromises

This is not a good analogy. The phone system has thrown away plenty of standards as well. Throwing away TLS for something else doesn't have any compatibility problems on the web. There is no difference in compatibility between doing 1.2->ShinyNew and doing 1.2->1.3.

Maybe we can do enough within just TLS versions to fix the flaws. Maybe doing 1.4 with a much stricter set of conditions is possible, and so we should keep the base. But for some reason that keeps not happening and insecure options like these still exist. But the compatibility argument doesn't hold: the end user would not notice anything.


> Throwing away TLS for something else doesn't have any compatibility problems on the web

No. Even just small changes to TLS resulted in massive compatibility mishaps. There was about a year's delay in the TLS 1.3 process while they worked around things like this.

> But for some reason that keeps not happening and insecure options like these still exist.

Unfortunately, you don't even understand enough about this topic to have noticed that the article specifically mentions a downgrade because the "insecure options like these" do not exist in TLS 1.3.

Insisting that problems you don't understand will go away if we just rewrite everything again from scratch is even closer to CADT than the anti-agility enthusiasts.


> Even just small changes to TLS resulted in massive compatibility mishaps. There was about a year's delay in the TLS 1.3 process while they worked around things like this.

Those are part of the problem we should be solving. Delaying TLS 1.3 to enable middleboxes is precisely the kind of thing we should be throwing away from any standard. That TLS 1.3 stayed close enough to 1.2 for broken middleboxes to show how broken they are is an example of how backwards compatibility at all costs bites you, and yet you are using it to claim the opposite.

It's perfectly possible that all that needs to be done for TLS to stop being a continuous problem is a sane TLS 1.4. But I'd like to see an actual argument for what that would look like instead of you just insulting me without adding anything to the discussion.


> Those are part of the problem we should be solving. Delaying TLS 1.3 to enable middleboxes is precisely the kind of thing we should be throwing away from any standard.

Your "solution" of just breaking things while declaring this "doesn't have any compatibility problems" is not a solution people are going to accept. It genuinely doesn't matter that you think it'd be a great idea, not the tiniest bit, since it would see no adoption.

TLS 1.3 did not take this "backwards compatibility at all cost" approach you describe. On the contrary, it was engineered very carefully to work _around_ middleboxes. The protocol is untidy as a result, with extraneous compatibility fields, but the cryptography remains as intended. But you've not addressed that at all, since your entire basis is "From what I've read, a lot of the issues stem from the complexity of the standard itself".

Again, this issue you're reading about today is NOT a problem with TLS 1.3, and would not have been fixed by any changes to TLS 1.3 or imaginary alternatives to TLS 1.3 _unless_ as a side effect they prohibited falling back to earlier versions, something you can already choose to do if that's what you want (you do not).


>It genuinely doesn't matter that you think it'd be a great idea, not the tiniest bit, since it would see no adoption.

The discussion around the TLS 1.3 delays included plenty of people who also thought middleboxes should not be enabled. Whether a standard that broke compatibility with them completely would be adopted is an open question.

> TLS 1.3 did not take this "backwards compatibility at all cost" approach you describe. On the contrary, it was engineered very carefully to work _around_ middleboxes. The protocol is untidy as a result, with extraneous compatibility fields, but the cryptography remains as intended.

The second part contradicts the first. The standard had to be made more complex to allow for the backwards compatibility, and that's a future liability. We've had other holes in TLS from exactly this kind of completely unneeded extra complexity.

> Again, this issue you're reading about today is NOT a problem with TLS 1.3

Again, that's not my point at all. I'm not discussing today's issue in particular; I'm referring to the fact that we keep finding bugs in TLS consistently. I was wondering if we were at the point where a clean slate would reduce this. I've made that point, someone else has pointed to an experiment in doing just that, and maybe someday we'll get a full test. Your dismissal of the actual question as if it's invalid is just being unpleasant without being helpful.


That's a long road. Plus, short of cutting off support for legacy versions, you would not mitigate downgrade attacks. On the flip side, if we're ever going to go down that road, it needs to start somewhere.

A middle ground might be a modern stripped-down TLS stack plus something similar to HSTS to externally flag that a given site does not accept downgrades.
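
Purely to illustrate the idea, such a flag could look like an HSTS-style response header. The name and syntax below are entirely made up; no such header exists:

    # Hypothetical, HSTS-like pin: "for the next year, refuse any
    # handshake with this host below TLS 1.3" (not a real standard).
    TLS-Version-Pin: min=1.3; max-age=31536000; includeSubDomains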


Shameless plug: this is where current research is heading: https://cryptologie.net/article/467/quic-noise-nquic/


> The cat is not dead yet, with two lives remaining thanks to BearSSL (developed by my colleague Thomas Pornin) and Google's BoringSSL.

Some kind of award has to go to this sentence, which has to be the most convoluted way to simply say "aren't vulnerable."

In context you can only just barely follow it, and it literally involves counting the vulnerable + un-vulnerable libraries to check they all add up to 9...


I think the audience being addressed loved that joke as much as the brilliant attack suite itself.


Yeah... What about LibreSSL?


LibreSSL has no TLS 1.3 support yet.


> The last 20 years of attacks that have been re-discovering Bleichenbacher's seminal work in 1998 clearly show that it is close to impossible to correctly implement the RSA PKCS#1 v1.5 encryption scheme. While our paper recommends a series of mitigations, it is time for RSA PKCS#1 v1.5 to be deprecated and replaced by more modern schemes like OAEP and ECIES for asymmetric encryption or Elliptic Curve Diffie-Hellman for key exchanges.

RSA PKCS#1 v1.5: https://tools.ietf.org/html/rfc2313

Title: PKCS #1: RSA Encryption Version 1.5

tl;dr: deprecate RSA encryption as a whole?! Did I read this right?


The consensus among cryptographers for quite a while now has been that RSA should be avoided. Implementation vulnerabilities in RSA aren't surprising, and it's a poor choice of algorithm for modern cryptosystems.

However, note that much of the problem with implementing RSA correctly is the padding. The specific recommendation here is to only use RSA OAEP, and preferably to abandon RSA altogether for more modern (elliptic curve) constructions.

So no, they're not saying to deprecate RSA in its entirety (though I have high confidence all of the authors would strongly suggest that to anyone who asked). Rather, they're saying you should only use RSA with one very specific form of padding, if you absolutely insist on using RSA in 2019 (and you shouldn't unless you know you have to).
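
For anyone wondering what "only use RSA with OAEP" looks like in practice, here's a minimal sketch with the pyca/cryptography library (the key size and message are illustrative):

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # OAEP with SHA-256; never PKCS#1 v1.5 for encryption.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    key = rsa.generate_private_key(public_exponent=65537, key_size=3072,
                                   backend=default_backend())
    ciphertext = key.public_key().encrypt(b"a short symmetric key", oaep)
    plaintext = key.decrypt(ciphertext, oaep)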


The Cryptographic Right Answers (https://latacora.micro.blog/2018/04/03/cryptographic-right-a...) do tell you to ditch RSA if you can.

They say the only way to use RSA is in a very, very specific way that is probably not the default in many implementations. If you need to be careful when doing crypto, you will do something wrong. It's just better for everyone if you forget about RSA and switch to something that is both highly secure by default and hard to mess up in the implementation.


I'm a bit late, but this is referring to ditching RSA only for encrypting a symmetric key. The algorithm is vulnerable to side-channel attacks since it expects the plaintext to follow a certain padding. That method only has about 6% usage today and is generally deprecated in favor of DH.

With DH key exchange, RSA is still used, but purely for signatures; RSA is not used to hide any data.
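
For reference, the PKCS#1 v1.5 encryption padding in question looks like this (per RFC 2313); the decrypter checking those leading bytes is exactly the behavior Bleichenbacher-style attacks probe:

    EB = 00 || 02 || PS || 00 || D
    # 00 02 : block type 2 (public-key encryption)
    # PS    : at least 8 random, nonzero padding bytes
    # 00    : separator
    # D     : the data, e.g. the 48-byte TLS premaster secret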


RSA is a reliable algorithm; key exchange protocols are not, and are being broken all the time.

Where it excels is its original use in PGP/GPG, as no complicated key exchanges are taking place.


There's a big disparity between the level of ambition in transport security implementations and the big recent achievements in getting crypto more widely deployed...

I think the current standard should be memory-safe implementations with proven robustness against known classes of attacks, and optional resistance against traffic analysis (at the expense of wasted bandwidth).


Jesus. Imagine being enough of a genius to actually write timing-safe code.


Why? You should be able to mathematically prove the lack of timing channels over the whole negotiation instead... (for a specific CPU implementation, or a set of them, at least). It's just that even encryption library authors are not mathy enough and it takes effort to model CPUs well enough.

It is likely possible to prove PKCS#1 RSA broken by design...


Yeah, next time I deploy a web backend, lemme figure out if I can implement timing channels in my framework, and also make sure the framework authors modelled the encryption suite so that's even possible. It's not like I'm an actual software engineer getting paid to ship features instead of thinking mathematically about the encryption libraries I'm using.


> It's just that even encryption library authors are not mathy enough

I rest my case.


They can implement an algorithm. That is not the same as writing tons of pages of machine-code-level automated theorem proofs.

Except someone already wrote a library for timing proofs (including cache and memory) in Isabelle/HOL, but it has to be combined with the recompilation prover from the seL4 project. That would take some time and work.


And with Intel releasing new microarchitectures all the time, invalidating your proof?


Which is why you should use the RISC-V Ariane instead, which is already being timing-modelled by ETH.

Or work on ARM Cortex-A5 verification, then run it on the AMD-SP, which is one. Presuming they allow you to run your own code. Likewise the Intel ME... but that processor is some weird architecture.

Or other external hardware - a smart card or one of the fancy U2F keys.


Different RISC-V and ARM cores also have differing timing?


Yes, which is why I mentioned the specific Ariane in-order application processor and a specific ARM processor. If the encryption algorithm validates in the seL4/Ariane environment, it is algorithmically side-channel free. However, this does not say anything about it being side-channel free on x86 Linux - out-of-order execution, more caches, HT: all Spectre vectors.

Of course the proof can be easily extended to handle other in-order RISC-V cores. Proving out-of-order execution is hairier; there's a lot of work to be done.

Oh, and you get to freeze the code to a given validated compilation, FIPS-style. Any random C compiler revision can mess you up. Or you rerun the prover.


Huh, I could have sworn your comment said you should use RISC-V instead. Apologies.


Why is using RSA for key exchanges acceptable? I thought we had dedicated key exchange algorithms: Diffie–Hellman (DH), the ephemeral variant (DHE), and the elliptic curve variant (ECDHE). So why RSA?

Also, what if I disable RSA in my browser and make sure the ClientHello doesn't mention RSA? Will I be secure?


Your browser probably doesn't have an option to do this. If it did you'd find out that certain sites just don't work if you forbid RSA key exchange.

The type of site that wants an SSL Labs A+ score works fine. But your bank probably doesn't (they actively don't want ephemeral key exchange), and nor does some crumbly older HTTPS site running a stitched-together Apache 1.x on an old Debian release.

To protect against this attack, the server needs to refuse to do RSA key exchange, OR you need to refuse RSA altogether, including its safe and extremely popular use for authentication.
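
If you're curious whether a particular server still offers the dangerous key exchange, one quick check (assuming a reasonably recent openssl binary) is to offer only RSA-kex ciphers and see whether the handshake completes:

    # Offer ONLY RSA-key-exchange ciphers; a successful handshake means
    # the server will do RSA kex, the precondition for this attack.
    openssl s_client -connect example.com:443 -tls1_2 -cipher kRSA < /dev/null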


Serious question: Why does the bank care about the TLS key exchange?


Some businesses have to run WAF products for regulatory compliance, which is typically implemented via TLS decryption at the WAF. There are ways to do TLS decryption with ephemeral keys, but many orgs take the easy route of just giving the WAF the RSA key.


Is LibreSSL affected?


LibreSSL was not among the 9 libraries tested.

Nevertheless, it is likely that LibreSSL has not yet replaced this part of the complex code inherited from OpenSSL. In that case, LibreSSL would also be vulnerable.


I guess it's not, as it doesn't support TLSv1.3 yet: https://news.ycombinator.com/item?id=19113106


As others have mentioned, TLS 1.3 is not yet supported. But here's a little bit of context for that decision, as of early last year:

https://github.com/libressl-portable/portable/issues/228#iss...

We will support 1.3 once the standard is firmed up and finalized (i.e. ceases to be coopted by vendors making changes to allow for people to continue to run moribund middle boxes that can't recognize a new protocol on the wire) Since there is effectively nothing wrong with TLS 1.2 with a sanely chosen cipher suite today, we believe a clean careful implementation is more beneficial than early adoption.

So there's no rush on the part of the developers to support it. Skimming the commits on that git mirror, we can see 1.3-related commits started landing late last year.


Good question. The paper is here[1] but there is no mention of LibreSSL.

1: https://eprint.iacr.org/2018/1173


Strange. LibreSSL is a fairly well-known implementation compared to something like BearSSL (which I had not heard of until today). Does anyone have any idea why LibreSSL was not mentioned?


It's not really independent. It's a major fork of OpenSSL, but at the end of the day, it's a fork of OpenSSL.


Probably. It's basically OpenSSL, with pieces removed. I don't think they've had time to rototill the majority of it.


How about the obvious solution to all timing attacks: send the network response when a timer set to 2^K milliseconds expires, rather than when the data is ready? (With K adaptively incremented and decremented only rarely, and outliers managed by restarting the timer, so that the time is rounded up to the next 2^K.)

This way, the only timing signal available would be which requests take an outlier amount of time, and I doubt that's enough to break anything unless you can remotely cause the peer to hit slow disks or make network requests depending on secret data (which is a far more explicit programming choice than CPU timing differences).
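
A minimal sketch of that idea in Python (the function and parameter names are made up; a real server would keep K fixed per endpoint and adapt it only rarely, since picking the bucket per request still leaks the coarse duration):

    import math
    import time

    def respond_quantized(handler, base_ms=1.0):
        # Do the secret-dependent work, then hold the response until the
        # next power-of-two time bucket, so the wire only reveals
        # ceil(log2(elapsed)) rather than the exact duration.
        start = time.monotonic()
        result = handler()
        elapsed = (time.monotonic() - start) * 1000.0
        k = max(0, math.ceil(math.log2(max(elapsed, base_ms) / base_ms)))
        bucket = base_ms * (2 ** k)          # round up to 2^K milliseconds
        time.sleep(max(0.0, bucket - elapsed) / 1000.0)
        return result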


The article is about cache side channels. The attacker is running on the same CPU as the victim, typically in the Clown. Your "solution" isn't addressing the problem this article is about.


Maybe a one-size-fits-all transport security protocol is a bad idea. Simpler and smaller protocols with fewer features might provide better, more stable security?


Downgrading to TLS1.2 isn't the end of the world.

One of the things you can do to make a significant difference is configure all of your httpds (apache2, nginx, whatever) to specifically disallow SSLv3, TLS 1.0, and TLS 1.1.

There is no longer any relevant population of user agents that don't understand TLS 1.2.
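
In the two most common servers that's one line each (standard directives; the TLSv1.3 tokens assume a build new enough to support it):

    # nginx
    ssl_protocols TLSv1.2 TLSv1.3;

    # Apache (mod_ssl)
    SSLProtocol -all +TLSv1.2 +TLSv1.3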


This advice seems nice but I assume you wrote it as a response to the article title.

The article itself actually covers the TLS 1.3 downgrade only in passing, and probably only because, if they didn't, some people would say TLS 1.3 fixes this magically (it couldn't).

It's a Bleichenbacher Oracle, again. So the effect is you get the server to do RSA operations for you. You don't learn their private key, but the server uses it and you eavesdrop on the process.

A TLS 1.3-ONLY server isn't vulnerable (no oracle), and a client isn't vulnerable even in TLS 1.2 if it refuses to use RSA completely. But unlike your suggestion to disable much older versions, those options aren't very practical today.


Seconded, and sourced:

> Microsoft cited public stats from SSL Labs showing that 94 percent of the Internet's sites have already moved to using TLS 1.2, leaving very few sites on the older standard versions.

> "Less than one percent of daily connections in Microsoft Edge are using TLS 1.0 or 1.1," Pflug said, also citing internal stats.

https://www.zdnet.com/article/chrome-edge-ie-firefox-and-saf...


Well, given the amount of Internet traffic out there, I'd say that one percent (albeit 'less than' that) could be rather a lot of traffic that would be blocked if TLS 1.0 and 1.1 were totally dropped.


The suggestion was to block it at the server not the browser. Presumably most of that 1% is because of old sites/servers and not old browsers.


Android only enabled TLSv1.2 by default in Android 5.0. Sure, that seems like ages ago, but Android 4.x audiences can still be relevant to some.


Now, who was the TLS 1.3 committee member who pushed for removing "hard versioning" from the ServerHello?


Am I vulnerable if I only use TLS 1.2?


Yes. The only way to be invulnerable to this class of attack is one of:

1. You never use RSA at all (the attack needs a server to be willing to do RSA decryption, but clients only need to be willing to do RSA for certificate verification)

2. Everything is "on premises". This is a cache timing attack and probably won't be practical even a short distance away over a network.

3. Server doesn't allow any version below TLS 1.3 (a minimal sketch of this follows below)
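
For 3., the knob is a one-liner in most OpenSSL 1.1.1-backed stacks; sketched here with Python's ssl module:

    import ssl

    # Refuse every protocol version below TLS 1.3, so no TLS 1.2 RSA key
    # exchange can be negotiated and there is no oracle left to query.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3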


2. Just like Spectre was not practical over a distance... until you factor in that browsers run JavaScript.

3. Almost all known servers accept TLS 1.2, and banks definitely do.


Actually, what I mean is that my server only speaks TLS 1.2, nothing below or above.

So am I still vulnerable per 1. and 2.?


Yes. Despite the HN title, the article's topic isn't really "We found a new problem in TLS 1.3"; it's closer to "Bleichenbacher oracles still exist in lots of TLS implementations, although in two of the nine we checked we couldn't find an oracle".

They only mention TLS 1.3 because otherwise uninformed people would say "Just upgrade to TLS 1.3", which won't fix the problem.


Thanks. Still a confusing title; I guess the best would be "TLS 1.2, 1.3".


Thank you; not OP, but a helpful summary.


Is BouncyCastle affected?


What about LibreSSL?




