Patch Critical Cryptographic Vulnerability in Microsoft Windows [pdf] (defense.gov)
714 points by Moral_ on Jan 14, 2020 | 228 comments



From a conversation with Thomas Pornin, a plausible explanation given the details provided in the DoD advisory:

Given an ECDSA signature and control over the curve domain parameters, it's straightforward to create a second private key that matches the original public key, without knowledge of the original signing private key. Here's how:

To start with, you need to understand a little bit about how curve cryptography works. A curve point is simply the solution to an equation like

    y^2 = x^3 + ax + b mod p
The "curve" itself consists of the parameters a, b, and p; for instance, in P-256, a is -3, b is (ee35 3fca 5428 a930 0d4a ba75 4a44 c00f dfec 0c9a e4b1 a180 3075 ed96 7b7b b73f), and p is 2^256 - 2^224 + 2^192 + 2^96 - 1.

To use that curve for cryptography, we standardize a base point G, which generates all the points we'll use. A private key in ECC is simply a scalar k, a number modulo the order of G; the public key corresponding to that private key is kG (the curve scalar multiplication of the point G by our secret k). Everybody using P-256 uses the same base point; it's part of the standard.

Assume that we have a signature validator in CryptoAPI that allows us to specify our own nonstandard base point. We're ready to specify the attack; it's just algebra:

Let's call Q the public key corresponding to the signature; for instance, Q could be the ECC public key corresponding to an intermediate CA.

Q is a point on a named curve (like P-256). Q = xG for some private key x; we don't, and won't ever, know x. G is the standard generator point for (say) P-256.

What we'll do is define a "new curve", which is exactly P-256, but with a new generator point. We'll generate our own random private key --- call it x' --- and then from that random private key compute a malicious generator G' = (1/x')*Q.

On our "new curve", Q remains a valid point (in fact, our evil curve is the same curve as P-256, just with a different generator), but now Q' = x'G', and we know x'.

Now we sign a fake EE certificate with our evil private key x'. Presumably, Windows is just looking at the public key value and, reading between the lines of the DoD advisory, the curve equation, but not the base point. By swapping base points, we've tricked Windows into believing the private key corresponding to Q is x', a key we know, and not x, the key we don't know.
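
To make the algebra concrete, here is a minimal sketch in plain Python on a toy curve (y^2 = x^3 + 3x + 5 over GF(17), whose group has prime order 23). The parameters, names, and keys are mine and purely illustrative; a real attack does the same thing on a full-size curve like P-256.

    # Toy-curve sketch of the rogue-generator trick (illustration only).
    p, a, b, n = 17, 3, 5, 23            # field prime, curve coefficients, group order
    O = None                             # the point at infinity

    def add(P, Q):                       # affine point addition
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return O                     # P + (-P) = O
        if P == Q:
            m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)

    def mul(k, P):                       # double-and-add scalar multiplication
        R = O
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    G = (1, 3)                           # "standard" generator (1 + 3 + 5 = 9 = 3^2, so it's on the curve)
    x = 7                                # victim's private key (unknown to the attacker in real life)
    Q = mul(x, G)                        # victim's public key: the only thing the attacker sees

    x_evil = 5                           # attacker picks any private key they like...
    G_evil = mul(pow(x_evil, -1, n), Q)  # ...and derives G' = (1/x') * Q
    print(mul(x_evil, G_evil) == Q)      # True: under G', x' "is" the private key for Q

Swap the toy numbers for P-256's and the same few lines of algebra go through unchanged; a full-size sage version appears further down the thread.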

I'm paraphrasing a shorter writeup Pornin provided, and the basic curve explanation is mine and not his, so if I've worded any of this poorly, blame me and not Thomas Pornin. The actual exploit-development details of the attack will involve figuring out in what circumstances attackers can swap in their own base point; you'd hope that the actual details of the attack are subtle and clever, and not as simple as "anyone could have specified their own base point straightforwardly at any time".

See also this related exercise in Sean Devlin's Cryptopals Set 8:

https://toadstyle.org/cryptopals/61.txt

This attack --- related but not identical to what we suspect today's announcement is --- broke an earlier version of ACME (the LetsEncrypt protocol).


I hope that the actual vulnerability is far more complicated. If we can't even get crypto libraries right (where you'd hope most of the formal verification folks are), then there's not much hope of security for the rest of the industry.

I'm normally not much of a pessimist but things like this really make me wish we could just burn all the things and start over.


> just burn all the things and start over

Wrap all of this with an "IN MY OPINION"...

That would make things worse, because we'd make the same mistakes again. I've been on many start-over projects (Xeon Phi, for example, threw out the P4 pipeline and went back to the Pentium U/V). It doesn't work. You know what the most robust project I've worked on is? The instruction decoder on Intel CPUs. The validation tests go back to the late 1980s!

You make progress by building on top and fixing your mistakes because there literally IS NO OTHER WAY.

Go read about hemoglobin; it's one of the oldest genes in the genome, used by everything that uses blood to transport oxygen, and it is a GIGANTIC gene, full of redundancies. Essentially, a billion years of evolution accreted one of the most robust and useful genes in our body, and there's nothing elegant about it.

I think that's where we are headed: large systems that bulge with heft but contain so much redundant checking code that they become statistically more robust.


It really pisses me off when devs start ripping out asserts and tests that fail "but have been doing so for a while, so clearly they aren't needed."

How about... no. Review the code and determine whether the undefined behavior is understood well enough that we can accept the bad inputs. Those asserts & tests were created for a reason and need to be maintained, not just removed because no one can spot the subtle failure modes.


Yes, though fragile tests can be a sign that simplification might benefit your system.

Sometimes it's useful to just turn flaky failures into solid failures.


> You make progress by building on top and fixing your mistakes because there literally IS NO OTHER WAY.

As long as you are talking about knowledge, not artefacts. There is indeed no choice but to accrete, organise, and correct knowledge over time, because anything you forget is something you might get wrong all over again.

Artefacts are different. It often makes sense to rebuild some of them from scratch, using more recent knowledge. We rarely do that, because short-term considerations usually win out (case in point: Qwerty).

> I think that's where we are headed: large systems that bulge with heft but contain so much redundant checking code that they become statistically more robust.

Only if we give up any hope of improving performance, energy consumption, or die area. Right now the biggest gains can be found by removing cruft. See Vulkan for instance.


Vulkan is a good example indeed.

Most studios end up putting middleware on top of it to reduce Vulkan boilerplate to a more manageable level of code, which ironically makes some Vulkan code bases run slower than OpenGL AZDO, due to misunderstandings of how to do the low-level work properly.


Vulkan is both a good and bad example.

Vulkan tries to remove the "bloat" of the driver by moving it into the engine (or the middleware the engine uses), which, yes, reimplements a pretty sizable chunk of what the driver used to do.

But it exposes the API in a way that requires domain-specific knowledge of how modern GPUs work, which requires, frankly, smarter engine developers. They need to stop thinking only in the ways OGL/D3D taught them to think, and also think like a driver developer, or possibly even a compiler developer.

OpenGL was designed wrong because no one knew what modern GPUs would eventually look like, and it tried to solve the problem at the wrong layer. Fixed-function hardware worked pretty much the way OpenGL worked in the beginning; no one realized GPUs would eventually become, basically, absolutely massive, highly parallel, DSP-esque math coprocessors more complex than the systems that host them. OpenGL became a mess because newer styles of hardware kept getting bolted onto it (VBOs/IBOs/VAOs, the eventual move to unified buffers, compute shaders, fixed-function geometry then non-fixed-function geometry shaders, ubershaders, and the move from direct to deferred back to direct, etc.).


> They need to stop thinking only in the ways OGL/D3D taught them to think, and also think like a driver developer, or possibly even a compiler developer.

Which is exactly the opposite of what anyone who just wants to draw something wants to think about.

Also, Vulkan is on its merry path to having an endless list of extensions, so it will eventually match OpenGL's complexity in choosing which code paths to take, with the required cleverness of having to be a graphics programmer, driver developer, and compiler developer at the same time.

No wonder anyone who wants to stay sane picks up a middleware engine instead.


Vulkan was never meant to be directly used by people who “just want to draw stuff”. It was meant to give engine developers the tools required to squeeze more performance out of the GPUs.

This is a case of working as designed. The people trying to directly use Vulkan in their games without any middleware layer are just generally wrong here.

Regarding extensions, this is just what happens when you specify something that keeps evolving - there’s no getting around this. What you can do to minimise the complexity is decide to require certain extensions once they’ve been around for long enough. That’s what everyone does.


It would help if Khronos promoted an API for people who “just want to draw stuff”, given that OpenGL 5.0 most likely will never happen and those people don't want to be stuck with OpenGL 4.6 forever.

That is not what everyone does, because Vulkan gets new extension updates almost every week.


> It would help if Khronos promoted an API for people who “just want to draw stuff”

Would it?

What you really want is a stable API to talk to the hardware. It doesn't really matter if the API is nigh unusable, as long as it is stable. Only when such stability is reached can we reliably build stuff on top of it. Including a "just draw stuff" engine.

We could for instance re-implement Flash on top of Vulkan. Such a thing wouldn't need a standard body to get done and be usable by a wide range of people. (Though in this particular case, we'd likely have some standard body involved anyway, since there's already so much Flash code out there.)


So where do OpenGL developers move to, when 4.6 moves into "this legacy thing we would like to drop"?


You implement OpenGL as a middleware that speaks Vulkan.

Basically, ANGLE it, but for desktop OpenGL instead of GLES.

As an aside, Google is adding a Vulkan backend for ANGLE, and Microsoft seems to be adding a DX12 (which is basically DXVulkan) backend (to match the DX11 backend they gave Google) at some point in the future.

So, GLES, in its entirety, is now a community supported, open source, Vulkan middleware. No reason why we can't do that with desktop GL too.


So 10 years from now, those who don't want to be a mix of graphics developer, driver author, and compiler designer have to keep using a frozen API from 2018, without any access to whatever has changed in their computers during the intervening decade.

All because providing something like MetalKit or DirectXTK is too much to ask of Khronos and LunarG.


But should it change? There have been issues where updating OpenGL support in drivers broke earlier apps due to accidental changes in how existing features were implemented.

Vulkan and DX12 are far less likely to break existing apps in the future due to having far fewer core features.

It makes no sense to have what is essentially an entire legacy middleware in the driver when it no longer represents modern hardware.

Unlike GLES, OpenGL basically can never deprecate features, and D3D9 support will never truly die. It's a lot easier to just package a universal shim into existing legacy apps than it is to keep mangling drivers over the issue.


> It makes no sense to have what is essentially an entire legacy middleware in the driver when it no longer represents modern hardware.

Exactly, but it is the only API that Khronos is offering for those that don't want to be Vulkan experts.

Which leaves middleware engines, something totally unrelated to Khronos APIs, as the only future proof path for accessing GPU features on modern hardware.

As for drivers breaking down: the main reason Vulkan is now compulsory on Android as of version 10 is that, while it was optional, the few OEMs that bothered to support it did a very lousy job, so Google hopes that by making it compulsory and part of the Android CTS, the situation will improve.


> Vulkan and DX12 are far less likely to break existing apps in the future due to having far fewer core features.

This is questionable. Vulkan by definition has basically no error checking in the driver, and while developers are supposed to use the validation layer, they may not do so; even if they do, there are certainly plenty of incorrect things an application can do that won't be caught by validation.

Incorrect programs may still happen to run correctly on existing drivers, but then fail with a driver update that happens to change the undefined behavior.


C compiler writers answered that conundrum a long time ago: "if you (even accidentally) rely on undefined behaviour, the warranty is void".

I don't necessarily agree, but if we have a way to avoid undefined behaviour (and at least in C there are ways to make pretty thorough checks), then it works in practice.


The checks that, according to most surveys and security reports, are used by only a tiny part of the C community?

If that doesn't work for C in terms of mainstream adoption, why would it work for Vulkan?


It won't, unless only a fairly small elite ends up using Vulkan. And I believe that's what will happen indeed: Vulkan is low level enough that most likely, only engine devs and middleware devs will touch it.

You will of course have the occasional cowboy (which I personally am, though in a different domain), but that shouldn't matter that much in the grand scheme of things.

Now if you ask me, Vulkan is not enough. What we really want is a stable, usable hardware interface. Basically an ISA. Such a thing would have close to zero bugs, because hardware folks know how to properly test their designs. Undefined behaviour is likely unavoidable, but I believe it can be reduced to a reasonable minimum.

If AMD and NVidia started something like RISC-V, except for graphics cards, it would likely have a greater impact than RISC-V itself.


Perhaps WireGuard as a low-cruft replacement for OpenVPN is a good illustration of this.



> That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

With the proper code comments for the hair (in the code or in the source repo) and regression tests, it becomes possible to clean up and even rewrite.

Floppy disks are no longer a thing, and neither is Windows 95. RAM & swap space are abundant, so that OOM may never happen any more IRL.

The app's whole architecture may have been carefully chosen for the hardware and simplistic compilers of another age.

Is it okay never to rewrite a webpage where 50% of the codebase is IE6 hacks?

If it is code that must be maintained, then at some point the hair may need to be shaved; it simply cannot grow forever as the world moves on around it. The dogma of "never rewrite" is silly without further context.


>If it is code that must be maintained, then at some point the hair may need to be shaved; it simply cannot grow forever as the world moves on around it. The dogma of "never rewrite" is silly without further context.

"Shaving" seems more close to refactoring & partial rewrite than starting from scratch. Which is what we are talking about.

Still, you make good points. Maybe we should think of "restarting from scratch" the same way we do premature optimization: (1) don't do it; (2) don't do it too early; (3) if you must do it, measure first.

I think every developer dreams of rewriting from scratch because they hate how hacky and ugly their code is, probably due to rushed deadlines and because it was just supposed to be an MVP. And they think about throwing it away, starting clean, and doing it the 'proper way'. This, IMO, is the wrong reason to start from scratch.

But if your technical debt is genuinely preventing your product from going where it needs to go or doing its job, that is the right reason. Again, you have to weigh whether the technical debt costs more than rewriting from scratch, re-opening old bugs, introducing new ones, and breaking your customers' workflows - they also invested a lot into your old program. How many times have you liked a program until 'the damn devs went and ruined a good thing' by 'fixing what wasn't broken'? Again, customers don't see or care about the code or technical debt. They just need to get their work done.


> April 6, 2000

Wow. That goes waaaaaaay back. It’s crazy to think that 1980 was to him at the time as 2000 is to us _right now_.


This is nothing new. Our parents all saw this already 50 years ago wherever they were working at the time.


Thank you for this, it's exactly what I'm getting at.


This stuff often looks simpler and less subtle in hindsight.

Modern cryptography has gotten better at taking human (developer) error into account when designing secure APIs, but the fact of the matter is that the math is subtle, and cryptography in general is subtle due to the places where it collides with reality, so burning everything down and starting over is likely to just cause us to rediscover issues we already know about.

Say we stop using X.509 certificates. We will continue to use signing to cryptographically bind claims, and we will still use PKI-like structures to attest trustworthiness, so we are still vulnerable to the same styles of attack even if we don't use certificates.

We have actually learned a few things, such as that cryptographic agility seems to cause more issues than it solves, so the field is improving in important ways. But if it's not a key-matching weakness, it's going to be some Unicode encoding BS, or some other critical but unvalidated data somewhere else, or... (because this is a real, in-the-wild problem) using phone numbers to generate cryptographic keys for coin wallets.


This is true, but the bug here is not subtle. It would have been shocking to discover an ECC implementation that let attackers specify curve parameters even 10-15 years ago. When we blogged the e=3 debacle back in '07 or whatever, we linked to a Kaliski presentation from 1999 that called out validating curve parameters.

At least with the ACME vulnerability, there was a novel service model involved. Here, we're talking about certificates that allow you to embed what is in effect your own cryptographic algorithm to interpret the certificate under!

This is a rare instance where I'm happy to concede that closed source allowed a terrible bug to survive far longer than it would have if nobody needed to break out a copy of Ghidra to read the code that validated elliptic curve signatures.


Very interesting take. Good description of Pornin's thoughts.

By the way, when you say closed source allowed... do other libs in the open-source space check the curve params?


Most people implementing ECC signatures end up handling only a chosen group of named curves, whereupon there aren't any curve parameters to check.

For example, in Signal, IIRC they use Curve25519. So there isn't any code for parsing or checking parameters; it's just code that implements Curve25519.

NSS (the library used in Firefox on every platform) only accepts NIST's P-256 and P-384 named curves. It parses an ASN.1 data structure which can carry parameters for curves, but if parameters are present instead of a name, NSS's code gives up immediately, because it's only looking for those specific names. (These aren't actually human-readable names; they are OIDs such as 1.2.840.10045.3.1.7 for curve P-256.)


> cryptographic agility seems to cause more issues than it solves,

Is that so? The ability to shift to new hash functions and ciphers within the bounds of a single protocol seems to have accelerated the adoption of better primitives.


We don't need the ability to switch to new algorithms as much as we need the ability to ditch old ones. Agility in cryptography only needs to mean the ability to deprecate what's broken. We're still going to see newer and more robust algorithms implemented in new software and protocol versions anyway.

What we need is 1 or 2 strong cipher suites and exactly zero weak ones, not 10 strong ones and 5 weak ones.


How is that not cryptographic agility? Which cipher suites to support is a question separate from whether the cryptography should be runtime configurable at all.


One should version whole protocols instead of adding option negotiation for things like cipher suite.

So say:

TLS 1.4 = “NIST version”: only supports ECDHE(P-256)+AES-256-GCM+SHA256

TLS 1.5 = “Bernstein version”: only supports ECDHE(X25519)+ChaCha20-Poly1305+Blake2b

Because of the X.509 legacy, both of these future TLS versions might have to support RSA-2048 and P-256 ECC certs, but supporting just one would be better.

In either case, fewer options and branches means simpler and more secure. Both can be enabled, and one turned off if a weakness is found.


> If we can't even get crypto libraries right

Signature verification is one of the hardest things to get right. One reason is that it's harder to test: when you encrypt or hash something, you have a whole bunch of bits you can check against test vectors. With signature verification, you only have one bit: it's a match, or it's a fail.

Moreover, it's very easy to forget to check something (here, that we are using the same base point). Other constructions, like EdDSA, are simpler, and verification requires a single equality check. Harder to botch.

And even then, implementers can be tempted to get clever, which requires extra care. I've personally been bitten by not verifying mathematical jargon, having mistakenly thought "birational equivalence" was complicated speak for "bijection". Almost, but not quite. That single oversight led to a critical, easy-to-find, easy-to-exploit vulnerability.

We found out 15 months later, a full year after public release, by tinkering with the tests. A cryptographer would have found it in 5 minutes, but as far as I can tell, they're all very busy.


That’s an interesting assessment. Rogaway just gave a talk at RWC about APIs for secret-splitting schemes. IIRC he said that the API needed to be closer to that of symmetric crypto. But from the diagram it seemed like you were obtaining some associated data back to check that the secret was correct (not sure if this would work/is relevant with more recent DKG schemes).

A signature verification returning an actual AD would be interesting as well.


EdDSA can have something close. Long story short, an EdDSA signature has two parts, often called "R" and "s". Verification works by producing a number using the public key and "s", then checking that this number is the same as "R". There are basically 3 steps:

  1. h_ram   <- HASH(R || public_key || message)
  2. R_check <- obscure_computation(public_key, s, h_ram)
  3. if R_check == R, accept, else reject
Steps 1 and 3 are straightforward (the hash and the constant time comparison are almost always implemented in dedicated routines, tested separately). Step 2 is the most dangerous (that's where the elliptic curve magic happens).

EdDSA implementations would be easier to check against one another if they all exposed step 2 as part of the public API. Bonus points if step 2 can handle invalid inputs (low-order points, points on the twist...) in a standard manner. I haven't seen such a thing though, probably because end users never need a separate step 2.
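
To make the split concrete, here is a hedged sketch of the same three-step shape for a Schnorr-style scheme over a toy multiplicative group. EdDSA does the equivalent over an elliptic curve with careful encodings; the group, names, and hash construction below are simplifications of mine, not Ed25519.

    import hashlib, secrets

    p, q, g = 23, 11, 2                 # g generates a subgroup of prime order q in Z_p* (demo-sized, not secure)

    def step1_hash(R, pub, msg):        # step 1: h_ram
        data = f"{R}|{pub}|".encode() + msg
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def step2_recompute_R(pub, s, h):   # step 2: the "obscure computation", exposed on its own
        return pow(g, s, p) * pow(pub, -h, p) % p

    def verify(pub, msg, sig):
        R, s = sig
        return step2_recompute_R(pub, s, step1_hash(R, pub, msg)) == R   # step 3

    x = 7                               # private key
    pub = pow(g, x, p)
    r = secrets.randbelow(q - 1) + 1    # per-signature nonce
    R = pow(g, r, p)
    s = (r + step1_hash(R, pub, b"hi") * x) % q
    print(verify(pub, b"hi", (R, s)))   # True

With step 2 isolated like this, two implementations can be fed the same (pub, s, h) inputs, including pathological ones, and diffed on the dangerous part alone.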

Still, I can already envision the benefits. I'll probably use it myself.


There's a history of vulnerabilities like these in some of the most important crypto libraries. For instance: until 2008, NSS, the TLS library used by Firefox, couldn't properly validate RSA signatures from e=3 RSA keys (it wasn't validating the full signature block, but rather parsing it and looking for the embedded hash). For e=3 roots, which were readily available at the time, you could simply build any signature block you wanted and then take its cube root.
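
For the curious, here is a hedged sketch of the cube-root trick, assuming a lax verifier that checks only the leading bytes of s^3 and ignores trailing garbage. (The actual NSS bug was in its ASN.1 parsing rather than a literal prefix check, but the effect was the same.)

    # e=3 forgery sketch: no private key needed, just an integer cube root.
    def icbrt(n):                          # floor of the integer cube root (Newton's method)
        x = 1 << ((n.bit_length() + 2) // 3)
        while True:
            y = (2 * x + n // (x * x)) // 3
            if y >= x:
                return x
            x = y

    GARBAGE_BITS = 1000                    # low bits the sloppy verifier never looks at
    prefix = b"\x00\x01\xff\xff\x00" + b"fake-asn1-and-hash"   # the bytes it does check
    target = int.from_bytes(prefix, "big") << GARBAGE_BITS

    s = icbrt(target) + 1                  # round up so s**3 >= target
    forged = s ** 3                        # stays below a 2048-bit modulus, so no reduction occurs
    print(forged >> GARBAGE_BITS == int.from_bytes(prefix, "big"))   # True: the prefix checks out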


Bleichenbacher'06 never dies.


Neither does BB'98


Can’t wait for BB20


Interesting. Were there any similar vulnerabilities in Windows XP's crypto libraries?


There have been crypto vulnerabilities in Windows libraries before, but not the e=3 vulnerability; CryptoAPI parsed RSA signatures back-to-front, and so sidestepped the issue.


> If we can't even get crypto libraries right (where you'd hope most of the formal verification folks are)

Personally, I'd hope most of the formal verification folks are working in firmware for industrial/medical embedded systems, and/or the microcontroller designs that go into those same systems. A lack of encryption (outside of military contexts) doesn't usually directly cause people to die.


Interestingly, Microsoft presented EverParse, designed to produce verified parsers for these sorts of data formats, at USENIX Security 2019. https://www.usenix.org/conference/usenixsecurity19/presentat...

But it's only for parsing the data. What gets done with it after parsing can still be buggy.


Crypto is hard. Choose your libraries carefully, stay updated and for Zimmermann's sake: don't roll your own.


We need one time pads. They're the only really trustworthy crypto.


By all means: make yourself some one-time pads. Maybe you can convince Google to accept one from you at a dead drop somewhere in Mountain View.


Hmm. Maybe I'll generate some high quality OTPs for myself with a good CSPRNG. I could use any decent block cipher in counter mode, just need to guard against counter re-use, and then ship my OTPs off to anyone I need to communicate with.

Hang on, if I could come up with a way of securely sharing the key I used with my recipient, I wouldn't need to actually mail the OTP to them, since they could generate it themselves. Should probably include a nonce too.

Now, if only there were a secure method to share the key...
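
(For concreteness, the scheme being described looks roughly like the sketch below; SHA-256 stands in for the block-cipher PRF, purely for illustration, and there is no authentication.)

    import hashlib

    def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        out, counter = b"", 0
        while len(out) < length:           # PRF(key, nonce, counter) in counter mode
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
        ks = keystream(key, nonce, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    ct = xor_cipher(b"k" * 32, b"nonce-01", b"attack at dawn")
    print(xor_cipher(b"k" * 32, b"nonce-01", ct))   # b'attack at dawn'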


> Now, if only there were a secure method to share the key...

... that doesn't include a critical cryptographic vulnerability. :)

To be generous to the now greyed GP, there's probably a way to teach a recipient a short pad and its proper use so that the recipient can later decode a short ciphertext sent over Twitter. Perhaps even performing that feat in their head. None of the cryptography that could fit your quip has that property.

And if the recipient does something catastrophic like resend the pad over Twitter to confirm they memorized it correctly, there's at least a chance they may catch their error. Perhaps they may even correct it without the goddamned Department of Defense coming into it.

I don't get cryptographers' condescension toward OTP zealots. All they want is a better boat. At least have the decency of the captain from Titanic and apologize while we sink for not having delivered it.


> Maybe I'll generate some high quality OTPs for myself with a good CSPRNG

That's not a one-time pad, it's simply a stream cipher.


That's the joke!


https://eprint.iacr.org/2019/779 - Paper on the fun things you can do with signatures, including a write up of the Let's Encrypt Attack discovered in 2015 and some more recent attacks.


Mobile users: there is a “p” at the end of the equation; it is hidden due to the type of code formatting that HN uses.


In fact, exactly what is hidden will vary depending on your device and font size.

You can scroll the code block horizontally to see what's hidden, or here is the equation without code formatting:

y^2 = x^3 + ax + b mod p


So really, the problem is more that the horizontal scroll bar is hidden by default.


This attack is similar in nature to the bug in some JWT implementations where you could pass "none" as the signature algorithm, effectively rendering the whole scheme open to any arbitrary payload, which would pass validation. In this case you can pass an arbitrary G, which effectively allows you to generate a (private key, G) pair for any public key, so you can impersonate any identity with it, right?


So the mitigation would be to add a check that the generator point in (for example) a CertificateVerify message is the one in the P-256 spec (or otherwise the one in the cert; I’m not deep enough to know where it usually lives)?


In practice this means rejecting the "specifiedCurve" choice in ASN.1 ECParameters. ASN.1-based protocols are the only ones I'm aware of that permit specifying an arbitrary curve. In the PKIX ASN.1 standard(s), old-style EC public keys are specified with an ECParameters field:

  ECParameters ::= CHOICE {
    namedCurve      OBJECT IDENTIFIER,
    implicitCurve   NULL,
    specifiedCurve  SpecifiedECDomain
  }
PKIX, which is what TLS and most other standards use for ASN.1 message grammars, already mandates that "implicitCurve and specifiedCurve MUST NOT be used". See https://tools.ietf.org/html/rfc5480#section-2.1.1 (There are many other related RFCs. It gets very confusing, especially once you take into account obsolete and draft RFCs.)

Newer curves, like the EdDSA curves, are each specified with a unique OID and have a simpler public key grammar. See https://tools.ietf.org/html/rfc8410 Older public curves share an OID and a more generic ("flexible") syntax, thus the ECParameters field. (RSA public keys also have a parameters field, but it's unused. However, annoyingly, some implementations omit the field altogether, while others set a NULL value.)
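
One hedged sketch of how a validator can enforce that rule in practice: exact-match the raw DER of the ECParameters field against an allowlist of namedCurve encodings, so a specifiedCurve SEQUENCE never even gets parsed. The OID byte strings below are the genuine DER encodings for P-256 and P-384; the function name is mine.

    # Accept only the namedCurve CHOICE by binary comparison. Anything else,
    # including specifiedCurve (which encodes as a SEQUENCE, tag 0x30,
    # rather than an OID, tag 0x06), is rejected without being interpreted.
    ALLOWED_EC_PARAMS = {
        bytes.fromhex("06082a8648ce3d030107"),   # OID 1.2.840.10045.3.1.7 (P-256)
        bytes.fromhex("06052b81040022"),         # OID 1.3.132.0.34 (P-384)
    }

    def ec_params_acceptable(der_ec_parameters: bytes) -> bool:
        return der_ec_parameters in ALLOWED_EC_PARAMS

This is the "don't parse it, compare it" approach that comes up again further down the thread.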


My guess is somebody at Microsoft responded to a customer need, or claimed customer need, to support specifiedCurve, arguing that it was OK so long as said curve was a standard curve. I just don't see why you wouldn't otherwise stop at "PKIX forbids this, let's just not implement it".

There probably aren't any such certificates out in the Web PKI, because Mozilla's rules prohibit them (and, almost as important, Firefox won't validate them). But Microsoft trusts a whole bunch of dubious kinda-sorta-wannabe Certificate Authorities; maybe one of those issued crap with specifiedCurve for a standard curve? Or maybe a corporate internal CA?

That's a test somebody can do: if you use a Microsoft internal CA, can you easily mint these stupid specifiedCurve certificates with it? Or does it make you pick a named curve and emit the OID properly? How about other popular private CA solutions? EJBCA, maybe?


Who on earth has a corporate internal CA that issues certificates signed on non-standard curves? Who would ever do that? It's not like you're just a couple parameters away from Curve25519; a serious alternative curve will always come with alternative curve code.


Curve25519 was only introduced in 2005, so it's entirely plausible that MS had a customer prior to that with the attitude "we don't trust those NSA curves, we'd rather use our own" (though whether those customers would choose to use the MS crypto library is another question...)


In the scenario I'm describing, specifiedCurve is NOT being used to sign with non-standard curves; it's being used to sign with the standard curves, just expressed using specifiedCurve anyway.

As a crypto/security person, your instinct is to say "no", because something that is more complicated but adds no apparent benefit is a problem. That's why PKIX says "no" here, and it's why Mozilla policy says "no" here. But for a Microsoft salesperson trying to land a large deal before the quarter ends, the instinctive answer is "yes", unless one of the technical people can explain why it's inherently unsafe.

This is how Microsoft ends up supporting six different bad ways to do something in Windows - not because they're lousy engineers, but because they are willing to do what it takes to make the sale. Of course, specifically in security, arguably that does make them lousy engineers.

The worst part about such requirements is that they often turn out to be bogus. One technical person pastes in a description of NIST's P-256 with all the curve details, and a game of telephone results in the idea that they want to use specifiedCurve, when actually they'll use a named curve because duh, of course they will. But if nobody says "no", the specifiedCurve feature gets baked into a requirements document and actioned.

I have had non-security "requirements" that I pushed back on, and six transatlantic conference calls later I'm talking to the person who supposedly "had" the requirement, and they go "I have no idea how this got made into a requirement, we don't need this at all", and a bunch of work vanishes instantly. But I do that sort of thing because I'm stubborn; it might well be easier and faster (and more profitable) to just fulfil the unnecessary requirement.


I get all that, but I'm asking: why would anyone ever do this? What conceivable benefit could there be to it?


It's not even that: the AlgorithmIdentifier structure in the SubjectPublicKeyInfo is allowed to contain parameters; the one in the Signature should not. This is arguably a spec bug.


There's even a timely question about this on m.d.s.policy, as a result of a Mozilla policy revision which spells out byte-by-byte what a conformant AlgorithmIdentifier looks like:

https://groups.google.com/forum/#!topic/mozilla.dev.security...

This gave Ryan a chance to point people at Adam Langley's wise observation that you should not parse things like signatures when you can instead calculate the entire value you expect and then just binary-compare: anything that doesn't match is wrong, and you needn't care why.


The mitigation should be to remove all code that supports custom elliptic curves. This is a misfeature; it shouldn't exist. I also don't think anyone uses it for real.

This stems from a time when people thought maximum flexibility in cryptography was a good idea. It's not.


So this explains the math and crypto part, but how does that tie into X.509 and certificates?

From what I understand, we can put custom parameters into a certificate, but the parameters come with the key, not the signature. So we have:

CA cert + key A + parameters A

signs

My cert + key B + parameters B + forged signature from A

Now we can only mount this attack if we can somehow control a part of the parameters (the base point) that is used to verify the forged signature. Is the bug that Windows is using parameters B to verify the signature from A? Or am I missing something, and is there another way to supply parameters with a signature?


From what I read, X.509 allows the signer to specify the curve parameters along with the signature.

And yes, the bug here would be that Windows accepted parameters B without confirming they match A; it only checked that the public key was the same.

So you have an official and trusted root/intermediate cert C1, which contains pubkey 1 and parameters A and corresponds to privkey 1 (secret, obviously). A signature doesn't usually specify parameters (they just come from the trusted cert); the leaf certificate just contains the reference to the trusted cert and its public key, and of course also contains the actual signature.

In the attack, you reuse pubkey 1 but create parameters B and associated privkey 2, and using that you create a leaf cert that contains the same references an official signature would contain - except you also specify parameters B and supply a signature that validates only under parameters B. Windows then accepts both the parameters and the signature.


That would make the attack plausible, but I wonder where these parameters live in practice.

I tried creating a cert with custom curve parameters here: http://dpaste.com/1Q2MYWF

It seems the parameter block is all part of "Subject Public Key Info". The signature is just a binary blob at the bottom. But openssl doesn't really break that down; does the signature have its own internal encoding that allows supplying additional parameters?

And if that's the case: how does that make any sense? It sounds like just asking for trouble. (I mean... there can never be a situation where the parameters of the signature do not match the parameters of the key.)


Try pasting your cert into https://lapo.it/asn1js/

You can see all the parts in the blob:

        OBJECT IDENTIFIER 1.2.840.10045.2.1 ecPublicKey (ANSI X9.62 public key type)
        SEQUENCE (6 elem)
          INTEGER 1
          SEQUENCE (2 elem)
            OBJECT IDENTIFIER 1.2.840.10045.1.1 prime-field (ANSI X9.62 field type)
            INTEGER (256 bit) 1157920892373161954235709850086879078532699846656405640394575840079088…
          SEQUENCE (2 elem)
            OCTET STRING (32 byte) 0000000000000000000000000000000000000000000000000000000000000000
            OCTET STRING (32 byte) 0000000000000000000000000000000000000000000000000000000000000007
          OCTET STRING (65 byte) 0479BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798483A…
          INTEGER (256 bit) 1157920892373161954235709850086879078528375642790749043826051631415181…
          INTEGER 1
This will help you understand the ASN.1 encoding of a cert: http://luca.ntop.org/Teaching/Appunti/asn1.html


The parent comment is wondering about the structure of the signature and whether different curve parameters can be specified for it. How can explicit curve parameters be specified in an ECDSA signature? ecdsaWithSHA256, at least, is simply two bigints. There's no spot for specifying explicit parameters.


This is answered upthread and in RFC 5480: the AlgorithmIdentifier has an ANY OPTIONAL field for parameters.


Missed that. Thank you!


Subject Public Key Info is just an Algorithm Identifier and the public key. The Algorithm Identifier is an OID and the parameters (ECParameters when using EC keys). It's these parameters that can contain the custom EC domain parameters.

The certificate signature is preceded by another Algorithm Identifier that specifies the signature algorithm (and its parameters), and so it seems that Microsoft was using this value instead of the parameters in the signer certificate's Subject Public Key Info?


AIUI it's a root cert trust issue. You supply your own self-signed root cert, which obviously lets you specify your own parameters and build a fully valid chain of trust. The bug, then, is that the library considers the root cert trusted if its public key hash and serial match those of a cert in the root trust store, even if the curve parameters don't.


I wonder if I should take those opportunities to talk about Formality (https://github.com/moonad/formality).


The important part might be just how hard it is to come up with G'.


As you can see, it's not at all hard.


Yes, and if it's that, it's not even particularly computationally expensive. I hope that's not the crux of the flaw.

edit: seems to be this bug letting attacker specify params https://twitter.com/thracky/status/1217175743316348929


Here is a sage script that implements the described attack. I keep scripts that implement curve parameters separately (so I can load them into other scripts as needed), so this might look a little redundant in places:

    #!/usr/bin/env sage

    nistp256r1_order = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
    nistp256r1_modulus = 2**224 * (2**32 - 1) + 2**192 + 2**96 - 1
    nistp256r1_a = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFC
    nistp256r1_b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

    nistp256r1_field = GF(nistp256r1_modulus)
    nistp256r1 = EllipticCurve(nistp256r1_field, [0,0,0,nistp256r1_a,nistp256r1_b])

    nistp256r1_base_x = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
    nistp256r1_base_y = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5
    nistp256r1_gen = nistp256r1(nistp256r1_base_x, nistp256r1_base_y, 1)

    curve = nistp256r1
    curve_order = nistp256r1_order
    curve_gen = nistp256r1_gen

    CG = Zmod(curve_order)

    ### these are "inputs" to the system. Only pubkey is known to an attacker
    privkey = CG.random_element()
    Q = curve(ZZ(privkey) * curve_gen)

    ### The attacker generates the necessary malicious generator
    
    kprime = CG.random_element()
    kprimeinv = kprime.inverse_of_unit() 

    Gprime = ZZ(kprimeinv) * Q

    ### We can now verify that the attacker knows a private key corresponding
    ### to the public key under their generator

    Qprime = curve(ZZ(kprime) * Gprime)
    print("Q==Q'", Qprime == Q)
    print(Qprime.xy())
    print(Q.xy())

    ### So if the verifier believes Gprime is a correct generator,
    ### this is bad news.
    ### you can implement something relying on it, with Gprime as your generator, here.
This is using real secp256r1, i.e. NIST P-256, parameters. You can try this in sagemath's online tools and see that it works very efficiently.

inverse_of_unit is a nice function that computes the inverse of kprime in the multiplicative group modulo the prime order of the curve. You can do exactly the same thing with the XGCD algorithm from sage.arith.misc. ZZ is required simply for type conversion: sage doesn't understand how to multiply a point by a Zmod element. We help it out by converting to an integer. .xy() is not required; it simply prints affine, instead of projective, coordinates.

I wrote this with sage 9, which is based on python 3. The 0,0,0 are not necessary for the elliptic curve constructor (two parameters = short weierstrass) but I'm in the habit...

As a corollary of Lagrange's theorem, any group of prime order is cyclic, and any element other than the identity generates it; since the group of curve points under elliptic curve point addition has prime order here, any randomly chosen point on the curve (other than the point at infinity) necessarily generates the whole group. However, we need to choose a random secret key to relate the old generator to the new, because solving G' = k'Q for k' is a discrete logarithm problem, and we can't solve those.

P256 is obviously quite difficult for humans to deal with in their minds, so if you want to play with a more human-sized curve, try:

    E=EllipticCurve(GF(17),[3,5])
which is of prime order and has 23 points (and so cofactor 1 like P256).


Thanks for the explanation. What is cryptopals?


None of these links describe how the exploit works.

I found this: https://media.defense.gov/2020/Jan/14/2002234275/-1/-1/0/CSA...

So based on my limited understanding:

1. The certificates have a place for defining curve parameters.

2. The attacker specifies their own parameters so that they match the start of a standard curve, but chooses the rest of the parameters themselves. With the right ECC math, they are able to generate a valid signature for the certificate even though they don't own the private key corresponding to the original curve.

3. The old crypto API -didn't- check that certificates were signed with a fixed set of valid parameters. It would just check for sig validity, allowing for spoofing of the cert.

Interesting stuff. So you might be able to cryptographically prove whether there were any attacks in the wild from this at a given time (if we assume dates are checked, at least)?

I wonder what happens at the Microsoft Security Response Center when a big vuln hits like this? Does it tie up all their resources just working on the one vuln?


Good find. This page should almost certainly be the headline article on HN, at least until someone does a full write-up of the vulnerability --- but the vulnerability here looks very simple (and gross): if you can define your own curve parameters and get CryptoAPI to honor them, you can sign anything.


The only way to do it (I'm lazy, so I didn't read any of the documents; this is an engineer's gut feeling)... is to use ECDH, which provides EC params in ServerKeyExchange. CryptoAPI might have used those and just pulled the public key from the cert.


Ok, we've changed the URL from https://www.kb.cert.org/vuls/id/849224/. Thanks!


> I wonder what happens at the Microsoft Security Response Center when a big vuln hits like this? Does it tie up all their resources just working on the one vuln?

Generally no, if only because of Microsoft’s sheer size.

For something like this issue, while its potential impact is big, I would guess that it only tied up the team(s) that work on CryptoAPI.


And Windows Defender.


So... Wait, they weren't calculating the signature based on ALL the contents of the cert? There was an "unprotected" section in the cert that allowed for curve details? This seems... too obviously bad.


Basically, there are standard curves, and the software assumed the parameters matched them; bad design.

edit: more detail

https://news.ycombinator.com/item?id=22048619


Ah, I see. It wasn’t that they didn’t secure parts of the cert; it’s that they assumed they didn’t need to include some data. The generator was assumed to be standard... and surely no one would ever abuse that intention!


Not exactly, but I'm not sure it matters. It sounds like if the curve parameters are crafted just so, they can dodge validation and use anyone's public key, and yet still negotiate successfully and decrypt everything. The "grandma" explanation is basically that the green lock next to the URL in your browser doesn't mean spit at this particular moment in time.


Sounds similar to the post from a few days ago about the Firefox WebCrypto allowing too much adjustment of DH parameters: https://news.ycombinator.com/item?id=21980199


It is indeed very similar in spirit, and of course much more devastating here.

Another attack, implemented on ECDSA and similar in spirit (though not the same attack), is in Sean Devlin's Set 8 of Cryptopals:

https://toadstyle.org/cryptopals/61.txt


"enable remote code execution." suggests it's worse than that - like the curve parameters might allow a buffer overflow, or something of that nature.

Maybe they just mean because it can sign Authenticode signatures


The pseudonymous but well-connected 'swiftonsecurity' twitter account reports on background that 'RCE' chatter about this particular vulnerability does indeed relate to compromised software update channels. (Not just AuthentiCode, but also MITM on, say, connections to the npm package server.) See https://twitter.com/SwiftOnSecurity/status/12171594348808478...

That said, this same patch set also has a separate pre-auth RCE on Microsoft's Remote Desktop Gateway, which has been documented as CVE-2020-0609 (not ...-0601). See https://www.kb.cert.org/vuls/id/491944/


> connections to the npm package server

Doesn't npm use node.js for this, which uses openssl?

> https://nodejs.org/api/tls.html#tls_tls_ssl

Third-party tools connecting to the npm server that use Windows' TLS library would absolutely be affected, though.


I suspect the answer is "it's complicated". For example, you can specify a package version string as `git://...` and it will grab the package from a git repository. It's possible that this uses a JS-native Git implementation, but it's also possible that it uses a locally-installed Git binary, which could in turn use MSCAPI, especially if it's configured to use an external SSH provider.


That's a good point. I'm not too familiar with the npm code, but it seems that they are indeed using git from the CLI instead of via a library: https://github.com/npm/cli/blob/ba7f1466436cc22e27f8a14dede3...


It's scarier the more you think about it, because digital sigs are the first place you look for most "secure protocols." I think, reading between the lines, there are multiple attack scenarios:

- Fake windows updates

- The notorious SMB protocol -- "The Server Message Block (SMB) protocol provides the basis for file and print sharing and many other networking operations, such as remote Windows administration. To prevent man-in-the-middle attacks that modify SMB packets in transit, the SMB protocol supports the digital signing of SMB packets." Could prob impersonate a Windows server or computer in a home group, IDK.

- Likely fractal attacks on Active Directory that would allow injecting admin accounts on any workstation in a network and enabling remote desktop.

- Fake SSL certs -- also: hey user, here's a [trojan] to fix the latest Windows vuln [fake Microsoft.com]. It's a race to update with the official update, really. If attackers were to DDoS the update service, it would be very, very bad.

- Fake signed trusted programs that security software may "ignore" and that Windows itself would allow to run with fewer warnings. Trusted MS programs could be a very good way to write persistent rootkits.

I'm sure Windows experts can think of more stuff. But for me it's a good lesson in how much we depend on the certificate system for security.


Love the shock.

Take a job as a pentester (or don’t) and you’ll look at your list, nod, and say “Yes. This is normal.”

It’s normal to be broken. That’s why you do pentests on every piece of security infrastructure.

The hypothesis that systems like this ought to be secure is empirically false. I am trying to shake the shock out of you, because your surprise = my surprise before I became a pentester. But the job forces you to come to terms with the fact that everyone, everywhere, is broken, always, and this is neither surprising nor (and you’ll hate this part) a big deal.

Bug is fixed. Life goes on. Yes, of course the infrastructure could have been attacked at any time between “forever ago” and that fix. Ask yourself: why is this surprising to me? And carefully examine the assumptions with which you want to say “because it’s their job to make it secure...”

To be clear, I wish the world were different. But I wish we’d take a hard look at reality and the history of vulnerabilities. Stop thinking things are secure just because the label says “secure”. People devote their entire existence to seeking out and exploiting the tiniest imperfections, sometimes for no reason other than because it’s fun to do so. There is zero chance software would end up impenetrable under those conditions.

Hell, even Tarsnap screwed up once, and Colin is pretty much cryptographically-signed Jesus. So if someone as smart and dedicated as him can make these mistakes, what hope have we? Especially when “we” consists of a large number of programmers working together, and all the complexities that entails?


> There is zero chance software would end up impenetrable under those conditions.

Not when it is so impenetrably complex that there are always hidden errors. The only secure software is simple enough that a single human mind can comprehend it and verify correctness, and as an entire industry we have moved away from that entirely.


The problem is not with digital signatures. The problem is bad ECC certs used to generate digital signatures. So SMB is not affected. Code signed with one of these bad ECC certs is a concern. But considering that people install stuff that's not signed all the time, the primary issue is probably TLS MITM.


I haven't seen it mentioned anywhere yet, but I have to wonder... Does this vulnerability allow MITM of Windows Update itself?

I would expect all connections to the Windows Update servers to be protected with TLS, and as a second layer the updates themselves to be signed, but if this vulnerability allows bypassing both signatures, this could be really bad.


This attack targets the nuts and bolts of how the Windows platform actually implements TLS; a vulnerability in CryptoAPI that allowed you to spoof any ECC certificate would presumably break all of TLS. What might mitigate this in Windows Software Update would be some kind of key pinning that prevented arbitrary certificates from being used.

Later

Dmitri Alperovitch at Crowdstrike says this doesn't impact Windows Update.


It does allow you to modify TLS streams (the WU downloads) and to spoof code signing (checking that a binary can run), but it is unclear if any of the trusted WU validation keys (confirming the update is from a signed manifest) are ECC.

So: maybe.


Only if Microsoft is using ECC for its Windows Update certificates. I would guess not given how many past OS versions Windows Update has to support.


The attack allows faking https certs as well as code signing certs; so it seems plausible that a MitM attacker could trick Windows Update (or other auto-updaters) into executing malicious code.


That this exploit can be used to spoof the Windows Update system is a big yikes. You can’t necessarily trust today’s update itself.


That depends on whether Windows Update is using ECC certificates. A quick scan of my Windows 10 trusted root certificate store shows almost exclusively RSA based certificates, so I’d guess 80% odds that Windows Update itself isn’t affected.


It may still be affected, the system may accept a bad ECC cert and override the RSA cert.


What about user certs? Some Windows systems allow for authenticating with a user cert. Depending on how bad the validation bug is, it seems like spoofing a user certificate could be a valid attack vector.


Depends how you read it. Could just be that any SSL connection to download a binary could be MITM'd and replaced with malware, and that would be "remote code execution" by some definitions.


I read it simply as: a malicious update passes as being from a trusted source due to this vuln, and the malicious update runs its code as part of the update process - not as a buffer overflow or something of that sort.


I know that when I had a Windows machine on my company's network, they could just install stuff willy-nilly. Maybe the attacker could dupe them, pretend they are your admin, and just do what they wish?


>The old crypto API -didn't- check that certificates were signed with a fixed set of valid parameters. It would just check for sig validity, allowing for spoofing of the cert.

This sounds exactly like how PDF signatures were attacked and successfully defeated: https://media.ccc.de/v/36c3-10832-how_to_break_pdfs https://www.youtube.com/watch?v=k8FIDGmmYvs


A lot of what happens at MSRC is auditing for related vulnerabilities - trying to find any place where similar code might make the same mistake. In this case, since roll-your-own cryptography is so strongly discouraged (with good reason), I suspect that there won't be as many places to audit.


So you mean the signature algorithm was not fully covered by the signature? Indeed, that's a recipe for disaster, as the numerous TLS attacks have shown (Logjam, FREAK, DROWN).


From Krebs' tweets:

The NSA's Neuberger said this wasn't the first vulnerability the agency has reported to Microsoft, but it was the first one for which they accepted credit/attribution when MS asked.

Sources say this disclosure from NSA is planned to be the first of many as part of a new initiative at NSA dubbed "Turn a New Leaf," aimed at making more of the agency's vulnerability research available to major software vendors and ultimately to the public.


More like someone with some common sense decided to capitalize on disclosing issues once other countries have the zero-days too. "Oh well, guess we can't use this anymore, Bob; China has been exploiting it over the past week. Call Microsoft, let's at least get some free PR in exchange for having to give this up."


They have probably done that for a while (this is the first public attribution, not the first disclosure); but they are now blowing their own trumpet because they need some good PR. Why?

Snowden.


Much more likely, the bad reaction to EternalBlue.


EternalBlue would not have received that much coverage had it not happened after Snowden proved that the American public cannot trust the agency. They had been dragged to the foreground before without repercussions, because reactions were limited to the IT world. Snowden made it a general-public issue, and now they are forced to shape up.


You write that like it's a bad thing.


You can do the right thing for the wrong reasons.


An alternative angle that could make sense is that it shows that they're not purely intent on hoarding exploits (particularly dangerous ones) and are willing to report them to software vendors in order to reduce everyone's risk profile.

That'd be more of a communal-good, de-escalation approach. There's certainly something to be said for the fact that it displays the talent and expertise available too though (i.e. helping for recruitment).


The tweet* from the call with reporters - a cynical person might think instead that the NSA thought that, with the similarity to the Let's Encrypt and Firefox flaws, it was not much longer before a hostile actor would find this crypt32.dll flaw, so it was time to notify MS.

* https://twitter.com/briankrebs/status/1217125030452256768


Didn't the FBI or NSA push for flawed elliptic curve crypto in the past?

Could be they knew about it for a while and had milked it hard until they caught someone else using it. Or, like the parent said, previously discovered flaws meant that someone might catch this one, too.


It was Dual EC DRBG, a PRNG.


There is no evidence that the US pushed flawed curves.


>There is no evidence that the US pushed flawed curves.

"Reuters reported in December that the NSA had paid RSA $10 million to make a now-discredited cryptography system the default in software used by a wide range of Internet and computer security programs. The system, called Dual Elliptic Curve, was a random number generator, but it had a deliberate flaw - or “back door” - that allowed the NSA to crack the encryption."

https://www.reuters.com/article/us-usa-security-nsa-rsa/excl...


"Dual Elliptic Curve" is an RNG, a PKRNG, that works by using a public key to encrypt its state, which is then directly revealed (as public key ciphertext) to callers (for instance: in the TLS random blob). The problem with PKRNGs has nothing to do with elliptic curves; you could design one with RSA as well. The problem is that for a given public key, there's also a private key, and if you have that private key you can "decrypt" the random value to reveal the RNG's state.

That's not a flawed curve that NSA pushed; it's a much more straightforward cryptographic backdoor.
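
To illustrate the PKRNG mechanism just described, here is a small Python sketch of a Dual_EC-style generator over a deliberately tiny, insecure toy curve. Every constant is invented for illustration, and the sketch leaks whole output points where the real Dual_EC emits truncated x-coordinates (the truncation only adds bookkeeping for the attacker; it doesn't remove the backdoor):

    # Toy PKRNG in the style of Dual_EC over y^2 = x^3 + 2x + 3 (mod 97).
    # Invented values throughout; the addition formulas don't need b.
    p, a = 97, 2

    def add(P, Q):
        # Affine point addition; None plays the role of the point at infinity.
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, P):
        # Double-and-add scalar multiplication.
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    def xcoord(Pt):
        # x-coordinate, with a toy-only convention for the point at infinity.
        return 1 if Pt is None else Pt[0]

    Q = (0, 10)    # published point (10^2 = 100 = 3 = 0^3 + 2*0 + 3 mod 97)
    d = 5          # the designer's secret backdoor scalar
    P = mul(d, Q)  # the other published point, secretly satisfying P = d*Q

    def step(s):
        # One generator step: emit output derived from the state, then update it.
        out = mul(s, Q)                 # what callers see
        return out, xcoord(mul(s, P))   # next internal state

    s1 = 13                             # secret initial state (made up)
    out1, s2 = step(s1)

    # The backdoor holder, from the PUBLIC output alone:
    #   d * out1 = d*s1*Q = s1*P, whose x-coordinate is the next internal state.
    recovered = xcoord(mul(d, out1))
    assert recovered == s2              # internal state recovered
    out2, _ = step(s2)
    assert mul(recovered, Q) == out2    # so all future output is predictable

Anyone without d has to solve a discrete log to get from out1 back to the state, which is exactly the NOBUS property described below.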


"random number generator"


>a new initiative at NSA dubbed "Turn a New Leaf,"

More like "do the actual job they are paid to do"


They are paid to collect intelligence for the benefit of the American people, not American companies. Luckily Citizens United hasn't stretched that far.


Their mission also explicitly includes information assurance:

Mission Statement: The National Security Agency/Central Security Service (NSA/CSS) leads the U.S. Government in cryptology that encompasses both signals intelligence (SIGINT) and information assurance (now referred to as cybersecurity) products and services, and enables computer network operations (CNO) in order to gain a decision advantage for the Nation and our allies under all circumstances.


They've got to balance both roles.

IIRC, in earlier times the government didn't use as much COTS stuff, and civilian computer systems weren't so critical, so the roles were easier to separate. The NSA developed whole series of secret encryption algorithms for the exclusive use of the government/military, and civilian algorithms weren't approved to secure classified communications.

https://en.wikipedia.org/wiki/NSA_cryptography


I always wondered why Barr, Comey, and basically every AG I've paid attention to consistently want to break encryption for the populace.

I guess it makes sense that proponents of those changes would be OK with breaking it for the proles if they thought their own secrets were protected.


You don't see how a lack of critical vulnerabilities in software infrastructure is of benefit to citizens?


No, I don't see how this is part of foreign intelligence/surveillance/espionage work. It is good that these vulnerabilities are fixed, of course. But shouldn't that be a separate, at least partially independent branch of the NSA? Otherwise you get a large conflict of interest.


Their job is to collect signals intelligence and execute cyber warfare operations. Not whatever you think it is.


Their job is more than that.

"The National Security Agency/Central Security Service (NSA/CSS) leads the U.S. Government in cryptology that encompasses both signals intelligence (SIGINT) and information assurance (now referred to as cybersecurity) products and services, and enables computer network operations (CNO) in order to gain a decision advantage for the Nation and our allies under all circumstances."

[1] https://www.nsa.gov/about/mission-values/


So...SIGINT and CNO. Exactly as I stated.


Security assurance isn’t necessarily cyber warfare. To have the high ground is not the same as using it offensively, hence the expectation of defensive posture as part of the NSA’s mission (although admittedly some offensive activities are to be expected, depending on the situation, such as Stuxnet and Iran).


Not sure if you’re just being snarky, but the NSA’s stated mission includes helping with cyber security: https://www.nsa.gov/about/mission-values/


It also involves breaking enemy cyber security (signals intelligence).

It's actually a rather fascinating incongruity, since we live in a world where "the enemy" is more likely than not to be using the same software systems that the NSA themselves are, and that therefore any exploitable flaws they find in enemy systems are pretty likely to be just as exploitable in their own. (And that similarly, disclosing the flaw in order to fix the issue in their own systems is very likely to result in "the enemy" fixing the flaw as well.)

A couple years ago the White House released a document explaining the process they use for deciding what vulnerabilities they keep secret: https://www.cnet.com/news/white-house-trump-administration-h... noting that "In the vast majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest". Though from what we've seen in past leaks, it's pretty obvious they don't reach that conclusion for all vulnerabilities they find.


And what do you think the end state of all that cybersecurity research is?


NSA has long had an explicit offensive and defensive mandate. They even recently created a cyber defense directorate:

https://www.washingtonpost.com/national-security/nsa-launche...


NSA has both attack and defense mandates and organizations. Currently, the attack org has priority, but it's not like the defense org does nothing. So if the attack org doesn't want a vuln, they can let the defense org reveal it for PR points.


I like NSA being more active, but the concept of trusting NSA on crypto is just never gonna happen. Their core mandate is being able to break it, so the whole concept is a non-starter.


This kind of logic is attractive on message boards but makes little sense in the real world.

What NSA needs are NOBUS ("nobody but us") backdoors. Dual_EC is a NOBUS backdoor because it relies on public key encryption, using a key that presumably only NSA possesses. Any of NSA's adversaries, in Russia or Israel or China or France, would have to fundamentally break ECDLP crypto to exploit the Dual_EC backdoor themselves.

Weak curves are not NOBUS backdoors. The "secret" is a scientific discovery, and every industrialized country has the resources needed to fund new cryptographic discoveries (and, of course, the more widely used a piece of weak cryptography is, the more likely it is that people will discover its weaknesses). This is why Menezes and Koblitz ruled out secret weaknesses in the NIST P-curves, despite the fact that their generation relies on a random number that we have to trust NSA about being truly random: if there was a vulnerability in specific curves NSA could roll the dice to generate, it would be prevalent enough to have been discovered by now.

Clearly, no implementation flaw in Windows could qualify as a NOBUS backdoor; many thousands of people can read the underlying code in Ghidra or IDA and find the bug, once they're motivated to look for it.


I mean, the 0-days in the Shadow Brokers dumps wouldn't count as "NOBUS" backdoors either, but the NSA was sitting on those like a dragon hoarding gold.


Those aren't vulnerabilities NSA created, unlike Dual_EC, which is.


Neither is this crypt32 vulnerability, which is part of the analogy the parent comment is making.


NSA disclosed this CryptoAPI vulnerability. What's the lesson to draw from that?


My point is that the structural "NOBUS" framework the parent was trying to construct has glaring, recent counterexamples, and can't really be used to holistically describe their behavior over the past couple decades.

Of course I applaud responsible disclosure, and if they continue down that direction they have the possibility of rebuilding some of the trust they've broken in modern times.


You've lost me. What are the glaring counterexamples to NOBUS? The NOBUS framework says that NSA introduces vulnerabilities and backdoors only when it has some assurance that only NSA will be able to exploit them. It doesn't follow that NSA would immediately disclose any vulnerabilities they discover.


...the parent is literally talking about it in the context of today's crypt32 vulnerability and using that as an example of their cohesive NOBUS framework.

> Clearly, no implementation flaw in Windows could qualify as a NOBUS backdoor; many thousands of people can read the underlying code in Ghidra or IDA and find the bug, once they're motivated to look for it.

The counterexamples are the hoard of critical 0-days they've been sitting on, some of which have led to a body count of Five Eyes citizens.

Like I said, disclosing is a step in the right direction, but they don't get a cookie for the first major disclosure in decades.


I don't think anyone should give NSA a cookie. I think it's useful to be able to reason through where NSA is (relatively) trustworthy and where they aren't.


Right, but in the absence of everyone using their NOBUS-backdoored software presumably the next best thing would be to hoard zero days and hope they can work as pseudo-NOBUSes.


That's certainly true; NSA is chartered to exploit vulnerabilities and certainly hoards them. But that doesn't address the question of whether you should trust NSA "on crypto". Here, they're the ones disclosing the crypto flaw; there's no need to "trust" them, because they're clearly right (Saleem Rashid worked out a POC for this on Slack in something like 45 minutes today).

Should you trust them about Dual_EC? Obviously not: the sketchiness of Dual_EC has been clear since its publication (the only reason people doubted it was a backdoor was that it was too obviously a backdoor; I gave them way too much credit here).

Should you trust them about the NIST P-curves? That depends on who you ask, but the NOBUS analysis is very helpful here: you have to come up with a hypothetical attack that NSA can exploit but that nobody else can discover, otherwise NSA is making China and Russia's job easier for them. Whatever else you think about NSA, the idea that they're sanguine about China is an extraordinary claim.


> Sources say this disclosure from NSA is planned to be the first of many as part of a new initiative at NSA dubbed "Turn a New Leaf," aimed at making more of the agency's vulnerability research available to major software vendors and ultimately to the public.

Sounds like "we find so many critical bugs... we don't need all of them to achieve our goals, so let's blow some of them for PR"


I think it's more like, "We find so many critical bugs, let's blow some of them for PR once we discover that adversaries are using them too."


Bull.... A more likely scenario is they've been sitting on this for years and finally saw another actor using it in the wild.


So... exactly what I said?


Interesting comment on reddit:

> Within the federal space, we've been making unprecedented plans for patching systems as soon as this patch is released today. In my agency we're going to be aggressively quarantining and blocking unpatched systems beginning tomorrow. This patch has been the subject of many classified briefings within government agencies and military.

https://old.reddit.com/r/sysadmin/comments/eoll74/all_hands_...


The Department of Homeland Security issued an emergency directive today for federal agencies to patch their systems within 10 business days:

https://cyber.dhs.gov/ed/20-02/


And that's for civilian systems. The only other time I can recall this happening was with the DNS vuln.


> https://twitter.com/randomoracle/status/1217198437281804290

Some speculation on CVE-2020-0601.

Earlier versions of the Windows cryptography API only supported a handful of elliptic curves from NIST Suite B. It could not handle, say, an arbitrary prime curve in Weierstrass form with user-defined parameters.

While it could not grok arbitrary curves, the Windows API made an attempt to recognize when a curve with explicit user-defined parameters was in fact identical to a "built-in" curve that is supported.

It appears that mapping was "lazy": it failed to check that all curve parameters are identical to those of the known curve.

In particular, switching the generator point results in a different curve in which an attacker can forge signatures that match a victim's public key.

> https://twitter.com/esizkur/status/1217176214047219713

It looks like this may be a caching issue: there's a CCertObjectCache class in crypt32.dll. In the latest release, its member function FindKnownStoreFlags (called from its constructor) started checking the public key and parameters.

> https://twitter.com/thracky/status/1217175743316348929

ChainComparePublicKeyParametersAndBytes used to just be a memcmp before the patch. Same with any calls to IsRootEntryMatch. Both are new functions.
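
To make the generator swap described in these tweets concrete, here is a minimal Python sketch over a deliberately tiny, insecure toy curve (every constant is invented for illustration). It demonstrates only the core algebra: given a victim public key Q = x*G, an attacker who controls the generator can pick a key x' they know, publish G' = (1/x')*Q, and then x' is a valid private key for Q relative to G'.

    import math, random

    # Toy short-Weierstrass curve y^2 = x^3 + ax + b (mod p).
    # All values are tiny and made up for illustration -- NOT a real curve.
    p, a, b = 97, 2, 3
    G = (0, 10)     # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)

    def add(P, Q):
        # Affine point addition; None plays the role of the point at infinity.
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, P):
        # Double-and-add scalar multiplication.
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    # Order n of G, by brute force (fine at this toy size).
    n, P = 1, G
    while P is not None:
        P, n = add(P, G), n + 1

    x = random.randrange(1, n)   # victim's private key; the attacker never learns it
    Q = mul(x, G)                # victim's public key, taken from the certificate

    # Attacker picks a private key they DO know and derives a rogue generator.
    while True:
        x_evil = random.randrange(2, n)
        if math.gcd(x_evil, n) == 1:
            break
    G_evil = mul(pow(x_evil, -1, n), Q)   # G' = (1/x') * Q, hence x' * G' = Q

    assert mul(x_evil, G_evil) == Q       # attacker now "owns" Q under generator G'

If a validator accepts explicit parameters and only checks that the curve equation matches the built-in curve, nothing stops G' from being smuggled in alongside Q, which is exactly the lazy matching the tweets describe.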


The actual advisory from Microsoft (CVE-2020-0601):

https://portal.msrc.microsoft.com/en-US/security-guidance/ad...

> A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.


So Win7 isn't affected? At this point in time I have to point out that a fully patched Win7, with ~8 hours of support life left, just happens to be more secure than Win10 when it comes to trusting certs.


It's called maturity... code that isn't radically changed or added to will asymptotically approach being completely bug-free as all the bugs get gradually discovered and fixed over time. This also implies that the majority of bugs are found in the newest code.


The advisory from Microsoft is quite bizarre. It focuses on code signature validation, rather than X.509 as a whole. It also doesn't say anything about how the vulnerability itself works. Vague advisories like this are dangerous, because they give adversaries an advantage over IT departments that don't know which systems they should patch first. It would be much better if everyone understood exactly what the impact is from the get-go. The NSA advisory is a bit better, but still doesn't tell us how exactly the ECC certificate validation bug works. We're left with only a few hints.


X.509 as a whole is fine, and this isn't so much arbitrary MITM of any web server. It's specific to ECC public keys (not X.509 certs specifically) that get validated through CryptoAPI, which is a fairly limited but devastating scope, e.g. code signing.

Firefox uses its own NSS libraries, not CryptoAPI, to verify certs and is completely unaffected. I assume every major browser uses NSS or its own APIs as well. And of course RSA and AES certificates remain unaffected.


TLS supports ECC certificates, so any web client using crypt32 to verify those is affected. That includes web browsers and lots of other types of services, so it's not primarily code signed executables.

Does Firefox still use NSS when using the Windows Certificate Store for the source of trusted root certs? What about Chrome?

You're right that RSA certificates are unaffected. There's no such thing as AES certificates, though.


> Does Firefox still use NSS when using the Windows Certificate Store for the source of trusted root certs?

Yes. When enabled, this feature in Firefox effectively just copies certificates from one of the Windows trust stores but continues to use its own (NSS) logic for trust decisions. Note also that Firefox's config switch only looks at your local changes: a corporate CA, a MITM proxy on a dev's workstation, something like that. Firefox continues to rely on Mozilla's judgement, not Microsoft's, for global trust policy.

> What about Chrome?

Chrome is probably affected. Chrome uses the platform (in this case crypt32.dll) trust decisions and then layers on additional rules from Google, such as the requirement for proof of CT logging. So unless an additional rule is blocking the weird curves, they'll pass on Chrome on Windows.


A CERT person on Twitter was explicit about this impacting all of X.509.


Guess this is what Krebs was referring to yesterday: https://krebsonsecurity.com/2020/01/cryptic-rumblings-ahead-...

And the discussion on HN: https://news.ycombinator.com/item?id=22039481


Anyone got any news on Windows 7, seeing as it still has roughly 25% market share according to StatCounter?


Does Windows 7 crypt32.dll support ECC?


but only shortname curves


Does that mean it's unaffected?


I would expect everything not patched today to be unaffected for such a reason, yes.


But Win7 was patched this Tuesday...


That was the last roll-up, not this security update.


This is yet another illustration of why complexity is evil in cryptographic and security-critical code. It's evil everywhere, but it's particularly evil there. The relationship between bugs and complexity is exponential, not linear.

X.509 is an over-engineered legacy-cruft-encrusted nightmare. I've implemented stuff that uses it and I never, even after the most careful auditing by myself and peers, leave with the sense that I have handled everything correctly or that my code is totally air-tight.


The bug is being publicly described as specific to the implementation of a particular class of cryptographic primitives (ECC). If that's accurate, simplifying the certificate data format (unnecessarily messy though it may be) wouldn't do much to mitigate this particular issue.


To the extent it's X.509 allowing curve parameters to be specified alongside signatures and public keys, this is indeed a case where all the extra joinery in X.509 is creating exploitable complexity, and the point 'api is making is well taken.
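
If the speculation upthread is right that the matcher was "lazy", the fix is conceptually tiny: an explicitly specified curve must match the built-in one on every domain parameter, generator included. A hedged Python sketch of that difference follows; the structure and field names are invented for illustration and are not Microsoft's actual internals.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ECParams:
        # The full set of EC domain parameters an X.509 explicit-curve
        # encoding can carry.
        p: int      # field prime
        a: int      # curve coefficient a
        b: int      # curve coefficient b
        G: tuple    # generator (base point) -- the field a lazy matcher ignores
        n: int      # order of G
        h: int      # cofactor

    def matches_builtin_buggy(supplied, builtin):
        # Hypothetical reconstruction of "lazy" matching: compare the curve
        # equation, ignore the generator (and the rest).
        return (supplied.p, supplied.a, supplied.b) == (builtin.p, builtin.a, builtin.b)

    def matches_builtin_fixed(supplied, builtin):
        # Correct matching: every domain parameter must be identical.
        return supplied == builtin

    # Entirely made-up toy values; only the comparison logic matters here.
    builtin = ECParams(p=97, a=2, b=3, G=(3, 6), n=89, h=1)
    rogue = ECParams(p=97, a=2, b=3, G=(80, 10), n=89, h=1)  # generator swapped

    assert matches_builtin_buggy(rogue, builtin)        # accepted: exploitable
    assert not matches_builtin_fixed(rogue, builtin)    # rejected after the patch

The simpler design, of course, is to never accept explicit parameters at all and only allow named-curve OIDs, which removes this entire class of comparison bugs.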


It's been reported that Windows Defender can detect and report on malicious certificates:

https://twitter.com/AmitaiTechie/status/1217156973268893696

Of course, that relies on not having Defender disabled by an alternate product.


On Windows Server 2016 and newer it stays enabled even with an “alternate product.”

https://docs.microsoft.com/en-us/windows/security/threat-pro...

> In Windows Server 2016, Windows Defender AV will not disable itself if you are running another antivirus product.


True, but imagine you deployed SEP in accordance with the supplier's instructions:

https://support.symantec.com/us/en/article.tech237177.html

Or Mcafee:

https://kc.mcafee.com/corporate/index?page=content&id=KB8245... (search for DisableRealtimeMonitoring)

For a deeper dive: I ran into issues on a security assessment where trying to run procdump on lsass was blocked by Defender. The workaround was to find a machine with McAfee installed where that behavior was allowed.


Following a couple of twitter threads led me to this PDF: https://media.defense.gov/2020/Jan/14/2002234275/-1/-1/0/CSA...

(the tweet where I found it at https://mobile.twitter.com/NSAGov/status/1217152211056238593 has an image version of that PDF, in case you don't trust that domain)


I suspect you're overcomplicating the attack with all the math and we can ignore most of it.

The only way the attacker can talk to the MS CryptoAPI here is via the TLS protocol, and you can only do it where it's relevant. The only option for that is to use ECDH, which allows the server to supply EC parameters for the Diffie-Hellman exchange.

My bet is that the problem is that the MS CryptoAPI took those parameters as correct without checking them against what's in the certificate. I.e.,

ServerKeyExchange: here's the EC spec, we just need the public key.
Certificate: ah, here's the public key; we have the ECParams, let's run the math.

:)


For those who need proof their machine is updated: the article numbers listed here are the KB numbers you should match in the Windows 10 update list.

https://portal.msrc.microsoft.com/en-US/security-guidance/ad...


I think the scariest thing about this is that if this was a PR stunt, the release of an unknown vuln could be completely controlled by whoever knew about it. The best-case scenario is a relationship between Microsoft and the Five Eyes. It could have just as easily been China, an independent group, or whatever. It's even possible that the top of Microsoft and/or the NSA might not even know. But if it wasn't planned, no one would admit it anyway.


Oh great. Mozilla just added an option last week for enterprises to enable trusting system certificates on Windows. See the Firefox 72.0 release notes: https://www.mozilla.org/en-US/firefox/72.0/releasenotes/


Note that, per the release notes, the new option just allows Firefox to read certificates from the system store. Validation is still done by Firefox, i.e. the NSS crypto lib, not crypt32.dll. So even if the option is enabled, Firefox is not affected by the vulnerability (except for the code-signing check of the Firefox binary itself by the OS).


Interesting. Makes you wonder how many exploits the NSA purposely doesn't mention to the vendor for their own benefit.


I'm assuming there will never be an official proof of concept release, so how long do you all think it will be before we see widely available exploit code and fake certificates out in the wild?


tptacek has said elsewhere in this thread:

> Saleem Rashid worked out a POC for this on Slack in something like 45 minutes today

So I would be amazed if there were not some malicious certificates out there already.


Could someone clarify: does this allow the creation of fake certificates that are accepted as authentic by any crypto library?

Or rather, does it treat such faked certificates as authentic itself?


> Could someone clarify: does this allow the creation of fake certificates that are accepted as authentic by any crypto library?

No, only the Windows native one. For instance, Firefox (which uses NSS) would be safe.


Although, if one point can be exploited to gain access to one area, and you can then privilege-escalate or exploit further from that vantage point, a lot is at stake.


At least this only affects Windows 10 (as far as I can tell)


Windows 7 reached EOL today, so they may leave it as-is if it is affected.


According to https://portal.msrc.microsoft.com/en-US/security-guidance/ad..., it doesn't look like Windows 8.1 received a patch either, and that's still in support. Maybe only Windows 10 has ECC support, and therefore any previous versions are not affected?

Also, according to https://support.microsoft.com/help/4534310, it looks like Windows 7 got security patches for this month.


Indeed, got like 3 patches for Win7 this Tuesday.


>This vulnerability affects all machines running 32- or 64-bit Windows 10 operating systems, including Windows Server versions 2016 and 2019

https://www.us-cert.gov/ncas/alerts/aa20-014a


Windows 8.1 is still supported and does not have a patch either, so it looks like maybe it does actually affect 10 only.


Other CVEs include updates for Windows 7 and Windows 2008, for example CVE-2020-0608 | Win32k Information Disclosure Vulnerability: https://portal.msrc.microsoft.com/en-US/security-guidance/ad...


When will they learn that the only way to respond to a hostile government is to overthrow it?


Do any browsers use CryptoAPI for TLS certificate validation?


I think classic IE would do so.


So does Chrome, if I'm not mistaken


Firefox might with an Enterprise flag on?


That can only make it check the system certificate store for trusted roots etc.; it'd still use NSS for the crypto operations, is my understanding.


Globalist conspiracy. *X-Files intro music*


Some things never change...


Nothing screams "we have Microsoft's keys!" harder than the fact that the only vulnerability reported by the NSA is a cryptographic validation bug. If I had to guess exactly what kind of vulnerability they do not need, this is exactly that kind. Who needs a crypto validation bug when you already own Microsoft's keys?!


I think you're spot on. Everything agencies at this level do is calculated and weighed carefully. They definitely would not seek to patch a useful vuln. It is a PR stunt.


The NSA's job is to gather information. They have >400mil people to protect, and 6bn people to attack. They are in the business of using exploits, not closing them.


It's patched because of the risk it poses to the government from other state actors.


Thinking with my tin-foil hat on: same date as Windows 7's last patch, right? Not sure if this was a risk decision or an intentional message.


They likely have control over the Intel IME backdoor too. And maybe even the AMD (Ryzen) equivalent.



