
I hope that the actual vulnerability is far more complicated. If we can't even get crypto libraries right (where you'd hope most of the formal verification folks are), then there's not much hope of security for the rest of the industry.

I'm normally not much of a pessimist, but things like this really make me wish we could just burn all the things and start over.




> just burn all the things and start over

Wrap all of this with an "IN MY OPINION"...

That would make things worse, because we'd make the same mistakes again. I've been on many start-over projects (Xeon Phi, for example, threw out the P4 pipeline and went back to the Pentium U/V pipes). It doesn't work. You know what the most robust project I've worked on is? The instruction decoder on Intel CPUs. The validation tests go back to the late 1980s!

You make progress by building on top and fixing your mistakes because there literally IS NO OTHER WAY.

Go read about hemoglobin: it's one of the oldest genes in the genome, used by everything that uses blood to transport oxygen, and it is a GIGANTIC gene, full of redundancies. Essentially, a billion years of evolution accreted one of the most robust and useful genes in our body, and there's nothing elegant about it.

I think that's where we are headed: large systems that bulge with heft but contain so much redundant checking code that they become statistically more robust.


It really pisses me off when devs start ripping out asserts and tests that fail "but have been doing so for a while, so clearly they aren't needed."

How about... no. Review the code and determine whether the undefined behavior is understood well enough that we can accept the bad inputs. Those asserts & tests were created for a reason and need to be maintained, not just removed because no one can spot the subtle failure modes.


Yes, though fragile tests can be a sign that simplification might benefit your system.

Sometimes it's useful to just turn flaky failures into solid failures.
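For instance (a minimal sketch; the seeded test is hypothetical), pinning the source of nondeterminism turns an intermittent failure into one that reproduces on every run:

  import random

  def test_shuffle_preserves_elements():
      # Fixed seed: if this ever fails, it fails identically every run,
      # instead of once in fifty CI builds.
      rng = random.Random(1234)
      data = list(range(100))
      shuffled = data[:]
      rng.shuffle(shuffled)
      assert sorted(shuffled) == data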


> You make progress by building on top and fixing your mistakes because there literally IS NO OTHER WAY.

As long as you are talking about knowledge, not artefacts. There is indeed no choice but to accrete, organise, and correct knowledge over time, because anything you forget is something you might get wrong all over again.

Artefacts are different. It often makes sense to rebuild some of them from scratch, using more recent knowledge. We rarely do that, because short-term considerations usually win out (case in point: Qwerty).

> I think that's where we are headed: large systems that bulge with heft but contain so much redundant checking code that they become statistically more robust.

Only if we give up any hope of improving performance, energy consumption, or die area. Right now the biggest gains can be found by removing cruft. See Vulkan for instance.


Vulkan is a good example indeed.

Most studios end up putting middleware on top of it to reduce Vulkan boilerplate to a more manageable level, which ironically makes some Vulkan code bases run slower than OpenGL AZDO, due to misunderstandings of how to do the low-level work properly.


Vulkan is both a good and bad example.

Vulkan tries to remove the "bloat" of the driver by moving it into the engine (or the middleware the engine uses), which, yes, reimplements a pretty sizable chunk of what the driver used to do.

But it exposes the API in a way that requires domain-specific knowledge of how modern GPUs work, which requires, frankly, smarter engine developers. They need to stop only thinking in the ways OGL/D3D taught them to think, and need to also think like a driver developer, or possibly even a compiler developer.

OpenGL was written wrong because no one knew what modern GPUs would eventually look like, and it tried to solve the problem at the wrong layer; fixed-function hardware worked pretty much the way OpenGL worked in the beginning, and nobody realized GPUs would eventually become, basically, absolutely massive, highly parallel, DSP-esque math coprocessors more complex than the systems that host them. OpenGL became a mess because they kept bolting newer styles of hardware onto it (VBOs/IBOs/VAOs, the eventual move to unified buffers, compute shaders, fixed-function geometry then non-fixed-function geometry shaders, ubershaders, and the move from direct to deferred back to direct, etc.).


> They need to stop only thinking in the ways OGL/D3D taught them to think, and need to also think like a driver developer, or possibly even a compiler developer.

Which is exactly the opposite of how anyone who just wants to draw something wants to think.

Also, Vulkan is on its merry path to having an endless list of extensions, so it will eventually match OpenGL's complexity in deciding which code paths to take, with the required cleverness of having to be a graphics programmer, driver developer, and compiler developer at the same time.

No wonder anyone who wants to stay sane picks up a middleware engine instead.


Vulkan was never meant to be directly used by people who “just want to draw stuff”. It was meant to give engine developers the tools required to squeeze more performance out of the GPUs.

This is a case of working as designed. The people trying to directly use Vulkan in their games without any middleware layer are just generally wrong here.

Regarding extensions, this is just what happens when you specify something that keeps evolving - there’s no getting around this. What you can do to minimise the complexity is decide to require certain extensions once they’ve been around for long enough. That’s what everyone does.


It would help if Khronos would promote an API for people who “just want to draw stuff”, given that OpenGL 5.0 will most likely never happen and those people don't want to be stuck with OpenGL 4.6 forever.

That is not what everyone does, because Vulkan gets new extension updates almost every week.


> It would help if Khronos would promote an API for people who “just want to draw stuff”

Would it?

What you really want is a stable API to talk to the hardware. It doesn't really matter if the API is nigh unusable, as long as it is stable. Only when such stability is reached can we reliably build stuff on top of it. Including a "just draw stuff" engine.

We could for instance re-implement Flash on top of Vulkan. Such a thing wouldn't need a standard body to get done and be usable by a wide range of people. (Though in this particular case, we'd likely have some standard body involved anyway, since there's already so much Flash code out there.)


So where do OpenGL developers move to, when 4.6 moves into "this legacy thing we would like to drop"?


You implement OpenGL as a middleware that speaks Vulkan.

Basically, ANGLE it, but for desktop OpenGL instead of GLES.

As an aside, Google is adding a Vulkan backend for ANGLE, and Microsoft seems to be adding a DX12 (which is basically DXVulkan) backend (to match the DX11 backend they gave Google) at some point in the future.

So, GLES, in its entirety, is now a community supported, open source, Vulkan middleware. No reason why we can't do that with desktop GL too.


So 10 years from now, those who don't want to be a mix of graphics developer, driver author, and compiler designer will have to keep using a frozen API from 2018, without any access to whatever has changed on their computers during the next decade.

All because providing something like MetalKit or DirectXTK is too much to ask of Khronos and LunarG.


But should it change? There are issues where updating OpenGL support in drivers broke earlier apps due to accidental changes in how existing features were implemented.

Vulkan and DX12 are far less likely to break existing apps in the future due to far fewer core features.

It makes no sense to have what is essentially an entire legacy middleware in the driver when it no longer represents modern hardware.

Unlike GLES, OpenGL basically can never deprecate features, and D3D9 support will never, truly, die. It's a lot easier to just package a universal shim into existing legacy apps than it is to keep mangling drivers over the issue.


> It makes no sense to have what is essentially an entire legacy middleware in the driver when it no longer represents modern hardware.

Exactly, but it is the only API that Khronos is offering for those that don't want to be Vulkan experts.

Which leaves middleware engines, something totally unrelated to Khronos APIs, as the only future proof path for accessing GPU features on modern hardware.

As for drivers breaking down: the main reason Vulkan is now compulsory on Android as of version 10 is that, while it was optional, the few OEMs that bothered to support it did a very lousy job, so Google hopes that by making it compulsory and part of the Android CTS, the situation will improve.


> Vulkan and DX12 are far less likely to break existing apps in the future due to far fewer core features.

This is questionable. Vulkan by definition has basically no error checking in the driver, and while developers are supposed to use the validation layer, they may not do so; and even if they do, there are certainly plenty of incorrect things an application can do that won't be caught by validation.

Incorrect programs may still happen to run correctly on existing drivers, but then fail with a driver update that happens to change the undefined behavior.


C compiler writers answered that conundrum a long time ago: "if you (even accidentally) rely on undefined behaviour, the warranty is void".

I don't necessarily agree, but if we have a way to avoid undefined behaviour (and at least in C there are ways to make pretty thorough checks), then it works in practice.


The checks that, according to most surveys and security reports, are used by only a tiny part of the C community?

If it doesn't work for C regarding mainstream adoption, how come it will work for Vulkan?


It won't, unless only a fairly small elite ends up using Vulkan. And I believe that's what will happen indeed: Vulkan is low level enough that most likely, only engine devs and middleware devs will touch it.

You will of course have the occasional cowboy (which I personally am, though in a different domain), but that shouldn't matter that much in the grand scheme of things.

Now if you ask me, Vulkan is not enough. What we really want is a stable, usable hardware interface. Basically an ISA. The thing will have close to zero bugs, because hardware folks know how to properly test their designs. Undefined behaviour is likely unavoidable, but I believe it can be reduced to a reasonable minimum.

If AMD and NVidia started something like RISC-V, except for graphics cards, it would likely have a greater impact than RISC-V itself.


Perhaps WireGuard as a low-cruft replacement for OpenVPN is a good illustration of this.



> That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

with the proper code comments for the hair (in the code or in the source repo) and regression tests, it becomes possible to clean up and even rewrite.

floppy disks are no longer a thing and neither is windows 95. RAM & swap space are abundant, so that OOM may never happen any more IRL.

the app's whole architecture may have been carefully chosen for the hardware and simplistic compilers of another age.

is it okay never to rewrite a webpage where 50% of the codebase is IE6 hacks?

if it is code that must be maintained, then at some point the hair may need to be shaved; it simply cannot grow forever as the world moves on around it. the dogma of "never rewrite" is silly without further context.


>if it is code that must be maintained, then at some point the hair may need to be shaved; it simply cannot grow forever as the world moves on around it. the dogma of "never rewrite" is silly without further context.

"Shaving" seems more close to refactoring & partial rewrite than starting from scratch. Which is what we are talking about.

Still, you make good points. Maybe we should think of "restarting from scratch" the same way we do premature optimization: (1. Don't do it. 2. Don't do it too early. 3. If you must do it, measure first.)

I think every developer dreams of rewriting from scratch because they hate how hacky and ugly their code is, probably due to rushed deadlines and because it was just supposed to be an MVP. And they think about throwing it away and starting clean and doing it the 'proper way'. This, IMO, is the wrong reason to start from scratch.

But if your technical debt is genuinely preventing your product from going where it needs to go or doing its job, that is the right reason. Again, you have to weigh whether the technical debt costs more than rewriting from scratch, re-opening old bugs and introducing new ones, and breaking your customers' workflow (they also invested a lot into your old program). How many times have you liked a program until "the damn devs went and ruined a good thing" by "fixing what wasn't broken"? Again, customers don't see or care about the code or technical debt. They just need to get their work done.


> April 6, 2000

Wow. That goes waaaaaaay back. It’s crazy to think that 1980 was to him at the time as 2000 is to us _right now_.


This is nothing new. Our parents all saw this already 50 years ago wherever they were working at the time.


Thank you for this, it's exactly what I'm getting at.


This stuff often looks simpler and less subtle in hindsight.

Modern cryptography has gotten better at taking human (developer) error into account when designing secure APIs, but the fact of the matter is that the math is subtle, and cryptography in general is subtle in the places where it collides with reality, so burning everything down and starting over is likely to just make us rediscover issues we already know about.

Say we stop using X509 certificates. We will continue to use signing to cryptographically bind claims, and we will still use PKI-like structures to attest trustworthiness, so we are still vulnerable to the same approaches of attacks even if we don’t use certificates.

We have actually learned a few things, such as that cryptographic agility seems to cause more issues than it solves, so the field is improving in important ways. But if it's not a key-matching weakness, it's going to be some Unicode encoding BS, or some other critical but unvalidated data somewhere else, or... (because this is a real, in-the-wild problem) using phone numbers to generate cryptographic keys for coin wallets.


This is true, but the bug here is not subtle. It would have been shocking to discover an ECC implementation that let attackers specify curve parameters even 10-15 years ago. When we blogged the e=3 debacle back in '07 or whatever, we linked to a Kaliski presentation from 1999 that called out validating curve parameters.

At least with the ACME vulnerability, there was a novel service model involved. Here, we're talking about certificates that allow you to embed what is in effect your own cryptographic algorithm to interpret the certificate under!

This is a rare instance where I'm happy to concede that closed source allowed a terrible bug to survive far longer than it would have if nobody needed to break out a copy of Ghidra to read the code that validated elliptic curve signatures.


Very interesting take. Good description of Pornin's thoughts.

By the way, when you say closed source allowed... do other libs in the open-source space check the curve params?


Most people implementing ECC signatures are going to end up only handling a chosen group of named curves, whereupon there aren't any curve parameters to check.

For example in Signal IIRC they use Curve25519. So there isn't any code about parsing or checking parameters, it's just here's some code that implements Curve25519.

NSS (the library used in Firefox on every platform) only accepts NIST's P-256 and P-384 named curves. It is parsing an ASN.1 data structure which can have parameters for curves, but if parameters are present instead of a name NSS's code gives up immediately because it's only looking for those specific names. (These aren't actually human readable names, they are OIDs such as 1.2.840.10045.3.1.7 for curve P-256)
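A rough sketch of that shape (not NSS's actual code; the decoded-parameters object is hypothetical):

  ALLOWED_CURVE_OIDS = {
      "1.2.840.10045.3.1.7",  # NIST P-256 (secp256r1)
      "1.3.132.0.34",         # NIST P-384 (secp384r1)
  }

  def check_curve(ec_params):
      # ec_params: a decoded ASN.1 ECParameters value (hypothetical type).
      # Explicit parameters are rejected outright, so there is nothing
      # attacker-controlled left to validate.
      if ec_params.kind != "namedCurve":
          raise ValueError("explicit curve parameters are not supported")
      if ec_params.oid not in ALLOWED_CURVE_OIDS:
          raise ValueError("unsupported named curve")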


> cryptographic agility seems to cause more issues than it solves,

Is that so? The ability to shift to new hash functions and ciphers within the bounds of a single protocol seems to have accelerated the adoption of better primitives.


We don't need the ability to switch to new algorithms as much as we need the ability to ditch old ones. Agility in cryptography only needs to mean the ability to deprecate what's broken. We're still going to see newer and more robust algorithms implemented in new software and protocol versions anyway.

What we need is 1 or 2 strong cipher suites and exactly zero weak ones, not 10 strong ones and 5 weak ones.


How is that not cryptographic agility? Which cipher suites to support is a question separate from whether the cryptography should be runtime configurable at all.


One should version whole protocols instead of adding option negotiation for things like cipher suite.

So say:

  TLS 1.4 = “NIST version”: only supports ECDHE(P-256)+AES-256-GCM+SHA256
  TLS 1.5 = “Bernstein Version”: only supports ECDHE(X25519)+ChaCha20-Poly1305+Blake2b

Because of the X.509 legacy, both of these future TLS versions might have to support RSA-2048 and P-256 ECC certs, but supporting just one would be better.

In either case, fewer options and branches make the protocol simpler and more secure. Both versions can be enabled, and one turned off if a weakness is found.
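In code, the idea looks roughly like this (a sketch with the made-up version labels from above): the suite is a function of the protocol version, with no negotiation branch at all:

  # One pinned suite per protocol version; no negotiation matrix.
  SUITES = {
      "TLS 1.4": ("ECDHE-P256", "AES-256-GCM", "SHA-256"),
      "TLS 1.5": ("ECDHE-X25519", "ChaCha20-Poly1305", "BLAKE2b"),
  }

  def suite_for(version):
      if version not in SUITES:
          raise ValueError("unsupported protocol version")
      return SUITES[version]

  # Deprecating a broken suite is a one-line deletion:
  # del SUITES["TLS 1.4"]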


> If we can't even get crypto libraries right

Signature verification is one of the hardest things to get right. One reason is that it's harder to test: when you encrypt or hash something, you have a whole bunch of bits you can check against test vectors. With signature verification, you only have one bit: it's a match, or it's a fail.
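Concretely (a sketch; "verify" stands for any signature verifier): a hash test compares 256 output bits against a published vector, so nearly any internal bug scrambles the digest and gets caught, while a verification test only ever observes a single bit:

  import hashlib

  def test_sha256():
      # Known-answer test: all 256 bits must match exactly.
      digest = hashlib.sha256(b"abc").hexdigest()
      assert digest == ("ba7816bf8f01cfea414140de5dae2223"
                        "b00361a396177a9cb410ff61f20015ad")

  def test_verify(verify, pk, msg, sig):
      # One bit of signal: a verifier that forgets an entire check
      # (say, the base point) can still return True here.
      assert verify(pk, msg, sig)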

Moreover, it's very easy to forget to check something (here, that we are using the same base point). Other constructions, like EdDSA, are simpler, and verification requires a single equality check. Harder to botch.

And even then, implementers can be tempted to get clever, which requires extra care. I've personally been bitten by not verifying mathematical jargon, and mistakenly thought "birational equivalence" was complicated speak for "bijection". Almost, but not quite. This single oversight led to a critical, easy to find, easy to exploit vulnerability.

We found out 15 months later, 1 full year after public release, by tinkering with the tests. A cryptographer would have found it in 5 minutes, but as far as I can tell, they're all very busy.


That’s an interesting assessment. Rogaway just gave a talk at RWC about APIs for secret-splitting schemes. IIRC he said that the API needed to be closer to that of symmetric crypto. But from the diagram it seemed like you were obtaining some associated data back to check that the secret was correct (not sure if this would work / is relevant with more recent DKG schemes).

A signature verification returning an actual AD would be interesting as well.


EdDSA can have something close. Long story short, an EdDSA signature has two parts, often called "R" and "s". Verification works by producing a number using the public key and "s", then checking that this number is the same as "R". There are basically 3 steps:

  1. h_ram   <- HASH(R || public_key || message)
  2. R_check <- obscure_computation(public_key, s, h_ram)
  3. if R_check == R, accept, else reject
Steps 1 and 3 are straightforward (the hash and the constant time comparison are almost always implemented in dedicated routines, tested separately). Step 2 is the most dangerous (that's where the elliptic curve magic happens).

EdDSA implementations would be easier to check against one another if they all exposed step 2 as part of the public API. Bonus points if step 2 can handle invalid inputs (low-order points, points on the twist...) in a standard manner. I haven't seen such a thing though, probably because end users never need a separate step 2.

Still, I can already envision the benefits. I'll probably use it myself.
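Something like this, perhaps (a hypothetical API sketch; hash_ram and constant_time_equal stand in for the dedicated routines mentioned above):

  def eddsa_recompute_r(public_key, s, h_ram):
      # Step 2 on its own: compute [s]B - [h_ram]A on the curve.
      # Ideally it also rejects invalid inputs (low-order points, points
      # on the twist, non-canonical encodings) in a documented way, so
      # implementations can be cross-checked on the dangerous part alone.
      raise NotImplementedError

  def eddsa_verify(public_key, message, signature):
      r, s = signature[:32], signature[32:]
      h_ram = hash_ram(r, public_key, message)            # step 1
      r_check = eddsa_recompute_r(public_key, s, h_ram)   # step 2
      return constant_time_equal(r_check, r)              # step 3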


There's a history of vulnerabilities like these, in some of the most important crypto libraries. For instance: until 2008, NSS, the TLS library used by Firefox, couldn't properly validate RSA signatures from e=3 RSA keys (it wasn't validating the full signature block, but rather parsing it and looking for the embedded hash). For e=3 roots, which were readily available at the time, you could simply build any signature block you wanted and then take its cube root.
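The forgery itself is almost embarrassingly small. A toy sketch (assuming a 2048-bit modulus and a verifier that ignores everything after the embedded hash):

  def icbrt(n):
      # Integer cube root, rounded down (binary search).
      lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
      while lo < hi:
          mid = (lo + hi + 1) // 2
          if mid ** 3 <= n:
              lo = mid
          else:
              hi = mid - 1
      return lo

  def forge(asn1_and_hash, modulus_bits=2048):
      # Target block: 00 01 FF 00 || ASN.1 || hash || garbage, where the
      # sloppy verifier never looks at the garbage region.
      prefix = b"\x00\x01\xff\x00" + asn1_and_hash
      garbage_bits = modulus_bits - 8 * len(prefix)
      target = int.from_bytes(prefix, "big") << garbage_bits
      s = icbrt(target) + 1   # round up; s**3 still begins with the prefix
      return s                # cubing s reproduces the block: no key needed

The garbage region is wide enough that rounding the cube root up can't disturb the prefix, and since the block starts with a zero byte, s**3 stays below the modulus, so no reduction occurs and the broken verifier sees exactly the block above.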


Bleichenbacher'06 never dies.


Neither does BB'98


Can’t wait for BB20


Interesting. Were there any similar vulnerabilities in Windows XP's crypto libraries?


There have been crypto vulnerabilities in Windows libraries before, but not the e=3 vulnerability; CryptoAPI parsed RSA signatures back-to-front, and so sidestepped the issue.


> If we can't even get crypto libraries right (where you'd hope most of the formal verification folks are)

Personally, I'd hope most of the formal verification folks are working in firmware for industrial/medical embedded systems, and/or the microcontroller designs that go into those same systems. A lack of encryption (outside of military contexts) doesn't usually directly cause people to die.


Interestingly Microsoft presented EverParse designed to produce verified parsers for these sorts of data formats at USENIX Security 2019. https://www.usenix.org/conference/usenixsecurity19/presentat...

But it's only for parsing the data. What gets done with it after parsing can still be buggy.


Crypto is hard. Choose your libraries carefully, stay updated and for Zimmermann's sake: don't roll your own.


We need one time pads. They're the only really trustworthy crypto.


By all means: make yourself some one-time pads. Maybe you can convince Google to accept one from you at a dead drop somewhere in Mountain View.


Hmm. Maybe I'll generate some high quality OTPs for myself with a good CSPRNG. I could use any decent block cipher in counter mode, just need to guard against counter re-use, and then ship my OTPs off to anyone I need to communicate with.

Hang on, if I could come up with a way of securely sharing the key I used with my recipient, I wouldn't need to actually mail the OTP to them, since they could generate it themselves. Should probably include a nonce too.

Now, if only there were a secure method to share the key...
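Something like this, maybe (a sketch using the third-party 'cryptography' package):

  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  key, nonce = os.urandom(32), os.urandom(16)

  def otp_encrypt(message, key, nonce):
      # Generate my "high quality OTP" with AES-256 in counter mode and
      # XOR it into the message; the recipient only needs key and nonce.
      encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
      return encryptor.update(message) + encryptor.finalize()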


> Now, if only there were a secure method to share the key...

... that doesn't include a critical cryptographic vulnerability. :)

To be generous to the now greyed GP, there's probably a way to teach a recipient a short pad and its proper use so that the recipient can later decode a short ciphertext sent over Twitter. Perhaps even performing that feat in their head. None of the cryptography that could fit your quip has that property.

And if the recipient does something catastrophic like resend the pad over Twitter to confirm they memorized it correctly, there's at least a chance they may catch their error. Perhaps they may even correct it without the goddamned Department of Defense coming into it.

I don't get cryptographers' condescension toward OTP zealots. All they want is a better boat. At least have the decency of the captain from Titanic and apologize while we sink for not having delivered it.


> Maybe I'll generate some high quality OTPs for myself with a good CSPRNG

That's not a one-time pad, it's simply a stream cipher.


That's the joke!



