Note that you probably need to revoke your key if you used it on a 32-bit system and published anywhere close to 64M results. Busy SSL terminators or e.g. OAUTH signers might want to take note.
I'm surprised to learn that Go did not already check the result of calculations; that's a pretty standard countermeasure, also to e.g. protect your private key on somewhat-unreliable hardware.
The surprising thing is not the bug, it's that Go didn't verify the result of the CRT calculation. That's a standard countermeasure (although it's not present everywhere; Florian Weimer recently did some work on patching implementations that were lacking this check).
And a nit: the OpenSSL bug didn't affect LibreSSL (I discovered it; I checked for that).
How do bugs that affect OpenSSL not affect LibreSSL? Surely any bugs that were fixed in LibreSSL would have been shared with OpenSSL, right? Or is it legacy code that was removed during the Valhalla cleanup?
> Surely any bugs that were fixed in LibreSSL would have been shared with OpenSSL, right? Or is it legacy code that was removed during the Valhalla cleanup?
hannob gave the reason, but a bug could also be unwittingly removed by a separate cleanup, e.g. the generic replacement of unsafe functions/practices with safer ones.
Your link - which is interesting - concerns sanity checks on private keys. I agree that such checks need to be implemented very carefully, since they handle highly sensitive data.
However, I was proposing verifying the signature - which involves only the public key and the signed data, neither of which are likely to be all that sensitive.
(The situation is more subtle for RSA decryption.)
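For concreteness, here's a minimal sketch of that countermeasure in Go, using the standard crypto/rsa API (the signChecked wrapper is just a name I made up): re-verify with the public key before releasing the signature, so a faulty CRT computation surfaces as an error instead of leaking a prime factor.

    package main

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/sha256"
        "fmt"
    )

    // signChecked signs msg, then re-verifies the signature with the
    // public key. A fault in the CRT computation makes verification
    // fail, so the bad signature never leaves the process.
    func signChecked(priv *rsa.PrivateKey, msg []byte) ([]byte, error) {
        digest := sha256.Sum256(msg)
        sig, err := rsa.SignPKCS1v15(rand.Reader, priv, crypto.SHA256, digest[:])
        if err != nil {
            return nil, err
        }
        if err := rsa.VerifyPKCS1v15(&priv.PublicKey, crypto.SHA256, digest[:], sig); err != nil {
            return nil, fmt.Errorf("signature self-check failed: %v", err)
        }
        return sig, nil
    }

    func main() {
        priv, _ := rsa.GenerateKey(rand.Reader, 2048)
        sig, err := signChecked(priv, []byte("hello"))
        fmt.Println(len(sig), err)
    }

The check costs one public-key operation per signature, which is cheap next to the private-key operation it protects.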
Not really. Go doesn't use shared libraries (yet), so you're only really recompiling the end binaries.
If you're using Go, you should already have the infrastructure to do this. Everything we have gets rebuilt for each go release anyway, so this is no different, just a little more immediate.
> If you're using Go, you should already have the infrastructure to do this.
Unless you're using packages in an APT or Yum repo. You instead get to wait for packages to come downstream, almost certainly not in anything approaching synchronicity. Awesome.
We got away from statically linked monoliths for a reason.
This is a problem with relying on APT for security fixes anyways.
It's been particularly bad in the past for things like nginx, where we've known about memory corruption bugs for multiple days and had clients who were incapable of patching because they only had infrastructure for downloading packages.
You should have tools in place to build, from source, anything that might be a significant security issue.
If my clients would pay me for it, I certainly would. But `package 'foo'` is a lot cheaper than that, turns out.
And while I 100% agree about the optimal case, in practical terms you're going to see a lot better turnaround from the maintainers of Debian's OpenSSL than, say, the third-party Docker repo. A sane system not reliant on static linking 'til the cows come home is not a cure-all, but it's better than the Go situation, and is a part of why I'm super not thrilled with Go-as-infrastructure. (Go-as-app, whatever, people take that problem on themselves.)
The problem isn't really with APT (or deb). Building a package from source is trivial. Anyone can do it, and it's completely automated. Building software from upstream source can be a lot more daunting.
Feeling your pain right now. I have a service written in Go that runs on multiple machines, and this will keep me busy for the night. Worst is that I can't have a sysadmin do the whole process. Now QA has to sign off on it... It's no wonder things stay broken for a long time. It is a lot of work. Sorry for the rant. :)
That's why you write tests. Not that that always works either, but it's not like a QA team of point-and-clickers running through a manual set of those same tests will always get it right.
Imagine the nightmare of having to hunt down tens of shared library versions, and even having to re-build and patch them with the correct version of the language/APIs so that they continue to work.
How does that square? I mean, I have one copy of a shared library on any of my machines. I don't even necessarily know off the top of my head what of the Go tools my developers found bloggable enough to want in the stack that might be impacted by this bug. (Or rather, I do, but that's because it's my job, and I'm actually good at my job. I don't have the same high hopes with regards to most other infrastructure folks I've worked with.)
>How does that square? I mean, I have one copy of a shared library on any of my machines.
That's all well and good, but that's just you. Real systems often end up in a mess of multiple copies/versions of shared libraries, also known as DLL hell on Windows. In Java land you could just as easily end up with 20 versions of some jars, all installed locally by Maven and needed by this or that lib/framework/etc.
>Or rather, I do, but that's because it's my job, and I'm actually good at my job.
I haven't seen that on a decently-maintained Linux system--certainly not any I'd call a "real system"--in a really long time, unless they're side-by-side packages from the OS.
What "Linux system"? We're not talking about your distro userland packages here.
We're talking about businesses deploying multiple server apps and such. Neither Java nor .NET shops for example rely on "OS packages" for their server dependencies when it comes to Jars, dlls, etc.
Yes, actually, I was talking about distro userland packages, but whatever. If you're deploying a Java application, everything should be coming out of Maven (and thus avoid any sort of shared library hell) in the first place, no?
>Yes, actually, I was talking about distro userland packages, but whatever.
Then you were off topic, since we were discussing Golang packages for deployment, which aren't userland packages either. But whatever.
>If you're deploying a Java application, everything should be coming out of Maven (and thus avoid any sort of shared library hell) in the first place, no?
You still get multiple copies of jars. And you still need to replace them. And you still might have older projects that need particular versions of a jar that you need to update somehow in case such a long-reaching issue is discovered (and upstream might not do that at all).
> You simply need to rebuild whatever binaries you run
"Stack", not "app". That includes, say, Docker. Or nsq. Which are packaged by the operating system (because, as noted in a reply to `tptacek, paying me to build out what they get from the operating system is generally not something a client is going to want to do). And which will not be updated as conscientiously as I can expect a core library on my system to be updated.
> As an aside, do you have any idea how pretentious you come across?
Pretension, frustration, whichever. My low opinion of Go aside, it's my job to deal with the details of my clients' stacks, make a coherent experience for their developers, and deal with the system-level security concerns they don't think about. If getting annoyed that Go and its community make my life suck a little bit more in ways it need not suck is pretension, I'm really gonna be okay with that. I mean, jeez, it's 2016. "Rebuild the entire world when a mouse farts" is fragile, dangerous garbage that puts the lie to the idea of "engineering" in this profession. We should be better than this. (And Go's first iteration, lovingly called Java 1.4, was. We're going backwards.)
I can't understand how someone making a statement like that ("bloggable...") which is untrue can be upvoted while you're downvoted. Worse, he even bragged about being an excellent engineer who knows everything there is to know, when he clearly doesn't in this case.
It's trivial to do when you know which binary to replace (and you have the source code available).
But it's probably not so trivial to hunt down every single old binary on every system. There's also a continuous risk that you may receive old binaries from third parties in the future.
For example, gitlab has started to ship a component written in Go in recent versions. Things like these are easy to forget about.
If you can't track the binaries you are running you have a bigger problem than recompiling some of them. Security issues aren't the one in a decade thing, if you can't easily locate which code has to be updated, you are in trouble already...
I'd say as a general overzealous rule that if at least two people in the company don't know the code, it shouldn't be publicly accessible if it being compromised could cause you financial harm.
Can you point to a resource that explains what a carry propagation bug means? I am having a hard time searching for an explanation, especially in the context of how it can cause security issues.
When I see a question like this, I always type the obvious words, like "carry propagation bug", into Google just to see if it's a lazy question or a failure by infosec authors on a topic. I was shocked that all I got was garbage about all kinds of CVEs and such, with no clear explanation of the problem. Not clear in the sense of an article someone could read, as opposed to a bug report. The only one I saw in a few pages of Google was:
StackOverflow had nothing useful for that phrase. Weird. I tried Modular and Montgomery Multiplication, as they're important keywords that should give you an idea about potential carry issues. I found these decent descriptions in my short Googling:
Best I can do with little time and the flood of irrelevant results I saw. Maybe need a thorough write-up on these sorts of things that shows up in Google. At least I serendipitously found a great paper on statically detecting flaws in error propagation:
Carry propagation bugs are more or less self-explanatory: the implementer failed to add every possible carry bit during a large integer addition or multiplication, and this turns into an incorrect result. This may only happen in rare edge cases, and therefore remain unnoticed for a long, long time. The sibling comment explains this better than I could.
A carry propagation bug amounts to a special case of a fault attack. Fault attacks, in their most general form, are attacks where an adversary tries to induce an error during a cryptographic computation. The consequences of a fault during a computation are algorithm-specific; here are some examples:
- In RSA, a fault during a decryption or signature operation using a particular (CRT) implementation approach lets the attacker recover the private key [1]. You do this by obtaining an incorrect signature S for some message M you know about, and then recovering one of the prime factors of the modulus with the easy computation gcd(S^e - M, N). This is often called the Bellcore attack.
- With elliptic curves, if you can convince a scalar multiplication to work on an incorrect point on the curve (either because the implementation doesn't check the point is valid, or via a hardware fault), you can likewise recover the secret key [2]; see the sketch after this list. The process here is a little more complicated, but no less efficient.
- With AES, if you can trigger a fault in a state byte right before the eighth round, you can recover the full key very easily having both the valid ciphertext and the erroneous one [3]. This attack only makes sense for hardware implementations, since software implementations of symmetric ciphers do not generally leave much space for errors to occur.
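On the elliptic-curve item above: the defensive half of that check is cheap. A minimal Go sketch, assuming crypto/elliptic and attacker-supplied coordinates (the values here are arbitrary made-up ones):

    package main

    import (
        "crypto/elliptic"
        "fmt"
        "math/big"
    )

    func main() {
        curve := elliptic.P256()
        // Coordinates received from a peer; an invalid-point attack
        // supplies a point that isn't on the curve at all.
        x, y := big.NewInt(1), big.NewInt(2)
        if !curve.IsOnCurve(x, y) {
            fmt.Println("invalid point: refusing scalar multiplication")
            return
        }
        // Only now is it safe to combine the point with a secret scalar.
    }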
In the Golang case, the carry bit failure leads to the Bellcore attack. All you have to do, in principle, is harvest around 2^25 RSA signatures (and the respective messages) from a buggy target, compute a GCD, and you're done.
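The GCD step itself is a few lines with math/big. A toy sketch with tiny made-up numbers (p=11, q=13, so N=143 and e=7; the faulty signature 135 is correct mod 11 but wrong mod 13; a real attack uses harvested full-size signatures):

    package main

    import (
        "fmt"
        "math/big"
    )

    // bellcoreFactor recovers a prime factor of n from a message
    // representative m and a faulty CRT signature s of m: the value
    // s^e - m is divisible by exactly one of n's prime factors.
    func bellcoreFactor(n, e, m, s *big.Int) *big.Int {
        t := new(big.Int).Exp(s, e, n)
        t.Sub(t, m)
        t.Mod(t, n)
        return new(big.Int).GCD(nil, nil, t, n)
    }

    func main() {
        n, e := big.NewInt(143), big.NewInt(7)
        m, s := big.NewInt(42), big.NewInt(135)
        fmt.Println(bellcoreFactor(n, e, m, s)) // prints 11
    }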
You can't store a 4096-bit number in a 32-bit register, so if you are multiplying numbers together you need to do it a piece (limb) at a time. Given that the arithmetic is modular, you can get away with "leaving some room" and not using all 32 bits, instead using more 32-bit registers and treating them as if they were, say, 26-bit registers that can overflow by 6 bits. For instance, Curve25519 (255-bit modulus) can be stored with 8 32-bit limbs, or 10 26-bit limbs (among other combinations). So if you do repeated calculations, you don't need to carry all of the bits all of the time.

I don't know if it's the case for this bug, but I know it's been a bug in the past that they waited too long to propagate the carry (the extra 6 bits) into the next limb and it overflowed the 32-bit register. This results in an incorrect calculation.
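A toy illustration of that failure mode (this is not the actual math/big or curve25519 code, just a sketch of the representation): ten 26-bit limbs held in uint32s, with 6 spare bits per limb. Defer the carry across too many additions and the spare bits overflow.

    package main

    import "fmt"

    const limbBits = 26
    const limbMask = (1 << limbBits) - 1 // low 26 bits of each limb

    // carry moves the spare high bits of each limb into the next one.
    // Waiting too long between calls to this is the bug class described
    // above: the 6 spare bits fill up and the uint32 silently overflows.
    func carry(x *[10]uint32) {
        for i := 0; i < 9; i++ {
            x[i+1] += x[i] >> limbBits
            x[i] &= limbMask
        }
        // (a real implementation also reduces the final carry mod the prime)
    }

    func main() {
        var a [10]uint32
        a[0] = limbMask // limb 0 is full
        a[0] += 3       // an addition spills into the spare bits
        carry(&a)
        fmt.Println(a[0], a[1]) // 2 1: the overflow moved into limb 1
    }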
In case anyone wondered if the discontinued 32-bit Mac OS X versions were also affected (for 10.6 & 10.7 compatibility), apparently "The issue was introduced in Go 1.5", and the last 32-bit binaries for Mac were Go 1.4.2.
Go implements its own RSA crypto and related algorithms?
I would have expected them to learn from Java's mistakes and use something that has a hope of being maintained indefinitely and isn't coupled to the language version. JSSE (the TLS implementation in Java) is so broken and so far behind that Tomcat ships with a native interface to OpenSSL, but that takes considerable effort, and software converges on language defaults.
Go has one of the best TLS implementations available in any language. Adam Langley at Google oversees it. I think it's more trustworthy than OpenSSL's.
The specific type of cryptographic flaw that occurred here is common to virtually all crypto libraries. As I said upthread: even NaCl got bit by a carry propagation bug. Using OpenSSL wouldn't have protected Go from these kinds of flaws; OpenSSL has had several carry bugs.
I've had a lot of pushback from our operations team about the Go TLS stack, which they consider to be less secure and less vetted than OpenSSL. They (rightly) point to this message from Andrew Gerrand in 2013: https://groups.google.com/d/msg/golang-nuts/LjhVww0TQi4/oBLi...
Do you know if this is still true? Or does it even matter, if OpenSSL is the only other contender for TLS? As someone who has no expertise in crypto, it's hard to make sound judgments.
> It's really up to the provider of the service in question to make the call. All we can say about crypto/tls is that it hasn't been thoroughly reviewed from a cryptographic standpoint. Whether that suits your use case or not is very much up to you.
That is still true. Of course, the implementation has been reviewed and beat on in production use more now than it had been in 2013, but the fact remains that there has been no systematic, independent outside audit focused specifically on cryptography. On the other hand, the cascade of "interesting" bugs since 2013 like Goto Fail and Heartbleed suggests that maybe the other implementations are not quite as audited as one might have expected.
As tptacek mentioned in another comment, Adam Langley is the primary author and maintainer for Go's crypto packages. He's also one of the key people involved in Chrome's TLS implementation and Google's use of TLS more broadly, and he's a significant committer to OpenSSL and the creator and maintainer of BoringSSL. You can learn more about his work at his blog, imperialviolet.org. Having Adam involved with Go's crypto/tls raises my comfort level considerably. It's not Joe Random (or Jane Random) working on the crypto code, as others suggested below. Even so, even experts make mistakes and it would be nice to get a second expert to go through it looking for problems.
For the most part Google does not use Go's crypto/tls for user-facing traffic. That is primarily a detail of Google's pre-existing serving architecture (we use separate servers to do that, much like putting nginx in front of your main server, and many engineering years of work went into those servers before Go even started). Because of that, Google has not really evaluated Go's crypto/tls one way or the other for that use. I believe we do run Go's crypto/tls to serve HTTPS on godoc.org, although I'd have to double-check. I certainly run it on swtch.com, rsc.io, and so on, not that there are real secrets in any of those servers.
I suspect the largest production user of Go's crypto/tls is CloudFlare. Maybe someone from there can say something about their comfort level or exposure.
Like Andrew said, it's really up to the provider of the service to make the call. You can have the same bugs as everyone else, or you can have a different set of bugs.
For whatever it's worth: lots of experts have looked at the crypto/tls code in Go.
There's never been a focussed, all-encompassing audit of it. But that's mostly true of all the other TLS stacks as well. OpenSSL got a complete audit last year; the same people that worked on that project have spent significant time working with Go's TLS as well.
I'd caution readers of threads like these not to put too much stock in formal audits for things like complete TLS implementations. It's incredibly difficult to get the kind of coverage you'd expect from an audit in something like a full-featured TLS.
For instance, Microsoft's SCHANNEL has had several contracted audits, presumably by very smart teams, but I would still trust Go's TLS much more: every time a new wide-reaching TLS bug class is discovered, someone is going to check crypto/tls for it. That's not true of a lot of other stacks.
I'm not sure I understand how Go's implementation is so much better when your justification is that they are making the same mistakes other libraries already have. The bar should be higher for new software, not the same.
That's a sweet bullet point on page 3. I don't see what that has to do with the security of rolling your own crypto and coupling it to the whole distribution.
If you want, you can link OpenSSL statically. It's about as recommended as statically linking the entire universe into a binary.
you need as many functional implementations of TLS as you can get. written by people with pedigree. you need them to cross-pollinate and find bugs in each other. Go's TLS stack, for example, is used to test the TLS implementation in BoringSSL: https://www.imperialviolet.org/2015/10/17/boringssl.html
The evidence points elsewhere. TLS is such a complex protocol that any new library will make fundamental errors in implementing it and the requisite crypto primitives (of which there are many).
What crypto libraries need is usage and, what usually goes with it, attention from people looking to break them. There are plenty of TLS libraries out there, like mbedTLS and the mentioned JSSE, and looking at CVE reports a few years ago you would think these were the most secure libraries in the entire world compared to the constant trickle of issues in OpenSSL. Except when Heartbleed got attention and people started looking at these rarely used alternatives, they found massive problems. For years, the protocol FSM in JSSE was fatally broken. mbedTLS had, and continues to have, issues with just basic things like buffer overflows. And often these libraries just plain lack features and timely updates.
Interesting that you mention Heartbleed while also claiming that Go implementing its own TLS library was a bad move. FYI, the popular Heartbleed test used a modified Go TLS stack - [1]. Clearly having an alternate implementation that was easy to work with was a win here.
openssl is by far the most widely used tls library out there. still, occasionally our collective hearts bleed. what's even worse is you get forks of openssl for each vendor that wants to abide by this government's policy or that. in effect, you have a single bad codebase multiplied n-fold. where's the economy of scale there?
if you're arguing that "with many eyes all bugs are shallow", this has been disproven. if you're arguing that java's tls implementers were crap, well, that's fine. doesn't mean go's are.