Thanks. This makes a lot of sense, and matches what I saw myself. I never got anything out of nginx, but I found apache quite easy. I never built more than a POC, and left it at that.
I didn't think about it at the time, but it was only on a newly started apache instance.
Coupled with the fact that apache as a frontend tls server is pretty rare on big sites nowadays, I'm feeling pretty good about what did happen vs. what could have happened.
It seems like a "feature" that you'd want your process to store the private key material as low as possible in the address space so that arbitrary read overruns don't run the risk of hitting it. It seems to just accidentally be this way in nginx, but I wonder if it should just be another (tiny) layer in the overall security design.
IIRC openbsd's malloc does something like that by default, so every bit of data gets its own protected address space... and then the openssl guys built their own malloc without that feature, to get better performance :(
True... I wonder if any specs/certifications actually require something like that. Typically I mostly use tricks like that to track down bugs, but there's nothing wrong with using it in production for something like a single key/cert allocation. It becomes a bit unwieldy if you have lots of things to protect. (Especially on machines with 64k pages :))
The bug is reading 64k from x -> x+64k. You'd want the key as low as possible in memory so the chance of the heap implementation allocating a request below it (thus allowing the +64k to overlap into the key) is next to nil.
So if your key was at an address less than x the bug would never read it, was my point. So I guess that means you'd have to force the UDP datagram payload to be stored high as that dictates what x is?
It's using RSA with key length of 2048 bits [1], and one can assume that both prime factors have equal bit length [2] i.e. 1024 bits each, and so the size of all the other private components can be derived from that. I don't know what OpenSSL's in-memory format is for this data, I suppose therein lies a good part of the challenge!
The "don't sign comments" rule is designed to stop the pointless posting of company URLs when there is no need to. In this case it is clearly appropriate for Matthew to disclose who he is, as it is important to the context of the post.
Didn't Juliano Rizzo already post that he'd been able to extract keys from a server? I'm not clear on the circumstances; for instance, it might have been right after boot, and it might have been Apache, and it might have been FreeBSD.
Errata Security modified their original post to say that yes, keys are extractable. And a moderator properly fixed the title after the article was changed to reflect the retraction:
i suspect the memory accessible by this bug depends a lot on the software, OS and possibly hardware, e.g. on openbsd and bitrig amd64, the amount of memory leaked per exploit is less than 64 KB, closer to 32 KB. if you go much past the 32 KB mark on these OSes, it segfaults.
running an exploit script against one of our own services showed only 1-2 KB of information, most of it being the (public) cert, and the rest zeroed out.
I wonder if Cloudflare is logging all data sent to and from the challenge server, and then searching it for private key fragments. If not, they should be! As it's not guaranteed that a lucky attacker will be aware of receiving key material, if it is exposed.
The irony would be such a patch leaking information about the state of some random number generator, leading to more easily guessable session keys or the like.
(Of course, creating suitable fake data with a separate PRNG to avoid this would be pretty easy.)
what i expect people to see is that you can't actually get to the tls private key itself. we have done some testing with our backup service, cyphertite, and our attacks have yet to compromise any keying material.
You all are involved in so many great open source projects -- just wanted to give a shout-out to Conformal. People do notice and appreciate your contributions.
I like spectrwm but it really needs a NEWS/CHANGELOG. Users cannot quickly get an idea of what has changed between release X and Y. I went to update the debian "please package latest spectrwm" bug and there is no easy way to post the changes between 1.0.0 (yes, the debian package is that old) and 2.5.0.
It's a good idea, but even if nobody successfully exploits this particular website with the heartbleed bug, that doesn't mean much for the rest of the vulnerable sites.
Since the bug exposes a few kilobytes of uninitialized malloc() memory the kind of data the attacker will retrieve is heavily dependent on the software the server is running.
I read elsewhere that the bug exposes first and foremost memory that OpenSSL itself used before (because OpenSSL has its own allocator running on top of malloc).
Is this really an accurate challenge? I have wondered if the true exposure risk from Heartbleed may be overstated* due to memory separation between processes, etc. This, however, is probably a clean server with a fairly static install. There isn't a risk of things like session leakage, which I think is the true risk of Heartbleed. Nor would there be the memory fragmentation that occurs in a production system. While intriguing, I'm not sure how much it proves.
(* this is still good though as any risk is unacceptable and shouting Fire makes everyone move... and they need to on this issue)
It may be the case that Heartbleed did not practically expose private keys. The risk was not overstated in general, though; a virtually undetectable method that has been proved to do things like leak plaintext username/password credentials is still catastrophic.
Anything OpenSSL decrypts surely has to hit the OpenSSL process's memory, right? That would include requests, which could easily contain sessions, passwords, etc.
I wonder if one could use this server (or, of course, any other vulnerable server, but the hostname on this one is nice) for phishing people. For example, e-mail them the link, then try to snarf the referer header out of the server's memory in the hopes that their webmail URL contains something juicy, or give them a fake form that POSTs interesting data to it. Probably far-fetched, but it's amusing to consider.
I seriously don't expect anyone will find anything (reasons mentioned in the blog post), but that doesn't mean I haven't pointed my exploit that searches for private key material in the return buffer at it on a one second interval. If anything I suppose CloudFlare will be able to produce some pretty pictures of the amount of incoming heartbeats they received!
The bug was discovered by whitehat researchers approximately 12 days ago. It was publicly announced 5 days ago. We got early word of it from the researchers who initially discovered it, allowing us to patch our systems and ensure all sites behind CloudFlare were not vulnerable. However, we have no way of knowing how long blackhats may have had it. It had been present in the OpenSSL software for the last 2+ years. Therefore we're trying to get an informed sense of the security risks. Hence our work attempting to use the vulnerability to recover SSL private keys and, as announced today, the CloudFlare Challenge.
There's no way to share such a bug with the major Linux distros and let them deploy a fix to users without making it public at the same time. Even assuming that the distros commit to handle the fix submission and silently repackage openssl (which they don't always do, depending on their policy), the word would get out minutes after it's pushed to the update servers.
So telling major Linux distros == telling the public. At this point, you have to decide whether you want to forewarn big hosters handling millions of sites and billions of visits like Cloudflare, Google, AWS, etc., or not. I don't think there's a good universal answer to this.
Not so; companies like Red Hat understand the importance of disclosure timelines and won't leak it early.
The importance of telling large distros doesn't lie in them immediately releasing a fix; it lies in them being able to prepare a package with the fix before the announcement, and then exactly when the announcement happens, they can publish the package (and possibly do something to make it propagate faster to their distribution servers)
As is, when Heartbleed was announced, many distros took an hour or significantly more to offer a fixed package. Proof-of-concept exploits were also made in that time. That was a dangerous situation.
I fully expect critical vulnerabilities that are "responsibly disclosed" to be reported to major distros so that packages can be prepared, but not released, in advance; furthermore, it allows people to be ready when it's announced at an agreed-upon date so that the packages can be pushed to live.
I'd actually be okay with a system where smaller distros which use similar packaging formats to larger ones are alerted with "There is an exploit. We will publish a fixed package that will likely be compatible with your distro on DATE. Be awake then to make sure these changes go live quickly, not when one key dude wakes up in 5 hours".
Sorry for the long-winded comment. What I really wanted to do is just explain "no way to share ... deploy a fix ... without making it public" is not the reason for sharing. The reason for sharing is so that the fix can be deployed more quickly when it is deployed.
It doesn't matter if they promise "responsible disclosure": all their repos are publicly visible, and it takes one person looking through commits, going "wtf is this", taking a quick look at the code, and writing a blog post to make this a wildfire.
I think the notion is the distro security team codes and tests a patch, but doesn't commit the code to public repos or release the patch publicly until an agreed-upon disclosure date.
Not necessarily that the distro security team codes the patch even. In most cases, upstream (e.g. openssl here) should have an official patch/commit that is private, but is given to these trusted distros. The security team only has to create a package with the upstream patch.
No, it works like this: everyone gets their fix ready hush hush, and on an agreed date it's made public, and vendors hit the "publish" button more or less simultaneously.
Except that it does work like this on a regular basis. It's not just something that sounds nice in theory. Distros and other major software vendors regularly coordinate disclosure. Have there been failures of the process? Sure, but that's the nature of secret keeping. The advantages of coordinated release far outweigh the risk of occasional mistakes, since the latter simply leaves people in the same position as they'd have been without any coordination (i.e. the exact same position that the distros were in with heartbleed).
You would think that they could easily work out, in advance of incidents, whom they can trust with early information. I'd be amazed if Canonical (Ubuntu), Red Hat (RHEL) and The Attachmate Group (SUSE) wouldn't be discreet. I don't know about Debian and similar projects, but you would think they could determine this in advance.
I haven't been involved in distro security for a few years, but all this coordination used to happen via a mailing list. Organizations (distro maintainers, OS vendors, security people representing some of the larger/more security sensitive open source projects, etc) would need to apply to be on the list. They'd need to document who would have access to the sensitive materials posted, what their procedure would be for handling, etc. Impact assessment, disclosure timelines, CVE assignments from MITRE, attributions, etc etc would all be coordinated on this list. Fixes would not be pushed to public VCS systems or package repositories before the agreed-upon disclosure date.
AFAICT, none of this happened here. A very small number of organizations was told in advance, but nobody knows what the criteria were to get on this special advance notice list. Given how completely off-guard some really big organizations were caught (yahoo, for instance; all the linux distros, etc), this could have been handled a lot better.
There is just no easy way to handle this and someone had to make the decision on who got what.
The reality is that the open source community isn't vetted like an intelligence agency when it comes to keeping secrets close to the vest. It only takes one person in all those OSS communities to leak something of this magnitude to the press, and then the result could be even worse. The fact that this was kept under wraps for 12 days (that we know of) is a testament to the folks who decided whom to inform.
I have not seen any comment by the security researchers about why they chose to disclose to some organizations (such as CloudFlare) earlier than the general public or the distributions. Perhaps they felt the need to have some organizations with very large OpenSSL installations test the patch beforehand, to make sure it worked and did not cause any unexpected problems?
"While we believe it is unlikely that private key data was exposed, we are proceeding with an abundance of caution. We’ve begun the process of reissuing and revoking the keys CloudFlare manages on behalf of our customers. In order to ensure that we don’t overburden the certificate authority resources, we are staging this process. We expect that it will be complete by early next week."
# Automatically figure out what echo options to use so that echo
# '\r\n' actually just outputs the characters CR and LF and nothing
# else. This is very shell dependent.
ECHO_OPTIONS := -e -n -en
ECHO :=
$(foreach o,$(ECHO_OPTIONS),$(if $(call seq,$(shell echo $o '\r\n' | wc -c),2),$(eval ECHO := echo $o)))
ifeq ($(ECHO),)
$(error Failed to set ECHO, unable to determine correct echo command, tried options $(ECHO_OPTIONS))
endif
So out of curiosity, I ran the python heartbleed tester against their server. Surprisingly, the first (and every response thereafter) returned a memory dump which contained at least part of what looked like a private key. I immediately became skeptical when each of these apparent private keys ended in LOLJK.
Would a multi-process server engine help protect against this? Think what Chrome does with tabs. If the network request is received by a dedicated IO process which then uses IPC to communicate with other parts of the server, then perhaps sensitive information like keys would not be in the same address space so could not be leaked? I guess if the bug was in a sensitive process then it would still happen. Disclaimer: I have no idea how modern servers are architected, perhaps they already do this :) Would be interested to hear from anyone more knowledgeable.
If what CloudFlare is saying is true (and I think it is), that the only possibility of Nginx/Apache leaking the private key is on start up due to how low of a memory address the private key is given, is there anything in Nginx/Apache to fire a bunch of dummy requests at itself before accepting public connections? This should effectively bury the memory address that the private key is stored in. Would this be useful if it doesn't exist or would it only be useful in hindsight?
While it might not be possible to get SSL private keys directly via heartbleed, is there not also the possibility of exposing reused credentials, or something that exposes a further exploit that could provide root access or similar to a server, allowing the retrieval of these keys?
It may not be possible on this clean, minimal install, but in a real production environment it should still be treated as a threat?
0600: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20 8===========D
0610: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20 8===========D
0620: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20 8===========D
0630: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20 8===========D
so i assume any intermediate values from the RSA computations will end up in the heap and may be accessible by an attacker. is it possible to reconstruct the key from these values?
i could be missing something but it looks like signing does m mod p and m mod q and part of these operations involves doing a left shift on the divisor (p, q) and this is allocated to a temporary buffer. if these buffers are allocated near the heartbeat buffers then they could be leaked.
i'm looking at crypto/rsa/rsa_eay.c and it is possible that this code is not the code that is being used to do the signing in ssl.
/* signing */
static int RSA_eay_private_encrypt(int flen, const unsigned char *from,
unsigned char *to, RSA *rsa, int padding)
{
..
..
if ( (rsa->flags & RSA_FLAG_EXT_PKEY) ||
((rsa->p != NULL) &&
(rsa->q != NULL) &&
(rsa->dmp1 != NULL) &&
(rsa->dmq1 != NULL) &&
(rsa->iqmp != NULL)) )
{
if (!rsa->meth->rsa_mod_exp(ret, f, rsa, ctx)) goto err;
static int RSA_eay_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa, BN_CTX *ctx)
{
..
..
/* compute I mod q */
if (!(rsa->flags & RSA_FLAG_NO_CONSTTIME))
{
c = &local_c;
BN_with_flags(c, I, BN_FLG_CONSTTIME);
if (!BN_mod(r1,c,rsa->q,ctx)) goto err;
}
else
{
if (!BN_mod(r1,I,rsa->q,ctx)) goto err;
}
..
..
/* compute I mod p */
if (!(rsa->flags & RSA_FLAG_NO_CONSTTIME))
{
c = &local_c;
BN_with_flags(c, I, BN_FLG_CONSTTIME);
if (!BN_mod(r1,c,rsa->p,ctx)) goto err;
}
else
{
if (!BN_mod(r1,I,rsa->p,ctx)) goto err;
}
#define BN_mod(rem,m,d,ctx) BN_div(NULL,(rem),(m),(d),(ctx))
int BN_div(BIGNUM *dv, BIGNUM *rm, const BIGNUM *num, const BIGNUM *divisor,
BN_CTX *ctx)
..
sdiv=BN_CTX_get(ctx);
..
if (!(BN_lshift(sdiv,divisor,norm_shift))) goto err;
BN_lshift() shifts a left by n bits and places the result in r (r=a*2^n).
EDIT: obviously if you leak p and q you can trivially reconstruct the private key :) but it is possible other intermediate values might be leaked as well that allow for key reconstruction.
though, considering that left-shifted p and q look a lot like normal p and q, i'm surprised no-one has found this by just searching through the leaked data. so maybe this is not leaked, or it requires a read at the correct time because these buffers might be trashed by another computation.
> maybe this is not leaked or it requires a read at the correct time because these buffers might be trashed by another computation.
As far as I can tell, it's certainly possible for intermediate data to be leaked, but it'd require pretty spectacular timing.
That said, I'm having a bit of a hard time understanding why this challenge exists. If the possibility (even remote) exists that key material was leaked in any form, it should be assumed that the key material was leaked in the worst possible way, and that everything is compromised. They seem to agree, having rolled their keys.
The only real effect I can see from this challenge existing is that people will see it and make the assumption that Heartbleed is "not a big deal, because Cloudflare's keys weren't leaked".
In other words, I think this challenge is harmful to security overall; Cloudflare seems to be doing the right thing, but the messaging here is going to cause people to not follow suit.
i've locally tested the BN_div function and the sdiv buffer is intact at the end of the function with a p or q value. however, the BN values seem to be allocated in some kind of pool and they are reused so the private keys get clobbered by another BN function soon after. also, at the end of the RSA_eay_private_encrypt function the BN values are zeroed (or written over with crap data) so unless there is another implementation bug it should be impossible to leak the intermediate values in a single threaded implementation.
leaking the intermediate values in nginx to recover the private key looks like a dead end. however, i think this could be quite promising for apache mpm_worker. :)
So it looks like you might be right. Debian uses Apache mpm_worker by default, and after getting a test install running and using ab to provide load I managed to get the private key in under a minute on the first run: http://t.co/nYvIw7q4M8 (I was lucky the first time, usually it seems to take a little longer.)
for nginx it might not be possible for the intermediate data to be leaked. nginx is single threaded, so as long as the intermediate buffers are safely freed or clobbered by the end of the ssl signing method, it won't be possible to leak the key that way. i suspect multithreaded apache would pose a real risk.
I do not understand what you are trying to convey in your message. I never mentioned Cloudflare and neither did the comment I was responding to. What part of my comment led you to believe I was presenting/responding-to a claim made by cloudflare?
The article is about CloudFlare saying their keys are safe.
We were discussing the implications of someone winning the CloudFlare challenge. You suggested the fall of the challenge would confirm Neel being wrong? I was alluding to Neel's position that any key leaking is "unlikely" - a much more tenuous position than CloudFlare's.
But maybe you meant that he's already been proven wrong, and the NSA would be another? Then I did misinterpret that bit.
Too often you see lame jokes like the original comment that require believing both that the NSA is leaps and bounds ahead of the world in infosec AND that it is composed of bumbling morons. The NSA would be grossly incompetent if they knew how to retrieve the private key AND then informed the world of their offensive capability. It is a basic tenet of intelligence operations that you do not publicize your capabilities to your adversaries.
The renowned clandestine operative B. Smalls eloquently stated rule #2:
Never let 'em know your next move
Don't you know bad boys move in silence and violence?
My guess is that if this gets exploited, it is going to be used in combination with another (as yet unknown) bug. Now all we can do is sit back and hope the white hats prevail.
http://blog.cloudflare.com/answering-the-critical-question-c...
Matthew Prince Co-founder & CEO, CloudFlare