A world of hurt after GoDaddy, Apple, and Google misissue 1M certificates (arstechnica.com)
267 points by jonburs on March 13, 2019 | 133 comments



Just to be clear - "misissued" in this case doesn't mean they were issued to someone who doesn't control the domain. The issue is they were issued using a 63-bit serial number instead of the minimum 64 bits. (The software these CAs were all using was generating 64 random bits, but setting the first bit to zero to produce a positive integer.)

The reason CAs are required to use 64-bit serial numbers is to make the content of a certificate hard to guess, which provides better protection against hash collisions. IIRC this policy was introduced when certs were still signed using MD5 hashes. (That, or shortly after MD5 was retired.) Since all publicly-trusted certs use SHA256 today, the actual security impact of this incident is practically nil.
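
To make that concrete, here's a minimal Java sketch of the pattern being described (my own illustration, not the CAs' or EJBCA's actual code):

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class SerialEntropyDemo {
        public static void main(String[] args) {
            SecureRandom rng = new SecureRandom();
            byte[] bytes = new byte[8];
            rng.nextBytes(bytes);                          // 64 random bits

            // Flawed pattern: force the top bit to zero so the value is a positive
            // integer that fits in exactly 8 signed bytes. Only 63 bits stay random.
            byte[] flawed = bytes.clone();
            flawed[0] &= 0x7F;
            BigInteger serial63 = new BigInteger(flawed);  // always >= 0, 63 bits of entropy

            // Keeping all 64 bits: treat the bytes as an unsigned magnitude instead.
            // The DER encoding may now need a ninth (leading 0x00) octet, but no entropy is lost.
            BigInteger serial64 = new BigInteger(1, bytes);

            System.out.println("63-bit serial: " + serial63.toString(16));
            System.out.println("64-bit serial: " + serial64.toString(16));
        }
    }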


Is there a reason why certificates are being issued using the bare minimum required number of bits, instead of something higher like 128 or 256? Why even risk being at the very edge?


There are legends that some software doesn't like serial numbers that don't fit in 64 bits. As with most legends you'll tend to hear a third-hand story that someone heard once from somebody who remembers someone else telling them. Since this came up on m.d.s.policy we have a very specific client to always keep in mind, Mozilla's Firefox, and that doesn't care, but perhaps something else does, or did, at some unspecified point in the past. Probably.

The main practical reason seems to have been that a popular application used by Certificate Authorities, EJBCA, offered an out-of-box configuration that used 63 bits (it calls this 8 bytes, because in the signed ASN.1 encoding used a positive integer that occupies exactly 8 bytes can carry at most 63 random bits). That looks superficially fine; indeed, if you issue two certs this way and they both have 8-byte serial numbers, that just suggests the software randomly happened to pick a zero first bit. It's only with a pattern of dozens, hundreds, millions of certificates that it's obvious that it's only ever really 63 random bits.

But yes, I agree the sensible thing here (and several CAs had done it) was to use plenty of bits, and then not worry about it any further. EJBCA's makers say you could always have configured it to do that, but the CAs say their impression was that this was not recommended by EJBCA...

If you could go back in a time machine, probably the right fix is to have this ballot say 63 bits instead of 64. Nobody argues that it wouldn't be enough. But now 64 bits is the rule, so it's not good enough to have 63 bits, it's a Brown M&M problem. If you can't obey this silly requirement to use extra bits how can we trust you to do all the other things that we need done correctly? Or internally, if you can't make sure you obey this silly rule, how are you making sure you obey these important rules?


Thank you all for teaching me about Brown M&Ms.

https://www.snopes.com/fact-check/brown-out/


It seems that a popular open source package that CAs run on (EJBCA) defaulted to what was nominally 64 bits (but effectively 63), and few people bothered to change it, since the default was seemingly (but apparently not actually) compliant.


Presumably there's already a big safety margin built in when using 64 bits and some computational overhead involved with increasing that to 128 or 256 bits.


This seems highly unlikely to be authoritative -- AIUI serial number unpredictability is critical to SSL certificate security, as without it, it becomes possible to induce a CA into producing a signature that matches a certificate for another domain. Unless something else changed about the format when the hash algorithm was changed, AFAIK this property is independent of the hash algorithm in use.

If memory serves, it isn't a theoretical attack either; I read about it being used against a CA (StartCom, maybe?) not so many years ago.


The signature is over all the data in the certificate. So a hash collision in the signature algorithm makes this attack possible. (And if you can predict/control serial numbers, it makes the attack much easier because then you can generate a colliding pair of one valid cert and one invalid one and get the first one signed, instead of having to find a preimage of a valid certificate.) But without a hash collision, it should be theoretically safe to have no entropy at all. Most commonly-digitally-signed objects (Git commits, software packages, etc.) have no added entropy in the object itself / the input to the hash function.


I believe this is only an issue if you can produce collisions for the underlying hash function. SHA256 is still considered safe against that.


This article really annoys me.

It's "Rage Culture" or maybe just front-page seeking by the author. The problem with that is that it makes people desensitized because if everyone is screaming all the time, one should just shut their ears. We have real issues to discuss and this isn't one of them by a long shot.

Reducing the search space from 64 bits to 63 bits is of no consequence because if an attack on 63 bits was feasible, it would mean the same attack would work 50% of the time on 64 bits (or take twice as long for 100%). That wouldn't be acceptable at all.

Sure, 64 > 63, but at the very least it's not "a world of hurt".


The "world of hurt" comes from the fact that the CAs are revoking every single one of these non-compliant certs, as they're required to by the BRs.

Even though the actual security impact is nil, the current policies in place don't allow any flexibility in how non-compliant certs are treated. Therefore, millions of customers now need to replace their certificates due to a mere technicality.


It isn't a problem in itself. It doesn't make the certificate any less secure in practice, even if we still used md5 as a hash.

The problem, however, as pointed out down-page [0] [1]:

> If you can't obey this silly requirement to use extra bits how can we trust you to do all the other things that we need done correctly? Or internally, if you can't make sure you obey this silly rule, how are you making sure you obey these important rules?

> The reason for the urgent fixes is to promote uniformly applied rules. There are certain predefined rules that CAs need to follow, regardless of whether the individual rules help security or not. The rules say the certs that are badly formed need to be reissued in 5 days.

> If these rules are not followed and no penalties are applied, then later on when other CAs make more serious mistakes they'll point to this and say "Apple and Google got to disobey the rules, so we should as well, otherwise it's favoritism to Apple and Google."

[0] https://news.ycombinator.com/item?id=19377292

[1] https://news.ycombinator.com/item?id=19375758


That's a slippery slope argument, and the answer to slippery slope arguments is "We'll address serious issues with appropriate seriousness."

This specific error isn't a serious issue, as indicated by how little impact it's had on real-world security.

It's not favoritism to Apple and Google if they emit certs with 63 bits and get minor criticism and someone else, say, stops using random numbers to seed cert generation and gets raked over the coals. The latter case would require more urgent and serious attention.


It's not a slippery slope argument, it's an applying the rules argument. The rules don't allow for a difference between more and less serious infractions, they just need to be followed to the letter.


"If you can't obey this silly requirement to use extra bits how can we trust you to do all the other things that we need done correctly?" is a slippery slope argument. The response is "We allocate testing resources proportionally to the seriousness of the consequences of failure to adhere to a requirement, as any good engineering project does."

It's probably worth noting that the problem lasted three years and wasn't discovered by an exploit in the wild, but by followup spot-checking of Google certs as a result of spot-checking Dark Matter certs. I don't think seriousness of the issue is in dispute.


It's saying, you have an extremely important job for the functioning of the Internet, that everybody has to blindly trust you do right.

The moment we see a small sign that you don't do it right in some detail, then that trust is gone.

Consider all the details in the spec to be Van Halen's brown M&Ms (although that had no functional effect, and losing a bit of security does). They knew that if people did that right, then they could trust that they also read the details of the rest. If Google gets this wrong, we can't trust on that.

That's not a slippery slope argument. A slippery slope argument would say: if we allow this, then you would go on to do worse things because we let it go. But that's not the argument.


The world of hurt is not that the certificates are insecure - the world of hurt is that by the letter of the rules, which people seem intent on sticking to, millions of certs need to be revoked and replaced for (as you point out) no good reason.

And people are sticking to the letter of the rules entirely independent of the article's author. The author is not advocating for anything to be done, just reporting that this process is already in motion.

Of course we have real issues to discuss. But the fact that all these certs are going to get revoked and require replacing is a real issue that impacts people, even if there's no technical reason for it.


They even include the phrase "Practically speaking, there’s almost no chance of the certificates being maliciously exploited.", but continue to talk about the mistake as catastrophic. Very irresponsible.


There appears to be some subtext involving a UAE state-backed CA called Dark Matter which is the real reason this is being treated so severely: https://news.ycombinator.com/item?id=19376528


Reducing the search space from 63 bits to 62 bits is of no consequence because if an attack on 62 bits was feasible, it would mean the same attack would work 50% of the time on 63 bits (or take twice as long for 100%). That wouldn't be acceptable at all.


As you know, those 50%s grow quickly. But the relevant question is "How few bits before cracking the cert takes less time than the rate of reissuance?" And the answer is "Fewer than 63."


I think you're missing their point. The time it takes to crack a key is given as an average. In reality, half of all 64-bit keys are crackable in the same amount of time or less than what it would take to crack a 63-bit key on average. So if they are saying that it's feasible to crack any 63-bit key in that timeframe, then it must also be true that it's feasible to crack around 50% of 64-bit keys in the same timeframe. Clearly that's still unacceptable.


> Adam Caudill, the security researcher who blogged about the mass misissuance last weekend, pointed out that it’s easy to think that a difference of 1 single bit would be largely inconsequential when considering numbers this big. In fact, he said, the difference between 2^63 and 2^64 is more than 9 quintillion.

Okay, but, that's because 2^63 itself is more than 9 quintillion. Where the search space was previously 18 quintillion, it's now 9 quintillion. Both of those are "big". The attack is 50% easier than "theoretically impossible before certificate expiration," which should still mean that it's impossible.
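
If you want to sanity-check that arithmetic, a quick illustration (mine, not from the article):

    import java.math.BigInteger;

    public class SearchSpace {
        public static void main(String[] args) {
            BigInteger two63 = BigInteger.ONE.shiftLeft(63);
            BigInteger two64 = BigInteger.ONE.shiftLeft(64);
            System.out.println(two63);                 //  9223372036854775808 (~9.2 quintillion)
            System.out.println(two64);                 // 18446744073709551616 (~18.4 quintillion)
            // The "difference of more than 9 quintillion" is just 2^63 itself:
            System.out.println(two64.subtract(two63)); //  9223372036854775808
        }
    }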


Or, if the safety margin here is really only one bit, we should probably increase the minimum. If 63 is unsafe today, 64 will be unsafe tomorrow.

If you discovered your AES key generator only created 127 bit keys, would you correct the mistake moving forward? Or go back and immediately burn everything with the old key? The difference between 2^127 and 2^128 is much, much more than 9 quintillion.


Moreover, a Biclique attack against AES exists: by saving some meet-in-the-middle computations, it has already reduced the full 10-round, 128-bit AES to "just" 126 bits (25% of the 2^128 effort) of security. Is it a clever attack? Yes. Does it mean the security of AES has been reduced to 25% of the original security level? No. Does it practically matter? No. This is exactly why 128-bit security is seen as a minimum standard in cryptography - it provides a more-than-adequate security margin which renders all minor speedups in cryptanalysis irrelevant.

If the 64-bit random serial number already provided an adequate security margin, it should follow that no action is needed for all existing 63-bit certificates. But it seems the choice of 64 bits here is arbitrary, without good justification...


> Does it mean the security of AES has been reduced to 25% of the original security level? No.

I'm curious why that's the case. A plain reading of reducing the security level from 128 to 126 bits would seem to imply the answer is yes?


Because going from "unbreakable in 12 billion years" to "unbreakable in 3 billion years" isn't a practical reduction in security.


But that’s still 25% of the original security...

I get that it’s meaningless - 4x effectively 0 is still effectively 0 - but denying the math doesn’t really help anything.


I agree.

The problem here is my choice of an ambiguous word, "security". Formally speaking, the "security level" or "security claim" of a cipher is defined by the computational complexity (time/memory) of breaking it, often represented as a number of bits, so the Biclique attack indeed reduced the "security" of AES to 25% of its original claim. "Security" in a broader sense can be roughly understood as "how well a system is practically protected, under a specific threat model"; in that sense, underlying details such as this minor reduction to a cipher's security claim hardly matter.

I should have edited my comment to use a better word, but it has already become permanent.


The “security” of an algorithm is not defined as the duration of time required by a computer to brute force it. Much more important is how safe it is against other known or anticipated attacks.


Brute force attacks are now 4x as effective as they were once thought to be, but they are not the limiting factor for AES' security, even at 126 bits. The most likely way for AES to be broken would be a new algorithmic innovation that worked against any key length, or a new kind of computer, or an implementation flaw, or..., and those things are not 4x as likely as they were.


Instead of measuring "How many years does it take for me to crack this?", measure "How many actors would be able to crack this?" It turns out that if you can crack 126 bits, you can crack 128, so the pool of perpetrators to fear remains the same.


It seems unlikely that there was a process that determined that 2^63 was an insufficient number of outputs, but 2^64 was just right. The choice was somewhat arbitrary in the first place.


It's 50/50.

Either you crack it or you don't.


Except that's not how probability works.


By that logic lottery tickets would be 50/50 as well; either you win or you don't. That's not how it works.


"50% easier than theoretically impossible" means it's now 50% possible, doesn't it?


Nope, the chance of success for each attempt went from 1 in 18,000,000,000,000,000,000 to 2 in 18,000,000,000,000,000,000.


That's from theoretically possible to theoretically possible.


Which is grammatically consistent with the assertion that fundamentally nothing has changed: certificates with a 63-bit serial number are unlikely to be compromised and certificates with a valid 64-bit serial number are also unlikely to be compromised.


Not 1 in 18 quintillion to 1 in 9 quintillion? I think you've got your binary math wrong.


2/18000000000000000000 == 1/9000000000000000000


2/18 and 1/9 are equal.


No more than "half of infinity" is half finite.


Infinity and "practically infinity" aren't the same thing though. Half of "practically infinity" may end up being practical.


Yes the previous value was not infinity. It was impractical to solve in a human lifetime, but if they keep trimming off a few bits it very quickly becomes practical. If actually "infinity" then dividing it by any finite number would still result in infinity, which is not the case here.


Right, which is why the specific claim here is that 63 is not a problem, not that smaller numbers in general are not a problem.

A better way to put this: instead of saying "it reduces the search space by 9 quintillion," say "it reduces the search space by 50%." Sure, that's a lot, but not nearly as much as trimming 8 bits and saying "it reduces the search space by 99.6%."


> trimming off a few bits

i.e., reduce it by close to practically infinity?


There should be a very large gap between "theoretically impossible" and "practical". If cutting the search space in half gets you from one to the other, there's probably been an error in definition.


Who knows what the future would bring?

A $32 million (1985 dollars) Cray-2 supercomputer could do 1.9 GFLOPS.

You can now get over 50x that performance for less than a grand in a device that fits in your pocket. I bet those engineers didn't expect that in half a lifetime.


That's rather beside the point. If 63 bits is insecure, then 64 bits is also insecure. If I can brute force 63 bits in a week, I can brute force 64 bits in 2 weeks. If we are worried that 63 bits is a security issue, then the solution isn't increasing to 64 bits, it's increasing to 96 bits, or 128 bits.


Moore's law was described in 1965 and the experimental evidence lined up for well past the next two decades. If you handwave exactly what it means to "everything is 2x better every 1.5 years," we'd expect a factor of 2^(30 / 1.5) = 1 million by 2015, so having a factor of 100,000x in cost and having it fit in your pocket wasn't actually unexpected.

Certainly any cryptosystems designed in 1985 that wanted to encrypt data until today should have taken the most aggressive form of Moore's Law into account.


I’ll worry about that next time I issue a 30 year certificate.


Nassim Taleb would probably disagree. ;)


What an incredible non-story burying an actual real and terrifying story.

The crux of this entire issue is a company known as Dark Matter, which is essentially a UAE state sponsored company, potentially getting a root CA trusted by Mozilla.

It's highly suspected that Dark Matter is working on behalf of the UAE to get a root trusted certificate in order to spy on encrypted traffic at their will. Everyone involved in this decision is at least suspect of this if not actively seeking a way to thwart Dark Matter.

Mozilla threw the book at them by giving them this technical hurdle about their 63-bit generated serial numbers - which turned out to be an issue that a lot of other (far more reputable) vendors also happened to have.

Should it get fixed? Ya, absolutely.

Is it nearly as big of a deal as giving a company like Dark Matter, who works on behalf of the UAE, the ability to decrypt HTTPS communication? Not even close - that is far scarier, and much more of a security threat to you and me. It's pretty disappointing that this is the story that arstechnica runs with instead of the far more critical one.

The measure of what makes a trustworthy CA is things like organizational competency and technical procedures. These are things that state-level actors easily succeed in. There is no real measure in place for the motives and morals of state-level actors. That should be the terrifying part of this story - anyone arguing about the entropy of 63 or 64 bits is simply missing the forest for the trees in this argument.


> It's highly suspected that Dark Matter is working on behalf of the UAE to get a root trusted certificate in order to spy on encrypted traffic at their will.

This is false. DarkMatter already operates an intermediate CA, so _if_ this were something they were actually planning to do they wouldn't need a trusted root CA to do it. So far, there's been no evidence presented that DarkMatter has abused their intermediate cert in the past, or that they plan to abuse any root cert they might be granted in the future.


Presumably 64 bits were originally chosen because it still permitted simple or naive ASN.1 decoders to return the parsed value as a native 64-bit type. But ASN.1 INTEGERs are always signed, so these serials would now have to be 65 bits. But any ASN.1 decoder interface that permitted directly storing a 65-bit value into a 64-bit type--even an unsigned type--is dangerous if not broken. I'm guessing that most X.509 management software (much like my own) simply maintains the parsed serial as a bignum object.

Serials were originally intended for... well, for multiple purposes. But if they only function today as a random nonce, and if they're already 65 bits, then they may as well be 128 bits or larger.

A randomly generated 64-bit nonce has a 50% chance of repeating after 2^32 iterations. That can be acceptable, especially if you can rely on other certificate data (e.g. issue and expiry timestamps) changing. But such expectations have a poor track record which you don't want to rely on unless your back is against the wall (e.g. as in AES GCM). Because certificates are already so large, absent some dubious backwards compatibility arguments I'm surprised they didn't just require 128-bit serials.
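
As a rough check on that birthday figure, here's a tiny sketch using the standard approximation (my own illustration; the true 50% point sits a little past 2^32 draws):

    public class BirthdayBound {
        public static void main(String[] args) {
            // With k uniformly random n-bit serials, P(at least one repeat) ~ 1 - exp(-k^2 / 2^(n+1)).
            double n = 64;                          // bits of serial entropy
            double k = Math.pow(2, 32);             // ~4.3 billion certificates issued
            double p = 1 - Math.exp(-(k * k) / (2 * Math.pow(2, n)));
            System.out.printf("P(repeat) after 2^32 64-bit serials: %.2f%n", p);  // ~0.39
        }
    }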


Yes, now certificates are about half as hard to hack as they were supposed to be.


Well, it depends on what you mean by "hack."

The attack that we're talking about here isn't breaking a signature, but relies instead on being able to manipulate certificate data to generate a certificate with a known hash. That hash must collide with another certificate hash, which would then let you generate a rogue certificate.

A team demonstrated that this attack was possible: they issued a rogue cert by predicting the not_before and not_after of the certificate that would be issued, predicting the serial of the issued cert, and finding an input for the rest of the cert fields which caused a collision.

https://www.win.tue.nl/hashclash/rogue-ca/

So, yes 128 bit serials would be better, but we should be safe even at 63 bits of entropy.


Does that mean that twice as many will be cracked?/s


Seems like a lot of hand-wringing over nothing; security is done with huge factors of safety (moving to 256-bit keys when no one had ever broken a 128- or even 96-bit key). It's hard to imagine that 1, 2, or even a quarter of the bits couldn't be zeroed.

> it’s easy to think that a difference of 1 single bit would be largely inconsequential when considering numbers this big. In fact, he said, the difference between 263 and 264 is more than 9 quintillion.


In fact, without a practical attack against SHA256, all of the serial number bits could be zeroed. This is undesirable for other reasons, but the serial number isn't part of the cryptographic security of the certificate except as far as it can be used to prevent the person requesting the certificate from anticipating or controlling what the entire signed data will be.


Well not _all_ the bits. We do want the serial numbers to be non-identical because you need a way to talk about specific certificates for validity checking. Once upon a time bug reports would have focused on certificate serial numbers; these days they're more likely to be crt.sh links, but arguably we should discourage that because crt.sh could go away some day.


Yep, that's what I mean by "for other reasons". (Without distinctive serial numbers or crt.sh, we would probably have to attach PEM copies of the certificate in every discussion about it.)


Given that there seems to be no security impact (and none expected in the next year or two)...

Curious why everyone doesn’t agree to use 64 bits in future and just let the mis-issued certs live out their natural life?

Seems to create a lot of busywork for lots of people for no discernible benefit?


The reason for the urgent fixes is to promote uniformly applied rules. There are certain predefined rules that CAs need to follow, regardless of whether the individual rules help security or not. The rules say the certs that are badly formed need to be reissued in 5 days.

If these rules are not followed and no penalties are applied, then later on when other CAs make more serious mistakes they'll point to this and say "Apple and Google got to disobey the rules, so we should as well, otherwise it's favoritism to Apple and Google."


No idea and completely unsourced, but one of the site comments states this:

> 4) This only came up because of DarkMatter, a very shady operator who most people are very happy to have an excuse to screw with technicalities.

Edit: maybe these are sources?

https://bugzilla.mozilla.org/show_bug.cgi?id=1531800

https://groups.google.com/forum/#!msg/mozilla.dev.security.p...

Still not getting the whole picture.


The basic story as I understand it is that DarkMatter under contract to the United Arab Emirates wants to become a trusted CA, and they are widely expected to start running a governmental MITM once trusted, but the CA root programs don't have any provision for "You're a bunch of sketchy creeps, we don't trust you." (Oddly enough for a "trusted" root program, there is generally no actual evaluation of trust as conventionally defined. The "trust" part is "can you pass audits and generally be technically and organizationally competent to not let your private key be stolen / your infrastructure be abused by an attacker." Individual employees are part of the threat model, so there's usually a two-person rule for access to the private key; entire malicious organizations willing to lie in public and cover their tracks are not envisioned by the model.) So people are trying to block their application by nitpicking technical mistakes that, by the letter of the Baseline Requirements, disqualify you from being a CA.

https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups... covers some background on DarkMatter.

One of the Baseline Requirements is you may not issue certs with fewer than 64 bits of entropy. Turns out DarkMatter was doing that, by issuing certs with 63 bits of entropy. Also turns out this was a thing lots of CAs did. Now that it's been pointed out publicly....


It's amazing that we anticipate having to revoke malicious CAs as a crucial part of a security model, yet we have basically no plan to ensure that we don't accept a competent-but-malicious CA into the fold in the first place.


Competency in this case can be objectively reinforced, but maliciousness requires one to decide who is “bad” and who is “good”, which is not a technical problem.


That's a bit misleading I think. There's no evidence whatsoever that DarkMatter plans to abuse their CA to MITM users. DarkMatter has been in possession of a trusted intermediary certificate for years now, so _if_ that was something they wanted to do they could have done it a long time ago.

The reason people are concerned about DarkMatter is that they have (allegedly, they seem to be denying this) previously developed and sold software that can be used to MITM connections (though not by abusing any CA certificates), and that this software has been used for less-than-noble purposes.

So yes "You're a bunch of sketchy creeps, we don't trust you." is an accurate assessment of some people's opinions towards DarkMatter, but "widely expected to start running a governmental MITM once trusted" is inaccurate.


The whole CA system is fundamentally broken.

When you point a virgin browser to a new ssl endpoint the user should be presented with the certificate and a list of certificate chains that imply trust in the certificate. At that point you should decide which certificate to trust or not. This can be

- only the end certificate (because you verified the hash),

- some intermediate certificate or

- some/all root certificates (that come with the browser).

Obviously the last option is stating “I’m incompetent and/or blindly trust the browser”. Unfortunately it is the default and the software doesn’t help you to manage certificates you trust in a reasonable way.

For me it would be okay to turn off dumb mode during installation. As a start, the green address bar could be used for these user-trusted certificates (instead of for EV).


Not obvious to me at all. I would say that believing you can manually verify hashes in a trustworthy way is incompetent. Where do you get the hashes to compare against from?


You get the hashes you trust from the counterparty that you trust. I.e. your bank could print it everywhere.

It’s not less obvious than just trusting your browser vendor.

EDIT: Also note that in the presented approach you can still trust some root CAs. It’s just that the user has to do it explicitly.


I’d like to be able to limit certain privately imported root certificates to specific domains — that would be a valuable feature in a browser to protect against corporate hijacking.

However for the average person what you propose is meaningless.


I know nothing about DarkMatter so this may or may not be justified, but I just want to make the point that they could be kicked out if they actually did make MITM certs. There are certificate comparison programs that try to spot them.


In theory, yes. In practice, letting them in and then kicking them out still lets them do damage: certificate revocation doesn't work in the presence of MITMs (and in the absence of MITMs, you don't really need certificates...) as described at https://www.imperialviolet.org/2011/03/18/revocation.html , so allowing the CA into the program allows them to keep conducting attacks even if revoked. There are browser-specific revocation-like things like Firefox's OneCRL and Chrome's CRLSets (and there's always straight-up browser updates), but from a network perspective, they're as blockable as actual revocation sets. So if the threat model is a nationwide MITM by the government, it won't help you.

You also need the recipient of the MITM cert to notice it and report it. It's generally hard to MITM an entire nation's traffic, for reasons of computational overhead. So instead you let people browse the web normally, and you deploy MITMs against specific targets for specific sites for limited times. It's probably easy for the MITM to do this in a way that avoids the victim noticing that the cert is illegitimate, and also probably easy for the MITM to prevent tools that report suspicious certificates from sending that report to the internet at large.

(Also, if your threat model is a malicious lying CA, things get much harder under the current practices: a CA has actually said "Oh, that was an internal test certificate for google.com, it didn't actually go anywhere, but also we've fired the employees who thought issuing a test cert for google.com from the prod CA was a good idea" and not been revoked. So if you get caught, just say something like that and don't fire anyone, and there's a nonzero chance you won't get kicked out.)


Once kicked out (due to certificate transparency or due to being found out, à la DigiNotar) the next browser update will remove them, and the CT people won't deal with them.

Doesn't Chrome now require CT?

Not great, but it doesn't rely on CRLs or other broken systems.

> It's generally hard to MITM an entire nation's traffic, for reasons of computational overhead

Isn't that what Iran did with DigiNotar?


Are there any that cover the one-in-a-million targeted MitM scenario?

My understanding of current cert transparency efforts was that they wouldn't catch "we fingerprinted your connection, identified you, and are just injecting a malicious cert for you" scenarios.

And were more targeted at the "rogue / misconfigured CA signing half the internet to any client" mishap.


Mandatory CT does actually solve that: if a browser won't trust a cert without seeing it include a signed certificate timestamp from a trusted log, then the CA has to disclose certs, even if they're only targeting one user.

But most people don't have e.g. Expect-CT set up, so it's not clear it would help on a majority of sites.

(One reasonable option would be to require certs from DarkMatter, and really every CA going forward, to have SCTs in their certs, and enforce that with a flag in the root store. But if there's a concern about DarkMatter specifically, it's probably better to phrase a change to the root store policies that say "We won't accept CAs we just don't trust" instead of waiting for them to misbehave and then rescinding their membership.)


> it's probably better to phrase a change to the root store policies that say "We won't accept CAs we just don't trust" instead of waiting for them to misbehave and then rescinding their membership

Unless you can define the policies up front that's a very risky road to go down. Why refuse to trust DarkMatter, but not refuse to trust China Bank?


This is the CAB Forum rationale for serial number entropy[1]:

> As demonstrated in https://events.ccc.de/congress/2008/Fahrplan/attachments/125..., hash collisions can allow an attacker to forge a signature on the certificate of their choosing. The birthday paradox means that, in the absence of random bits, the security level of a hash function is half what it should be. Adding random bits to issued certificates mitigates collision attacks and means that an attacker must be capable of a much harder preimage attack. For a long time the Baseline Requirements have encouraged adding random bits to the serial number of a certificate, and it is now common practice. This ballot makes that best practice required, which will make the Web PKI much more robust against all future weaknesses in hash functions. Additionally, it replaces “entropy” with “CSPRNG” to make the requirement clearer and easier to audit, and clarifies that the serial number must be positive.

[1]: https://cabforum.org/2016/03/31/ballot-164/
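
In practice, one way to satisfy that requirement (a hedged sketch of my own, not any particular CA's code) is to draw well over 64 bits from a CSPRNG, so the positivity constraint never eats into the required entropy:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class CompliantSerial {
        private static final SecureRandom RNG = new SecureRandom();

        // Draw 128 random bits (an arbitrary width comfortably above the 64-bit minimum).
        // new BigInteger(numBits, rng) is uniform in [0, 2^128) and never negative;
        // rejecting zero keeps the serial strictly positive as RFC 5280 requires,
        // and 16 octets stays under RFC 5280's 20-octet limit.
        static BigInteger newSerial() {
            BigInteger serial;
            do {
                serial = new BigInteger(128, RNG);
            } while (serial.signum() == 0);
            return serial;
        }

        public static void main(String[] args) {
            System.out.println(newSerial().toString(16));
        }
    }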


Theoretical possibilities and minimal security impacts aside, I'm not seeing comments along the lines of the brown M&M clause [0]. Yeah, brown M&Ms weren't going to ruin the day of David Lee Roth, but that wasn't the point: when dealing with the heavy and high-amperage equipment of a stage show, what else did you forget or ignore?

64 bits, 63 bits, what's the difference? The difference is that we now have to go through everything you might have forgotten that will make a difference. In other words, we apparently can't trust you to follow instructions, and certificates are all about trust.

[0] https://www.snopes.com/fact-check/brown-out/


Ok, I’m all for strong security and better SSL infrastructure, but the response to this issue was just totally overboard. The issue - one fixed bit in a 64-bit randomized serial field - does not compromise the security of these certs in any meaningful way, especially not before their natural expiry dates anyway.

The disruption caused by reissuing everything surely exceeded the disruption of this theoretical issue. I guess, on the plus side, we get to find out whether the PKI infrastructure is ready for a mass revocation/replacement event...


It's not about whether it compromised security; it's that they didn't adhere to standards. If you're a certificate authority, you need to conform to standards. If you're not, you SHOULD get evicted as an authority, like DigiNotar [1] was for example.

[1] https://en.wikipedia.org/wiki/DigiNotar


I don't think you can compare misissuing certificates, including *.google.com, to leaving one bit out of 64 marked as 0.


Personally I hate EJBCA.

Recently they stopped releasing new updates for the community edition (stuck at 6.10, while 7.0.1 is out) because they are a really greedy company.

Building it yourself is half a nightmare, and so is the installation process, which relies on Ant tasks that fail 5 times out of 10.

As for the UI, most of the settings can be really misused, and even their evangelists can get fooled by it (especially with their Enterprise Hardware Instance, whose synchronization across the nodes is also faulty).


Sooooo all the big players depend on one CA PKI package, EJBCA - is that not a major concern?


That seems like the correct state of things. More packages means more possibility of bugs. We want to trust as little code as possible.

Now if only the same policy would be applied to CAs (possibly a few to mitigate abuse of power concerns, but far less than are in my trust store today).


Counterpoint (which I'm not fully convinced of myself, to be fair): CAs are supposed to be interchangeable and easy to revoke. While the CA ecosystem as a whole must be robust, no individual CA can be too big to fail. If a serious bug is found in software used by one or a few CAs (imagine something like the Debian OpenSSL bug from 11 years ago), revoking them and requiring customers to move to other CAs is feasible. If a serious bug is found in software used by all CAs, you can't revoke all the certs on the web and leave HTTPS useless globally while CAs set up new software.

On a tangent: one practice I'd genuinely like to see for security reasons (and which I'm surprised the CAs haven't proposed themselves, since it would make them twice as much money) is that major sites should always hold valid certs from two CAs, so that if a CA gets revoked it's just updating a file or even flipping a feature flag and certainly not signing up with a new CA. It would make sense to have two certs generated by different software, then. (It might also make sense, re abuse of power concerns, to present both certs and have browsers verify that a site has two valid certs from two organizationally-unrelated CAs. That way you can be significantly more confident that the certs aren't fraudulent.)


Would two signatures on the same cert fit the bill?

Two complete certs is twice as much data to transmit, making the TLS setup a bit heavier.


A typical webpage is something like 2MB

A typical cert is 0.1% of that


Lots of things that use TLS are much smaller than a bloated webpage, for example REST APIs.


EJBCA is popular but it's hardly a monoculture, especially among the larger CAs that can afford to do their own thing. Let's Encrypt doesn't use it, and I'm pretty sure a significant number of the other bigger CAs don't.


Well, it will come out who was using it since they will have to disclose this.

I don't think Digicert/Symantec are using it


Typically with crypto you want to stick with one major industry standard implementation that is strenuously verified. It's probably more concerning if everyone's using their own.


Suppose so but doesn’t it become one well to poison? It just surprises me a bit (mainly because I was NOT familiar with EJBCA and have moderate awareness of PKI)


I suspect a lot of people expect OpenSSL or LibreSSL to be used in this kind of setups.


CT principles would surely demand they do some public facing declaration?

The 'pull the certificates from the browsers' thing demands people from these companies maybe recuse themselves from conversations?

(this is public trust process stuff, not technology per se)


I don't understand what you're asking.

Many of the affected CAs have already come out and "confessed" that they've issued non-compliant certs and stated that they're revoking them.

No certificates are being "pulled from browsers" as a result of this incident as far as I know.


Is this a consequence of Java's failure to expose unsigned integer types?


From that write-up, I'd call that a bug in EJBCA more than a "misconfiguration". If it was working as designed, then its design was buggy. :)


“Almost no chance of exploitation.”

How true is this?


High-entropy certificate serial numbers are a defense against hash collision attacks. The margin of security is reduced to the point that a brute force attack is twice as easy. If it was going to take you 10,000 years to brute-force it, now it takes you 5,000, both of which round to "impossible." If it was going to take you four weeks on EC2, now it takes two weeks, both of which round to "entirely too easy."


True.

2^63 and 2^64 are effectively the same cost to break. Instead of costing $2X to break, it now costs $X.


This is an anti-collision measure against birthday attacks. The effect is exponential.


2^x is exponential. GP is correct, it's still only one bit, so the cost is halved.


when X > $1M (maybe even larger) it really doesn't


They're "the same cost" because anyone with account to $1M or $1B to break a cert generally also has access to $2M or $2B. No reasonable threat model includes defending against attackers that have 50% of the necessary capital to conduct an attack but not more.


Objection! There is one such threat model: content protection schemes (like BD+) are that finely-calibrated. The goal is to be secure enough for the new release window, but to accept failure after that point.

You're totally right here, I'm just nerding out on threat models and security economics.


I think this is a general fallacy held by a lot of people.

The notion that someone who has access to X amount of funds for a given task automatically has 2X and can also afford to spend 2X on the given task is not necessarily true, so such claims are generally baseless.

What is most interesting is that these claims are generally about non-exact amounts, so by the same logic, if you can afford X then you can afford 2X, which also means you can afford 4X, and 8X, ad infinitum.

In practice, a 2X difference in the majority of real-life cases concerning substantial amounts of resources is by definition substantial and far from trivial.


I'm not sure but I think you're trying to say I'm wrong. In the general case it's wrong, of course, to say that being able to afford X implies being able to afford 2X: few people could afford 2x their rent or a house 2x the price of theirs, etc. Few people would fail to be meaningfully affected by getting 2x (or 1/2x) their salary.

But I'm talking specifically about cryptographic threat models. No reasonable threat model says, conducting this attack takes $100,000, and since most people don't have $100,000 in savings it's safe, because defending against "most people" isn't meaningful. A reasonable threat model says either, conducting this attack takes $100,000 so we're going to add an additional layer of security because it's a realistic attack, or conducting this attack takes $100,000,000,000,000. In such a threat model, if the numbers change by a factor of two in either direction (either through a one-bit error like this, or through macroeconomic trends, or whatever), it doesn't change the analysis.

And in particular the claims here are in fact about exact amounts: a factor of two, or one bit. Cryptographers tend to measure things very precisely in bits. There's usually no good reason for a particular choice (64 is not a magic number here, it's just a convenient number for computers), but the analysis is still done with that particular choice. You can measure the difficulty of attacking a problem with N bits of entropy, and then add a heavy margin on top, and be very clear about what that margin is. Once you've done that, N-1 becomes probably reasonable, and you can argue precisely about why it's reasonable; you can argue equally precisely that N-5 is questionable and N-10 is not reasonable, and that the arguments are not recursive.


> No reasonable threat model says, conducting this attack takes $100,000, and since most people don't have $100,000 in savings it's safe

Sure, that is never the claim.

> And in particular the claims here are in fact about exact amounts: a factor of two

Sure, but that is still a factor of X, an unknown amount.

The bottom line is that for many actors, even nation states, the cost difference between 20M and 40M might mean that they have to seek alternative options. Not every actor has access to an infinite amount of USD or compute.


And my claim is that if your threat model depends on an attacker who can afford $20M being unable to afford $40M, your threat model is flawed and you've already lost. They might have to seek alternative options. They might not. They might just be able to issue $20M of bonds, who knows. They might have a strong economy next year and the attackerbucks-to-USD exchange rate might double. If you need to defend against an attacker with $30M in the bank, make the attack cost $30B or $30T.

And the neat thing about crypto is that it's easy to do: just increase the amount of entropy involved. A mere ten more bits make a brute-force attack cost 1000x as much. If we're genuinely worried that 63 bits is too small, ditch the 64-bit requirement and make it 128-bit. (Probably phrase it as 120-bit, so people can use UUIDs and whatnot - the point is that 120 is still clearly more than enough, not near the borderline.)


> And my claim is that if your threat model depends on an attacker who can afford $20M being unable to afford $40M

But is it? I think the underlying claim is that 2X difference doesn't matter, which is patently false.


2X difference doesn't matter to a reasonably constructed cryptographic threat model. Any threat model for which a 2X difference is meaningful is already flawed. I'm not saying a 2X difference doesn't matter in general. I'm saying a reasonably constructed cryptographic threat model is going to consider attacks as either "worth worrying about" or "not worth worrying about", and any maybes, like the possibility of an attacker who already controls $20M finding another $20M, fall in the "worth worrying about" bucket.


A 2X difference from baseline does not make a meaningful difference in who can attack you.


It could make the difference of mounting a hash collision before a certificate expires or after (2X time), if the attack doesn't yield to parallelism and time becomes a limiting factor.


The claim was about a 1 bit reduction in entropy. A scenario like that definitely acts differently, but it's not searching a space either; reducing a guaranteed calculation time by 2x is not really comparable to a loss of 1 entropy bit.


A signed integer with 64 unsigned bits isn't even convenient for computers.


The true cost of Java not supporting unsigned integers


In an X.509 certificate, the serial number is encoded as the ASN.1 integer type, which is arbitrary length. So that can't map to a native integer type on any platform.

I'd chalk this up to the author of the relevant module not really grokking the two's complement behavior in java.math.BigInteger.
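
For illustration (my own example, not the module's actual code), the gotcha looks like this:

    import java.math.BigInteger;

    public class TwosComplementDemo {
        public static void main(String[] args) {
            byte[] eightBytes = new byte[] {
                (byte) 0x80, 0, 0, 0, 0, 0, 0, 0             // top bit set
            };

            BigInteger signed   = new BigInteger(eightBytes);     // read as two's complement -> negative
            BigInteger unsigned = new BigInteger(1, eightBytes);  // signum 1 -> unsigned magnitude

            System.out.println(signed);                           // -9223372036854775808
            System.out.println(unsigned);                         //  9223372036854775808
            System.out.println(unsigned.toByteArray().length);    // 9: the positive value needs a leading 0x00 octet
        }
    }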


Again and again, the problem with PKI is not the tech, but the agents. We need an authorityless solution.


It works fine until some bug arises and nobody has the authority to fix it...


The interesting aspect that a lot of people are overlooking is that, for a theoretical attack within certain timeframes, this difference can be make-or-break!

Imagine a collision attack that takes about 1 year with 64-bit serial numbers; with 63-bit serial numbers it should take about half that, 6 months.

The average certificate is issued for about 1 year, so being able to mount in 6 months a collision attack that would otherwise take 1 year can make the difference between generally-not-useful and very practical and dangerous.


In your scenario half of the 64 bit certificates could be brute forced in 6 months anyway.


Why do you assume that an attack would take 1 year, and not (e.g.) a billion years? A factor of two is only interesting if the number you're dividing was interesting in the first place.


Imagining is hardly assuming. But it doesn't have to be exactly 1 year: any attack that, with 64-bit serial numbers, takes longer than the average certificate lifetime but less than twice it (useless) becomes practical with 63-bit serial numbers (useful, for a strange meaning of useful).


Such an attack doesn't exist.

Any such attack would also become feasible with twice the budget.


> Such an attack doesn't exist.

As far as we know.

> Any such attack would also become feasible with twice the budget.

Assuming that the attack yields to parallel computing and scales linearly with more CPUs/cores; a sequential computation is bound by current compute capabilities and ultimately by theoretical limits like Bremermann's limit and the Margolus–Levitin theorem.


Yeah, assuming these true things.


Assuming the parallelism of an algorithm that you know nothing about is beyond foolish.


Right, which is why we know things about the algorithm.



