
Imagine you have an old, expired key... you take your new malicious extension and sign it with the expired key and a timestamp that says it was signed at a time when the key was still valid.

Without some other verification mechanism, you can't tell the difference between this and an actual signature made when the key WAS valid.
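A toy sketch of the problem (Python; the blob format is made up for illustration, not any real signing scheme -- the point is that the claimed time is just attacker-chosen bytes):

    # Illustration only: hypothetical blob format, not a real code signing scheme.
    import struct, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    stolen_key = Ed25519PrivateKey.generate()             # stand-in for the leaked, expired key
    key_expired_at = int(time.time()) - 10 * 365 * 86400  # say it expired ten years ago

    extension = b"...malicious add-on bytes..."
    claimed_time = key_expired_at - 86400                 # "signed" a day before expiry

    blob = struct.pack(">Q", claimed_time) + extension
    signature = stolen_key.sign(blob)

    # A naive verifier checks the signature, then checks the *claimed* time
    # against the cert's validity window. Both checks pass, because nothing
    # binds claimed_time to when the signing actually happened.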




> sign it with the expired key and a timestamp that says it was signed at a time when the key was still valid.

The whole point of a trusted timestamp is that such a signature cannot be made for a fraudulent date; otherwise it would be utterly pointless.

This scenario and threat model do not exist if timestamping is correctly implemented.


Nope.

RFC 3161 timestamps - which is what we're discussing here - can be fraudulently constructed for any timestamp value by someone who has the private key for the TSA (Time Stamping Authority). So what your parent described is easily possible (see the sketch after this list). A system that relies on RFC 3161 timestamps has to trust that:

* any cryptographic hash algorithms used remain safe

* any public key signature methods used remain safe

* the TSAs retain control over their private keys for as long as you continue to accept timestamps from that TSA
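To make the third point concrete, here's a toy model of an RFC 3161-style token (Python; a real token is a CMS/ASN.1 structure with far more in it, but the trust shape is the same):

    import hashlib, struct
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    tsa_key = Ed25519PrivateKey.generate()   # the TSA's long-term signing key

    def issue_token(message_hash: bytes, gen_time: int) -> bytes:
        # The TSA signs (time, hash); verifiers trust gen_time because the
        # TSA signed it, not because it can be checked against anything else.
        tst_info = struct.pack(">Q", gen_time) + message_hash
        return tst_info + tsa_key.sign(tst_info)

    honest = issue_token(hashlib.sha256(b"extension").digest(), 1700000000)
    # Anyone who steals tsa_key runs the exact same code with any gen_time:
    forged = issue_token(hashlib.sha256(b"malware").digest(), 1000000000)
    # Both tokens verify identically against the TSA's public key.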

This is a big ask, and in practice the code signing systems you're probably thinking of just don't care very much. A state actor (e.g. the NSA) could almost certainly fake materials for these systems; we know this has been done (presumably by the NSA or Mossad) in order to interfere with the Iranian nuclear weapons programme in the past.

You _can_ build a system that has tamper-evident timestamping, but it's much more sophisticated and has much higher technical requirements. That's what drives the Certificate Transparency system. CT logs can prove they logged a specific certificate within a 24-hour period, monitors verify that their proofs remain consistent, and the to-be-built gossip layers will allow monitors to compare what they see, to gain confidence that logs don't tell different stories to different monitors.

But to achieve this, a CT log must be immediately distrusted if it goes offline for just 24 hours, or if an error causes it to fail to log even a single certificate for which it issued a timestamp. Massive earthquake hit your secure data centre and destroyed the site? You have 24 hours to get everything back online or be distrusted permanently. Bug in a Redis configuration lost one cert out of 4 million issued? You are distrusted permanently. Most attempts to build a CT log fail the first time; some outfits give up after a couple of tries and just accept they're not up to the task.
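For a flavour of what "prove they logged a specific certificate" means mechanically, here is a sketch of RFC 6962-style Merkle inclusion-proof verification (Python; real CT entries and signed tree heads carry more structure than this):

    import hashlib

    def leaf_hash(leaf: bytes) -> bytes:
        return hashlib.sha256(b"\x00" + leaf).digest()          # RFC 6962 leaf prefix

    def node_hash(left: bytes, right: bytes) -> bytes:
        return hashlib.sha256(b"\x01" + left + right).digest()  # interior-node prefix

    def verify_inclusion(leaf: bytes, index: int, tree_size: int,
                         path: list, root: bytes) -> bool:
        # Recompute the tree root from the leaf and its audit path.
        if index >= tree_size:
            return False
        fn, sn = index, tree_size - 1
        r = leaf_hash(leaf)
        for p in path:
            if sn == 0:
                return False                 # proof longer than the tree is deep
            if fn % 2 == 1 or fn == sn:
                r = node_hash(p, r)          # sibling is on the left
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
            else:
                r = node_hash(r, p)          # sibling is on the right
            fn >>= 1
            sn >>= 1
        return sn == 0 and r == root

The consistency proofs that monitors check are built from the same tree; the hard part isn't the maths, it's the operational guarantee that the log can answer for every entry, forever.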


> can be fraudulently constructed for any timestamp value by someone who has the private key for the TSA

Sure. Which is why these are heavily secured and guarded. Just like the keys for any cert, and highly trusted root certs in particular.

Any private/public crypto system can be compromised if the private keys are leaked. Everyone knows that.

That, however, is in no way a good argument against using timestamps.


RFC 3161 timestamps are used because they let people do something that Mozilla doesn't care about at all, and which was largely irrelevant here.

Alice the OS Vendor wants to let Bob the Developer make certificates saying these are his Programs. She is worried Bob will screw up, so his cert needs a short lifetime, but her OS needs to be able to accept the certs after that lifetime expires so users can still run their Programs. So Bob makes certificates and uses Trent's public TSA, which Alice authorised, to prove they were made when they say they were. Alice only has to trust Trent (who is good at his job) for a long period, and Bob, who can be expected to screw up, gets only short-lived certificates.
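The verification rule Alice's OS ends up with looks roughly like this (a sketch; the names follow the story above, and revocation etc. is ignored):

    def accept_program(sig_valid: bool,
                       cert_not_before: int, cert_not_after: int,
                       now: int,
                       trent_token_valid: bool, trent_gen_time: int) -> bool:
        if not sig_valid:
            return False
        if cert_not_before <= now <= cert_not_after:
            return True   # Bob's cert is still live
        # Cert expired: fall back to Trent's countersignature, which attests
        # the signature already existed while Bob's cert was still valid.
        return trent_token_valid and (cert_not_before <= trent_gen_time <= cert_not_after)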

But Mozilla's setup doesn't have these extra parties. There is intentionally no Bob in Mozilla's version of the story; they sign add-ons themselves, so timestamping plays no role. If a 25-year TSA would be appropriate (hint: it would not), then a 25-year intermediate cert would be just as appropriate and simpler for Mozilla to implement.


Oh, the developers are signing their own extensions? I wasn't aware how it worked and was missing that part; I thought Mozilla signed them on upload (and thus could trust itself not to be malicious).


Well, Mozilla does the signing, but we only know that to be true as long as they control the private key. The whole point of expirations is to mitigate the risk of an older key being stolen (or cracked).

So yes, Mozilla signs the extensions, but that doesn't change the importance of keeping the private key private... that is HOW we know it is Mozilla doing the signing.


How did you get ahold of an expired key?


The whole point of key expiration is that somebody might get a hold of it (or crack it).




