PFX: How Not to Design a Crypto Protocol/Standard (1998) (auckland.ac.nz)
53 points by summoned on Jan 27, 2022 | 12 comments



A “standard” based on a proposal from Microsoft. Run through the RSA committee. Implemented by two relatively young engineers. One at Netscape. One at Microsoft.

I was on the Netscape side of the house.

I love when this article pops up.

I learned a lot about ASN.1 and BER/DER encoding in the process.


By that, do you mean you were the young engineer at Netscape who helped design this? If so, do you have any defence, with the benefit of context, as to why some of those decisions were made?

I'd be curious why things like "Encode these as bit strings" came about.


I was a junior engineer - it was 25+ years ago. The spec started with a proposal from Microsoft. I happen to agree with the critique.

I also recall trying to implement some of the nebulous things, and it was painful. The code for encoding/decoding DER/BER was hand-rolled.
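
For a sense of what hand-rolling DER involves, here's a rough Python sketch (illustrative only, not the original Netscape/Microsoft code). DER is a tag-length-value format, and the fiddly parts are minimal-length INTEGERs and the BIT STRING's leading "unused bits" octet - the same construct the "encode these as bit strings" question above is about:

    def der_length(n):
        # DER length: short form below 128, long form otherwise.
        if n < 0x80:
            return bytes([n])
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        return bytes([0x80 | len(body)]) + body

    def der_integer(value):
        # DER INTEGER (non-negative): tag 0x02, minimal big-endian
        # two's-complement body, so 128 needs a leading 0x00 octet.
        body = value.to_bytes(value.bit_length() // 8 + 1, "big", signed=True)
        return b"\x02" + der_length(len(body)) + body

    def der_bit_string(data, unused_bits=0):
        # DER BIT STRING: tag 0x03, body prefixed by a count of the
        # unused bits in the final octet - easy to get wrong by hand.
        body = bytes([unused_bits]) + data
        return b"\x03" + der_length(len(body)) + body

    assert der_integer(127) == b"\x02\x01\x7f"
    assert der_integer(128) == b"\x02\x02\x00\x80"
    assert der_bit_string(b"\xff") == b"\x03\x02\x00\xff"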


Just look into how JavaScript came about, if you want to understand how little "defence" they can realistically put forward for any choice made around that time.


Peter Gutmann's library was at one point the only code I could find to perform the minor magic of taking an existing certificate and re-signing it to produce a new certificate under your own CA.

I never understood why people insist on issuing with new private keys when an existing certificate can be used as a CSR. Shiploads of process would be simpler if you kept the same private key until it was clear you needed a new one, which is much rarer than people think in these days of keystores, YubiKeys, keychain managers and long RSA keys.

I wish it were the fabric of routine Unix cryptography rather than OpenSSL, at this stage in both codebases' lives. It's smaller. That alone, the LOC to understand, is huge.

There's stuff it can't do that hasn't been added, like ARM-optimised code, which is a shame. Maybe one day.


But a certificate isn't a CSR.

CAs can do whatever magic they want here†, but, they probably shouldn't.

If you're comfortable keeping the same keys, which, sure, in some situations is fine (although a loss of agility can hurt you if you didn't understand what you were doing and then have to change keys unexpectedly), you can use the same CSR over and over to apply for new certificates.

† There are a bunch of rules, but, not relevant to this particular issue.


If you have the CSR. If you move CA, or use the form of enrolment common in browsers, the CSR often isn't to hand.

Look, I agree fundamentally: a cert isn't a CSR. But it has the essential qualities, the critical components of the TBS (the to-be-signed data) that you needed in order to do the S part. TB | !TB is sometimes the question, inasmuch as policy lives in the CSR: why they asked to be signed, what additional data flowed (including nonces and proofs of possession), what they asked for as distinct from what was signed over. But to validate the private key's use under a new CA? You don't need it. Should you? Well, that's a policy question. If you were the original CA or RA, you have reasonable confidence in why you need to do this and all the proofs you need regarding validity. Which begs the question of why you don't still have the CSR, sure.

If you aren't in the original certifying role, then gaining confidence in the cert you're re-signing is pretty important, and the cost might be less than rekeying, assuming a CSR isn't to hand.
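
To make the "certificate as CSR" idea concrete, here's a rough sketch with Python's pyca/cryptography (the file names and the one-year validity are made up for illustration): everything a CSR would tell you - subject, public key, requested extensions - is already sitting in the old certificate's TBS, so you can lift it out and sign it under your own CA name.

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization

    # Hypothetical inputs: the certificate to re-sign, plus your own CA.
    old_cert = x509.load_pem_x509_certificate(open("old_cert.pem", "rb").read())
    ca_cert = x509.load_pem_x509_certificate(open("my_ca.pem", "rb").read())
    ca_key = serialization.load_pem_private_key(
        open("my_ca.key", "rb").read(), password=None)

    now = datetime.datetime.now(datetime.timezone.utc)
    builder = (
        x509.CertificateBuilder()
        .subject_name(old_cert.subject)       # lifted straight from the old TBS
        .public_key(old_cert.public_key())    # same key pair: no rekeying
        .issuer_name(ca_cert.subject)         # the only new authority data
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
    )
    for ext in old_cert.extensions:
        # Carrying policy across wholesale; in practice you'd replace
        # authority-specific extensions such as AuthorityKeyIdentifier.
        builder = builder.add_extension(ext.value, critical=ext.critical)

    new_cert = builder.sign(ca_key, hashes.SHA256())

One thing this doesn't give you, as the comment above notes: a CSR is self-signed by the subject key, so it carries proof of possession; a bare certificate doesn't, which is exactly where the policy question comes in.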


> By far the most widely-used character set, and the one which is easiest to work with for most implementors, is ASCII (IA5String) or latin-1 (T61String). Coming in a rather distant second is Unicode (BMPString), which has somewhat patchy support on most systems and is often difficult to convert into anything useful. Allow only Unicode strings wherever text strings are used. This has the added advantage that no 1988-level ASN.1 compiler can handle the ASN.1.

Ha, how times have changed. I'm glad we've learned that forcing US-ASCII on everyone just for programmers' ease is a bad idea, especially for an international standard. Of course, the Unicode encoding used for PFX is a rather obscure one, but at least it's Unicode...


But then they did something bad with the Unicode just to make sure that we didn't get the full benefit of using it.


> Allow only Unicode strings wherever text strings are used.

Wait, that's the proper way to design standards.


In '98, resistance to Unicode efforts was still very strong, particularly in the US.


The proper way is to use UTF-8 as the encoding for Unicode strings.
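
The difference between the string types in the thread is easy to see at the byte level. A small Python illustration (standard library only): IA5String can only hold ASCII at all, BMPString's body is big-endian two-byte code units, and UTF8String's body is plain UTF-8.

    text = "Zürich"

    # IA5String territory: b'Z?rich' - the ü simply can't be represented.
    print(text.encode("ascii", errors="replace"))

    # BMPString body: b'\x00Z\x00\xfc\x00r...' - a NUL before every ASCII char.
    print(text.encode("utf-16-be"))

    # UTF8String body: b'Z\xc3\xbcrich' - ASCII stays one byte, ü becomes two.
    print(text.encode("utf-8"))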



