
Clear the highest bit, set the next highest bit, and clear the three lowest bits. This process is known as "key clamping", and ensures that the private key is in the right range and is not vulnerable to a couple of specific attacks.

https://neilmadden.blog/2020/05/28/whats-the-curve25519-clam...
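
A minimal sketch of that clamping in plain C (mine, not from the linked post), operating on the 32 little-endian bytes of an X25519 secret scalar:

    #include <stdint.h>

    /* Clamp a Curve25519/X25519 secret scalar in place. */
    static void clamp25519(uint8_t key[32]) {
        key[0]  &= 0xF8;  /* clear the three lowest bits  */
        key[31] &= 0x7F;  /* clear the highest bit        */
        key[31] |= 0x40;  /* set the next-highest bit     */
    }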




Starting to sound a lot like your description of RSA, just more positive about it.


key[31]=(key[31]&63)|64;key[0]&=248;

is radically simpler and not remotely comparable to the requirements for RSA key generation. Moreover, RSA key generation is massively slow-- enough to be irritating for users even on fast computers, so there is a lot of incentive to 'optimize' key generation and introduce complexity that results in bugs.

(do not use; I just implemented what the GP poster said, didn't check whether what he said was correct, but it sounds right)


> Moreover, RSA key generation is massively slow--

Not only is it extremely slow, but it's also non-deterministically slow -- there's no constant-time way to randomly generate an RSA key, because searching for suitable primes is a "guess-and-check" process.

ECC key generation, on the other hand, is so fast that it's perfectly feasible to use it as part of the session negotiation process (e.g. in ECDHE).
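
For a sense of scale, an ephemeral X25519 key pair is just 32 random bytes plus one fixed-base scalar multiplication -- cheap enough to do once per handshake. A sketch (assuming libsodium; names and error handling are mine):

    #include <sodium.h>

    /* Generate an ephemeral X25519 key pair for one handshake. */
    int ephemeral_x25519(unsigned char pk[crypto_scalarmult_BYTES],
                         unsigned char sk[crypto_scalarmult_SCALARBYTES]) {
        if (sodium_init() < 0)
            return -1;
        randombytes_buf(sk, crypto_scalarmult_SCALARBYTES);
        /* libsodium clamps the scalar internally, as described upthread */
        return crypto_scalarmult_base(pk, sk);  /* pk = sk * base point */
    }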


> (key[31]&63)|64

You confused me here; this looks like you're clearing the highest bit and then setting the highest bit... in a 7-bit integer. Of course it makes more sense that you're clearing the two highest bits and then setting the second-highest bit of an 8-bit integer, which is indeed the same thing as clearing the highest bit and then setting the second-highest bit.

But why have you written it to fiddle the second-highest bit twice? Why not just write `(key[31]&127)|64`?

(And then of course, why not use the hexadecimal form of your bitfield constants so people can tell what's going on? No need to count bits that way.)
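
For what it's worth, a quick brute-force check (mine) that the two spellings agree on every byte value -- the |64 re-sets the one extra bit that &63 clears:

    #include <assert.h>

    int main(void) {
        for (int x = 0; x < 256; x++)
            assert(((x & 63) | 64) == ((x & 127) | 64));
        return 0;
    }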


just because 63 was a shorter constant to type than 127. Hex would have been longer due to the prefix. :P


Thank you for that self-analysis! I got lost on that part, so I will now stop writing ping 127.1 just to save four bytes. I've always felt that people learnt things by seeing that, but now I know I'm too lazy to bother learning myself.


Explaining all of the ways an RSA key can be weak, and what you have to do to avoid generating one, would take a lot longer.


This is not really fair. The correct way to mint RSA keys is actually pretty simple; a naive approach works fine - it's just that doing this is slow and when people try to do something fast they keep making keys which fail the criteria you listed.

The tests you've advocated make sense if somebody else picked the keys and you're worried about whether they did a good job; some of these tests are mandatory for a Web PKI Certificate Authority checking RSA public keys, for example (the CA only has your public key, so it can't check everything). But if you are picking the keys yourself, you really can just use the naive approach. The odds of "accidentally" getting two factors that are unduly close or very smooth are negligible.


> The odds of "accidentally" getting two factors that are unduly close...

... are actually pretty high with some poorly conceived key generation algorithms.

There are other ways that RSA key generation can be weak, such as ROCA (https://en.wikipedia.org/wiki/ROCA_vulnerability).


> ... are actually pretty high with some poorly conceived key generation algorithms.

To be fair, that's what they said:

> > a naive approach works fine - it's just that doing this is slow and when people try to do something fast they keep making keys which fail the criteria you listed.

The naive way to generate an RSA key is to pick a uniformly random 1024-bit (or per your security parameter) integer, test if it's prime, and if it's not throw it out and pick a new, unrelated, 1024-bit integer, and repeat until you have two to multiply together. This will, probably, produce a secure RSA key. Definitely upwards of half of the time, assuming your primality test has a sufficiently low false-positive rate.
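
For concreteness, here's what that loop looks like (a sketch assuming OpenSSL 3.x's BIGNUM API; illustration only, not audited code):

    #include <openssl/bn.h>

    /* Rejection-sample a probable prime: draw a fresh uniform candidate,
       test it, and throw it away if the test fails. */
    static BIGNUM *naive_prime(int bits, BN_CTX *ctx) {
        BIGNUM *cand = BN_new();
        int r;
        if (cand == NULL)
            return NULL;
        do {
            /* BN_RAND_TOP_ONE forces the most significant bit on (a "1024-bit"
               integer in the sense clarified further down the thread);
               BN_RAND_BOTTOM_ODD would be the distribution-preserving tweak
               mentioned in the next paragraph. */
            if (!BN_rand(cand, bits, BN_RAND_TOP_ONE, BN_RAND_BOTTOM_ANY)) {
                BN_free(cand);
                return NULL;
            }
            r = BN_check_prime(cand, ctx, NULL);  /* 1 = probably prime */
        } while (r == 0);
        if (r != 1) { BN_free(cand); return NULL; }  /* -1 = internal error */
        return cand;
    }

    /* p = naive_prime(1024, ctx); q = naive_prime(1024, ctx); n = p*q; ... */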

It's just ('just') painfully slow, so people almost always optimize it. There are some optimizations that provably don't change the distribution of primes (the most obvious being setting the least significant bit so the number is never even), but history has shown that any optimization that doesn't exactly and exactingly preserve the distribution almost always turns out to introduce exploitable vulnerabilities.

Of course, even if you do it wrong it's still slower than barely-optimized ECC, and even if you do it right it still has vastly less security than an ECC key a quarter of its size.


>any optimization that doesn't exactly and exactingly preserve the distribution almost always turns out to introduce exploitable vulnerabilities

Not a crypto expert, but aren't there changes to the distribution that you actually want to make, like having a lower size limit for each of the factors?


> aren't there changes to the distribution that you actually want to make

Yes, but that isn't an optimization; it's a deliberate change in semantics.

> like having a lower size limit for each of the factors?

For some reason, an annoying amount of cryptography literature uses "1024-bit integer" to mean "an (arbitrary-precision) integer whose most significant set bit is in position 1023". I probably should have phrased that better to avoid confusion with "an integer stored in 1024 bits of memory".
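
A tiny illustration of that convention (a sketch assuming OpenSSL's BIGNUM API): forcing bit 1023 on means every value lies in [2^1023, 2^1024), so the reported bit length is always exactly 1024.

    #include <openssl/bn.h>

    /* "1024-bit integer" in that sense: most significant set bit at position 1023. */
    int demo_1024_bit_convention(void) {
        BIGNUM *n = BN_new();
        BN_rand(n, 1024, BN_RAND_TOP_ONE, BN_RAND_BOTTOM_ANY);
        int bits = BN_num_bits(n);  /* always 1024 with BN_RAND_TOP_ONE */
        BN_free(n);
        return bits;                /* could be anything up to 1024 with BN_RAND_TOP_ANY */
    }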


>For some reason, ...

Ah, that clears it up, thank you!


It's two bitwise operations



