
As far as I can tell, this is what they're doing: if two different keys have a factor in common (e.g. A=PQ, B=PR), then you can use Euclid's algorithm (which needs nothing more than repeated subtraction, and is thus really cheap) to find P (=gcd(A,B)), and then just use division to find Q (=A/P) and R (=B/P) easily.
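
A minimal sketch of that idea in Python, with made-up toy primes standing in for the real 512-bit ones:

    from math import gcd

    P, Q, R = 61, 53, 59     # toy stand-ins for the secret primes
    A = P * Q                # first public modulus
    B = P * R                # second public modulus, sharing the prime P

    shared = gcd(A, B)       # Euclid's algorithm finds the common factor P
    q = A // shared          # plain division recovers the rest of each key
    r = B // shared
    print(shared, q, r)      # 61 53 59 -- both moduli fully factored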

So what the researchers did, apparently, was to gather all the RSA public keys they could find (6 million or so) and then calculate the gcds of all pairs of moduli. (Presumably they were rather more clever than a brute-force pairwise scan, but it doesn't really matter--even doing it that way is still vastly easier than factoring a single public key.) Whenever they found a gcd that wasn't equal to 1 (or to the modulus itself), they'd cracked (at least) 2 keys. And it turns out that roughly 0.2% of the keys could be cracked in this way (for a total of 12720 1024-bit moduli).
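
A naive sketch of that all-pairs scan in Python (moduli is assumed here to be a list of the collected RSA moduli as ints; the researchers presumably used a much faster batch method, as noted above):

    from math import gcd
    from itertools import combinations

    def find_shared_factors(moduli):
        """Return {modulus: (p, q)} for every modulus sharing a prime with another."""
        cracked = {}
        for a, b in combinations(moduli, 2):
            g = gcd(a, b)
            if g != 1 and g != a and g != b:   # a nontrivial common factor cracks both keys
                cracked[a] = (g, a // g)
                cracked[b] = (g, b // g)
        return cracked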

As far as I can tell, the problem isn't with RSA itself, but with whatever algorithms are used to pick the two random primes that get multiplied to produce the key (or modulus, whatever)--they don't have enough entropy, so the same prime gets reused across different keys, and roughly 0.2% of the collected moduli end up sharing a factor this way.




Is it possible/likely that many of these bad pairs are from this Debian/OpenSSL bug?

http://taint.org/2008/05/13/153959a.html


"None of the newly found affected moduli are blacklisted (cf. [24]). Based on this brief analysis, we conclude that the problem reported in the main body of this paper persists and that the 99.8% security level of 1024-bit RSA moduli is further reduced."

p. 15, http://eprint.iacr.org/2012/064.pdf


They explicitly excluded the Debian/OpenSSL blacklisted keys from their sample before checking for shared factors.




