Nit: the hard dependency on good randomness for ECDSA is a property of DSA in general, and not of elliptic curve cryptography. The DSA construction has what is probably the strictest randomness requirement in all of mainstream cryptography; a bias of just a few bits is, with repeated signatures, sufficient to recover private keys! (The attack that makes this work on ECDSA is extraordinarily cool).
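The bias attack itself needs the lattice machinery mentioned downthread, but the degenerate case --- the same nonce used for two signatures --- already falls to a couple of lines of modular arithmetic. A toy sketch of just that algebra (made-up numbers standing in for a real curve):

    # Toy illustration of why an ECDSA nonce reused across two signatures
    # leaks the private key. Everything here is a made-up small number; only
    # the signature equation s = k^-1 * (h + r*x) mod n is modeled, no curve.
    n = 101                      # stand-in for the curve's (prime) group order
    x = 42                       # the private key we're going to steal
    k = 57                       # the nonce, reused for both signatures
    r = 33                       # stands in for x-coord of k*G, identical in both sigs

    def inv(a):                  # modular inverse; n is prime
        return pow(a % n, n - 2, n)

    h1, h2 = 70, 19              # hashes of two different messages
    s1 = inv(k) * (h1 + r * x) % n
    s2 = inv(k) * (h2 + r * x) % n

    # Attacker sees (r, s1, h1) and (r, s2, h2); the repeated r betrays the reused k.
    k_rec = (h1 - h2) * inv(s1 - s2) % n
    x_rec = (s1 * k_rec - h1) * inv(r) % n
    assert (k_rec, x_rec) == (k, x)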
The problem with NIST Dual_EC_DRBG is simpler than the article makes it sound. A good mental model for Dual_EC is that it's a CSPRNG specification with a public key baked into it (in this case, an ECC public key) --- but no private key. The "backdoor" in Dual_EC is the notion that NSA --- err, Clyde Frog --- who is confirmed to have generated Dual_EC, holds the private key and can reconstruct the internal state of the CSPRNG using it. I think this problem is simple enough that we may do a crypto challenge on a toy model of Dual_EC.
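For the impatient, here's roughly what such a toy model could look like, shrunk down to a textbook curve. All the parameters below are made up for illustration, and there's no output truncation; the real generator uses P-256 and drops the top bits of each block, which only adds a small brute-force step for the attacker:

    # Toy Dual_EC: y^2 = x^3 + x + 6 over GF(11), a 13-point group.
    p, a, b = 11, 1, 6

    def inv(v): return pow(v % p, p - 2, p)

    def add(P1, P2):             # affine point addition; None is the point at infinity
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0: return None
        if P1 == P2:
            lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
        else:
            lam = (y2 - y1) * inv(x2 - x1) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, P):               # double-and-add scalar multiplication
        R = None
        while k:
            if k & 1: R = add(R, P)
            P = add(P, P); k >>= 1
        return R

    def lift(x):                 # recover a curve point from its x-coordinate (p % 4 == 3)
        return (x, pow((x ** 3 + a * x + b) % p, (p + 1) // 4, p))

    # Whoever writes the spec picks Q and a secret d, then publishes P = d*Q.
    Q = (2, 7)
    d = 9                        # the trapdoor only the spec's author knows
    P = mul(d, Q)

    def dual_ec(s):              # one generator step: returns (new state, output block)
        s = mul(s, P)[0]         # state update: s <- x(s*P)
        return s, mul(s, Q)[0]   # output block: x(s*Q)

    s0 = 6                                 # victim's seed
    s1, out1 = dual_ec(s0)                 # attacker observes out1
    s2, out2 = dual_ec(s1)                 # attacker wants to predict out2

    # Attack: out1 is the x-coordinate of s1*Q. Lift it back to a point and
    # multiply by d: x(d*(s1*Q)) = x(s1*(d*Q)) = x(s1*P) is the victim's next state.
    s2_guess = mul(d, lift(out1))[0]
    assert s2_guess == s2
    assert mul(s2_guess, Q)[0] == out2     # and every later output block follows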
Nobody in the real world really uses Dual_EC, but that may not always have been historically true; the circumstantial evidence about it is damning.
The NIST ECC specifications are in general now totally discredited. If you want to see where the state of the art is on ECC, check out http://safecurves.cr.yp.to/.
You should never, ever, ever build your own production ECC code. ECC is particularly tricky to get right. But if you want to play with the concepts, a great place to start is the Explicit Formulas Database at http://www.hyperelliptic.org/EFD/; the fast routines for point multiplication are mercilessly complicated, so copying them from the EFD is a fine way to start, instead of working them out from first principles.
> And surely it would be better to have many implementations than a couple?
It's probably better to have more qualified people working on (particularly analyzing and validating) a smaller number of production implementations than to have fewer people per implementation spread across a larger number of them.
Encryption is an area where doing one, very precisely defined, task correctly is critically important; the considerations in that domain are different than in many other domains of software.
I'm not sure I'd go with 'discredited'. Paranoia aside, I'd go with 'superseded' by better choices that weren't around in 1999. We probably mean the same thing, but this has become one of my pet peeves.
I'd have said the same thing a few weeks ago. You may have better information than I do; I'm working from the (new) presumption that the supposed random curves aren't likely to be random based on their structure.
It's a Cryptography Research paper from earlier this year and involves both a series of lattice reductions and an inverse Fourier transform, which smooths a very sharply defined bound into a much broader one, making the search faster --- the latter trick is due to Bleichenbacher and is referred to in the CRI paper as an "underground" attack.
Alex Balducci in our office actually got this attack working from the paper and walked us through the code --- the lattice reduction steps take 8 hours to run, and are followed by an IFFT-aided search that I would have zero chance of getting right and so never would have bothered waiting 8 hours to try, but he did it anyways.
Maybe I can get him to write it up.
Moral of this story by the way: hire people smarter than you are, and give them semi-unreasonable problems to work on.
The first time I heard about attacks on crypto that monitored processor power consumption, I was pretty skeptical. It seemed crazy to me that it could work. But of course it does.
Same thing with timing attacks, until I learned how to code one for myself.
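If you want to convince yourself the same way, a toy version is only a few lines. The per-byte sleep below is an artificial stand-in for the (much smaller) signal a real target leaks; against real code you'd need many samples and some statistics:

    # Toy timing attack on a non-constant-time, early-exit comparison.
    import time

    SECRET = b"k3y!"

    def insecure_compare(guess):
        for g, s in zip(guess, SECRET):
            if g != s:
                return False             # early exit: mismatches return faster
            time.sleep(0.005)            # exaggerated per-byte work
        return len(guess) == len(SECRET)

    def recover(length):
        known = b""
        for _ in range(length):
            timings = []
            for c in range(256):
                guess = known + bytes([c]) + b"\x00" * (length - len(known) - 1)
                t0 = time.perf_counter()
                insecure_compare(guess)
                timings.append((time.perf_counter() - t0, c))
            known += bytes([max(timings)[1]])   # the slowest guess got one byte further
        return known

    print(recover(len(SECRET)))          # prints b'k3y!' after a few seconds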
The performance comparison of ECDSA vs RSA is somewhat unfair. In ECDSA, signing is the cheapest operation, whereas in RSA it is the most expensive. If the timings chosen were signature verification time, RSA would be much faster. See:
Doing 2048 bit private rsa's for 10s: 1266 2048 bit private RSA's in 9.98s
Doing 256 bit sign ecdsa's for 10s: 22544 256 bit ECDSA signs in 9.97s
Doing 2048 bit public rsa's for 10s: 42332 2048 bit public RSA's in 9.98s
Doing 256 bit verify ecdsa's for 10s: 4751 256 bit ECDSA verify in 9.92s
A fairer comparison would probably pit DH-2048 against ECDH-256, which is more apples-to-apples.
There is also a huge difference between newer and older versions of OpenSSL.
# default openssl install (osx mavericks)
Doing 224 bit sign ecdsa's for 10s: 39077 224 bit ECDSA signs in 9.98s
Doing 224 bit verify ecdsa's for 10s: 8285 224 bit ECDSA verify in 9.98s
OpenSSL 0.9.8y 5 Feb 2013
vs
# homebrew openssl install
Doing 224 bit sign ecdsa's for 10s: 61047 224 bit ECDSA signs in 9.99s
Doing 224 bit verify ecdsa's for 10s: 15609 224 bit ECDSA verify in 9.84s
OpenSSL 1.0.1e 11 Feb 2013
f : x -> pow(x, pubkey) mod m
g : x -> pow(x, privkey) mod m
being inverses of each other was a big breakthrough when it was discovered. The article implies, but does not directly state, that this "big breakthrough" was part of what separated the "classical" era of cryptography (pre-1977 as defined by the article) from the "modern" era (post-1977).
The "big breakthrough" result was actually proven by Euler hundreds of years ago! [1] The innovation of RSA was building a working public-key cryptosystem around Euler's result, not the result itself.
Fermat's little theorem is Euler's theorem in the special case that m is prime. The condition on Euler's theorem is that x and m must be relatively prime... which is certainly true if m is prime.
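The whole construction fits in a few lines of Python if you just want to see Euler's result doing the work (toy numbers, obviously nothing like secure parameters; pow(e, -1, phi) needs Python 3.8+):

    # RSA round trip with textbook-sized numbers.
    p, q = 61, 53
    n = p * q                    # 3233, the public modulus
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent, coprime to phi
    d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi) -> 2753

    m = 1234
    c = pow(m, e, n)             # "f": encrypt with the public key
    assert pow(c, d, n) == m     # "g": decrypt with the private key, recovering m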
ECC is significantly faster than RSA at equivalent security levels. In case it isn't immediately clear: RSA-2048 doesn't provide 2048 "bits of security"; the 2048 describes the modulus size, but the security level is closer to 115 bits. A 256 bit ECC curve provides more security than a 2048 bit RSA key.
Another reason is that progress against ECC has been slower than it has against crypto in the integer-mod-prime groups. ECC is based on a variant of the discrete log problem that defines its group operation in a way that isn't amenable to index calculus algorithms, which are making progress against integer DLP. It may be that in the future, it won't be sufficient to steadily ramp up RSA or DH modulus sizes. ECC is thought to be more future-proof.
Finally, ECC constructions happen to be quite amenable to modern crypto protocols; it is easy to do DH with ECC, and easy to sign with ECC, and DH+signatures forms the backbone of a number of modern protocols (because it's a design that provides forward secrecy).
From what I understand, theoretical improvements in factoring algorithms go hand in hand with theoretical improvements in discrete logarithm algorithms, and for both there are algorithms which improve over the trivial approach. ECC is considered better because of the believed constant-factor slowdown of arithmetic operations on elliptic curves, not because the discrete logarithm is considered harder than factoring. This article implies the opposite quite directly.
You're not quite right. The hardness of the discrete logarithm depends a lot on what mathematical structure you are working with underneath.
For example, the discrete logarithm in the group of integers modulo some prime seems to be more or less as hard as breaking an RSA modulus of the same size. Those are the two problems that seem to be linked together.
However, attacking the discrete logarithm on the group of points on some good elliptic curve is a very different problem, and the same techniques that work above just don't seem to apply here. So you're stuck with the generic attacks, which are much slower.
Bottom line: the discrete logarithm on elliptic curves is harder than factoring and the discrete log on the integers modulo a prime.
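To make "generic attacks" concrete: baby-step giant-step (like Pollard's rho) uses nothing but group operations, so it applies equally to curve points and to integers mod p, and it costs roughly the square root of the group order either way --- which is why a 256-bit curve group gives you about 128 bits of security. A sketch in the mod-p setting, with toy parameters:

    # Baby-step giant-step: solve g^x = h (mod p) in ~sqrt(p) group operations.
    from math import isqrt

    def bsgs(g, h, p):
        m = isqrt(p) + 1
        baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
        step = pow(g, -m, p)                         # g^(-m); needs Python 3.8+
        gamma = h
        for i in range(m):                           # giant steps: h * g^(-i*m)
            if gamma in baby:
                return i * m + baby[gamma]           # x = i*m + j
            gamma = gamma * step % p
        return None

    p, g = 1000003, 2
    h = pow(g, 123456, p)
    x = bsgs(g, h, p)
    assert pow(g, x, p) == h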
I realize the point of the article is to make cryptography accessible, but if I'm not mistaken, their example of the use of RSA is to use it just as a substitution cipher, which would be easily decrypted by frequency analysis...
So if decrypting an RSA-encoded message just involves multiplying the message by itself x times, couldn't you just keep multiplying until the resulting message makes sense? Then you have discovered x.
Sure, in the example the key was so small it could only encode one character at a time. With a larger key you wouldn't know how many bytes are decoded in one go, but that would only slow things down a bit.
You could keep multiplying m by itself, sure, but in practice you're dealing with numbers so huge that it rapidly becomes impractical to test all values in the range m^2 to whatever (e.g. m^49863657239487123495780934672309687239458732946585601239587146245192587126978563425981758234729374837833000023013834769482349111943853593463798) before you finally stumble on the correct one.
Effort is better spent trying to factor the public modulus n, which you already have, because the private exponent (the 49863657... exponent above) can be derived from its factors.
If you could do a multiply in a nanosecond, you could do about 3 x 10^16 multiplies in a year. Keys are about 10^300, so it would take on the order of 10^283 years.
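Back-of-the-envelope, in case anyone wants to check the arithmetic (taking the exponent range to be roughly the size of the modulus):

    multiplies_per_year = 10**9 * 60 * 60 * 24 * 365   # one multiply per nanosecond
    years = 10**300 // multiplies_per_year
    print(len(str(years)))                              # 284 digits, i.e. ~10^283 years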
Sort of.
In practice it's more complicated than that, but the numbers involved are HUGE. Doing the math with small examples always gives completely the wrong impression, like doing hill-climbing in 3D.
So, I assume there must be a quicker way to work out x ^ n that incorporates wrapping than just doing result = result * x % y over and over. Darn it, my maths is getting rusty in my old age.
There is a fast exponentiation method that is effectively Russian Peasant Multiplication, and it works in time proportional to the number of bits in the exponent. Look up "fast exponentiation" or similar.
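A sketch of the square-and-multiply version, which reduces mod y at every step so the intermediate values never blow up:

    def power_mod(x, n, y):
        result, base = 1, x % y
        while n:
            if n & 1:                    # this bit of the exponent is set
                result = result * base % y
            base = base * base % y       # square for the next bit
            n >>= 1
        return result

    assert power_mod(7, 123456, 1000003) == pow(7, 123456, 1000003)   # matches the built-in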
If it's too TL;DR for you to read right now, just read the first page. It's a great introduction to RSA and the use of prime numbers, with real examples.
Say e=5 is your public key. Since the encryption step is computing x^e (mod pq), you want your private key to be a number d such that (x^e)^d = x (mod pq).
Basic algebra (Euler's theorem) says that the last equation holds if you have ed = 1 (mod phi(pq)). Here, phi(pq) is the size of the multiplicative group mod pq, where the multiplicative group is simply the set of elements that have an inverse modulo pq.
More basic algebra tells you that phi(pq) = (p-1)(q-1), or, in your case, 72.
So, you need to find the inverse of 5 modulo 72. You can find that using the extended Euclidean algorithm for computing gcds: A number e is invertible modulo N if and only if gcd(e,N) = 1, and in this case, there exist integers a and b such that ae + bN = 1; the extended Euclidean algorithm will find them, and the number a is what you're looking for (check the definition of modular arithmetic).
To verify that the number 29 makes sense, just compute 5 * 29 (mod 72): you'll get 1.
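The same steps in a few lines of Python, if that's easier to follow (the helper names are mine):

    def ext_gcd(a, b):
        """Return (g, s, t) with a*s + b*t == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, s, t = ext_gcd(b, a % b)
        return g, t, s - (a // b) * t

    def mod_inverse(e, n):
        g, s, _ = ext_gcd(e, n)
        assert g == 1, "e must be coprime to n"
        return s % n

    d = mod_inverse(5, 72)
    print(d, 5 * d % 72)     # 29 1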
I don't know if that helped at all. If it didn't, you'll have to start asking more specific questions :-)