Virtually all AES/RSA/SHA256 cryptography outside of a very few well-known, well-regarded cryptosystems is effectively snake oil.
It's a credit to Zimmermann's article that he doesn't leave it at "don't write your own block ciphers", adding that you need to understand block cipher modes (we should stop calling ECB mode "ECB mode", by the way, and start calling it "the default mode"). But obviously, this 1997 article doesn't come close to covering all the other things you have to do correctly to avoid ending up with trivially breakable cryptography. A lot of those things weren't even known to the public in 1997.
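To make the "default mode" complaint concrete, here's a minimal sketch (assuming the pyca/cryptography package, which isn't mentioned in the thread) of why ECB is dangerous: identical plaintext blocks encrypt to identical ciphertext blocks, so structure in the plaintext leaks without any key material being exposed.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    plaintext = b"ATTACK AT DAWN!!" * 2          # two identical 16-byte blocks

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    # The two ciphertext blocks are identical, so an eavesdropper learns that
    # the plaintext repeats even though they never touch the key.
    print(ciphertext[:16] == ciphertext[16:32])  # True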
An even more important point is that Zimmermann is talking about cryptography implemented in its strongest, safest venue: data at rest. As implemented by most developers today, crypto ends up in a far more exposed venue: one where a server holds a secret key and uses crypto not to protect a file but to enforce policy. Despite being the 2011 common case for crypto, this is a far harder scenario to get right, because adversaries can devise ways to query the key-holder to learn things about the plaintext, ciphertext, and behavior of the system as a whole.
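A hedged sketch of what "query the key-holder" looks like in practice, again assuming the pyca/cryptography package and a hypothetical endpoint name: a server that decrypts attacker-supplied CBC ciphertexts and returns distinguishable errors for bad padding versus everything else. That single bit per query is the raw material for a padding-oracle attack that recovers plaintext without the key.

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(32)   # server-side secret key

    def decrypt_endpoint(iv: bytes, ciphertext: bytes) -> str:
        # Attacker controls iv and ciphertext and can call this as often as they like.
        decryptor = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
        padded = decryptor.update(ciphertext) + decryptor.finalize()
        unpadder = padding.PKCS7(128).unpadder()
        try:
            unpadder.update(padded) + unpadder.finalize()
        except ValueError:
            return "400 bad padding"      # <-- the oracle: a distinguishable error
        return "403 invalid session"      # attacker isn't authenticated, but no matter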
My advice: don't trust crypto at all outside of TLS, GPG/PGP, and SSH. And know that of those three systems, two were broken relatively recently.
You'll want to follow people in the industry as well (I've found Nate's blog in particular an enjoyable read). A lot of the practical knowledge accumulates in the brains of the guys who actually get hired to do crypto work. It's an interesting feedback loop to be sure.
Also, it's my impression that state-of-the-art papers aren't really necessary to start off with. The implementation errors people make in their code are flaws that have long been published, sometimes for decades.
Crypto papers actually kind of suck; my experience is that roughly 2/3rds of the time, the complicated formula on the page works out to a "for" loop that would be trivially easy to understand if expressed in algol syntax.
Best advice: do deep research on TLS. For every feature, do a directed search of the literature and do experimentation to try to figure out why that feature is there. Most of the features in TLS exist as a countermeasure to some attack. Follow this tack all the way down the stack, starting with the high-level protocol features and working your way all the way down through the block cipher modes and configuration that it uses.
Yeah there definitely is a bit of an art to reading them. But we can't have cryptographers and security professionals merging too fast... that would just make too much sense. :P
I'm a little allergic to allusions to the risk of "key management", because Applied Cryptography (not a good book) calls that out specifically as the riskiest thing about implementing crypto, and it is most definitely not the riskiest thing about implementing crypto.
So let me just say that while key management is hard, even if you stipulate that your key management is perfect, you are probably still boned. The simple task of exchanging real time messages between two parties with secure keys is treacherously hard to implement.
Yup. Just to give the uninitiated some examples of things you have to watch out for:
1) Don't do the modern-day equivalent of an Enigma-style 'cilly', i.e. something that reduces the entropy of your key generator (not using the full range of possible values, not having a uniform distribution of values, etc.); see the sketch after this list.
2) Make sure that the decryption time does not vary with the number of correctly guessed bits of the key, i.e. don't stop a decryption attempt in the real system if you discover partway through that the decryption must be wrong (also covered in the sketch below).
3) Make sure that your key generator never produces a weak or degenerate key, i.e. one that yields degenerate ciphertext.
4) Make sure that you don't use a key for longer than its cryptoperiod.
That's just a small sample of the gotchas that can ping you. And in that list, I'm only worrying about plain decryption of the message, without worrying about 'minor' details such as verifying that a party is who they say they are, man-in-the-middle attacks, replay attacks (resending a recorded encrypted message later on, to open access to something), injecting messages into the communication, and so on.
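Here's a minimal Python sketch of items 1 and 2 from that list; the function name is illustrative. For key generation, use a CSPRNG over the full keyspace; for checking secrets (keys, MACs, tokens), use a constant-time comparison so timing doesn't reveal how much of a guess was right.

    import hmac
    import random
    import secrets

    # Bad: the Mersenne Twister is not a CSPRNG, and its internal state can be
    # recovered from outputs -- a modern 'cilly' that quietly shrinks your keyspace.
    bad_key = bytes(random.randrange(256) for _ in range(32))

    # Good: 32 bytes from the OS CSPRNG, uniform over the full range.
    good_key = secrets.token_bytes(32)

    def check_tag(expected: bytes, received: bytes) -> bool:
        # Bad would be `expected == received`: it short-circuits at the first
        # mismatching byte, so response time varies with how much of the guess
        # is correct. compare_digest runs in time independent of where they differ.
        return hmac.compare_digest(expected, received)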
I think AC is, like Zimmerman above, talking mostly about the data-at-rest case when it refers to key management being the riskiest part. This is reasonable, because in the data-at-rest case, all cryptography does is replace a large secret (the plaintext you want to protect) with a small secret (the key you need to protect instead). If you already had a keeping-something-a-secret problem, then it's reasonable to presume that you will have almost as much of a problem keeping the key a secret.
First, _Applied_ is totally not just talking about data at rest. It's chock full of archaic challenge-response protocols and half-specified descriptions of key exchange schemes.
Second, even for data at rest problems, _Applied_ is a terrible resource.
Just get _Practical Cryptography_ and burn your copy of _Applied_. Schneier co-authored _Practical_. It's a great book.
Agreed - I have both already, I haven't burnt AC yet though (I find it occasionally useful for identifying which oddball 90s cipher a developer has gotten it into their head to use - yes, I have actually seen "3-Way" used in anger...)
That is first of all not true, and second of all not relevant. In both cases the cryptosystem itself had a design flaw. And in both cases the implementations of that cryptosystem, aside from the design flaws, had implementation flaws that made the design a moot point.
I took his point to be the opposite: that the worst that seems to happen with cryptosystems is that implementations are broken.
That bugged me on two levels: first, the idea that there haven't been terrible bugs in e.g. SSL3, and second that an implementation bug means "just upgrade OpenSSL", when it's more like "the discovery of buffer overflows and attendant years of chaos".
All three have had flaws of one sort or another discovered since 2005 (TLS has had several; OpenSSH had the Debian keygen thing and an SSHv1 vulnerability, and possibly others; GPG apparently had a couple of problems back in 2006).
TLS has definitely had more severe issues, but then, it's also the most widely deployed (so undiscovered flaws are more likely to be discovered). On the other hand, it's also solving the most complicated problem of the three.
The number and nature of the flaws in TLS actually give me more confidence in it. It's not that other systems don't have similar or worse flaws; it's that these kinds of flaws are a cast iron bitch to find, and TLS is the protocol with the maximum incentive for study.
Give it time; we'll find something horrible out about ISAKMP.
When I opened this up, I thought for sure that cperciva would have commented on this. Must be a busy day.
And I personally think we all owe Mr Zimmermann a debt of gratitude. I don't think I could have handled the stress of what he had to go through with the US Government.
Is there any connection to be made between this article and the usage of signed cookies to hold session state? Database-backed sessions hold a state that you know your application set at one point, but a signed cookie, if forged, could have much bigger ramifications. Since no one gets cryptography right, it seems like this would be another instance not to trust it.
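For readers who haven't seen one, here's a hedged sketch of what a "signed cookie" session usually amounts to: the server MACs the serialized session under a secret key and verifies the tag (with a constant-time compare) before trusting the contents. Forging a cookie then requires forging the MAC, but only so long as the key stays secret and the verification is implemented correctly. The names below are illustrative, not any particular framework's API.

    import hashlib
    import hmac
    import json

    SECRET = b"server-side secret key"   # illustrative; use a real random key

    def sign_session(session: dict) -> str:
        payload = json.dumps(session, sort_keys=True).encode()
        tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return payload.hex() + "." + tag

    def load_session(cookie: str):
        payload_hex, _, tag = cookie.partition(".")
        payload = bytes.fromhex(payload_hex)
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag):
            return None                   # reject forged or tampered cookies
        return json.loads(payload)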
This article skips over CTR mode, which is gaining popularity. It has nice properties like parallel encryption/decryption, no padding requirement, the ability to seek to any block, and tolerance for lost blocks. AES-128 CTR is the default cipher suite in the SRTP standard.
Today, your reasonable choices for block cipher modes are CTR and CBC.
And: (If you have a library that does any of the "authenticated modes" like CCM, OCB, GCM, or EAX, your reasonable choices are those 4 constructions, all of which are based on CTR mode.)
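For the uninitiated, a brief sketch of what one of those authenticated modes looks like in use, assuming the pyca/cryptography package (my choice, not something from the thread): AES-GCM handles the MAC for you, so tampering is caught at decrypt time. The non-negotiable rule is the same as for raw CTR: never reuse a nonce under the same key.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)   # unique per message, never reused under this key
    ct = aesgcm.encrypt(nonce, b"secret message", b"associated data")
    pt = aesgcm.decrypt(nonce, ct, b"associated data")  # raises InvalidTag if tampered with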
But your comment gave me hives, because CTR mode is in its most simple application (no parallel, no precomputation, no seeking, no lossiness) already easy to spectacularly fuck up, and some of the things you pointed out as benefits of CTR come with additional pitfalls.
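A quick illustration of one of those pitfalls, again assuming the pyca/cryptography package: reuse a CTR nonce under the same key and the keystream repeats, so XORing two ciphertexts cancels it out and yields the XOR of the two plaintexts, no key recovery required.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    nonce = os.urandom(16)                      # reused below: that's the bug

    def ctr_encrypt(msg: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return enc.update(msg) + enc.finalize()

    c1 = ctr_encrypt(b"wire $100 to Alice")
    c2 = ctr_encrypt(b"wire $999 to Mallo")
    xor = bytes(a ^ b for a, b in zip(c1, c2))
    # xor == plaintext1 XOR plaintext2; known structure in one message now
    # leaks the other, which is a total confidentiality failure.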
Finally, what is a "default" in a standard is different from a "default" as provided by a library. ECB is the "default" because (a) it usually is the default, and (b) it requires the least amount of configuration. In 2011, it is still sadly common to see trivially breakable ECB in new apps, and that's because crypto libraries are structured in ways that make ECB the de facto default.
OpenSSL is the worst kind of crypto library: the kind that gives you a menu of ciphers and a menu of cipher modes and says "go to town". There is virtually no chance that a generalist developer will ever build sound crypto with a library like that.
The good kind of crypto library is Keyczar. Keyczar removes degrees of freedom; it says, "you don't tell me what block cipher to use, and you don't remember that your messages need a MAC, and you don't choose the order operations happen in, and your keys are all going to be from a CSPRNG, and you'll use this keystore, and that will be that". There are a very few other libraries like that (Gutmann's cryptlib is another).
Unfortunately, nobody uses libraries like that. They use OpenSSL (via their language's bindings to it) which seems to work until it blows up in their face during a pentest.
My best advice is to use Bouncycastle's PGP implementation, which again removes all the degrees of freedom and has the benefit of building on very well studied constructions (poorly regarded constructions, but survivors nonetheless).
What about writing a library that wraps OpenSSL and only implements a couple of strong ciphers/modes? Specifically, AES-256-CBC and RSA-4096?
I mean, I've done my homework: I know enough to not call myself an expert, yet also know enough to avoid every crypto-algorithm like the plague until I've thoroughly investigated it and its "competitors."
Saying "AES-256-CBC and RSA 4096" isn't nearly enough detail to assess whether you know what you're talking about, and pushing me through a thread to the limit of what I personally know how to break is just going to give you false confidence, because I know less than a lot of people I know.
In case this is what you were implying: it is absolutely not the case that the big problem with OpenSSL is that it'll let you use Camellia in ECB or 512-bit ElGamal. The problem is that there are (a) more things you can do terribly wrong with AES-256-CBC than there are things you are likely to do wrong with, say, C memory handling, and (b) things you have to do well beyond encrypting soundly with AES-256-CBC to make your system work as a whole.
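To make point (a) concrete, here is a hedged sketch of the bare minimum "AES-256-CBC" actually entails, assuming the pyca/cryptography package and independent encryption and MAC keys (both assumptions, not something the parent specified): a fresh random IV per message, PKCS7 padding, encrypt-then-MAC, and constant-time verification. Get any one of these wrong and the whole construction can fall over.

    import hashlib
    import hmac
    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def seal(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
        iv = os.urandom(16)                              # fresh IV for every message
        padder = padding.PKCS7(128).padder()
        padded = padder.update(msg) + padder.finalize()
        enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
        ct = iv + enc.update(padded) + enc.finalize()
        tag = hmac.new(mac_key, ct, hashlib.sha256).digest()   # MAC covers IV + ciphertext
        return ct + tag

    def unseal(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
        ct, tag = blob[:-32], blob[-32:]
        expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):       # verify before decrypting
            raise ValueError("bad MAC")
        iv, body = ct[:16], ct[16:]
        dec = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).decryptor()
        padded = dec.update(body) + dec.finalize()
        unpadder = padding.PKCS7(128).unpadder()
        return unpadder.update(padded) + unpadder.finalize()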
Phil Zimmermann talks about some patents on public key cryptography. I am just wondering, did Clifford Cocks's prior art retroactively invalidate those patents?
Yes, I know. But wouldn't the British chap's prior art make those lawsuits invalid retrospectively?
I don't know enough about even my own country's legal system, and certainly not about America's. So my question is: Suppose you got sued back then, had to pay, and now try to get the money back, because the patents the suit was based on were invalidated by prior art. Would that work? (Sorry, not a crypto question at all, more like a legal one.)
This is why I feel that all password managers that store passwords are fundamentally flawed. Who has verified the crypto used to store the passwords? Has it been implemented by devs who thought they knew crypto (but really don't) as this article suggests?