This is the same in every profession though. Once you know the rules, and the reasons behind the rules, and the consequences of breaking the rules, you can consider the trade-offs of not sticking to the rules in this specific case because it buys you some big benefits.
Then, when it burns you horribly, you can write an informative blog post about why even the experts should follow the rules :-)
(Btdt, although in my case it was with documented project scope. It was probably the right decision, but that didn't make taking the kicking any more fun.)
I would have much preferred it if the post hadn't been acted out as a narrative, as that detracted from the powerful message of why not to use AES directly and why that approach is flawed.
I'd say it didn't detract from the narrative; it made the narrative accessible to a different audience.
The screenplay format both presented a story-based framework for understanding the progression of the argument, and humor (unicorn with laser horn) to provide a contrast to the fairly complex topic.
I'm clueless when it comes to crypto, and if I were to read a very well written paper about it, I'd probably get sidetracked, or accept the conclusion without understanding the steps taken to reach it. The format of this article was perfect for driving the point (and the reasoning behind it) home to the uneducated masses such as myself.
My takeaway from the article wasn't "don't use AES" - it was "use a sufficiently high-level cryptography library such that someone smarter than you is deciding which algorithm to use and with which options".
Usually people focus on "which algorithm" (you may have heard endless debates about key length) when "which options" is what actually ends up biting you. But one argument seems fun and the other mundane, so the latter doesn't get the attention it requires.
But it seems like the situation they describe is one where you wouldn't want to use encryption at all. Why give the user his data in an encrypted form when you can give the user an ID and keep his data entirely away from him? Seriously, when does giving someone data you don't want them to read ever work better than just not giving them data at all?
Having a narrative reinforces the point that what you actually do depends on the entire context of the application. You would almost never be the one implementing cannot-be-broken-under-ANY-circumstances encryption. So you have to know what the circumstances are. In this case, the circumstances point to no-encryption-whatsoever!
Sure, you could point to other circumstances where something like what they're talking about would be useful but that's a million possible circumstances with a million possible encryption solutions and you've lost the useful urgency of the original concrete narrative.
I agree. I found the narrative "intimidating" and stopped reading after a short while. I'm turned off by all the crypto-testosterone. :D
Ah, if only I could write another Notepad and make money off that.
I love the narrative account. Making a point dramatically is great and a pseudo-mystery story is great way to talk about encryption.
The narrative is right that AES is a wrong solution.
But really, any encryption at all is a wrong, bass-ackwards solution to keeping the user from modifying their account information.
A single cookie or argument that's a randomly generated user id with all the information server-side is much better.
I mean, consider any scenario where you pass the user data you'll later use. Will you not keep track of that data yourself, but expect that the user's encrypted cookie will do it for you? This is one way of simulating statefulness in the stateless HTTP protocol, but it's clearly an inferior, dumbshit way of doing it, and it doesn't matter what encryption you use for the purpose. Giving someone encrypted information they can't use is essentially analogous to copy protection and similar unwinnable scenarios, whereas the unique-ID approach is pretty much the standard and works well for many, many apps of all sorts.
Having unique user-ids and user-information is only costly in terms of accessing the information. But there isn't a point where decrypting information coming from the user becomes less costly than getting it from the database. Indeed, the higher the traffic, the more sense various brute-force attacks make.
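The ID-not-data approach described above can be sketched in a few lines. This is a minimal illustration, assuming an in-memory dict stands in for whatever server-side store (database, cache) a real app would use; the function names are made up for the example:

```python
import secrets

# The cookie value is just an unguessable random token; everything it
# maps to stays server-side, so there is nothing to decrypt or tamper with.
SESSIONS = {}  # in production: a database or cache, not a process-local dict

def create_session(user_data):
    # 128 bits of randomness from the OS CSPRNG
    session_id = secrets.token_urlsafe(16)
    SESSIONS[session_id] = user_data
    return session_id  # the only thing the client ever sees

def load_session(session_id):
    # an unknown or forged ID simply maps to nothing; no crypto to get wrong
    return SESSIONS.get(session_id)
```

The client can inspect or modify its cookie all it likes; the worst it can do is turn a valid ID into an invalid one.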
I don't think that was the goal. This whole crusade is just to get people to realize the true cost of getting crypto right. It costs time and money to build a resilient protocol. How many person-hours went into the AES competition? What was the proportion of time spent on creating the algorithms versus analyzing them?
In crypto, review costs much more than design. If you understand that tradeoff and the benefits of your own design are worth spending this effort on, congratulations, you are Qualcomm, Netscape, or Intel. All of these companies had staff cryptographers.
More likely, your problem is close enough to other problems that you don't need to spend the time or money to DIY. If that path is available, take it!
Ok, granted, I'm new here, and could be missing the point. I didn't actually go through the full article because I'm not really that much into crypto, and really, didn't the guy fail the interview at the point where he suggested that encryption was the answer for a cookie used for individual identification? I mean, encrypt it all you want; if I can be behind the same NAT as you, and spoof your user agent, all I need to do is get that cookie and put it in my browser, and I've stolen that session.
The real answer is you need to either encrypt the transport, or at the very least minimize the amount of time that cookie is valid for.
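The "minimize validity time" idea can be made concrete with an authenticated token that carries its own expiry. A rough sketch using only the standard library; the key, TTL, and function names here are illustrative, and in real use the key would come from configuration, with TLS still required to prevent replay within the token's lifetime:

```python
import hmac
import hashlib
import time

SECRET = b"server-side key (illustrative only; load from config in practice)"

def issue_token(user_id, ttl=900, now=None):
    # Bind the user ID to an expiry time and MAC both together,
    # so neither field can be altered independently.
    exp = int(now if now is not None else time.time()) + ttl
    msg = f"{user_id}.{exp}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user_id}.{exp}.{tag}"

def check_token(token, now=None):
    user_id, exp, tag = token.rsplit(".", 2)
    msg = f"{user_id}.{exp}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted
    if int(exp) < (now if now is not None else time.time()):
        return None  # expired: bounds the replay window
    return user_id
```

A sniffed token is still replayable until it expires, which is exactly why the short TTL (and transport encryption) matters.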
Well, I read the entire article, and while it's quite interesting that there are things like Keyczar and cryptlib out there, I still think it's a poor example. Other than a brief note that encryption does not equal authentication, they never touched on the fact that encrypting the cookie data itself isn't a real solution, as the encrypted cookie can be sniffed and replayed to the server.
So, while a standard user can't hack their own cookie for escalation, they can sniff admin cookies to get that escalation. If the cookie is used for authentication or authorization, encrypting its contents is providing a false sense of security, no matter what nifty library you choose or don't choose to do it with.
The attack you just described applies to all online commerce. Your bank and brokerage (and the clearing firm your brokerage uses) all fall to HTTP cookie sniffing.
The point isn't the relationship between you, Bank of America, and a hacker; it's the relationship between you (a hacker) and Bank of America.
1: Encryption isn't magic pixie dust: people can tamper with the ciphertext and fool you. Sometimes they can even mess with the padding and decrypt the entire message.
2: A rule of thumb that falls out of this: never try to decrypt ciphertext that you can't verify is authentic. Consequence: compute the MAC over the ciphertext, not the plaintext, so you don't have to decrypt to check the MAC.
I've done my message format in the past as "{version}{hmac-md5}{ciphertext}". This is suitably secure, to the best of my knowledge. And thanks to the version, it lets me alter the algorithms or keys without service disruption. Once all sessions using an old version have expired, that version can be retired.
If the version is not covered by the HMAC, you may be subject to rollback attacks. Compare the SSLv2 and v3 protocols to see what was done to address this.
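Covering the version with the HMAC is a small change to the format above. A hedged sketch (key material and names are illustrative) showing why relabeling a cookie with an older version number fails verification:

```python
import hmac
import hashlib

KEYS = {1: b"old key material", 2: b"new key material"}  # illustrative
CURRENT_VERSION = 2

def make_cookie(ciphertext, version=CURRENT_VERSION):
    # The version byte is part of the MAC'd message, so an attacker
    # cannot relabel a v2 cookie as v1 to force the weaker scheme.
    header = bytes([version])
    tag = hmac.new(KEYS[version], header + ciphertext, hashlib.sha256).digest()
    return header + tag + ciphertext

def parse_cookie(blob):
    version, tag, ciphertext = blob[0], blob[1:33], blob[33:]
    key = KEYS.get(version)
    if key is None:
        raise ValueError("unknown version")
    expected = hmac.new(key, bytes([version]) + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return version, ciphertext
```

Without the version under the MAC, an attacker could present a current cookie under an old, weaker version and have it processed by retired code paths.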
I'm sorry, I don't get it. For session cookies, the client cannot do any validation of the cookie, so it's a completely different domain.
Also, unlike session cookies, SSLv2 never had a hard expiration date after which it could be unconditionally rejected.
However, I did just realize that for perfect security, there must be a service disruption on change of version. Otherwise you may be upgrading an attacker's forged v1 cookie to v2, if they submit a request before the v1 expiration.
Upgrading cookies is a bad idea. Revoking them and requiring reauthentication is better. See the talk I gave at Google on web crypto where I talk about exactly that situation.
My answer: I'd implement a real SSO solution, like CAS: http://www.jasig.org/cas -- Open Source, started at Yale, and supported by an amazing team of people.
This is a great example of warning signs. Note the number of times sizes (bits) and algorithm (AES/Rijndael) are mentioned. Note the lack of clarity in the specification (is "+" addition, XOR, or concatenation?) Note the overly-large UID space which has nothing to do with security (will you really have more than 4 billion users?)
Now look at how useful this design is. You've created a single-purpose authenticator that just says "someone at some point in time has seen this exact UID/authenticator pair". All other features (expiration, privileges, source of the authenticator, version, system name the UID is for) are left out.
Our UIDs share a namespace with our note IDs; we don't see a reason to limit it to 4 billion objects (especially since the objects are versioned with revision IDs).
And your post, sir, is a great example of not understanding the intent of security. There's always a tradeoff between the required level of security and the cost of implementing it. Security is a matter of tradeoffs.
Even with your proposed solution, there are still security breaches. How does your solution handle the situation where someone hijacks the user's machine and uses his browser session? How does your solution handle the case of a keylogger stealing the user's userid/password and starting a valid session?
See, there are always ways to make any security solution fall short. Security is always a balance between a number of competing goals. If the solution satisfies the requirements, it's a sound solution.
The point of the article is that some people use encryption to try to achieve authentication, which doesn't work. You're right. Signatures are appropriate for providing authentication. HMAC is a form of symmetric authentication.
Heh; an illustration that the real art here is in cryptographic protocols, not crypto black boxes like AES (well, as long as your "random" numbers are random enough, which turns out to be an issue in the VPS world).
I was lucky enough to learn this in 1979 while in a group trying to puzzle out how to make an authentication system based on public keys (I've read they didn't get it right until the mid-80s).
Final note: these protocols are fragile. If e.g. Microsoft implements Kerberos but changes one little thing without explanation, let alone a serious and public security review, don't trust it!!!
- Harold, a proud member of the Do It Right Society.
Also, don't trust Kerberos. It was built long before things like SRP existed and hasn't caught up with the times. It hasn't fallen apart yet, but it's creaky.