Doesn't that Schneier quote only apply to symmetric algorithms (e.g. AES)? I thought asymmetric algorithms (e.g. public-key systems like RSA, as in the original question) have completely different characteristics and rules.
"NIST key management guidelines further suggest that 15360-bit RSA keys are equivalent in strength to 256-bit symmetric keys."
Although you're right that a 256-bit RSA key is weaker than a 256-bit AES key, this is not an intrinsic property of symmetric vs asymmetric encryption. It has to do with how you search the key space. To break 256-bit RSA encryption, it is sufficient (not known whether it is necessary) to factor the 256-bit modulus n, whereas to break 256-bit AES, you have to essentially run AES with the whole keyspace of 2^256 keys.
If you are trying to brute-force an RSA key, the fact that the two factors which produce the modulus must be prime has the potential to greatly reduce the search space. For small keys it is practical to precompute what this limited search space is, so of the 2^N possible keys you can be fairly sure that you only need to check a much smaller subset of them. You still have to use brute force, but unlike with symmetric methods your search space can be much smaller than the entire keyspace. For larger keys this precomputation (and storage) becomes impractical with current technology. I have no idea off the top of my head where the cut-off currently falls relative to 256-bit keys, though.
I doubt this approach makes sense for any size prime. Let's assume a 50-digit number to factor. According to http://primes.utm.edu/howmany.shtml, there are over 10^21 primes less than 10^24. Let's assume you manage to store 10^3 of them in a byte of memory. Then, you would need 10^18 bytes, or a million terabytes of storage to store those primes.
That isn't practical, so let's scale back to primes around 10^20 (around 10^18 different primes). Then, you would need around a thousand terabytes (still assuming that you manage to compress your 64-bit-or-so primes to fit 1,000 of them into a byte).
I'll assume you build the machine with those drives and can do trial divisions in parallel: 1,000 CPUs, a billion divisions per CPU per second. You would still need 10^6 seconds to do a complete search over all primes. Rounding down, that is a week.
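For anyone who wants to poke at those numbers, here's a quick back-of-the-envelope sketch in Python. It uses the x/ln(x) prime-counting approximation and the same assumed constants as above, so treat the output as order-of-magnitude only:

    import math

    limit = 10**24                            # bound on the factors of a ~50-digit semiprime
    primes_below = limit / math.log(limit)    # x/ln(x) approximation: ~1.8e22 primes

    primes_per_byte = 1000                    # the (very generous) compression assumed above
    storage_bytes = primes_below / primes_per_byte       # ~1.8e19 bytes -- exabyte territory

    # The scaled-back version: only primes up to 10^20.
    small_limit = 10**20
    small_primes = small_limit / math.log(small_limit)   # ~2.2e18 primes
    small_storage = small_primes / primes_per_byte       # ~2.2e15 bytes, a couple of petabytes

    divisions_per_second = 1000 * 10**9                  # 1,000 CPUs, 1e9 trial divisions each
    seconds = small_primes / divisions_per_second        # ~2.2e6 s, a few weeks

    print(f"{primes_below:.1e} primes, {storage_bytes:.1e} bytes")
    print(f"scaled back: {small_primes:.1e} primes, {small_storage:.1e} bytes, {seconds:.1e} s")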
> Then, you would need 10^18 bytes, or a million terabytes of storage to store those primes. That isn't practical, so let's scale back to primes around 10^20 (around 10^18 different primes)
Actually, a million terabytes (an exabyte) is really quite feasible. Not on an individual scale, but I would be very surprised if governments (all of them, really) did not have this sort of thing set up.
Do we actually know that it's not an "intrinsic property of symmetric vs asymmetric encryption"? It seems like finding an asymmetric algorithm that's comparably efficient to our symmetric algorithms is a long-outstanding problem in crypto.
Keep in mind that theoretical constructions of symmetric ciphers are nowhere near as fast as practical constructions like AES. The real issue is that no public-key algorithms are known other than those that are based on theoretical constructions.
Also, our knowledge of complexity theory is not sufficient to show that cryptography is even possible. I suspect that improvements in our knowledge of complexity theory will greatly improve our cryptographic primitives, in both security and efficiency.
> Keep in mind that theoretical constructions of symmetric ciphers are nowhere near as fast as practical constructions like AES.
What I hear you saying is "AES was designed with a practical implementation in mind, whereas asymmetric constructions were more 'discovered' from theoretical work that's often unwieldy when reduced to practice". Is this about right?
> The real issue is that no public-key algorithms are known other than those that are based on theoretical constructions.
While it is true that AES was designed with practicality in mind, that is not what the difference between a theoretical construction and a practical construction is about. The theory of cryptography is based on complexity-theoretic arguments (or in some cases, information-theoretic arguments) about cryptographic constructions, essentially showing that any algorithm that can be used to violate some security property of the system can be used to solve some hard problem. For example, in the case of the Goldwasser-Micali system, any chosen plaintext attack can be used to solve the quadratic residuosity problem efficiently. On the other hand, there is no such proof for AES; the evidence in favor of the security of AES is heuristic, based on a combination of statistical tests, resistance to known attacks on block ciphers, and other measures that have been developed over the past few decades.
This is not to say that AES should not be trusted. AES is a fine cipher, it is efficient, and unless someone can show us a practical attack the heuristic evidence is pretty strong.
Now, as for Merkle puzzles, that system is not considered secure by cryptographic standards. A cryptographic construction is not secure unless it requires the adversary to do work that is exponential in some parameter in the system (the security parameter), while parties that know some secret (such a key) only do work that is polynomial in all parameters of the system. In the case of RSA, for example, parties that are aware of the secret key must do work that is cubic in the security parameter, while the adversary must do work that is exponential in the cube root of the security parameter. Whether such systems actually exist is still an open question, as it turns out; a positive answer to this question would imply that P does not equal NP. Cryptographers generally assume certain truths about complexity theory, beyond the P vs. NP problem, and cryptography research has actually opened new areas of complexity theory that are based on such assumptions (such as the notion of knowledge complexity, which emerged from the work on zero knowledge proof systems).
The reason I brought up Merkle puzzles is because they depend on a symmetric cipher or one-way hash function, giving them one foot in that first category.
But remember that before we build our Dyson sphere, we may get quantum computers, which would make some of these attacks polynomial rather than exponential. Cryptosystems based on discrete log, EC discrete log, or prime factorization (pretty much all existing crypto infrastructure) are vulnerable.
Shor's algorithm lets us factor numbers in polynomial time. This would break RSA, but not every algorithm in general. I think, but I'm not 100% sure, that elliptic curve methods have no known polynomial time quantum algorithms, for example.
The person quoting Schneier said: "... will be infeasible until computers are built from something other than matter and occupy something other than space"
So are quantum computers not from matter and not in space? :)
Yes. The fundamental energy requirements come out of the destruction of information from a bit flip (or from a 2 bit -> 1 bit gate). Qubits don't undergo that transition - until the final measurement, they're in a superposition of both states, so the transformations are really just rotations and reflections of a state, which in principle are not subject to the same requirements. Also, the real power of quantum computing comes from the ability to 'test multiple answers' at once, at least in principle. In reality, we've only figured out how to do this for a small subclass of problems.
For symmetric cryptosystems search difficulty is halved (a 256 bit AES key under a quantum bruteforce becomes as 'hard' as a 128 bit key under conventional bruteforce).
For asymmetric systems, search is reduced to polynomial (if using the discrete log, EC discrete log or prime factorization).
For some public key systems, the search becomes polynomial. Luckily, we know of ones for which that is not the case (as far as we know), and so if a practical quantum computer could be built we would just deploy new cryptosystems (pricey, but not end-of-the-world pricey).
With RSA, knowing either of the factors is enough to break the key. Thus, you can brute-force 256-bit RSA with roughly the same number of tries it takes to brute-force 128-bit AES.
Or, in other words, when you create a 256-bit RSA key, it's guaranteed that one of the factors will have at most 128 bits. Thus, it's only about as strong as a 128-bit AES key against brute force.
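A toy sketch (tiny numbers, obviously) of why the smaller factor bounds the brute-force work:

    import math

    def smallest_factor(n):
        # Trial division: never needs more than ~sqrt(n) attempts.
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d
        return n                      # n itself is prime

    p, q = 61, 53                     # two tiny "RSA" primes
    n = p * q                         # 3233, a 12-bit modulus

    f = smallest_factor(n)
    print(n.bit_length(), f, f.bit_length())   # 12 53 6
    # The modulus is 12 bits, but the factor recovered is only 6 bits:
    # the brute-force work is bounded by the smaller factor, not the whole modulus.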
But anyway, that's a moot point, because cryptanalysis exists, and there are better-than-brute-force attacks against both algorithms.
This is a very good explanation of why 256 bits is enough against a brute force attack. The goal of breaking cryptographic systems is of course not brute force, but to reduce the actual key space you have to search. This is frequently measured in how many redundant bits you can shave off. So for AES, we might find in the future that 256 bits isn't enough after all.
No, that's not the way to go for countermeasures against side-channel attacks (breaking a 4096-bit key in asymmetric crypto is also doable using side-channel attacks, even in the cloud [1]). Since it is a physical attack rather than a mathematical one, the countermeasures are at the physical level too: the idea is to make the cryptosystem leak less information through channels such as power consumption, computation or memory access time, electromagnetic radiation… Currently most of these countermeasures are at the hardware level.
One reason for extra large keys is for headroom in case of partial breakage.
Usually when a cipher is broken, they don't break it fully, but rather reduce the keyspace. By making your starting keyspace even larger you make even a broken cipher secure (up to a point).
You are correct, but it's also extremely educational to graph the decrease over time for various cryptosystems, both real/professional and "roll your own".
The difference between a professionally designed algo and a "roll your own" algo is that the roll-your-own is usually extremely brittle and shatters to nothing in one discovery, whereas a professionally designed algo usually has multiple generations of grad students chip away at it, maybe an average of 10 bits per decade, as newer forms of analysis are slowly invented and/or declassified.
If you roll your own, it's at least several orders of magnitude more likely to break than a pro algo, and when it does, unlike the pro algo, it'll almost certainly snap all the way at once, so using a large keyspace just means more of an epic fail and wasted processor power. On the other hand, if you use a real algo, then a loss of 20 bits per decade or whatever the average is really does help, so crank the keyspace to the max. So I'd have to disagree with you there: if you're using a broken (homemade?) algo, don't bother with more than 8 bits or so; any more is just wasting processor time, and homemade is only useful for casual obfuscation. If you're using a real algo, then, and only then, a larger keyspace actually will save you over the long run...
I wish I knew more about encryption. Unfortunately it isn't a very accessible area, even if you have a degree in CS you aren't really qualified to even touch the stuff, you need a maths degree at the very least.
Plus every time anyone brings up the subject the answer is always the same: "If you roll your own crypto you're an idiot." Which is not very encouraging. Likely why crypto libraries are so antiquated and terrible.
1. "Rolling your own crypto" meaning "inventing your own algorithms".
If you invent your own algo for creating one-way hashes, performing symmetric encryption etc and are not an active, up-to-date cryptographer ... you will make a mistake.
This is a world for academics and researchers; for open publication of algorithms followed by the open publication of attacks on those algorithms.
2. "Rolling your own" meaning "Assembling known cryptographic primitives into a larger system".
If you assemble well-studied primitives (SHA-2, AES, HMAC) into larger systems ("I've invented a simple messaging system, guys!"), you are probably still mistaken.
But at least you won't be as much of a professional embarrassment (unless you're keeping passwords in plaintext or some such thing).
At this level, you're between academia and industry. Design at this level is mostly about using certain rules of thumb, exercising defence-in-depth and taking inspiration from existing designs that are battle-hardened. For example, when using multiple kinds of cryptographic operation, have different keys for each operation to mitigate potential bit-leaking attacks.
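As a concrete (and deliberately minimal) illustration of that rule of thumb, here's one way to derive separate sub-keys from a single master secret using nothing but the Python standard library; real designs would use a vetted KDF such as HKDF, and the labels here are just made-up examples:

    import hmac
    import hashlib
    import os

    master_key = os.urandom(32)      # illustrative master secret

    def derive_key(master, purpose: bytes) -> bytes:
        # Derive a purpose-specific sub-key so encryption and authentication
        # never share key material (HMAC used here as a simple KDF).
        return hmac.new(master, purpose, hashlib.sha256).digest()

    enc_key = derive_key(master_key, b"message-encryption")
    mac_key = derive_key(master_key, b"message-authentication")

    assert enc_key != mac_key   # compromise or misuse of one doesn't directly expose the other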
The good news is that you can get professional reviews done. Seek out respected security experts and ask if they are prepared to exchange labour for currency.
There's also "1.5": Implementing well-known algorithms yourself, which, given the reference to poor crypto-lib, I suspect is what the GP referred to. I'm not an authority on the subject, but I believe that if you do this, you risk stuff like leaving keys (or parts of keys) in memory, being susceptible to timing attacks or not using PRNGs properly.
Most of the implementations I regularly see keep plaintext in memory and generally say it's not an issue.
Then they hash passwords, because plaintext passwords are dumb.
We're doomed, I'm telling you, doomed. I sometimes want to just yell: ALL YOUR PLAINTEXT PASSWORDS ARE.. PLAIN TEXT IN MEMORY RIGHT NOW, DUMBASS.
But obviously, indeed, I just explain it nicely (and generally still get ignored, mind you).
Oh, and hi Python developers: you can't mlock memory, let alone wipe it properly (yeah, I know, you can erase it with some ctypes binding and _hope_ it didn't hit swap).
If you reach the point where in-memory values are an issue, you have already lost.
If I can pull the in-memory password value out of your program, I also have access to inserting my own code into your process to prevent any hashing attempts you may make on that memory. Once memory safety comes into play, all bets are off, and all security is irrelevant, as all your security is simply data in memory as well.
The reason we don't store plain text passwords to disk is because stuff on disk tends to be backed up and placed in positions where it has a chance to be read by others. Storing plain text passwords is not actually a security problem in itself if you can guarantee safety of those plain text passwords - but so far, such guarantees tend to be very false.
The danger zobzu referred to was that the pages containing sensitive information might be paged out to disk. If that should happen, you're pretty much out of luck: they're just as vulnerable as anything else on disk.
The only defense against this is to make sure that the sensitive information is never paged out, using either mlock() or just not having any swap on your system.
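For what it's worth, here's a rough sketch of that idea from Python via ctypes, assuming Linux and glibc; it's only a partial mitigation, since other copies of the secret can still exist elsewhere in the process:

    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.mlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
    libc.munlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

    buf = ctypes.create_string_buffer(32)              # fixed buffer that will hold the secret
    addr, size = ctypes.addressof(buf), ctypes.sizeof(buf)

    if libc.mlock(addr, size) != 0:                     # pin the page so it can't be swapped out
        raise OSError(ctypes.get_errno(), "mlock failed (check RLIMIT_MEMLOCK)")

    buf.raw = b"hunter2".ljust(32, b"\x00")             # the secret lives only in the locked buffer

    # ... use the secret ...

    ctypes.memset(addr, 0, size)                        # best-effort wipe before releasing
    libc.munlock(addr, size)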
There's also the class of bug whereby the software starts spitting out stuff from somewhere in memory that was never meant to be displayed, a la the Minus World in Super Mario Bros. Those are much less common in these days of memory protection and high-level languages, but it is still a concern.
That's not entirely correct, in fact, far enough from it to warrant a reply.
Wiping memory ensures that the attacker doesn't get the past 29329847 logins, only the ones from the moment he compromised the system. Thus, it's quite a win.
There's no easy "all bets are off" escape from properly coding apps so that sensitive data is kept only when needed, for the time needed, never more.
"If you roll your own crypto you're an idiot" is technically correct, but discouraging in all the wrong ways and worse for everyone in the long term. I've been a teaching assistant for MIT's introductory graduate crypto class, and I still don't consider myself qualified to implement core crypto algorithms. But I've seen -- and used -- entirely too many crypto implementations that are obviously poorly written in my eyes.
I would love to see a solution to the fearmongering that still keeps people aware of their mistakes. Perhaps public development is the answer -- many eyes make all bugs shallow, so all we need is more people who feel like their eyes are qualified. How many people have actually read the OpenSSL or gnutls or GPG source with an eye to its security?
Probably the entire community of security researchers did review OpenSSL, gnutls or GPG, maybe only once, but they did. (Didn't you?)
Anyway, public development was the answer that I'd give here. Keep in mind that even the researchers often make mistakes when they implement the algorithms, and the main reason that their implementation is better than the GP's or mine is not because they know more (although they do know more), but because there are knowledgeable people doing QA on it.
Anyway, you are an idiot only if you use your own implementation for anything serious. There is no problem in rolling your own crypto if you only do it for fun.
People should really be saying "If you roll your own crypto, and then use it, expecting to stay secure against GCHQ and NSA, then you're an idiot".
But rolling your own crypto, then attacking it, is probably a good learning exercise. Having a group of people rolling their own crypto, and then attacking each other's attempts is probably better.
Somewhere there's a list of common mistakes people make, and it's surprising how often those mistakes are found in commercial "MILITARY GRADE ENCRYPTION!"
Doing crypto right is really, really, really hard. Not only do you get to fight against advanced math, but if you get even the tiniest detail wrong somewhere, leak even a little bit of state, even if what you leak doesn't even look like state, someone will hang you with it.
Such as: does the length of time taken to de/encrypt ever vary based on the internal state? And in your answer, do consider all the interesting ways in which microarchitectural details like branch prediction or the TLBs of the CPU you run the code on affect your code.
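A toy example of that kind of leak at the software level (the microarchitectural effects are much harder to demo in a few lines): a naive byte comparison bails out at the first mismatch, so its running time depends on secret data, whereas the stdlib's hmac.compare_digest doesn't:

    import hmac

    def naive_equal(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:          # early exit: running time depends on how many leading bytes match
                return False
        return True

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        return hmac.compare_digest(a, b)   # stdlib constant-time comparison

    secret_tag = b"\x13\x37\xc0\xff\xee\x00\x00\x00"
    guess = b"\x13\x37\x00" + b"\x00" * 5

    print(naive_equal(secret_tag, guess))          # False, after only 3 byte comparisons
    print(constant_time_equal(secret_tag, guess))  # False, in time independent of the data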
Time and time again, organizations with lots of money have decided to roll their own crypto to solve a problem they have. And it has almost inevitably failed.
> Time and time again, organizations with lots of money have decided to roll their own crypto to solve a problem they have. And it has almost inevitably failed.
Your last sentence is kind of hard to believe. I'm sure nobody ever even attempted to crack the vast majority of stuff that's encrypted out there.
I took Dan Boneh's "Cryptography I" course this year on coursera and I learnt a lot. The mathematical basics are covered (with references to books and free resources if necessary) and the concepts are very well explained. I would definitely recommend taking it. There isn't a new offering planned yet (Crypto II will start in January), but I think it will be proposed again soon considering that they had two runs of the course (spring and summer) in 2012.
Crypto 1 just "ended". The final exam came out today.
It's been good, in that I never could have followed this thread before it. I don't know about the first run, but there has been little to no professor interaction with this run of the course.
Which is a bit unfortunate. I don't have any education in modular arithmetic and I can only get half of those questions correct. Being completely lost as to why the other half are wrong, and being unable to get help or worked out answers (to be able to figure out where I'm going wrong before the final) really isn't the best.
Unfortunately I think this may be what we can expect from Coursera. If you're ready for it then it's great. (I've put Boneh's Crypto 1 in the top ten of courses I ever took.) But if you're not familiar with modular arithmetic then you're definitely not familiar with abstract algebra, in which case yeah you need to study that first.
(I didn't realize there was a fall run!)
As for modular arithmetic, did you check out the free book by Victor Shoup (it's mentioned on the course site)?
http://shoup.net/ntb/
As for professor interaction, it is a byproduct of the MOOC formula: don't expect too much from the professor, he is already teaching thousands of students! You'll have to manage your way through the discussion forums (which unfortunately have a lot of noise). During all the courses I took, I always managed to find answers that way.
Thanks for the link, for some reason I thought that book was on a different topic. I've found a series of lectures from Harvard Extension using the Artin Algebra book. I'm going to use that (and the new edition of the book) to pick up on the concepts I'm missing (over the course of the next few months).
The previous course I took with Coursera had a decent amount of prof and TA interaction, and the forums were well tended. But I guess that could be down to the fact that the course was being used by the professor to instruct his actual university students at the same time.
By far the simplest encryption to understand is the factorization problem.
If you have two prime numbers it's trivial to multiply them and get the result. Or if you have the result and one prime number, it's also trivial to divide and get the second prime number. But if you have only that result, figuring out which two prime numbers divide it requires you to check every single one of them, which takes much longer.
Now imagine your password is one of the prime numbers, and your secret message is converted to a different prime number. Multiply your password with the secret message and you have your result. That's the encrypted message.
Without knowing what the password was, there is no feasible way to figure out what the message was - you would have to divide by every prime number looking for another prime number that evenly divides into the encrypted message. (Technically once you do that you don't know if that prime number was the message or the password.)
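If it helps, here's the scheme above written out literally in Python; to be clear, this is a teaching toy, not a real cryptosystem, and the specific primes are just arbitrary small examples:

    # A literal (toy!) version of the scheme described above -- not a real cryptosystem,
    # just a way to see "easy one way, hard the other way" in action.

    password_prime = 104729          # the "password", a prime (the 10,000th)
    message_prime = 15485863         # the "message", already encoded as a prime (the 1,000,000th)

    ciphertext = password_prime * message_prime          # encrypting is a single multiplication

    # Legitimate decryption: a single division, because we know the password.
    assert ciphertext % password_prime == 0
    recovered = ciphertext // password_prime
    assert recovered == message_prime

    # Attacker's view: trial division until something divides evenly.  The work grows
    # with the size of the smaller prime, and even on success you can't tell which
    # factor was the password and which was the message.
    def crack(c):
        d = 2
        while c % d != 0:
            d += 1
        return d, c // d

    print(crack(ciphertext))         # (104729, 15485863) after ~100,000 trial divisions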
This is the core of encryption - the trouble is the side-details can mess you up, and they are really not obvious. For example: How do you securely convert a message into a prime number? Or a password into a prime number?
You can't use the entire message as one enormous prime number, that's not feasible, so you have to break it up. How do you do that properly, without leaking information?
There are other issues with the math (I'm not familiar enough with encryption to even know what they are, but they are things like certain primes having patterns that make themselves obvious in the result). See http://en.wikipedia.org/wiki/Semiprime for some prime patterns to avoid.
Then there are issues with the implementation - if you are on a multi user system (or a remote system) you want to make sure people can't figure out the password by watching which system resources you use, or how long things take you.
And then issues with the algorithm: If you encrypt two documents with the same password it can be possible to compare them and figure out what you used. So you want to add some randomization to the password to make that impossible. But getting good random numbers isn't always easy. And you have to store that random number in the message so it can be reversed - but watch out that you didn't do that in a way that negates the advantage of the random number.
This system is (almost) just a variant of the one-time pad.
One-time pad: before Alice and Bob need to send secret messages to one another, they generate a large blob of random bits and each take one copy of it. They keep these bits very secret. When one needs to send an N-bit message to the other, s/he takes the next N bits of the blob and XORs them with the message. This is trivial to do and provably impossible to break -- except that making and sharing and keeping all that key data (the big blob of bits) is really inconvenient and easy to get wrong.
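The XOR step, for the avoidance of doubt, is about as small as crypto code gets (os.urandom standing in for the pre-shared pad here):

    import os

    message = b"ATTACK AT DAWN"
    pad = os.urandom(len(message))       # the pre-shared random bits (keep secret, never reuse)

    ciphertext = bytes(m ^ k for m, k in zip(message, pad))
    recovered  = bytes(c ^ k for c, k in zip(ciphertext, pad))

    assert recovered == message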
So, anyway, your system has the same weaknesses as any other one-time pad: you need new key data for every message you send, which means you have to make a lot of key data, share it with your partner in crime, and remember it for a long time, all without leaking any of it.
It's not quite the same as a one-time pad. You can get by with fewer bits of key than bits of message, provided there's still "enough" key. On the other hand, "enough" is quite a lot, much more than the key length of a typical symmetric cryptosystem like AES.
Anyway, I don't think this counts as "by far the simplest encryption to understand". A standard one-time pad is simpler and stronger, and the extra complexity of multiplying large prime numbers doesn't really buy you anything much to compensate.
I don't intend for anyone to actually implement this. It's just to understand how encryption can work. i.e. how it's possible for an operation in one direction to be quick and easy, but the reverse operation is hard and slow. (And why it's hard to implement properly.)
But you are right that as described it is a lot like a one time pad. (Especially since if you encrypt a known message you can instantly find the password.)
And your description makes little sense to me. I guess it is based on a bad understanding of RSA (http://en.wikipedia.org/wiki/RSA_%28algorithm%29). That does not convert your message to a prime.
Many, but not all, of those algorithms will work for numbers which are a product of two roughly even sized primes (depending on size). Especially the Quadratic Sieve and Number Field Sieve class of algorithms (NFS, GNFS, SNFS, etc).
Size is key though. A rough guide of the GNFS times for a quad core PC would give:-
(c100 means a 100-digit composite number; it doesn't matter if it is just the product of two evenly sized primes).
For comparative sizing:-
2^256 =~ c80
2^512 =~ c160
2^1024 =~ c320
So a 512-bit composite can be quite reliably factorised in a few months by a commodity PC. Parallelisation is a partial help, the first stage of the algorithm can be distributed, but the final stage(s) (~25% of the work) still requires a single machine to do the work.
Part of the hard work in choosing random primes to use within such encryption algorithms is avoiding numbers which are susceptible to each of those algorithms, e.g. avoiding primes with simple p-1 factorisations to avoid being susceptible to Pollard's P-1 algorithm.
The wrong choice of prime and the factorisation becomes (relatively) trivial.
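For the curious, here's a minimal sketch of Pollard's p-1 showing exactly why primes with a smooth p-1 are avoided; the bound and the toy factors are arbitrary:

    import math

    def pollard_p_minus_1(n, bound):
        # Finds a factor p of n when p-1 is made up only of small primes (roughly, up to `bound`).
        a = 2
        for j in range(2, bound + 1):
            a = pow(a, j, n)                 # a = 2^(2*3*4*...*j) mod n
            d = math.gcd(a - 1, n)
            if 1 < d < n:
                return d
        return None

    # 13 - 1 = 12 = 2^2 * 3 is very smooth, so 13 falls out almost immediately:
    print(pollard_p_minus_1(13 * 23, bound=10))      # -> 13
    # With well-chosen large primes, p-1 has a large prime factor and the method stalls.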
A 512 bit composite is not a lot of entropy - it's about 45 bits of entropy, which isn't much. sqrt(512) * 2 (since each prime is approx the size of the sqrt of the number, and there are two of them).
A few months to crack a 45 bit encryption doesn't sound quick to me. Even a 64 bit encryption would take many centuries at that rate.
Each prime is approximately the sqrt of the product, not of size equal to the sqrt of the size of the product; the size of the primes is approximately half the size of their product. As they are primes, their entropy is slightly lower than their length, but not by much (there are approximately x/ln(x) primes smaller than x, which would lead to the entropy of a 256-bit prime being about 248.5 bits).
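The 248.5-bit figure falls straight out of the x/ln(x) approximation, e.g.:

    import math

    bits = 256
    ln_x = bits * math.log(2)                 # ln(2^256) ~ 177.4
    entropy = bits - math.log2(ln_x)          # log2(2^256 / ln(2^256))
    print(round(entropy, 1))                  # ~248.5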
Schneier's Applied Cryptography is a very readable introduction.
If you want to get into the actual mathematics of it, then it is considerably more difficult. Luckily for me I took courses in pure math, RSA and some hashing functions at university, so I had to learn it :-)
As others have said, crypto is properly hard and it's very easy to make cock-ups. So unless you're prepared to do the years of work becoming a proper researcher, don't try and design any algorithms yourself (for fun is ok, but not for real world use).
Applied Cryptography is readable but not particularly instructive and occasionally misleading. It's not a good book on crypto. I'm not the only one who says this; Schneier himself alludes to it.
A better book is _Practical Cryptography_, now called _Cryptography Engineering_, which Schneier also co-authored.
Rolling your own version of a popular crypto algorithm using only the spec and then seeing how many of the well known attacks your implementation is vulnerable to is quite educational.
> I wish I knew more about encryption. Unfortunately it isn't a very accessible area, even if you have a degree in CS you aren't really qualified to even touch the stuff, you need a maths degree at the very least.
A good Number Theory course will cover much of the basics of the common asymmetric encryption systems. You don't need to do a whole Maths degree on top of a CompSci degree (although I've done exactly that).
But before that, I'd definitely recommend Schneier's Applied Cryptography.
I'm sure many CS graduate programs offer very good courses in cryptography. There are also books and online resources. Good ciphers/cipher schemes have withstood the test of time by large groups of experts and even under these circumstances some weaknesses are occasionally discovered. That's why a single person should not roll their own no matter what their qualifications are. The difference is those with more knowledge understand that.
In my experience, "don't roll your own crypto" primarily means "do not reimplement a well-known algorithm, because you are likely to get it wrong and likely to leak side channels out the wazoo". I think most people have realized by now that inventing their own algorithm is a terrible idea. (Although certainly there are people who haven't realized that!)
I think that most people mean inventing your own primitives by "don't roll your own crypto". Side-channel attacks on software implementations of cryptography are certainly practical, but often not significant for the overall security of the whole system (as either there are simpler attack vectors, or the system has to be already partially compromised for the attack to be possible).
The amount of home-grown algorithms (pseudo-"asymmetric" algorithms optimized for simple hardware or small messages), misused algorithms (RC4 without IV, mostly), otherwise secure protocols implemented with the parties transposed (many proprietary RFID and other authentication token protocols authenticate the reader against the token, not the other way around), and plain weirdness in many even newly designed commercial cryptosystems today is still more significant than any kind of side channel caused by a careless implementation.
Implementing a well-known algorithm can be a very good learning experience! I implemented AES-256 for my college project this semester. Learned a whole lot about cryptography that I never knew before. :)
> I'm sure many CS graduate programs offer very good courses in cryptography.
On the topic: I have an interest in crypto and for a while I was looking at grad schools for cryptography. The number that even offer a course is less than impressive, and the number that offer more than one class are shockingly few.
The reason 256 bit AES keys are large enough is that in order to brute-force them, one of 2 things needs to happen:
(1) We need to build computers out of something other than matter
or
(2) A weakness in AES must be discovered that reduces the search space significantly
We can disregard (1) as it will probably require changing (almost) all encryption algorithms that we currently use. (2) will most likely break all key-lengths so any use of AES will be weakened.
Interestingly with regard to (2), there is actually a (totally impractical) attack on AES that only affects the larger key 192 and 256-bit versions and not 128-bit AES - http://eprint.iacr.org/2009/317
Wikipedia _does_ put AES inside Category:Broken block ciphers.. "Widely cryptanalysed ciphers like Advanced Encryption Standard are considered stronger than un-cryptanalysed ciphers even if there are impractical attacks against them."
To be more specific: the only known things quantum computers offer dramatic (superpolynomial) speedups for are factorization, computing discrete logarithms, and things that reduce to those. This happens to include all currently popular public-key cryptography. Fortunately, that is typically unrelated to symmetric-key cryptography.
At its core, to do public-key cryptography, you just need a difficult instance of an NP-but-non-P problem. Quantum computers put factorization and discrete logarithms into P, but there are still plenty of harder NP problems that have not been tapped. If quantum computing starts to become a significant threat, we'll probably see renewed interest in those.
Maybe I'm misunderstanding but if you had one of these you would have answered a fairly important question in CS theory. Perhaps you mean to say "NP-complete", but even then I'm not sure the proposition is correct.
You're right that my statement wasn't specific enough; I was going for brevity. The precise statement is: You need a problem with a known polynomial-time nondeterministic algorithm, but no known polynomial-time deterministic algorithm.
A very important point though: public-key cryptosystems rely on one-way functions, functions that are hard to invert in the average case, NOT just the worst case, which is not your typical complexity-theory mindset. So, I would be careful when talking about complexity classes in the context of crypto because it can be very, very easy to trip up and say something that's either not comprehensive or just downright incorrect.
I don't really like rocking the boat... but quantum cryptography can't really be used in practice.
Or, better: any practical implementation of quantum cryptography is susceptible to attacks. And the entire thing still needs an authentication method; guess what we use for authentication nowadays.
It is being used in commercial systems (e.g. idquantique). Practical implementations are prone to vulnerabilities, but none have been found that cannot be remedied. Some techniques such as device independent or measurement device independent QKD offer to make it far more difficult to come up with attacks in the future too.
The #1 advantage of QKD is not that the methods being used today will be immune to all attacks found in the future. It's that quantum states are unclonable, so there's no way to archive ciphertext for future attacks, as can be done with classically encrypted messages. e.g. If you encrypt a message and send it via email today, the encryption method has to stand up to advances in algorithms and computational hardware for as long as the information remains sensitive. If you send something via QKD, an eavesdropper must break the protocol at the moment you send the message or it will be safe for all time.
Authenticating strangers, as in credit card transactions, is something that quantum computing may disrupt. QKD can be used safely by people who have met at some point in the past, but we probably have a bit of time before CC transactions need to be encrypted by QKD. i.e. While you should probably not send medical records or state secrets via many classical encryption protocols now, your CC info will change in a couple years so it's not as big of an issue if your transactions are cracked a few years after that.
Quantum computers have a fairly well-studied mathematical abstraction about which we can reason quite well. Algorithms for quantum computers exist which will break RSA; no similar algorithms exist to break AES.
And the link actually only applies to traditional computers; quantum computers have no analogue of destructively setting a bit, because all operations on a quantum computer are fundamentally reversible, and hence do not entail any inherent energy loss to entropy.
Public key crypto schemes (RSA, elliptic curve etc.) are vulnerable to some variation of Shor's algorithm. We'll have to move to quantum crypto when this happens. Hell for all I know D-Wave's machines can already break RSA for the DHS.
"And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space."
Or about a computer that isn't made of matter as we understand it...
Using brute force is not very effective on 256-bit keys. But this doesn't exclude the possibility of a breakthrough that enormously reduces the search space. For example, even for AES there is an attack faster than brute force by a factor of about four!
http://research.microsoft.com/en-us/projects/cryptanalysis/a...
You are confusing different things. You are talking about hash algorithms, whereas the OP is about encryption algorithms. And it is not proper terminology to refer to hashes as "keys".
What you are trying to convey, is that hash algorithms need to generate hash sizes of 512 bits if they want to make collision generation as hard as bruteforcing a 256-bit symmetric encryption key. This is true. But please use proper terminology.
A birthday attack reduces the number of keys you have to search by half, not by half the number of bits. This means that you would have to search 2^255 keys instead of 2^256 keys.
Look again. Finding a collision in a hash function with n possible values takes an expected number of guesses proportional to sqrt(n). If n = 2^256, then sqrt(n) = (2^256)^(1/2) = 2^(256/2) = 2^128.
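You can see the sqrt(n) behaviour directly with a small simulation (a toy 20-bit "hash", nothing more):

    import math
    import random

    def guesses_until_collision(n):
        seen = set()
        guesses = 0
        while True:
            guesses += 1
            v = random.randrange(n)          # a random "hash output"
            if v in seen:
                return guesses
            seen.add(v)

    n = 2**20                                # toy hash with 2^20 possible values
    trials = [guesses_until_collision(n) for _ in range(200)]
    print(sum(trials) / len(trials))         # ~1.25 * sqrt(n) ~ 1280, nowhere near n/2
    print(math.sqrt(n))                      # 1024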
" To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzman constant."
How does that work? It seems odd that it is independent of the size of the system.
That comes from the second law of thermodynamics. Setting a bit on the computer reduces its entropy, so it must dissipate enough heat to increase the entropy of the Universe by at least the same amount.
Notice that this only applies to setting it into a known value. In theory there are some operations that we can do without dissipating any heat.
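For anyone who wants to see the thermodynamic argument in numbers, here's a rough sketch using the Landauer bound (kT ln 2 per bit operation); the temperature and solar-output figures are approximate:

    import math

    k = 1.380649e-23        # Boltzmann constant, J/K
    T = 2.7                 # roughly the cosmic microwave background temperature, K

    energy_per_flip = k * T * math.log(2)          # ~2.6e-23 J per bit operation
    total = energy_per_flip * 2**256               # just to count a 256-bit counter through all states

    sun_output_per_year = 3.8e26 * 3.15e7          # ~1.2e34 J (solar luminosity * seconds per year)
    print(f"{total:.2e} J, about {total / sun_output_per_year:.1e} years of total solar output")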
For further reference and a more detailed discussion of algo / key length selection take a look at NIST's Recommendation for Key Management [1], specifically section 5.6 "Guidance for Cryptographic Algorithm and Key-Size Selection."
I have to note that the length and strength of your AES key is no different than the complexity of your password. Once either is compromised, you are totally exposed. So your attacker will take the path of least resistance: he can either brute-force the key, or he can just break into your server and steal your config file, or hit you over the head with the proverbial $5 wrench.
Quantum computers can, in general, provide up to a quadratic speedup over classical algorithms and WILL in fact be enough. Once we have QCs all bets are off.
If QC is even possible (it seems an awful lot like cold fusion at this point -- always on the horizon, yet some issue prevents it from being realized), it wouldn't kill crypto. A hypothetical "triple-AES" would mitigate a quadratic attack. Lattice/linear code cryptosystems can provide security against quantum computing for public key systems. All a QC would do to cryptography is force everyone to use different algorithms, which is about as useful as forcing everyone to wear different colored clothing.
No, not really. Run the same example from the article with a quadratic speedup and see how scalable it is.
56 bit DES should be nervous. 256 bit AES, not so much.
There is a tiresome often repeated quantum computing meme or anti-pattern where QC is a magic implementation such that "my magic can do anything I can dream of". Well no, not really. Oh it could be powerful, maybe even implementable, but not limitless.
To clarify a bit further, only a few cryptosystems (certain public key systems) are really threatened by quantum computers.
Theoretically, literature indicates that quantum computers are really good at factoring large numbers [1], and they are really good at solving discrete log problems [2].
So yes, QCs are theoretically able to crack in polynomial time some systems whose security depends on those problems being hard (eg ElGamal, RSA). But systems that don't rely on large primes/discrete logs (eg symmetric algorithms like AES) are not vulnerable to quantum computers. (At least not vulnerable in the "throw them out, they're useless now" sense.)
In fact, there's even asymmetric cryptosystems that aren't threatened by quantum computers [3]. Again, any system that doesn't rely on large primes, or the difficulty of the discrete log problem, is not significantly threatened.
IMO the most vulnerable part of AES is not the side channels but key generation. Compromise the source of entropy, preferably only for processes with specific signatures, just enough to be crackable on demand, then capture everything on the wire remotely.
bcrypt is an algorithm for securely hashing passwords; that is something you want to be slow, yes.
However, the discussion in the link is about encryption, which you want to be fast. A popular webserver serving HTTPS to thousands of clients can't afford a 10x overhead on the speed of encryption.
Encryption and decryption speeds need not be symmetric. Salted passwords are a stereotypical example. Let's say my password is "password" and I salt it with a random 8-bit value to make rainbow tables 256 times longer than unsalted. I can encrypt the salted password in one round.
However, on the decrypt side a salted password is going to have to average 128 rounds before a match, and a worst-case failure / non-match means you have to run all 256 rounds just to make sure.
Obviously we're talking salty passwords not secure web traffic, but the point is off the shelf systems have existed for decades where the decrypt system is orders of magnitude more expensive than the encrypt system. Implementing an algo with those characteristics for web traffic "superficially seems feasible" although the details are fuzzy.
As an example of a rough guess, 16 bits of salt would scale pretty well for "thousands of clients" if the server and client are more or less similar performance level (which within an order of magnitude or two is probably about right). Simple salt is the wrong algo for this job, but...
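A minimal sketch of the mechanism being described (often called a secret salt or "pepper"): the salt is thrown away instead of stored, so writing the record costs one hash while verifying it costs up to 256. The stdlib functions here are just the obvious choices, not a recommendation:

    import hashlib
    import secrets

    def make_record(password):
        salt = secrets.randbelow(256)                       # 8-bit salt, deliberately discarded
        return hashlib.sha256(bytes([salt]) + password.encode()).digest()

    def check_password(password, record):
        # Average ~128 tries on success, all 256 on failure.
        return any(
            hashlib.sha256(bytes([salt]) + password.encode()).digest() == record
            for salt in range(256)
        )

    record = make_record("hunter2")          # cheap for the side creating the record
    print(check_password("hunter2", record)) # True, but ~128x the work
    print(check_password("letmein", record)) # False, after all 256 tries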
bcrypt is actually a cipher, like standard blowfish but with a much more expensive key expansion function.
When you call the bcrypt "hash function", what it is really doing is performing this key expansion then using the resulting key to encrypt a constant string.
GP is right - you can use bcrypt as a regular cipher, and the extended key schedule will make brute-forcing harder.
It is inefficient and unnecessary though - a 384-bit key will not be attacked by brute force anyway, and blowfish's small block size of 64 bits can be problematic (you need to change the key quite frequently to prevent active attacks.)
There are a lot of applications where we want encryption to be faster. The slowness of SSL delayed its widespread adoption until _very_ recently, and is still delaying it, for instance. And ciphers are often much faster on dedicated hardware (which a good attacker will have) than in software on a general-purpose chip.
It's better to make brute-force attacks infeasible by increasing the search space than by making each individual encryption take longer.
I think the 256 bits are a buffer against non-brute-force attacks, such as undiscovered algorithmic weaknesses which may speed up cracking by some factor.
What exactly are you trying to imply with this quote? It seems like you're suggesting that Schneier's quote may become infamous one day should 256-bit key cryptography become crackable, which it might, but you say nothing as to how Schneier's argument could ever be wrong. He is saying that thermodynamically speaking, brute-forcing 256-bit key symmetric cryptography is unlikely to ever be possible. Do you think that to be false?
"NIST key management guidelines further suggest that 15360-bit RSA keys are equivalent in strength to 256-bit symmetric keys."
http://en.wikipedia.org/wiki/Key_size#Asymmetric_algorithm_k...