That's my favorite quote from Atwood! People are so prone to forget that while cryptographic algorithms are provably secure (under practical constraints) in a mathematically rigorous way, their implementations are subject to all of the shortcomings of any engineering practice. Makes quick work for an attacker trying to figure out where to start.
It's my understanding that most (all?) public key cryptographic algorithms aren't provably secure, but are conjectured to be. They are reliant on some problem being hard to solve (factoring of large integers, discrete log, etc.).
Something like a one-time pad is provably secure, however.
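To make that concrete, here's a toy sketch in Python (illustrative only, not production code): a one-time pad is just XOR against key material that is truly random, at least as long as the message, and never reused.

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    # Information-theoretic security requires the pad to be truly random,
    # at least as long as the message, and used exactly once.
    assert len(pad) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))  # fresh, truly random key material
ciphertext = otp_encrypt(msg, pad)
assert otp_encrypt(ciphertext, pad) == msg  # XOR is its own inverse
```

Every plaintext of the same length is equally consistent with a given ciphertext, which is what makes the proof unconditional; reuse the pad even once and that guarantee evaporates.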
This is a common misconception. The algorithm itself is provably secure, in the sense that violating the stated security guarantees of the algorithm is equivalent to solving a problem that's considered to be computationally intractable. The only part that isn't 'provable' is the basic assumption that the problem is intractable in the first place.
Didn't you just agree with him but substitute 'hard to solve' with 'computationally intractable'?
Yes, based on our understanding today these things are computationally expensive (i.e. not feasible), but they could theoretically be easy to crack given a mathematical breakthrough.
Am I misunderstanding?
As the field of mathematics advances there's a chance that current crypto will be broken. Why is this a misconception to point out?
Why is it not, on some level, conjecture to say these systems are secure?
It hasn't been implemented yet in any practical crypto system that I know of, but it certainly seems like we are finally going to have actual, provably hard problems to build our security on.
It says you either have to use exponential time or quadratic storage. Schemes based on high memory requirements have actually been sought for a while, since (apparently) memory is considered less scalable than computation.
No, I mean your original comment is inaccurate. The paper presents a time-space lower bound for parity learning, but the encryption scheme based on this result is only 'unconditionally secure' in a model where the adversary is restricted to having at most (n^2)/25 bits of storage. This isn't a general-purpose unconditionally secure encryption scheme, which is what your original comment implied.
All proofs have axioms if you chase them all the way down. Given the axiom that solving the math in the crypto is intractable, the crypto algorithm is proven secure. But only so long as the axiom holds.
For example, quantum computing may break the axiom, and then the proof will be invalidated.
It might be more correct to say assumption rather than axiom here.
It looks like 'provably secure' is defined in cryptography to mean breaking the algorithm is equivalent to solving the underlying intractable problem [0]. In my mind, provably secure meant that the problem was actually intractable (which is not the convention).
You're falling victim to the same misconception. It is not a contradiction to say both that a cryptographic scheme is provably secure and that its security relies on a conjecture about the hardness of a computational problem.
Ehhhh.... well, it's complicated. For most cryptosystems, the answer is no, because if you can solve the underlying problem efficiently you can break the security of the scheme as defined. It turns out that this isn't always a 'break' in the sense that most people understand it. For example, a 'break' might just mean the ciphertext is no longer indistinguishable from random noise, but it might be possible to prove meaningful security in a weakened model that doesn't require ciphertexts to look like random noise but, for example, requires that no bits of the plaintext are leaked with high probability. Cryptographers build schemes with very strong, conservative security guarantees for this exact reason.
Given the breadth of ways to leak information about the private keys--side-channel attacks, physical attacks, userspace issues (allocator, random number generator)--this would be extremely difficult (impossible?) to prove.
Using a PRNG to generate an OTP is called a stream cipher, and then it isn't an OTP. :)
When using an OTP, you have to use non-pseudorandom values to avoid just being a stream cipher. If you're doing that, you can skip sharing the pad and just share the initial state of the PRNG.
If you go to the trouble of sharing the pad, go to the trouble of using random data within it. :)
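The point above can be shown with a toy sketch in Python (the hash-chain keystream here is purely illustrative, not a vetted cipher): once the 'pad' is generated deterministically from a seed, both parties can regenerate it, so you only need to share the seed, and what you have is a stream cipher rather than an OTP.

```python
import hashlib

def keystream(seed: bytes, n: int) -> bytes:
    # Toy keystream: hash-chain a shared seed. Both parties can
    # regenerate the whole "pad" from the seed alone.
    out = b""
    block = seed
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def stream_xor(data: bytes, seed: bytes) -> bytes:
    # XOR against the pseudorandom keystream -- a stream cipher, not an OTP.
    return bytes(d ^ k for d, k in zip(data, keystream(seed, len(data))))

ciphertext = stream_xor(b"attack at dawn", b"shared seed")
assert stream_xor(ciphertext, b"shared seed") == b"attack at dawn"
```

In other words, sharing a gigabyte of pseudorandom 'pad' buys you nothing over sharing the few bytes of seed it was expanded from.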
Cryptographic algorithms are generally not 'provably' secure, because most are based on an underlying assumption that some problems are hard, and this is not proven, just assumed. Also, there are mathematically verified implementations, and tools for verifying existing implementations that are about as 'provably' secure as the specifications of the algorithms.
There is obviously a place for cryptography in both personal and commercial communications. I'm always curious, when I hear a politician moaning on about the 'dangers' of cryptography, whether they understand how intricately intertwined cryptography is with the modern economy. Something is either cryptographically secure or it isn't; there is no middle ground. If you intentionally break a cryptographic system, you're going to disrupt trillions of dollars of commerce.
But crypto is of course not a magic pill. There are political issues that need to be addressed as well. This was a theme touched on in Bruce Sterling's SXSW keynote this year.
Just the idea that crypto CAN be fought indicates a misunderstanding of the technology. You can't break a technology implemented decently on every consumer computer on the planet and with many open source implementations.
This is why I always eyeroll when people complain about GPG user interface weaknesses.
The only way to really ensure the integrity of encrypted communications is by isolating and keeping the endpoints away from prying eyes. If your personal, business, political or criminal activity is such that you're concerned about third party interference with your clients, you have no business using iMessage -- which is protecting you from snooping network admins and carriers.
The beauty of a complex but powerful tool like GPG is that you can completely isolate your online activity from secure activity. There's nothing preventing you from printing cipher text and using a scanner attached to an air gapped computer without any network connection.
If your health and safety depend on secure communications to avoid extraordinary threats, don't use off the shelf tools that you don't understand. If you don't understand any tools, follow "the Godfather's" advice and avoid telecom-based communication.
Fortunately most browsers prevent you from pasting JavaScript URIs in the URL bar these days.
It's a little surprising Apple overlooked not one but two fairly obvious major holes: allowing JavaScript URIs, and the lack of same-origin policy. I wonder how many other applications are similarly vulnerable.
Well, the lack of SOP is by design: since it's not a browser visiting multiple sites, the idea of an "origin" doesn't always make sense. This is part of a larger body of work we've been researching; we found many more bugs than this one (all known bugs have been patched, which is why we've been waiting to release this). We'll be submitting the full body of work to DEFCON/Blackhat and a few other cons. Hopefully we'll get accepted; be on the lookout if we do!
I despise the "if you have nothing to hide..." argument for the surveillance state. And I argue against it every chance I get.
But, practically speaking, I don't have much to hide. I also realized that one can draw more attention to oneself by taking drastic measures to preserve one's own privacy.
I know, citation needed... I believe FB (or a related party) released some research about detecting "holes in the social network". Browser fingerprinting is another front on which I've probably made myself more unique to trackers.
Don't we all want to hide our payment information when we buy stuff online? Modern commerce is built on identity assertion and securing payments between two parties over the wire.
Yes, surprisingly the OS X Messages app doesn't seem to share a lot of UI code with the iOS version. You can easily tell that it's a simple WebView from the way text selections behave.
Man, that's depressing. It's fairly easy to prevent this particular kind of injection—you just have to add a Content Security Policy to the HTML page. The appropriate value for web pages running from file://, with no expectation of downloading and executing remote JavaScript is: `script-src 'self';`
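For a page with no legitimate need for inline or remote script, that policy could be applied directly in the markup (a sketch; whether the Messages transcript view exposes a <head> to put this in is an assumption on my part):

```html
<!-- Allow scripts only from same-origin files; inline handlers and
     javascript: URIs are blocked because 'unsafe-inline' is absent. -->
<meta http-equiv="Content-Security-Policy" content="script-src 'self';">
```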
Really sad to see that Apple is using embedded web views without this sort of basic protection. I bet worse exploits than this are possible, given that they probably expose parts of the Objective-C layer through the JavaScriptCore bridge.
Implementing CSP and other mitigations for these types of same origin bypass attacks is relatively easy. I'm shocked that Apple didn't check this. I couldn't imagine Google ever making this mistake, their web security teams are solid.
Apple really needs to invest heavily in bug bounties and internal security audits. This is 101 type of stuff when implementing any user-controllable embedded web content.
The bar should never be this low for critical OS apps like iMessage.
> I couldn't imagine Google ever making this mistake, their web security teams are solid.
You haven’t seen their XML bugs in Google Toolbar’s web gallery in 2013, have you? Full access to the whole file system of their servers via XML includes.
A bunch of security researchers managed to dump /etc/passwd as a sample to get the bug bounty.
Google’s security isn’t that much better either...
In the case of Android, all you need is an application that can read notifications (i.e. one that has the notification-access/accessibility permissions). E.g. all WhatsApp messages go through it...
A typical fanboyism argument when one's favorite company screws up. Just mention the other rivals and add zero insight into the original idea being discussed.
> In case of Android what you only need is that your application can read notifications
This "only" is much harder to do than sending a JavaScript URL.
I had a similar thought with WhatsApp's Signal announcement. I believe that on iOS, by default all WhatsApp messages are backed up to iCloud Drive. So that would seem to be an easier attack vector.
Not just the messages - the key too. Just imagine the outcry from someone who breaks his iPhone and then can't restore his messages because of the introduction of end-to-end encryption.
That way, you don't even have to attack WhatsApp itself and they have all the plausible deniability they needed.
The cryptosystem in use by Signal and now WhatsApp does not work this way. It offers forward secrecy, where recovery of the long-term keys will not allow decryption of past intercepted encrypted messages.
The only text/voip app that securely stores its data is Biocoded (https://biocoded.com/home). Even if the local on-device database gets copied elsewhere, it will be undecryptable outside of that device.
It's not that difficult to break. Anything encrypted with a password is not all that secure. Someone can clone your device, or use a security hole in the device to get at that storage blob, and eventually crack it in reasonable time.
...or to paraphrase Jeff Atwood: "I love crypto, it tells me what part of the system not to bother attacking"