Matt Green, himself no cryptographic slouch, says this is the biggest crypto story of the year: malware that included a new cryptanalytic result in its toolset. Zero-day vulnerabilities are somewhat common in malware; zero-day crypto results in malware are practically unknown.
It's scary to think that one day length-extension attacks, misused cipher modes, and other crypto-related attacks might be as common as XSS and SQL injection.
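A length-extension attack of the kind mentioned above can be sketched with a toy Merkle-Damgård hash (all names and messages here are made up; real MD5/SHA-1 add length padding, which practical attacks account for). The naive MAC construction H(secret || msg) lets anyone who knows the MAC resume hashing and append data without knowing the secret:

```python
import hashlib

BLOCK = 16

def toy_hash(data: bytes, state: bytes = b'\x00' * BLOCK) -> bytes:
    """Toy Merkle-Damgard hash: chain MD5 as a compression function over
    16-byte blocks. (No length padding; data must be a block multiple.)"""
    assert len(data) % BLOCK == 0
    for i in range(0, len(data), BLOCK):
        state = hashlib.md5(state + data[i:i + BLOCK]).digest()
    return state

secret = b'0123456789abcdef'           # server-side key, unknown to attacker
msg    = b'pay=000000000100'           # 16-byte message
mac    = toy_hash(secret + msg)        # naive MAC = H(secret || msg)

# The attacker knows only msg, mac and the total length -- not the secret.
ext        = b'&admin=true;;;;;'       # 16-byte malicious suffix
forged_mac = toy_hash(ext, state=mac)  # resume hashing from the published MAC

# The server, recomputing H(secret || msg || ext), accepts the forgery:
assert forged_mac == toy_hash(secret + msg + ext)
print("forged MAC accepted")
```

This is exactly why HMAC exists: it wraps the key around the hash twice, so the published tag is never a resumable internal state.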
The day when we have a deep enough understanding of computer science for that sort of attack innovation to be the baseline would be a pretty amazing future (and one which I think is pretty unlikely to happen).
This turned into a rant - possibly a rant with no informed opinions. Apologies in advance.
I think we can look at our vulnerabilities at three levels:
- open windows
Backdoors left in as factory defaults, 20-year-old systems connected to the Internet by slapping on a NIC. These are the blue-collar computer systems running trains and factories. Replacing their systems (with OpenBSD, say) must be a major national priority. It will cost a fortune, and the foolish thing is to imagine we should replace like-for-like functionality.
As someone above noted, the MD5 hashing was used to maintain backwards compatibility with a system uneconomic to upgrade. As a society we either make that an unacceptable deal or we turn off the lights now.
This will disrupt a lot of industries. In both senses of the word.
- best practises - The cloud and competitive contracting will take care of things like not upgrading your OS for ten years. That's doing what you already do, but better. The killer practises will be the ones we don't do at all. Almost no-one sends me PGP-signed or encrypted email. And even if they did, I have no web of trust that links them back to me. Let's see Facebook's or LinkedIn's social graph become a layered web of trust. Then Tripwire can tell me that the server I am giving my credit card to is running binaries that were signed by a sysadmin who is linked to three guys I know, and that the binary hashes match those signed by Shuttleworth. In real time. I am just making that up, but it is feasible.
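The idea above - trust a binary only if its hash is vouched for by a signer close to you in a social graph - can be sketched in a few lines. Everything here (names, graph, hashes) is hypothetical, and a real system would use actual signatures rather than a bare manifest:

```python
import hashlib
from collections import deque

# Hypothetical endorsement graph: who vouches for whom.
graph = {
    "me":         {"alice", "bob"},
    "alice":      {"sysadmin42"},
    "bob":        {"carol"},
    "carol":      set(),
    "sysadmin42": set(),
}

# Hypothetical manifest: binary hash -> the signer who vouched for it.
binary = b"\x7fELF...fake server binary..."
manifest = {hashlib.sha256(binary).hexdigest(): "sysadmin42"}

def hops(src: str, dst: str):
    """BFS distance in the endorsement graph, or None if unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def trusted(blob: bytes, max_hops: int = 3) -> bool:
    """Trust the blob iff its hash is in the manifest and the signer
    is within max_hops of 'me' in the web of trust."""
    signer = manifest.get(hashlib.sha256(blob).hexdigest())
    if signer is None:
        return False
    d = hops("me", signer)
    return d is not None and d <= max_hops

print(trusted(binary))            # True: hash matches, signer is 2 hops away
print(trusted(b"tampered blob"))  # False: hash not in the manifest
```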
- defense in transparency - Why can't I know what binaries a server is running, who put them there and when? I can look at the locks on doors, see the quality of the mortar in the brickwork. Why can't I ask a system to tell me how secure it is and expect to get an auditable answer?
I think I am ranting. But I know we cannot trust industry certification or peer pressure. We need to get to a point where when there is no lock on a door it is clear for everyone to see, and everyone expects to see locked doors.
Securing industrial systems is a major problem. Security, when not done perfectly, is a disruption to efficiency, usability, and functionality.
If you compare the expected cost of a hack with the cost of rebuilding the electronic infrastructure, in some situations being hacked is cheaper.
Further, being hacked is rolling the dice: there is a probability it won't happen. Whereas an electronic infrastructure rebuild is a guaranteed expense... and an expensive one.
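The dice-rolling argument is just an expected-value comparison. With purely illustrative numbers (nothing here comes from any real study):

```python
# Hypothetical figures, for illustration only.
p_breach       = 0.05         # estimated yearly probability of a damaging hack
loss_if_hacked = 40_000_000   # cleanup + downtime if the hack happens
rebuild_cost   = 10_000_000   # guaranteed, up-front modernization expense

expected_loss = p_breach * loss_if_hacked  # 0.05 * 40M = 2M

# On these numbers, accepting the risk is "cheaper" in expectation --
# which is exactly the reasoning that keeps old systems online.
print(expected_loss < rebuild_cost)  # True
```

Of course the comparison is only as good as the probability estimate, which is the point the cost-benefit critique below makes.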
Then there's the Columbine mentality, "It could never happen here". That's pretty pervasive.
These days being hacked is just a matter of time for, say, a WinXP SP1/SP2 box. The environment is becoming flooded with oxygen and we need to evolve cellular walls.
The problem with cost-benefit analysis is that it makes one core assumption: that tomorrow will be substantively like today. I don't believe in black swans, but I do believe in climate change.
This increases the likelihood that Flame is state-sponsored. It still seems surprising that in-the-wild malware would use a novel crypto result. The risk of publicly revealing the technique is fairly high. Perhaps with MD5 being phased out, the technique was losing value quickly regardless. If info warfare becomes more widespread, I wonder if mining malware for cryptanalytic results could become a valuable research technique.
I get the feeling we can judge the amount and timeline of investment by Western governments from these releases.
Like any other new capability/weapon, these things will be used when they are ready. (No president is likely to wait and see if bombing Iran works first.)
So in 2006 Bush gives the nod to ramp up the program, and by 2010 Stuxnet gets into the wild. Four years of development by serious people.
The MD5 attack was made known in 2009, and now three years later a similar attack. We have reports of viable hardware attacks and fab-based alterations. So 2015 - the year a secure server gives up its secrets via hardware?
With the DoD-funded tactical cryptology research centers being opened at Camp Williams, UT and outside Augusta, GA, nothing is safe anymore.
This should be considered one of those watershed moments of humanity. Once again, some madmen have far more technology and power than intellect. The last time this happened, industrialized warfare led to the two world wars and a cold war that threatened us with absolute and complete destruction.
I have a feeling that this event will strike alarm bells around the world as nations race to arm themselves with the weapons of electronic warfare and destruction.
Talk about terrorism: we really need to be absolutely certain that when we go around brandishing our threats around the world from now on, we are not threatening someone who can shut down our infrastructure, or even turn our systems against us.
Just because it hadn't been knowingly done before now, doesn't mean it wasn't already a threat from state sponsors (or others). This was just a step towards an end that we all could foresee, be it from China, Russia, the U.S., or others. If the U.S. (assumption) hadn't done this, I can't imagine this sort of weapon hadn't already been envisioned by other states. It was only a matter of time.
Now that it's out in the open, people (including governments) can understand it's real, and have no excuse not to give it the attention it deserves by instituting procedures and designs to properly protect critical systems.
I disagree. Experts have understood the true risk of a cyber war for over a decade, at least. The infrastructure of the United States, both public and private, is woefully unprepared for such attacks to be brought against it.
I'm reminded of a recent quote describing how getting stuxnet into Iran may have been difficult, but getting something into a US power plant would be trivially simple by comparison.
Not only is the US infrastructure not ready, but it looks like no one is actively trying to change that.
I imagine that businesses won't see the need to waste all the money until something big happens and they're forced to.
Yeah, but come on, everyone knows how counterproductive this is.
Flame and Stuxnet have been rumored to have been US/Israeli sponsored for a while; the confirmation is no real surprise.
What surprises me is how stupid this shit is. I mean, seriously? If I knew the US was using crypto warfare leveraging the spreading properties of a virus, I would hope that it would be nearly impossible to tie back to us. That's just fucking common sense.
I mean really, at the end of the day, what did this achieve? I'd say it probably just gave every crypto nerd in China, Japan, India and Russia an infinite budget.
" … as nations race to arm themselves with the weapons of electronic warfare and destruction."
I, for one, welcome our new Military-developer Complex Overlords.
Slightly more seriously - I'd see a race for more and more complex military/attack software as an improvement over the _previous_ "arms race". Shutting down or controlling or destroying SCADA systems would be _bad_ and could very credibly lead to people dying - but several orders of magnitude less than the old city-sized smoking glass craters option.
(And from a personal mercenary perspective, even though I would choose to not work as part of a Military-developer Complex, having that as a source of practically-infinitely-funded competition for developer talent in the market would still put upward pressure on pricing for even "conscientious objector" developers)
As with GCHQ's discovery of the concept of asymmetric encryption years before it was rediscovered[1], it's always good to bear in mind that there's likely similar things being devised currently. When weakening or breaking encryption & hashing are such golden keys for certain organisations, they'd be fools to not put a lot of money into researching it.
As I've remarked a few times before, anyone devising a feasible break of RSA or a practical quantum computer would probably keep it quiet and only act on it in absolute need or deniably (like Bletchley Park in WW2). As Bletchley shows, even in major operations with tens of thousands of people involved, things can be kept very good secrets for decades.
The main defense against AES or similar being secretly broken is that people (nominally) on the same side also use it.
If the NSA had discovered a flaw in AES and not announced it, then when this was inevitably discovered, there may have been a few conversations with the CIA/Secret Service/the White House/SAC etc. along the lines of:
"so there is a flaw in this algorithm and you have been letting us use it all this time for our secrets?"
>"Yes but it's OK only we know about it - nobody else could have discovered it independently"
"Really? Prove it!"
This did actually happen with Bletchley Park. After the war, various UK/US allies were given copies of the Enigma machine and told that it was unbreakable.
It doesn't matter what Microsoft uses to sign their binaries, they're not the ones the signature scheme is defending against.
What matters is whether or not the attacker can find an application that accepts an MD5 signature and target it.
This is not a problem that can be solved by the CAs, though they can certainly stand in the way of the solution (by continuing to issue MD5 certs). The real problem is that a signature by definition is expected to be valid more or less forever. Even if the cert used to sign it expires, the signature is generally still valid.
It's a collision attack. Therefore, there must be a still-valid cert that was used to sign something (presumably) legitimate with a signature based on an MD5 hash.
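A toy sketch of why a collision is enough: the signature covers only the digest, so any second message with the same digest inherits a still-valid signature. Here a deliberately weak 8-bit digest stands in for a broken hash, and all names/keys are made up (a real CA uses RSA/ECDSA, not an HMAC):

```python
import hashlib
import hmac

def weak_digest(data: bytes) -> bytes:
    """Deliberately weak 8-bit 'hash', standing in for a broken MD5."""
    return hashlib.md5(data).digest()[:1]

# The 'CA' signs only the digest of the data, as real schemes do.
signing_key = b"hypothetical-CA-private-key"

def sign(data: bytes) -> bytes:
    return hmac.new(signing_key, weak_digest(data), "sha256").digest()

def verify(data: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(data), sig)

legit = b"CN=Innocent Corp"
sig = sign(legit)                 # signature issued for the legitimate cert

# The attacker brute-forces a different message with the same weak digest
# (trivial in an 8-bit space; for MD5 this is where the cryptanalysis goes).
forged = None
for i in range(1_000_000):
    candidate = b"CN=Evil Intermediate %d" % i
    if weak_digest(candidate) == weak_digest(legit):
        forged = candidate
        break

assert forged is not None
assert verify(forged, sig)        # old, still-valid signature covers the forgery
print("forged subject accepted:", forged)
```

Chosen-prefix collisions (Stevens et al.) do the same thing against real MD5: both certs are crafted in advance so the CA's genuine signature on one validates the other.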
I think what you said is correct... until an attacker figures out a way to violate it by changing some subtle assumption.
For example, before Dec 2008 it was generally believed that a cert signature forgery would require a second preimage attack. Then Stevens et al. proved that under the right circumstances it could be done using collisions only.
Say, did you know some X.509 PKI entities keep the same keypair going indefinitely by reusing it every time they renew?
Wasn't one of the certificates in question using SHA-1 (the other two were MD5)? Could it be that this new cryptographic attack works not only against MD5 but also against SHA-1?
"We have developed a forensic tool for collision attacks [7] that can efficiently detect a wide range of known and unknown collision attacks against MD5 as well as MD5's successor SHA-1."