Son of Stuxnet (firstlook.org)
197 points by jbegley on Nov 12, 2014 | hide | past | favorite | 20 comments



An interesting aspect of this story that points to the direction the Internet will probably take in reinforcing trust relationships:

The idea that a major government malware contracting effort was required to pop a particular Hungarian CA (presumably for deniability reasons) tells you something. The USG virtually undoubtedly controls several RSA keys that can be used to sign arbitrary SSL/TLS certificates. Why didn't they just use one of those?

I assume it's because they're expensive, and every time you use them, you risk burning the CA they're associated with: the major browser and OS vendors will excise your root keys, or attach constraints to their use.

Kim Zetter, for understandable narrative reasons, uses Gmail as an example of the kind of site that a CA-hijacker could compromise. But Gmail is the dumbest possible site to target with a traceable compromised CA key, because its key identities are pinned in Firefox and Chrome; if the key indicated over the wire disagrees with the browser binary, the browser flips out.

This is why HPKP, TACK, and similar pinning/continuity/attestation frameworks are such a good idea. Over the medium term, they allow the users of the Internet to surveil SSL/TLS keys and detect compromised CAs.
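For concreteness, HPKP (RFC 7469) works by having a site send an HTTP response header listing base64-encoded SHA-256 hashes of acceptable SubjectPublicKeyInfo structures; a forged certificate chaining through a different CA key won't match any pin, and the browser hard-fails the connection. A sketch of what such a header looks like (the pin values and report URL below are placeholders, not real pins):

```http
Public-Key-Pins: pin-sha256="PLACEHOLDER-primary-spki-hash=";
                 pin-sha256="PLACEHOLDER-backup-spki-hash=";
                 max-age=5184000; includeSubDomains;
                 report-uri="https://example.com/hpkp-report"
```

The spec requires at least one backup pin for a key not currently in the served chain, precisely so a site can recover from key loss without bricking itself for returning visitors.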


tptacek, maybe the USG as a whole controls some RSA keys, but the more interesting parts, such as NSA, wouldn't be able to get access to them without setting off some red flags. Not to mention, the USG isn't a monolith; NIST recently rejected Dual_EC_DRBG. Employees at NIST publicly criticized the NSA's (alleged, but almost certain) decision to backdoor Dual_EC_DRBG.

Like you said, for attribution purposes the NSA had to get its keys elsewhere. I'm asserting that it's not just attribution that's on their mind.


Attribution back to NSA isn't the big problem. Operations is the problem.

You can't forge certificates without associating them with some specific CA.

If you forge a certificate for a pinned site, you risk detection.

If HPKP is widely deployed, every site could have that risk.

Unless you've popped all the CAs, the browser vendors can respond to detected forged certificates by curtailing the compromised CA. Meaning the NSA has to compromise another CA to continue their activities.

There aren't unlimited CAs to work with.

Stipulate that NSA doesn't care if attacks are attributed to them. Certificate surveillance is still an operational problem for them.


> The USG virtually undoubtedly controls several RSA keys that can be used to sign arbitrary SSL/TLS certificates. Why didn't they just use one of those?

aside from the economic cost of "burning" CAs, attribution itself seems to be a consideration, e.g. in other contexts like TAWDRYYARD or LOUDAUTO where "All components are Commercial Off-the-Shelf (COTS) and so are non-attributable to NSA."


A lot of tech-related journalism gets (fairly) maligned. This is truly amazing writing, and I highly recommend reading the whole book, even if you know most of the story.


I wouldn't call it "amazing" writing, IMO it's silly sensationalism. Here's a particularly annoying sentence:

   There was nothing like staring down the barrel of a suspected
   cyberweapon to clear the fog in your mind.
Yeah, right. Staring down the barrel of a probably non-functional rusty .25 would clear my mind a lot faster than would some random Windows malware.

Which raises the question: after thousands of different Windows viruses have been identified, why is any company anywhere in the world still using that crap on any of their computers? Let alone, why does a CA let Windows computers into their infrastructure?


Really? Even if the malware was nearly identical to previously found malware that was presumably used in a government-on-government attack? But whatever; it seems you are more mad at Windows than at any sensationalism in the article.


Yes, really. I have a very healthy respect for firearms. I've fired plenty of guns and I would never want to be looking at the wrong end of one. It would scare the shit out of me!

There is a very strict rule when dealing with firearms. You NEVER EVER point a gun at anyone else. Ever. No joking. No kidding around. You only point a gun at someone if you are prepared to imminently use it against that person. This means, literally, "life or death".

I've been handling guns for over 45 years and I've never even considered breaking that rule, nor have I seen anyone else break it. BTW one corollary to this is "there is no such thing as an accidental discharge, only a negligent discharge".

Heck, I've even "died" in paintball enough times to know that I don't like the odds.

Software, even malicious software, is OTOH "meh". Even if it's written by government hackers. It's simply not scary at all. IMO the analogy of guns to software was probably written by someone who wasn't familiar with firearms.

And you're right about my abhorrence of Windows. For close to twenty years now, everyone has sat around and said "oh no, woe is me, another virus, it's hopeless, it's terrible, I'm scared, my business is threatened, hackers are stealing my secrets". The details change, the general story is the same.

To which my response is: "the only winning move is not to play". So why do people keep using Windows?


Do you think that if another company's OS became the "new Windows" it wouldn't have loads of viruses? Computer security is hard. Windows gets targeted by virus creators because that way they get the best chance to infect machines, not because Windows is less secure. If Windows and some other OS swapped market share today, there'd be just as many viruses for that new OS in six months' time.


You realize it's a metaphor and not a literal gun, right?

And surely the most popular OS will be the one most targeted by criminals no matter what it is.


I think he's just trying to point out that the metaphor isn't apt, since most malware is as effective as a "non-functional rusty .25", but nowhere near as scary.

My heart starts racing a bit when I suspect my systems might be tampered with, so I think the metaphor does apply. What would have been better is if the parent had provided a less hyperbolic metaphor.


Based on this excerpt, it's clear that the author is out of her element on the technical issues. Several parts of this passage are sensationalized and/or inaccurate. Either that, or it's intentionally written this way to target non-technical readers.


This is arriving from Amazon tomorrow, looking forward to reading it.


>...the victim was NetLock[1], a “certificate authority” in Hungary... The logs showed that the attackers had signed into one of the command servers in Germany in November 2009, two years before Duqu was discovered.

Professionals stealing certificates since at least 2009.

[1] https://www.netlock.hu/USEREN/


How is NetLock still in business/trusted then? Could they somehow prove that their keys weren't compromised?


"The only catch was, he couldn’t tell anyone what he was doing. Bartos’ company depended on the trust of customers, and if word got out that the company had been hacked, they could lose clients."

It's unfortunate that still today the attitude is "cover it up" rather than disclosure. I would hope that any company that I entrust with my data would be forthright about breaches so that I, as a customer, would have the opportunity to take whatever precautions were necessary given the details of the breach.


It's unfortunate, but also expected. Self interest and all that.

CAs' only true marketable asset is trust.

Personally, if the CA system is something we're going to stick with, there should be substantial legal penalties for CAs that fail to disclose breaches in a timely manner. Imagine, something that a multi-national trade agreement could actually do that would be beneficial to everyone!


Hmm... seems like everyone is writing books on cyber war these days. Maybe I missed the boat. In any case, pulling back the cobwebs, I do remember there being post-Stuxnet releases that indicated there was a command-and-control (C&C) virus first. Which is, I guess, what this article describes.

What the heck is this "alternate C" language stuff though? Anyone remember?


Wikipedia:Duqu suggests it may be something called "Object Oriented C (OO C)", though it's hard to tell which one exactly. See https://www.google.com/webhp#q=object+oriented+c


So who owned the command servers that the virus reported to? Were they innocent servers that had been hacked?



