Hacker News
Unpatchable Malware That Infects USBs (wired.com)
198 points by joosters on Oct 2, 2014 | 135 comments



My dad collected some USB memory sticks from a trade show and tried to empty them for his own use. One of the sticks he couldn't use: when plugged in, it sent the keystrokes <windows key> + r + http://whatevermarketingsite.com + enter.

So it was just a dummy USB HID device that emulates a keyboard. No exploit or anything, just a simple marketing gimmick that people fall for because 'hey, a free USB stick!'.

Here's some more info: http://blog.opensecurityresearch.com/2012/10/hacking-usb-web...
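For anyone wondering how little firmware this trick takes: the whole payload is just a canned sequence of HID keyboard reports. Here's a rough sketch in Python of how such a sequence might be built (the usage IDs come from the standard USB HID keyboard usage table; `marketing_payload` and the letters-only simplification are invented for illustration):

```python
# Sketch of how a fake-keyboard "USB stick" might build its keystroke
# sequence. The usage IDs below are from the standard USB HID keyboard
# usage table; a real device would pack these into 8-byte input reports.

MOD_GUI = 0x08          # left Windows/GUI key modifier bit
KEY_R = 0x15            # usage ID for 'r'
KEY_ENTER = 0x28        # usage ID for Enter

LETTER_BASE = 0x04      # usage IDs for a-z run from 0x04 to 0x1D

def key_for(ch):
    """Map a lowercase letter to its HID usage ID (letters only, for brevity)."""
    return LETTER_BASE + (ord(ch) - ord('a'))

def marketing_payload(url):
    """Return (modifier, usage_id) pairs: Win+R, then the URL, then Enter."""
    reports = [(MOD_GUI, KEY_R)]                    # open the Run dialog
    reports += [(0, key_for(c)) for c in url if c.isalpha()]
    reports.append((0, KEY_ENTER))                  # launch it
    return reports

seq = marketing_payload("example")
```

Non-letter characters (dots, slashes) need shifted and punctuation usage IDs, omitted here to keep the sketch short.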


A common social engineering trick is to drop some USB sticks that have keyloggers, etc. loaded on them in the employee parking area of the target firm. Sooner rather than later, someone will plug one into a corporate machine...


Sure, but my recollection is that depended on either Windows Autorun (which hasn't been the default for quite a while) or people being curious/foolish enough to execute a file they find on the drive. This is a lot more insidious.


IIRC a talk by Travis Goodspeed mentioned that it was only internally disabled for USB and writeable media. If a USB device presents itself as a CDROM, it can still autorun things unless globally disabled by the user.

Or it could emulate a disk + HID keyboard, and enter the win+r filename.exe <RET> keystrokes necessary to launch it, or one of several other tricks.

It could also intercept mass storage requests, wait until you're copying an exe onto it (even if you've reformatted the drive since), and inject a payload into it.


This is how U3 drives, when they existed, were able to work. They emulated a tiny CD drive with an autorun bootstrap.


I've got one like that too. Interestingly, it's not a USB memory stick but more of an SD adapter with an actual SD card, and the code isn't on the card.

This one does actually work as a legit USB stick too, once you get past the annoying link part.


Companies that do this kind of stuff deserve to be publicly shamed and exposed as assholes.


Public shaming? Are you joking? The manufacturers advertise these capabilities openly! There are hundreds of them, maybe thousands, and they are in intense competition for your business. Contact a few USB stick vendors on Alibaba and see for yourself.


Just because you can, doesn't mean you should. Hijacking the keyboard with something that pretends to be a pendrive is distasteful in my book.


I personally agree that it is distasteful. I didn't say it was tasteful. I was pointing out that your 'public shaming' suggestion would be totally ineffective as a means of addressing the problem, since the people who do these things obviously don't conform themselves to your moral schema in which suffering the reprobation of society is an undesirable outcome.

They are not doing this secretly, consumed by fear and guilt, manufacturing these devices in the backroom of some disreputable bar in the nasty part of town and then furtively selling them from a van in a back alley, living in constant dread of exposure. They do this quite openly. They manufacture these things in many of the same boring factories that make the reputable brands. If you ask them, they'll cheerfully mail you some free glossy promotional material detailing fake USB drives and many other nefarious electronic things that they'll sell to anyone with a credit card, and your order will be delivered to your doorstep quite openly via FedEx or DHL.

They don't care (at all, on any level) what we or anyone else think about the moral propriety of their operation, surprising though that fact may be. Indeed, a public shaming campaign would benefit them greatly as free advertising, by bringing their very existence to the attention of many new customers.


You're right, and I agree with what you wrote. The apparent disagreement might come from me not expressing myself clearly enough.

I didn't mean to shame manufacturers. They have plausible deniability anyway - "it's a keyboard emulator that looks like a pendrive!". I think that companies that buy these devices and use them for marketing at trade shows should be shamed and blamed.


I believe the comment was explaining why public shaming would be a moot effort, not that the practice was justifiable.


I guess. In this particular case it was intended as a genuine gift by an exec in a (non-tech) company, so they just didn't understand. Plus, since it's an SD adapter + SD card, I still got at least an SD card out of it.


I see. I expressed myself in a very absolute way, which I realize might not be the best way of communicating.

Of course there should be room for caution and thought. Maybe this exec and his company really weren't aware of the implications of what they were doing. Nevertheless, there needs to be back pressure. In this case giving feedback may be enough - I'm in favour of being nice to people by default - but in other cases such methods should be denounced.


>I expressed myself in a very absolute way

No worries - it didn't come across wrong. Personally I was a little miffed that the snazzy SD USB adapter I scored turned out to be "crippled"...but hey...gift.

>Nevertheless, there needs to be back pressure.

That's tricky - firstly, I can't jeopardize a big contract by complaining about a trivial gift (!). Secondly, the company in question is "truly" innocent/oblivious here. They get a marketing catalog and pick 100 of item X. Not sure how one would reach such a marketing company - but they are the culprits here.


> That's tricky - firstly, I can't jeopardize a big contract by complaining about a trivial gift (!). Secondly, the company in question is "truly" innocent/oblivious here. They get a marketing catalog and pick 100 of item X. Not sure how one would reach such a marketing company - but they are the culprits here.

I understand. It would be awesome if word reached that company that the item they picked from the catalog is both evil and moderately annoying to users. They actually might take it into account (people often do care if they know; I've personally made an SEO spammer lose business like that). But it's completely understandable that you might not be in a position to deliver such a message. On the other hand, yes, that marketing company should be the one to get the blame.


I believe that some security tokens (Yubikeys) work by emulating a USB keyboard as well.

http://www.yubico.com/about/intro/yubikey/


I have been thinking about creating an active "USB Condom" for a while, and this might be the right time to work on it. It would be pretty straightforward to make a device that sits between a PC and a USB device and monitors the traffic. It could allow only certain device classes, or it could ask you to press a button for each file moved. My original thought was to stop the spread of viruses that are designed to move between computers over USB drives (my last job had a lot of trouble with these), but preventing a device from showing up as a HID would also be useful in this case.

One option would be to build this into the chassis of a PC so that you know anything plugged into the green USB port will only work as a drive.


You aren't the first with that idea, though hopefully you have enough knowledge to pull it off.

It'd probably be simpler to just make one type for each class of device: http://en.wikipedia.org/wiki/USB#Device_classes

It should be easy enough to train my users to only put their thumb drives into the one specific port. The rest are gonna get sealed off with hot glue at this rate...


You're talking about essentially a hardware firewall for USB connections.

We can discount VID/PID based black/whitelisting since they're trivial to spoof, but it might be possible to use them to identify (from an external database) what features that V/P device is supposed to be capable of, and block/alert if it tries to do things it shouldn't.

So if you initially enumerate as 0x0403/0x6001 (FTDI usb-serial), then if you try to start serving up descriptors about how you totally do mass-storage or something, that's probably kinda naughty. You'd potentially need quite a bit of code to parse and decode the various protocols it might be observing, giving a really big attack surface through which an attacker could identify or compromise your 'condom' itself.

So, with a lot of work, and a really big and annoying to update database, it might be possible to protect against imposter/multi-personality devices.
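The database-lookup idea could be sketched roughly like this, assuming the device in the middle can see which interface classes a gadget enumerates (the contents of the `EXPECTED` table below are made up for illustration, not real product data):

```python
# Sketch of the "feature database" idea: look up what interface classes a
# given VID/PID pair is *supposed* to expose, and flag anything extra.
# Table contents are illustrative only.

EXPECTED = {
    (0x0403, 0x6001): {0x0A},        # hypothetical entry: usb-serial, data class only
    (0x1234, 0x5678): {0x08},        # hypothetical entry: mass storage only
}

HID_CLASS = 0x03

def audit(vid, pid, observed_classes):
    """Return the set of interface classes this device exposes but shouldn't."""
    expected = EXPECTED.get((vid, pid))
    if expected is None:
        # Unknown device: treat every class it offers as suspect.
        return set(observed_classes)
    return set(observed_classes) - expected

# A "serial adapter" that also enumerates a keyboard interface gets flagged:
suspicious = audit(0x0403, 0x6001, {0x0A, HID_CLASS})
```

The hard part, as the parent notes, isn't this lookup but keeping the database current and parsing the descriptors safely in the first place.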

The bigger overall threat though is probably masquerading as valid mass-storage, but then doing various things to the data you're asked to store/fetch. (See the Active anti-forensics iPod and similar)

As I see it, there's basically nothing you can do here against an actively hostile device that isn't a cat & mouse game of exploit->patch->repeat. Conceivably the OS could sign every file going in, and maintain a local cache of signatures to check when reading, but that falls over in the hugely predominant use-case of using a USB drive to transfer files between things; you have no easy side-channel to also transfer sigs. So then the machines need to trust each other (without the malicious drive compromising one and being able to fake sigs)


Perhaps it's time for me to create a KickStarter account.


Bonus points for a "keyboard macro" mode.


This'd give you a false sense of security though. There's nothing stopping the device's firmware from injecting exploits into files that you put onto the device.

So you tell the device to store workReport.pdf, and it stores workReport.pdf, but also injects a malicious payload into the PDF so that when you open it again it infects you.


There is also the potential of the USB condom itself getting infected with the malware.


You mean like this?

http://syncstop.com/


I knew someone would mention that, and I almost put a note about it in my post. I think all that does is disconnect the data pins, so it wouldn't be too useful when you are attaching a flash drive to your computer.


Good point.

However, if the exploit is in the firmware of the USB device, you can't trust anything coming off of it. All you've got left is recharging it.


Interesting but in a bit of irony I don't need that at all. I use a Windows Phone 8 device. Targeting it for malware is a waste of time.


Might as well merge it into 2 functions and just carry a usb battery



I've seen that before, but I still have no clue what this is supposed to do, and the website doesn't really explain anything. Could you please elaborate on what this device is actually for?


It's a device, mostly intended for research purposes, that proxies the USB protocol back up to a Python program/library.

Basically you plug in USB at one end, at the other end you can script the USB protocol with python, emulating any USB device you want to (or that you can write the code for).


This might protect against malware infecting your device unexpectedly, but I think there's a need for OS manufacturers to consider this as part of the holistic security. Why not pop up a dialog when a new USB device with HID profiles shows up that the OS hasn't seen?


I hate to point out the blindingly obvious, but what if you need the HID device to click on the box? So now you are stuck in a chicken-and-egg conundrum: you need the HID to click the box, but the HID isn't installed until you click the box. Great!

Unfortunately, without going back to a Windows 95-esque "please restart to install this keyboard" world, I cannot see how you fix this. Even blackholing some input from the HID is at best a temporary solution (they'd just add longer and longer sleeps before the fake HID input was generated).

I guess you could redirect all HID input to a certain context (like a Virtual Desktop e.g. UAC prompt) until the user accepts it. However realistically most users would ignore this warning and just click "install" without reading it or understanding the implications.


I have an idea: Making and breaking the USB connection itself is a form of input!

1. Supposing all the more-convenient ways have broken down (no keyboard, no mouse, etc.)

2. The OS displays a random 15-60 second countdown telling the user when to unplug the device if they trust it

3. The OS displays a second (random) countdown telling the user when to reconnect the device if they trust it.

4. If both steps succeed to a reasonable level of accuracy (some fudge-factor for humans and for slow-powering-up devices) the OS will begin trusting the HID device and installing drivers etc.

This requires no additional hardware except for a monitor, and evil devices cannot reliably brute-force it without taking a lot of time and being very obvious and obnoxious about it.
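The check described in steps 2-4 could be sketched like this (a minimal sketch; the window ranges, tolerance, and function names are invented for illustration):

```python
import random

# Sketch of the disconnect/reconnect challenge: the OS picks two random
# deadlines, and the device is trusted only if the unplug and replug
# events land within a human-friendly tolerance of each.

TOLERANCE = 2.0  # seconds of slack for human reaction + device power-up

def make_challenge(rng=random):
    """Pick random unplug/replug deadlines, measured from when the dialog appears."""
    unplug_at = rng.uniform(15, 60)
    replug_at = unplug_at + rng.uniform(15, 60)
    return unplug_at, replug_at

def passed(challenge, unplug_seen, replug_seen, tol=TOLERANCE):
    """Did both events land close enough to their deadlines?"""
    unplug_at, replug_at = challenge
    return (abs(unplug_seen - unplug_at) <= tol and
            abs(replug_seen - replug_at) <= tol)
```

A malicious device that can fake its own disconnects still has to guess both deadlines, which it can't observe, so a blind attempt fails with high probability.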


So I hand you a malicious USB stick, you plug it in, the computer asks you to unplug it and plug it back in. You do so, because you trust the USB stick (why wouldn't you, free USB stick!).


Maybe it would be better to make you type some characters on the keyboard, with similar tricks for other human interface devices. That way the user physically can't do it with fake USB sticks.

It would require that all these devices have at least some basic functionality with only some standard drivers. I don't know how true that is right now.


What? No, the operating-system is responsible for the dialog, and it would be saying something like:

"The following USB device has requested direct control over your mouse and keyboard inputs. Do you want to grant it access?"

"Note: If you are unable to interact with your computer, please wait X seconds for emergency instructions on how to enable this device."

You can't make anything totally idiot-proof, but a lot of people will be surprised/scared when a very unusual and seldom-seen dialog pops up when they plug in a particular misbehaving memory stick.


The malicious device can just fake breaking the connection and reconnecting. No way for the PC to tell the difference.


I think that's why he mentioned using random amounts of time for disconnect and reconnect. If your malicious device guesses correctly for the simulation of "fake breaking" (let's say it only has 10% chance of doing that) and similarly for reconnecting (another 10% chance), then the malicious device only has a 1% chance of fooling the PC.

Not great, but certainly a lot better than a 100% certainty of fooling the PC.
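The arithmetic above, spelled out (using the comment's illustrative 1-in-10 guesses; the variable names are just for the sketch):

```python
# If a faked "disconnect" has to land in the right window by luck (say
# 1-in-10), and likewise the faked "reconnect", a blind guess fools the
# PC only rarely.

p_disconnect = 0.10                   # chance of guessing the unplug window
p_reconnect = 0.10                    # chance of guessing the replug window
p_fool = p_disconnect * p_reconnect   # ~0.01, i.e. about 1%

# Expected number of (very conspicuous) attempts before one succeeds:
expected_attempts = 1 / p_fool        # ~100
```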


Yep, that's what the randomness is for, and you can also add entropy by randomizing the time before the dialog starts to present the user with the "emergency connect mode" instructions which explain the disconnect/reconnect process.

Plus, 99% of the time the user is not plugging in a HID device, so the "unrecognized input device" dialog can be made ominous enough that users will realize something is very strange about that one USB stick.


Well you could do this on the second keyboard that was plugged in. In normal use my machine has an HID keyboard already attached, so attaching a second one would both be unusual, and the first one would still be around to confirm or deny the input.


Until one day you see a hilarious Reddit post that makes you spit out your coffee and your keyboard stops working. Then you plug in a new one and have no way of making it work.


Unplug the broken one first, so that when you plug the new one in the OS knows that it doesn't make sense to display a prompt because there's no keyboard to confirm it with? You'd probably want to do that anyway if you'd spilt coffee on it.


Small price to pay for usb security. Also, if your only keyboard stops working, wouldn't it accept the next keyboard by default because there isn't one connected?


> I hate to point out the blindly obvious, but what if you need the HID device to click on the box?

You just have a timed dialog with allow/block options that defaults to allow the new HID after the timeout.


What stops evil devices from simply waiting for a period of time and springing the dialog at a later time or when the user is likely AFK? (Can it detect idleness by looking at voltages?)


When a HID device is plugged in, the OS presents the user with a random number and requests that it be typed (if you only have a mouse, an on screen keyboard can be used). The newly plugged in HID device would not be fully active until the OS receives the right number. Until that time only this authorization system would receive its input (so there is no chicken and egg problem).


The OS would allow you to interact only with the confirmation dialog at first and to prevent the USB device emulating a simple click on a confirmation button it should be asking you to type in a displayed random number. So basically a CAPTCHA for USB HID devices.
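The flow of such a "CAPTCHA for USB HID devices" could be sketched like this (all class and function names here are invented for illustration; a real implementation would live inside the OS input stack):

```python
import random
import string

# Sketch: a newly enumerated HID device is quarantined, and its input is
# routed only to this authorization check until the user types the
# displayed random code on the device itself.

def new_challenge(length=6, rng=random):
    """Random digit code the OS would display on screen."""
    return ''.join(rng.choice(string.digits) for _ in range(length))

class QuarantinedKeyboard:
    def __init__(self, challenge):
        self.challenge = challenge
        self.buffer = ''
        self.trusted = False

    def on_keypress(self, ch):
        """Input goes only to the authorization check until it passes."""
        if self.trusted:
            return ch                      # deliver to the OS as normal
        self.buffer = (self.buffer + ch)[-len(self.challenge):]
        if self.buffer == self.challenge:
            self.trusted = True
        return None                        # swallowed while quarantined
```

Since the device can't see the screen, it can't type the code; only a human reading the display can.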


A way to mark certain root ports as trusted could potentially work. This would allow you to have keyboards and mice physically plugged in to a trusted USB port while all other devices are plugged into untrusted ports.


Security researchers have been saying since USB became popular that hot plugging should require confirmation from the user.

The only downside being that many users add a new HID as their only interface to the machine.


Perhaps some kind of time lapse would solve that problem. The USB won't execute anything for x minutes while you inspect what it is really doing (if it's your keyboard or mouse, do nothing). If nothing is done, allow the USB to load its drivers, etc...


Would it be possible to:

1. Connect innocently as a plain storage device.

2. Wait a period of time, or even monitor voltage fluctuations to guess when the user is not at the computer.

3. Disconnect and reconnect (or open a new side-connection?) as a HID device.

Users often won't mentally associate the long-delayed attack with the USB stick, and if it attacks when they are AFK the timer might hit 0 in total secrecy.


Absolutely true. The time lapse is only to allow the user to inspect the drive before it is allowed to execute arbitrarily. That may not actually be possible, now that I think about it, so the idea is bunk anyway.


I just have to add Sleep(60 minutes) to my fake HID device and it will bypass this. You cannot blackhole HID input for an hour.


I'm not sure what that accomplishes. In order to provide input, the HID device must register itself with the OS, in order for the driver to be set up, etc. Sending HID messages without actually enumerating a HID device isn't going to work. Waiting 60 seconds to provide input doesn't change the fact that it has to tell the OS it can provide input in the first place. Yeah, it could wait 60 seconds to say "hey, a HID device just got plugged in", but that doesn't actually bypass the safety mechanism...


It can always tell the OS that it is a USB hub, so nothing precludes it from being seen as two USB devices (HID and storage)


Why not something like a USB Firewall?

Black/white list vid/pid (which are easily faked, yes). Closes the door a small amount.

But can also blacklist all HID.


I think this should already be possible under linux via udev rules.

It just needs to be wrapped in a convenient tool.

Might be worth it to at least extend the time span that an attacker would need if he had physical access to your laptop/smartphone.
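For completeness, a rough sketch of what such a udev rule might look like. Treat it as illustrative only: it de-authorizes any newly added USB interface whose class is HID (03), which would also block legitimate keyboards and mice, and sysfs path handling can vary between kernel/udev versions:

```
# /etc/udev/rules.d/99-no-usb-hid.rules (illustrative sketch)
# When an interface with the USB HID class (03) appears, write 0 to the
# parent device's "authorized" attribute so the kernel ignores the device.
ACTION=="add", SUBSYSTEM=="usb", ATTR{bInterfaceClass}=="03", \
    RUN+="/bin/sh -c 'echo 0 > /sys%p/../authorized'"
```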


I smell a software product opportunity for OSX / Windows.


In the worst-case scenario, why not use connect/disconnect events themselves as a form of user input?

https://news.ycombinator.com/item?id=8401924


Or if there is already a HID of that type installed and active, throw a huge warning.

"I see you plugged in a keyboard USB device, but you already have a keyboard installed. This may be a malicious attempt to take control of your computer by emulating a keyboard and sending keyboard presses to your computer from a USB device. Do you want to allow this? (recommended action is no)."

No more security headaches and marketers will stop their stupid tricks when they get pissy emails from clients about "sending us viruses."


Input devices are a bit painful on that regard.

The computer can't detect if I have a working and available input device connected - the fact that some devices claim to be connected doesn't imply that, as it may be damaged, a virtual device, or not wirelessly connected but simply listening for a possible connection that may appear at any time.

For example, right now I have a mouse and a keyboard connected, but Windows device manager somehow shows 5 keyboard devices and 4 mouse devices due to various connection drivers listening to devices that might be connected but currently are not. If I came home after a long vacation and found out that the batteries in them are dry, then there would still be multiple devices of the same type "available" when I'd try to connect a wired USB keyboard.

If I want to connect an input device, it is quite possible that the only way that I could allow or disallow anything is through that device itself.


My laptop has a USB keyboard (well, that's how it's presented, wiring is all internal), and I regularly connect an external USB keyboard to it.

You might think you could identify it by manufacturer/model (VID/PID) or even serial number, but all those are easily modifiable by a serious attacker for a given target.

It might make shotgun/blind attacks harder, but ultimately it wouldn't be much more secure than MAC filtering on your network router.

There are also plenty of non-keyboard HID types that could potentially generate unwanted input of this sort, although not quite as easily.


Except that this vulnerability affects all USB devices, including HIDs.

So if you plug a keyboard or mouse into a computer carrying this malware, they could be infected. Then if you plug them into another computer, they could infect that computer.

Or more likely, you could find out your computer is infected, and decide to wipe or replace it. Then you plug the same mouse back in, authorize it as the expected HID...and now your computer is infected again.

Or consider a laptop keyboard that connects over the USB bus...

The only reliable solution to this vulnerability is to protect USB firmware via code signatures. That's going to take a long time.

In the mean time, I'm going to completely avoid USB thumb drives, and stick to Bluetooth HIDs.


That could help against some of the attacks (the ones involving virtual keyboards / NICs), but you still can't do anything against infected USB storage devices. They will appear as normal and can even be virus scanned, but will still manage to send you infected files.


The only way to mitigate these attacks is to treat the USB bus the same way we treat the network:

- Communication with devices should only occur once they have been identified & authenticated.

- Authenticated devices can only send type-specific messages and can't change type without re-authentication.

- Data should be written as encrypted volumes to storage devices. (You should be doing this anyway.)

This is more like how bluetooth works. However that still leaves significant problems for HIDs. The only solution I see is remote attestation and securing the comms channel with encryption.

Even if USB is patched like this it doesn't solve the problem of having un-trusted peripherals or appliances. For example, would you trust that the signed firmware for a USB scanner is actually secure? It is quite easy for a scanner driver to be subject to a buffer overflow caused by data from the scanner. Scanner embedded malware could be triggered by scanning a specific form, or anywhere it sees interesting text, like 'SECRET' headers on a page. Networked scanner/photocopier/printers are even worse because they return PDF documents and are serviced by outsiders, and anything on the network can often send them a PDF to execute. I've managed to crash my printer many times by sending large/complex PDF/postscript/PCL5.


Here's the talk from DerbyCon that the article is talking about: http://www.youtube.com/watch?v=xcsxeJz3blI


It's been nearly 2 decades since USB was invented and I'm still of the opinion that USB keyboards provide no significant benefits over PS/2 in the majority of applications today - a keyboard that is always plugged into the same port. All the extra driver complexity and security holes like this could be avoided if people realised this simple fact.

USB tries to make peripherals "universal" - and I don't think one of the primary input devices of the computer should be mixed in with the others in this way. (Trying to troubleshoot problems with USB controller drivers when the keyboard itself is USB can be... frustrating. The BIOS has its own USB keyboard driver but hands control to the OS after booting, meaning that any flaw in the rather complex USB stack can cause a loss of an important input device. On the other hand, I've never had any problems with PS/2 keyboard or mouse drivers.)


> significant benefits over PS/2

Being able to unplug a keyboard and plug it back in again while the PC/server is on without the PC/server freezing is a pretty significant benefit.

Also... though not a tech limitation... the only access point for a PS/2 plug being at the far back of the PC case can be really annoying (though I wonder what would happen if you had 2 PS/2 ports on a board and you actually plugged in 2 keyboards... probably a small explosion).


> Being able to unplug a keyboard and plug it back in again while the PC/server is on without the PC/server freezing is a pretty significant benefit.

What are you doing where being able to hotplug keyboards frequently is a "pretty significant benefit"? If you are working with multiple servers a KVM switch is a better solution.

(The problem with hotplugging PS/2 was that it originally wasn't designed for that so the interface lacked the necessary protection components, but more recently manufacturers have been realising that it's far cheaper to add the protection components than deal with returns from those who either accidentally or purposefully hotplug PS/2. There's still considerable variance between when manufacturers did this, so it's still officially "not supported" but recent motherboards and keyboards/mouses should be robust enough to handle the occasional hotplug.)

The PC spec also only allows for 2 PS/2 ports and no more: one for a keyboard and one for a mouse. They use the same physical protocol, so there are no catastrophic effects from plugging in two keyboards/mice or switching them around, but they just won't work, since the commands are slightly different; some newer motherboards with a single PS/2 port can detect whether a mouse or keyboard has been plugged into it and configure accordingly.


I had 5 or 6 1u servers in a rack. I would roll up a monitor and keyboard (provided by the data center folks). Twice just plugging in a ps2 keyboard froze the server. The fix was a usb keyboard (kept in the cabinet).

A kvm would be a solution... but for the rare visit to the colo it just wasn't worth it. One of those 1u rack mount with integrated monitor would have been nice (though the 1/4 cabinet was the top one so I don't think it would have worked very well).

Also with KVMs... wires and wires and wires. Blah!


> What are you doing where being able to hotplug keyboards frequently is a "pretty significant benefit"? If you are working with multiple servers a KVM switch is a better solution.

People trip over cables, and PS/2 plugs liked to disconnect much more easily than USB; you could unplug your keyboard by accident while doing something that moves cables (e.g. using a mouse).


Hot swapping PS/2 devices has worked on most motherboards for a long time.


Not to understate the severity of this issue, but how is this any different than if I built a custom device that contained a USB hub + emulated HID keyboard to fire off a malicious macro + flash storage housed inside a plastic shell that looked like a normal flash drive? I guess that would require developing and manufacturing custom hardware.

So maybe it's not different. But this significantly lowers the barrier to entry: it becomes an extremely easy process using only off-the-shelf hardware. And now the malicious device can be anything from flash drives to keyboards to USB missile launchers.

That seems like a problem the manufacturers need to resolve, however inconvenient it is to have read-only firmware.


It's very different because your device is just one device. "BadUSB" type malware can spread like a virus/worm by infecting other devices. In your scenario the specially crafted device is the only infection point. With BadUSB an infected device is just patient zero. It infects computers, which in turn infect other devices plugged into it, which infect other computers, and so on and so forth.


Ah! Got it now. Thanks.

Ok. Now I'm concerned.


> To prevent USB devices’ firmware from being rewritten, their security architecture would need to be fundamentally redesigned, he argued, so that no code could be changed on the device without the unforgeable signature of the manufacturer.

Complete rubbish. No trusted computing is required to protect a device from surreptitious programming by malware.

All you need is a write protect switch ("fuse") that is burned when the device is programmed for the first time, so that for subsequent in-field updates, it requires a physical override to enter into a programming state (the user has to flip some switch, or hold some pin-operated button or whatever).
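A toy model of that fuse-plus-override scheme (names invented for the sketch; a real design would use an OTP fuse bit and a physical pin, not software state):

```python
# Sketch of the "write-protect fuse" idea: after the first (factory)
# programming burns the fuse, any further firmware write requires a
# physical override in addition to the software request.

class FirmwareStore:
    def __init__(self):
        self.fuse_blown = False      # one-time fuse, set at first programming
        self.image = None

    def program(self, image, override_pressed=False):
        """Allow writes only before the fuse is blown, or with the button held."""
        if self.fuse_blown and not override_pressed:
            raise PermissionError("write-protected: hold the override button")
        self.image = image
        self.fuse_blown = True       # first successful write burns the fuse
        return True
```

The point being: malware running on the host can call `program()` all it likes, but it can't press the button.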


But how do you know the write protect switch actually works? Maybe the manufacturer implemented it as a flag to the microcontroller. Maybe your 'trusted' USB stick was replaced by one that looks exactly the same but that has a different chip inside that mimics the vendor/device ID.

So you are back to requiring trusted hardware and trusted handling, which is impractical except for strictly controlled systems.


The trusted USB stick being replaced by a lookalike is a breach of physical security somewhere in the delivery chain between manufacturer and you. In that case, the programmability issue is moot; neither the original USB stick nor the rogue one have to have field-programmable firmware. The rogue one is hard-coded at manufacture time to do harm.

This is a different issue from white hat USB devices that turn black due to a remote exploit, and it is addressable by some form of trusted computing, like authenticating that USB devices are genuine. With this authentication in place, it's possible that there could be remote exploits that somehow retain the device's ability to authenticate while changing its behavior.

As far as trusting the manufacturer to implement the switch: it is hard to get away from trusting the manufacturer. You're plugging in their hardware into your device. Under the assumption that the manufacturer is working in good faith to prevent rogue reprogramming, it is in their interest to implement the switch properly, so it cannot be overridden in software.

Basically, the situation is that if someone wants to sell you, say, a rogue keyboard that steals your keystrokes, they can do that without even involving the USB protocol. A keyboard can have its own storage for keystrokes and its own channel for transmitting the information, not involving the USB interface. For instance, it could have on-board Wi-Fi, and a rogue firmware that finds a free hot-spot, makes a connection and sends its logged data to its "mother ship".


So, from my possibly naive understanding of this article, the researchers want to publish the exploit on GitHub to ensure it gets fixed, but even they are holding off on some of the worst exploit vectors because of an ethical dilemma. It would seem to me that publishing any of this is probably as good as publishing all of it: bad actors can reverse engineer the exploit, study the USB spec, and create new attacks. Just the mention of this sort of exploit is probably enough to make someone try to find it. So they seem to be contradicting themselves here. If it needs to be fixed and releasing it will spur the industry into fixing it, it should be published.

Is there something I am missing here? Serious question.


There needs to be a built-in pairing button on the computer that you have to press to allow a new device to be activated. Sadly, it probably won't stop this attack because when they plug it in they will just press the button.


Do we really need to repost this every 2 months? The first proof of concept for this kind of attack was demonstrated 7+ years ago at the CCC. It pops up every year, and every time people act like this is news.


Source?


The rule in the '80s and early '90s was never ever exchange floppy disks with anybody and never insert other people's floppy disks in your PC. Enter the LAN and the USB sticks, problem solved. Not.

I'm afraid we have to go back to exchanging files over the LAN. Or turn on our WiFi cards in hotspot mode and create a LAN ourselves. There are going to be good opportunities for desktop and mobile apps. Obviously Evernote and Dropbox are already here but they are not exactly the same thing.


> I'm afraid we have to go back to exchanging files over the LAN.

1 horribly infected computer on a LAN can be a nightmare for everyone else connected.


That's the definition of a worm.


Is there something in the USB spec that requires the internal processor to be updatable? Couldn't they just lock the firmware down at the factory?


There's nothing in the USB spec that requires an internal processor at all. USB is a wire protocol only. Obviously it happens that complicated wire protocols are most sanely implemented in software, so most devices have CPUs and firmware. But that's a fact of engineering expedience and not spec compliance.


I guess it depends on your definition of processor, but a finite state machine is certainly necessary to speak USB.
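To make the point concrete, here's a minimal sketch (not from the thread) of the device-side states defined in Chapter 9 of the USB 2.0 spec. The state names come from the spec; the event names are simplified for illustration. Even a "dumb" device must track at least this much to survive enumeration, whether the FSM lives in silicon or firmware:

```python
# Transition table for the USB 2.0 Chapter 9 device states (simplified;
# the Suspended state and error paths are omitted for brevity).
TRANSITIONS = {
    ("attached", "power_on"): "powered",
    ("powered", "bus_reset"): "default",
    ("default", "set_address"): "addressed",
    ("addressed", "set_configuration"): "configured",
    ("configured", "bus_reset"): "default",
}

def step(state, event):
    """Advance the device state; unrecognized events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

# A normal enumeration sequence walks the device to the configured state.
state = "attached"
for event in ["power_on", "bus_reset", "set_address", "set_configuration"]:
    state = step(state, event)
print(state)  # configured
```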


The firmware could be locked down, but doing so is inconvenient for the manufacturers.

For example, I believe that the firmware flashing allows for easy configuration of the USB storage size, so there's no need to hard-code values in the hardware. As a result, you can easily re-use components and repurpose them as needed.

It might also be used for mapping out bad sectors during device testing? That way you can stuff low quality flash storage into a USB stick and then blank out the bad parts during testing. This could be more difficult to do if the firmware isn't easily updateable.


It could still be locked out before being sent to market as a final step, something like an internal fuse in an EPROM being blown in such a way that the only way to re-access the "manufacturer" bits would be to get a new chip.


The tricky part is finding a bug in the firmware after release. If it's locked down, you can't update the stock sitting in warehouses; you'd have to scrap all parts in inventory, potentially costing hundreds if not thousands of dollars.


The logical thing to do, then, is to make the firmware inaccessible once it's been used by the final user. Adding a fuse that blows once the 5V from the user's computer is sensed for the first time, for example.


How do you define "user computer"? We run a bazillion tests on our stuff during manufacturing. Stuff is plugged in/out of Windows boxes over and over on the manufacturing line.

It's hard to tell where that "last step" might be. Boxing? Shrink wrapping? Delivery to Amazon's warehouses?

Our ASICs have efuses to disable insecure firmware changes. (Need insecure fw updates during fw development.) During dev we wind up having to scrap parts because we blew the fuse prematurely.

It's a tough logistical problem.


Haven't read the spec, but yes they can.

I know of several common USB devices that actively prevent field upgrades of their firmware for security reasons; one example is the Yubikey (http://www.yubico.com/faq/upgrade-yubikey-firmware/).


No, there's nothing in the USB spec that requires the internal processor to be updatable. The headline is clickbait.

To get this to work, you need to solder wires onto the USB flash drive's PCB, and then program a "burner" firmware image into the microcontroller. Only then can you remove the wires and program its firmware further over USB. The stock firmware for nearly any USB device would never allow itself to be upgraded over USB.

This strategy allows you to give someone a malicious flash drive that enters keyboard commands into the computer (this has been done before), or infects the computer by pretending to be a device whose driver has a security flaw. It would not allow you to infect the other USB devices connected to that computer, because they don't have burner firmware.


I think you're wrong: as I understand it, many of the consumer USB sticks can be flashed over USB. No soldering required.


Yes, you're right - I've done some more research on the GitHub site and figured out how they do it. The burner image can be sent to RAM over USB, causing the microcontroller to boot from it. Then, that burner is used to flash the malicious firmware image.


I can spot Wired articles just by the title now


This attack vector could also be used by USB charging devices. That's been done before using Windows vulnerabilities, but this is more general.


Since this is like the old floppy disk days, where any disk could be infected, would a version of the same solution apply? I'm thinking of the old boot-sector protectors that would announce your disk is clean, thereby occupying the space a virus would otherwise claim. Of course there would have to be something in place to verify this wasn't just malware calling itself a protector.


What this requires is called remote attestation. People have been working on this for smart cards and embedded systems, though PCs and VMs/cloud get more attention. It is very complex though.

http://web.cs.wpi.edu/~guttman/pubs/remote_attest.pdf
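A minimal sketch of the challenge/response idea behind attestation, with all names and values hypothetical. It assumes a per-device secret held in tamper-resistant hardware and known to the verifier, and — crucially — that the firmware measurement is taken by something the firmware itself can't tamper with (e.g. a ROM routine), which is exactly the hard part in practice:

```python
import hashlib
import hmac
import os

# Hypothetical shared secret; in reality this would live in tamper-resistant
# storage on the device, with a copy (or derivation key) on the verifier side.
DEVICE_KEY = b"per-device secret in secure hardware"
GOOD_FIRMWARE = b"\x00" * 1024  # stand-in firmware image

def device_attest(nonce, firmware):
    """Device side: measure the firmware and MAC it together with the
    verifier's fresh nonce, so the response can't be replayed."""
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(DEVICE_KEY, nonce + measurement, hashlib.sha256).digest()

def host_verify(nonce, response, expected_fw_hash):
    """Host side: recompute the expected MAC and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, nonce + expected_fw_hash,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)
response = device_attest(nonce, GOOD_FIRMWARE)
print(host_verify(nonce, response,
                  hashlib.sha256(GOOD_FIRMWARE).digest()))  # True
```

Unlike a plain checksum, a compromised device can't precompute or replay the answer, because the nonce changes every time — but only if the measurement routine itself is outside the attacker's reach.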


When I bought my last car, they sent what appeared to be a USB flash drive. It had some warranty documents on it, and also emulated a keyboard. I think it popped up a browser went to a webpage and prefilled my VIN and some other details so I could register my car. This was several years ago. How is this different?


That was probably done as an application that ran from the USB stick, similar to the way AUTORUN.INF works on Windows (the OS sees it on mount and runs it).

I think the exploit version occurs at a lower level, down in the firmware of the stick itself, where it's far more insidious.


I thought I remembered Windows saying it detected a new keyboard device, or my brain may have invented that part. I'll see if I can't find it lying around and see how it actually worked.


Have the USB device supply some kind of checksum/key of its firmware when connecting to the computer; the driver would then compare that checksum/key against a list of trusted checksums. But of course, hackers could extract the checksum from the driver and hack the firmware to always return it :/
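To make that flaw concrete, here's a hedged sketch (all values hypothetical) of the self-reported-checksum scheme and the replay attack: the host only ever sees whatever bytes the device chooses to send, so the check proves nothing about what's actually in flash.

```python
import hashlib

good_firmware = b"legitimate firmware image"  # hypothetical image
TRUSTED = {hashlib.sha256(good_firmware).hexdigest()}  # host's allow-list

def host_accepts(reported_hash):
    """Host-side check: is the hash the device reported on our list?"""
    return reported_hash in TRUSTED

# Honest device: hashes its own flash contents and reports the result.
print(host_accepts(hashlib.sha256(good_firmware).hexdigest()))  # True

# Malicious firmware: ignores its flash entirely and replays the
# known-good hash it extracted from the driver.
replayed = hashlib.sha256(good_firmware).hexdigest()
print(host_accepts(replayed))  # True -- the check is satisfied anyway
```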


> showing that it’s possible to corrupt any USB device with insidious, undetectable malware

That is surely false. It depends on the type of microcontroller used and its configuration. And indeed the github link in the article is only for certain types of USB flash drives (Phison 2251).


USB is just so vulnerable... Kali Nethunter is now exploiting some of its idiocy.

http://www.kali.org/kali-linux-nethunter/


Is USB vulnerable or are many USB implementations vulnerable?


Can't some of this be patched at the USB driver level in the OS? A storage device shouldn't be acting like a keyboard, etc.


That's not how this works.

The device acts like a USB hub which happens to present both a keyboard and a USB drive.

So unless you ban all USB hubs then this isn't workable.


Banning USB hubs isn't really workable either. Any keyboard which has USB slots would be blocked, for instance.


I don't know if the current drivers would allow that normally. However they would allow it if you created a fake hub to route through. No way to know if there is a real hub or a fake one between the USB drive/keyboard and the computer.
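As a concrete illustration of why a driver-level check is easy to evade, here's a hypothetical policy (not a real driver API) that flags a single device exposing both storage and HID interfaces. As noted above, a malicious stick sidesteps it by presenting as a hub with the keyboard and the drive as separate child devices, each of which looks innocent on its own:

```python
# USB interface class codes from the USB spec's class code list.
MASS_STORAGE = 0x08
HID = 0x03

def suspicious(interface_classes):
    """Flag a composite device that is both a drive and an input device.
    Takes the list of interface class codes one device exposes."""
    classes = set(interface_classes)
    return MASS_STORAGE in classes and HID in classes

print(suspicious([MASS_STORAGE]))       # False: plain flash drive
print(suspicious([MASS_STORAGE, HID]))  # True: storage that also types
# Evasion: a fake hub exposes two separate devices, each checked alone.
print(any(suspicious(dev) for dev in [[MASS_STORAGE], [HID]]))  # False
```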


USB devices need to be regulated to ensure this kind of thing can't happen. Perhaps the FCC's arm reaches far enough that they could tackle this issue, otherwise maybe a separate task force should be created for this.


Am I the only one cringing at the use of the lone term "USB" to describe "USB flash drives" in this article? I would have expected more from Wired.


Because it's not just USB flash drives. Any device, a cell phone, a mouse, a webcam, anything that connects via usb, could essentially connect as a USB HID (human input device) and start issuing commands on the computer as if it were a human. But that's not the only thing it could do. You should read the article.
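For illustration: this is the standard boot-protocol keyboard report descriptor from the USB HID specification (Appendix E). Any device that serves up this 63-byte blob during enumeration is treated as a keyboard by the host, regardless of what the hardware actually is — there is nothing tying the descriptor to physical keys.

```python
# Boot-protocol keyboard report descriptor (USB HID 1.11, Appendix E).
BOOT_KEYBOARD_REPORT_DESCRIPTOR = bytes([
    0x05, 0x01,  # Usage Page (Generic Desktop)
    0x09, 0x06,  # Usage (Keyboard)
    0xA1, 0x01,  # Collection (Application)
    0x05, 0x07,  #   Usage Page (Key Codes)
    0x19, 0xE0,  #   Usage Minimum (224): left Ctrl
    0x29, 0xE7,  #   Usage Maximum (231): right GUI
    0x15, 0x00,  #   Logical Minimum (0)
    0x25, 0x01,  #   Logical Maximum (1)
    0x75, 0x01,  #   Report Size (1)
    0x95, 0x08,  #   Report Count (8)
    0x81, 0x02,  #   Input (Data, Variable, Absolute): modifier bits
    0x95, 0x01,  #   Report Count (1)
    0x75, 0x08,  #   Report Size (8)
    0x81, 0x01,  #   Input (Constant): reserved byte
    0x95, 0x05,  #   Report Count (5)
    0x75, 0x01,  #   Report Size (1)
    0x05, 0x08,  #   Usage Page (LEDs)
    0x19, 0x01,  #   Usage Minimum (1)
    0x29, 0x05,  #   Usage Maximum (5)
    0x91, 0x02,  #   Output (Data, Variable, Absolute): LED states
    0x95, 0x01,  #   Report Count (1)
    0x75, 0x03,  #   Report Size (3)
    0x91, 0x01,  #   Output (Constant): LED padding
    0x95, 0x06,  #   Report Count (6)
    0x75, 0x08,  #   Report Size (8)
    0x15, 0x00,  #   Logical Minimum (0)
    0x25, 0x65,  #   Logical Maximum (101)
    0x05, 0x07,  #   Usage Page (Key Codes)
    0x19, 0x00,  #   Usage Minimum (0)
    0x29, 0x65,  #   Usage Maximum (101)
    0x81, 0x00,  #   Input (Data, Array): 6-key rollover array
    0xC0,        # End Collection
])
print(len(BOOT_KEYBOARD_REPORT_DESCRIPTOR))  # 63
```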


> Because it's not just USB flash drives

As for the actual flaw, yes that's true. But that's not what this article is talking about.

> You should read the article.

I did. It says:

> “People look at these things and see them as nothing more than storage devices,” says Caudill. “They don’t realize there’s a reprogrammable computer in their hands.”

It's clearly talking about the storage devices. If they were talking about the bus protocol they'd say "USB is nearly impossible to secure in its current form" instead of "USBs are nearly impossible to secure in their current form"


I think you're being kinda pedantic. It's like people who write "ATM Machine." It's wrong, but it's not a big deal and the reader understands what they're saying.


> I think you're being kinda pedantic

Possibly. I wouldn't care if it was speech. I wouldn't comment if it was a regular newspaper. But WIRED writing like this?


I used to really enjoy Wired in the pre Conde Nast days....


I work on printers. We've done useful things with our USB interface to make printers appear as mass storage (drivers stored on the printer--no download necessary). Once the driver is installed, we switch over to a printer+scanner profile.

If we can pretend to be mass storage, we could pretend to be a keyboard/mouse, too. It's a small matter of firmware.


Yes, but there's still a useful distinction between USB devices which can be corrupted as described here, and USB interfaces, which would be used by a compromised device but aren't (afaik) themselves subject to being reprogrammed in this way.


After reading the article, I came back to say the same thing. These passages do not sound like they're coming from a technology site:

"... USBs are nearly impossible to secure in their current form."

"... silently disable a USB’s security feature that password-protects a certain portion of its memory."

Just substitute the name of any other interface - PS/2, HDMI, Thunderbolt, etc - to get a sense for how weird it sounds, e.g., "HDMIs are nearly impossible to secure".


I think you're in the minority. It's fairly common to hear someone say "USB" in place of "USB flash drive". Especially when it's used in plural, "USBs", it's hard to be confused about what they're referring to.


I have never heard that before, it definitely sounds weird to my ears.


USB refers to the bus, that is the wires and signals linking a device to the computer. People probably mean "USB device".


It's like arguing against people calling it an "ATM Machine" (i.e. an Automated Teller Machine Machine).


I dunno. I hear "USB Drive" mostly. Never "USB flash drive" or just "USB".

Sometimes "USB thingy".


I was looking at laptops in my local supermarket and they had a special offer of a free "8GB FDD" with some laptops...


Well, I cringe when people talk about their "Bluetooth" (implying headset) but that has gained acceptance as well. When one product is so thoroughly associated with a brand, it can happen.


I would guess you also cringe when someone asks you for a Kleenex (facial tissue), or your daughter asks for her Barbie (doll), or your son asks for his Teddy Bear (stuffed animal)...


We should all be cringing at headlines like this. Or not. I dunno. Which ever makes them stop.


Why so many Wired Magazine articles lately?


Somebody catching up on their reading, maybe?



