
The argument for disabling loading new firmware on your own device is valid. It prevents an outside actor from loading malicious firmware. But it's a tradeoff: it means that if a vulnerability is found, the device has to be replaced, and users can't customize their firmware. That's a good tradeoff; I'd rather risk paying for a new Yubikey than risk a security compromise, and most users are unqualified to verify the security of firmware being loaded onto the device.

The problem is, it's not a tradeoff Yubico have to make. They can allow users to achieve the same goals by distributing the device un-flashed, with the source code to the firmware. Upon flashing, the firmware would disable further flashing. If the user doesn't like this tradeoff, the user can choose to change the code. As a courtesy to more trusting users they could provide the service of optionally flashing devices for you. And qualified users can verify the security of the firmware before loading it.
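The "flash once, then lock" policy described above can be sketched as a toy state machine (all names hypothetical; a real implementation would set a hardware lock bit, not a Python flag):

```python
class Device:
    """Toy model of a device whose first firmware flash permanently
    disables further flashing."""

    def __init__(self):
        self.firmware = None
        self.locked = False

    def flash(self, image: bytes) -> bool:
        if self.locked:
            return False          # flashing was permanently disabled
        self.firmware = image
        self.locked = True        # the loaded firmware locks the interface
        return True

dev = Device()
assert dev.flash(b"user-audited-firmware")      # first flash succeeds
assert not dev.flash(b"malicious-replacement")  # later flashes are rejected
assert dev.firmware == b"user-audited-firmware"
```

The point of the sketch is just that the lock is set by the user's own (auditable) first flash, rather than at the factory.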

But by flashing the devices themselves, Yubico has chosen the worst of both worlds. Now an outside actor can once again add malicious firmware: Yubico is an outside actor. AND nobody can verify the security of the firmware. This isn't even a tradeoff, it's just a loss.




> They can allow users to achieve the same goals by distributing the device un-flashed

There is the possibility of the device being intercepted before it reaches you. Or before you have gotten around to locking it down. Or when you plug it into your (compromised) system to lock it down.

Since all communication goes over the USB port, the problem is that the device can arrive with backdoored firmware that appears to be normal/unflashed: one that still accepts flashing (basically by running the flashed image in a virtual machine/emulator), and appears to get locked down when you go through any lockdown process (since you only end up locking down the VM), but still has the backdoor in place.
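A toy sketch of that attack (all names hypothetical): the malicious firmware emulates an unflashed device, so the victim's flash and lockdown steps only change the emulated state, while the backdoor sees everything that passes through.

```python
import hashlib

class BackdooredFirmware:
    """Toy model of backdoored firmware that emulates an unflashed device."""

    def __init__(self):
        self.emulated_image = None
        self.emulated_locked = False
        self.exfiltrated = []          # the backdoor's copy of all traffic

    def flash(self, image: bytes) -> bool:
        if self.emulated_locked:       # mimics a genuinely locked device
            return False
        self.emulated_image = image    # only the emulated state is updated
        self.emulated_locked = True
        return True

    def sign(self, challenge: bytes) -> bytes:
        # Emulate the victim's own firmware answering a challenge
        # (a hash stands in for running the image)...
        response = hashlib.sha256(self.emulated_image + challenge).digest()
        self.exfiltrated.append((challenge, response))  # ...while leaking it
        return response

evil = BackdooredFirmware()
assert evil.flash(b"audited-firmware")     # looks like a normal first flash
assert not evil.flash(b"second-attempt")   # looks properly locked down
evil.sign(b"login-challenge")
assert evil.exfiltrated                    # but the backdoor saw everything
```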

Firmware aside, people can modify the hardware too, and you won't know unless you crack open the device and inspect the internals (which many devices are designed to prevent). Even then, a really sophisticated attack could replace the chips with identical-looking ones; if the originals are off-the-shelf parts, that wouldn't even be hard. Attackers can also add an extra chip in front of the real one that intercepts the communication, or perhaps compromise the 'insecure' USB chip (if it's programmable).

With locked-down hardware, the manufacturer can bake a private key into each chip so that official software can check the hardware by asking it to digitally sign a challenge. But if the attacker has added their own chip between the USB port and the legit chip, they can simply pass the requests through to the official chip.
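The challenge-response check above, and why a pass-through interposer defeats it, can be sketched in a few lines (toy code: an HMAC with a baked-in secret stands in for the asymmetric signature; all names hypothetical):

```python
import hmac, hashlib, os

DEVICE_KEY = os.urandom(32)   # baked in at manufacture (toy stand-in
                              # for the chip's private signing key)

def legit_chip_sign(challenge: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(response, legit_chip_sign(challenge))

recorded = []

def pirate_chip(challenge: bytes) -> bytes:
    """Interposer between USB and the real chip."""
    recorded.append(challenge)        # eavesdrop / tamper here
    return legit_chip_sign(challenge) # then forward to the real chip

challenge = os.urandom(16)
assert verify(challenge, pirate_chip(challenge))  # attestation still passes
assert recorded                                   # yet the MITM saw it all
```

Attestation proves the genuine chip is present somewhere in the chain, not that nothing sits in front of it.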

A TPM will do something like keep a running hash of all the instructions sent to the hardware and use the resulting hash as part of the digital signature verification, but if you just mirror the requests, that doesn't help either.
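A rough sketch of that running-hash ("extend") idea, and of why a pure pass-through mirror still produces the same value (toy code, not real TPM command formatting):

```python
import hashlib

def extend(pcr: bytes, command: bytes) -> bytes:
    # TPM-style extend: fold each command into a running hash, so the
    # final value depends on the entire sequence of instructions.
    return hashlib.sha256(pcr + hashlib.sha256(command).digest()).digest()

commands = [b"load-key", b"sign", b"logout"]

pcr = b"\x00" * 32
for cmd in commands:
    pcr = extend(pcr, cmd)

# A verifier replaying the same sequence computes the same value...
check = b"\x00" * 32
for cmd in commands:
    check = extend(check, cmd)
assert pcr == check

# ...but an interposer that mirrors every command unchanged feeds the
# chip the identical sequence, so the hash can't reveal its presence.
```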

The next stage is to use the keys on the chip to encrypt all communication between the host and the 'secure' chip, so any 'pirate' chip in the middle won't see anything useful.
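A minimal sketch of that encrypted channel, assuming a secret baked into the chip (toy HMAC-based stream cipher as a stand-in for a real authenticated cipher; `DEVICE_SECRET` and the key sizes are illustrative):

```python
import hmac, hashlib, os

DEVICE_SECRET = os.urandom(32)   # baked into the secure chip (hypothetical)

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from the shared secret and a nonce."""
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    # XOR with the keystream; the same call decrypts.
    return bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))

nonce = os.urandom(12)
wire = encrypt(DEVICE_SECRET, nonce, b"sign this login challenge")
assert wire != b"sign this login challenge"   # interposer sees ciphertext
assert encrypt(DEVICE_SECRET, nonce, wire) == b"sign this login challenge"
```

An interposer without `DEVICE_SECRET` can still drop or replay traffic, but it can no longer read or usefully modify it.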

Users could be allowed to 'bake' their own keys in, but that brings back the intercepted-hardware problem: the attacker gets the hardware first and installs fake firmware that appears to accept your custom key and performs the encryption.

Personally I think worrying about security at that level is overkill even if you're dealing with quite a bit of money. It would have to be quite an organised attack: the attacker would have to gain physical access to the device, compromise it, return it undetected, and then gain physical access again later, requiring both physical and digital security skills.

That's much more work than just stealing it or applying rubber-hose cryptanalysis. Attackers can also simply compromise the system being used to access whatever the device protects.



