An attacker not only needs your username and password, they also need physical access to your key. They then have to disassemble the device. If they want to give it back to you, they'll need to reassemble it.
So not exactly trivial!
A blob of nail-varnish over a plastic seam might be a useful canary.
But this does highlight one weakness of these FIDO tokens - you have to manually maintain a list of where you've registered them. And if your token is lost or stolen, you have to manually revoke every single one.
NinjaLab: "All Infineon security microcontrollers (including TPMs) that run the Infineon cryptographic library (as far as we know, any existing version) are vulnerable to the attack."
- Chips in e-passports from the US, China, India, Brazil and numerous European and Asian nations
- Secure enclaves in Samsung and OnePlus phones
- Cryptocurrency hardware wallets like Ledger and Trezor
Ledger uses STMicroelectronics secure elements and should not be affected by this. Trezor Safe uses Infineon OPTIGA though. See https://bitcointalk.org/index.php?topic=5304483.0 for a table with wallets and their respective microcontrollers/secure elements.
Nice to see a fellow enthusiast here, this is a nice point that different hardware will have different levels of related risk. But this is kind of an entire class of attack where similar paths may be able to be used on these other controllers. Don't gloss over it.
On a side note, used to frequent a bar where one of the creators of Ledger also did. Was nice to spend various crypto freely!
Ledger literally supports key extraction as a feature and pushes the firmware updates hard. The last S firmware without key extraction still works, while the equivalent X firmware can no longer be used.
Passports are kind of a big deal. The customs agent is going to visually verify the photo vs the holder, but the customs agent is going to trust the valid RFID chip probably 100% of the time as it's assumed to be unbreakable.
However if we look only at border checkpoints (including airports) in first world nations the number is probably a lot higher.
Not only are agents likely to be using the chip, self-service immigration gates have become really popular at airports around the world and mostly use the RFID chip together with a face scan.
On the bright side, this bug seems to require an ECDSA operation, and I would guess that most ePassports are using RSA. Can't seem to find any statistics but the standards support both.
Since it's a non-constant-time implementation of a specific part of the EC operation (modular inversion), my guess would be they reused that code everywhere, and it's probably also present in ECDH and all other algorithms requiring a modular inversion.
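To illustrate the class of bug being discussed (this is not Infineon's actual code): a textbook extended-Euclidean modular inversion takes a number of loop iterations that depends on its inputs, so its running time leaks information about the operand.

```python
def modinv_leaky(a: int, n: int) -> tuple[int, int]:
    """Extended Euclidean inverse of a mod n; returns (inverse, steps).

    The loop count depends on the inputs, so a naive implementation
    like this leaks information through execution time."""
    r0, r1 = n, a % n
    s0, s1 = 0, 1
    steps = 0
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
        steps += 1
    if r0 != 1:
        raise ValueError("not invertible")
    return s0 % n, steps

n = 2**127 - 1  # a Mersenne prime, so every nonzero a < n is invertible
inv_a, steps_a = modinv_leaky(3, n)
inv_b, steps_b = modinv_leaky(2**100 + 7, n)
assert (3 * inv_a) % n == 1
# steps_a != steps_b: the iteration count (and hence time) is input-dependent
```

A constant-time implementation would instead perform a fixed amount of work regardless of the operand, e.g. a Fermat inversion via fixed-window exponentiation.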
That's assuming that the validation software even has all issuing countries' root keys available.
Supposedly it's surprisingly (or maybe not, given how international government relations historically work) difficult for countries to exchange their public keys: Since there isn't any central authority, nor a chain of trust available (a la "this key is signed by France and Switzerland, so it's probably the real thing to us, Germany"), it boils down to n^2/2 key exchanges, and n additional ones every time a single key expires or, worse, has to be rotated on short notice. Then all of that has to be distributed to all border authority systems.
Last time I looked into this (10+ years ago), my laptop doing Passive Authentication and Active Authentication using 10 lines of Python and my country's root certificate (it's publicly available) was supposedly more than what most border checks could practically do.
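The data-group half of Passive Authentication really is about that simple. A sketch of the idea, with a hypothetical entry format, and glossing over the fact that a real SOD is a CMS SignedData structure whose signature must first be verified against the CSCA root:

```python
import hashlib

def check_data_groups(sod_hashes: dict[int, bytes], data_groups: dict[int, bytes]) -> bool:
    """Passive Authentication, step 2 (sketch): after the SOD's signature
    has been verified against the issuing country's CSCA root certificate,
    confirm each data group read from the chip matches its hash in the SOD.
    Real SODs can use different hash algorithms per passport."""
    return all(
        hashlib.sha256(content).digest() == sod_hashes.get(dg)
        for dg, content in data_groups.items()
    )

# Hypothetical chip read: DG1 (MRZ data) and DG2 (photo)
dgs = {1: b"P<UTOERIKSSON<<ANNA<MARIA<<<<", 2: b"<jpeg bytes>"}
sod = {dg: hashlib.sha256(c).digest() for dg, c in dgs.items()}
assert check_data_groups(sod, dgs)
assert not check_data_groups(sod, {1: b"tampered", 2: dgs[2]})
```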
ICAO, the international organization which maintains the standards for travel document interoperability does have a public key directory that a reasonably large number of countries now participate in. The beauty of international organizations is that the individual members don’t all have to be on the best terms with each other.
Yeah, it’s surprisingly not straightforward. In my home country (Russia), only some biometric passports issued inside the country can be used on the automatic gates – mine was issued in an embassy overseas, so I can’t use them. It works just fine in Malaysia, though!
Fortunately (in this case) the payments card industry only acknowledged the existence of Elliptic Curves in 2021 [1], so most EMV cards should be safe.
The most important parts use symmetric keys anyway.
Sounds like they buried the lede with this one then. Some of the items on that list being 'crackable' seem infinitely more dangerous than a general-purpose device such as a YubiKey.
> - Cryptocurrency hardware wallets like Ledger and Trezor
Ledger hardware wallets (which btw can serve as U2F authentication but, AFAIK, not FIDO2) are protected by a PIN: three wrong PINs and the device, unlike a Yubikey, factory resets itself.
IIUC the side-channel attack relies on a non constant-time modinv.
I don't know if there's a way to force that modinv to happen on a Ledger without knowing the PIN. I take it we'll have a writeup by Ledger on that attack very soon.
This is at best another forensic tool (unlocking the TPM of a locked laptop/phone for prosecution) and at worst a red herring of a security flaw.
- Clone a passport -> why clone one if you can issue new ones? You risk being detected while using a clone (2 entries in 2 different countries, and you also need to look like the person), not to mention you have to destroy the passport
- Phone enclaves -> see above
- Crypto -> hardware wallets should be watched as closely as your normal wallet
- SIM cards -> swapping is faster, or if you're the gov, an intercept warrant will do the trick
- Laptops -> see above
- EMV chips -> if you have those skills and money, I don't think you'll waste time cloning credit/debit cards
> - Clone a passport -> why clone one if you can issue new ones? You risk being detected while using a clone (2 entries in 2 different countries, and you also need to look like the person), not to mention you have to destroy the passport
Well... not really. ICAO-compliant passports do not require storing a photo embedded in the chip. As long as you can forge the physical part of the passport (or obtain blanks), you just need the digital certificates from a "donor" passport of John Doe: print "John Doe" and his personal data (birth date/place, nationality, issuance/expiry dates) in the human-readable and MRZ fields, but crucially the photo of the person using the forgery.
Also, there are no centralized, cross country stores of entry/departure. Lots of places don't even register it for visa-free border crossings.
Some national ID documents, e.g. the Croatian national ID card "osobna iskaznica", do store a photo embedded in the chip, so that indeed restricts a forgery from being used by a non-lookalike.
> ICAO compliant passports do not require storing a photo embedded in the chip
That's completely on the issuing country then, though, just like they e.g. might choose to not use dynamic chip authentication, which also makes the passport subject to trivial chip cloning.
I wouldn't be surprised if some e-border gates reject travel documents that don't support chip authentication or don't have a digital version of the photo covered by the issuer signature.
Well... not really, from the viewpoint of a bank. Look, now the user can extract the key that the bank's TOTP app carefully guards and transfer it to another (rooted) device, or use it without a phone at all, meaning this app is no longer a "something unclonable that you have" authentication factor. From a risk-management and compliance perspective, that's a contract breach: the bank is legally obliged to store that secret securely, so that the user has grounds to complain if it could have been used by someone else.
India has e-passports? I am from there and I have one I renewed during the pandemic, so might have missed the news, but I didn't know we have e-passports now. I tried googling and didn't find much (the official page doesn't load).
Also, I checked the PDF on the NinjaLab post (article linked in this post) and there was no mention of India there. Is it from some other source, like Twitter?
If I remember correctly, Infineon already had a big TPM recall a while back. I remember my T470p had to first install a BIOS update to enable userspace updating of the TPM, then the TPM update itself. And I think some Yubikeys were replaced for free due to the same or similar issue.
> But this does highlight one weakness of these FIDO tokens - you have to manually maintain a list of where you've registered them.
FWIW, I use KeePassXC as my password manager and tag each account that uses my hardware keys, so if one is lost, stolen or broken, I can quickly get a list of sites from which to remove that key. I always register two keys and keep them physically separate, so I can still log in in the event I lose one.
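The tagging approach generalizes easily. Here's a minimal sketch of the idea with a made-up entry format (not KeePassXC's actual export schema): each entry carries tags naming the hardware keys registered there, so losing a key yields an immediate revocation checklist.

```python
# Hypothetical entry list; in practice this would come from a
# password-manager export filtered by tag.
entries = [
    {"title": "GitHub", "tags": {"yubikey-A", "yubikey-B"}},
    {"title": "AWS",    "tags": {"yubikey-A"}},
    {"title": "Forum",  "tags": set()},
]

def revocation_list(entries, lost_key: str) -> list[str]:
    """Sites where the lost key must be revoked."""
    return sorted(e["title"] for e in entries if lost_key in e["tags"])

assert revocation_list(entries, "yubikey-A") == ["AWS", "GitHub"]
```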
A small advantage to this attack is they don't need their own manufacturing and can attack keys which are already in use.
A supply-chain attack of "here's a pre-backdoored key, pretend it's perfectly secure, go use it" has no need for this exploit if you have manufacturing capability.
If you don't, then intercepting new Yubikeys in transit, extracting the keys, and sending them on their way would also be doable with the exploit described.
> you have to manually maintain a list of where you've registered them
For an individual, the answer there might be to limit the usage of the device to your "core" services, and rely on good password hygiene for everything else.
> rely on good password hygiene for everything else.
Usually this means using a password manager, and these days many of them also support WebAuthN in a way not tied to a specific device (or platform/ecosystem).
It's still two factors regardless of storage. Say you accidentally paste your password into the wrong field and post it on a forum. Whoever gets that still needs the second factor.
Sure, if your password vault gets breached then everything is exposed but that's extremely unlikely and you have a lot of work to do in that event regardless. It's an inherent risk to using a password manager: everything is centralized so it's a valuable target.
If there's malware on my device, isn't it game over already? Even if I have a second factor elsewhere, the malware can access session keys to whatever service I logged into from that device, among other things.
In theory. But if everything is in the password safe, the malware can just grab that and upload? And cover its traces. As opposed to patching every application/service you might use, and get access only when you use it.
It's certainly not good, but if you require 2FA to, say, change the email address attached to your account, or make withdrawals, or other important actions, it's not game over.
Yes, but I don’t need/want 2FA everywhere, and it’s still a strictly better single factor than a password (since WebAuthN is resilient against server-side database leaks and phishing).
> For an individual, the answer there might be to limit the usage of the device to your "core" services, and rely on good password hygiene for everything else.
What's the advantage here over password + yubikey? Isn't password + yubikey always going to be at least as secure as password, even if the yubikey gets lost/stolen/compromised?
> But this does highlight one weakness of these FIDO tokens - you have to manually maintain a list of where you've registered them. And if your token is lost or stolen, you have to manually revoke every single one.
I agree. I've been keeping track of FIDO tokens and where they work in my password manager and it's great.
I honestly want to extend this idea not just to FIDO tokens, but for anything that would ever need to be revoked and replaced. So stuff like FIDO tokens, 2FA secrets, Passkeys (both already handled by my password manager), payment methods, GPG keys and such.
> An attacker not only needs your username and password
Usernames and passwords are leaked all the time. Many users even re-use these across multiple services.
> they also need physical access to your key.
With enough practice, a motivated actor could make it seamless enough that you don't notice. Or stalk the target to find a weak point in their schedule that gives the attacker enough time to perform EUCLEAK. We are creatures of habit, after all.
> And if your token is lost or stolen, you have to manually revoke every single one.
I agree here. No way to easily track. I have to make a manual note for each service in password manager.
That's less related to passkeys, and more to what the site (i.e. the Relying Party in WebAuthN parlance) requires in terms of authentication security.
User Verification (including PINs) is possible for non-discoverable credentials and vice versa (e.g. Bitwarden's implementation doesn't seem to support user verification at authentication time, but supports discoverable credentials).
In any case, note that this particular attack seems to degrade the security of "Yubikey + PIN" from two factors (possession of the key, knowledge of the PIN) to one (possession of the key), as the PIN entry can be bypassed due to how user verification works in WebAuthN/CTAP2.
You can’t do that without the same assembly issue. Yubikeys don’t let you generate the FIDO2 keys at all and I believe there’s a flag on x509 keys which were imported rather than generated on the device.
The “looks physically the same” part is the problem I was referring to: this attack needs a lab and some time. If they have to add reprogramming a key and weathering it to look like yours to the task it’s certainly not impossible but it moves even further into “are you at war with a nation-state?” territory where the attacks are both unlikely to be worth the cost or easier than alternatives.
I’m not saying it’s not possible but 99% of normal people are really only at risk of online attacks or physical attacks which won’t be sophisticated stealth operations but more along the lines of “you’re sending me Bitcoin or I’m shooting your dog”.
Cloning a functional piece of hardware that behaves like the original yubikey doesn’t imply the clone looks and feels like the original. It sounds to me it’s like someone would make a clay copy of a key. I don’t think the attacker gets to replicate the manufacturing process to make an item the same shape and size as the original.
I think the most annoying part of this is that you cannot just replace a YubiKey.
Regardless of whether you're using passkeys or the non-discoverable ones, you need to manually go through each account and replace the YubiKey with a non-vulnerable key before decommissioning this one.
And then there is the non-discoverable part. I don't remember where I've used my YubiKey in the past.
Also, on the Ars article [0] there is mention of a PIN:
> YubiKeys provide optional user authentication protections, including the requirement for a user-supplied PIN code or a fingerprint or face scan.
This is not a YubiKey thing, but a FIDO thing [1].
Another side. I always worry more about being locked out by 2FA (a main use case of non-discoverable FIDO keys) as a consequence of lost/retired-then-forgotten keys.
This happened once when US customs detained my electronics, including a Yubikey. Later I managed to recover many accounts that accept email for 2FA or as the ultimate auth factor (i.e., password/2FA reset). But a few, including AWS, don't allow that.
Many websites encourage you to enroll 2FA without clarifying the alternative factors and the consequence of lost access.
This is super annoying. I wish sites would standardize on a simple infographic saying:
You will be able to access your account with:
* Username, Password and 2FA device.
or
* 5 points from the following list:
  * Username (1)
  * Current password (2)
  * Old password (0.5)
  * SMS verification (1)
  * Mother's maiden name or other secret word (0.5)
  * Email verification (2)
  * 2-factor device (2)
  * Video call with support agent showing government ID matching account name (1)
  * Has waited 7 days without successful login, despite notification to all contact addresses (2)
For added security, you can adjust the account to require 6, 7 or 8 points from the above list, but please be aware doing so might hinder your ability to access the account if you lose your password or 2FA device.
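The scheme above is easy to state precisely. A sketch, with the point values taken from the hypothetical list:

```python
# Weighted recovery factors from the hypothetical infographic above.
FACTOR_POINTS = {
    "username": 1.0, "current_password": 2.0, "old_password": 0.5,
    "sms": 1.0, "secret_word": 0.5, "email": 2.0, "2fa_device": 2.0,
    "video_id_check": 1.0, "7_day_wait": 2.0,
}

def can_recover(presented: set[str], threshold: float = 5.0) -> bool:
    """Account unlocks once the presented factors reach the threshold."""
    return sum(FACTOR_POINTS.get(f, 0.0) for f in presented) >= threshold

assert can_recover({"username", "current_password", "2fa_device"})   # 5.0 points
assert not can_recover({"username", "sms", "old_password"})          # 2.5 points
```

Raising the threshold per account, as suggested, is then just a per-user `threshold` setting.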
> the US custom detained my electronics, including a Yubikey
Can you elaborate on what happened?
I know that it's theoretically possible, but I thought that:
1- They would potentially detain only devices that they can extract information from (even if encrypted), like a laptop or a hard disk... a Yubikey (at least the old U2F-only ones) doesn't hold anything you can extract
2- They would eventually return the device to you (unless guilty of some national security violation?)
> 2- They would eventually return the device to you (unless guilty of some national security violation?)
"Eventually" is not good enough. People take things with them on their trips, because they expect to use these things while they are traveling/doing work at a remote location. Imagine you need to do some work on a remote site and you can't log in to your company's network, because the TSA has taken away your key so that they can inspect it.
If they just seize it, it will typically be returned at some point. If they decide it's subject to forfeiture, it is now their property. You can contest this with the forfeiture department but I guess if they decide the item is guilty of a crime or other excuses to keep it there's nothing you can do.
That's right. It will be United States v. Some Person's Yubikey. And you can hire it a lawyer if you want, because they will NOT give it a public defender. Massive violation of Constitutional rights if you ask me.
> Yubikey (at least the old U2F-only ones) don't have anything that you can extract
Newer Yubikeys hold secrets that, if exfiltrated, give you access to accounts.
I assume that if it doesn't already exist, there will be a Cellebrite-like device that governments can plug Yubikeys into to dump keys quickly like they're able to with cell phones.
The entire point of Yubikeys is that such a device should be impossible, and vice versa, if such a device were to exist, the Yubikey is nothing but an expensive USB flash drive.
You should always have more than one 2FA method. Especially for anything Google, as you'll never, ever recover the account (unless you are famous or go viral, i.e. the <1%).
You're correct that IAM users only allow a single 2FA key (their way of deprecating IAM users), but SSO users can have as many as they want and are honestly much better than IAM users. Even for my personal account I've moved to using an SSO user.
This would probably be a good place to suggest to others here to track which accounts you've logged into via Google or other social-media OAuth.
I just had to log into stack overflow for the first time in years, and did not remember what I used to previously log in. Once I figured it out that information went into Keepass too.
You should assume Google, GitHub, and Apple are hostile and try to limit your blast radius. If you have an account problem they have no customer service to help you.
I can't get into my Google account that's almost 20 years old because I only have the username, password, recovery email and have all the email forwarded to me, but I no longer have the phone number and they silently enabled 2FA SMS at some point.
Yes, this is exactly why I won’t use these federated identity features of platforms like this. I have a reasonable amount of trust that they are mostly secure, but I have zero trust that they will be helpful if I ever have account troubles. What I don’t need is to have Google (etc) auth problems cascade down to every other account I own.
I really wish Bitwarden had more robust tools for organizing, sorting and tagging passwords. The current system of sorting them into folders is practically useless.
For sites that use a resident webauthn token like GitHub, you can list the known sites on the key. The UX for all this is not where it should be, however.
Any idea why this was changed? The big advantage of non-resident keys is that they do not take up any space on the FIDO token, so you can have an unlimited number of them.
I've used five RK slots out of 25 available on my daily driver keys. Three of those are for SSO providers that provide access to the majority of the other apps I use. I opt in to "sign in with Google" etc wherever I can.
I don't see this impacting me personally anytime soon, but that might change when more sites start insisting on rolling their own RKs.
Hopefully future Yubikey models will ship with more flash, enabling many more RKs if you so desire.
I mean, that's the security story around it. You solve this by buying multiple Yubikeys. Google and others support multiple keys, which gives you the backup story (I have 4 keys in various places).
There's no fundamental reason it needs to be this difficult.
Yubico or really any other manufacturer could totally e.g. release "Yubikey pairs", both a stateless CTAP2 implementation with the same root secret, that would correspondingly work at the same sites without having to synchronize anything.
The reason they probably don't is that their entire trust model heavily depends on the perception that nobody, including the manufacturer, has a "reserve key" to any given Yubikey, even though of course the absence of such "linked keys" doesn't demonstrate the absence of any such backdoor.
To be clear, I don't have any reason to believe Yubico does this, but it would probably be a harder story to tell, marketing-wise, if they sometimes selectively did, even if it's for a valid use case.
I mean, you could also design keys to be synchronisable by the user. Generate a key pair inside key (2), transfer the public key from (2) to (1), encrypt the root key inside (1) with the public key, and transfer it over to (2).
(Or just allow the user to generate the root key outside of the device and insert it)
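The transfer described could be sketched as a standard key-wrap: (2) publishes a public key, (1) encrypts its root secret to it. A toy version with demonstration-sized DH parameters; a real device pairing would use proper ECDH plus attestation of the receiving device, not this:

```python
import hashlib, secrets

P = 2**127 - 1   # toy prime modulus, demonstration-sized only
G = 5

def keygen():
    """DH key pair; stands in for the pair generated inside key (2)."""
    x = secrets.randbelow(P - 2) + 2
    return x, pow(G, x, P)

def wrap(root_secret: bytes, their_pub: int):
    """Run inside key (1): encrypt the root secret to key (2)'s public key."""
    eph, eph_pub = keygen()
    shared = pow(their_pub, eph, P)
    ks = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    return eph_pub, bytes(a ^ b for a, b in zip(root_secret, ks))

def unwrap(priv: int, eph_pub: int, wrapped: bytes) -> bytes:
    """Run inside key (2): recover the root secret."""
    shared = pow(eph_pub, priv, P)
    ks = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    return bytes(a ^ b for a, b in zip(wrapped, ks))

priv2, pub2 = keygen()          # generated inside key (2)
root = secrets.token_bytes(32)  # root secret inside key (1)
eph_pub, blob = wrap(root, pub2)
assert unwrap(priv2, eph_pub, blob) == root
```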
I honestly think the interest from customers is just too low. I would bet the majority of Yubico's customers are enterprises where this is not really an issue for most use cases. If you lose your key used for Entra ID / SSO, just go to IT support and get a new one enrolled to your account. Much cheaper than having synchronised hot spares (at 30-50 USD a pop) for thousands of employees.
But what stops Mallory from simply using this sync method to sync your private key to her Yubikey? I mean, look at the kerfuffle that's been kicked up by this vulnerability, and a key-sharing scheme like the above is much easier to exploit.
(The second idea seems better assuming the user nukes the private key once the import is done. Otherwise the weakest link in the security chain will continue to be the opsec you practice with the private key file, in which case why spend the money on the Yubikey?)
Yeah of course the operation needs to be (cryptographically) authenticated somehow, I edited my comment in haste while going to work and accidentally messed it up completely. Thanks for pointing it out!
The idea I thought of is to essentially use the public key of (2) to seed the generation of the root secret on (1). Meaning the sync-pairing-setup is destructive to the root secret, and can only be done at startup (or if you are willing to reset the device).
(A mallory could of course reset it still, unless you have some e-fuse or something, but anyways that's only marginally worse than simply physically destroying it.)
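A sketch of that destructive seeding idea, with a plain hash standing in for whatever key-derivation function the device would actually use: the root secret is a function of the device's own entropy plus the pairing partner's public key, so re-pairing necessarily destroys the old root.

```python
import hashlib, secrets

def derive_root(own_entropy: bytes, partner_pub: bytes) -> bytes:
    """Root secret derived at setup time; pairing can only happen before
    first use (or after a full reset), since changing the partner changes
    the root."""
    return hashlib.sha256(b"pairing-v1" + own_entropy + partner_pub).digest()

entropy = secrets.token_bytes(32)
partner = secrets.token_bytes(33)   # stand-in for key (2)'s public key
root = derive_root(entropy, partner)
# Re-pairing with a different partner necessarily changes the root secret:
assert derive_root(entropy, secrets.token_bytes(33)) != root
```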
> (A mallory could of course reset it still, unless you have some e-fuse or something, but anyways that's only marginally worse than simply physically destroying it.)
You can already reset CTAP2-compliant FIDO keys using a (non-PIN-authenticated) command [1], so this wouldn't add anything that isn't already there.
I think the real issue here is that users probably don't expect having to reset/initialize a Yubikey once they take possession of it. Given the horror stories of how e.g. Amazon commingles inventory, I wouldn't be surprised if fraudsters could succeed in getting paired keys back into the supply chain.
Targeted attacks to friends/family can probably also not be ruled out ("hey, i got this spare yubikey in a black friday sale, want it?"), and unfortunately something like a family member or partner trying to take over somebody's accounts isn't unheard of.
There are just too many ways for this to go wrong, and while Yubikey has, I believe, looked into this option in the past (there's a draft design doc for this idea somewhere), they probably came to the same conclusion.
> You can already reset CTAP2-compliant FIDO keys using a (non-PIN-authenticated) command [1], so this wouldn't add anything that isn't already there.
Ah I did not know that, thanks.
> There are just too many ways for this to go wrong, and while Yubikey has, I believe, looked into this option in the past (there's a draft design doc for this idea somewhere), they probably came to the same conclusion.
Would be interesting to see the draft. But yes of course, there are tradeoffs. Having a LED similar to on the Bio to indicate it's paired could be one way, or selling pairs in a SKU where the user actually have to initialise them (with some clear physical difference to help solve the family/friends case). But it's complicated, and I think Yubico has made the correct decision that it's simply not worth it (not that it's impossible to do in a secure way).
However, the lack of a reasonable backup solution is keeping me away from Yubikey for any non-enterprise use, where a broken key would actually lock me out of an account for real.
HSMs support these kinds of scenario (high availability clusters, wrapped keys etc) but I agree. For enterprise use there's no interest or need, and for end users the feature would potentially be confusing.
I just use multiple tokens, and I knew that the infineon sle78 was an intel 80251 derivative before this report ;)
True, but what's even more convenient than that is to just not use hardware authenticators for anything but the most important accounts/sites, and e.g. use syncing credentials (as provided by many password managers, Google, and Apple).
The fraction of people willing to regularly schedule enroll-o-ramas at each of their accounts and each of their backup key locations is probably smaller than a percent of all potential WebAuthN users.
It is really annoying that more sites don't support multiple security keys, though. As far as I can tell, it's not encouraged by the FIDO Alliance and I can't think of a good technical reason for it.
I forget which financial institution I was using at the time, but they explicitly only supported one key. That is, you add a new one and the old one is expunged.
Banks are so slow with this sort of thing, and still require SMS as a fallback option.
The vast majority of sites I've used Yubikeys for have supported multiple. About the only one that I use which still doesn't to my knowledge is Paypal.
Maybe I'm out of date, then! I don't enroll new keys very often. Paypal is a great example of a service that I would like to support multiple keys, though, so it's disappointing that they still only support one.
How often do you check that those other/backup keys are still secure? This attack becomes easier if the attacker knows the location of the secondary key(s); because of disuse, they wouldn't even need to be replaced.
I mean, not all at once, but (I only have 3): there's the one I set up when I bought a new laptop in 2019, which became the old one when I got a new laptop in 2021 and set up a second. And then the third is a backup key I made at some point, stored offsite in case I get robbed, my house gets burgled, or my place burns down in a fire.
It's inconvenient, sure, but it's more convenient than my bank accounts that are accessible online being cleaned out.
> I don't remember where I've used my YubiKey in the past.
I've yet to encounter a site that allows you to enroll a FIDO device without setting up some other form of 2FA; for me that's TOTP codes, which are kept in an app.
If you're using a yubikey solely for its PGP key stuff and you have a backup of the key or have a key hierarchy then replacing a yubikey is pretty trivial.
Everything about security / auth sucks in this age.
Each time I go down this path with either work or personal stuff it's just people changing passwords all the time / having to re-login all the time ... there's no happy path without a huge hassle.
Yubikeys are intended to block phishing. This attack requires physical access.
I.e., if you're "worth it" to target IRL, you shouldn't use a Yubikey to begin with. Someone can swap your spare and you won't realize it until it's too late.
The last time Infineon chips had a crypto-breaking bug, Estonians got new ID cards for free. Meanwhile my less than two months old Yubikey 4 stopped working as a hardware attested PIV smartcard.
Software that keeps revocation lists (or whatever they are called) up to date stopped accepting keys generated on that hardware. The Yubikey itself continued to work just fine, but I had to switch to externally generated keys.
That isn't exactly some subtle side channel involving tiny emissions of radio waves... The time depending on the secret data is pretty much the first thing that any side channel audit ought to check.
It's super simple too - simply verify that all operations always take the same number of clock cycles, no matter the data. Errors too - they should always be thrown after a fixed number of clock cycles, independent of the data.
> That isn't exactly some subtle side channel involving tiny emissions of radio waves...
No, it seems to be exactly that. What's non-constant-time is not the execution of the algorithm (as that would probably be exploitable even via USB, worst case), but rather the duty cycle of the externally observable RF side channel, if I understand the paper correctly.
Infineon's implementation doesn't seem to be vulnerable to a pure timing attack, as otherwise that RF side channel wouldn't be needed.
They also do implement nonce blinding, but unfortunately with a multiplicative mask significantly smaller than the size of the elliptic curve, so it's brute-forceable.
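Why a small multiplicative mask is brute-forceable: if the side channel leaks the blinded nonce k·r mod n and r has only, say, 16 bits, an attacker can try every r and test each candidate k against a public value. A toy demonstration with a demonstration-sized group, not the real curve or the paper's actual mask width:

```python
import secrets

p = 2**127 - 1          # toy prime; stands in for the curve order n
g = 3
MASK_BITS = 16          # small multiplicative mask r < 2^16

k = secrets.randbelow(p - 1) + 1        # secret nonce
r = secrets.randbelow(2**MASK_BITS - 1) + 1
k_blind = (k * r) % p                   # what the side channel leaks
commitment = pow(g, k, p)               # public value; stands in for k*G

def recover(k_blind: int, commitment: int):
    """Brute-force the mask: unblind with every candidate r and check
    each candidate k against the public commitment."""
    for r_guess in range(1, 2**MASK_BITS):
        k_cand = (k_blind * pow(r_guess, -1, p)) % p
        if pow(g, k_cand, p) == commitment:
            return k_cand
    return None

assert recover(k_blind, commitment) == k
```

With a mask as wide as the curve order, the same search would take ~2^256 tries instead of 2^16, which is the whole point of nonce blinding.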
> What's non-constant-time is not the execution of the algorithm (as that would probably be exploitable even via USB, worst case), but rather the duty cycle of the externally-observeable RF side channel, if I understand the paper correctly.
Are you sure? Section 4.3 (pg 52) starts with "The leaked sensitive information relates to the execution time of Algorithms 1 and 4."
Maybe my terminology is off here, but a "pure time leak" to me would be something like a given operation varying in return time (i.e. execution time/latency being a function of some secret data), whereas this seems more like all operations take constant time, but in that constant time, RF emissions vary in their timing (e.g. on/off, strong/weak etc.) as a function of some secret data.
> primary goal is to fight the scourge of phishing attacks. The EUCLEAK attack requires physical access to the device
Something to consider: If someone is going to go through the effort to get physical access to a Yubikey, they only need to swap it with one that has a similar level of wear and a similar appearance. At that point, the victim will merely believe that their Yubikey is broken; and/or the attacker will have enough time to use the Yubikey.
For example, I have two Yubikeys. Someone could sneak into my house, swap my spare, and I wouldn't figure it out until I go to use my spare.
Basically: This attack is only "worth it" if your target is so valuable that you can target them in person. At that point, I'd think the target would use something a little more secure than a Yubikey.
> At that point, the victim will merely believe that their Yubikey is broken; and/or the attacker will have enough time to use the Yubikey. For example, I have two Yubikeys. Someone could sneak into my house, swap my spare, and I wouldn't figure it out until I go to use my spare.
You can inspect a YubiKey's identity with `ykman list`, so you can easily check whether a YubiKey is broken or has actually been swapped. If you have high security requirements you can do this periodically and/or keep the spare in a physically secured location.
> use something a little more secure than a Yubikey
> Seriously, it's trivial to fry a key and swap it with the working spare if you have access to it
So all an attacker needs to do is swap my Yubikey with a fried one. Maybe someone will figure it out if they're tracking the numbers written on the outside.
The point is that if you require more security, there are tools to check it. For me, the fact that an attack requires physical access to my keys is comfort enough, so I don't.
> Maybe someone will figure it out if they're tracking the numbers written on the outside.
So if your opsec requires it, keep track of which keys you have and their identities. If one is fried, remove it from all the services you authenticate with.
I'm not saying it's perfect, but you can create practices/procedures that protect you from (or at least alert you to) most realistic attacks.
> Basically: This attack is only "worth it" if your target is so valuable that you can target them in person. At that point, I'd think the target would use something a little more secure than a Yubikey
Absolutely.
In practice, the Yubikey is almost never going to be the weakest link in the chain. They could target your devices, intercept your communications, or serve warrants on/covertly exploit the services that host your data.
Fantastic research by NinjaLab. One of the most interesting parts to me from Yubico's advisory is that the Webauthn protocols attestation [1] is also defeated by this local cloning. Could the protocol have been better designed to resist this local cloning attack?
> An attacker could exploit this issue to create a fraudulent YubiKey using the recovered attestation key. This would produce a valid FIDO attestation statement during the make credential resulting in a bypass of an organization’s authenticator model preference controls for affected YubiKey versions.
> Could the protocol have been better designed to resist this local cloning attack?
I don't see how, the attacker is cloning the secrets used to sign the request, if they have those secrets there's no way of distinguishing the clone from the original device. The whole security model of secure elements is preventing the keys from being extracted, if you can do that there's no more security than saving the key to a file on your computer.
Of course, to get the key they need to physically open the device, so unless someone actually takes your key, it's more secure than saving them on your computer.
The login service could send not just the request, but also N random bits for the next session.
These would be stored by the device and combined with the next session's request data before signing. The login site does its own combining before checking the signature.
This way any clone usage would be obvious. If the attacker uses the clone before you, your key wouldn't work for that site anymore. The site could even warn you if it keeps track of previous values.
Likewise it limits the timeframe the attacker can use the clone.
I guess even just 16 bits of data should make it quite resistant to guessing by the attacker.
This requires some non-volatile storage to keep the "future bits", but at 16 bits you can do quite a few logins before having to do a single page erase.
Then again, not my field so perhaps there's something I'm missing.
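The scheme described above can be sketched in a few lines of Python, with HMAC standing in for the device's signature operation (all names here are illustrative; this is not part of any FIDO specification):

```python
import hmac
import hashlib
import secrets


class Device:
    """Toy authenticator that mixes server-provided 'future bits' into each signature."""

    def __init__(self, key):
        self.key = key
        self.stored_bits = b""  # bits received during the previous login

    def sign(self, challenge, next_bits):
        # Sign the challenge combined with the bits stored last time,
        # then remember the new bits for the next login.
        sig = hmac.new(self.key, challenge + self.stored_bits, hashlib.sha256).digest()
        self.stored_bits = next_bits
        return sig


class Site:
    """Toy relying party that rotates the expected bits on each successful login."""

    def __init__(self, key):
        self.key = key
        self.expected_bits = b""

    def make_challenge(self):
        self.pending = secrets.token_bytes(32)
        self.next_bits = secrets.token_bytes(2)  # 16 bits, as suggested above
        return self.pending, self.next_bits

    def verify(self, sig):
        expected = hmac.new(self.key, self.pending + self.expected_bits,
                            hashlib.sha256).digest()
        if hmac.compare_digest(sig, expected):
            self.expected_bits = self.next_bits  # rotate only on success
            return True
        return False
```

A clone made before the last legitimate login holds stale bits, so its next signature fails verification, which is exactly the detection opportunity described above.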
Perhaps some kind of rolling key system could've been used? If the key was rewritten on each successful login, either the attacker would have to use their cloned key immediately (alerting the user), or have their cloned key become useless the moment the user logs in again. This would only work with discoverable credentials, and would increase wear on the device's flash storage.
Not really, unfortunately, given that attestation practically always depends on some piece of secure hardware being able to guard an issuer-certified secret with which it can authenticate itself to a relying party and thereby bootstrap trust into any derived/stored secrets.
If that attestation secret is extractable in any way, nothing prevents an attacker from creating a fake authenticator able to create fraudulent attestations, despite not behaving like an authentic one (i.e. in that it allows extracting stored credentials).
You could theoretically try to mitigate the impact of a single leaked attestation secret by using something like e.g. indirect attestation and authenticator-unique attestation keys (rather than attestation keys shared by hundreds of thousands of authenticators, which is what Yubikey does), but it would be a probabilistic mitigation at best.
I see the Yubico website says 5.7 or greater is not affected.
Elsewhere on the Yubico website[1] they state that a feature of 5.7 release was ...
Migration to Yubico’s own cryptographic library that performs the underlying cryptographic operations (decryption, signing, etc.) for RSA and ECC
Hopefully they've had lots of eyes looking at that! Not sure why anybody feels the need to write their own crypto libraries these days; there are so many implementations out there, both open and closed source.
> Not sure why anybody feels the need to write their own crypto libraries these days
Because this is the second time they've had a security issue (the last time was even worse) because of their vendor? When your entire company is based around doing cryptography, it actually makes sense to hire enough applied cryptographers to own your own destiny.
An embedded platform that hasn't been ported to, and that may be impractical to port to because it's so different. There may be other reasons, and for closed source there may be economic considerations, especially since this is a migration from a previous closed-source (probably source-available) vendor to an in-house solution.
And had so many stupid bugs that it looked like it was written by summer interns. Everybody I know who actually worries about actual security ran far, far away from it.
If, however, you just need https for web pages, it's good enough to get started.
Including the very, very limited environment of secure elements, and the capability of interfacing with the sometimes very specialized cryptographic accelerators/coprocessors required for adequate performance?
We're talking low double-digit kilobytes of persistent storage, and sometimes single-digit kilobytes of memory here.
Also, including a full TLS library seems like complete overkill if you only need some cryptographic primitives. These things are usually certified in expensive code and hardware audits; you essentially have to justify (in terms of adding complexity, and with it the possibility of vulnerabilities) and on top of that pay for every single line of code.
As distinct from the attack itself? This is an interesting exercise and worth publishing, but in practice I don't see much real world consequence even for a notionally vulnerable device.
Additionally, it looks like this vulnerability exists on all Infineon trusted-X products (platform modules, microcontrollers, etc...) that use the same internal crypto libraries.
My understanding is that Infineon cryptolib causes the hardware vulnerability and that their TPMs, for example, internally use this library to implement the crypto parts of the TPM specification.
> Some tools at different layers that would have stopped timing attacks against ECDSA nonce-inversion software: (1) the safegcd algorithm, https://gcd.cr.yp.to; (2) switching from ECDSA to EdDSA, typically Ed25519; (3) using TIMECOP (https://bench.cr.yp.to/tips.html#timecop) to scan for leaks.
Does anybody know if plain ECDH (as used by e.g. ICAO biometric passports implementing PACE) or Ed25519 (assuming Infineon's chips/libraries implement that) are impacted?
The paper specifically calls out PACE as also using modular inversion, but I don't understand either that or Ed25519 enough to get a sense for how bad this is for algorithms beyond just ECDSA.
> The new YubiKey firmware 5.7 update (May 6th, 2024) switches the YubiKeys from Infineon cryptographic library to Yubico new cryptographic library. To our knowledge, this new cryptographic library is not impacted by our work.
Found vuln in library used by many for 14 years. Solution: switch to custom code. That's a bold strategy. I hope it pays off.
> Infineon has already a patch for their cryptographic library [---]
On the other hand: The commonly-used library contained a vulnerability that went undetected for 14 years. Yubico is also very much in the business of embedded and side-channel-hardened cryptography.
This might just be one of the cases where switching to custom code is the right move.
They really should. The recovery of the one secret the device is supposed to keep is catastrophic. Sure, the recovery itself might be an edge case, but Yubikey users buy the product to protect themselves from edge cases.
> I'm a bit baffled that they're not offering a replacement programme.
That was under the previous leadership when Stina Ehrensvärd was CEO.
Now they've taken VC money[1], and more recently merged with a listed SPAC[2] I suspect replacement devices will never happen because, you know, shareholders come first.
I wish Yubico had some serious competition, but sadly they don't. NitroHSM is not the same thing (plus has flashable firmware, which leads to potential security risks). Tilitis looks interesting, but its far from maturity.
I have used SoloKeys since v1. Currently I own two v2 SoloKeys, and they "just work" for anything involving FIDO2. I specifically use them for storing SSH private keys and WebAuthn credentials. The key can be used on any device with a USB-C port (there is also a variant supporting NFC, but I don't have that variant).
Despite being a bit careless with my keys (e.g. leaving them in a pocket and washing said clothing), they still work just fine. I highly recommend SoloKeys to anyone who wants to support open source hardware and firmware.
> I wish Yubico had some serious competition, but sadly they don't.
Looking at the list of FIDO certified hardware authenticators alone, they definitely do.
My country's eID scheme even requires FIDO Level 2 certification, which Yubico hasn't had for a while, so they practically supported only non-Yubico authenticators until recently.
> I wish Yubico had some serious competition, but sadly they don't. NitroHSM is not the same thing
What's not the same thing as what? There's no NitroHSM (Nitrokey has 2 different HSM-related products that are different kinds of things from each other, and neither is called that), and most Yubikeys aren't their special HSM devices.
> You don’t need to be some big corpo to be considered ‘serious’.
That's not what I meant and I suspect you know that. :)
I meant everything from the Yubico hardware (more compact and less bulky than anything else out there) to the Yubico software (extensive featureset with more controllability than most other products out there).
Also, as I said already, Yubico is one of the few (only?) ones that does not permit firmware flashing. Most competitor keys have firmware-flashing capability, which to me is a big no-no, as it's an attack surface just waiting for an exploit.
I suppose I'm not a regular consumer, but I buy devices like these under the expectation that they will eventually succumb to practical low-cost attacks.
I would be feeling a bit miffed if I bought one recently, though.
Previously, when their YubiKey 4's were found to be susceptible to the ROCA vulnerability [0], they issued replacements [1] for any customers who had affected devices. I had a few of those devices and they were replaced for free.
I guess that's a disadvantage of having a non-upgradable firmware. They can't fix these devices that are already out in the field.
As I understand it, the ROCA vulnerability is "the secrets generated by a YubiKey may be susceptible to classic cryptographic breaks", something along the level of "the cipher is inherently weak."
This vulnerability, meanwhile, appears to be in the class of "if someone has physical access to your hardware token, and has access to some specialized (expensive) hardware to do side-channel analysis, they might be able to do side-channel on your hardware token." But if someone has physical access to the hardware token... I mean, at that point, most people would consider it compromised anyways and wouldn't expect security guarantees from that point.
Not being able to flash the firmware is a feature, not a bug. :)
It's the fundamental reason I won't buy a NitroHSM: the Rumsfeld-style unknown unknowns about the firmware-flash feature on NitroHSMs being used as a future exploit route.
> Not being able to flash the firmware is a feature, not a bug. :)
It is a feature only if they ship replacement devices in case of issues like this. If they don't and you're left with a broken device then I'd rather count it as a "bug".
If you have a YubiKey and you are worried, read this part from their report:
> the adversary needs to open the device (...) Then the device must be re-packaged and quietly returned to the legitimate user (...) assuming one can find a reliable way to open and close the device, we did not work on this part
YubiKeys are supposed to be tamper-evident, and you can also put tamper-evident stickers on them. I am more concerned that a determined attacker may eventually find a way to record the signals without having to remove the casing.
In the FIDO2 case, only the derived keys are extracted. The master key that derives non-resident keys isn't extracted. So I think it's not possible to really copy the key.
In the cases of FIDO2 resident keys (passkey) / PIV / GPG, maybe it's possible to extract and copy the exact keys. But I guess it can be detected through attestations.
And I just looked at ykman command. It doesn't seem to allow you to import a passkey to a Yubikey.
This requires the attacker to steal your key. When that happens, by the time they can get the secret key I've already revoked it.
The biggest problem with FIDO keys isn't the fact that people can gain access to your accounts if it's physically stolen.
I have redundant keys for backup access. But I have no idea which accounts I used the lost key for, in order to log into them one by one to revoke the key.
How does everyone here keep track? A post-it note in the cookie jar?
>This requires the attacker to steal your key. When that happens, by the time they can get the secret key I've already revoked it.
You're held in custody, detained, arrested, etc while your keys are dumped and accounts are accessed. You don't have the opportunity to revoke it without risking prison time.
This situation can happen if you simply choose to fly or visit another country.
That’s a different situation outside of most people’s reasonable threat model. The police don’t need to clone your Yubikey if they can use it as much as they want, and if they decide to go NYPD on you nothing else you do is going to end in a different outcome unless your MFA check is an in-person confirmation in a location outside of their control.
Though in this scenario, your adversary doesn't need to resort to a technical attack to clone your key. They can compel you to comply, and keep you locked up until you do.
They can, but assuming the law is actually being followed, you can only be held for so long without charges, and can be compelled to provide so much testimony.
Being able to quickly clone keys gives any LEO an opportunity to access your digital life as part of a simple stop versus a full criminal case.
In the UK, s 49 of the Regulation of Investigatory Powers Act 2000 provides for 2-5 years' imprisonment if you were to fail to do so, depending on the nature of the offence under investigation.
In Australia, s 3LA of the Crimes Act 1914 (Cth) imposes a similar obligation with a penalty of 5 or 10 years' imprisonment.
If you find yourself in this position in Russia or China, they would just make you disappear for as long as they saw fit.
That's not the only possible attack here: FIDO direct attestation requires a key to be shared among either none or at least 100 000 devices (for privacy reasons):
> If the authenticator puts the exact identical attestation key into a group of Authenticators (e.g., group of devices, phones, security keys...) so that the attestation key doesn't become a Correlation Handle, then each group of Authenticators MUST be at least 100,000 in number. If less than 100,000 Authenticators are made, then they MUST all have the same attestation key.
Yubico, to my knowledge, has chosen the latter route; this means that compromising a single YubiKey's attestation key immediately compromises at least 100k others.
The article notes "The attack requires physical access to the secure element (few local electromagnetic side-channel acquisitions, i.e. few minutes, are enough) in order to extract the ECDSA secret key." (emphasis added)
I’ve created two TOTP 2FA setups on two different YubiKeys. During the TOTP configuration process, the website gives me a password that I enter in the YubiKey configuration app.
Then, I do not store that password. But if I stored it on Bitwarden, I could easily create a YubiKey backup or set it on another app like ente.
I have not kept that password because I considered that it could easily compromise my security. I have kept the backup codes nonetheless.
Should I keep the TOTP configuration password that the website gives me when I tell it that I cannot scan the QR code?
> But if I stored it on Bitwarden, I could easily create a YubiKey backup or set it on another app like ente.
If you do that, you could just use Bitwarden’s TOTP functionality directly.
I don’t do that myself for important accounts as it effectively collapses 2FA into a single factor, but it’s an individual security/convenience tradeoff in the end.
I’d rather keep passwords, TOTP and backup keys in different services because I never saw the point of keeping them together with the passwords as you explain.
With the YubiKey, TOTP has become more convenient and more secure than it used to be with Authy and then Ente. But maybe I should consider integrating them for most accounts.
That's reasonable, but then you can't store the TOTP setup code in your regular password manager (and vice versa, if you do, that's not different from just using the PW manager's native TOTP calculation feature).
The advantage of using the Yubikey for TOTP is that (I believe) there's no way to extract the setup secret from it, so even if somebody gains temporary access to it, they can't exfiltrate your future OTPs and would have to attempt a log-in right there and then. Storing the secret in a recoverable way negates that property.
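To make the tradeoff concrete: a TOTP code depends only on the setup secret and the current time, so anyone who reads that secret out of a password manager can compute every future code. A minimal RFC 6238 implementation (SHA-1, 30-second steps) shows how little is involved:

```python
import hmac
import hashlib
import struct
import time


def totp(secret: bytes, timestamp=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from the raw setup secret."""
    counter = int(timestamp if timestamp is not None else time.time()) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

The RFC 6238 test vectors check out, e.g. `totp(b"12345678901234567890", timestamp=59, digits=8)` yields `"94287082"`. Storing this secret only inside a YubiKey means it can never be read back; storing it in Bitwarden means it is exactly as safe as the vault holding it.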
It's really easy to blind a variable time modular inverse, assuming you have a random number generator handy: you want the inverse of x, compute instead the inverse of x*r, then multiply the result with r.
So I wonder if some things using the same chip/library are not actually vulnerable because they blinded the operation?
It's *better* to use a constant time algorithm, but that's harder to do in a curve-generic way and has a pretty significant performance impact (particularly before the safegcd paper).
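In Python terms, the blinding trick looks like this, with `pow(..., -1, p)` standing in for the variable-time inverse (a sketch; note the mask must be drawn from the full group size, since a mask much smaller than the modulus can be brute-forced):

```python
import secrets


def blinded_inverse(x: int, p: int) -> int:
    """Compute x^-1 mod p without feeding x directly to a variable-time inverse."""
    r = secrets.randbelow(p - 1) + 1  # full-size random mask, never zero
    inv_xr = pow((x * r) % p, -1, p)  # stand-in for the vartime extended-GCD inverse
    return (inv_xr * r) % p           # (x*r)^-1 * r == x^-1 (mod p)
```

The vartime algorithm only ever sees `x*r mod p`, which is uniformly random and independent of `x`, so its timing leaks nothing about the secret.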
> It's better to use a constant time algorithm, but that's harder to do in a curve-generic way and has a pretty significant performance impact (particularly before the safegcd paper).
Crypto noob here, but isn't modular inverse the same as modular exponentiation through Fermat's little theorem? I.e., x^-1 mod n is the same as computing x^{n - 2} mod n which we know how to do in a constant-time way with a Montgomery ladder.
Or is that too slow?
The powering ladder is unfortunately quite slow compared to the obvious vartime algorithm, which is what tempts these things to have a vartime algorithm in the first place, though "too slow" depends on the application. It doesn't help that these chips are underpowered to begin with.
Aside: for the FLT powering ladder, n needs to be prime, but it isn't when there is a cofactor, though there is a generalization that needs phi(n)... I probably shouldn't have made a comment on the issue of being curve-specific, since the problem is worse for sqrt().
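For reference, here is a Montgomery-ladder exponentiation computing x^(p-2) mod p, i.e. the FLT inverse for prime p. This is only a sketch of the control-flow property: every bit costs one multiply and one square regardless of its value (in Python, big-integer multiplication timing still varies with operand size, so this is illustrative, not genuinely constant-time):

```python
def ladder_pow(x: int, e: int, p: int) -> int:
    """Montgomery ladder: one multiply and one square per exponent bit,
    with the same operation sequence whether the bit is 0 or 1."""
    r0, r1 = 1, x % p
    for bit in format(e, "b"):  # MSB-first over the exponent bits
        if bit == "1":
            r0, r1 = (r0 * r1) % p, (r1 * r1) % p
        else:
            r0, r1 = (r0 * r0) % p, (r0 * r1) % p
    return r0


def flt_inverse(x: int, p: int) -> int:
    """x^-1 = x^(p-2) mod p by Fermat's little theorem; valid only for prime p."""
    return ladder_pow(x, p - 2, p)
```

For a 256-bit prime that is roughly 256 multiplies plus 256 squares per inversion, which is the cost that makes vartime extended-GCD so tempting on underpowered chips.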
Information is light about an actual Proof of Concept here.
I have no actual knowledge, and it makes sense to assume the PIN is required to implement the EM side channel attack, as without a valid PIN the old, vulnerable Infineon library most likely does not complete all the steps.
Requiring/not requiring the PIN is a per-authentication flag that the RP can set though, as far as I know.
Since the RP challenge is not authenticated in any way, nothing seems to prevent an attacker from just preparing a "user verification not required" challenge and getting the Yubikey to sign it.
Oh, potentially important corollary: This means that this vulnerability allows breaking an “always UV” credential as well:
- Do as many UP-only challenges as required on a stolen Yubikey to extract the private key, not involving the RP (or maybe a single incomplete one, to discover the credentialID)
- Use the recovered private key in a UV challenge against the RP
Don't have high hopes for this but I just requested a replacement device through their support system as the offered mitigations are not something I would like to consider.
There also seem to be simpler fixed-function implants (e.g. NTAG or similar NDEF-storage-only tags), but I don’t think there’s anything non-Java capable of doing something as complex as CTAP2.
For something that potentially needs surgery to remove from your body, I’d go for the most capable secure element you can afford for maximum flexibility and future proofing anyway; usually that’s also Java Card.
Can you explain to me why you need to be online to extract the private key? Can't you just steal the token, input the nonces offline, and measure the timing? Then crunch out the private key, and only then, if needed, phish the password?
Yubikeys and similar FIDO hardware authenticators roughly speaking have two modes of operation:
Resident/discoverable credentials are stored on the hardware itself. You can attack these completely offline.
Non-discoverable credentials are not stored on the hardware. To get the authenticator to perform a private key operation (which is a prerequisite for being able to exfiltrate the private key using this attack), you need to supply the credential ID to it, which contains the data required for the authenticator to re-derive the private key.
Usually (i.e. in the WebAuthn-as-a-second-factor use case), a website will only reveal candidate credential IDs upon successful entry of your password.
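A toy illustration of the non-discoverable case (this is not Yubico's actual scheme, just the general shape of credential-ID key wrapping: the ID carries the material needed to re-derive the key, and nothing credential-specific lives on the device):

```python
import hmac
import hashlib
import os


def register(master_key: bytes, rp_id: str):
    """Create a non-discoverable credential; the device stores nothing per-credential."""
    nonce = os.urandom(32)
    # Private-key seed derived from the device's master secret, the nonce, and the RP ID.
    seed = hmac.new(master_key, nonce + rp_id.encode(), hashlib.sha256).digest()
    credential_id = nonce  # real schemes also MAC the ID so the device can reject foreign ones
    return credential_id, seed


def derive(master_key: bytes, rp_id: str, credential_id: bytes) -> bytes:
    """Re-derive the same seed from the credential ID the RP sends back at login."""
    return hmac.new(master_key, credential_id + rp_id.encode(), hashlib.sha256).digest()
```

This is why the EUCLEAK attacker needs the credential ID (and thus, typically, the victim's password) before the authenticator will perform any signature to probe: without it there is no key on the device to attack.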
Infineon is a listed company, not sure how much news will be published on this before German and Austrian markets open in a couple of hours so could be a profitable short selling opportunity.
Don't ask question about cryptography friend! Just use excellent library that implement best practice elliptic curve parameters chosen (of course) *completely* randomly from hat and stop thinking about it!
Yeah, same. Not sure why - nothing is loading. Just a broken site theme, I guess. Switching to reader mode in your respective browser should instantly show the post content (or at least it did for me in Firefox).