What I Wish I Knew About U2F and Other Hardware MFA Protocols (goteleport.com)
120 points by dylanz on April 16, 2021 | 90 comments



> TPMs are soldered onto motherboards so they are not portable. Like U2F, they also don’t let you sign arbitrary data.

Incorrect!

You can use TPM chips to RSA sign arbitrary data, and use that to authenticate SSH:

https://blog.habets.se/2013/11/How-TPM-protected-SSH-keys-wo...

Even under Windows: https://blog.habets.se/2016/10/Windows-SSH-client-with-TPM.h...

The secret is using TSS_HASH_OTHER as the hash algorithm, which tells the TPM "it's already hashed". Whether it actually is hashed, or just raw input, is up to you.
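
For a TPM 2.0 chip the equivalent "sign arbitrary data" flow can be sketched with tpm2-tools (the linked posts use the older TPM 1.2 TrouSerS stack). This is only a rough sketch driven from Python; the command names and flags follow the tpm2-tools 5.x man pages and may differ on your system, so treat them as assumptions:

    # Rough sketch: sign arbitrary bytes with a TPM 2.0 key by shelling out to
    # tpm2-tools. Flags assumed from the tpm2-tools 5.x man pages.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Create a primary key under the owner hierarchy, then an RSA signing key.
    run("tpm2_createprimary", "-C", "o", "-c", "primary.ctx")
    run("tpm2_create", "-C", "primary.ctx", "-G", "rsa",
        "-u", "key.pub", "-r", "key.priv")
    run("tpm2_load", "-C", "primary.ctx", "-u", "key.pub",
        "-r", "key.priv", "-c", "key.ctx")

    # Sign whatever bytes you like; the chip never releases the private key.
    with open("message.bin", "wb") as f:
        f.write(b"arbitrary data, not just a FIDO challenge")
    run("tpm2_sign", "-c", "key.ctx", "-g", "sha256",
        "-o", "message.sig", "message.bin")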


Also on a modern Apple device, you can use the Secure Enclave AES Engine to perform encryption and decryption. You can also use the Public Key Accelerator (PKA) for RSA and ECC (Elliptic Curve Cryptography) signing and encryption algorithms with hardware keys that stay within the PKA. https://support.apple.com/en-ca/guide/security/sec59b0b31ff/... covers an overview of the technologies involved. Of particular interest is probably the "Secure Enclave feature summary" at the bottom of that page which lists which Apple chips support which features.

Also relevant: https://developer.apple.com/documentation/security/certifica... and CryptoKit https://developer.apple.com/news/?id=3bwfq45y and https://developer.apple.com/documentation/cryptokit (Note that the Linux version of CryptoKit doesn't get any of the fancy hardware features, you'll need an Apple OS for them.)


TPMs are also sold separately as modules that can be plugged into motherboards, e.g. from Gigabyte (https://www.gigabyte.com/us/Motherboard/GC-TPM20-SPI-20) and Asus (https://www.asus.com/Motherboards-Components/Motherboards/Ac...), to name a couple.

The article has a couple of other weird faults, too:

1. I'm not sure why the author is complaining about FIDO2 having backwards compatibility with U2F/CTAP1. The article even incorrectly claims FIDO2 is "a 3rd incompatible standard", only to undercut that claim a few paragraphs later when explaining CTAP. People not having to throw away their perfectly fine old devices is a good thing in my book.

2. "All FIDO standards are web-centric and aren’t designed with any other client software in mind" the first part is true, the second part not so much. For example FIDO2 supports silent authentication (no user interaction) while WebAuthn explicitly does not[0]. It also supports the hmac-secret extension[1] which is used for offline authentication with Azure Active Directory[2] and IIRC no WebAuthn browser implementation exposes this extension to web apps.

[0]: see e.g. discussion on https://github.com/w3c/webauthn/issues/199

[1]: https://fidoalliance.org/specs/fido-v2.1-rd-20210309/#sctn-h...

[2]: https://docs.microsoft.com/en-us/azure/active-directory/auth...


I also don't know why the author thinks HSMs cost "hundreds to tens of thousands of dollars". In the first place, TPMs are a subcategory of HSMs and don't cost that much. In the second place, HSMs like the Microchip ATECC608B cost <$1 for the bare board.


OP says devices, so I read rack-mount gear that you buy when you can't use (or trust) AWS KMS, not chips.


I implemented a firmware signing infrastructure for a large embedded hardware manufacturer using USB Nitrokey HSM devices (sub-$100/ea). We got the functionality we wanted from them at a fraction of the price point of the "big name" HSM manufacturers' offerings. The development effort to integrate the Nitrokey HSM, as a simple PKCS#11 device, was minimal. What we could see of the big-name HSM manufacturers' development docs, without an NDA or a purchase, suggested a much more difficult integration.


Well, I don't know if they really meant rack-mount gear, but sure it costs a little extra to get it usable with a general-purpose computer.

Specifically, you can get that Microchip HSM in a form factor that plugs into a click shield, then plug the click shield into a Raspberry Pi's GPIO pins. You now have a PKCS#11-usable HSM from a Pi. Including the click shield still puts the cost at <$20.

(I have a few such setups lying around because my $dayjob includes a PKCS#11-consuming application that runs on such setups.)


Not yet implemented, but Solo 2 has a "stretch goal" to provide HSM functionality, if that would be good enough. https://www.indiegogo.com/projects/solo-v2-safety-net-agains...


Even then, you can buy a Yubico YubiHSM device for 650€.


Well, that is hundreds of dollars, isn't it? :)


From TFA:

> Since the U2F device creates and stores asymmetric key pairs, and is able to sign arbitrary “challenges”, can I use it as a general-purpose hardware key store?

You can however do it "the other way round" and derive U2F key pairs from a private seed. That same seed can be used for many other applications (or none). For example you can use the Ledger Nano S (originally a cryptocurrency hardware wallet), which has an HSM, with your "seed" (say a 256-bit secret, stored as 24 words you hide), to log in to sites using U2F.

Additionally as long as you've got your secret, you can reinitialize your Nano S (or another one) as a new U2F device and there's no need to reset your U2F credentials on the site as the newly initialized device shall work exactly as if it was the old one.

Fun fact: the CTO of Ledger was part of the group working on the original FIDO specs.


> you can reinitialize your Nano S as a new U2F device

According to the Yubico explanation linked from the article, U2F includes cloning protection (an authentication counter, which the site should check has increased vs. its last known value), so that might not actually work if the site you are authenticating against is well-implemented (Unless the Nano S also lets you back up the counter value).
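
For what it's worth, that check is trivial on the relying party's side. A minimal sketch (function and variable names are illustrative, not from any particular library):

    # Minimal sketch of the signature-counter check a site is supposed to do.
    # "stored" is the last counter value saved for this credential.
    def check_counter(stored: int, asserted: int) -> int:
        # The check only applies if either value is non-zero; a counter stuck
        # at 0 means the authenticator doesn't implement one.
        if (stored or asserted) and asserted <= stored:
            raise ValueError("counter did not increase: possible cloned key")
        return asserted  # persist as the new last-known value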


> ... so that might not actually work if the site you are authenticating against is well-implemented

I'm using it on several sites and have already swapped/reinitialized my U2F devices... It works, including on Gmail. As I understand it, the more recent WebAuthn standard is going to be supported by Ledger soon.

I don't think they're non-compliant or badly implemented websites, although I'm not sure what the specs say.

I personally love that I can back up my "seed" and know that, by going to pick it up from my safe at the bank, I'll always be able to reinitialize a U2F device. I also really love that it displays "Google" on the Ledger Nano S's tiny screen.

Pricey little thing to use as "only" a U2F device: about 60 USD, but I like it a lot.

> (Unless the Nano S also lets you back up the counter value)

Late reply but... As I understand it, as long as the counter is monotonic it'll always work. What Ledger does (and apparently the Trezor too, from reading this thread; another device with an HSM) is, upon initializing the U2F app for the first time on your hardware device, to set the counter to the current timestamp.

So basically: once you use another device to log in, you cannot use the old one, unless you reinitialize it (and then it's the other one you cannot use, unless you reinitialize it, etc.). These devices do not have a clock, which is why it works that way (in the case of the Ledger Nano S / Ledger Nano X at least).
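
A tiny sketch of that trick, purely to illustrate the description above (the device itself has no clock, so the host supplies the time at initialization):

    # "Counter = timestamp at initialization": a device freshly initialized
    # from the same seed starts above whatever the old device had reported
    # (barring an absurd number of logins on the old one), so the server's
    # monotonic-counter check keeps passing.
    import time

    def initial_counter() -> int:
        return int(time.time())

    old_counter = initial_counter()   # device set up a year ago, +1 per login since
    # ...later, the seed is restored onto a replacement device...
    new_counter = initial_counter()   # strictly larger, so logins keep working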


Cloned (i.e. initially set up with identical seeds) yubikeys definitely work without any communication about a counter value between them; my personal setup involves two keys so that I wouldn't lose access if I lose a key. There are some obvious drawbacks of that, of course, but it is an option that works, so either that description does not apply to such devices or it is misunderstood.


Trezors work the same way and I have one that I set up as a backup factor (I still find Authy desktop/mobile the most convenient). It’s very nice to have a paper backup.


> Additionally as long as you've got your secret, you can reinitialize your Nano S (or another one) as a new U2F device and there's no need to reset your U2F credentials on the site as the newly initialized device shall work exactly as if it was the old one.

But isn't it the whole point that these devices never let you have the secret?


The device GP is talking about is a cryptocurrency 'hardware wallet', so no.

You may have thought (as I did at first) they meant a Yubikey; there is after all also a model called 'Nano', but not, I think, 'Nano S'.


The devices don't let you extract the secret, however, they generally can be set up with a secret that you know.


> Generally, FIDO standards push too much logic into hardware, when it could be handled in software instead.

This is a feature, not a bug. It's one of the core points: Everything is in hardware, so software can't attack it. In theory, a bug in Android/iOS could expose my authenticator app secrets, but even a full root compromise of the host operating system can't extract the secrets or trigger a signing from a U2F key without me physically touching it.


For a noob like me: I am thinking of getting a Yubikey. What will happen if I lose my Yubikey? Am I essentially out of luck, assuming the admins can’t reset my password or associated Yubikey device?

How do I prevent such a scenario from happening? Is there truly a foolproof way of doing hardware authentication?


Buy 3 keys. Register all of them to your account. Then register all of them to your spouse's account too. Put one on your keychain. Put one on your spouse's keychain. Put one in a safe place.

By enrolling my spouse and cross-registering all keys, both of us are safe. We might lose our keychain, but we will always find each other, even when we are traveling.

This works for Google and GitHub, but not every service allows for multiple keys. But this should be a no-brainer imho.


That's all well and good, but a few weeks later I have another service I'm going to sign up for... I have to... first go to my safe deposit box to grab my third key? If I don't, then it seems like a lot of bookkeeping.


Needless to say, I don't use these devices on my Home Depot account. I use them for Google, GitHub, Dropbox; I don't actually remember anything else. My DNS registrar doesn't support it :P

I also don't use my personal key for work stuff, and recovering my work key is my sysadmin's problem :)

That said, when I had admin accounts at work, we used TOTP with a similar scheme: when we registered important (admin) accounts we shared the second factor (the QR code) between 2 people, and sometimes I printed the QR code itself. This works for AWS, G Suite, GitHub, etc. I still receive calls from old colleagues for TOTP codes occasionally :)


I actually just changed my DNS registrar to namecheap specifically because it supports U2F. I figure my domain is too important to risk losing because I use my domain for email.


You don't need to make it as elaborate as 'safe deposit box' or 'implanted into spouse', and most accounts that matter have other ways of recovery, e.g. an app-based authenticator or one-time recovery codes (a recovery code is something you might want to stick in a safe deposit box). You can just get, say, three hardware keys, put one on a keychain, another somewhere on your desk, and a third in a drawer somewhere.


This is vulnerable to a house fire/other natural disaster. In general, I recommend having at least one thing offsite - whether that's in a safe deposit box, with a trusted friend or family member, in your desk at the office, or whatever.


'implanted into spouse'

What about steganography in tattoos? That would be pretty interesting. Perhaps combining that data with a short code or seed that you memorize.


Periodically generated seeds, encoded on actual seeds that you and your partner eat for breakfast.


I love it.

The tech exists. My neighbor works for a company that does encoding of data on packaging. The goal is that every cereal box on the grocery aisle can be individually tracked.


I do this! (but for weed) track-and-trace chains are fun!


Doesn't that mean that stealing any of the 3 yubikeys means full permanent compromise of all your and your spouse's accounts?

I think a good system should include some sort of revocation, like a master key you can keep in a safe to revoke other devices.


No, it does not mean that. At least in my experience, every service where I have multiple YubiKeys registered still requires my username and password. Without those, someone who stole the YubiKey would not be able to login to my accounts.


The 2 in U2F stands for second factor; these devices are useless on their own.


You can un-register any of the keys when you're logged in. So if you lose one key, log in using the others and remove it. No need for a master key.


Your idea works well for recovery.

But I'm thinking of a revocation scenario, where a key is stolen. In that case the attacker can just remove your keys first.


So, the "something you have" element of security is to primarily avoid remote compromise. And even make local compromise require an additional theft step.

If the hacker gets your key and your password, it's game over.

But cracking the password will hopefully take some time, and systems usually have retry limits etc., so if you discover your key is lost, you hopefully have some time to revoke it.

I personally would not use it for password-less login, as it is only good as a second factor.

If your threat model includes any real likelihood of people capable of stealing your keys and cracking your passwords, then 2FA is only a small part of the opsec you need.

Things like the NSA's Zero Trust Security model come to mind: https://news.ycombinator.com/item?id=26549363

If you're at that level, you probably need specialist infra.

But these potential compromises don't mean you're less secure than before; a Yubikey would still make you more difficult to hack.


Yubikey still acts as a second factor, so for the average person's threat model this should be fine provided you use strong, unique passwords for your accounts (i.e. a password manager). 1Password supports U2F as a second factor to access your account, but you still need to know your secret key and master password; you cannot decrypt your vault with only a hardware token.


They need my password too, so they’ll have to have cracked my 1Password account as well. My third key is indeed in a safe, but it’s not special in any way.


I've been troubled by that question too. Especially since I've heard people complain about sites that only allow registering one key. But TacticalCoder just said something really interesting in another part of this conversation:

> you can use the Ledger Nano S with your "seed" (say a 256-bit secret, stored as 24 words you hide), to log in sites using U2F.

> Additionally as long as you've got your secret, you can reinitialize your Nano S (or another one) as a new U2F device and there's no need to reset your U2F credentials on the site as the newly initialized device shall work exactly as if it was the old one.

If I read that right, some keys, rather than having a hardcoded unique seed, will let you set your own. Which implies you can have multiple functionally-identical backup keys locked up securely somewhere. If true, that significantly reduces key-loss anxiety, and increases my interest in hardware MFA.

Anyone know which keys support this (aside from the mentioned Nano S and Trezor)? What's the magic keyword to look for in specs?


> If I read that right, some keys rather than having a hardcoded unique seed, will let you set your own.

Set your own or generate one for you, using a hardware CSPRNG, and then let you write it down in a convenient way (24 words out of a dictionary of 2048 words, so 264 bits: 256 bits of entropy plus 8 bits of checksum. Heck, you can use 12 or 18 words too. I think the (BIP39 / BIP44) standard even allows for 15 words, but Ledger only supports lists of 12/18/24 words, although don't quote me on that).
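
The arithmetic, plus generating such a word list, looks roughly like this with the `mnemonic` package (a BIP39 implementation on PyPI; assumed here purely for illustration, Ledger obviously does this on-device):

    # 24 words, each indexing a 2048-word list (11 bits): 264 bits total,
    # i.e. 256 bits of entropy plus 256/32 = 8 bits of checksum.
    from mnemonic import Mnemonic

    assert 24 * 11 == 264 == 256 + 256 // 32

    m = Mnemonic("english")
    words = m.generate(strength=256)          # the 24 words you write down
    seed = m.to_seed(words, passphrase="")    # what the device derives keys from
    print(len(words.split()), len(seed))      # 24 words, 64-byte seed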

> Which implies you can have multiple functionally-identical backup keys locked up securely somewhere.

I don't even bother having multiple devices ready to use. I simply store the seed (once again: as a list of 12/18 or 24 words) on a sheet of paper and I store this in vaults/safes.

> Anyone know which keys support this (aside from the mentioned Nano S and Trezor)? What's the magic keyword to look for in specs?

I've got a Ledger Nano S only for this. It's really a nice little device. I don't work for Ledger btw! They're a bit pricey but, indeed, the anxiety of getting locked out of your account or having to go through crazy procedures goes away.

It's a pricey little device but not that crazy expensive: about 60 USD I think.

> What's the magic keyword to look for in specs?

I don't know but the U2F "nano app" does work with Google and other sites and I know that Ledger is working on the more recent Webauthn support.


https://www.yubico.com/support/download/yubikey-personalizat...

you can do a lot with a yubikey but idk if you can actually change the U2F secret


You can tell the device you want a new random secret (so now your existing credentials don't work), but you can't get the secret out (obviously bad guys would use that, so it would be a terrible idea), nor can you set the current one to an explicit value.


You definitely can initialize YubiKey OATH-TOTP with a known value in order to have multiple devices with identical secrets. I have done that; also see https://support.yubico.com/hc/en-us/articles/360016614880-Ca...
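
The reason this works is simply that TOTP is a shared-secret scheme: two tokens programmed with the same base32 secret produce the same codes. A quick sketch with the pyotp package (the secret here is randomly generated just for illustration; on the keys themselves you'd program it with Yubico's ykman tool as described in the linked article):

    # Two devices programmed with the same OATH-TOTP secret emit identical codes.
    import pyotp

    shared_secret = pyotp.random_base32()   # what you'd program into both keys
    key_a = pyotp.TOTP(shared_secret)
    key_b = pyotp.TOTP(shared_secret)
    assert key_a.now() == key_b.now()       # same secret + same time = same code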


So you can set the current one at the start of your use and keep the secret hidden somewhere forever. Sounds like the solution to me.


Evidently the way I wrote this previously wasn't clear enough, so I have attempted to improve this text.

No, you can have a random new value (effectively like you bought a different one), but you won't learn what that is, so it doesn't help you and there's no way to "keep the secret hidden somewhere" except in the sense that it's hidden inside the device where it belongs.


When you enable 2FA with security keys you have 2 options, let's call them the easy and the strong.

The easy. Add both a security key and OTP (e.g. Google Authenticator). You have one rule to strictly adhere to: if you click on a link, you MUST use the security key, because that's the one that protects you against phishing. When you don't have your security key, you can just use OTP, provided that you typed the URL and didn't click on it from an email/text.

The strong. You enable 2+ security keys. You keep one in a safe at home. You always use a security key to sign in. On your Google account you can also enable Advanced Protection, which is essentially this plus some extra restrictions on API access.


For anyone reading this and considering Advanced Protection, know that it causes a ton of problems with Google Home and other applications. I made the mistake of setting up Google Home with my primary Google account, which is necessary to get YouTube Premium / YouTube Music / YouTube TV (all associated with that account), plus all of my smart home devices have been set up with that account over time. So when you flip on Google Advanced Protection, all of that breaks.

It'll be a weekend project to move all of my smart home devices and automation away from my primary email account and over to a home media account. Then I will turn on Google Advanced Protection. Advanced Protection also puts a delay on getting back in if you are locked out, plus some extra restrictions.

Until then I'm just using the key but not Advanced Protection. If I were starting fresh (a new smart home / Google services), I'd make an email just for that and use my original address (a clean name with no numbers, from 2004) strictly for mail / financial passwords / high-priority accounts. The divide I would make: smart home / subscription services used by the family get the lower-security account; everything else is high security.

I’d be interested to hear how others deal with this problem.


A simple method is as follows:

0. This mostly matters for the accounts that need to be particularly secure (eg email, maybe GitHub or Facebook or Twitter depending on how much you care about them). Also accounts for money if they offer this kind of security.

1. Set up a yubikey. Try to only ever use the yubikey for logging in.

2. Set up some account recovery codes, print them out, put them in a safe place (ie somewhere that you don’t live or work, though you could probably also keep copies there. If you have a folder of personal and account information ready in case you die unexpectedly, put it there too)

3. Set up Google Authenticator on an iPhone so you can get in if you don’t have access to your keys. You should treat these more like the recovery codes than the yubikey—be very careful about entering the name of the website and checking the certificate because they won’t protect you from phishing.


> Also accounts for money if they offer this kind of security

This is something that astonishes me. So many financial institutions still have:

- (low) max length limits on passwords

- restrictions like no special chars in passwords, or exactly this many numbers etc

- over reliance on a pin where a password would be more suitable

- no 2fa, or at best SMS based 2fa

- ridiculous security questions as if someone's favourite colour etc is drawn from a large pool of values

As a developer I can guess that most of these restrictions probably stem from a mountain of tech debt, but I would've expected this to be a priority for such companies. It makes me wonder if they are bound to follow some outdated regulations or something preventing them from doing better.


This is not a bad question, but it is one that keeps getting asked on this topic. I really think standards like WebAuthn should more explicitly provide an appendix about account recovery, even if that is out of scope of the technical standard.

As others have pointed out, account recovery should always be provided in some form. A common way seems to be the use of a one-time password that can be used to regain access.

What I'd really like is a clear best practice for this for developers to implement though. It doesn't help if your authentication method is strong when the recovery option has glaring security holes in it. It now looks like every site that uses WebAuthn just rolls its own recovery solution.


I tried YubiKey and SoloKey (OSS).

I've settled on OnlyKey (https://crp.to) because it's OSS and you can back it up and restore. Backup can be GPG encrypted.

24 slots, programmable to store URL, U/N, P/W, 2FA, with the ability to click the address bar in a browser, hit a key on the OnlyKey and watch as it types it all in without any other interaction.

It stores MANY types of auth.

Not affiliated, but a very happy user.

It's a device you'll learn more about as you use it, but YubiKey functionality is achieved in a couple of minutes.


OnlyKey's features are attractive, but its implementation has been questioned. https://news.ycombinator.com/item?id=21884184


I'm not sure I come away from that discussion with any less confidence.

The dev managed to answer everything put to them with alacrity. The question of code quality was of no consequence as even ugly code can work, and it's got more eyes on it than closed source.

In my threat theatre this device is far more than adequate for securing GitHub, GOOG, and a couple of GPG/ssh keys.

Should I ever become a spy, I'll probably revert to speaking to strangers about how red sparrows fly at certain times of day again.


A hardware key is a resilient tool which can be taken out of the house, safe in the knowledge that no one can crack it open and get the keys inside.

You can have other forms of authentication — Post-its with backup one-time codes, console passwords, root password etc — but those have to stay at home in the vault because a piece of paper in the outside world is too dangerous to lose.

You want both. You’ll lose your hardware keys eventually.


You're out of luck, modulo extra keys and a recovery mechanism.

Most services let you register multiple devices. I typically use a Yubikey nano and a regular Yubikey. Then I have a backup on my keyring, but don't have to get it out every time. With WebAuthn becoming more popular, you can also use things like Windows Hello, Face ID, etc. Generally, I try to register all of those methods, and then if one device fails, I still have plenty of backups. But, some services don't let you register multiple devices (AWS comes to mind). In that case, you'll have to make sure you have a backup recovery method. (And those recovery methods obviously reduce the security of your account, SMS is notoriously weak.)


Usually people buy a second Yubikey, enrol both and have the second one somewhere safe.

Most services and web sites also give you emergency login codes to print out, though.


For services where no admin can get your access back, like most websites, a 3rd (backup) factor should be a compulsory part of 2FA. There's a balance between keeping hackers out and keeping yourself out. The more factors you require, the more backup factors you should also require users to have: not just optional codes but "you must write these codes down, we'll check later to make sure you did" or something like that.


> not just optional codes but "you must write these codes down, we'll check later to make sure you did" or something like that.

...which people will still not do, or misplace/erase due to disuse, etc.

Security and availability are always at odds with each other. The question that you should always have when choosing a level of security is "does the risk of denying everyone access, including myself, outweigh the risk of someone other than myself gaining access?"


Isn't minimising the "I lost them/forgot where I put them" scenario the point of checking you still have them (not just asking - checking eg by saying "enter one of the codes")? That way, if you don't have them, you can print a new set of backup codes while you still have your "normal" second factor to hand.


This is the thing to do.

But I would suggest SoloKeys instead.

I use these to log into my Linux systems, in combination with a password. pam_u2f was pretty easy to setup.


Yes, you do need a backup way of getting in. Some websites will let you print out some one-time-use codes that you can store in a strongbox. (This is cheaper than buying two Yubikeys.)

You could also get 2 Yubikeys, but might be out of luck if the website doesn't support that.

(Also, some Android phones can act as a hardware key.)


I use YubiKeys for just about everything; the easiest solution is to simply have multiple keys. I have 3 total; gathering them all together once a year to renew the GPG key isn't a real issue, and after the initial setup I don't find myself enrolling new services regularly.


You need at least 2 yubikeys. Register both and keep one in a safe hidden place. You can also print out backup codes and keep them in a safe.


The article complains about the spec and "design by committee" but I think the WebAuthn standard is great. I read through it and it gave me all the information I needed to create a secure implementation that works with all browsers and devices. From zero, I can now FaceID into my personal Grafana instance, which is great. Zero complaints at all, and there are plenty of libraries floating around for people that don't want to read the spec and just want passwordless cross-device logins for their web app.
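
The heart of the spec really is small. As a sketch, here is the core signature check for an assertion, using the cryptography package and assuming an ES256 credential public key that was already decoded at registration time (all the other mandatory checks — origin, challenge, rpIdHash, user-presence flags, counter — are omitted):

    # WebAuthn assertion: the authenticator signs
    #   authenticatorData || SHA-256(clientDataJSON)
    # and the relying party verifies that with the credential's public key.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def verify_assertion(pub_key: ec.EllipticCurvePublicKey,
                         authenticator_data: bytes,
                         client_data_json: bytes,
                         signature: bytes) -> bool:
        signed = authenticator_data + hashlib.sha256(client_data_json).digest()
        try:
            pub_key.verify(signature, signed, ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False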


I think HSMs are just expensive because of price gouging rather than the cost of the device? Like the YubiHSM is the same form factor as the Nano FIPS key but over 10x the price.


Mostly yes. It's a niche product with low demand and relatively high R&D costs, so margins have to offset that.

There's probably also a bit of psychological bias at play, like: "if your HSM is 10x cheaper than everyone else's, it must be crappy and insecure".


Generally speaking, they're both: 1) higher performance, and 2) held to a much higher standard in terms of certifications they need.

For example, a normal YubiKey is unrated, a YubiKey FIPS is level 2 rated, and a Thales HSM is level 3 rated with all sorts of zeroization hardware.


Interesting; maybe the development costs too. They sell far lower volumes of HSMs compared to the standard keys, but the HSMs, I'm sure, require some very rigorous development and testing.


> HSMs, I'm sure, require some very rigorous development and testing.

I think they mostly require an outside evaluator to do a sort of documentation process that costs somewhere around $500k for a new product, depending on complexity, and maybe $50k just for up-versioning.

It's generally hard to get that money back on a product, since the market of organizations that need the certification is tiny, and the larger overall market for a security product is also usually small and not so happy to defray those costs.


I evaluated and purchased a few Thales HSMs. At the time the difference between the FIPS and standard/dev editions was a bunch of cash; on the FIPS edition the spaces within the device were filled with epoxy and it would erase itself if tampered with.

Software was the same, hardware looked the same. The crypto module is validated only with the $$ hardware.

Sometimes the non FIPS devices will have other algorithms not on the FIPS list.


Did you ever consider the Yubikey HSM at all as it’s much cheaper?


No, as at the time they did not offer a FIPS device, which was a requirement. Another nice feature of the Thales is that you could use multiple smart cards to ensure that no one person can do certain things.


I thought PKCS#11 was exactly what the author wanted: an API for performing arbitrary sign and encrypt operations using a hardware protected key. What doesn't it do?


PKCS#11 is a C API. It does not describe the wire format for talking to the actual hardware.

To use PKCS#11 for a particular device, you need a module (shared library) to translate between the C API and the actual hardware. This module is usually vendor-specific.

If I develop software with PKCS#11 support, I'm basically asking every user to find a PKCS#11 module from their device vendor and install it in the right place.

With U2F at least the hardware wire format is standardized: https://fidoalliance.org/specs/fido-u2f-v1.2-ps-20170411/fid...


The wire format is standardized: ISO 7816. Even U2F uses ISO 7816.

The issue with existing smart card technology is not lack of standardization; it's too much standardization: too much flexibility and stacks that are too deep.

Vendors ship their own PKCS#11 drivers as a convenience. But PKCS#11 isn't the only high-level API. The other is PC/SC, which is actually simpler than PKCS#11, though it often requires more local support from the OS. But not necessarily. You can write PC/SC shims that talk directly to hardware, or even to Vault servers if you want, w/o OS support. I have my own rapid driver framework that supports all of these. For example, I have a PKCS#11 and PC/SC client driver which can use the Apple T1 chip to authenticate to a Vault server for remote signing using Transit keys--the only engine that supports ad hoc remote key operations. This permits sharing GnuPG (via PC/SC) and OpenSSH (via PKCS#11) keys between users, without actually disclosing the keys, though Vault actually makes it difficult to do this securely as you need to write ACLs to prevent transit keys from being exportable.
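
To make the PC/SC point concrete: it really is just "send the card an ISO 7816 APDU, get bytes back". A small sketch with the pyscard package, selecting the FIDO U2F applet by its AID on the first reader found (error handling omitted):

    # PC/SC sketch: SELECT the U2F applet (AID A0000006472F0001) over ISO 7816.
    from smartcard.System import readers

    connection = readers()[0].createConnection()
    connection.connect()

    aid = [0xA0, 0x00, 0x00, 0x06, 0x47, 0x2F, 0x00, 0x01]
    apdu = [0x00, 0xA4, 0x04, 0x00, len(aid)] + aid     # CLA INS P1 P2 Lc AID
    data, sw1, sw2 = connection.transmit(apdu)
    print("status: %02X %02X" % (sw1, sw2))             # 90 00 means success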

BTW, you don't need special drivers to use Yubikeys, either. They just provide them as a convenience because the FOSS ecosystem is confusing and... non-optimal.

I'm hoping to release a macOS product soon and as part of that may release some of my framework as FOSS.


That level of standardization is a feature, not a bug. PKCS#11 lets you use any compliant hardware device with any compliant software package, as long as they both implement the spec. Compliant software packages include: ssh, Java's keytool, the GnuTLS utilities, the openssl utilities, wpa_supplicant, various web browsers, and VPN clients. Nowadays many popular Linux distributions come with p11kit configured out of the box, which lets openssl/GnuTLS autoselect the correct PKCS#11 shared library based on the matching information in the PKCS#11 URI.

While the low level API is complex and the UI often isn't ideal, PKCS#11 has been a godsend for interoperability because it abstracts out the low level hardware interfaces and other implementation details. It lets your application seamlessly access hardware-backed keys whether the keystore is sitting on USB (Yubikey), ISO7816 (smartcard), I2C (TPM), or something else. On the application side, adding PKCS#11 support only takes about a dozen lines of code, after which the app can use hardware backed keys/certs to perform TLS negotiations.
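
That "dozen lines" claim is roughly accurate. A sketch with the python-pkcs11 package; the module path, token label, PIN and key label are placeholders for whatever your vendor and OS actually provide:

    # Sign some bytes with a hardware-backed RSA key via any PKCS#11 module.
    import pkcs11
    from pkcs11 import KeyType, Mechanism, ObjectClass

    lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")   # vendor-specific .so
    token = lib.get_token(token_label="my-token")

    with token.open(user_pin="1234") as session:
        key = session.get_key(object_class=ObjectClass.PRIVATE_KEY,
                              key_type=KeyType.RSA,
                              label="signing-key")
        signature = key.sign(b"arbitrary data",
                             mechanism=Mechanism.SHA256_RSA_PKCS)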


I thought that too. The Nitrokey HSM would do what he's looking for, except that it's exposed as a PKCS#11 module.


A Tomu (https://tomu.im/tomu.html) can be used as a U2F device. Since it's hackable and the code for U2F is available, maybe it can be adapted for what the author was asking (do you know of a device...?)



> Unfortunately, we don’t have the freedom to expand other standard protocols the same way.

There's nothing standing in your way. However, SSH has the advantage that it's commonly used interactively, so FIDO is a good fit. Any protocol that is most often used silently, with the user maybe not attentive or not even present, won't work well: it's easy for your phone's mail client to just present the same password it remembers each time it reconnects, but it would be annoying if you needed to touch the fingerprint reader each time. I can imagine (if anybody wanted to) retrofitting SMTP submission to do FIDO, although I don't know if there's a practical way to hack it into the existing SASL AUTH layer.


My biggest gripe with a security key right now is that there is no clear way to build the habit of using it.

Basically every site will offer to "never ask from this device again" and then I have a key that I haven't used in a long time.

Yes, I could use nothing but incognito windows, but that is too extreme, and a single time I forget breaks it. Why can't I set it so that I need to use the key once a week?


Does anybody know if there is a U2F software solution that works with mobile phones?

Ideally with the following features:

* Stores keys securely in the Hardware-backed Keystore

* Authentication via fingerprint + periodically via password

* Allows to backup the secret key during setup

* Supports multiple devices

* Open source

* Works over Wifi

* Works with Linux desktops and Android phones


U2F is the predecessor to the current standard, WebAuthn. If a web app supports WebAuthn, then that integrates with native keystores (Windows Hello, Face ID, whatever Android has), as well as hardware keys. The site operator has some flexibility to prefer certain methods (platform vs. external) and devices (attestation).

I wrote an authenticating proxy that uses Webauthn: https://github.com/jrockway/jsso2. I don't think you should use it, but you can fire it up locally and try enrolling the various devices. Actually, you can just use Duo's demo: https://webauthn.io/


Unfortunately WebAuthn with the native keystores requires Bluetooth, which is a constant source of pain. That's a deal-breaker for me.


I don't think that's true. I just logged into my own WebAuthn implementation via my iPhone with an NFC security key. (Bonus points: the key was enrolled on my PC.)

The server operator can choose to reject NFC, but the implementation/standard doesn't require that behavior.

Edit: Actually, I don't understand your question. Are you trying to authenticate on a PC using your phone as the security key? If so, I don't know how to do that. I just enroll Windows Hello and Face ID.


Google built a not-yet-standard way for Chrome on a PC to reach out to a phone and get WebAuthn credentials that way. It has to talk to the nearby phone somehow, so Bluetooth comes into the picture. I've never used this, and I don't know anybody who has; my Security Key plugs into USB, my cheaper FIDO dongle likewise, no Bluetooth here.


I think you're confusing the cloud-assisted Bluetooth (caBLE) transport with actual built-in device keystores. Google is testing this on their properties while it's being standardized in CTAP and WebAuthn, see https://github.com/w3c/webauthn/issues/1381



That's almost what I was looking for.

I wonder how long before Akamai kills it.

Main downside is that it doesn't support multiple devices


I thought it was already discontinued a while ago



