Using Yubikeys Everywhere (tedunangst.com)
165 points by tdurden on Feb 20, 2017 | 91 comments



You can have a security key on Google without a phone number, but you have to enroll with a phone number and then remove it. Google won't allow you to remove the phone number unless you have some other backup 2FA method available.

The best Google (actually: everywhere) 2FA "stack" in Q1 2017 is:

1. Hardware U2F

2. Software TOTP (Authenticator app on your phone)

3. Physically secured backup codes

4. Disabled SMS

My experience with the Y4 hardware and, particularly, software has not been great. I'm using the Y4 for SSH access (through ssh-compat mode on gpg-agent) and it's mostly OK, but if I try to use PIV mode on the same token I run into all sorts of problems. I'm bullish on U2F, but bearish on Unix token applications.


I have a concern about software TOTP. As tptacek knows (but I'll mention as background for other readers), the various authenticator apps use the standardized TOTP algorithm [1]. This algorithm uses a private key that the server and the client (your phone) have in common.
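For concreteness, here is a minimal sketch of that computation in Python (standard library only), following RFC 6238/4226; the base32 secret below is a well-known demo value, not a real key:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, step=30):
        # HMAC-SHA1 over the current 30-second counter (RFC 6238)
        key = base64.b32decode(secret_b32.upper())
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        # dynamic truncation (RFC 4226)
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code

Anyone who can read that base32 string can mint valid codes indefinitely, which is why where it is stored matters so much.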

The question is, where is the private key stored on your phone? According to some posts from 2012 or so [2], on Android, Google Authenticator stores the key unencrypted in an SQLite database, which you can look at if you have rooted your phone.

This seems insecure. Is this still an issue with Android? Are Apple products any better?

[1] https://en.wikipedia.org/wiki/Time-based_One-time_Password_A...

[2] https://www.howtogeek.com/130755/how-to-move-your-google-aut...


You should basically assume your TOTP secret on your phone is as secure, and no more secure, than any other piece of sensitive information on your phone. If you don't trust your phone, that's a good reason not to use your phone to generate TOTP secrets. For users like that, you can configure a Yubikey to function as a TOTP token (but it's a pain).

For me, I trust my phone more than any other piece of computing equipment I own. I also rarely unlock my password manager on it, so there's no one attack that gets you everything you need by hitting either my phone or my computer.

If I had a generic, non-Google Android phone, or any kind of rooted phone, the equation would be different for me.


> If I had a generic, non-Google Android phone

Yeah, that's me. So I have started experimenting with the Yubikey 4. (For those who don't already know: the Y4 keeps the private key somewhere in hardware, and never lets it out; the Y4 itself does the TOTP calculation onboard). The Windows desktop Yubico Authenticator (which talks to the Y4) is not too painful.


I'm curious, why do you trust your phone to have a session token for your Google account, but not the TOTP secret? In my view they are essentially equivalent.

Both allow effectively unfettered access to your Google account (with the exception of some account security changes, for which both require your password), and both can be revoked from the web interface (arguably a TOTP secret is worse, as it can create new session tokens, but revoking it doesn't revoke the tokens it created). Both can be stolen from unencrypted flash, and stealing either requires root (or privilege escalation, which is effectively root).

This whole "TOTP is weak, use yubikey" argument seems 100% about convenience, with a smattering of pseudo-security backing it up.


> why do you trust your phone to have a session token for your Google account, but not the TOTP secret?

Why would you trust Google or anyone else with your work (bank, Github, whatever you use TOTP for) credentials? I can't understand how anyone would think it's "pseudo-security". As you rightly say, your credentials are in plain text on that phone.


> smattering of pseudo-security

That's a good description of the state of my knowledge.

To answer your question: the Google account that my phone has access to doesn't have anything important, so I'm not worried about any related keys leaking. The TOTP secret would be for something more important, that the phone itself doesn't have access to.

In other words, I'm wondering whether a generic Android phone is safe to use as an over-sized hardware security token (or rather, as backup for a more convenient hardware token). It seems the answer is no.


For general purpose operating systems, you can PGP-encrypt the secrets and pipe them to a TOTP CLI program such as oathgen (I wrote oathgen as a PoC when I was evaluating 2FA options a few years ago). This way, you have confidence in how your secrets are stored and used.

    https://github.com/w8rbt/oathgen
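A sketch of that pattern in Python, assuming the secret was stored with gpg --encrypt (the filename is illustrative, and the totp() routine sketched earlier in the thread stands in for a CLI like oathgen):

    import subprocess

    # decrypt the stored secret; gpg prompts for the passphrase as needed
    secret = subprocess.run(
        ["gpg", "--quiet", "--decrypt", "totp-secret.gpg"],
        capture_output=True, check=True,
    ).stdout.decode().strip()

    print(totp(secret))  # totp() as defined in the earlier sketch

The plaintext secret only ever exists in the process's memory, never on disk.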
I share your concerns about mobile apps and browser extensions that do TOTP. All the ones I've seen store the secrets in clear text.


The story for security on Android is really, really sad. Google should be pressing really hard on the secure enclave button. Even game consoles have better security [here I wanted to add: "... than the most secure Android device", but that would be untrue, since I don't know all the Android devices out there. Let's just say the situation is sad, verging on terrible because it's Google, full of smart people who almost certainly know better].

Google probably can't get away with just trusting TrustZone, given the number of manufacturers of SoCs and the opportunity for the folks doing chip-level cut-and-paste of features to get things wrong.

I imagine there are patents in the way. Solve that with the realization that a united front against adversaries (crackers, state-level actors) is better than a fragmented one.

Solve the technical issues by getting the right 20 engineers in a room for a week or two, to set a proper direction. Android could have great security in a large set of phones inside of 18-24 months, with the right management.

Sigh.


While I agree with you, they do in fact have hardware-backed key storage these days; it's just that it's only available (and reliably useful) in recent Android versions (namely on devices with fingerprint sensors).

Since we're on the subject, OP was talking about a perceived vulnerability with rooted Android devices. I highly suspect that even a decent secure enclave solution would warrant some skepticism if the system around it is wide open. For example, timing attacks or even plotting voltage draw on an oscilloscope could leak some key information.

If you're rooting your device anyway, you should probably invest in a hardware security key.


Depends on where you live.

If you assume that the US government is not trustworthy, and a possible enemy of yours (quite possible if you're a journalist these days), then an unrooted phone might be less trustworthy than a rooted one with custom software (such as a custom build of CopperheadOS).


Or just use an iPhone. The amount of market share Apple would lose if there were even a hint of a US-intelligence backdoor in their systems gives them a very, very vested interest in actively fighting that.


> Solve the technical issues by getting the right 20 engineers in a room for a week or two

Oh man, I haven't laughed that hard in a while.


What do you mean we cannot fix the human factor?


> Google Authenticator stores the key unencrypted in an SQLite database, which you can look at if you have rooted your phone.

Interesting; actually, that's exactly the reason why I like the TOTP algorithm over other two-factor mechanisms. I can see what my private keys are, I can back them up, and I can even copy them to other phones I maintain.

Some would tell me it's not good for security, but I'd like to manipulate them in the same way I manipulate my SSH keys. It's just so convenient. If they were moved to TrustZone or something, I'd be massively upset.


Why are you backing TOTP keys up? I hear this all the time and it never makes sense to me. You have a backup: they're your backup codes. You use them to enroll the device you get to replace the one you lost. The whole point of TOTP keys is that there's one per device; the idea is to simulate bearer tokens.


(replying to a dead thread, I guess)

One issue: where/how to store the backup codes? Paper in a box in the bank? Not very convenient if you are traveling and lose your primary device. Paper in your wallet? Not very secure (and the probabilities of losing your wallet and losing your phone are correlated). Encrypted file on dropbox? I don't trust myself not to make a stupid mistake and leave an unencrypted copy in a "temporary" file somewhere.

TOTP key: load it onto 3 password-protected Y4 keys. Take two of them with you on your trip, and leave one in the bank. No way (without very expensive expertise) to extract the TOTP keys from the Y4, which also means that a user blunder won't expose the keys, and hopefully your Y4 password is strong enough to give you time to change the TOTP keys before anyone cracks a lost Y4.


Lots of vendors offer TOTP but not backup codes.


> but I'd like to manipulate them in the same way I manipulate my SSH keys.

You don't have per-client SSH keys so that only one needs to be revoked if you lose control of a particular client? I bet the red team LOVES you.


Yubico has "Yubico Authenticator", a Google Authenticator lookalike where all the codes are stored on the YubiKey and accessed over NFC (you need an NFC YubiKey, of course).


If you use this, you can also generate TOTP codes on your computer using the yubioath Desktop application. It's quite neat.


If this is a concern, you might take a look at storing the secrets with KeePass and Keepass2Android. It will keep them encrypted and let you generate codes through the app once you've entered everything (and it can be configured to lock everything down again immediately). There are still other possible attacks, but it should be at least as secure as the password on your database while not in use.


I don't use the Google Authenticator for this reason.

There are some apps in the Play Store that make use of the security chip, if present (which is the case for me), such that even if the phone were compromised by a rootkit, the attacker would not be able to read the TOTP secret.


Can you list one or more of those apps?


com.mufri.authenticatorplus is the app I'm using; however, it's not free, which is why I'm hesitant to freely recommend it.

eu.overmorrow.thenticate also offers this by description, and a friend is using it for this; however, I have no personal experience with that app.


For Apple it won't matter, since apps are heavily sandboxed and can only pass each other data through predefined APIs. Jailbroken phones are another can of worms, though, since you're literally leaving open an exploit to run unsigned code.


Where does Google Prompt 2FA fit into that scale? I'm unclear whether it's roughly equivalent to hardware U2F, since you would need to compromise Google's push service and gain access to my phone in order to bypass it. I currently use my Yubikey, but I like the idea of Google Prompt.


I know it's really asking a huge amount, but my 2FA wish-list for 2017 is:

  1. U2F in all browsers.
  2. U2F on all the services I consider important (Google, GitHub, Facebook, etc.)
  3. U2F setup on the above services without a phone number -- just force acceptance of the backup codes.
  4. U2F integration in SSH.
I'm currently using TOTP through Google Authenticator. Not great, but definitely a step up.

I think with the above in place I'd move to LastPass (from KeePassX), as the security of my passwords becomes much less important. Still not a huge fan of putting my vault in the cloud though.

Oh, and a fire-proof "safe" for my backup codes.


What sorts of problems do you run into? I tried to set up this exact thing (PIV mode) a few days ago, and apart from the fact that gpg-agent won't accept the SSH keys for the card (which can be worked around with a simple alias), things have been great.


The Authenticator app gives me just 6 digits; that doesn't seem safe. I wonder how many resources an attacker would need to compromise a Google account by brute force.

(Of course, they would first need to know my Google account password.)


It's near impossible. Not only would they need to brute-force it, but they'd need to brute-force it in the 30-second window available before the code rotates again.
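As a rough back-of-the-envelope check (the 10-guesses-per-window rate limit below is a made-up assumption; real services lock accounts after far fewer tries):

    attempts_per_window = 10       # hypothetical rate limit
    codes = 10 ** 6                # possible 6-digit codes
    p_window = attempts_per_window / codes
    windows_per_day = 24 * 60 * 2  # a fresh code every 30 seconds
    p_day = 1 - (1 - p_window) ** windows_per_day
    print(f"per 30s window: {p_window:.4%}")  # 0.0010%
    print(f"over a full day: {p_day:.2%}")    # ~2.84%

Even a full day of nonstop guessing at that rate succeeds only a few percent of the time, and any sane lockout policy kills the attack long before then.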


How about TPM?


That would be up there with U2F. Its successor is the W3C WebAuthn spec; Microsoft is shipping an early version of it that integrates Windows Hello in the Edge browser: https://blogs.windows.com/msedgedev/2016/04/12/a-world-witho...


For Google 2FA, I'm pretty sure that it lets you remove the phone number once you've added a second 2FA mechanism, e.g., once you've set up Google Authenticator and a Yubikey. At least that's how it worked when I set it up a while ago.

Given that Ted uses OpenBSD, I'm surprised that he has not mentioned the dumpster fire that is U2F on Chromium for BSD. This bug details the issue: https://bugs.chromium.org/p/chromium/issues/detail?id=451248. Last I checked (~2 months ago), it was still a problem on FreeBSD. In order to avoid the crashes, I needed to set a user-agent spoofer to Firefox, so that I would not get a U2F prompt (and would instead be prompted for a Google Authenticator code).

That became frustrating enough that I decided to try switching from Chrome to Firefox so that I could use my Yubikey. I had to set up a U2F plugin for Firefox to use U2F on FreeBSD. And to even get U2F requests, I had to install a user-agent spoofer and spoof a Chrome user agent. Argh! So now I was spoofing Firefox on Chromium, and spoofing Chrome on Firefox. My head was spinning, and (because the spoofer spoofed old agent strings) I was getting nastygrams from corp. sec. for running out-of-date, insecure browsers.

I finally just threw in the towel and set up a Linux bhyve VM to run Chrome under Linux. The performance sucks, but at least I can finally use my Yubikeys. On the plus side, I can use PCI passthrough to pass in a webcam, and actually do Hangouts on my desktop now.


I think the core issue there is this status update: "Wont Fix: Neither OpenBSD nor FreeBSD are supported platforms for Chromium."


Indeed. And I was hoping that a rundown of 2FA from a BSD guy would mention this bug.


This is on a slight tangent, but it annoys me, so:

Google U2F requires you to use Chrome, i.e. not FF with the add-on (until it's built-in).

It just seems like annoying 'we're the only browser with built-in U2F' gloating, when every other site I've seen (e.g. Github) just tries it, and allows you to fallback on TOTP at your own discretion.


The Firefox bug to implement native U2F has been open since 2014 [1].

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1065729


Yes, but I'm using the third-party add-on until it's done - which works everywhere but Google.


You can't have a U2F key and mobile phone authentication at the same time with Google.


I believe you can have whatever auth mechanisms you explicitly allow. You will get prompted to use the key if you enable it, but you can also select alternative authentication methods when prompted for the key.


You can't have a security key as well as the new Prompt method.



Google Authenticator (which uses TOTP) is not the same as Google Prompt: https://gsuiteupdates.googleblog.com/2016/06/new-settings-fo...


Was anyone talking about Google Prompt? It seems like bb88 was talking about Google Authenticator, and drewg123 was definitely talking about Google Authenticator.


Nope, the phone authentication prompt.


I have this. It even knows that iOS isn't supported and doesn't ask for Yubikey but Google Authenticator code.


What makes you think that?


Because I have a U2F key, and you have to disable phone authentication in Google.


I have a U2F key and had to go out of my way to get rid of phone authentication.


I just recently started using my Yubikeys (in Challenge-Response mode) w/ LUKS to unlock my drives at boot -- it's pretty neat!

On my primary system (a workstation, at home), I use the Yubikey to house my GPG keys and I use the "derived" SSH key that lives inside of it for authenticating to everything at $work. It was a bit of a PITA to get working -- points at gnome keyring -- but it works great. I have a Nano I'm about to set up the same way and just leave it in my primary laptop permanently.


> I have a Nano I'm about to set up the same way and just leave it in my primary laptop permanently.

Do I misunderstand the purpose of a Yubikey/U2F device here? Isn't leaving it plugged in permanently effectively reducing the strength of the second factor?

With TOTP codes behind my iPhone lock screen I can be reasonably confident that stealing my laptop isn't enough to break into my accounts (buying time until I can contact our SIRT team), but with a device plugged in permanently I lose that. Obviously an attacker would have to know my password, but that assumption is a primary reason for multi-factor auth to begin with.

What am I missing?


I believe it's like using it as an HSM; keys are protected against a remote intruder copying them, but physical access is another issue. You should still use it in conjunction with something-you-know; then it doesn't matter if the laptop is stolen.


The big win from a U2F device permanently plugged in to the laptop is as a defense against phishing.


Leaving it in permanently is mostly for convenience. The keys are still protected (by a "PIN") even if the laptop and/or Yubikey is stolen. There's also FDE on the laptop itself, protecting the data.


You can also turn on the touch-to-use policy so that you have to touch the Yubikey each time you want to use the key.


This is important. A physical action outside the computer should be required to access keys; otherwise, root access is enough to completely compromise the system.


Where by "completely compromise" you mean "an attacker can use your second factor while they have access to your computer and it's plugged in", which is pretty far from actual complete compromise (i.e. "an attacker can use your second factor at any time without needing to access anything of yours").


In the context of it always being plugged in, and just handing out keys when the system asks for them, I call it completely compromised.


No. They still need the PIN to actually use the signing key. However, the PIN can be key-logged if your computer is compromised, so the extra 'touch' does add quite a bit of security.

U2F needs touch by definition anyway. GPG/PIV do not; this capability was added with the Yubikey 4.


Imagine an employee works for a company where they can log into the company network with just a password (or alternatively, just an SSH key or TLS certificate). If an attacker gets their hands on this credential, by any means, then the attacker can use that credential to log in. These kinds of attacks can happen remotely in a variety of ways. Phishing is just one way. Malware is another - perhaps the employee downloaded apparently legitimate software that had a credential stealing trojan within it (e.g. keylogger or SSH key slurp). Or maybe the employee used their same password at another website that's been compromised, despite being told not to. Or maybe the password is just a bad password and the attacker brute-forced it.

An attacker who gains sufficient access to the right system can obtain your employee's credential, copy it, take it elsewhere, and use it later. They could use the employee's credential months later from anywhere in the world, and you could have difficulty identifying exactly how the credential was compromised originally, especially if time has passed and the attacker covered their tracks.

Now imagine a different employee who works at a smart company where logging into the company network (and accessing all company systems) requires multi-factor authentication using U2F or OTP in addition to a password. The attacker might be able to steal the employee's primary credential, through whatever means, but they have absolutely no way to obtain the employee's U2F or OTP codes remotely and electronically. The attacker simply cannot get into the company systems without having access to the MFA. This substantially raises the bar for a successful attack, and provides a great defense against attacking the company network and systems.

Yes, it's true that if you leave your MFA token in your laptop, then you could misplace your laptop and lose the token with it. In that scenario, though, the random attacker who discovers a random laptop in a cafe by chance won't know who it belongs to, or what company, and won't know the owner's password. Security is still preserved. The most plausible bad guy will sell the laptop to a pawn shop, not try to attack the owner's network. Furthermore, the employee should have been trained to call the IT helpdesk as soon as they realized their laptop and token were missing, cutting down the window for a successful attack.

To mitigate the risk of random loss, you'll want to employ full disk encryption, and disable the username auto-fill. This way an attacker can't get anything from the device without signing in, and has no information about whom it might belong to. Employ a policy that locks the screen after a few minutes of inactivity to make it difficult for attackers to swipe idle but still signed in laptops. These precautions all together make it incredibly hard for an attacker to get in the front door.

The only scenario in which security is compromised is if the employee's login credential has been compromised, and the same attacker has also physically stolen their MFA token. An attacker that is actively attempting to steal physical belongings from you in order to penetrate your organization is getting into Hollywood spy movie stuff. It's a real life concern for people doing certain types of work, to be fair, but those people usually mitigate the concern by keeping their equipment in secure facilities, and not taking their laptops to cafes to begin with.


If the adversary gets root access to the workstation, they can authenticate at will. Removing the key, even just to place it next to the screen, prevents this to some degree and only allows authentication to happen while you're using the computer.

But I still believe that proper 2FA requires you to physically push a button on the YubiKey for it to give out keys. That enables you to completely control when it provides them.


I use a smartcard with an RSA key and a LUKS-like format to encrypt my disks -- AES-256-XTS for the actual dm-crypt, with the smartcard's RSA key used to encrypt the symmetric cipher key.
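A minimal sketch of that envelope-encryption layout, using the third-party cryptography package (the in-memory RSA key here is illustrative and stands in for the one that actually lives on the smartcard):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # AES-256-XTS takes two 256-bit halves, hence a 64-byte disk key
    disk_key = os.urandom(64)

    card_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    wrapped = card_key.public_key().encrypt(disk_key, oaep)  # safe to store on disk

    # at unlock time, the smartcard would perform this private-key operation
    assert card_key.decrypt(wrapped, oaep) == disk_key

The bulk encryption never touches the RSA key; the card only has to unwrap 64 bytes at unlock time.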


What was the gist of it? I use LUKS with a pretty strong password on Fedora 25 and have a few Yubikeys I need to put to good use.


gnome-keyring replaces gpg-agent and ssh-agent (on the system) but doesn't actually include all the functionality... basically, disable GPG/SSH agent support in gnome-keyring (if it's running -- and it's quite often automatically launched, even if you aren't aware of/expecting it), and explicitly use the real agent instead.

You can find the details on Google, but it's one of those things that you wouldn't even know was a problem to be aware of.

Edit: Sorry... for LUKS, write a custom initramfs hook (a shell script, in my case) to prompt the user for a "passphrase", use that as the challenge sent to the Yubikey, read the response from the Yubikey, and write that to /crypto_keyfile.bin (in the initramfs, so not on disk); the "encrypt" hook should use that file as the key to unlock the disk. You'll have to do it by hand once so that you can add the key (response) to a slot in LUKS. There are examples of this on GitHub, but of course I didn't find them until I had already figured it out myself.
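For reference, a Yubikey's challenge-response slot computes HMAC-SHA1, so the hook above amounts to something like this (the slot secret below is a placeholder, and the real hook is a shell script inside the initramfs, not Python):

    import hashlib, hmac

    slot_secret = bytes.fromhex("00" * 20)  # placeholder for the programmed 20-byte secret
    challenge = b"passphrase typed at the boot prompt"

    # the 20-byte response is what gets written to /crypto_keyfile.bin
    response = hmac.new(slot_secret, challenge, hashlib.sha1).digest()
    print(response.hex())

Adding that response to a LUKS slot once by hand is what lets the hook unlock the disk on subsequent boots.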



Amusingly I just ran across this[0] link an hour or so before this HN post went live.

I haven't done it yet, but it looks pretty reasonable. I prefer not to mix LUKS with random many-year-old GitHub repos.

[0] http://askubuntu.com/a/599826


Instead of using Yubikey through GPG to replace the SSH key one can also use the PKCS#11 module they provide: https://developers.yubico.com/PIV/Guides/SSH_user_certificat... This requires replacing GNOME keyring with SSH agents, though.

One can also use one of the 20 additional slots (82-95) the Yubikey 4 provides over the Yubikey NEO by replacing the '9c' with '82'.


Using it through GPG already required you to replace Gnome Keyring with gpg-agent. At least it did in the past.


This article shows that U2F has a LONG way to go before it is consumer-ready.

Expanding on the 'key' metaphor, U2F (or MFA in general) needs to be as easy as unlocking a door. Everyone knows how to do that: insert key, turn key, open door.

Until things like the Yubikey and its ilk are as simple as this (and I know it's mostly the software that has to catch up), it's never going to be widely adopted for people's personal logins.

I use smart-card auth on my work laptop along with an 8-digit PIN to create 2FA, and although it's supposed to be universally supported across all the applications I use, it really isn't. When I first started working here, I spent several days very confused about where the smart card did and didn't work - I dread to think how the layperson works this stuff out.


I absolutely agree that it is not consumer-ready. I'm trying to fix the sharp corners with Truefactor.io; any thoughts?


- U2F is a standard

- The FIDO Alliance is the body that maintains the U2F standard. U2F keys can be "FIDO Certified" if they comply with the standard and have been tested.

- Yubico is a company and Yubikey is a brand. Yubikeys often support more than U2F, so some Yubikeys are much more expensive than a key supporting only U2F.

- Prices start at ~$10 for a FIDO certified key

- U2F Zero is an open source project, not FIDO certified.

- I have three U2F keys of various brands and they all seem equivalent so far.


Too much trust centralization. How do you know Yubikey key generation hasn't been compromised? They're an obvious target.


The logical end-state of this train of thinking is tamper-resistant key generation and storage, with fully published sources and audited assembly line production.

Except then they need to ensure that all the equipment on their line was produced by trusted suppliers with the same level of disclosure. Ditto all software which touches anything in the manufacturing process. It's turtles all the way down, see also "on trusting trust".


Or just do it with open code & hardware on an ASIC whose process node is verifiable by eye with a microscope against a reference image after samples of it are decapped. Either by the owner or numerous third parties. You make it sound much harder than it is unless they're using unverifiable technology. In that case, it might be pretty hard. ;)


That's ridiculous. People can't reliably produce good implementations of crypto software, and you suggest they design and verify ASICs? My point was that security is about understanding and accepting, not foolishly trying to eliminate, risk.

Just a few of the mountains which need climbing: First, you'd need an (actually) random sampling, not just one (keep in mind any ASIC you test is likely a write-off due to tamper resistance). You'd also need a high-assurance supply chain to ensure your verification isn't premature (as well as sufficient levels of tamper resistance/evidence). You'd further need to verify the software around this hypothetical ASIC is free of bugs. You're restricting yourself to software which can be performant on older, slower process nodes. You (almost certainly not an ASIC company) need to have an in-house ASIC shop, or you need to vet an external one (sounds cheap and easy for the average business). In fact, you need vetting of your internal staff at all stages of this process.

You are making this seem _much_ easier than it is. There's a reason this product, which there is definitely a market for, doesn't exist (and that reason is much more mundane than the cryptolluminati not wanting it to "get out"). Consider that Google did something like what you're suggesting. It took them years (and they're Google), what chance does the average person/SMB have?


"That's ridiculous. People can't reliably produce good implementations of crypto software, and you suggest they design and verify ASICs? "

You suggested that they verify or perfect every person or thing in the entire supply chain. That was ridiculous. I suggested they, or trusted third parties, simply verify an ASIC. That's doable. Matter of fact, people already tear down and image ASICs on older nodes for fun. A number of companies and labs tear down modern ASICs to reverse-engineer them, for understanding or for finding patent violations. ChipWorks is the top company in that space.

So, it's my recommendation of a practice that's being done right now vs your recommendation to change everything in an impossible way. I stand by mine.


You should reread my post in the context of the post I'm replying to. It's not a recommendation, it's an explanation of where the poster's "how can we trust their manufacturing process" question leads. People tend to throw out those types of questions as reasons to not use e.g. Yubikey without thinking through how they would want such proof given to them. People also reverse x86 binaries for profit, but suggesting that RandomSaaS do so to their compiler to check for backdoors would be equally a waste of their time and money.

Just because something is technically possible doesn't mean the average company should do it. Security is about trade-offs.


" It's not a recommendation, it's an explanation of where the poster's "how can we trust their manufacturing process" question leads. "

If it's that, then it was a decent attempt at explaining how big the problem is if one wants to trust every step. Good news is you don't have to, as I illustrated. Just a small number of third parties doing a tear-down. They can even be enemies reviewing the same thing with equipment made in opposing countries, for best effect. Old trick of mine. :)


I suppose you could use an FPGA, but you still need to audit the assembly line. It's not simple, and you will place trust somewhere.


Use my same trick but with FPGAs. Those on older nodes barely have any logic gates, though. Another trick for the wealthy is using the latest node that's so cutting-edge they can barely get stuff to work at all. It's hard to do subversion of a black box if state-of-the-art verification barely made that black box functional to begin with.


So, more turtles waiting down there....

In the end, it is the attempt to create a positive proof in a system that is not a defined formal system (real world vs pure mathematics). Impossible - as you say, you need to place your trust somewhere. Even if it's your own abilities. But that trust can always turn out to be not justified.


You need to make a simple interpreter or processor that's verified by eye, by brute force, or mathematically. An old example was the FM9001, with newer ones being VAMP (open?) and AAMP7G (proprietary). You do that on a node you can verify by eye. You verify a random sample of the pile you ordered by eye. You use it with the cheapest, simplest, oldest parts for input and display, ordered in an obfuscated way so nobody thinks the source of the order is important. Then you have hardware you can trust to do the rest of the calculations for trustworthy hardware or software that's much better. Including image-recognition software to semi-automate your job of spotting things that don't belong in images of chips. ;)


Yubikeys are not the only U2F devices; you can have multiple keys from different companies.

However, it seems to me that if you are not Snowden, this is one of the smaller worries compared to all the other phishing, malware, and so on.


There is still one issue left with Yubikeys: backups. When I lose my Yubikey after cautiously registering it with a hundred different sites and servers, I would like to restore its secrets onto a brand-new (blank) Yubikey and go forward. The only official course of action given by Yubico today is to manually un-register and re-register on all sites and servers.

Of course the key backup should be correctly protected, but that problem is not hard to solve.


I ran into the same problem with ed25519 :/. Hopefully they upgrade and offer this at some point!


Neat. What are some use cases for ed25519?



>This doesn’t have any effect on the phone, however. I have to tap back, return to the login screen, and enter my password again

I believe you can simply hit the login button (sans code) after validating the login attempt instead of closing the application.


Tried my U2F Yubikey on Chromium 48 on OpenBSD 5.9, but it crashed :(





