Google: Security Keys Neutralized Employee Phishing (krebsonsecurity.com)
775 points by sohkamyung on July 23, 2018 | 409 comments



For those interested, I recommend reading how FIDO U2F works. There's more in a security key than just FIDO U2F, but FIDO U2F is easily the most ergonomic system that these security keys support. Simplified:

* The hardware basically consists of a secure microprocessor, a counter which it can increment, and a secret key.

* For each website (e.g., GitHub), it computes an HMAC-SHA256 of the domain (www.github.com) using the secret key, and uses this to derive a public/private keypair. This is used to authenticate.

* To authenticate, the server sends a challenge, and the security key sends a response which validates that it has the private key. It also sends the nonce, which it increments.

If you get phished, the browser would send a different domain (www.github.com.suspiciousdomain.xx) to the security key and authentication would fail. If you somehow managed to clone the security key, services would notice that your nonces are no longer monotonically increasing and you could at least detect that it's been cloned.
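
A rough sketch of that derivation in Python (my own illustration of the idea, not the actual spec -- real devices wrap a per-registration nonce into a "key handle", and the exact fields differ):

    import hashlib, hmac
    from ecdsa import SigningKey, NIST256p   # third-party 'ecdsa' package

    DEVICE_SECRET = b"burned-in-at-manufacture"   # never leaves the token

    def derive_keypair(app_id):
        # Deterministically derive a per-site private key from the device
        # secret and the origin the *browser* reports, e.g. "https://www.github.com".
        seed = hmac.new(DEVICE_SECRET, app_id.encode(), hashlib.sha256).digest()
        priv = SigningKey.from_string(seed, curve=NIST256p)
        return priv, priv.get_verifying_key()

    def sign_challenge(app_id, challenge, counter):
        # The signature covers the origin, the counter and the server's challenge,
        # so a response produced for a phishing domain is useless on the real site.
        priv, _ = derive_keypair(app_id)
        signed_data = hashlib.sha256(app_id.encode()).digest() \
            + counter.to_bytes(4, "big") + challenge
        return priv.sign(signed_data)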

I'm excited about the use of FIDO U2F becoming more widespread; for now all I use it for is GitHub and Gmail. The basic threat model is that someone gets network access to your machine (but they can't get credentials from the security key, because you have to touch it to make it work) or someone sends you to a phishing website but you access it from a machine that you trust.


It's also tremendously more efficient to tap your finger on the plugged-in USB key than it is to wait for a code to be sent to your phone or go find it in an app to type in. I've added it to everything that allows it, more for convenience than security at this point.

Most places that allow it require that you have a fallback method available.


It's more efficient, but remember the point of this story, which is that it mitigates phishing attacks, which code-generating 2FA applications do not.


1Password does mitigate it to some extent by automatically copying the code to your clipboard after filling the form; both of these things only work on the right domain. Of course you can still copy the values from the app, but at least it hints at things being wrong.


I always wondered what was the point of using 1Password for 2FA. After all, if you store your 2FA secrets in 1Password to generate codes, you've just reduced your 2FA to one factor?


If you're using a password manager to have unique passwords for every site, what does TOTP 2FA even protect you against?

Since 2FA only comes into play for protection if the password is compromised, if you're using a password manager that should mean that data breaches at unrelated sites shouldn't be a risk.

So we're down to phishing and malware/keyloggers being the most likely risk -- and TOTP offers no protection against that. If you're already at the point that you're keying your user/pass into a phishing site, you're not going to second-guess punching in the 2FA code to that same site. I'd even argue push validation like Google Prompt would be at significant risk of phishing, unless you're paying close attention to the IP address for which you're approving access.


> If you're using a password manager to have unique passwords for every site, what does TOTP 2FA even protect you against?

Sounds a little obvious to write it out, but it protects against someone stealing your password in some way that the password manager / unique passwords don't protect you against. Using a PM decreases those risks significantly, mostly because of how enormous the risks of password reuse and manual password entry are without one, but it certainly doesn't eliminate them entirely.


It's not at all obvious to me, because 1Password passwords are stored in the exact same places that 1Password-managed TOTP codes are. You might as well just concatenate the TOTP secret to your password.


Having a TOTP secret would protect against theft of credentials in transit. The TOTP code is only valid once, so that credential exchange is only valid once. They wouldn't be able to create any additional login sessions with the information they've captured. However, there's a good chance that if they could see that, they might also be able to see a lot of other information you're exchanging with that service.


It creates a race condition in transit - if they can use the code before you, they win. Interception can happen at the network level, but also via phishing attacks - there is no domain challenge or verification in TOTP.

I know having someone malicious get into your account multiple times vs. once is likely worse, but it's hard to quantify how much worse - and of course using that one login to change your 2FA setup would make them equivalently bad.


Not quite exactly "equivalently bad", since a user is more likely to notice a 2FA setup change than they are a phishing site's login error and then everything working as usual, but yeah, perhaps it's splitting hairs at that point.


Which is why I'm wary of using my password manager for OTP, and use a separate one. Not sure if it's too paranoid, but it doesn't make sense to me to keep the two in the same place.


There appear to be two points being conflated — 1/ 2FA secrets stored on a separate device from your primary device with a PM provide more security than those stored on one device, and 2/ once you use a PM with a unique password for every site, much of what OTP helps with is already mitigated.

Both seem true, and what to do to protect yourself more depends on what kinds of attacks you're interested in stopping and at what costs. Personally, PM + U2F seems the highest-security, fastest-UI, easiest-UX by far — https://cloud.google.com/security-key/


If you're using 1Password for storing your passwords, then yeah, it would make sense to use something else for your TOTP.


This is the thing I struggle with: name a scenario where you would have your unique site password compromised but not have at least 1 valid 2FA code compromised at the same time.

The best answer I have for where TOTP can provide value: you can limit a potential attack to a single login.

I wanted to say you could stop someone doing MitM decryption due to timing (you use the 2FA code before they can), but if they're decrypting your session they can most likely just steal your session cookie which gets them what they need anyway.


Because you accidentally type your password for site A into the login for site B.


Someone “hacking” the 1Password web service

Logging in to a site on a public computer and the browser auto-remembers the password you typed

A border agent forcing you to log into a website (this scenario only works if you leave your second factor, which will most likely be your phone, at home)


Usually in a higher security environment, we'll make sure the authenticator is a separate device (phone or hard token) and expressly forbid having a soft token on the same device that has the password safe.


> If you're using a password manager to have unique passwords for every site, what does TOTP 2FA even protect you against?

Man-in-the-middle attacks, of course, which are possible on insecure connections. With the prevalence of root certificates installed on people's computers as a corporate policy, by shitty anti-viruses, etc., it's very much possible to compromise HTTPS connections.

The TOTP 2FA code acts as a temporary password that's only available for a couple of seconds. A "one time password" if you will.

Yes, it still strengthens security.

Read 1Password's article about it: https://blog.agilebits.com/2015/01/26/totp-for-1password-use...


This would make sense if virtually every website in the world didn't respond to the short-lived TOTP code by handing back a long-lived HTTP secret.


If there's no point improving client authentication until you've improved website security and no point improving website security until you've improved client authentication then neither will ever get better.


If there's a MitM attack, you've already lost. Sure, they can only login one time, but they're in once you provide the authentication steps.

Phishing sites collecting and using the 2FA creds in real time was discussed here, among other places: https://security.stackexchange.com/questions/161403/attacker...

With open-source tooling like https://github.com/ustayready/CredSniper readily available, you're only going to stop lazy phishing attempts.

You only get protection if you assume the scripts are just passively collecting information for use at a later time. If they're actively logging in to establish sessions while they're phishing, it's game over.


But don't many sites require a second authentication to modify access to the account (change password, add collaborator, etc)? In that case, an attacker would need a second one-time code.


Normally I believe they just require the password. The threat model there is someone leaving their account logged in.


> Normally I believe they just require the password.

Shoot, you're right. Not sure what I was thinking. My bad.


Yeah that's why codes don't make for a good second factor. You should use something like Fido or a client cert such that a MitM can't continue to impersonate the client.


The point is that one-time passwords are only valid once. If your password is stolen, it's stolen. If a TOTP code is stolen, it's probably not even useful, because it's already invalid by the time they log in (including time-based codes, in well-designed software).

There's obviously a class of attack that hardware tokens protect against (malware) that password managers can't entirely (unless your operating system has good sandboxing, like Chrome OS for example). But it really does protect against phishing to a degree, as well as certain attacks (keyloggers, or malicious code running on a login page in the browser).

Hardware tokens are the winning approach, but even when you put TOTP into a password manager it is far from useless.


It only protects against the most naive phishing attacks, where the attacker just accumulates passwords for use at some later date. More sophisticated phishing attacks will just copy the OTP in real time:

https://www.schneier.com/blog/archives/2015/08/iranian_phish...

U2F defends against that sort of phishing as well.


Sure, but most people aren't targeted by advanced adversaries, so using your password manager for TOTP can be a lightweight way to make most hackers completely disinterested in attacking your account. U2F requires an additional investment. Depending on the type of physical security you want, it's normally a good idea to invest in at least n+1 U2F keys, so you have a spare key you can keep with you and permanent keys in all of your devices. (Obviously, the latter means that your U2F key can be stolen more easily, but the reality is that this is not nearly as big of a deal as stealing a password, since you can unprovision a U2F key immediately upon realizing that it's gone.)


Proxying the authentication isn't really an "advanced" attack. In a 19-minute video[0] the author of CredSniper[1] gives a complete walk-through of setting up his proof-of-concept tool, including building the login pages and registering for Let's Encrypt SSL certs. The hardest part still remains choosing the domain name and getting people to click the link, and still people find ways to overcome those hurdles.

As TOTP use has increased, the basic phishing toolkit has evolved to match. Attackers want accounts, not passwords, so they're just adjusting to get working sessions. The passwords were only ever just a means to an end.

[0] https://www.youtube.com/watch?v=TeSt9nEpWTs [1] https://github.com/ustayready/CredSniper


That attack doesn't work when using 1Password. 1Password refuses to fill on the wrong domain.


You may not be the best example of how this can help; it sounds like you have good security sensitivity.

Where I'm working now, we deal with several credential-loss incidents each month. Invariably, our users are tricked into authenticating via a bogus site. 2FA would protect the credentials from being used by unauthorised people. Our staff are encouraged to use password managers, but that does not help this situation.


TOTP can protect against knowledge leakage, as it is a second factor. For example, it will prevent someone successfully using a shared password from LinkedIn, associated with a corporate email address, to log into Gmail/O365.

It doesn't prevent any sort of active phishing campaign, because the login process can just ask for and immediately use the TOTP credential. User gets a possible failure (or just content based on what they thought they were accessing), phisher gets account access.


While that's true because you have a single point of failure, I think it's more likely that your passwords get leaked through site security than through 1Password's security (depending on how you sync / whether you use their cloud version), so it's still more (not the most) secure, because if they find your password in a database they still don't have your 2FA code.


It's not multi-factor auth.

Most of the smartphone-based solutions are two-step auth -- it's just a different kind of secret that you know. If you use 1Password or Authy, your token is your 1Password/Authy credential.

The hardware based token approach is always going to be better, because the secret is part of the device and isn't portable. The Yubikey and challenge/response tokens are great as you need to have it present, you cannot easily get away with hijinks like putting a webcam on your RSA token.


I’d say that a separate phone app with MFA codes that are only stored offline qualifies as a second factor, as you need both the phone and its access code (fingerprint etc.) to see the code.


It can, but users have the ability to undermine those controls in many cases via Authy, 1Password, etc.


I consider possession of my device and a knowledge challenge in the form of the password and pin to be two factors. Use of a biometric in lieu of password is also two factors.

I don't see a way in which having the possession factor be on my keys is stronger than having it be in my laptop. In fact, for sites that require it my U2F key is in my laptop (Yubikey nano-C).

(Aside: That doesn't limit the usefulness of having a possession factor that is portable between devices, just I don't think it is necessarily stronger)

This is actually why I very rarely opt into the 2FA features of websites - I figure I already have two factors protecting me, but not necessarily factors recognized by the site.


You could also use two separate systems for password and TOTP storage - one gets passwords, one gets TOTP, and the one with passwords explicitly does NOT get the password for your TOTP storage.


I think 1Password discourages it too; the option to add a 2FA code is pretty hidden.

Marking something as "2FA enabled" is super easy in comparison.


If you're not tracking your 2FA code through 1Password you get a big warning banner[1].

[1] https://i.imgur.com/uENh9oL.png


Just add the "2FA" tag and the banner goes away :)

I was saying doing that is easier than adding 2FA through 1Password.


If you're using 1Password-generated passwords and storing TOTP codes in 1Password, how are the TOTP codes not just theater?


Normally I would reply back and explain, but you know more about this than I do, so instead I will ask a question.

Does it not protect against your password being compromised in some other channel? Sure you're probably not reusing passwords, but what if they compromised it some other way? What if the website had a flaw that allowed someone to exfiltrate plaintext passwords but not get at other application data?

Or to put it another way, if you're using a password manager, why use TOTP codes at all if you believe there are no other attack vectors to get the password that TOTP protects against?


The website and the password manager in this scenario are storing the exact same secrets. If you're going to store them in a password manager, it is indeed reasonable to ask "why bother"?

TOTP is very useful! Just use a TOTP authenticator app on your phone, and don't put them in 1Password.


> TOTP is very useful! Just use a TOTP authenticator app on your phone, and don't put them in 1Password.

I was fully in that camp before I started talking with friends on red teams that were allowed to actually start using realistic phishing campaigns. Now I'm fully in the "U2F, client certs, or don't bother" camp.

Maybe I'm jaded, but it feels like the exploit landscape has improved enough that TOTP is as hopeless as enabling WEP on a present-day wireless network. Not only does setting it up waste your time, you're presumably doing so because you have the false belief it will actually offer protection from something. It may have been useful at one point, but those days are disappearing in the rearview mirror.

The only place I see TOTP still offering value is for people who re-use passwords, but only because it becomes the single site-unique factor for authentication.


U2F addresses phishing and password theft. TOTP just addresses password theft. That doesn't make TOTP useless; password theft is a serious problem, routinely exploited in the real world.


But the secrets serve different purposes; they aren't the same. So why not keep them in the same place? I'll admit that it is less secure, of course, since someone could compromise your 1Password. But it is still more secure than not using TOTP at all, is it not?

Again, is there no attack vector that makes TOTP worthwhile when you're already using a password manager, but not worthwhile if it's in your password manager?


I'm not really sure I see how storing TOTP secrets in 1Password is materially any more secure than not using TOTP secrets and just letting 1Password generate a random password for that site.


A keylogger sniffs your password for site X; now they have your password for that site and can log in. If you also had a TOTP code, they can only log in for the next 30 seconds using that TOTP code, but they can't send out an email with your password in a CSV file to their friends and expect it to be usable.

I know I'm wrong because you know everything, but I can't get past this particular one. Unless the argument is that attackers aren't that lame anymore, then sure.


Best not to assume anyone is infallible. If you don't put people on pedestals you've got less cleaning up to do later when they inevitably fall off. Yes that includes you (and of course me).

2-3 minutes is more realistic for real sites than 30 seconds, because there is usually a margin allowed for clock skew. But yes each OTP expires and that's a difference for an attacker who doesn't know the underlying secret.

TOTP is also not supposed to be re-usable. A passive keylogger gets the TOTP code, but only at the same moment it's used up by you successfully logging in with it. Implementations will vary in how effectively they enforce this, but in principle at least it could save you.

Caveat: The system may issue a long-lived token (e.g. a session cookie) in exchange for the TOTP code, and bad guys _can_ trade that token, unlike the code itself.

I think there's also a difference with passwords on the other side of the equation. If I get read access to a credentials database (e.g. maybe a stolen backup tape) I get the OTP secret and so I can generate all the OTP codes I need, but in a halfway competently engineered system I do not get the password, only a hash. Since we're talking about 1Password, this password will genuinely be difficult to guess, and guessing is the only thing I can do because knowing the hash doesn't get me access to the live system. In this case 1Password is protecting you somewhat while my TOTP code made no difference. If you have a 10 character mixed case alphanumeric password (which is easy with 1Password), and the password hash used means I only get to try one billion passwords per second, you have many, many years to get around to changing that password.
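
For concreteness, the arithmetic behind "many, many years" (assuming the attacker really is capped at a billion guesses per second):

    keyspace = 62 ** 10                  # 10 chars, mixed-case letters + digits
    guesses_per_second = 1_000_000_000
    seconds = keyspace / guesses_per_second
    print(seconds / (3600 * 24 * 365))   # roughly 26.6 years to exhaust the whole space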

Still, FIDO tokens are very much a superior alternative, their two main disadvantages are fixable. Not enough people have FIDO tokens, and not enough sites accept them.

[Edited to make the password stuff easier to follow]


The scenario of "attacker has a key logger but doesn't steal the entire password database" sounds like enough of an edge case to ignore. If someone's stealing data from my password manager I'm going to assume full compromise.


Do you have anything - statistics, examples of popular toolkits, something like that, to show this is actually just an "edge case" ?

In the threat scenario we're discussing bad guys aren't "stealing data from my password manager" they just have the password and OTP code that were filled out, possibly by hand. They can do this using the same tools and techniques that work for password-only authentication, including making phishing sites with a weak excuse for why the auto-fill didn't work. We know this works.


> In the threat scenario we're discussing bad guys aren't "stealing data from my password manager" they just have the password and OTP code that were filled out, possibly by hand.

Possibly by hand? You are definitely not discussing the same scenario as everyone else. They're talking about password and OTP being stored in the same password manager, both filled out at the same time all in software.

A key logger is stealing those bytes right out of the password manager's buffers. It takes more sophistication to dump the database, but it's a very small amount more.


You are, alas, not unusual in mistaking the autofill feature, which ordinary users are told is about convenience, for a security measure.

In the real world users go "Huh, why didn't my autofill work? Oh well, I'll copy it by hand".

A "key logger" logs keypresses. That's all key loggers do. There are lots of interesting scenarios that enable key logging. You've imagined some radically more capable attack, in which you get local code execution on someone's computer, and then for reasons best known to yourself you've decided that somehow counts as a "key logger". I can't help you there, except to recommend never going anywhere near security.


Ok, so you don't see any use for TOTP when using 1Password then?


The issue is with storing your TOTP secret in the same store as your password. The idea of using MFA is that multiple secret stores must be compromised in order to grant access to a service.

If you put your TOTP secret on your phone (or Yubikey), then both your TOTP secret store (that is your phone or keychain) and 1Password store must be compromised in order to gain access to your account. TOTP is useful in this scenario.

If you put your TOTP secret in 1Password along with your site password, then only your 1Password store needs to be compromised. This is the scenario where TOTP becomes pointless.


Isn't that a less likely scenario? Or at least, a subset of the possible compromises, meaning you have materially improved your security in _some but not all_ cases. I don't disagree that it's best to not have TOTP in 1password, but isn't it still _better_ than not having TOTP at all?


I understand that, but isn't it still better to store it in 1Password than not have TOTP at all? At least you're still protected against other attacks, right?


Marginally sure, but what "other attacks" are you looking to protect against?

Most MITM scenarios are going to result in giving up at least one TOTP code -- and that TOTP code will be used to obtain a long-lived HTTP session (I can't remember when Google last made me re-authenticate).

I think it's common for folks to think that TOTP means it's safe to lose a code because it is short-lived and invalidated when a subsequent code is used (usually), but it just takes one code for a long-lived session cookie to be obtained.

If an attacker is in a position to intercept your password via MITM, phishing, whatever, they're in a position to intercept your TOTP code. They're not going to sit on that code -- they're going to immediately use it to obtain their long-lived session while reporting an error back to you.


No, I use TOTP and I use 1Password. But my TOTP secrets live in Duo's iPhone application.


And I also store them separately. But don't you agree that storing them in 1Password is still better than nothing, as there are still some use cases that you are protected against that way?


No, that's where you lose me. If you're using 1Password to generate passwords in the first place, then I really don't see how using it for TOTP accomplishes anything. To me, it looks like you could literally concatenate the TOTP secret to the 1Pw-generated password and have the same level of security.


In particular, OTP codes are intended to be single-use: they're a ratchet. If a site does this properly, then any OTP code you steal from me is not only worthless when it naturally expires, it's also worthless once I use that code or a subsequent code to authenticate. If you used a passive keylogger, that may mean by the time you get the key events that OTP is already useless. Likewise for shoulder-surfing attacks.


TOTP != HOTP


Nevertheless, RFC 6238 (TOTP) specifically tells implementers that:

Note that a prover may send the same OTP inside a given time-step window multiple times to a verifier. The verifier MUST NOT accept the second attempt of the OTP after the successful validation has been issued for the first OTP, which ensures one-time only use of an OTP.
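
In code, the verifier-side rule amounts to remembering the last time step it accepted. A minimal sketch (Python; `totp_for_step` is a hypothetical helper standing in for the standard HOTP-over-time-step computation, and real systems also allow a little clock skew):

    import hmac, time

    last_accepted_step = {}   # user -> time step of the most recently accepted OTP

    def verify_totp(user, submitted_code, secret, step=30):
        t = int(time.time() // step)
        if last_accepted_step.get(user, -1) >= t:
            return False   # same window already used: "MUST NOT accept the second attempt"
        if not hmac.compare_digest(submitted_code, totp_for_step(secret, t)):
            return False   # totp_for_step: hypothetical helper computing HOTP(secret, t)
        last_accepted_step[user] = t
        return True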


The question is whether there is any point in having an OTP secret if it's stored in the same location as the password.

We're not talking about stealing single codes, but the entire secret.

With HOTP the answer is yes, because of ratcheting. A clone of the secret doesn't let you impersonate the original device, because their counters will conflict as both are used.

With TOTP the answer is no. You can make codes freely, and the clone is indistinguishable from the original.

The rule you cite is basically irrelevant. It just means that original and clone can't log in at the exact same time.


You've short-circuited by assuming the threat model is a bad guy breaks into 1Password. But there's no reason to insist upon this very unlikely threat model, there are other threats that _really happen_ in which having both OTP and a password under 1Password saves you.

Getting obsessed with a single unlikely threat leads to doing things that are actively counter-productive, because in your single threat model they didn't make any difference and you forgot that real bad guys aren't obliged to attack where you've put most effort into defence.


First, I don't agree that if the attackers have access to the password, guessing that they have access to data stored with the password is "very unlikely".

Second, any theoretical advantage still has nothing to do with ratcheting...


First: Fuzzy thinking. The attackers have access to _a copy of the password_. The copy they got wasn't necessarily anywhere near the OTP secret.

If I tell my phone number to my bank, my mom and my hairdresser, and you steal it from the hairdresser, this doesn't give you information about my bank account number, even though the bank stored that with the phone number.

Bad guys successfully phish passwords plus OTP codes. We know they do this, hopefully you agree that in this case they don't have the OTP secret. So in this case 1Password worked out as well as having a separate TOTP program.

Bad guys successfully steal form credentials out of browsers using various JS / DOM / etcetera flaws. Again, they get the OTP code but don't get the OTP secret, regardless of whether you use 1Password.

Bad guys also install keyboard monitors / loggers / etcetera. In some cases they could just as easily steal your 1Password vault, but in other cases (depending on how they do it) that isn't an option. I believe it's "very unlikely" in reality that they'll get your 1Password vault unless it's a targeted attack.

A passive TLS tap also gives the bad guys the password plus OTP code but not the OTP secret. Unlike the former three examples this is going to be very environment specific. Your work may insist on having a passive TLS tap, and some banks definitely do (this is why they fought so hard to delay or prevent TLS 1.3) but obviously your home systems shouldn't allow such shenanigans. Nevertheless, while the passive tap can't be used to MITM your session it can steal any credentials you enter, again this doesn't include the OTP secret.

Second: A ratchet enables us to recover from a situation where bad guys have our secret, forcing the bad guy to either repeat their attack to get a new secret or show their hand. TOTP lets us do this when bad guys get one TOTP code but not the underlying TOTP secret.


> Second: A ratchet enables us to recover from a situation where bad guys have our secret, forcing the bad guy to either repeat their attack to get a new secret or show their hand. TOTP lets us do this when bad guys get one TOTP code but not the underlying TOTP secret.

I'm just going to focus on this, because it's not based on opinions of likelihood but simple facts. TOTP does not have a ratchet. If you copy the secret, you can use it indefinitely.

A ratchet is a counter (or similar) that goes up per use, so you can detect cloning. TOTP does not have this. It does not store any state. If I log in every day, and the attacker logs in every day, you can't look at the counters to see that something is very wrong, because there is no counter.


How is that materially different to storing the password and TOTP in 1Password?


Obviously, because if you compromise my desktop, you still won't have my TOTP secrets.


But what if I seize your phone at a customs inspection, or a traffic stop? Don't I then have password and OTP?


Do you see any problem with using a phone TOTP authenticator, but when setting it up saving a copy of the TOTP secret in a file encrypted with my public gpg key?

The idea is that if I lose access to my phone, I can decrypt that saved copy of the secret, and load it into 1Password temporarily until I get my phone back or get a new phone and get everything set back up.


Before people started storing their TOTP secrets in desktop applications so they could auto-fill them in their browsers, this question used to be the front line of the 2FA opsec wars. I was a lieutenant in the army of "if you want to back up 2FA secrets, just enroll two of them; a single 2FA secret should never live in more than one place". I think that battle is already lost.

Lots of reasonable people back up their secrets, or even clone them into multiple authenticator applications. I try not to.


> Lots of reasonable people back up their secrets, or even clone them into multiple authenticator applications. I try not to.

Because if you lose access to the 2FA secrets, you lose access to your account. If that's just one account, recovery might be doable (depending on who ultimately is root on the machine). If it's your Bitcoin wallet or FDE, though, you're toast.

There's also a variety of protocols used for 2FA. I've seen: USB 2, USB 3, USB-C, Bluetooth, NFC.

As for how people do this: they use a second key, save their key on a cryptosteel(-esque) device [1] (IMO overpriced, YMMV), a USB stick, a piece of paper, or gasp a CD-ROM. Where it's saved differs. Could be next to a bunch of USB sticks, in a safe, at a notary (my recommendation, though it does cost a dime or two), in a basement under a sack of grain, ...

[1] https://cryptosteel.com


What the actual fuck is this "cryptosteel" thing?


There's a FAQ on the bottom of the page.


I know, I read it. What the actual fuck is this? Who would spend money on this? How is this not an insane product concept?


> Who would spend money on this?

https://www.kickstarter.com/projects/zackdangerbrown/potato-...

https://en.wikipedia.org/wiki/Juicero

etc.

> How is this not an insane product concept?

I thought sanity died years ago.


It costs $199 and you can't even store '@' with it!



Thank you for the link; that was actually an informative comparison instead of cursing.


> if you want to back up 2FA secrets, just enroll two of them

Could you elaborate on how you do this in practice?


Just like the first one. Most U2F web sites let you register multiple keys.

Any one gives you access. So you take one with you and put one in a drawer at home.


Parent's argument is that it mitigates phishing - i.e. your normal workflow is that you go to a site and your credentials are automatically filled in, so you'd be suspicious if that doesn't happen. In my experience, the autofill breaks so much that I've started copying my password in manually all the time.


> In my experience, the autofill breaks so much that I've started copying my password in manually all the time.

FWIW, this has not been my experience with 1Password at all.


I use LastPass but have had the same experience - autofill is very, very reliable


The TOTP code does not add anything to phishing mitigation.


Depends on the exact attack. If it's a full MITM (including TLS), no. If it's a fake website that doesn't forward after password-based authentication, yes. U2F would also detect that the domain is incorrect, though so does my password manager. Though that's based on a browser extension. I suppose if the browser gets misled, so would the password manager. And that did happen with LastPass (an XSS attack IIRC).


Seems like they'd still protect you from anything that records your password and TOTP, but doesn't gain access to your store? E.g. a website gets some JS injected that skims your login. Which doesn't seem all that unlikely.

Basically it becomes "just" replay prevention. Which is a nonzero benefit, but totally agreed that it's not at the same level as a separate generator of some kind.


They are time-limited, at least? But yes, I've had similar arguments with coworkers who've started using 1Password for TOTP in the same way.


It seems odd to focus on physical tokens, when this could just as easily be built into the Browser/OS.

Sure, you also get some additional resistance when your machine is hacked, but it's pretty marginal compared to the phishing benefit.


I think part of it is the secure element. Apple is moving that direction with TouchID on MacBooks.


Can somebody please explain to me why hardware tokens/U2F mitigate phishing whereas 2FA does not? My imagination fails to show me a mode of attack that would be relevant here...


You can phish someone by getting their username/password, using that to log in to the targeted service, and then convincing the user to type their 6-digit 2FA code into the phishing page.

If they plug in their hardware token, the browser will give the token the phishing site's actual domain name, which won't match the legitimate domain, so the attacker can't use the response from the key to log in.


A phishing attack can often involve local compromise, making the user install malware etc. In that case, it's a simple attack variant to spoof the USB communication and get valid credentials whenever the user uses the key.


It can, but at that stage it's no longer a phishing attack, it's a full remote compromise. Your average phishing attack is just a web page.


Thanks! I imagine instead of via USB to hardware token, the query could theoretically go via my PC's Bluetooth to my phone?


One thing I don't understand is why apps like Authy or Google Authenticator aren't using push notifications to allow you to directly auth via unlocking or Touch ID instead of having to go through the app. If you really want the user to type something, then you can still use a push notification for easy app access.


It's mainly an issue with infrastructure and syncing a 2FA code to a specific phone or app.

Sending a push notification requires GA to register for push notifications with a server that has the Apple APNs certificate or Firebase key. Google would likely have to run this central server and provide a portal/cloud-console API for developers to register for sending these push notifications.

Authy already does this, providing both the TOTP and the ability to send "is this you signing in? yes/no" push notifications, however, charges for it: https://www.twilio.com/authy/pricing which is likely why not many providers actually use Authy and just generate a standard GA-compatible TOTP token.


Ah! Thanks for this answer. Makes sense.


Some do. A lot of companies that use Duo will set that up for their internals.

The problem is that those push solutions require that the company have some means of communicating with the app that you're using to trigger the push and the confirmation (as far as I can tell). This technology works around that by letting the browser talk to the plugged in device, circumventing all of the network/api bits.


> One thing I don't understand is why apps like Authy or Google Authenticator aren't using push notifications to allow you to directly auth via unlocking or Touch ID instead of having to go through the app. If you really want the user to type something, then you can still use a push notification for easy app access.

My company has something like that, through Symantec. When you need to authenticate, it sends a notification to your phone over the network for you to acknowledge.

It's terrible though: cell signal is horrible in our building, so the people who use it are constantly dealing with delays and timeouts. I opted for a hard token that has no extra network dependencies, and I'm happy with my decision.


Thank god they don't. I had to recently extract the keys from Duo's store for precisely this reason. All the notification crap is proprietary and uses GCM. It won't work on AOSP.


Logging in to Google works exactly like this at the moment.

It's probably because setting this up is more involved for the backend, setting up that key which you have to type in is fairly simple technically.


TOTP is designed to be usable even while offline.


Lord, I hope there aren't apps trying to use TOTP offline. TOTP works by using a shared secret between user and service. This secret and the current time are used to generate a code.

All parties involved must have the secret (this isn't public key crypto).

That means an app that can accept TOTP offline has the secret stored locally where it can be extracted.


But is that trade off worth it? Is the ability to work offline worth giving up a simple prevention of phishing attacks?


Full circle here, since FIDO U2F has phishing-resistance like push notifications and lets you work offline like TOTP. "Offline" in the sense that everything besides whatever you're authenticating against can be offline.


Push notifications offer no phishing resistance. The attacker can present a fake login experience and conduct a real login behind the scenes at the same time. If you think you’re logging in, you’ll approve the push for them.


If you are offline, how do you transmit the TOTP value-of-the-moment to the location where the protected resource is?


That's the 'T' in 'TOTP': it's Time-Based (if the clocks aren't synced it doesn't work)


A push notification to help you open the app doesn't prevent the app being usable offline though.


Authy used to have push notification support for LastPass but it has stopped working for me.


Okta provides this capability.


MS authenticator does that


That's the single reason I got a smart watch: just to have my 2FA codes on my wrist instead of getting my phone out of my pocket (I'm using Authenticator+).


Well, remember the entire point of U2F is to not be phishable. Authentication codes—and your smart watch—are phishable.


Any specific recommendations for a smart watch exclusively for 2FA?


Pebbles are quite cheap and offer 7 days of battery. The company went bankrupt, but you can still use the devices with Rebble. The Pebble 1 is about 40€ and the rare Pebble 2 about 120€.


I learnt from this thread that my smart watch (Garmin Vivoactive HR) also has 2FA apps available for it (e.g. https://apps.garmin.com/en-US/apps/c601e351-9fa8-4303-aead-4...). I love the watch (long battery life, tons of apps including one from the famous antirez, built-in GPS, etc.) so I am thrilled that I have 2FA options for it as well.


There are probably better options. I'm using a Huawei watch first gen. Rock solid construction, awesome (always on) display, enough battery life, android wear 2, no proprietary strap bs. It's about 100 USD for B Stock.


This is not normally considered a “hacker-friendly” option, but I use an Apple Watch. The above-mentioned 1Password has a watch app, so by using it for my passwords and TOTP codes, I maximize my personal convenience.


>"It's also tremendously more efficient to tap your finger on the plugged in USB than it is to wait for a code to be sent to your phone or go find it on an app to type in."

But with regular TOTP and a software device on a smartphone, I can print out backup codes in case I lose my phone. This allows one to log in and reset their 2FA token. What happens if you lose your Yubikey or similar? I guess this doesn't matter as much in an enterprise setting where there is a dedicated IT department, but for individual use outside the enterprise, don't TOTP and a software device have a better story in case of loss of the 2FA device?


> What happens if you lose your Yubikey or similar?

Get two, leave one in your safe deposit box. Every service I've seen that supports U2F supports multiple tokens.


I see that doesn't help much if you're on the road travelling, unfortunately. At least with TOTP backup codes, someone at home can read you a printed backup code in order to disable and reset your TOTP.


Almost every site that I've set it up with actually requires you have a backup method (app, codes, sms, etc).


> I've added it to everything that allows it, more for convenience than security at this point.

It's convenient only when you physically have the security key; it's a hassle if you forgot or lost it.


I have one. It's attached to my key chain.

If I've lost my keys, I have bigger problems.

It's convenient.


You can (and generally do) have multiples of these tokens. They are easily revoked. The Fido only tokens are $15.


It doesn't have to be your only method.


If you're interested in seeing how it works in action, I built a U2F demo site that shows all the nitty-gritty details of the process - https://mdp.github.io/u2fdemo/

You just need a U2F key to try it.


> If you somehow managed to clone the security key, services would notice that your nonces are no longer monotonically increasing and you could at least detect that it's been cloned.

At least a year ago or so (last time when I checked) most services didn't appear to check the nonce and worked fine when the nonce was reset.


If you can reset the nonce without resetting the key you can probably retrieve the key easily if you can read the traffic. The service should not need to check the nonce, and adding that much state is going to be complicated.


It's not that kind of nonce. It's not even called that formally, it's called the 'signature counter.' It's just a part of the plaintext signed with the keypair. There is zero risk of what you're talking about.

And how is it complicated to store a single integer per account and perform a comparison if `counter <= previousValue` at each authentication to see if it's not monotonically increasing? They already store that user's public key and key handle, they can store another 4 bytes.
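
Something like this is all it takes (a sketch; the names here are made up, and a real relying party might prefer to flag the account rather than hard-fail):

    class SuspectedClone(Exception):
        pass

    def check_and_update_counter(credential, reported_counter):
        # 'credential' stands in for whatever record already stores the user's
        # public key and key handle; this adds one integer of state per credential.
        if reported_counter != 0 and reported_counter <= credential["counter"]:
            raise SuspectedClone("signature counter did not increase")
        credential["counter"] = reported_counter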

In fact, the WebAuthn spec makes verifying this behavior mandatory. [0]

[0] https://www.w3.org/TR/webauthn/#signature-counter


The counter feature is dubious. You correctly describe the upside - if Bob's device says 4, then 26, then 49 it's weird if the next number is 17, and we may suspect it's a clone.

But there are many downsides, including:

Devices now need state, pushing up the base price. We want these things to be cheap enough to give away.

The counter makes tokens weakly trackable. If Facebook knows your token's counter was 205 when you signed in at work this morning and 217 when you signed in from your iMac this evening, somebody who visited GitHub at midday with counter 213 might be you, and someone with counter 487 definitely isn't you, or at least not with the same token.


> Devices now need state, pushing up the base price. We want these things to be cheap enough to give away.

State is only expensive when it adds a significant amount of die area or forces you to add additional ICs. If you need a ton of flash memory, you can't put it on the same die because the process is different, and adding a second IC bumps up the cost. However, staying with the same process you used for your microcontroller, you can add some flash with much worse performance... which is a viable alternative if you only need a handful of bits. Your flash is slower and needs more die area, but it's good enough.

> The counter make tokens weakly trackable. If Facebook knows your token's counter was 205 when you signed in at work this morning and 217 when you signed in from your iMac this evening, somebody who visited GitHub at midday with counter 213 might be you, someone with counter 487 definitely isn't you, or at least not with the same token.

What kind of ridiculous threat model is this? "Alice logs into Facebook and GitHub, and Bob, who has compromised both Facebook and GitHub's authentication services..." Even then, it's not guaranteed, because the device might be using a different counter for Facebook and GitHub.


> the device might be using a different counter for Facebook and GitHub.

At least for YubiKey, it appears to use a global counter:

https://developers.yubico.com/U2F/Protocol_details/Key_gener...

> There is a global use counter which gets incremented upon each authentication, and this is the only state of the YubiKey that gets modified in this step. This counter is shared between credentials.

Having a global counter does seem like it could weaken the ability to detect cloned keys. If an attacker could clone the key and know the usage pattern (e.g., there are usually 100 uses of non-Facebook services between uses of the Facebook service), then they might be able to use it for a while without being detected. Though, having service-specific counters may have worse security ramifications (e.g., storing which services the key has been used with).

Though if an attacker is going to that much trouble, they may as well just use the wrench-to-the-head method.


The main benefit is that the number should always be increasing. The moment either key uses an old number, the service knows the security device was cloned. The attacker will have to be sure not to increment it more than the target or else an attempt by the target would notify of the cloned key.


Keep in mind that "Bob" in this threat model is Facebook (because they are one of the entities that tries to track what everyone is doing everywhere). So it only needs to get the Github nonce. Collusion on the part of Github is, I suspect, much more likely than Facebook compromising Github's login flow.

Maybe sites colluding with their ad providers to track people is not part of your threat model, but it definitely is for some people. Yes, I know Github does not host ads, so isn't a good example of this threat model.


U2F tokens are already cheap enough to give away. We carry them around in sacks and hand them out at events.


They are not that cheap.

The only YubiKey that works with mobiles (NFC) is $50. The cheapest U2F key I could find (it only has a USB-A port) is $20.


There are at least two U2F keys I was able to find on Amazon which were under $10 USD:

https://www.amazon.com/dp/B01N6XNC01 https://www.amazon.com/dp/B01L9DUPK6

The latter is an open source / open hardware design:

https://github.com/conorpp/u2f-zero


Neither of them works with a smartphone, hence they're not of practical use.


I think you might be having trouble with the concept of a U2F token. The mail application on your phone doesn't need a token; it already has a long-term cryptographic binding to the server.


> Devices now need state, pushing up the base price.

You can buy pretty decent (16Mhz, 2K EEPROM, 8K flash) microcontrollers for less than twenty cents (my numbers are from 7 years ago, things are probably cheaper, faster and bigger now). A few bytes of stable storage -- whatever you need to safely increment a counter and not lose state to power glitches -- are not going to add significantly to the cost of a hardware token.


I can think of a few ways to reduce that tracking risk. The token could use a per-site offset (securely derived from the per-site key) added to the global counter, and/or could have a set of global counters (using a secure hash of the per-site key to select one). I don't know how much that would increase the cost, or if there's something on the standard mandating a single global counter.


Nothing mandates it. In fact, it's specifically discouraged in the WebAuthn spec:

> Authenticators may implement a global signature counter, i.e., on a per-authenticator basis, but this is less privacy-friendly for users.

Since you can have multiple keys on the same site, you could go one better, and have a per-key offset. When the key is rederived from the one-time nonce sent from the server, you'd also derive a 16-bit number to add to the 32-bit global counter. But even that wouldn't actually be enough to make correlating them impossible.

A large but finite set of independent global counters is a great idea, though. 256 32-bit integers is just 1 KiB of storage.
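
A sketch of that last idea (hypothetical firmware-side logic, not any real token's implementation):

    import hashlib, hmac

    NUM_COUNTERS = 256                  # 256 x 32-bit counters = 1 KiB of state
    counters = [0] * NUM_COUNTERS
    DEVICE_SECRET = b"per-device-secret"

    def next_counter_for_site(app_id):
        # A keyed hash of the site identity picks which counter to use, so two
        # colluding sites only rarely observe the same counter stream.
        digest = hmac.new(DEVICE_SECRET, app_id.encode(), hashlib.sha256).digest()
        index = digest[0] % NUM_COUNTERS
        counters[index] += 1
        return counters[index]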


Touch to auth is also the part that Google ignores for some strange reason. Their high security Gmail program defaults to remembering the device! There isn't a way to disable it either.


I guess I don't need to touch-to-auth when I start every work day ;)

Our internal gmail might not require it every day, but most systems at Google do. You can't get very far without it.


Thanks for the explanation!

Do you know why GNUK (the open source project used by Nitrokey and some other smart cards) chooses not to support U2F? I don't understand the maintainer's criticisms[0] and I'd like to probe someone knowledgeable to find out more.

[0] https://lists.gnupg.org/pipermail/gnupg-users/2017-March/057...


I am having some trouble understanding as well. Here is what I understand.

The point of GNUK is to move your GnuPG private key to a more secure device so it doesn't have to sit on your computer. With GnuPG, users are basically in control of the cryptographic operations: what keys to trust, what to encrypt or decrypt, etc.

With U2F, in order to comply with the spec you are basically forced to make a bunch of decisions that don't necessarily line up with GNUK's goals. You have to trust X.509 certificates and the whole PKI starting from your browser (CA roots and all that). Plus, U2F is basically a cheaper alternative to client certificates, but with GNUK you already have client certificates, so why go with something that provides less security?

To elaborate: With GnuPG, the reason you trust that Alice is really Alice is because you signed her public key with your private key. You can then secure your private key on a hardware device with GNUK. With FIDO U2F and GMail, you have to trust that you are accessing GMail, which is done through pinned certificates and a chain of trust starting from a public CA root. This system doesn't offer you much granularity for trusting certificates. Adding FIDO U2F to a system designed to support a GnuPG-like model of trust dilutes the purpose of the device. By analogy, imagine if you used your credit card to log in to GMail, maybe by using it as the secret key for U2F. The analogy isn't great, but you can imagine that even if you can trust that (through the protocol) GMail can't steal your credit card number, the fact that you are waving your credit card about all the time makes it a little less secure.

In general, people who work on GnuPG and other similar cryptography systems tend to be critical of the whole PKI, and I'm sympathetic to that viewpoint.


U2F really, really isn't at all like client certificates. The certs baked into tokens are for _batches_: the specification says (and famous implementations like those from Yubico do this) that a batch of thousands of similar keys should have the same certificate. It exists to say e.g. "This is a Genuine Mattel Girl Power FIDO Token" versus "This is a Bank Corp. Banking Bank Security Token". Relying parties are discouraged from examining it, since in most cases there's no reason to care.

Unlike your GnuPG key, the FIDO token isn't an identity. The token is designed so that a particular domain can verify that it is talking to the same token it talked to last time, but nothing more. So e.g. if you use your GnuPG key for PornHub, GitHub and Facebook, anyone with access to all three systems can figure out that it's the same person. Whereas if you use the same FIDO token, that's invisible even to someone with full backend access to all three systems.


In the last GNU related post about Emacs, a security person suggested changing defaults related to TLS to address many of the known dangers with the current PKI situation. There, the lead developer apparently didn't want to change the defaults because that would somehow be top-down aggression against user choice in the style of the TSA at the airport.

Here, you are saying that GNUK won't add FIDO U2F because the lead dev is critical of the whole PKI system. Thus, the GNUK user doesn't get defaults which allow them to easily bootstrap into the web services that are used by a large portion of the population.

I mean, that's fine and justifiable as individual projects go. But one could just as easily imagine the approach of these two projects switched so that Emacs reflexively reacted by choosing the most secure TLS settings for defaults, and GNUK being liberal with which protocols they add.

So what's the point of the GNU umbrella if users and potential devs essentially roll a roulette wheel to guess how each developer base prioritizes something as critical as security?


I appreciate the summary, but it's still a bit unclear to me. What do you mean by "for each website"? Certainly that doesn't mean every website in existence, so there must be some process by which a new website is registered with the hardware and the key communicated to the site?

But if so, I don't see how that solves the problem of "user goes to site X for the first time, mistakenly thinking it's Github." That registers a new entry for the site and sends different credentials/signatures than it would send to Github. But site X doesn't care that they're invalid, and logs you in anyway, hoping you will pass on some secret information.

Am I missing something?


Normal MFA is the user answering a challenge. Hopefully that challenge came from the expected site, but it is up to the user to verify the authenticity of the site. If the username/password/OTP challenge came from someone actively phishing the user, the phisher can use the user's responses to create a session for its own nefarious purposes.

Verifying the authenticity of a site is something that has been demonstrated both to be nontrivial and also something that the majority of users cannot do successfully.

U2F/WebAuthn tie the identity of the requesting site into the challenge - by requiring TLS and by using the domain name that the browser requested. So if the user is being phished, the domain mismatch will result in a challenge that cannot be used for the legitimate site.


Solely going by GP's summary, nothing needs to be 'registered with the hardware' because the public/private keypair is deterministically generated on-the-fly, cheaply, with a PRNG every time it's needed. Only two things are ever in the nonvolatile storage on the device: the secret key used as entropy to generate those keypairs, and the single global counter.

The system makes it impossible for phishing sites to log in to your account using your credentials. That's the threat model it guards against.

Entering 'secret information' that isn't user credentials just plain isn't part of it. Though wouldn't anyone phished by e.g. FakeGmail already get suspicious if they don't see any of their actual emails that they remember from the last time they logged in to Gmail?


>The system makes it impossible for phishing sites to log in to your account using your credentials.

So it's an additional factor for authentication, not a way of identifying fraudulent sites to the user? Okay, but you also said:

>Entering 'secret information' that isn't user credentials just plain isn't part of it.

Which is it?

>Entering 'secret information' that isn't user credentials just plain isn't part of it. Though wouldn't anyone phished by e.g. FakeGmail already get suspicious if they don't see any of their actual emails that they remember from the last time they logged in to Gmail?

You would think, but people have definitely entered information in similar circumstances. Also, there's always "sorry, server problem showing your emails, feel free to send in the meantime".


What contradiction? It just plain isn't part of the threat model. Was that not clear?

Although, actually reading the spec, it can actually double as a bit of extra authenticator of the website. Any site has to first request registration (client uploads a new, unique, opaque key handle/'credential id' to the server, along with its matching public key) before it can request authentication (server provides credential id and challenge, client signs challenge).

A credential ID is a unique nonce from the device's global counter, signed by a MAC. The real site will already have a registered credential ID, which the device will take, verify it, and use the nonce to HMAC the private key back into existence.

A phishing site you've never visited before will have no credential ID. Any fake ones it tries to generate will be rejected since the MAC would be invalid. One from the real website won't be accepted either, because the MAC incorporates the website's domain, too. They'd have to get user consent to create a new key pair entirely, which a user could notice is completely different from what's normally requested at login. Then they'd have to consent again to actually authenticate.

https://developers.yubico.com/U2F/Protocol_details/Key_gener...

https://fidoalliance.org/specs/fido-u2f-v1.2-ps-20170411/fid...
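
A rough model of that credential-ID check, assuming a MAC-style key handle (a nonce plus an HMAC over the nonce and the hashed origin); the layout and names are illustrative only, not the exact format any device uses:

    import hmac, hashlib, os

    DEVICE_SECRET = b"unchangeable-device-secret"

    def make_key_handle(app_id_hash: bytes) -> bytes:
        nonce = os.urandom(32)
        mac = hmac.new(DEVICE_SECRET, nonce + app_id_hash, hashlib.sha256).digest()
        return nonce + mac  # stored by the site; opaque and useless to anyone else

    def accept_key_handle(handle: bytes, app_id_hash: bytes) -> bool:
        nonce, mac = handle[:32], handle[32:]
        expected = hmac.new(DEVICE_SECRET, nonce + app_id_hash, hashlib.sha256).digest()
        # Fails for forged handles, and for real handles presented by a different origin
        return hmac.compare_digest(mac, expected)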


>A phishing site you've never visited before will have no credential ID. Any fake ones it tries to generate will be rejected since the MAC would be invalid.

That was my original question: presumably there has to be some way for new websites to be registered on the system. Does it just categorically reject anything not on a predefined list? I mean, there are legit reasons to visit not-github! And new sites need to be added.

In order to say something is “fake”, that has to be defined relative to some “real” site you intend to visit, and I don’t see how this system even represents that concept of “the real” site. Phishing works on the basis that the site looks like the trusted one except that I’ve never been to it.

Put simply: I click on a link that takes me to a site I’ve never been to. Where does this system intervene to keep me from trusting it with information that I intend to give to different site, given that the new site looks like the trusted one, and my computer sees it as “just another new site”?

>What contradiction? It just plain isn't part of the threat model. Was that not clear?

Not at all. You said it “stops phishing sites from using your credentials to log in”. That implies some other secret that’s necessary to log in (making the original credentials worthless in isolation), and yet the next quote rejected that.

If you were just repeating a generic statement of the system's goals, which wasn't intended to explain how it accomplishes them, then I apologize for misunderstanding, but then I'm not sure how that was supposed to clarify anything.

Late edit: as in the other thread, I think I’m just being thrown off by this being mislabeled as a phishing countermeasure, when it’s just another factor in authentication that also makes use of phished credentials harder. Not the same as direct “detection of fake sites”.


There doesn't technically have to be a way to register new sites. There is, but theoretically there never actually had to be, given keys are generated deterministically on-demand, using the website's domain name effectively as a salt. There's no system with a list of websites.

The signed challenge-response you give to the phishing site cannot be forwarded to the real site and accepted, because you used the domain name as part of your response, and as part of key generation, so it doesn't match. That's all that meant. 'Credentials' included the use of a public/private key, not just the typed password.


No! Registration is important. What you've described would be subject to an enumeration attack.

During Registration the Relying Party (web site) picks some random bytes, and sends those to the Client (browser). The Client takes those bytes, and it hashes them with the site's domain name, producing a bunch of bytes that are sent to the Security Key for the registration.

The Security Key mixes those bytes with random bytes of its own, and produces an entirely random private elliptic curve key, let's call this A, the Security Key won't remember what it is for very long. It uses that private key to make a public key B, and then it _encrypts_ key A with its own secret key (let's call that S, it's an unchangeable AES secret key baked inside the Security Key) to produce E

The Security Key sends back E, B to the Client, which relays them to the Relying Party, which keeps them both. Neither the Client nor the Relying Party know it, but they actually have the private key, just it's encrypted with a secret key they don't know and so it's useless to them.

When signing in, E is sent back to the Client, which gives it to the Security Key, which decrypts it to find out what A was again and then uses A to sign proof of control with yet more random bytes from the Relying Party.

This arrangement means if Sally, Rupert and Tom all use the same Security Key to sign into Facebook, Facebook have no visibility of this fact, and although Rupert could use that Key to get into Sally's account, the only practical way to "steal" the continued ability to do so would be to steal the physical Security Key, none of the data getting sent back and forth in his own browser can be used to mimic that.
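
A toy model of the wrapping scheme as described in this comment (the reply below notes Yubico actually derives the key from a nonce rather than encrypting it, so treat this as the general idea, not any vendor's real implementation). It assumes the third-party Python 'cryptography' package, and real devices also bind the origin hash into the handle:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.asymmetric import ec

    S = AESGCM(AESGCM.generate_key(bit_length=256))  # the device's unchangeable secret, generated once for this sketch

    def register():
        a = ec.generate_private_key(ec.SECP256R1())          # fresh private key A
        b = a.public_key()                                   # public key B
        nonce = os.urandom(12)
        a_bytes = a.private_numbers().private_value.to_bytes(32, "big")
        e = nonce + S.encrypt(nonce, a_bytes, None)          # "key handle" E
        return e, b   # the token forgets A; the Relying Party stores (E, B)

    def authenticate(e: bytes):
        nonce, wrapped = e[:12], e[12:]
        a_bytes = S.decrypt(nonce, wrapped, None)            # recover A from E
        # ...then sign the Relying Party's fresh challenge with A
        return ec.derive_private_key(int.from_bytes(a_bytes, "big"), ec.SECP256R1())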


Right, there's a good security reason they have the registration step. Though I don't think what you described is quite how it works. The FIDO and CTAP protocols don't let the Relying Party provide any entropy to the authenticator. The only input is the domain name(+ user handle in CTAP). Authenticator has to create the key with its own entropy. It doesn't need the server's entropy to have multiple keys per domain name.

(In Yubikeys' case, E is actually a Yubikey-generated random nonce that's used to generate the private key by HMAC-ing it with S and the domain name, not an encrypted private key, but that's all opaque implementation details. E can be anything as long as it reconstructs the key.)


No, the Relying Party absolutely does provide entropy here. Specifically the "challenge" field which you probably think of as just being for subsequent authentication is _also_ present in the registration and is important.

This challenge field, as well as the origin (determined by the client and thus protected from phishing) are turned into JSON in a specified way by the client. Then it calculates SHA-256(json) and it sends this to the Security Key along with a second parameter that depends on exactly what we're doing (U2F, WebAuthn, etcetera)

You can see this discussed at the low level in FIDO's protocol documentation: https://fidoalliance.org/specs/fido-u2f-v1.0-ps-20141009/fid... and you can see the Javascript end discussed in WebAuthn: https://www.w3.org/TR/webauthn/#createCredential

The Security Key doesn't get told the origin separately, it just gets the SHA256 output, this allows a Security Key to be simpler (it needn't be able to parse JSON for example) and so the entropy from the Relying Party has been stirred in with the origin before the Security Key gets involved.

As well as values B and E, a Security Key actually also delivers a Signature, which can be verified using B, over the SHA-256 hash it was sent. The Client sends this back to the Relying Party, along with the JSON, the Relying Party can check:

That this JSON is as expected (has the challenge chosen by the Relying Party and the Relying Party's origin)

AND

That SHA256(json) gives the value indicated

AND

That public key B has indeed signed the SHA256(json)

The reason they go to this extra effort with "challenge" and confirming signatures during registration is that it lets a Relying Party confirm freshness. Without this effort the Relying Party has no assurance that the "new" Registration it just did actually happened in response to its request, I could have recorded this Registration last week, or last year, and (without the "challenge" nonce from the Relying Party) it would have no way to know that.
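
Roughly what those three checks look like on the Relying Party side; the field names here are illustrative rather than the exact U2F/WebAuthn message structures, and the actual signature check is abstracted behind a verify_signature callback:

    import hashlib, json

    def verify_registration(response: dict, expected_challenge: str,
                            expected_origin: str, verify_signature) -> bool:
        client_data = json.loads(response["client_data_json"])
        # 1. the JSON carries the challenge we chose and our own origin
        if client_data["challenge"] != expected_challenge:
            return False
        if client_data["origin"] != expected_origin:
            return False
        # 2. hash the JSON exactly as the client did before handing it to the key
        signed_hash = hashlib.sha256(response["client_data_json"].encode()).digest()
        # 3. the signature over that hash must verify under the new public key B
        #    (this also confirms the hash the key signed matches our recomputation)
        return verify_signature(response["public_key"], signed_hash, response["signature"])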

Thanks for correcting me on how Yubico have approached the problem of choosing E such that they don't need to remember A anywhere.

[edited: minor layout tweaks/ typos]


>So it's an additional factor for authentication, not a way of identifying fraudulent sites to the user?

It doesn't identify fraudulent sites (TLS is the tool for that), but it won't give a properly signed login response for gmail.com to a request from the site fakegmail.com.

That's a poorly worded answer to your question, but here are some slides I made to explain the specification: https://docs.google.com/presentation/d/1AkcTHahME5xY-FExm6vN...


It could still be susceptible to a user mistaking fakegithub.com to github.com, but a pairing with github.com will never work with a request from a server from fakegithub.com. Likewise, github.com cannot request the user to sign an auth challenge for fakegithub.com. The requesting server is directly tied to the signature response.


Okay, but then that doesn’t sound like phishing protection, but obviation of theft of secrets ... aka regular multi factor authentication.


> For each website, e.g., GitHub, it creates a HMAC-SHA256 of the domain (www.github.com) and the secret key, and uses this to generate a public/private keypair. This is used to authenticate.

Can one usb device work on two separate accounts for a given domain, (e.g. work gmail and personal gmail), or do you need two of them?


One device can work on two separate accounts, no problem. For the same reason you can use the same password for two different accounts (although there are other reasons why you wouldn't want to do that).


So is the main difference between FIDO U2F and regular TOTP simply the addition of the HMAC-SHA256 of the domain on the server side?

Is there a requirement that FIDO be implemented on a hardware device?


Do you have a recommended device? Ideally it would work reasonably well with iphone as well as macbooks (unfortunately both usb-A and a courage's worth of usb-C).

thank you


I hope Firefox will implement it, because Chrome isn't a trusted browser anymore


Firefox has implemented it, although you need to go to about:config and enable "security.webauth.u2f" for it to work.


This is disabled by default because it doesn't entirely work. WebAuthn is fully implemented in Firefox, and on by default, but U2F is so far still much more common, and the U2F enabled by this feature switch is only kinda-sorta compatible.

Sites need to move to WebAuthn, which works with the same tokens and browsers (well, Chrome, Firefox, Edge) have either shipped or demo'd with a promise to ship. But right now today U2F works in a lot of places if you have Chrome whereas WebAuthn is mostly tech demos. The most notable website that has WebAuthn working today is Dropbox, seamless in Chrome or Firefox, any mainstream platform (including Linux) and all the tokens I have work. That's what everybody needs to be doing.


How much implicit trust do users place in the manufacturers of their security keys?


U2F is fantastic. I wish Apple supported it in Safari (hoping!).

Also, YubiKey 4 is a great device. Set it up with GnuPG and you have "pretty good privacy" — with convenience. I recommend this guide for setting things up: https://github.com/drduh/YubiKey-Guide

The great thing about YubiKeys is that apart from U2F, you also use them for OpenPGP, and the same OpenPGP subkey can be used for SSH. It's an all-in-one solution.


WebAuthn is coming to WebKit (already available in Firefox, Chrome and Edge). Once that is supported we should be able to have U2F everywhere.


Nice! I'm also looking at getting a YubiKey 4 or a Nitrokey after reading about it being used by all developers with commit access to kernel.org.

https://www.linux.com/blog/2018/3/nitrokey-digital-tokens-li...


Yea, except YubiKey got compromised.

https://www.yubico.com/support/security-advisories/ysa-2017-...

And, if you lose your fob or your backup fob you're boned.


That vuln only affected RSA keys generated for specific niche functionality and not most uses of the YubiKey.

> The issue weakens the strength of on-chip RSA key generation and affects some use cases for the Personal Identity Verification (PIV) smart card and OpenPGP functionality of the YubiKey 4 platform. Other functions of the YubiKey 4, including PIV Smart Cards with ECC keys, FIDO U2F, Yubico OTP, and OATH functions, are not affected. YubiKey NEO and FIDO U2F Security Key are not impacted.


That didn't stop me getting about 15 calls from RSA declaring Yubikey will never recover. The annoying thing with this non-issue is the FUD around it.


Hm, I suppose, though that is the functionality the poster I was replying to was discussing. Still, one has to wonder what other flaws are lurking below the surface of that chip; it isn't flawless. Once there is another major issue it is going to be an abandon-ship type of situation. What are the alternatives, if any: move to a new key that doesn't have the problem, look into an alternative means, etc.?


I think this is a revocation and provisioning problem: when the device is compromised, how hard is it to revoke that device and provision a new one for yourself?

Structurally, actually making these tokens should be commoditized anyway. So on the software side, it needs to be not absolutely painful to rotate credentials. Something like a one-time-pad that you can use in "in case of fire break glass" situations.


If you've ever used GitHub's SSH keys provisioning, any halfway decent U2F or WebAuthn implementation (including GitHub's) works a lot like that.

You can register as many keys as you like within reason, you can give them names like "Yubico" or "Keyfob" or "USB Dildo" and any of them works to sign in.

Once signed in you can remove any you've lost or stopped using, and add any new ones.

The keys themselves have no idea where you used them (at least, affordably priced ones, you could definitely build a fancy device that obeys FIDO but actually knows what's going on rather than being as dumb as a rock) and there's no reason for your software like a browser to record it. Crypto magic means that even though neither browser nor key remembers where if anywhere you've registered, when you visit a site and say "I'm munchbunny, my password is XYZZY" it can say "You're supposed to have one of these Security Keys: Prove you still do" and it'll all just work.


Thanks for the explanation. It all makes sense, and the public/private key system is awesome for that.

The point I was getting at was "if your one Yubikey is stolen, what do you do?" If you fall back on password authentication, then your Yubikey based system was only as secure as the password mechanism protecting your account recovery mechanism.

The answer might be "provision two keys and stick one in a bank deposit box", etc. Regardless, there's an inherent problem that you want your recovery mechanism to be as hard to crack as your primary authentication mechanism, but you need it to not be an absolute pain.


Most sites require you to set up another form of 2FA along with U2F (for example, TOTP using Google Authenticator). There are also recovery codes that you print and store on paper.

I don't consider losing a Yubikey to be a serious problem, though it's important not to use it to generate RSA keys, as then you will not be able to make any backups. Generate your keys in GnuPG and load them onto the key, keeping backups in secure offline locations.


Several of the sites offering 2FA begin by telling you a bunch of arbitrary one-use passwords for such emergencies. They suggest you write _those_ down and stash them somewhere.

They also tend to propose you provision several other 2FA mechanisms, such as SMS or TOTP OTP. But yes, I always begin by enrolling two Security Keys, and then one of them goes back in my desk drawer of emergencies.


Potentially difficult if you were relying on a unique product like yubikey which doesn't have a 1 to 1 competitor in the industry at the moment.


There are many makers of FIDO U2F compliant hardware devices these days.


The original poster was discussing the OpenPGP feature. The U2F feature of YubiKey wasn't compromised by the vulnerability.

The vulnerability is real and still exists. There was even someone in this HN thread that was planning to use an old key fob Arstechnica sent him, specifically for the OpenPGP feature.

I should have split my backup and vulnerability comments into two, because they've sparked two unrelated debates. It started out as such a simple comment! :)


Yes, but with OpenPGP you can just rotate your subkeys. For encryption subkeys it's advised to back them up somewhere either way.

Or maybe you're talking about the U2F applet of the Yubikey? Then it's not affected by the bug you posted. And you should have backup codes enabled.


The use case I gave is: You lost your backups and your main, now what? You're done. Firesale on your life or business. Backups are something everyone has to contend with in any situation, but it isn't one that has been completely solved in the security industry yet in a way that is acceptable or uniform in any way. The average user just doesn't have a clear system for providing a high level of protection for both their security and ensuring they have redundancy in their life or livelihood.

There are lots of different ways to skin a cat but no one has established a definitive solution or made it easy or obvious. Something like a YubiKey is only one part of a solution, and without something more you are at risk. Or, perhaps there's a way to create an encryption scheme with redundancies built in so you're never in that situation to begin with. What if the concept of a backup was built into the key exchange and losing your original didn't necessarily lock you out?


Is this really a part of the standard? There isn't a "I lost my token" process like there is an "I forgot my password" process on every website now?


None if this affects me — I generated my keys using GnuPG and I do have backups (offline, of course).


I'd like to mention that I've been testing the Krypton app (iOS only for now) for U2F. You install Krypton on your iOS device, it creates keys that are only on the device. You then install the extension for Chrome. When U2F is requested they send the challenge to the iOS device, which calculates the response and sends it back to the extension. The app can be configured to require approval or to always send the response.

The app also supports SSH keys.

Works very well for me and the service is free. https://krypt.co/


Good to hear you're liking U2F on Krypton. Android support was released last week, and Firefox/Safari support is coming soon!


I wish you just had the workstation download on the homepage again. I had to find your homebrew bottle GitHub repo to figure out how to install Krypton on my new MacBook.


Agree. The new page seems phishy. I double checked the domain and certificate before trusting the page at all. Other than that.. great product


Sorry for the inconvenience. You can also find the install instructions on the help screen of the app.


When I started using Krypton for ssh and code-signing last year, the first thing I did was ask the Krypton team on twitter if they were going to add U2F. Glad to hear it’s in beta! It’s rarer these days to subsume another device into our phones’ functionality, but it’s still a good feeling.


Am I the only one who is disappointed in the seemingly stalled traction for U2F? Google, GitHub, and Facebook supported U2F 2 years ago - so all I can see is that Twitter, Dropbox and niche security news like KrebsOnSecurity.com have added support since then? Sure it's something, but in 2 years I would have expected more - who am I missing? Without more websites, the consumer mass market has little incentive to adopt - and without users, websites have little incentive to support U2F - thereby furthering the stall.


Well, maybe I'm over-reaching, but I think that most banking "security" sucks.

Last month I tried to make an e-banking account in South Europe. In 2018.

- They required "6-12 characters as a password, and no special characters". You can't hash special chars?

- Apparently it's okay, because "2FA". Which is a "changeable via a call" 4-digit code, of which the bank employee knows "only" two digits.

I'd be far more inclined to trust Twitter or GitHub than my bank with my data.


In my country, many banks force people to install "security modules" which includes a driver that monitors their network. There is no privacy policy.


I needed a new bank and thought surely there will be one that offers U2F.. days of searching later, and I still have yet to find one that does. It seems like the vast majority of online banks don't even support any kind of 2FA except email/text. Really really sad.

For regular guys like me, I can't think of any online service more important to protect than my bank account.


From https://twofactorauth.org/#banking, the only American or Canadian bank that supports a Hardware Token is Wells Fargo - which only seems to support RSA SecurID: https://www.wellsfargo.com/privacy-security/advanced-access


Banks seem very slow to adapt to technology. My credit union for years after the release of the first iPhone still used a Flash login, although they did have a mobile login link you could get from them by asking.


FWIW, in Poland some banks started using 2FA (many different types) several years before Google or any other site I know of.



Yet only Chrome is supported -- and this does not include chromium-browser on Linux.


It sounds like they force you to use a phone/email code when you log in from a new device? Or am I reading that wrong.


> Am I only the one who is disappointed in the seemingly stalling of traction for U2F?

The problem is that all of these things are a PITA to administer.

I wanted a VPN between our two offices. Cool. I'll buy some YubiKeys, type some command line magic on Linux and I'll be good to go ...

Psyche!

This stuff is fine if you have 100+ people and the resources to administer.

If you simply want to manually distribute stuff to <10 people, it's a nightmare.

Until I can set up something easily at the 10-person level and scale it gradually to 100+, this stuff is going to remain tractionless.


U2F was never fully supported in browsers, making it hard for sites to deploy it everywhere. The new WebAuthn standard is going to be supported everywhere, which makes it more likely that sites will actually use it.


Something like U2F is never going to find mass success in a consumer application. Every enterprise auth provider supports it, which is its major use case for now.


I guess this is a dumb question, but is it still "multi factor authentication" if you only use a single physical device to complete the login process?

The way the article is written, it makes it sound like the physical key is a replacement for 2FA instead of just a hardware device that handles the second factor (while leaving the password component in place).


The key replaces the keycode of 2FA auth - password still has to be used.

You can already use the same process on your GMail if you have a compatible U2F key.


OK thanks, this clarifies the part that says "it began requiring all employees to use physical Security Keys in place of passwords and one-time codes," which I found super confusing.


Strange sentence, but I believe they mean replacing "password + one-time code" with "password + U2F".


Actually U2F can be used to devise several different schemes, either token+password, or token alone, even just token without username. Of course each of these has various advantages and disadvantages.


Some logins are cookie + security key (basically if I've already logged in today) which basically feels like "tap my security key and I'm logged in".

Of course, more sensitive stuff (access to production, access to pay stubs, access to $cloud_erp) requires re-entering password plus the security key.


The password can be replaced by something simpler like a PIN, which is why you'll read about U2F replacing passwords and one-time codes.

Sometimes 'replacing passwords' is used to mean 'replacing the traditional username and password login' as well.


> I guess this is a dumb question, but is it still "multi factor authentication" if you only use a single physical device to complete the login process?

This is a common misconception. The threat model of 2FA is not "I lost my device, and it is now in the hands of someone who knows the password".

The threat model of 2FA is one of:

1) "An attacker has gained remote access to my computer, but not physical access"

2) "I have been targeted by a sophisticated phishing attack, and I trust the machine that I am currently using"

TOTP (and even SMS) protects against (1) in most cases, though U2F is still preferable. U2F is the only method that protects against (2).


> U2F is the only method that protects against (2)

A bit of clarification: U2F protects against phishing attacks by automatically detecting the domain mismatch when a link from a phishing email sends you to g00gle.com rather than google.com, which is something that a human might overlook while they're typing in both their password and the second factor they've been sent via SMS. However, if someone were to use a password manager and exclusively rely on autocomplete to fill in their passwords, then that would also alert them to the fact that something was fishy when their browser/password manager refuses to autocomplete their Google password on g00gle.com. So this isn't exactly the only method that protects against the second scenario above... though I will concede that using a password manager in this way would sort of change 2FA from "something you know and something you have" to "these two somethings you have" (your PC with your saved passwords and your USB authenticator), which is something that might be worth considering.

Regardless, these physical authenticators are a huge step up from SMS and I'm very happy that an open standard for them is being popularized and implemented in both sites and browsers.


> However, if someone were to use a password manager and exclusively rely on autocomplete to fill in their passwords, then that would also alert them to the fact that something was fishy when their browser/password manager refuses to autocomplete their Google password on g00gle.com.

Lots of websites do weird modal overlays, domain changes and redirects, redesigns, or other tricks that break password autocompletion. I've never seen a secure password manager that's robust enough against all of these that it would eliminate the human factors causing the phishing opportunity here.

Apparently Google hasn't either, because that was their motivation behind developing these schemes.


> U2F is the only method that protects against (2)

Would you be able to elaborate on this? I'm not understanding the difference between TOTP and the physical key from the article for this scenario.


With TOTP, a sufficiently clever phish may convince you to enter the one time code.

With U2F, there is communication between the browser and the device, requesting authentication for a specific origin hostname -- that can't (shouldn't) be fooled by a phish hosted at Google.com-super-secure-phishing.net


Where do password managers fit in here? If a phisher convinces me to try to login to google.com-super-secure-phishing.net using my google account I'm going to notice something is wrong when my password manager refuses to fill in the login form.


This is where it comes down to user behavior. One of the security engineers from Stripe gave a talk about this at Blackhat last year -- she ran phishing campaigns in which users ignored that autofill didn't work and manually copied/pasted their password manager credentials into the phishing sites.

https://www.youtube.com/watch?v=Z20XNp-luNA


> If a phisher convinces me to try to login to google.com-super-secure-phishing.net using my google account I'm going to notice something is wrong when my password manager refuses to fill in the login form.

You say that, but the overwhelming body of evidence from real-life phishing attacks and red-team exercises demonstrates that even very technologically-literate engineers will not consistently notice.


Google and Apple both have mobile (non-SMS) based two factor prompts that seem equally immune to phishing?


> Google and Apple both have mobile (non-SMS) based two factor prompts that seem equally immune to phishing?

Any "type in a code" or "approve this login (yes/no)?" authentication factor is technically vulnerable. All the phishing site needs to do is proxy the authentication to the actual site in real time.

These guys put together a great overview of the approach: https://www.wandera.com/bypassing-2fa/


The current domain is sent to the device and used to derive the private key that is used to authenticate. If it's a phishing domain, the device derives a different key, so the response it returns won't be valid on the real domain.


That's interesting, thanks.

I always thought the 2FA threat model was "Someone acquired my password" or else "someone has access to my email account and may try to do password resets by email."


First paragraph answers it:

> Google has not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical Security Keys in place of passwords and one-time codes, the company told KrebsOnSecurity.


If they "use (physical security keys) in place of (passwords and one-time codes)", that would no longer be MFA: they're authing strictly with "something they have".

A more in-depth quote is later in the article: "Once a device is enrolled for a specific Web site that supports Security Keys, the user no longer needs to enter their password at that site (unless they try to access the same account from a different device, in which case it will ask the user to insert their key)."

The parenthetical seems to imply that they're doing initial auth (and thus cookie generation) with password + U2F, and then re-validating U2F for specific actions / periodically without re-prompting for the password, similarly to how GitHub prompts for "sudo mode" when you do specific account actions.


Correct me if I am wrong, but I think the YubiKeys are PIN based. So in order to use it to authenticate you have to enter a PIN, and three wrong attempts results in it locking. The PIN itself would be the "something you know," and the YubiKey is the "something you have."


Depends how you use the yubikeys.

They support x509 certs, which use PINs. Whether it needs the PIN once-per-session, once-per-15seconds, or once-per-use is configurable. The number of failures before it locks is also configurable. More details can be found here: https://developers.yubico.com/PIV/Introduction/

They also support TOTP/HOTP, where the computer asks the device for a code based on a secret key that the device knows. This can require a touch of the button.

EDIT: TOTP/HOTP modes do support a password, as cimnine pointed out. I'd forgotten about that setting.

Yubico OTP is similar to TOTP/HOTP, and is the mode where the yubikey emits a gibberish string when pressed. The string gets validated by the server against Yubico's servers. This does not require a PIN.

The U2F mode does challenge/response via integration with Chrome and other apps. The app provides info to the USB device about the site that's requesting auth, then you press the device and it sends a token back. This is critical to the phishing protection: barring any vulns in the local app (which have happened before), you can't create a phishing site that will ask my Yubikey for a challenge that you can then replay against the real site. This mode requires physical touch but no PIN.


YubiKeys support PINs for protecting TOTP/HOTP.


Right you are, my mistake. I've added an edit to my comment correcting that.

Thanks!


They do have a PIN but you don't enter it every time you authenticate.

The PIN is used to configure the YubiKey itself.


This isn't correct, they only have a single small button.


Poorly worded (or possibly misunderstood) — it was password + OTP → password + U2F. (In practice the OTP was also usually supplied by a dedicated USB stick, so the change was mostly transparent.)


Now the real question becomes: how often were they getting phished before the new policies? Knowing Google, there's no way they will answer THAT before another decade.


Why? What does it matter? Presumably it must have happened at least once for them to bring in this policy.


Well, for one, it would put the non negligible costs in perspective. Second, it would be an additional data point. More data is usually better than less of it. In this case, though, it's sensitive data, for a number of reasons, which is why I don't see it happening for years.


Huh? The article literally is about 2FA, as originally conceived. It isn't replacement for 2FA -- it is 2FA.

The key (sic) thing about U2F isn't that it is new and special (it isn't -- it's plain old 2FA as used for more than a decade) but rather that it is practical to deploy for smaller organizations. You don't need to buy large quantities of keys. You don't need a special server. You don't need staff with special skills to deploy it. It works with "cloud" providers like Google and Facebook, out the box (same key as you use for your internal services).


Not quite. A 6 digit code can be phished out from users pretty easily. They'll enter it anywhere its asked, similar to a password.

However, the U2F and FIDO specs require a cryptographic assertion (with all that replay-attack mitigation stuff like nonces) that makes it so an attacker can't reuse a token touch. I'd probably encourage a glance over this https://fidoalliance.org/specs/fido-u2f-v1.0-ps-20141009/fid...

Sadly the Wikipedia article doesn't have a good layman's explanation yet, but I'm sure it will soon.

Yes, at a high level it's still 2FA, but like most options in any factor of auth, it can be improved upon. (For a simple case, take fingerprint readers and look at the advances in liveness checks and how many unique points they require.)


When I say "2FA" I mean proper 2FA with a hard token. As used for 20 years or so in government, large companies.


No, the key thing about U2F is that it can't be phished.

Any other 2FA method can.


How do you phish a smart card?


The best explanation is really: https://fidoalliance.org/

Short version: the keys are matched directly from the device to the site making it virtually impossible to phish unless you control the site itself.


Microsoft's position is interesting. The article states Edge will be implementing support this year. They run Github, which supports U2F.

But Microsoft are in the process of launching a new MFA and password management product in Office 365/Azure, and I'm informed U2F isn't on the roadmap.


Welcome to The Strategy Tax


Google's instructions for setting up these keys are unfortunately very bad, so I wrote this guide: https://techsolidarity.org/resources/security_key_gmail.htm


I wonder what happens when you forget your key at home. If I forget my keyfob for work, I usually have to do the walk of shame, around the hallway from the reception desk, whenever I use the bathroom. But I can still get in. And if I really wanted to, I could get a temp key for the day.

Do companies like Google, et al. have security departments that give people temporary keys that expire after a day, or do they have to run back home?


Yeah, you usually have one-time-use keys that can work in cases like that. Or you can get a new key from IT and go through the registration process again.


For my personal email (gmail) I only recently went to Google Authenticator because I finally figured out how to put it on multiple devices. I was worried about the SMS code method's security, but I had no other way to ensure that losing my phone wouldn't leave me with no way to get into my accounts. (If I lose my phone I can still resurrect my phone # on another device by SIM replacement).

The U2F works fine for corporate, etc. where you have a support team who can help you in case you lose it or forget anything. They can make you come in person and prove that you are you.

The problem with implementing this for personal is that if you ever lose the key or code generator, you are absolutely fucked because there is no way to prove who you are to Google and have them reset your password / security.


You don't need multiple authenticators. The right thing to do is print multiple copies of your one-time emergency backup codes (https://support.google.com/accounts/answer/1187538) and put them in many places. Wallet, car, house, parents' house, etc. You only have to do this once. The codes are useless without your password and you can revoke them at any time if you really need to, so spread them widely and then you won't ever have to worry about losing your authenticator.


What I personally do is the backup codes stored in a bank vault.


Notice something important: they didn't use SMS.

Also, look at how GitHub uses U2F. Anytime you need to make an account change, you can simply tap your u2f key. It's a great user experience and really locks down your account.


How does one use a U2F / YubiKey on a mobile device like an iPhone (lightning port), or are they only compatible with laptops and Android phones (USB-2, USB-C, USB-3) connections?


You have to get the version that works on mobile (Neo?) which uses NFC to connect to the phone. It works properly on Android.

Due to some restrictions from Apple it doesn't work well on iOS yet:

https://www.yubico.com/2017/10/iphone-support-yubikey-otp-vi...


On Android, you have plenty of options and they all work. Specifically, my Neo key works with NFC and my YK4 works with USB OTG (both original flavor and USB-C).

Apple limits the capabilities of Yubikeys so much that it's best to summarize it as "Doesn't work". It's more of Apple's anti-competitive restrictions that seem to go unnoticed by most people because they have a shiny UI.


Thanks for that info.

So, then, if Apple wanted to adopt a similar solution to Google - i.e. for their own enterprise security, as opposed to for their customers - could they use one of the Yubikey NFC options, or, would they have to ‘create’ a Bluetooth-specific device?

Maybe the long-rumoured Steve Jobs ring could take the form factor?!

And from an Apple customer perspective, is there a valid argument to be made that iOS devices can be, or are currently, less secure than Android counterparts because of the current lack of Yubikey / U2F options?


The Yubikey NEO recently supports iPhone 7 and up: https://www.yubico.com/2018/05/yubikey-comes-to-iphone-with-...


With some limitations, at least in iOS 11:

“Besides the fact that the NFC Reader interface can only be fired up from an app, Core NFC [in iOS 11] does not allow for write operations that are required for authentication protocols like FIDO U2F. ...

However, because NFC tag reading is supported, it allows developers to build apps, including consumer facing or purpose-built enterprise applications, with one-time passcode (OTP) support.”

Which is what ‘smiley1437’ and others have effectively stated in other parts of this thread.

edit: added last sentence


There is a NFC Yubikey which you can tap against your phone to log in.

There is also a bluetooth version which is not manufactured by Yubikey.

https://www.amazon.com/dp/B01LYV6TQM/?coliid=I3IA8VD7QRRZDT&...


The protocol specifies 3 transport options with this in mind: USB, NFC, and Bluetooth.

As others mention, NFC works great for Android. Bluetooth is your only option for iOS, and it's clunky because you have to deal with pairing.


I asked the same question earlier, then found out that Yubikeys can work with a lightning-USB adapter.

[1]https://support.yubico.com/support/solutions/articles/150000...


Thanks, good to know.

Again iOS 11 only supports the One-time password (OTP) featureset:

“When a YubiKey is connected to an iOS device, only the OTP mode functions listed below are available. Yubico OTP Static Password OATH-HOTP”

Does anyone know if iOS 12 will update Core NFC to overcome: “Core NFC [in iOS 11] does not allow for write operations that are required for authentication protocols like FIDO U2F.” [1] https://www.yubico.com/2017/10/iphone-support-yubikey-otp-vi...


iPhones and high-end Androids have their own secure elements these days. I’d expect the OSes will eventually provide their own hardware-backed U2F APIs that you enroll directly, rather than piggybacking tokens. Same for corporate laptops with TPMs.


Good point, that makes a lot of sense with Apple’s hardware strategy in mind.


People already mentioned the NFC, but you can also use the key on another secured device (w/ USB) to generate an OTP for the phone login. This still requires using the key to get the temporary OTP.


Is there any inexpensive USB-based security keys? I'd love to get one for my Mac and PC but Yubico Nano is $50; I would like two of these, but they are $100 already.

EDIT: I also see Yubico Fido Keys which are $20 each (and $36 for 2). Are there any differences between these and the regular Yubico keys?


If you have to ask, get the super cheap ones; you probably aren't going to use any of the features on the expensive ones (like the nanos and the Y4s).

You will read lots of people talking about the cool things they do with their Y4s, but really they're just doing it because they can, not because there's a well-thought-out security benefit they're getting (I'm as guilty of this as anyone). 95% of the benefit of a security key is simply U2F.


I use Yubikeys to store my GPG and SSH private keys. This way, even if my laptop gets 0wned, the attackers won't be able to get my private keys. It's basically a more convenient form factor than using a smartcard (which was how I had previously stored my GPG / SSH keys).


Is there any simple way to keep SSH keys synced across multiple keys?

I have 2 laptops and an Android device, so if I was to start using these things it would be convenient to keep my private keys available across them all.


I don't think it's so much about features, but I do think there's a huge difference between having a nano and a normal sized one, especially on a laptop.

Being able to leave it in instead of having to take it out of your wallet and plug it in every time makes a big difference in usability, which in turn makes it much more likely that you'll want to use it everywhere.


It's funny because I see it the opposite way: the problem with the nano is you'll want to leave it in all the time, which reduces your practical security. I go out of my way to keep the security key on my physical keychain; I don't even like leaving it in my bag, where it might get stolen along with the computer.


What's the threat model this protects against?

(In other words, what's better about having the security key being on your keychain vs. be a permanent part of your computer)


I am less likely to lose both at the same time as I am to lose just one, and you'd need both of them to get access to accounts.


What do you mean you need both? The two in 2-factor is password + ubikey, not laptop + ubikey.

If someone gets your laptop with ubikey inside, they can't do anything without your password. And the whole point of a physical second factor is basically to stop the 99.9% of people (mostly from other countries and cities) that have no physical access to you from hacking you.


It's "Yubikey", not "ubikey". The Nano and Y4 keys do more than U2F; they also generate and store RSA keys. They aren't just 2FA devices; they're also small HSMs.


If I have access to someone else's laptop with a nano key plugged into it, can I access all the SSH key files stored on the key, or are they also protected from access somehow?


Yubico Fido keys (Security Key by Yubico, the blue ones) only support FIDO U2F. This lets you use them for logging into GMail, DropBox, and other websites which support FIDO U2F. You can't store passwords or other data on them, which means you can't use them for storing SSH keys or things like that.

There's not some technical reason why you can't use FIDO U2F for SSH authentication, it's just that the software support isn't there yet. You could probably hack this together yourself: generate an SSH key on your computer, authenticate to some certificate manager with U2F, and then use that to install the public key on the computers you want to access. You could then have it automatically expire 8h later, forcing you to reauthenticate and giving attackers a shorter window to compromise your machine if they want to hop to others with SSH.
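
If anyone wants to experiment with that idea, a very rough sketch might look like the following. The cert-manager URL and its API are entirely hypothetical, the U2F step is assumed to have already produced a session token elsewhere, and it uses the third-party 'requests' package:

    import os, subprocess, tempfile, requests

    def provision_short_lived_ssh_key(session_token: str, hours: int = 8) -> str:
        keydir = tempfile.mkdtemp()
        keyfile = os.path.join(keydir, "id_ed25519")
        # Generate a throwaway keypair locally
        subprocess.run(["ssh-keygen", "-t", "ed25519", "-N", "", "-f", keyfile], check=True)
        with open(keyfile + ".pub") as f:
            pubkey = f.read().strip()
        # Hypothetical endpoint that only honours U2F-authenticated sessions and
        # pushes the public key to target hosts with an expiry
        requests.post(
            "https://certmanager.example.com/authorize-key",
            headers={"Authorization": f"Bearer {session_token}"},
            json={"public_key": pubkey, "expires_in_hours": hours},
            timeout=10,
        ).raise_for_status()
        return keyfile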


The cheapest way to get a yubikey in the US right now is to subscribe to wired for $10 for a year. You get a free YubiKey 4 within a month of your subscribe date.


The U2F-Zero is about $9, and the cheapest U2F device I know of. They work well, but aren’t as durable as other options.

https://www.amazon.com/U2F-Zero/dp/B01L9DUPK6


The $20 Yubikeys only do U2F; the more expensive Yubikeys also have the ability to generate one-time passwords and act as smartcards (for things like signing git commits).

I have the Nanos in all my computers because I'm used to that setup from working at Google, but that's more expensive than what I actually need.


So you leave it plugged in all the time?


Yes.


I looked into it a while ago, but none except for portability. The Nanos are designed to take up a USB slot and be semi-permanent in there. That is, they can be removed, but it's not as easy as removing a normal USB key. Either a good grip or a pair of pliers for the fingernail-less.


And you have to be very delicate when using tools on it or else you'll chip away at it. I've found needle and thread to be effective.


yeh this is probably miles better than what I do to my Nanos.


I'd like to add a related question: I already have three or four U2F keys. Is there any reason to upgrade to FIDO2 keys?


Not yet. I'm writing a Django library for webauthn support (i.e. logins without usernames or passwords), but no browser supports that yet, that I can see. They only support the second-factor mode, not the first-factor.


If you request resident keys the browsers seem to switch over. Chrome Canary had it hidden behind a command-line flag last I checked (a month ago, which is an eternity considering how hard they are pushing)


Yeah looks like it is now chrome://flags/#enable-web-authentication-ctap2-support


That's fantastic to know, thank you!


Unfortunately the $20 one is only available with USB-A, not USB-C.


When I buy a key, how will I know I can trust that key?

Is there any way to validate it?

Edit: It seems Yubico is a trusted brand so I guess you are safe when you buy keys from them.

Here is a list of FIDO certified products: https://fidoalliance.org/certification/fido-certified-produc...


(Cheap) Security Keys are very dumb objects. They don't know anything secret except their own secret key. So I think the two naughty things bad guys might do are:

1. Sell you a key whose secret key they already know. This is hard to defend against. But if you just buy a generic key from a reputable manufacturer and aren't a major target this seems pretty safe in practice.

2. Hide something malevolent inside the security key's case, e.g. it's secretly a GPS tracker or it's a tiny USB disk plus keyboard that hacks your PC after detecting inactivity.


Is there any way to generate a new key on the device? What steps can a paranoid user take to mitigate problem (1) above?


In principle devices could be designed with the ability to generate a new key. But if you don't trust the hardware how does that help?

You may be able to make more of the hardware yourself, depending on how capable you are with electronics.

(Much) more expensive devices can implement FIDO while actually using arbitrary new keys for each registration and you could arrange to hand-pick the keys and then verify it behaves as intended and uses your chosen keys.


In the more expensive devices, are the arbitrary new private keys imported from the computer it's plugged into? If the new keys are generated on the hardware you don't trust, it'd still be the same problem, since the private keys could be generated deterministically from a known seed and a counter.

You can at least verify when a cheaply-designed device has changed its secret key, because the public key it offers for github.com is different from before, but yeah, that 'new' secret key could still just be derived from a manufacturer-known seed/secret serial number, too, same as the first one was, but with an incremented counter.


You can order it on a library computer with a Visa debit card you bought at a grocery store you don't usually go to with cash, and ship it to another address than your own.


I guess you should also buy directly from Yubico? You can buy them on Amazon, but how do you trust that it's the right hardware? Do they offer some sort of validation?


Don't buy anything you need to trust from Amazon, especially when it's stuff that is easy to manipulate and/or easy to make a counterfeit of. Amazon puts everything that claims to be the same product into a single bin and if somebody sends Amazon counterfeited Yubico products, they will be delivered to anybody that buys a Yubico product, even when the seller they are buying from made sure to only send verified ones to Amazon.

This obviously only applies to products that are being shipped with "Fulfilled by Amazon".

Sources:

https://www.engadget.com/2018/05/31/fulfilled-by-amazon-coun...

https://www.theguardian.com/technology/2018/apr/27/amazon-si...

https://www.cnbc.com/2016/07/08/amazons-chinese-counterfeit-...


Yubico puts validatable device certificates on its devices: https://developers.yubico.com/U2F/Attestation_and_Metadata/


Nitrokey is open source (hardware and software). They claim the "installed firmware can be exported and verified, preventing attackers from inserting backdoors into products during shipping". (https://www.nitrokey.com/)

They list Google as one of their customers.

Also if you're a Linux kernel developer you get one for free: https://www.nitrokey.com/news/2018/nitrokey-partners-linux-f...


Too bad the MacBook Pro (without TouchID) only has two USB-C ports.

I use one for charging and the other one for the external display.

Guess I'll have to buy the one with TouchID for extra security.


If you buy the one with TouchID, you can use the TouchID element for U2F as well: https://github.com/github/SoftU2F/pull/29


A USB-C hub would be cheaper.


Only if you're looking for a USB-C to USB-A hub. There is a surprising lack of good USB-C multi-port hubs, let alone any that support USB PD on anything but the upstream port. You're just now starting to see any that even have generic pass-through of PD.


It is surprising; I understand the difficulty in making a USB-C multi-port hub, but it seems like a huge missed market opportunity. The MacBook is far from the only device with only a single USB-C port and a headphone jack.


Does anyone know how to enable U2F support for LastPass as the article claims? I was under the impression that LastPass only supported OTP codes with YubiKeys and not U2F.


I agree. I think it’s a mistake in the article. The author most likely saw that LastPass supports Yubikeys and thought that was the same as U2F?


Do the security keys prevent phishing because they will only log in to the same site by checking the domain? So you can't MITM someone.


U2F incorporates the origin, which prevents phishing (unlike TOTP 2FA)


Thanks!


They don't actually "prevent phishing" per se, they just make it ineffective. Users can still get phished if they use a hardware or software 2FA.

The thing is, if you fall victim to a phishing attack, the attacker may steal your credentials; but he/she will not be able to log in to your accounts because even though the username and password work, they still need to have the 2FA code which only you have in your physical possession (whether it be a YubiKey or an OTP on your phone). So the attack will be unsuccessful.

On the other hand, you could still get infected with malware via a phishing attack if you don't have a secure system. In this case, 2FA won't help much.


U2F actually does prevent phishing since the domain is included in the token generation.


I understand that, but let's say I try to phish you with a fake login page. Of course, the Yubikey won't send the code to that fake page as the domain name doesn't match, but an unsuspecting user could still enter his/her credentials into the fake form. As I said, the attacker may not do much with those credentials if every system uses 2FA, but they may be useful some day :)


Would it be possible to reverse order?

I mean: 2fa-code, login, password instead of: login, password, 2fa-code. Maybe login could be automatically filled based on 2fa-code public key? That should prevent leaking password to fake page.


You need the login first.

A cheap Security Key has no idea what public key it told you to use when registering.

There's a cute trick here. When you tell a key "Hi, authenticate please" you must send it a "cookie" it gave you during registration. Now this could in theory be some pointer it uses or whatever. But in fact it's actually the private key it will use to authenticate, encrypted with its own baked in secret key. It decrypts that, then authenticates. But if you don't know which user you're authenticating you can't send their cookies, you'd have to try every cookie for every user. Not fast.

If every user uses WebAuthn then just a login (username or email address or something) is enough. But if some just have passwords then doing anything before the password step gives away what's up.


An interesting solution could be to first enter the username, then the OTP/Key, then the password. I haven't given it a lot of thought and can't find anything wrong with it.


Like GP said, that would give away which accounts have WebAuthn enabled on them, since those without it would send you straight to the password prompt instead.

But more importantly, phishing sites will always tell you 'your key succeeded. Enter your password next' regardless, so this doesn't protect against password disclosure at all.


Nope, because you'd be relying on the fraudulent phishing page to tell you that.

Real page: Give me login
You: Login
Real page: Good login, press your 2fa to authenticate
You: Press
Real page: Good 2fa, enter your password
You: Password

Versus:

Fake page: Give me your login
You: Login
Fake page: Good login, press your 2fa to authenticate
You: Press
Fake page: Good 2fa (wink wink), enter your password
You: Password

The fake page wouldn't have a working login to the real page because the 2fa would be wrong, but it would still have your password.


You can still get phished, and have your password be stolen. It just doesn't allow logging in to the FIDO-protected resource. Other endpoints might still work if they don't require FIDO or you reuse the password.


The keys are completely happy to authenticate to every site, the cheap ones have no memory so they have no idea which sites you use.

So mybank.phishing can ask your Security Key to please authenticate, and convince you to press the button, but the output is useless for getting into the real site at mybank.com

Not only is a valid authentication to PornHub useless for GitHub, even if they both collaborate and share all their data they can't prove you've used the same token for both sites, so this is even a privacy win.


This is important to note: you don't need a hardware key to prevent phishing this way. This could all be accomplished with software.


However, unless there is specialist hardware (which might be baked into your iPhone etcetera rather than a separate token) there's a risk bad guys steal the secret key via another route and then you're in a world of pain.

Security Key USB or NFC devices also more closely resemble other physical keys, for which users already have some useful intuitions we can leverage.


You can still hack the U2F software in the browser and steal the key. If an attacker is local, the game is over. The only thing the button does is limit how many opportunities the attacker has.

Almost all practical attacks are against a key stored on a server somewhere, a key in flight on the network, or a lack of security in the client. If we properly secure these aspects of access, we don't need a whole lot of extra layers, which are only workarounds for the actual root problems.


"Steal the key"? Steal WHAT key?

In the cheap mass-produced FIDO tokens we mostly care about, the only keys anybody is storing are:

1. A single secret symmetric key inside the token. It has no reason to give this up to anybody, and no API for doing so.

2. The _public_ keys used to check credentials.

If you're thinking "Wait, that can't be right, where do the corresponding _private_ keys live?" then Bzzzt, time to go read the FIDO/U2F specification before writing anything else on Hacker News.

The whole point of this type of design is that even if the attacker has local code execution you haven't lost. An attacker with local code execution _plus a button press_ gets not a key which can be stolen but a single proof of control of their choice. It's like stealing a single OTP code, it's not _nothing_ but it's very far from everything.


Even if you phish their password to a site, the person can't log in because they also need the physical key. The key doesn't validate anything related to the domain AFAIK


That would be useless, as the key could just be passed through.


What do you mean "passed through"? You can't just steal a key and replay it whenever you want. (Unless you physically steal the key)


But you can trick Bob into entering his credentials and using his security key on corp.bank.co.m, and then use those credentials plus the security key interaction to log into corp.bank.com IF the security key interaction is domain agnostic (like you can with the 2FA codes you get on your phone: if you can trick Bob into entering his password, you can trick corp.bank.com into sending Bob a 2FA code, which he will also give you).


U2F key interaction is not domain agnostic. That's why it's so good against phishing--it can't be collected by a fake domain to pass through to the real one.


The key requires physical feedback, the user needs to push the button when prompted by the software and that button pushing will only authorize a single authentication.


Does a password management / U2F solution exist that would let you view all password titles with a master password but only dispense the actual passwords, one at a time, via a button press? Would prevent having your entire password DB stolen if you were keylogged/mitmd/whatever.

Picture of what I kind of mean here : https://pbs.twimg.com/media/Diylx-0X4AIjrqO.jpg

*edit- slide #2 and #3 are backwards. The passwords are stored on the USB device, if that wasn't clear. The master password allows you to view password titles and essentially 'unlock' the USB device. However, every action needs to be confirmed one by one. So, for instance, you could in theory export 'all' passwords in one shot, but it would present you with that prompt on the device itself.


I'm storing a GPG key on a yubikey (set to always require a touch to decrypt [1]) and use that with pass [2].

pass stores the password, username (and whatever else you want) in a simple textfile, which is gpg encrypted. There's a browser extension for it, a GUI implementation and lots more (all on the website at link [2]).

[1]: https://developers.yubico.com/PGP/Card_edit.html [2]: https://www.passwordstore.org/

Edit: Addressing your edit: you could use pass with pass-tomb (puts all the separate password files in a folder and encrypts that, see website) and use a password-protected GPG key to encrypt all that (and re-enter that password on every separate decryption attempt). I don't know any other password manager that would let you do exactly that.
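
For anyone curious what that looks like day to day, assuming the GPG key already lives on the YubiKey and you know its key ID, the basic pass workflow is:

    pass init <gpg-key-id>        # point the store at the YubiKey-held key
    pass insert github.com        # prompts for the secret, encrypts it to that key
    pass show -c github.com       # decrypts (touch required) and copies to the clipboard
    pass generate example.com 24  # create and store a random 24-character password

Every show/decrypt is a separate GPG operation, which is what makes the touch-to-decrypt policy bite on each password individually.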


Looks like the Trezor has native support for ALMOST this exact functionality: https://www.youtube.com/watch?v=5Jva-vcFQjE (for whatever reason, it stores the passwords on Dropbox instead of on the device...)


Dropbox isn't secure. They have a master key override and have many times already unlocked boxes without the user's permission. Also, they cache your credentials; anyone who gets hold of the cache file can put it on another machine and get into the box without authenticating.


I got a Yubikey for free through Ars Technica, but I haven't set it up yet. Regrettably it was a base model instead of the NFC model, which means I'll have to grab several adapters to be able to use it with my various Android devices, all of which tie to the Gmail account.


> I got a Yubikey for free through Ars Technica, but I haven't set it up yet. Regrettably it was a base model instead of the NFC model, which means I'll have to grab several adapters to be able to use it with my various Android devices, all of which tie to the Gmail account.

For what it's worth, you can have multiple U2F devices. Twitter is the only website I'm aware of that only lets you register one U2F key.


I guess one limitation of this approach is that one can't login to anything from a VM that is running on a server that one has no physical access to - i.e. no way to plug the USB key in.


There are one-time codes, and there is support for remote keys over SSH (in the simplest form, the key just pretends to be a USB keyboard and does the typing of the code for you).


Most remote desktop apps have a form of USB passthrough. And I believe you can also do this in a variety of ways with ssh plus some combo of libusb.


I'd say that is a feature rather than a limitation.


> ...thieves can intercept that one-time code by tricking your mobile provider into either swapping your mobile device’s SIM card or “porting” your mobile number to a different device.

I know I'm paranoid, but this makes me wonder how safe it is at cell phone kiosks and stores when you grant them access to your account so they can see if you're eligible for a promotion or upgrade.

Last time I was at one of these kiosks (in a busy store) I had to ask the guy to log out of my account before I walked away.


That isn't really just being paranoid. Logging into your account on any public device is not secure at all.

Those kiosks are just computers. Even if that employee logged out, who is to say he didn't also install a keylogger before you typed in your credentials?

That is a really bad idea.


Yeah. I agree.

Maybe this makes it slightly less bad: to log into my account, the guy typed in a single-use random code which their special administrative interface texted to my phone. Assuming that code is truly only good for one use, there's a little safety in that.

But, I still wonder what exactly these cell-phone representatives do with the info they can access on my account, and whether they truly log off, or capture that web page's info somehow, etc.

I once saw a cell phone rep in-training take a picture of such a private account screen with their personal phone and then text it to another person in the company so that they could ask that person how to carry out the next step in the sequence of screens they needed to fill out. It was pretty disturbing.

Customers have to place a lot of faith in the retailer, unfortunately.


While password-based phishing might have been stopped by U2F, it still leaves Gmail accounts vulnerable to OAuth phishing attacks, which can be just as devastating.


What are the mobile prospects here? Sounds like the main motivation is that phone numbers are not secure second factors. But the result is a solution that only works on desktops (is that correct?) and requires a physical key that can be easily lost.

Modern phones now have secure coprocessors and biometric authentication. Why not use that method for the second factor? It doesn’t rely on a phone number and it would handle both mobile and desktop.


If security keys are so great, why do I still have to process two to five reCAPTCHAs every single time I log into almost any website?

reCAPTCHAs are ubiquitous and becoming increasingly time consuming. I probably have to spend 4 minutes every day filling them out. Google is getting 20-25 hours of free labour from me this year.

The class action suit that was questionably dismissed by the judge in 2016 should be revisited.


reCAPTCHAs usually have a trustiness factor built into the code. If some combination of identity (IP address, browser fingerprint, last login on the site, etc) is questionable, it will give you more captchas. Do you use a lot of public VPNs? Are things from your network stuff Google might consider shady?


Lovely argument you have there. It boils down to this:

"Perhaps you are not subjecting yourself to monitoring by Google so that they may further monetize your Internet history."

My initial argument stands:

"If security keys are so great, why is Google subjecting me to so many CAPTCHAs? Perhaps it is because they want free work. A password and security key challenge should be all the proof that Google needs."

I've had similar discussions before. I've even had someone call me a criminal for running an ad blocker. I'm not fond of the word shady either.


Here's an example of censorship on Hacker News.

My comment that was up-voted multiple times was reset to 1 point. It now remains at 1 point. I even created another account from a different IP address and tested it.


If you are on mobile internet or often clear your browser cache/cookies/go incognito, you are more likely to get CAPTCHAs. Could also be that you are on a VPN through a data center.


My comment wasn't a technical support request that also completely misses the point.


Does anyone know if these things are possible to get working when your daily interaction is with a thin client?

My workstation sits in a rack in a server room, and my workplace's current policy of 1.7 people to a desk means we all hot desk. Whatever thin client I sign into uses RDP to connect to my workstation. Is there enough USB redirection support in RDP to make using these keys possible?


I am not sure. I'm assuming your thin clients have USB sockets and you can plug in generic USB keyboards, mice, etcetera. If you have to use a built-in keyboard + pointing device then you're almost certainly screwed.

The USB FIDO tokens are HID devices, but they deliberately don't specify what _sort_ of HID device they are. The idea is that this makes the client (browser) side easier as every major OS has some means for ordinary programs to talk to generic HID devices - to support graphics tablets and other odd things. So it's possible that a system generic enough to let you plug in any HID device (mouse, keyboard, trackball, stylus, whatever) to your thin client could work with FIDO.

Security Keys do seem like an attractive idea for a thin client environment if they work.

RDP does have "input redirection" but the problem is whether it's low level enough to redirect a HID protocol it doesn't understand. If RDP insists on thinking about keys pressed or movement of a pointer that's obviously no help for FIDO, but if it can just proxy the HID layer itself that's enough.
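
If you want to check whether the token is actually visible inside the remote session, a quick probe with the Python hidapi bindings might look like this (assuming you can install packages there; on Linux the usage page isn't always reported without hidraw support):

    import hid  # pip install hidapi

    # FIDO tokens use the vendor-defined HID usage page 0xF1D0, usage 0x01.
    # If nothing prints inside the RDP session, the HID layer isn't being proxied.
    for dev in hid.enumerate():
        if dev.get("usage_page") == 0xF1D0 and dev.get("usage") == 0x01:
            print(dev["manufacturer_string"], dev["product_string"], dev["path"])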


I don't get it: how is it better than my password manager refusing to autofill passwords on the wrong domain? That's basically free and doesn't require inconvenient USB dongles. (I'm not claiming it isn't better; I'm honestly asking how it's better, because I don't understand the benefits of this technology over simpler solutions like a password manager.)


Question here - how do I use a technology like this with my iPhone and iPad?

This looks great for my laptop, but that’s only about 20% of my online time...


For Google Apps, you could use a Bluetooth LE U2F security key, like a Feitian [1], plus the Google Smart Lock app on the App Store [2].

1. https://www.amazon.com/Feitian-MultiPass-FIDO-Security-Key/d... 2. Google Smart Lock by Google, Inc.


Yubico has added NFC for mobile - it supports both Android and iOS and maybe others. I think on iOS, though, you need iOS 11+ and then app developers need to use the Yubico SDK.


Adversaries just steal the cookie issued after MFA completes these days.

“We have had no reported or confirmed account takeovers since implementing security keys at Google,” the spokesperson said.

Makes me wonder if they have the right detections in place. It's extremely unlikely and naive to think that Google would not have at least one compromised account at any given time.


> Adversaries just steal the cookie issued after MFA completes these days.

Stealing a cookie is a much different attack vector than phishing, which is what TFA is discussing. It also requires a completely different level of access and sophistication, which puts it in a category so different as to make comparisons irrelevant.

> It's extremely unlikely and naive to think that Google would not have at least one compromised account at any given time.

Stealing a session cookie does not equal a compromised account, while phishing does.


In the online world a session cookie or Bearer token is pretty much equivalent to an account compromise, in fact often it is exactly the same. Hard to argue if one gets email access to claim that their account wasn't compromised.


Not so fast. For years, Google has supported channel binding between GFEs and Chrome. The cookie alone is not enough: you need to steal the private key as well. I can't remember if that's the case, but it would make sense for @google.com accounts to have more aggressive settings.

http://www.browserauth.net/channel-bound-cookies

Even before that, Google has had a system to detect cloned or tampered cookies on the server side for more than a decade, as described in its patented glory (don't open if you think your company's lawyercats are going to be unhappy):

https://patents.google.com/patent/US8302169B1
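
The rough idea behind channel binding, sketched in Python purely as an illustration (the real mechanism lives in TLS, not application code): the cookie is minted against the browser's per-channel public key, so a copy of the cookie presented over a different TLS channel won't validate.

    import hmac, hashlib

    SERVER_SECRET = b"server-side signing key"  # stand-in for real key management

    def mint_cookie(session_id: bytes, channel_pubkey: bytes) -> bytes:
        # Bind the session to a hash of the client's channel public key.
        binding = hashlib.sha256(channel_pubkey).digest()
        return hmac.new(SERVER_SECRET, session_id + binding, hashlib.sha256).hexdigest().encode()

    def check_cookie(cookie: bytes, session_id: bytes, presented_channel_pubkey: bytes) -> bool:
        # A stolen cookie replayed over a channel with a different key fails here.
        expected = mint_cookie(session_id, presented_channel_pubkey)
        return hmac.compare_digest(cookie, expected)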


So something to note on using them:

https://www.yubico.com/support/security-advisories/ysa-2018-...

If Chrome has WebUSB enabled and can see it, it would be possible to get around the security U2F affords.


Surprised more people haven't mentioned or recalled this; it's perfect for phishing. One click, that's it.


The US Federal Government started migrating to using "security keys" almost 15 years ago for this reason.

Thanks George W. Bush !

https://www.dhs.gov/homeland-security-presidential-directive...


John C. Dvorak has been complaining for years about how Google handles press requests. I don't have a good link that captures what he's stated, but I'd summarize his position as this:

Google is completely shut down to journalists. They ignore legitimate press requests for information, and if they don't like the tone, you'll get no official response. Dvorak observes that Google is starting to get worse and worse coverage in mainstream tech media as a result.

When articles like this do come out, they're usually privileged access and tightly controlled. Lots of beautiful photos of young, attractive google employees "changing the world."

In this case, we have yet another article published talking about how google is doing everything right.

Krebs is typically a reliable journalist, but this article stinks of privileged access to me. The key point is right at the beginning of the article:

Google has not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017 [...] the company told KrebsOnSecurity.

How much trust should we be investing in The Great Google again?


Taken literally, the article only highlights a correlation - introduction of the physical keys + absence of phishing. But Google may have introduced other security changes as well. The article doesn't go so far as to say that the switch to physical keys from SMS 2FA actually solved the problem on its own.


All I can think of is this fun story from a few years back: https://www.theregister.co.uk/2013/01/16/developer_oursource...


Series of blog posts on how to use a YubiKey for SSH, 2FA, and 1Password:

http://www.engineerbetter.com/blog/yubikey-all-the-things/


We use keys like the one pictured at our company. Handy for key stores of any kind and for U2F.

That said, the dongles are an abomination of engineering. I know that USB jacks have a defined top. That seems to be of no interest to vendors...


You really have to trust something that you stick in a USB port and press a button on.


Yes, that's true! But we are still okay with buying USB keyboards, and typing our passwords in on them. I think once you stop trusting your hardware vendors you are in a very expensive and time-consuming realm of paranoia. The idea that someone has compromised a hardware vendor in order to attack your company is not without its merits, like how RSA was compromised, but the attack is very costly and historically it's followed by key revocation which means that the attack only works once. So you'd only use this kind of attack for extremely high-value targets.


So? You have to trust your firmware, HDD controller, keyboard controller, OS, browser... What's one more device in that chain of trust?


Seeing as I'm getting downvoted for this: we've already seen breaches due to infected USB sticks being sold next to NATO in Kabul [1]. These seem like a pretty good black-hat attack vector to me, especially as most employees are going to carry these things around on a keyring so they don't lose them. It only takes a second to switch one out for a compromised key when that employee is at the bar.

[1] http://uk.businessinsider.com/russia-planted-bugged-thumb-dr...


Each key is locked to an account. If you tap your key and it doesn't work, that's a potential security issue to be reported.


Yes, yes it is.


My point is:

1. This is immediately obvious.

2. You've now maybe pwned a single device, but in doing so you also removed any credentials from the device, so it's not valuable.

3. USB mice and keyboards already exist and are plugged into most computers.


I think drcongo's point is that an operative meets someone at the pub and swaps out a similar-looking fob on their keychain for one that contains a virus. It doesn't matter that you only "pwned" a single device; you're in the network, and it's time to start exploring.


But you're not in the network. You have to authenticate to access the network, and that requires the u2f key that you just removed.


Your virus is on a machine in the network, therefore you're in the network. At that point, it is a matter of exploring the network, fingerprinting systems, scraping for exploits, and attempting intrusions. Or, waiting until an administrator does something silly like attempt to use their privileges on the machine to accomplish some task. I believe this was exactly how the Sony hack was conducted.

Edit: Also, at some point the employee will be reissued a new key fob for the "broken" one and at that point they will enter their credentials into the network again on that machine giving you access.

Edit 2: I guess a procedure that could prevent this is to require I.T. check the serial number of a fob that has been reported as "broken" thereby verifying there hasn't been a potential intrusion.


As far as I know, at Google my work laptop has as much access to the 'network' as my personal one does, at least until I'm authenticated. (Beyondcorp)

And the last time I plugged my keyring key into a computer was a year ago. Most use the nano keys, which you never remove from the host device.


Deleted

Edit: In answer to your response. Yup.


I can't make heads or tails of this comment, likely due to hn formatting.

But as far as I can tell, this exploit requires 3-4 zero-day exploits to be discovered in a system the attacker has no access to, and to all go undiscovered for an unknown amount of time while said attacker is exploring.

That's much better than "I can steal user credentials and then download an exploit trivially."


How does everyone keep track of where they have used their Yubikeys?


I wish smartcards had gotten more support. I like that it can combine two factors in one authentication mechanism.


This is surprising to me as well. In Germany, where I live, the government has been using smartcards as an authentication mechanism in many places for at least 5 years now, if not 10 or more. The workers don't need passwords or usernames; they simply plug their card into their keyboard, and when they leave for a break or go home, they take the card with them and everything locks automatically.

This only works internally; externally they have to log into a VPN via username/password and then use the smartcard as well.

I don't see why this is suddenly innovative.


> ...thieves can intercept that one-time code by tricking your mobile provider into either swapping your mobile device’s SIM card or “porting” your mobile number to a different device.

Are these theoretical attacks? Has this ever actually happened?

The article only correlates the end of phishing with introduction of the physical keys. I'm wondering if it's really necessary - if typical 2FA via one-time pw to SMS is easily sufficient.


>> ...thieves can intercept that one-time code by tricking your mobile provider into either swapping your mobile device’s SIM card or “porting” your mobile number to a different device.

> Has this ever actually happened?

Yes. For example: https://krebsonsecurity.com/2018/05/t-mobile-employee-made-u...


It's possible if you're targeted for some reason. IMO it's very unlikely if you're Joe Random logging in to your bank's website, and better than not having any 2nd factor at all.


Yes. I can't find the articles now, but there are reports of phishers using this technique to get around 2FA over SMS.


They got around 2FA over SMS because a number of services like GMail offered password reset via SMS as well as 2FA over SMS.

It was the password reset process that was the most vulnerable, and strangely the part that kept getting glossed over when people reported on the takeover incidents.


Nice article. However a bit weird that they link to one of their advertisers :/


Imagine if a technology promoted by the vendor and implemented in their own browser turned out to be flawed or didn't work - would they tell you?

If you ask a wine producer how tasty their wine is, they will reply that it's awesome.


The same can be achieved without the hardware part.


Even Google employees fall for these phishing attacks?


If Google has an internal offensive pen-test team, I assume they would likely disagree with that statement, especially since Chrome allowed (maybe still allows?) reading YubiKey info via WebUSB, for instance - the only mitigation was one click in the UI.

If Google authorized external hackers (e.g., via bug bounty), it would probably take about 2-4 hours to phish an account successfully. :)


I believe this is the key quote:

“We have had no reported or confirmed account takeovers since implementing security keys at Google,”

Security keys basically remove the account-takeover path, but there are still many other types of phishing attacks out there that are still effective.


The WebUSB angle is interesting - hadn't thought about that. I remember hearing about this during a Chaos Communication Congress talk last December. It only required one wrong click from a Chrome user to authorize exposure.



