The thread here seems like a dumpster fire to me. Everyone here is worrying about lock-in to an open standard, so I want to clarify things.
WebAuthn is an open standard. It's a way for you to prove to a website that you have a specific private key. There's no lock-in, because the key is portable (unless you don't want it to be). There's no privacy issue, because the key is unique per website. There's no security issue, because it's unphishable and can be unstealable if it's in hardware.
If you don't like Google or Apple, use your favorite password manager. All it will have to keep is a private key per website, and you're done. No usernames or passwords. You visit a site and are automatically logged in with a browser prompt.
This is amazing, it's the best thing that's ever happened to authentication. It's something the end user cannot have stolen. Can we be a bit more excited about it?
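For anyone curious what "proving to a website that you have a specific private key" actually looks like, here's a toy sketch of the per-site challenge-response flow. It's stdlib-only, so HMAC stands in for the asymmetric signatures real passkeys use (with HMAC the verifier would also hold the secret, which real WebAuthn avoids), and the origins are made up:

```python
import hashlib
import hmac
import os

# Toy sketch of WebAuthn's per-site challenge-response.
# Real passkeys sign with an asymmetric key (e.g. ECDSA); HMAC is used
# here only so the example stays stdlib-only.

def register(origin: str) -> bytes:
    """Authenticator creates a fresh random key, unique to this origin."""
    return hashlib.sha256(os.urandom(32) + origin.encode()).digest()

def sign_challenge(key: bytes, origin: str, challenge: bytes) -> bytes:
    """The response is bound to the origin, so a lookalike site
    (a different origin) can never produce or replay it."""
    return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

# One key per site: a breach of site A reveals nothing about site B.
key_a = register("https://example-a.test")
key_b = register("https://example-b.test")
assert key_a != key_b

challenge = os.urandom(32)  # fresh random challenge from the site
response = sign_challenge(key_a, "https://example-a.test", challenge)

# A lookalike origin produces a different response, so phishing fails.
phished = sign_challenge(key_a, "https://examp1e-a.test", challenge)
assert response != phished
```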
EDIT: If you want to try it, I just verified that https://www.pastery.net/ works great with Passkeys even though I haven't touched the code in a year.
That means that django-webauthin also works great with Passkeys, for you Django users:
> The thread here seems like a dumpster fire to me. Everyone here is worrying about lock-in to an open standard.
There is a certain fiddling-while-Rome-burns quality to this comment. The blog post is not about the open standard, it explicitly focuses on a specific company's products. People are naturally worried about this even though the standard may be open, because we are at historically high levels of platform lock-in from megacorps. Gmail is the new "Blue E". Getting locked out of your Google account in 2022 is probably much worse than not being able to use a different browser in 2001.
Sure, HTTP is also an "open standard". How many real browsers exist that can play DRM-encumbered media? You'll find that the answer is "very few – basically anything made by Apple, Google, or Mozilla" (perhaps Brave as well, which has an ex-Mozilla founder and uses Google-funded tech).
The best way to get people to adopt the open standard is to actually showcase uses of it that are not just a single company's product, not call them names for being worried about lock-in.
Right, but that's like Gmail coming out and people complaining about another proprietary product locking people in. There are two issues with the current comments:
1. Absolutely nobody currently uses WebAuthn. I'm extremely excited about the popularization, which, unfortunately, requires big players to get behind it.
2. The comments feel very "oh my God this car I'm driving is heading towards the edge of the cliff even faster". Don't use Google, they're unreliable and evil. While I'm very excited about Passkeys popularizing WebAuthn, I don't think anyone should ever rely on Google for authentication, so just don't use them and use Bitwarden instead, if/when it supports Passkeys.
A vendor coming up with an implementation of a great standard isn't the problem, the fact that people use it is the problem.
> If you don't like Google or Apple, use your favorite password manager.
Unless the service you are trying to use requires that you use a particular model of authenticator, which the service provider can enforce via attestation.
> Unless the service you are trying to use requires that you use a particular model of authenticator, which the service provider can enforce via attestation.
Isn't it even worse? Browser vendors don't even offer any way to avoid their baked-in authenticator. With WebAuthn they at least offered the option of using a security key via USB/NFC/Bluetooth as an alternative to the device authenticator. I don't see that option with passkeys.
So, there seems no way to use a password manager with passkeys.
Sure, but passkeys are meant to replace passwords, not SSO (although I won't complain if they do replace SSO).
Most services probably would not restrict you to just one model of authenticator, but it wouldn't surprise me at all to see a service require that the authenticator be backed by a secure element, in which case you could use a security key, a passkey, a TPM, et cetera, but not a password manager. I'd still take that over passwords, but I don't think everyone would.
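For the curious, that kind of restriction is just an allowlist check on the relying party's side: each authenticator model reports an AAGUID during attestation, and the service compares it against models it trusts. A hypothetical sketch (the AAGUIDs below are invented; real ones come from the FIDO metadata service):

```python
from uuid import UUID

# Sketch of a relying party enforcing a "secure-element only" policy
# via attestation. The AAGUIDs are made up for illustration.

ALLOWED_AAGUIDS = {
    UUID("00000000-0000-0000-0000-0000000000a1"),  # hypothetical hardware key
    UUID("00000000-0000-0000-0000-0000000000b2"),  # hypothetical platform TPM
}

def registration_allowed(authenticator_aaguid: UUID) -> bool:
    """Reject registrations from authenticators not on the allowlist,
    e.g. a software password manager reporting a different AAGUID."""
    return authenticator_aaguid in ALLOWED_AAGUIDS

assert registration_allowed(UUID("00000000-0000-0000-0000-0000000000a1"))
assert not registration_allowed(UUID("00000000-0000-0000-0000-0000000000ff"))
```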
I'm also surprised at how negative the reaction here is to Google embracing an open standard. Both Android and iOS devices keep your passwords/keys encrypted end-to-end between your devices. You can absolutely share passkeys between devices (if you want to) and even make them exportable to another secrets manager (if you want to).
If you ask me what's one thing in 2030 people will look back on and say "I can't believe we did that!" I'd have to say "passwords". Passwords need to die and WebAuthn is a great step forward.
Yes, WebAuthn is an open standard, but it seems that Passkeys can only be synced using Google password manager and I don't see any API that would allow the user to use a different password manager as is the case with the auto fill API.
Ah, that's interesting. So if I use Apple keys to log in to sites, I can never log in to them with my Android phone? That's unfortunate, though I don't see this being sustainable for too long, I imagine (and hope) they'll open it up to alternate authenticators soon.
Based on the way implementers at those companies have talked about it, I don't think that's the desired long-term state of affairs. One thing Adam Langley mentioned in https://www.buzzsprout.com/1822302/11122508-passkeys-feat-ad... was that the BLE handshake identifies the cloud service to use because they need to support corporations using their own services for all of their devices. Hopefully there'll be enough demand for that to ensure it's fully implemented soon.
As long as the master key escrow doesn't essentially need to be with the vendor (e.g. Google/Apple), you can have the master key backed up elsewhere so you can pass the mud puddle test [0], and the vendor has no way to access the master key, I'm OK with it.
But push Google/Apple on solving the mud puddle problem and you'll find it's curiously missing from their wallet implementations, and they stutter around it when people ask during their talks about FIDO2 and such. It's the lock-in direction they see everyone going towards that makes people uncomfortable.
> There's no lock-in, because the key is portable (unless you don't want it to be).
> There's no security issue, because it's unphishable and can be unstealable if it's in hardware.
You mean, you can (in theory) choose whether you'd rather have lock-in or a security issue. The two options are mutually exclusive; you can't have both at the same time.
No, I don't. You don't have to use Google's thing, use an open source password manager that syncs via Dropbox or whatever. Same thing, different vendor.
Sure I can. But then, if an attacker gains access to my device, so can they. They can just set the phone to sync with their own cloud service.
Phishing would also be back on the table: The phishers' narrative would just change to something like "Dear $user, we're upgrading our systems. For technical reasons, please change your sync target to $url, otherwise you will lose access to all your logins. Yours truly, Dropbox"
My understanding was that many of the advertised security properties of passwordless logins stem from the property that no one, not even the owner of the account has access to the key. This renders phishing impossible because the user cannot physically give away the key even if they wanted to.
But that solution is fundamentally incompatible with copying the key to anywhere else.
> EDIT: If you want to try it, I just verified that https://www.pastery.net/ works great with Passkeys even though I haven't touched the code in a year.
There is a real privacy issue if online services will now force you to store your "passwords" on a device, whether in your phone or in a password manager.
People are raising really good points here, but I do find it interesting how negatively this news is being received vs. when Apple said the same thing: https://news.ycombinator.com/item?id=31643917
The second most popular top level comment chain is:
> Unless I can back it up and import it into a new device from a competitor, then there is no way I am going to use this unless forced. I do not trust one company anymore.
Which is the same sentiment as this thread. The first comment there was just talking about the openness of the standard behind Apple's implementation and the weakness of 2FA loss/recovery.
Yup - GP made the mistake of treating HN as a single person with a coherent opinion. It's not, and it's extremely tiring and intellectually uninteresting to repeatedly see people doing that.
I agree, but with the following caveat. Sometimes hot takes inspire reflexive reactions from the crowd, reactions that don't come from deeply thought-out reasoning.
So if you get a mob of people all reacting the same way to something, who, upon examination, have arrived at their conclusion for a smattering of contradictory and inconsistent reasons, it can be a helpful point of information for diagnosing the irrational mob response.
Granted it's not always the case, people get to similar beliefs through different reasons.
But these are two phenomena that exist side by side and there are times when it's necessary and helpful to be able to diagnose the first.
Your GP did not treat HN as a single person; they were simply pointing out population trends. And these trends are important when analyzing the dynamics of a democratic (upvote-based) content platform.
HN is not really a pure "democratic (upvote-based)" system as moderators can (and do) "interfere" in the process - they can remove comments or bring a low ranking comment to the top of the discussion if they feel that a discussion on that might be more interesting for the HN crowd.
No. While I have tried to counter these group think posts you talk about, I only find them interesting because I think it is human nature to do it. I know I do it all the time.
I find that sentiment ironic because I won't use it unless it can't be backed up (the main selling point of 2FA and hardware keys).
If it can be backed up, then a casual bystander/process can also "back it up", filching all of your credentials in a few moments with you none the wiser.
The protocol is open, so I can use one proprietary key from company A, one from company B, and a few open source keys. Keep one for regular use and the rest as backups.
Yeah Google is more evil than all of those other totally non-evil companies that act in your best interests at all times. It's not like other companies have the same incentive to profit from you!
You believing $company is not as bad as Google definitely has nothing to do with marketing!
It's not particularly surprising. Apple has a much better reputation at customer service than Google does – they have actual stores you can walk into.
Now I'm not sure whether they can help you unlock your Apple ID if you prove to them that you're the owner of the account, but I can at least visualize Apple having the scale to do that.
Google, on the other hand, has a horrendous reputation for locking people out of their accounts totally and permanently. Of course everyone has concerns about handing all your account login responsibilities to a company with such terrible customer service.
Apple, on the other hand, has terrible problems with working with others that Google does not.
Apple will happily tell you that if you want grandma to have a whatever color text bubble, you should buy her an iPhone, to name a recent example, rather than adopt the standard everyone else is using. I bought a Macbook last holiday season and couldn't even set it up until my wife set up her iPhone on my account to activate the laptop. I bought my wife an iWatch last week and briefly thought of getting myself one, but you can't use it without an iPhone (they have a "kids" feature where a parent can activate it, but it basically has no smarts at that point AFAIK).
So, for making an authentication standard, I'd trust Google over Apple.
> I bought a Macbook last holiday season and couldn't even set it up until my wife set up her iPhone on my account...
I have no idea what you mean by this. You do not need, and have never needed, to own an iPhone to use an Apple computer.
> I bought my wife an iWatch last week and briefly thought of getting myself one but you can't use it without an iPhone...
This, on the other hand, makes a little more sense. The Apple Watch is designed as a companion device to an iPhone. Much of its functionality relies upon the phone (e.g. displaying notifications from the phone, installing watch apps which pair with corresponding phone apps) -- it can't do much on its own.
If pre-installed and configured, it’s still convenient for things like Wallet/Apple Pay, especially when I do dumb stuff like forget my wallet and phone at home and need to pay for things.
When I got my M1 MBP there was no way, that I could find, to set up the computer and login the first time without using an iPhone. I don't remember the exact phrasing it gave me, but the only way I could get the system set up was my wife logging out of her iPhone, and logging into it as me. I don't know if this is newer than your experience, or there was some trick to get past it, but I couldn't find one.
I have an M1 and M2 (Pro and Air, and non-M Pro and Air) and have never done this, and never seen it. I do not even have an Apple ID logged into any Mac laptop I have ever owned.
You do not need an account or an iPhone, this makes no sense to me. Was this purchased new from a retail store?
The cellular version of the Apple Watch can perform a limited set of tasks -- like making phone calls, sending/receiving text messages, and playing music -- without a phone present. It still requires a phone for full functionality, and for setup.
My TV offered free Apple TV+. In order to sign in, I had to get a code sent to a laptop sitting far away. I assume they expected me to be carrying an iPhone, as no non-Apple device could provide the OTP. Using an Apple service is simply not worth the hassle.
At the risk of being pedantic, no. Apple Stores aren’t able or empowered to provide Apple ID support beyond what the public website based recovery workflows provide. I am sure someone will note that someone at an Apple Store has helped them reset an Apple ID password. What I mean is that Apple Store employees have neither procedure nor access to override Apple’s account system. You have to call support for assistance beyond what the website can do. I am sure Apple Store employees have been helpful and I’m sure policies have changed over time.
As an Apple user I find this as frustrating as it is wise. Mostly for future-me who may one day not be as savvy and manage to screw myself.
I don’t fully trust iCloud Keychain and Apple to never lose my data in a “I don’t concern myself with backups” manner. So I opt for using Passkeys where I can also add my FIDO2 tokens.
Eh. Pedantically; the Apple Store will at least try to help you, even if they’re not empowered to fix it they will guide you through the process until there is a resolution.
It’s not as “you’re on your own” as some products I’ve owned.
This is key. That the Apple Store can't directly help you doesn't mean they won't direct you to the proper person (and possibly stay with you if possible & desired)
I suspected this, which is why I accounted for it – the point is that if Apple started getting enough bad press, they would have to put Apple ID verification in their stores, because their customers are the people who pay for Apple products.
Google's customers are mostly the people that get to show you ads. So they don't seem to care about even the paying users for their products. And it's sad that Apple seems to be going the way of trying to be an ad platform as well.
> but I can at least visualize Apple having the scale to do that.
> Google on the other hand has a horrendous reputation for
Neither claim is true or false, but both are definitely exaggerations. All you're doing is displaying personal bias by giving one company the benefit of the doubt. They, too, have a reputation for locking people out and are well known for turning data over; it's just one that HN in general prefers to ignore.
> Neither are true nor false but definitely exaggerations.
Google does not kill services. That does not happen. Google definitely does not deplatform people killing all their accounts and all their access. That also does not happen.
Apple absolutely did not abandon the Xserve platform after promising a professional and modern Unix experience. Google is definitely the only one with a penchant for mercy-killing unsuccessful products.
Passkeys sound like another way for companies like Google and Apple to lock you into their walled garden. Having each walled garden randomly generating a key for every single domain instead of using the actual domain name as part of the key is a great way to lock regular people into their respective ecosystems.
My understanding of passkeys is that they are using WebAuthn under the hood (hence the nod to the w3c/FIDO at the end, and the fact that the passkey in the screenshot was associated with tribank.us).
They are solving a very real problem. WebAuthn uses private keys, but those private keys are tied to the device where they were created. This is a blessing and a curse.
It's a blessing because it eliminates a whole trove of phishing attacks. After all, if no one can get the private key, they can't steal or share it. Well, of course they could steal the actual device, but that's orders of magnitude harder than stealing online credentials (points to https://haveibeenpwned.com/ ). That's a good thing.
It's a curse because the same person logging in from their ipad, android phone, and desktop PC needs to set up WebAuthn three times. For each domain/website (broadly speaking). If they only set it up once and lose the device, well, they are either locked out or need to have another means of account recovery (username/password, calling a customer service rep).
This curse is what passkeys managed by Apple/Google are attempting to solve.
I believe the WebAuthn 3 draft is going to try to address some of this: https://www.w3.org/TR/webauthn-3/ but that's based on what a co-worker said. A quick scan didn't turn up anything.
> They are solving a very real problem. WebAuthn uses private keys, but those private keys are tied to the device where they were created.
To clarify I am not talking about the issue of syncing the device's private key. I am talking about the artificial problem these walled gardens are creating by having every single domain getting its own randomly generated private key. The only practical way to keep all of these randomly generated keys synced across multiple devices is to use the "cloud".
If instead the per site key was generated using a private key and the domain name, users would only need to transport that one private key to another device and would get syncing for free without the requirement of the "cloud".
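A toy sketch of that derivation scheme, assuming one master secret and a plain HMAC as the derivation function (real hardware FIDO2 keys use a different internal construction; the domains are just examples):

```python
import hashlib
import hmac

# Sketch of deterministic per-site key derivation: one master secret
# plus the domain name yields each site's key, so moving the single
# master secret to another device "syncs" everything with no cloud.
# Toy illustration only, not the actual FIDO2 key-derivation scheme.

def derive_site_key(master: bytes, domain: str) -> bytes:
    return hmac.new(master, domain.encode(), hashlib.sha256).digest()

master = hashlib.sha256(b"passphrase-you-remember").digest()

# paypal.com and the lookalike paypaI.com (capital I) derive completely
# different keys, so phishing still fails under this scheme.
real = derive_site_key(master, "paypal.com")
fake = derive_site_key(master, "paypaI.com")
assert real != fake

# The same master on a second device reproduces the same site key.
assert derive_site_key(master, "paypal.com") == real
```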
Every site is supposed to keep its randomly generated data and give it back to you as an input for proving you have the device, but no one wants to give the relying parties open source "enterprise authentication" for free, like in the days when Apache was king.
I don't really see a safe way to do what FIDO was trying to do while letting keys flow about and using their cloud for the original setup; doing it with the security we were originally expecting wouldn't have the conveniences they are talking about. So it seems like more phishing, now aimed at getting an activated device/Chrome session.
In most implementations there's only a key or two on the device that you have to sync. The webauthn protocol requires a site that accepts it to store a small amount of arbitrary data for each registered device, which is then handed to any device that attempts auth. Most devices use that to store a copy of the site-specific private key encrypted with one of the device keys. (IIRC there's usually a symmetric key and an asymmetric keypair that are protected by the device, and the symmetric one is the more convenient to use for encrypting the site data.)
It's possible to store every key the device has registered, but most devices don't do it that way to keep the cost of the secure enclave low.
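A hypothetical sketch of that key-wrapping trick: the device keeps one symmetric key, wraps each site-specific private key with it, and hands the opaque blob to the site to store. The toy HMAC-counter-mode cipher below is only for illustration; real authenticators would use a proper AEAD such as AES-GCM:

```python
import hashlib
import hmac
import os

# Toy stream cipher: HMAC-SHA256 in counter mode. Illustration only;
# it lacks the integrity protection a real AEAD provides.
def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

def wrap(device_key: bytes, site_key: bytes) -> bytes:
    """Encrypt the site key; the result is safe to store at the site."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(site_key, keystream(device_key, nonce, len(site_key))))
    return nonce + ct

def unwrap(device_key: bytes, blob: bytes) -> bytes:
    """Recover the site key when the site hands the blob back at login."""
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in
                 zip(ct, keystream(device_key, nonce, len(ct))))

device_key = os.urandom(32)        # never leaves the device
site_key = os.urandom(32)          # per-site private key

blob = wrap(device_key, site_key)  # stored by the website, opaque to it
assert unwrap(device_key, blob) == site_key
assert blob[16:] != site_key       # the site never sees the plaintext key
```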
> I am talking about the artificial problem these walled gardens are creating by having every single domain getting its own randomly generated private key.
That’s part of the design though. That’s what completely eliminates the ability to do phishing-attacks.
If there were a common root to leak, that would just provide a new target for phishing attacks, and effectively risk reducing a persons entire online security down to 1 shared root-key.
While obviously better than having just 1 common shared password, why reintroduce this risk when you don’t have to?
> That’s part of the design though. That’s what completely eliminates the ability to do phishing-attacks.
If the actual domain name is used to generate the key, that also completely eliminates the ability to do phishing attacks. Paypal.com and PaypaI.com would generate two completely different keys.
That's how hardware FIDO2 keys work now. There's no walled garden. You can remember a passphrase that generates a key that is used to derive every website's key in turn.
No cloud, no synching, nothing. You're fucked if your passphrase is stolen, but that's the tradeoff.
If I can get access to your device to exfiltrate the private key that generates the domain-specific keys, why wouldn't I also have access to the randomly generated site keys? Your device needs access to the keys to use them.
In both cases your device has a private key that it needs to secure. In my scenario we remove the third party cloud service.
I can register an authenticator multiple times, for instance to represent multiple different accounts of my own, or represent multiple people on a shared device.
If I delete a credential, the expectation is that registering a new credential is not going to correlate the authenticator (and thus the user).
If I want to have hygiene steps of rotating the cryptographic key a user uses to log in, I won't want registration to create the same key pair each registration.
And for the cloud sync:
The UX can present that web authentication is an option to log in. The user will be confused if that option is presented for sites which will not recognize the authenticator.
The site can store data alongside a credential to be returned to optimize the log in process, such as a site-specified identifier to look up the user credential in a database. That state needs to be synchronized.
A Yubikey for a phone or an iPad seems pretty clunky. Outside of super high risk people and on-call devices, are there people actually using physical keys for their mobile devices ?
I actually do, and it works pretty well. My Yubikeys are NFC compatible. I have my OTPs generated on them and also use them for U2F/WebAuthn. I only need to tap my key to my phone for 2FA authentication.
Seems like you wouldn't want to share passkeys for the same reasons you don't normally want to share passwords?
Instead, each website that accepts passkeys should allow you to register multiple devices and probably print out backup codes as well (for the especially important accounts).
If there's no reason to migrate anything then lock-in is irrelevant. Just add more login methods so that when you lose some, you have others.
I have over 500 online accounts. Imagine if all of them used a login method where I had to have backup devices registered, instead of just me backing up the credentials (like I do today with a password manager).
With backup devices, whenever I upgrade or replace a device, I need to go to each of the 500+ online accounts and register the new device. This is much more work than a quick login to each site via my password manager (which can happen on-demand, only as I need to use the services).
WebAuthn credentials can be backed up, if you want. They can also be impossible to back up (and thus steal), if you want. It's up to you, which is more than I can say about passwords.
Websites already have a hard time getting users to sign up, so requiring them to enroll backup authenticators (which they won't have) is not going to work. Printing or writing down backup codes is even worse from a UX point of view.
IIRC the spec has a flag to hint that the passkey is backed up (in iCloud or your Google account), so the relying party (website) knows whether backups are mandatory. But that means the secret doesn't stay on your device and goes to the mothership. Then I don't see why the spec wouldn't standardize the transfer of secrets from one company to another.
Chrome allows you to export saved passwords as CSV (chrome://settings/passwords), so I'd say this is a regression in that regard. You won't be able to switch to another browser easily; you'll have to go to each website and change/add authentication methods, as far as I know.
The entire third party auth push has turned into what may be one of the largest incumbent power grabs I have ever seen. Stuff like Google Amp or even App Store walled gardens pale in comparison.
What drives me nuts is how little discussion of this I've seen. People don't even seem aware of the implications of it. It's being pushed hard as a boon to security, which it is in some cases, but at a cost that nobody is even considering or talking about.
The implications are pretty profound: large companies having the power to lock you out of everything on a whim (even your own systems and unaffiliated third party services), levy taxes on the use of everything (e.g. Google starts charging you or sites to log in with Google), surveil literally everything (including logging into everything you have as you and sucking down data), and if a big identity provider gets seriously hacked it'll be an epic security apocalypse. Imagine someone stealing the master keys for a provider and pushing ransomware to millions of companies at once.
... and don't forget the obvious: "Oops I got locked out of Google and now I'm locked out of 50 SaaS services, my company's bank, my VPN, and my remote servers."
It just totally blows me away that these systems have no privacy protection at all, no portability provision for me to select or change my provider built into the protocol, no built-in support for third factor auth that I can control (e.g. FIDO2), no built in provision for recovery codes, and so on. These kinds of things didn't even seem like they were considered in the design of things like OpenID/OIDC. It's just a big "oh hey lets give god level access with no recourse to third parties and implement it so there's total lock-in... what could go wrong?"
Edit: yes some well-implemented systems offer their own built-in support for some of those things (recovery codes, changing your auth provider, reverting to password, etc.) but in my experience it's a minority and there is obviously nothing in the standard to encourage it or provide any guidance on how to do those things securely.
Man, every one of these comments has completely misunderstood the point. WebAuthn is an open standard. The provider is only there to sync your key. If you want, you can keep it yourself.
Why is everyone yelling about the sky falling down when this is the best thing to happen to authentication since ever?
All I'm saying is that authentication is literally the keys to the castle, and inviting third party control of authentication has some scary implications in terms of privacy, monopoly control, and security.
We should at least be discussing this, but I don't really see that much discussion. People are just blindly adopting this stuff because it's convenient and not even thinking about what's under the hood or whether there is a way to back out or change provider.
This is also going to be a body blow for our privacy - if BigTech have access to your keys, so will the government and both can abuse it. The idea is to force you to "save password on device" (whether you want to or not) so that when a government authority gets your device they can also easily access all your internet accounts. US courts have already affirmed that it is legal for the police to force you to unlock your device if it is biometric protected (fingerprint or face scan etc.). Moreover, by forcing you to use your personal device for identity verification, BigTech is ensuring that identifying you and datamining your personal data becomes more easy for them. Don't be surprised if this is also extended in the future as a "super cookie" service to allow easy tracking.
I don't use my phone to log in to anything. All my stuff is done on a computer with a password manager.
At no time am I even likely to rely on Google for anything this important; every other week there's a thread about Google killing off accounts for no reason. No way would any sane person allow Google access to this with their track record. And this isn't even considering my suspicion that Google only wants to "help" with this so you're locked into their services and they are better able to track your activity.
Exactly. I will never trust Google to control access to my logins knowing that the Sword of Damocles (the Google "AI" deciding I'm a bad person) is hanging over my head.
If Google did an about face and started providing reasonable escalation mechanisms for when they lock you out of your account based on a faulty decision of their algorithm I'd consider it.
> I don't use my phone to log in to anything. All my stuff is done on a computer with a password manager.
More or less the same, except that I haven't found good TOTP solutions for the desktop along the lines of KeePass (something that can run on Windows/*nix, instead of making me use something like FreeOTP, Google Authenticator, or other Android/iOS apps; or at least in addition to the mobile apps).
That said, even with multiple Google accounts for different things (e.g. personal e-mails, file storage, cloud services etc.) it feels like eventually you might want something like Qubes OS, another way to run multiple separate VMs, or just use separate devices for separate use cases.
Much like how some orgs have separate laptops for accessing prod environments, that are more tightly controlled, even though that's not convenient enough for most people.
Thanks, this seems like the solution with the least friction for someone who's used to KeePass! A lot of other good solutions in the sibling comments as well, actually.
Bitwarden will do TOTP, and its CLI tool is quite usable. If you want it fully local, just stand up a docker of their server software (which is open source) or the open source reimplementation (vaultwarden).
Depending on your setup you could just generate TOTPs on command line and copy to clipboard, that's what I've implemented: https://github.com/Ciantic/totper
It works pretty well with pass (password manager) that stores each individual entry in GPG encrypted file. GPG is pain, but if you happen to use it already then it works.
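For the truly minimal case, TOTP itself fits in a few lines of stdlib Python. A sketch of RFC 6238 (the base32 secret in the check is the RFC's published test vector, not a real one):

```python
import base64
import hashlib
import hmac
import struct
import time

# Minimal RFC 6238 TOTP generator for desktop scripting. Feed it the
# base32 secret a site shows during 2FA enrollment.
def totp(secret_b32, timestamp=None, digits=6, period=30):
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if timestamp is None else timestamp) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1 mode): secret
# "12345678901234567890" (base32 below), time 59 -> "94287082".
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59, digits=8) == "94287082"
```

Calling `totp(secret)` with no timestamp gives the current 6-digit code, which you can pipe to the clipboard.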
I’m with you, I de-Googled all my services a few years ago, and I couldn’t be happier with the decision.
I’m curious though, what’s preventing you from using a password manager on your phone? I use KeePass, and I’m able to use my password DB on any device I want.
How do you handle syncing? I use KeePassXC on my desktop, and I back up the encrypted password DB to SpiderOak, but I haven't figured out a good way to get that DB onto my phone, auto-synced.
Also are you worried about the security of the DB on your phone? My password DB's passphrase is a good 50+ characters long, which I can type quickly on my laptop, but I can't imagine pecking that out on a phone. And I feel like I would not want the DB unencrypted/unlocked all the time on my phone; given the possibility of my losing it or it getting stolen, I'd want it to re-lock immediately after each use.
Nothing really, other than that it's a device I can lose too easily, or that could be stolen. I don't do banking on my phone either. Plus there's the hassle of syncing it too.
I don't think the point is that they can't; they're just far less likely to, and there's usually more recourse. Google is a mega-corp that never listens to its users and is un-contactable. Some AI can, and probably will, ban people for no real reason, and even Google won't know why it happened.
You can always use a password manager from the KeePass family, which is file-based. Sync it across devices with Nextcloud/Syncthing instead of Google Drive/Dropbox/OneDrive if you are extra paranoid.
In my experience on iOS, you can't make Passkeys work without also turning on syncing; that's sort of the entire point of Passkeys, otherwise they would just be WebAuthn tokens.
From TFA (the security blog): "The main ingredient of a passkey is a cryptographic private key. In most cases, this private key lives only on the user's own devices, such as laptops or mobile phones."
Are the keys encrypted with a key derived from a master password?
Does the decryption only occur on the user's device?
Is this master password not reused as the account password, or has account authentication been changed to use a cryptographic proof produced on-device?
If the key is ever decrypted on vendor's servers, everything else is theater.
And this is all of course also excluding auto-updating vendor-supplied authentication code from the threat model because the industry is not ready for that conversation yet.
These are all implementation decisions, since behind-the-scenes this is all just Web Authentication API.
That also means that if you dislike the idea of some big company holding all your keys in a cloud-backed-up vault, you can just use a key from one of the dozens of hardware FIDO key manufacturers.
On iOS, the keys are stored in iCloud Keychain, which is also the password auto-fill vault.
These keys are protected at two levels: iCloud encryption, and an effective HSM cluster of Apple Secure Enclaves.
There is no master passphrase/secret exposed to the user; it is synchronized by the phones on the account. You must join the 'ring' of personal devices, in addition to using the iCloud login, to decrypt iCloud information.
This means unlike basic iCloud encryption (which has a recovery HSM used to help people gain access to their accounts and which legal processes may grant access to read data), you need to perform additional actions to get access to this vault.
Each 'passkey' (Web Authentication Credential) is a locally generated secp256r1 key pair in that keychain, with associated site information and storage for additional information such as the site-specified user identifier and friendly display name.
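In other words, each synced entry can be pictured as a small record like the following (field names are illustrative guesses, not Apple's actual schema):

```python
from dataclasses import dataclass

@dataclass
class PasskeyRecord:
    """Illustrative shape of one synced passkey entry; field names are hypothetical."""
    rp_id: str            # relying party, e.g. "example.com"
    credential_id: bytes  # random identifier handed to the site at registration
    private_key: bytes    # secp256r1 private scalar; never leaves the vault unencrypted
    user_handle: bytes    # the site-specified user identifier
    display_name: str     # friendly name shown in account pickers

record = PasskeyRecord(
    rp_id="example.com",
    credential_id=bytes(16),
    private_key=bytes(32),
    user_handle=b"user-42",
    display_name="alice@example.com",
)
```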
There are basically three levels of protection for the data:
1. whatever the cloud hosting provider has for data at rest
2. the per-account iCloud encryption key, which is never shared with the hosting provider but exists on an Apple recovery HSM
3. the per-account device ring key, which is not visible to Apple.
so no, the credential's private key itself is never visible to Apple.
Apple does have a mechanism (if you go into Passwords) to share a passkey with another person's Apple device. You need to be mutually known (e.g. need to have one another as contacts, with the contact record containing a value associated with their Apple ID) and it needs to be done over Airdrop for a proximity check. Presumably, this uses the public key on their account to do an ECDH key agreement and send a snapshot of the private information over the encrypted channel.
Auto-updating vendor-supplied authentication code for iPhones is complex because of the split between the operating system code and the Secure Enclave firmware, the (misuse) of that API via a compromised operating system, and the potential to get malicious changes into the Secure Enclave firmware itself.
> Are the keys encrypted with a key derived from a master password?
No, because PBKDFs are not a good mechanism for creating encryption keys. Instead you have an actual random key, and your devices gate access to that key with your device password.
> Does the decryption only occur on the user's device?
Yes, because only the user's devices have access to the key material needed to decrypt. Apple cannot decrypt them.
> Is this master password not reused for the account or has account authentication been changed to use a cryptographic proof produced on-device?
Not sure what you're asking here?
> If the key is ever decrypted on vendor's servers, everything else is theater.
As above the vendor/apple cannot decrypt anything[1] because they do not have key material.
> And this is all of course also excluding auto-updating vendor-supplied authentication code from the threat model because the industry is not ready for that conversation yet.
Don't really agree. The malicious vendor update is something that is discussed reasonably actively; it's just that there isn't a solution to the problem. Even the extreme "publish all source code" idea doesn't work, as auditing these codebases for something malicious is simply not feasible, and even if it were, ensuring that the source code you get exactly matches the code in the update isn't feasible (because if you assume a malicious vendor, you have to assume that they're willing to make the OS lie).
Anyway, here's a basic description of how to make a secure system for synchronizing anything, including key material (secure means "no one other than the end user can ever access the key material, in any circumstance without having broken the core cryptographic algorithms that are used to secure everything").
Apple has some documentation on this scattered around, but essentially it works like this:
* There is a primary key - presumably AES but I can't recall off the top of my head. This key is used to encrypt a bunch of additional keys for various services (this is fairly standard, the basic idea is that a compromise of one service doesn't compromise others - to me this is "technically good", but I would assume that the most likely path to compromise is getting an individual device's keys in which case you get everything anyway?)
* The first device you use to create an iCloud account or to enable syncing generates these keys
* That device also generates a bunch of asymmetric keys and pushes public keys to anywhere they need to go (i.e. iMessage keys)
* When you add a new device to your account it messages your other devices asking to get access to your synced key material, when you approve the addition of that new device on one of your existing ones, that existing device encrypts the master key with the public key provided by your new device and sends it back. At that point the new device can decrypt that response and use that key to then decrypt other key material for your account.
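The enrollment step in that last bullet can be sketched end to end. This is a toy with deliberately unsafe parameters (bare prime-field DH and XOR in place of a vetted curve and AEAD), purely to show the shape of the exchange:

```python
import hashlib, secrets

# Toy enrollment flow: insecure demo parameters. Real implementations
# use vetted curves (e.g. P-256/X25519) and an authenticated cipher.
P = 2**255 - 19   # a prime, but NOT a vetted DH group; demo only
G = 2

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def wrap_key_for(shared: int) -> bytes:
    # derive a 32-byte wrapping key from the DH shared value
    return hashlib.sha256(shared.to_bytes(32, "big")).digest()

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# 1. The new device generates a keypair and publishes its public half.
new_priv, new_pub = keypair()

# 2. An existing, user-approved device wraps the account master key
#    to that public key and sends the result back.
master_key = secrets.token_bytes(32)
eph_priv, eph_pub = keypair()
wrapped = xor_bytes(wrap_key_for(pow(new_pub, eph_priv, P)), master_key)

# 3. The new device unwraps with its private key and can now decrypt
#    the rest of the account's synced key material.
recovered = xor_bytes(wrap_key_for(pow(eph_pub, new_priv, P)), wrapped)
assert recovered == master_key
```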
All this is why in the Apple ecosystem if you lose all your devices, you historically lost pretty much everything in your account.
A few years ago Apple introduced "iCloud Key Vault" or some such marketing name for what are essentially very large sets of HSMs. When you set up a new device, that device pushes its key material to the HSMs, in what is functionally plaintext from the point of view of the HSMs, alongside some combination of your account password and device passcode. You might now say "that means Apple has my key material", but Apple has designed these so that it cannot. Ivan Krstic did a talk about this at Black Hat a few years back, but essentially it works as follows:
* Apple buys a giant HSM
* Apple installs software on this HSM that is essentially a password+passcode protected account->key material database
* Installing software on an HSM requires what are called "admin cards", they're essentially just sharded hardware tokens. Once Apple has installed the software and configured the HSM, the admin cards are put through what Krstic called a "secure one way physical hashing function" (aka a blender)
* Once this has happened the HSM rolls its internal encryption keys. At this point it is no longer possible for Apple (or anyone else) to update the software, or in any way decrypt any data on the HSM.
* The recovery path then requires you to provide your account, account password, and passcode, and the HSM will only provide the key material if all of those match. Once your new device gets that material it can start to recover all the other material needed. As with your phone, the HSM itself has increasing delays between attempts. Unlike your phone, once a certain attempt count is reached the key material is destroyed, and the only "recovery path" is an account reset, so at least you get to keep your existing purchases, email address, etc.
You might think it would be better to protect the data with some password-derived key, but that is strictly worse, especially as the majority of passwords and passcodes are neither great nor large. In general, having a secure piece of hardware gate access to a strong key is better than having the data encrypted with a weak key. The reason is that if the material is protected by that key rather than by enforced policy, an attacker can copy the key material and then brute-force it offline, whereas a policy-based guard can directly enforce time and attempt restrictions.
[1] Excepting things that aren't end-to-end encrypted; most providers still have a few services that aren't E2E, though that mostly seems to be for historical reasons.
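The policy-vs-derived-key argument above is easy to demonstrate: a blob wrapped with a password-derived key can be brute-forced offline at full speed, while a random key cannot. A small sketch (iteration count lowered far below real-world values for demo speed):

```python
import hashlib, secrets

def wrap_with_password(password: str, salt: bytes) -> bytes:
    # Password-derived key (iteration count kept tiny here for demo speed).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1_000)

salt = b"demo-salt"
stolen_blob = wrap_with_password("1234", salt)  # attacker exfiltrated this

# Offline attack: try every 4-digit passcode. No device policy, delay, or
# attempt counter can intervene, because the math runs on the attacker's hardware.
cracked = next(p for p in (f"{i:04d}" for i in range(10_000))
               if wrap_with_password(p, salt) == stolen_blob)

# A truly random key has no such shortcut: ~2**255 guesses on average.
true_key = secrets.token_bytes(32)
```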
>> No, because PBKDFs are not a good mechanism for creating encryption keys
> I'm curious about what you mean by this. Isn't it in part what PBKDFs are designed for?
Password-based key derivation functions start with the assumption that some entropy is provided by the user. Which means that the entropy is typically of awful quality. A PBKDF does the best it can with that low entropy, which is to make it into a time- and maybe space-expensive brute-forcing problem. But a PBKDF is starting with one hand tied behind its back if the user-supplied entropy is "password" or "hunter2." If we aren't burdened by that assumption, then we can generate high-quality entropy -- like 128 or 256 bits of CSPRNG-generated noise -- and merely associate it with the user, rather than basing it on the user's human-scale memory.
PBKDFs also generally assume that users are transmitting their plaintext passphrases to the server, e.g., when you HTTP POST your credentials to server.com. Of course, browsers and apps use transport security so that MITMs can't grab the passphrase over the wire, but the server actually does receive the phrase "hunter2" at some point if that's your passphrase. So again, it's a rotten assumption -- basically the foundation of most password-database compromises on the internet -- and PBKDF does the best it can.
If you remove that assumption and design a true asymmetric-encryption-based authentication system, then you don't need the obfuscation rounds of a PBKDF because the asymmetric-encryption algorithm is already resistant to brute-forcing. The script kiddie who steals /etc/passwd from a server would effectively obtain a list of public keys rather than salted hashes, and if they can generate private keys from public keys, then they are already very wealthy because they broke TLS and most Bitcoin wallets.
Think of passkeys as a very user-friendly client-side certificate infrastructure. You wouldn't let your sysadmin base your enterprise website's TLS certificates on a private key derived from their dog's birthday. You wouldn't let users do that for their certs, either.
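The "public keys instead of salted hashes" model can be illustrated with a toy Schnorr identification protocol. The numbers here are demo-sized and completely insecure, but the shape is the same one real passkeys use with ECDSA over secp256r1: the server stores only the public key, verifies a proof, and never sees the secret.

```python
import secrets

# Toy Schnorr identification: demo-sized numbers, NOT secure.
P, Q, G = 23, 11, 2            # G generates a subgroup of order Q mod P

x = 7                          # private key: stays on the user's device
y = pow(G, x, P)               # public key: all the server ever stores

# One login: the device commits, the server challenges, the device responds.
r = secrets.randbelow(Q - 1) + 1
t = pow(G, r, P)               # commitment sent to the server
c = secrets.randbelow(Q)       # random challenge from the server
s = (r + c * x) % Q            # response computed on-device

# Server-side verification: g^s == t * y^c (mod P)
assert pow(G, s, P) == (t * pow(y, c, P)) % P
```

A database leak hands the attacker `y`, which is exactly as useful as a public key in `/etc/passwd`: recovering `x` from it is the discrete-log problem (at real parameter sizes).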
sowbug has a more detailed answer, but the TL;DR is that PBKDFs were considered OK a long time ago, before the security implications were really understood. Essentially, passwords are low-entropy in practice (e.g. a person _could_ make a 10+ word password, but they're not going to for a password they have to enter frequently).
You're much better off using the password to gate access to a truly random key, though that of course immediately raises the question of how you store the true key :D Modern systems use some kind of HSM to protect the true keys, and if they're tightly integrated like the SEP (or possibly the SE, I can never recall which is which) on Apple hardware, they can simply never expose the actual keys, only provide handles, and have the HSM encrypt and decrypt data directly before it's seen by the AP.
tl;dr: Yes, and further they're only decrypted using the secure chip on the device, so the vendor supplied authentication firmware can't be updated without user interaction/approval.
From a post linked in the article:
> Passkeys in the Google Password Manager are always end-to-end encrypted: When a passkey is backed up, its private key is uploaded only in its encrypted form using an encryption key that is only accessible on the user's own devices. This protects passkeys against Google itself, or e.g. a malicious attacker inside Google. Without access to the private key, such an attacker cannot use the passkey to sign in to its corresponding online account.
> Additionally, passkey private keys are encrypted at rest on the user's devices, with a hardware-protected encryption key.
> Creating or using passkeys stored in the Google Password Manager requires a screen lock to be set up. This prevents others from using a passkey even if they have access to the user's device, but is also necessary to facilitate the end-to-end encryption and safe recovery in the case of device loss.
The question I always ask to figure out how things work: What happens if I lose my phone?
Vendors trying to peddle a solution will always try to answer this question in a way that doesn't say "well, in that case you're screwed". Any answer except "you're screwed" means there is some kind of potentially vulnerable recovery process, and the description of how that process works usually gives you an idea of how secure it is (or at least a starting point to ask more questions).
If you lose your phone (and all other devices you might have), apple does have a secure (as in apple cannot access it) last ditch recovery path (see my other wall of text/word soup answer).
But in the absence of that, the data is gone. It's one of the big concerns that comes up in response to "E2E everything": people are not used to the idea that forgetting your password (or losing your devices in a 2FA world) means the data is actually irrecoverable, and it's not just a matter of resetting the account password (e.g. you can't go into a store with your ID to "prove it's you", because that isn't the problem).
The final step is a key escrow authority that will store your private key and produce it to you if you can prove your identity with a government ID. It is not enough to store it in cloud storage (which Google, Apple, or someone else could deny you access to), or on your own device, which you could lose or destroy (which is why backup hardware tokens are always recommended for U2F MFA); you need the ability (but not a requirement) to bind cryptographic identity to IRL identity.
Of course, one doesn’t need to utilize this, but you’re SOL without a recovery mechanism of last resort (unless individual sites and services have their own recovery processes to re-provision a user who no longer has access to their cryptographic credentials).
FIDO credentials have some baked in assumptions about the cryptographic properties they were generated with, that an RP can use to reason about credential strength, and are designed so that unwrapped private keys are not handled outside of an authenticator device. These assumptions make it undesirable for sync fabric vendors to interoperate.
You are correct that a Passkey ecosystem has an inherent risk of being locked out of cloud storage / sync, and that a third-party escrow system is a mitigation against that. But it's not sufficient. Since sync fabrics are not interoperable, you'd end up with keys that could, at best, only be imported into authenticators of the same ecosystem you were denied access to. This is presumably not the outcome you're looking for.
I believe some sort of mechanism to assert credential strength at presentation time rather than generation time, and/or some sort of mechanism for TPMs/Secure Elements/Secure Enclaves to establish trust and import trusted credentials from a different authenticator vendor would be needed. This would allow vendors that don't control the hardware (i.e. are not Apple/Google/Microsoft) to build something like a "1Passkey" without having to implement their authenticators in software (i.e. a Virtual Authenticator), and you could keep your wrapped passkey store in escrow with any third party of your choosing.
For the vast majority of services, there's no value or even negative value in binding my "identity" to some sort of government ID.
You accept Google and Apple might deny you access but then you just blithely assume the US Federal Government (for example) would never do so, which shouldn't pass the laugh test.
I can imagine it being valuable if my government wants to help get me back in to, say, my bank accounts if I somehow lost all my credentials (e.g. my home burned down suddenly but I somehow escaped with nothing). But I don't feel like my GitHub, Gmail, Patreon, etc. make sense in this context. If my friends can lose a phone every year or two and make a new god-damn account, I think "My home burned down and I have nothing" is a good enough reason.
GitLab's attitude for unpaid accounts ("too bad, just make another one") seems appropriate for almost everything. If tialaramex never wrote another HN comment, and instead tlrmx or tialaramex2 or whatever began posting, who would even care?
HN participants, for whatever reason, approach these challenges as “but this isn’t a problem I have.” You’re the builder (broad strokes and wild assumption), but there are far more citizens (hundreds of millions at least) who are simply consumers of these systems. They are your grandparents, your parents, your siblings, your children. Passkeys are rolling out internet wide to all sorts of critical services people rely on, and they’ll need a solution if they lose their cryptographic identity assertion, because you can’t always just create a new account when you lose access (either because data, finance, or authority is tied to that account). Loss of gitlab access is inconsequential compared to losing access to your email, your bank account, etc.
In the United States the US Postal Service would be a great fit for a job like this. They already have good infrastructure for identity verification and physical distribution.
I wouldn't want escrow of private keys, however. I'd rather the USPS just act as a certification authority that provides strong guarantees of identity verification.
Yes, definitely. USPS + Login.gov could act as trust anchors, with cryptographic keys reprovisioned upon proofing, versus storing them. I am open to whatever is the optimal balance between security and practicality.
> The Postal Service Reform Act of 2022 has recently expanded the Postal Service’s ability to provide identity verification to all levels of government. A window of opportunity is currently open for USPS to contribute to closing gaps in government identity verification processes.
Except that here in the US a non-trivial number of politicians of a particular persuasion[1] actually believe that government issued ID, of any kind, is the "mark of the beast". There's a reason that Real ID had a lot of push back. Having USPS, already a political bogey-man for that same crowd, become a holder of "identity" is probably going to face a lot of pushback.
I can assure you that a lot more constituencies than just the Christian coalition are concerned about universal government-issued IDs as passports for online participation.
I think it's mostly because easily obtainable government ID will make it easy for Black and Latino people to register for vote, which means Republicans will never win any election ever again :)
because simple, easy-to-use ID lowers the barrier for demanding ID in more places and attacking anonymity. the easier we make it to demand id, the more people will demand it. wanna use a fake name/not divulge your identity? doing something politically sensitive where you may need some protection? just like your privacy? tough shit show your id or GTFO.
Shouldn't we instead solve the problem that you might "need some protection" because you're doing something political, rather than relying on security-through-obscurity? That honestly doesn't even really work anymore, IDs or no IDs; there are so many other ways for governments to track people of interest these days.
i disagree with the assumption that it's a solvable problem. people remain fallible and the correct solution is to mitigate our downside by minimizing state power. not disagreeing with the idea that we also need to work on lots of other ways to reduce its power and ability to track people btw.
i don't want this because then more websites will start expecting strong identity verification. the last thing we need are more attacks on anonymity. at least with a private key i can say i'm joe biden or crazy horse or whoever. i may not be eligible for "government escrow" but who cares.
My bank is already regulated and required to have strong identity verification for some operations. They won't let me use any sensible multi-factor authentication, however. Requiring that they use such a government-mandated authentication infrastructure would be a major "win" for my peace of mind.
Except for the fact that a mobile phone does not, as far as I'm concerned, fall in the "user's own device" category, as much as Google would like you to believe it does.
"on your device" doesn't mean that you can actually access the key though. It might be stored in the secure enclave/TPM or otherwise unavailable if the phone has a locked bootloader.
So what? Maybe it is also pinned to your Google account, so even though it's a private key that only lives on your device, that doesn't mean you can somehow transfer it. It could simply be one of many components that make up the effective key.
For all the talk of "one app to rule them all" (which is an awful idea) this is a step closer to that.
For all its faults, crypto has one thing right -- not your keys, not your stuff. I get that doing keys/passwords is hard, but the best thing in the long run is for them to stay in the hands of the user.
And if not, the holder of the keys needs to be someone you can easily hold accountable, i.e. either fire, or arrest, or sue if they get it wrong.
> For all its faults, crypto has one thing right -- not your keys, not your stuff
Erm, this isn't really an aspect of cryptocurrency, per se. It's more of a general rule that informed the initial thinking around cryptocurrency. In fact, most users of cryptocurrency seem quite content to give up cryptographic custodianship.
If you went back a similar time to the nascent web/cloud/etc, you'd find plenty of similar sentiment about remote software and storage. It's just that individual autonomy loses out over time due to convenience created by the massive investment in the surveillance economy.
That's a fair point, in that I should have said something like "crypto-fundamentalists." But the idea is the one I wanted to get across, and I have mostly the same feelings about remote software and storage. (I tell students: priority one in your life is, if there's something digital you care about, e.g. photos, to get at least one copy onto something you can hold in your hand.)
Is it nanny-ish just because it makes it simpler for end users? Fairly certain most users are not interested in managing their own key sharing infrastructure.
It's built on the same technology as FIDO keys, so if you want to take control of it yourself, just use a hardware key.
Less customer support for dealing with hacked accounts. Same reason places support and even push 2FA, otherwise what is their incentive to support 2FA?
Fortnite even has a free dance that you can only get by enabling 2FA.
In the second instance, you are controlling the where and how of your keys being backed up. If you are smart you will have backed up your keys to multiple locations, for disaster recovery. One of the fundamentals of privacy is having control of your data, which the first option does not provide.
What is concerning about giving encrypted keys to someone? If I give my encrypted key to you, right now, I retain control of my data. One of the fundamentals of encryption is that you can freely share the ciphertext without giving up control of your data.
I'm unclear on what you think they could do. Is your idea here that Google is so smart that they can break end to end encryption? If so, we've got bigger problems.
It isn't fair to presume that everyone shares your lack of knowledge on a subject, and it's simply incorrect to presume that because you don't understand something, it cannot be safe or reliable.
What they say today about end-to-end encryption seems like it should work fine from a technical point of view. It is entirely possible that Google is very good about this, and when implemented, it might work perfectly as stated today.
But I'm not talking about incorrect or correct and I don't care about fairness in presuming whoever's intelligence either, because the thing I'm talking about is more important, which is risk.
Large companies taking on big tasks that you don't pay them for is undeniably risky for many reasons. One, they screw it up today. Two, they don't screw it up today but they change it tomorrow. We know this because many of these companies have done things like this before.
No, those are the same thing. The "not your keys" thing in crypto is exactly the reason they tell you NOT to store your crypto with e.g. Coinbase/Binance. Just use them as on/off ramps, but have your own wallet.
And BigTech's cloud (who will have no problem sharing it with the authorities). And when all your keys are on the device, it also becomes a lot easier for the government to access all your internet accounts by getting access to the device.
Who cares if it is "end-to-end encrypted" if the device with all your keys / credentials can be easily used to access all your online accounts? (And no online service forces me to use a password manager).
Pretty sure there's been talk on the Bitwarden community forums about them adopting support for using it as a provider. I assume once that's available you might start to see it move into Vaultwarden. But, that's sort of the risk you run with using a 3rd party to a 3rd party...
At least with Vaultwarden, it's open source and running locally with data stored on a server in my house, so if they misbehave, I could easily fork or switch projects. If Android doesn't allow or otherwise hobbles 3rd party password managers, there isn't much recourse.
That's fair, but Bitwarden itself is also open source... though my understanding is that it's significantly more difficult to get up and running than Vaultwarden.
Can we have this but self-hostable and open source, please? Something like Bitwarden that you can stuff onto your own device? I know there are hosted services for handling auth on the server backend, but what about the other way around?
I use Krypton but that's not maintained (and already broken on some websites like Github). I trust the secure storage module of my phone and I trust my computer's TPM, unlike many other Linux users; surely it should be possible to integrate with the OS somehow to make it secure, right? The last example I saw used USB over IP to inject a virtual FIDO device, which works great, but the implementation is clearly not ready for prime time.
Google's auth is getting increasingly frustrating. Recently when I logged in with TOTP 2FA, I had to also open up YouTube on another device and click approve. What's the point of 2FA if they're just going to ignore it?
Also along these lines, there's a really neat little library called "SimpleWebAuthn" that also supports Passkeys, and is basically a small dependency-free set of client Javascript (for initiating the user flow) and a small JavaScript server component to go with it: https://simplewebauthn.dev/
The code is pretty simple and a good place to start, as is AuthCompanion, if you wanted to roll your own library in your language of choice, or something very custom. I found both useful recently.
> A passkey on a phone can also be used to sign in on a nearby device. For example, an Android user can now sign in to a passkey-enabled website using Safari on a Mac. Similarly, passkey support in Chrome means that a Chrome user, for example on Windows, can do the same using a passkey stored on their iOS device.
> Since passkeys are built on industry standards, this works across different platforms and browsers - including Windows, macOS and iOS, and ChromeOS, with a uniform user experience.
I see no mention of Linux in these examples, which tells me that users having access to their keys is not a primary concern for these implementations?
For those of you who want something like this with Firefox on Linux, the virtual-fido project might provide a decent alternative, it uses Linux's USB-over-IP support to provide a fake FIDO device, and Firefox supports FIDO devices for WebAuthn:
I would use this in addition to those. Instead of having to buy two Yubikeys I can buy one and use a software solution as well.
Since I already use a phone capable of doing the same thing, let my phone be my main authenticator, and then I can use a Yubikey as a backup.
It's not like one is necessarily better than the other, except that you already carry a phone, and phones are capable of acting as a hardware device that works with WebAuthn. No need to carry a second device, or pay for one, for that matter. And at least with Apple's solution, it'll sync over iCloud Keychain.
If you're happy with Yubikeys, nothing changes. But for the average person, this makes WebAuthn an option without having to buy any hardware or carry something you are more likely to lose because you don't understand the intricate details of how the thing works. I wouldn't expect my parents to understand how a Yubikey works well enough to know it should be used as a pair, for backup purposes, but that is a barrier to entry they don't need to worry about now.
"To address the common case of device loss or upgrade, a key feature enabled by passkeys is that the same private key can exist on multiple devices. This happens through platform-provided synchronization and backup."
Thus, unlike a FIDO2 key, you don't have to visit every online service to tell it about the new redundant keys you add.
The rest of the security article linked by madjam002 goes into detail how Google implements their version of that backup. It's a bit like Keybase in the sense that your other devices act as keys to unlock the backup for new devices.
Passkeys will be supported, with no new user behavior, by roughly a billion devices currently in use. That is what makes them better: a billion-plus devices already have support for this.
This is public-key-crypto-based authentication for the average user who will almost certainly never buy a security key but who probably owns a device that offers secure identity verification (laptop, phone).
Yubikeys are great but they're super niche. Among Android users alone there might be a billion people who will never buy one.
At the very minimum, one undeniable technical advantage Passkeys have -- that they share with their foundation, WebAuthn -- is that Passkeys are unphishable.
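Unphishability comes from origin binding: the browser, not the page, records which origin requested the assertion, and the relying party rejects signatures made for any other origin. A simplified sketch, where HMAC stands in for the real per-site ECDSA signature and the client data format is simplified from WebAuthn's actual clientDataJSON:

```python
import hashlib, hmac, json, secrets

credential_key = secrets.token_bytes(32)  # per-site key held by the authenticator

def sign_assertion(origin: str, challenge: bytes):
    # The BROWSER fills in the origin; the page cannot lie about it.
    client_data = json.dumps({"origin": origin, "challenge": challenge.hex()}).encode()
    sig = hmac.new(credential_key, client_data, hashlib.sha256).digest()
    return client_data, sig

def verify(expected_origin: str, challenge: bytes, client_data: bytes, sig: bytes) -> bool:
    good = hmac.new(credential_key, client_data, hashlib.sha256).digest()
    data = json.loads(client_data)
    return (hmac.compare_digest(sig, good)
            and data["origin"] == expected_origin
            and data["challenge"] == challenge.hex())

challenge = secrets.token_bytes(16)

# Legitimate login from the real site: accepted.
cd, sig = sign_assertion("https://example.com", challenge)
assert verify("https://example.com", challenge, cd, sig)

# Phishing site: the browser reports the attacker's origin, so the relying
# party rejects the assertion; there is nothing for the user to leak.
cd, sig = sign_assertion("https://examp1e.com", challenge)
assert not verify("https://example.com", challenge, cd, sig)
```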
Don't all FIDO2 Yubikeys support WebAuthn? They have the advantage that they can't be cloned/synced/etc. That might be an inconvenience for some, but for me it's an advantage.
I'm noticing very little discussion about the user aspect, and I say that with non-savvy users in mind. I run a mid-sized web app/community where I've been supporting such users for a long time.
Right now, I offer a classic login, and a few social providers. You'd think this is straightforward to support, but about 70% of support requests consists of the endless ways in which users can mess this up.
"Can't get in"
Try recover password. Email didn't come. Because they entered the wrong email. Correct email this time. No wait, think I signed up with a social account, not sure which one, have many. Login worked. Wait now it doesn't again (saved browser password did not update).
This is just the tip of the iceberg. This new solution, whatever merit it has, is going to be additive. It won't replace anything, it's yet another way to log in, if at all, as it depends on websites implementing it and about 90% of the web is basically not maintained.
So it's only adding complexity/confusion specifically to these users, which I consider to be the vast majority. In turn leading to more support headaches.
The flip side is that it’s incredibly easy to use, faster, and means you don’t have to worry about forgotten passwords or phishing. It’s like an order of magnitude faster than less secure MFA options, too.
> Well, no, it isn't. Clearly you don't do old people tech support.
Actually, I do. In fact, the largest system I work on predominantly serves older people with disabilities. I would strongly suggest that you consider whether your assessment of the relative difficulty levels is skewed by familiarity with the existing problems of password systems.
> No email identifier or thing to remember. How can I now log on at my other device?
> With a QR code? What on earth is that?
You still use your email address. This replaces passwords, not SSO, and QR codes are only used in some cases for some implementations where you might have restrictions on things like network connectivity. Try the demo here:
1. Enter your email address
2. Select the option to use a token
3. Approve your device's prompt (on iOS, this is a system dialog which explains that it's stored on all of your devices using iCloud Keychain and the site owner doesn't get any of your PII)
Note what's not there: picking a secure password, setting up MFA, remembering that password, and entering it reliably every time. You also can't get phished, which seems like something a lot of people would like.
We're familiar with the friction around passwords, but consider how many hours a day humanity spends creating passwords, resetting them, dealing with typos, etc. If you support older people, or especially those with disabilities, that process is a lot harder. For example, entering a password that meets most sites' complexity requirements over a screen reader is terrible. Most non-WebAuthn forms of MFA are pretty painful that way, too, because they require someone to switch apps, copy/paste or remember a code, switch back, etc. before it times out.
This won't be perfect on day one, I'm sure, but it's already easier and faster to use and that's only going to continue because now the system can be improved by the browser vendor rather than needing every site to agree on improvements.
Q: how many of you will add support for Passkeys to your application? Is it worth the effort of adding yet-another-way-to-login for your users? It will be a long time before you could use it as the ONLY way to log in. You will need to figure out how to enable your existing users to convert to Passkeys. Apple has a glide path for converting username/password to Passkeys, but not for other mechanisms.
I believe in letting the user choose whatever way is best for them to log in -- and in taking that burden off of the developer. If you want to learn more, check out the Show HN post on Hellō I wrote this morning. https://news.ycombinator.com/item?id=33177705#33182379
It'd be a damned good time for someone to start building a competing Google Sync implementation, server, and passkey implementation into Chromium.
For a while this was largely built around XMPP but now the stock Google implementation is custom.
I'd love a refresher crash course on what's in Chrome that's not in Chromium. It's been a long time since I used Chromium, but I think when I did it seemed to have an as-best-I-could-tell working Google Sync implementation.
It's hard to imagine a scarier project to fork. I don't think there are a lot of resources out there for DIY'ing a Chromium fork.
I wish WebAuthn had a standardised HTTP header or TLS extension so it would be usable without JavaScript; currently every website has to implement its own login protocol in JavaScript.
Yes, the tutorial is good, but as is often the case it only discusses the server and JavaScript sides of the standard and doesn't talk about the client device side.
First, note that the article you linked is pretty old — the people who build biometric systems have added countermeasures in the last couple decades. They're definitely not perfect but it's not an especially easy attack since it's personalized and doesn't scale.
The first thing to remember is that your fingerprints / face scan are not the identifier for your passkey. They are used by the local device to unlock its secret store but the actual keys are regular crypto keys and the remote website never sees any of them. The interface also does not provide access to the private keys ever, and it should be rate-limited so it's not “get all of the keys” but the much slower “use the phone I stole to hammer out requests to different websites, mashing that sensor every second or two”. That means that whoever stole your phone & forged your biometrics is in a race with you revoking their access, but when you do it won't matter that they have your biometrics unless they can also steal your new phone (stop pissing off the Mossad).
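The exact lockout policy is up to each platform, but the rate limiting described above generally takes the shape of an escalating-delay scheme. A sketch with illustrative thresholds (not any vendor's actual numbers):

```python
# Illustrative escalating-lockout policy (NOT any vendor's real numbers):
# a few free attempts, then exponentially growing mandatory delays, so an
# attacker with a forged print gets very few tries before you can revoke.
def lockout_delay(failed_attempts: int) -> int:
    """Seconds to wait before the next biometric attempt is accepted."""
    if failed_attempts < 3:
        return 0                                        # small grace budget
    return min(2 ** (failed_attempts - 3) * 30, 3600)   # cap at one hour

assert lockout_delay(0) == 0
assert lockout_delay(3) == 30     # first delay after the grace budget
assert lockout_delay(5) == 120
assert lockout_delay(20) == 3600  # capped
```

With delays like these, a thief gets only a handful of tries per hour, which is why the race heavily favors the owner hitting "revoke" first.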
The other thing to consider is what your threat model is. If you're worried about someone stealing your phone and building a realistic model of your fingerprint or scan of your facial structure, you have to ask what the alternatives are. For example, it'd be a LOT easier for an attacker to use a hidden camera or drone to record you entering your password — not using biometrics means you're typing it frequently, for example — and you're also at risk for all of the scenarios which passkeys are immune to (credential reuse, phishing, weak passwords), which happen to be by far the most common way people are compromised. Very few of us have to worry about targeted attacks by skilled adversaries, and if you are worried about that you probably need to move or hire a bodyguard more than anything involving infosec.
> The interface also does not provide access to the private keys ever
Both statements appear all over this thread: "the private key is inextricably bound to the device" and "the private key can be backed up". This seems to be an attempt to declare both the theft problem and the lost-device problem solved. Except they are mutually exclusive: if the key is bound to the device, I can't back it up. If I can back the key up, it can't be bound to the device and can be stolen.
There is one way to get both, which I assume Google is offering here: Have the key inaccessible to the user of the device, but allow some cloud provider access, so it can be automatically backed up to the cloud.
That's all fine and good, but now you're dependent on the cloud provider not playing games with you. And you still have to authenticate to the cloud provider in some way, e.g. to add a new device. (And have to do that in a way that someone who stole your phone couldn't just do the same)
> The other thing to consider is what your threat model is. If you're worried about someone stealing your phone and building a realistic model of your fingerprint or scan of your facial structure, you have to ask what the alternatives are. For example, it'd be a LOT easier for an attacker to use a hidden camera or drone to record you entering your password
You can also view this from a different angle: Phones get lost all the time, sometimes stolen. It's statistically not that unlikely that your phone will get lost at some point and the people who find it may not always have the best intentions.
If a phone is in the wrong hands, it's much easier to gain access to the phone using fingerprints than it is using passwords: There is a good chance some usable fingerprints from you are still on the device cover itself. Sure, an attacker would need some resources to take the fingerprints and build something that can be used with the phone's sensor, but if enough people use fingerprints for auth, attackers will streamline this step quickly.
That's bad enough if it grants "only" access to the phone itself, but now it would also grant the attacker access to each and every online account.
Sure, you could race to deactivate keys - if you remembered to setup alternative login methods before, because otherwise it's you now who cannot login anymore.
> There is one way to get both, which I assume Google is offering here: Have the key inaccessible to the user of the device, but allow some cloud provider access, so it can be automatically backed up to the cloud.
It's a bit more complicated than that: if you enable synchronization, Google or Apple have stored encrypted blobs but they don't have access to the data. To decrypt it you still need access to one of your linked devices.
The other thing to remember is that the keys can be exposed only for the time it takes to install them: you get a new phone, it downloads the blob and has one of the other devices approve decryption, and that key is immediately re-encrypted locally. That makes it a lot harder to get a copy because the attacker has to compromise one of the registered devices enough to attack that process rather than simply getting access to an unlocked phone.
Finally, it's not as simple as one-key-to-rule-them-all: the WebAuthn handshake has both the synced key and a device key. That means that a high-security application could still do additional checks any time they see the synced key used with a previously-unseen device key.
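That last point can be sketched concretely: a relying party could track which device keys it has seen alongside a given synced passkey, and demand step-up verification whenever a new pairing appears. This is a hypothetical data model for illustration, not an actual WebAuthn API:

```python
# Hypothetical relying-party logic: the WebAuthn handshake surfaces both
# a synced passkey identifier and a per-device key identifier, so a site
# can require extra verification the first time a new device shows up.
seen_devices = {}  # synced passkey id -> set of device key ids seen so far

def check_login(passkey_id: str, device_key_id: str) -> str:
    devices = seen_devices.setdefault(passkey_id, set())
    if device_key_id in devices:
        return "allow"
    devices.add(device_key_id)
    return "step-up"  # known passkey on a brand-new device: verify further

assert check_login("pk-alice", "dev-phone") == "step-up"   # first sighting
assert check_login("pk-alice", "dev-phone") == "allow"     # familiar pairing
assert check_login("pk-alice", "dev-laptop") == "step-up"  # synced to new device
```

So even if a synced key were exfiltrated, using it from an unfamiliar device is a signal the site can act on.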
> If a phone is in the wrong hands, it's much easier to gain access to the phone using fingerprints than it is using passwords: There is a good chance some usable fingerprints from you are still on the device cover itself. Sure, an attacker would need some resources to take the fingerprints and build something that can be used with the phone's sensor, but if enough people use fingerprints for auth, attackers will streamline this step quickly.
Again, I would suggest looking at the decades of prior art (or considering that it's moot for the majority of Apple users using infrared facial scans with liveness checks). It's actually pretty hard to get usable prints since they need to have some structure, not just a single flat image, and people don't commonly build phone cases out of things which record that kind of detail.
It's not impossible but the odds are already low even before you consider that the attacker only has a few tries and a time window to accomplish it in. Most people are not interesting enough to launch this type of attack against (think about what it's like trying to make money with that unlocked phone without leading the police to your door) and in a targeted attack looking for something like political dirt they're going to get that password by watching the user type it because not using more advanced systems means they're forced to do so frequently.
> That's bad enough if it grants "only" access to the phone itself, but now it would also grant the attacker access to each and every online account.
Kind of like what happens if they get full unrestricted access to your phone without this and recover all of the saved passwords?
Remember that this is all optional and geared at the vast majority of normal users. If you really think that the risk of someone recovering a fingerprint / facial scan is too high, you're free to continue using hardware WebAuthn tokens like you are hopefully doing right now. What this is solving is the friction around creating secure passwords and not losing them, and getting away from the idea that an email address is the real key to everything.
Note you can't just get the keys even if you have a fingerprint. You would need to maintain continuous access to the device while it is still signed in as the user, and it would be pretty easy for the owner to trigger "find my device", if they didn't already notice you handling it.
Do you know if it's possible to see a list of stored passkeys in Android? I installed the Play Service beta, managed to create a passkey and sign in, but can't see the list of credentials anywhere in the UI.
There will be, in the password manager settings on Android (under Settings, "Passwords & Accounts" and "Google"), but it will unfortunately take a couple more days to roll that out to the beta.
This is all part of the FIDO Alliance, so, a standards based solution that anyone with the wherewithal to implement it can do so. Many password managers have already said they'll be supporting it, as well as major vendors (Google and Apple for instance).
I'm struggling to see your complaint being a valid one. This is basically webauthn, so use a Yubikey or similar device if you wish.
When you say "the FIDO Alliance", you mean the entity that runs a Metadata Service[0] which:
> provides organizations deploying FIDO Authentication with a centralized and trusted source of information about FIDO authenticators.
The aim of this centralized system is to allow revocation of hardware that doesn't meet their unchallengeable opinion of whether you've spent enough money on your device or not. They can similarly require that devices do biometric scanning[1], and be issued by your government, and require you to agree to lengthy (and self-updating) terms of use.
There are actually (at least) two different types of device attestation that FIDO supports[2]. One uses a hardcoded on-device private key, common between 100,000 devices of the same model, which means that an attacker can brick 99,999 other people's devices just by extracting the key from their own device. The other method requires a certificate from a "trusted third-party Attestation CA", which presumably allows a malicious (perhaps government-mandated) CA to spy on (and filter) every login request you make.
This system is like a dystopian parody of the traditional model of web security, which had no need for "authenticators that have a Trusted Platform Module (TPM) onboard", and which only required CAs to be on a list that the user agent is in control of (and users can add their own CAs to). Instead, what FIDO are building is basically DRM for human identity, with all the corruption and failure modes that entails.
This is really no different from switching hardware devices.
Step 1. Sign in using your existing device
Step 2. Add your new device
Step 3. Remove your old device (or keep it for a backup)
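The three steps above boil down to simple set membership on the server: an account holds a collection of registered credential public keys, and rotating devices never touches the account itself. A minimal sketch, with hypothetical names:

```python
# An account's login methods are just a set of registered credentials;
# swapping hardware is add-then-remove, with no change to the account.
account_credentials = {"old-device-pubkey"}  # hypothetical stored public keys

# Steps 1-2: while signed in with the old device, register the new one.
account_credentials.add("new-device-pubkey")

# Step 3: retire the old credential (or skip this and keep it as a backup).
account_credentials.discard("old-device-pubkey")

assert account_credentials == {"new-device-pubkey"}
```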
And if you feel that's going to be a big issue for you, use a 3rd party software based tool like Bitwarden, 1Password, or other tools that have indicated they'll be supporting this. That info will sync between those software based tools and allow you to use whichever devices you wish, across multiple platforms with minimal effort.
That's possible, unless you are switching device because your previous one broke (or was destroyed/stolen/lost).
I imagine that Apple has very precise data about the reasons why (and situations where) people switch from iOS to Android, and making that switch as unthinkable as possible is part of their user-retention efforts.
Sure, but here's the deal with this. Even with Yubikeys it has always been recommended (as long as I've been involved in these types of discussions anyway) that you should have two of them. If you lose one you have one safely secured that can get you into any service you need to.
This is my general stance on it as well, and one that I think I would still strongly recommend even in this new Passkeys era. That would cover situations where your device is damaged and you can't easily turn the old device on.
Part of this is solved by these keys being synced within each vendor's ecosystem (via iCloud Keychain on Apple's side, for instance), so you'd just need to get a new Apple device, sign into your iCloud account, and provide the password for iCloud Keychain, then you're off to the races. But if in the middle of this you opt to switch from Apple to Android, you're SOL unless you have a backup.
Just my two cents on it, I don't think any of this completely solves the need for backups, but for some portion of people it probably does, and that's those that are unlikely to switch between platforms.
If you don’t trust yourself to have backup keys, you use the Google or Apple ecosystem. As long as you can get back into your Google or iCloud account, you can get back into every other passkey-protected website. You can also use third-party “cloud” password managers if you prefer.
WebAuthn lets you dial the convenience/security tradeoff exactly however you prefer. I’ll be using hardware tokens, but I’ll be telling non-technical people to use their existing smartphones.
You realize you can have multiple devices for Passkeys, right?
It’s webauthn. Which means you can have one or more of the following, in any mix you wish: yubikey, iPhone, Android device, password manager that has said they’ll support this (1Password, Bitwarden, Dashlane, and probably more).
Password managers will sync the private keys between devices as well. So, as long as you can access your password manager you should be able to use that.
You have choices now, whereas before you had basically one.
Yubikey offers you a hardware device specifically for this purpose. It can't be copied, and it really is the definition of "something you have". It has pros and cons, one of those cons being that if you want to use them you are stuck having multiple devices, one as a backup.
Don't like that con? Well, play the game a little and you have additional options coming, such as the solutions from password managers and platforms like Apple and Google. Add your sites to a password manager and it'll sync between devices. You basically only have to add one single thing (your password manager), and as long as you have password manager access you can sign in to those sites anywhere your password manager is available; where it isn't, you gain the QR-code passkey option that is being added.
You can mix and match this to your hearts content. Want to use a Yubikey as a backup? Add the device to your sites, stash it away where necessary. Yes, the con of having to add it to each site is still there but it is an option. Want to use all of these? Sure can. Add your iOS device, your password manager, and your Yubikey.
Want to only use one? Just add that device. But you might be foot gunning yourself without backups depending on which you use.
Stop being difficult and just use your head a little.
Not at all. The options truly suck and make passwords seem like a godsend. And they are, until you're forced onto these crappy solutions that only help you get locked out of your digital life and leave you with no control over your logins.
I truly hope there is something better coming out of this because this is a nightmare.
Yes, and it still is. I have been using Gmail since 2004 and I very rarely touch the web or mobile UIs. 99% of my interactions with it are through SMTP and IMAP. I don't think that Gmail has fundamentally changed in this sense, it has just gotten very popular.
The various incarnations of their chat services are a better example. Gtalk used to have great XMPP support, even federation for a while. All remnants of that are gone now, so I had to stop using it.
WebAuthn is an open standard. It's a way for you to prove to a website that you have a specific private key. There's no lock-in, because the key is portable (unless you don't want it to be). There's no privacy issue, because the key is unique per website. There's no security issue, because it's unphishable and can be unstealable if it's in hardware.
If you don't like Google or Apple, use your favorite password manager. All it will have to keep is a private key per website, and you're done. No usernames or passwords. You visit a site and are automatically logged in with a browser prompt.
This is amazing, it's the best thing that's ever happened to authentication. It's something the end user cannot have stolen. Can we be a bit more excited about it?
EDIT: If you want to try it, I just verified that https://www.pastery.net/ works great with Passkeys even though I haven't touched the code in a year.
That means that django-webauthin also works great with Passkeys, for you Django users:
https://pypi.org/project/django-webauthin/
Also, the latest Firefox on Android seems to work great.