While the practice of not updating PBKDF2 iterations is bad, I think with LastPass the problem was more the aggregate of many things, a sort-of death by a million cuts. Because truthfully, the PBKDF2 iterations count issue was relatively unimportant. Some good conjecture about it:
Both Bitwarden and LastPass should improve this situation by making the iteration count automatically increase over time. For LastPass though, there are... a lot of concerns. The breach, how it was handled, persistent issues with the security of their browser extension (many, including an RCE at one point), and of course the fact that not everything in the vault is actually encrypted.
KeePassXC or 1Password may prove to be better options from a strict security practices standpoint, but from what I've seen I don't suspect Bitwarden has a pattern of bad security practices overall. It does seem like there are opportunities to make it better, though.
We took a similar approach to passphrase stretching in EnvKey v1 [1] (EnvKey is a secrets manager, not a password manager, but uses end-to-end encryption in a similar way). We used PBKDF2 with iterations set a bit higher than the generally recommended levels, as well as Dropbox's zxcvbn [2] lib to try to identify and block weak passphrases.
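For illustration, a minimal sketch of that kind of zxcvbn gate, using the library's documented API (the score cutoff of 3 is an arbitrary choice for the example, not necessarily what we shipped):

```typescript
import zxcvbn from "zxcvbn";

// Block passphrases scoring below 3 on zxcvbn's 0-4 scale.
// The cutoff here is illustrative, not EnvKey's actual setting.
function assertStrongPassphrase(passphrase: string): void {
  const result = zxcvbn(passphrase);
  if (result.score < 3) {
    const hint = result.feedback.warning || "Passphrase is too guessable.";
    throw new Error(`${hint} ${result.feedback.suggestions.join(" ")}`.trim());
  }
}
```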
Ultimately, I think it's just not good enough. Even if you're updating iteration counts automatically (which is clearly not a safe assumption, and to be fair not something we did in EnvKey v1 either), and even with safeguards against weak passphrases, using human-generated passphrases as a single line of defense is just fundamentally weak.
That's why in EnvKey v2, we switched to using high entropy device-based keys for our root encryption keys. It's a similar model to SSH, except that on Mac and Windows the keys get stored in the OS keychain rather than in the file system. Also like SSH, a passphrase can optionally be added on top of the device key.
The downside (or upside, depending how you look at it) is that new devices must be specifically granted access. You can't just log in and decrypt on a new device with only your passphrase. But the security is much stronger, and you also avoid all this song and dance around key stretching iterations.
What are your expected failure modes? Consider this pretty common scenario: the user has lost their one and only enrolled device. How can the user reinstate access?
I suppose the user must have printed the real root password / secret key / whatever and put the paper somewhere in a safe. That password should allow the user to reinstate access when all hardware is lost. But it should not be required daily.
Good point. We have recovery keys for this. These are 12-word random phrases generated from a 1952-word list (about 131 bits of entropy).
They require email authentication to redeem, so a recovery key by itself isn't sufficient to access an account, though of course they do need to be protected.
An org admin can also re-invite a user that loses access this way. The only scenario where you're really in trouble is if you're the only admin, you lose access to your only authorized device, and you lose access to your recovery key.
Apple's solution for this with their new hardware 2FA stuff is to not let you turn it on without registering two hardware keys. So perhaps forcing two enrolled devices might work? (I have all my TOTP seeds on my iPhone and my iPad, for that reason.)
Tying my entire life (every account etc) to two devices that can be remote-killed by a corporation which is not your friend is definitely a risk that I would not take.
You really keep no access to your life that isn't under someone else's control?
I was under the impression that the user is using 2 iOS devices as key generators for their 2FA. And Apple can certainly blacklist their devices, delete their account, and remotely disable their devices permanently.
If they are just registering yubikeys to their iphone and ipad so they can use their apple account... then sure, welcome to 2012.
The obvious question is: what if you lose both keys?
Deep down, I think it's something that requires cooperating with real world entities (governments, banks, basically real world trust), not something that tech bros seem to want to do for ideological reasons
It's even worse now, no company truly locks you out and with enough noise on social media a real human can get you your access back even if you don't have your Yubikey. So it's always vulnerable to social engineering.
Of course I'm not talking about just relying on SIM. Maybe we can stop with the knee-jerk reaction and actually think of how to add better ways to do it. Government IDs could enter as some piece of the puzzle, trusted contacts, yeah, even SIM... At the very least out here in the real world I have some recourse if my ID is stolen, and I don't have to worry about having to buy all my stuff back because I lost my keys.
As I understand it, Keybase actually has a very interesting concept of spreading key materials over your social media. So it's not even unprecedented.
Only because companies are trying to do this human verification on the cheap. SIM-swapping-style attacks aren't a problem with institutions like banks, where they keep ID on file and you can visit in person to prove your identity.
Oh yeah, I guess we can be all about getting 10 yubikeys and keep one in your wallet, another together with your keys, another in your home, bury another in your family's farm, another in a safe in the capital city of every country you visit...
That sounds like a good idea. Using device-specific keys sounds a lot like Keybase to my naïve ears. Are there any major differences in your design and theirs?
I'm not deeply familiar with Keybase's design, but speaking in broad strokes, our approach is similar.
That said, one area that Keybase compromised on that we haven't is offering a web interface. It's well known among security and encryption people that you can't do serious end-to-end encryption in a web app--an attacker with access to the server can just modify the html/js payload, rendering the encryption pointless.
Of course, web interfaces are convenient and users want them, so many products give in on this for the sake of UX even though it fully undermines the zero-trust model. EnvKey hasn't though. It has a desktop app and a CLI--there's no web interface.
This is especially bothersome with Bitwarden and the reason I ditched it shortly after trying it. You can use the desktop/mobile clients, but some functions require the use of the web UI and that's a dealbreaker for me...
I wish there was a browser plugin that verifies the congruence of the web UI with upstream source - or even better a client that supports all functions that the web UI supports.
They shouldn't be using PBKDF2 for new installations at all. It's been nearly a decade since the Password Hashing Competition, and you should just use the memory-hard Argon2.
Also, W3C should finally get it into the WebCrypto API, but it seems like whoever is responsible for it just lets that API rot. There are fast wasm implementations, though.
Memory-hard is not a panacea. I think the point, which is pretty well made by the blog post I linked to, is that the security of the vault is extremely sensitive to the passphrase anyways. Adding more PBKDF2 rounds is a nearly free way to make bruteforcing harder, whereas switching to Argon2 or scrypt or bcrypt or any other KDF requires more effort, not to mention yes, lacking WebCrypto support is a significant performance problem (WASM Argon2 implementations are at least multiple times slower than native IIRC.)
> Adding more PBKDF2 rounds is a nearly free way to make bruteforcing harder, whereas switching to Argon2 or scrypt or bcrypt or any other KDF requires more effort
The way I see it it's the other way around. The only complexity cost that lives here is that you need to distinguish between different derivation configurations. Whether that configuration tells you to use PBKDF2-200000 or Argon2-64M-4-1 doesn't matter, and you'll have to add a clause to the code either way. On the flipside, the memory-hardness allows you to increase the cost for the attacker a lot more than for the user.
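As a sketch of what "a clause either way" looks like in practice (the PBKDF2-200000 / Argon2-64M-4-1 strings are the made-up shorthand from above, not a real wire format):

```typescript
// Parse a stored KDF descriptor into parameters the client can act on.
type KdfConfig =
  | { kind: "pbkdf2"; iterations: number }
  | { kind: "argon2id"; memoryMiB: number; passes: number; lanes: number };

function parseKdfConfig(s: string): KdfConfig {
  const pbkdf2 = s.match(/^PBKDF2-(\d+)$/);
  if (pbkdf2) return { kind: "pbkdf2", iterations: +pbkdf2[1] };
  const argon2 = s.match(/^Argon2-(\d+)M-(\d+)-(\d+)$/);
  if (argon2)
    return { kind: "argon2id", memoryMiB: +argon2[1], passes: +argon2[2], lanes: +argon2[3] };
  throw new Error(`Unknown KDF configuration: ${s}`);
}
```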
> WASM Argon2 implementations are at least multiple times slower than native IIRC
I haven't looked this up, but my suspicion is that it's still better than a WebCrypto PBKDF2 configuration taking the same time for the user, measured in Wh for the attacker.
Just to be clear, the cost of changing the iterations is almost zero. Bitwarden already supports variable PBKDF2 rounds on all platforms, as well as changing the number of rounds. I'm sorry, but the cost of deploying Argon2 in production to a lot of platforms across a lot of devices is non-trivial by comparison. If you have enough different combinations of devices and environments, deploying an if statement can become a challenge. In this case, it's especially a problem considering the lack of WebCrypto support.
By all means, use Argon2 in new code, or any other more modern KDF. But PBKDF2 isn't broken, and replacing it warrants actually doing the ground work to see if it makes sense: is it fast enough on most devices? is the security improvement meaningful enough? etc.
The truth is, 1Password has the right idea here with their Secret Key system. Even very unwieldy long passwords are pretty low entropy compared to a proper cryptographic key, and KDFs cannot significantly improve this situation. If you want to do more than reinforce the speed bump, you're going to need to work outside the password. It has a usability trade-off of course, so maybe it's good that not everyone does it that way.
Does Argon2 have support for the various platforms that Bitwarden (or other password managers) need? You need support for browser (JavaScript most likely), iOS, Mac, Windows, and Android at minimum.
On top of that, are the implementations all equal or is one behind in terms of speed or support? You have to have everything else fall to the lowest common denominator. My guess is that that will be browser based implementations.
This is why so many password managers continue to use PBKDF2, because it has widespread support, particularly in browsers. Until Argon2 (or others) have support that matches it, many products won't use it because it brings with it all sorts of issues.
Argon2 is pretty memory hungry. Recommended defaults go up to the gigabyte range. That means that you have to be careful that you don't create a situation where the user originally encrypts their passwords on a device with lots of memory but then tries to decrypt their passwords on a system with not as much memory as was used for Argon2. Then the Argon2 implementation blows up with an out of memory error and the user has effectively lost access to their passwords.
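One defensive option is to cap the memory parameter at encryption time well below what the weakest target device can spare; a sketch using the node `argon2` package (the specific numbers are illustrative):

```typescript
import argon2 from "argon2";

// Decryption must replay the exact parameters chosen at encryption time,
// so cap memoryCost far below the weakest device's RAM rather than using
// GiB-range defaults. Values here are illustrative.
const MEMORY_CAP_KIB = 64 * 1024; // 64 MiB

async function hashMasterPassword(password: string): Promise<string> {
  return argon2.hash(password, {
    type: argon2.argon2id,
    memoryCost: MEMORY_CAP_KIB, // in KiB
    timeCost: 4,                // compensate with more passes, not more memory
    parallelism: 1,
  });
}
```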
Cache hardness might be more appropriate for situations where multiple devices are involved. Then the user just has to wait for a while if things go bad...
> Both Bitwarden and LastPass should improve this situation by making the iteration count automatically increase over time.
Bitwarden does let you increase the number of PBKDF2 iterations through a setting, but they also provide this warning:
> Warning: Setting your KDF iterations too high could result in poor performance when logging into (and unlocking) Bitwarden on devices with slower CPUs. We recommend that you increase the value in increments of 50,000 and then test all of your devices.
Translation: upping this could make your old device very slow.
I'm not sure I'd want to be on the receiving end of a whole pile of users complaining Bitwarden is becoming unusable on their current devices because they silently upped it, so I'd be leery of upping it for existing users too.
As for the server side key issue highlighted in the article: they have a point; it could be done better. But it's already pretty good, and if that's the only issue it's the least of my concerns. It's only a concern if a hacker got read access to the server, but read access means they've been compromised. And if they've been compromised those same people might have write access. If someone gets write access to Bitwarden's servers, then all bets are off. They can just modify the javascript to send themselves my unencrypted key.
Then there is the /dev/mem thing. The bottom line is if someone has access to your machine's RAM, then they can likely see the entire database unencrypted. Is your Windows desktop "corporate managed"? If so, I'm looking at you, sir. While Linux / Android / iOS are more protective of their users than Microsoft (who seems hell-bent on selling their soul to their corporate customers), they aren't a whole pile better. They may not sell their soul to high-paying corporate customers, but they will still do whatever their governments ask and you will be none the wiser.
The bottom line is all these proprietary solutions suffer from this "we won't let you see but we promise you can trust us" flaw. I refused to use LastPass because it was hopeless in that respect, and later their promises turned out to be hollow. With Bitwarden there is a lot less trust involved because we can inspect the code they promise we are running.
Nit: yes, it's true that making the iteration count high could actually make some devices unlock very slowly. BUT:
- This only occurs when performing key derivation, i.e. when you're unlocking. It does not matter once the key is in memory. Therefore, it's actually OK if it takes a few seconds. When unlocking with biometrics, the key derivation function is not used, so on mobile devices, this occurs even less often.
- PBKDF2 is very fast. A few hundred thousand iterations is not going to be a noticeable hitch even on old SoCs anymore. I actually suspect the real problem was JavaScript/webextensions/the web vault, since the difference between JS and native code is very noticeable with cryptographic code. However, it's probably a non-issue now that all browsers have decent WebCrypto implementations with PBKDF2 support, and improvements to JavaScript engines and WASM make even pure-JS fallbacks less of a problem.
I generally pick open source solutions over closed source ones, and Bitwarden does check that box, but to be fair, so do KeePassXC and compatible mobile companions. I like Bitwarden as an easy tool to recommend to friends, but for power users KeePassXC is certainly worth a look. They came out looking pretty good when security researchers began approaching ways to attack the clients themselves:
Bitwarden already takes a couple of moments more than I'd like, each time the vault is unlocked on my phone. It's always one of those moments where I'm doing something that requires a new login, and usually the step of having to divert attention to the password manager is an undesired distraction, aggravated by watching an idle screen for at least a couple of seconds while the vault decides to open, until at last I can search for my secret and continue what I was doing...
I do not use biometric because they fail a lot on me (fuzzy fingerprints due to stuff)
It's not even a nitpicky complaint I'm dropping in the internet, it does really bother me absolutely every time. Making it even slower, please no.
If you are using PIN entry: It's the same story. Key derivation is only occurring when you actually unlock the vault, in other cases the passphrase is not used and therefore key derivation is not needed. If PIN unlocks are taking too long, it has nothing to do with PBKDF2.
If you're entering the master passphrase every time: Obviously this has the benefit of not keeping the derived key cached anywhere, even if where it's cached is 'secure'. However, if you choose a slightly less secure key, even just a couple of characters shorter, you have to keep in mind that this dramatically lowers your security. You are better off avoiding this and using features that cache the derived key on mobile devices where it's feasible to do this.
Nobody is suggesting you can't use less secure settings to make up for having slow devices. You totally can. The defaults, however, should not be designed around your needs. They should be designed around security tradeoffs that give the best outcome to the broader public. People with special requirements should be the ones touching settings like this.
I'm not sure you're actually dealing with a problem where PBKDF2 iterations are eating up a significant amount of time, though. You may just be underestimating how many iterations of PBKDF2 can fit in a second on a modern mobile SoC. I would actually guess that the reason why opening the vault takes long is unrelated to the actual unlocking process.
edit: To make it more explicit, check out this benchmark from 2012 of a pure JavaScript PBKDF2 implementation.
If a Google Nexus One can get 38k iterations/s on a 2010 SoC using a pure JavaScript implementation of PBKDF2 (using old browsers even!), I can assure you that the time it takes to do a couple hundred thousand rounds in native code is absolutely nothing at all on a phone from a few years later.
After reading you and checking the links, I am now sure the unlocking latency is not due to the number of iterations.
First and foremost, because I'm using the PIN entry. And it already takes some very noticeable time to open. I do not, however, notice too much difference between using PIN or the passphrase. So the delay must come from other unrelated limitations.
I still think it's maddening that LastPass's website says that the vault is encrypted, yet in reality that wasn't, and may still not be, the case: aspects of the vault, as we now know, aren't encrypted.
Unless I'm missing something, to me that is one of the biggest failures. It is even laid out in their technical and organizational measures document.
Agreed, in fact LastPass should be heavily fined for this and be forced to go out of business if things don't provably change within a reasonable timeframe.
They tout on their website that they get third party assurance testing done and yet none of it matters if we can’t see the actual reports.
I just can’t believe more people aren’t enraged about it. Or that people aren’t seeking to sue, purely based on that. Zero-trust architecture is fine if you’re breached, that’s the whole point, but saying that the information within the vault is encrypted when parts of it aren’t is downright malicious.
Just got word that one of my team members ran the Python script to extract the seeds out of LastPass. Excited to leave LP and move to Bitwarden. Like you said, a lot of small stuff at LP is the issue: an unresponsive plugin sometimes, we got an update and new UI last week, and now the search in the Chrome plugin isn't working on my end. I really don't understand how you can write such shitty software for something so simple (the UI part).
> I think with LastPass the problem was more the aggregate of many things, a sort-of death by a million cuts.
That's true.
> Because truthfully, the PBKDF2 iterations count issue was relatively unimportant.
IIRC, some people's LastPass vaults were set to use a very old default of a single PBKDF2 iteration, which I understand is basically nothing, nowadays.
As of three weeks ago, mine was set to 500, so I can confirm that low, at least.
Plenty of master password changes in the past several years, so plenty of opportunities for it to have been automatically set higher. The only reason I hadn't set it higher myself is I didn't know it was a setting at all.
Oof, my Bitwarden account was created a while ago and was set to only 5,000 iterations. You can see and change the number of iterations here: https://vault.bitwarden.com/#/settings/security/security-key... (or if you don't trust links for something like your password manager: log into your web vault, click on the top-right dropdown menu, then Account settings > Security > Keys).
I've updated it to 600,000 iterations and so far don't see any noticeable impact on performance, both on desktop (using the Firefox extension) and on mobile (iOS).
I am not a cryptographer but to my understanding, the number of PBKDF iterations is really only of concern for weak (low-entropy) passwords. If you know that your password has high entropy (>128 bit), for example because you generated it randomly uniformly from at least 2^128 possible outcomes[1], you are safe even if you used only 1 iteration. PBKDF is all about password strengthening, so if you are making changes for yourself the most effective change is just to use a secure password and stop worrying about key derivation functions.
[1] 28 characters in a single case, 23 characters if both upper and lower case are used, 22 characters if you include numbers, 12 words if you use a word list of 2000 words and sample uniformly
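Those footnote figures are easy to re-derive; a quick sketch:

```typescript
// Minimum length for a target entropy, given a uniformly sampled alphabet.
const lengthForBits = (alphabetSize: number, bits = 128): number =>
  Math.ceil(bits / Math.log2(alphabetSize));

lengthForBits(26);   // 28 — single case letters
lengthForBits(52);   // 23 — upper and lower case
lengthForBits(62);   // 22 — letters plus digits
lengthForBits(2000); // 12 — words from a 2000-word list
```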
> If you know that your password has high entropy (>128 bit)
I don't think that is practical for most users - 12 words (or 10 taken from a 10k list) - or 22 random alphanumeric characters - is hard to remember - and long enough that they are difficult to type correctly. 70 bits might be a more sensible goal - but still long. (6/7 words, 12 characters from a set of 62).
This is the "trust anchor", so something the user needs to remember and type in - from what I've seen - remembering/representing and inputting 128 random bits is tricky.
And with modest stretching and a salt, probably overkill anyway.
I think your point is valid and important, especially considering the average user. However in my experience it worked surprisingly well with a long word based master password. Since I only needed to remember 1 password that I then used daily it was not that difficult. And typing it was quick since it was all lowercase which most keyboards are optimized for. However the issue came when I started using my password vault on my phone and tablet. I was way too slow at typing on them. I now have a 22 character password which takes the same time for me to type on a keyboard, maybe a bit slower, but is faster on my phone though still annoyingly slow.
As for 70 bits password, it might be enough, but you need a lot of iterations (2^58) if you want to completely make up for the lost security margin. Which will also be unusably slow in practice.
I had bumped mine up once before (to 200K) and I just bumped it up again to 600K. But my wife (registered several years ago, like me) was still at 5K, I just bumped hers up too. Wish Bitwarden would force this for anyone still on an old default - particularly given the LastPass compromise we just saw.
As of now, your link is fine (though the comment is probably still editable), and I believe you have the best of intentions, but note: it's not a great idea to click a link to something like Bitwarden, since a phishing domain could be used.
Why are you upping it that much? I guess "too much is not a bad thing" in this case, and Bitwarden itself says: "We recommend a value of 100,000 or more.".
When I see that I read: "With our knowledge of security and encryption, which by the way is much greater than yours, we consider that 100,000 is a perfectly safe number and a good middle point so go ahead and use it".
Am I wrong to think like that? My master password is a battery-horse-staple thing, but not with 12 words as some other commenter says; that's absurdly long and would be too difficult for me to remember. I usually strive for around 18-20 characters, which is already on the verge of me forgetting it. I use incorrect or derived words of my own (so not really existing in dictionaries).
According to the OP article, the server side iterations are ineffective for adding security in bitwarden, so you need 600,000 on the client. This would not be the case if the design was correct.
(I'm not a security expert, so I'm going by the article)
> When you change the iteration count, you'll be logged out of all clients. Though the risk involved in rotating your encryption key does not exist when changing KDF iteration count, we still recommend exporting your vault beforehand.
600,000 was a bit too much for my 3-year-old low-end Android phone. 400k works OK. When I sorted everything into folders I could bump it up to 600k, probably due to time saved rendering.
A way to think about the difference you just made is, you increased the difficulty of cracking your password by 600000 / 5000 = 120. Making your attacker guess 7 extra bits (well, slightly under) would have the same effect, so that translates to a slightly under 7 bits of entropy. Appending two randomly chosen digits to your password would have about the same effect.
Those first 5000 iterations added over 12 bits of entropy.
The article is complaining about not adding an extra 100,000 iterations which would double the difficulty, so he's effectively berating them over 1 bit of entropy.
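The arithmetic behind those figures, for anyone who wants to plug in their own numbers (sketch):

```typescript
// Iterations translate to attacker work as log2(iterations) "bits".
const bitsFromIterations = (n: number): number => Math.log2(n);

bitsFromIterations(5_000);    // ≈ 12.3 bits from the old default
Math.log2(600_000 / 5_000);   // ≈ 6.9 extra bits from the 120x increase
Math.log2(200_000 / 100_000); // = 1 bit for doubling the count
```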
When you do this reset and sign back into your devices/browser plugins, you will need to go into the Bitwarden Settings on each one and set a few options again - notably timeout (defaults back to Browser Restart, change back to a time you're comfortable with), Biometric Unlock and PIN. All of those are local settings to each Bitwarden client/endpoint, so you need to do them on each device.
Your master password is put in a box that’s very hard to break into. But because someone might be really determined to get in, we put that box in another box that’s just as hard to break into. And because someone might be really really determined, we keep putting those boxes in new boxes so it’s really really difficult to get to the password. But sometimes we also need to get to the password, so we use enough boxes that it’s difficult for them, but not so many that it’s annoying for us.
That might be a little too much “like I’m 5”, but that’s the general idea. Hashing is easy one way but it still requires compute cycles to do each iteration. We don’t want to make it excessively expensive for us.
To kinda just expand on that because I think the analogy's most of the way there:
You're trying to keep something safe and all you've got is a weirdly infinite collection of cardboard boxes. So you have this brilliant idea... you get a dozen boxes and put your treasure in one of them.
That's great, it's certainly safer than leaving it laying out. It'll take someone at least like... a minute to go check all dozen boxes and find your treasure. But it still only takes you mere seconds because you know which box to open.
Except you'd really rather your treasure stay safe for longer than a minute. So you take all your boxes and put them inside other boxes. And put those boxes inside other boxes like nesting dolls. You nest each one a dozen times.
So now if someone wants to come and find your treasure, they need to open all 144 boxes to find it! But you still only need to open 12 because you know which stack to look in.
The iteration count is basically just how deeply you plan to nest your boxes.
In more concrete terms, increasing the iteration count is just a knob to control how much cpu/memory resources it takes to compute a hash. You want to turn it up enough to make brute forcing prohibitively expensive (make it as high as you can), but not so much that calculating the single correct hash to verify is too expensive (don't make it too high or else it will take too long to check your password at sign-in). People are saying this should go up over time because computing resources generally become more readily available (faster/cheaper), making both brute forcing easier as well as allowing you to perform more iterations to compute the correct hash on commodity hardware without it taking unusably long.
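For the curious, this knob is literally a single parameter in the browser's WebCrypto API; a minimal sketch (600,000 iterations and SHA-256 are just example choices):

```typescript
// Derive a 256-bit key from a password; `iterations` is the cost knob.
async function deriveKeyBits(
  password: string,
  salt: Uint8Array,
  iterations: number
): Promise<ArrayBuffer> {
  const material = await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(password),
    "PBKDF2",
    false,
    ["deriveBits"]
  );
  return crypto.subtle.deriveBits(
    { name: "PBKDF2", salt, hash: "SHA-256", iterations },
    material,
    256
  );
}

// Usage: await deriveKeyBits("correct horse battery staple", salt, 600_000);
```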
Feynman would be proud. The ability to take a complex subject and break it down in such a way that nothing of value is lost but a child can understand it is rare.
Another issue is the hash used: SHA-256 is a hash which can be calculated extremely quickly on dedicated hardware (which has been incentivised heavily by bitcoin mining). So the gap between the speed at which an attacker can run the hashes vs the intended user is larger than with other hashes, like argon2, which is specifically designed to be resistant to acceleration by dedicated hardware.
Is it actually 144? Wouldn't that be the worst case scenario for the attacker? If I understand the whole thing correctly, the attacker can be lucky and find the right stack of 12 in the first try.
So basically, I would set it at around half of the 144: 72
or did I get something wrong there?
No, you're thinking of something different, like making the password more complex.
While they made it confusing by using "12" for two different things, their analogy is completely correct.
Instead of each password/location having one box, it has a box within a box within a box[...]. You have to unwrap the entire stack to check if the treasure is there.
If you use an iteration count of 5000, then when you log in you unfortunately have to do 5000 hashes. But an attacker has to do 5000 hashes per password guess.
So a deep iteration count can't substitute for a good password, but it can make up for an extra few characters. And it's basically free to increase iterations until the wait becomes visible.
PBKDF2 directly might be some sort of multiverse locked box? Store a "safe key" in a box, in a box, in a box, in a box, in a box... in an iteration of boxes.
Every key in the universe is able to open the first box, and every other box. But each key opens up a different multiverse of boxes. The problem for the attacker is they have to open every box, in a box, in a box... in succession to get to a "safe key" stored in that multiverse, which looks like a safe key and quacks like a safe key, but might not be the right safe key.
Maybe not quite right, as the "safe key" is really just another box in the end, but that doesn't make much sense... unlike a box of multiverse of boxes =)
Iterations refer to the number of times the password goes through the hash function. The higher the number, the longer it takes, so you want it low enough that it doesn't impact your day-to-day use but high enough that it will hinder an attacker in case the hashed password is leaked.
Maybe overly pedantic to expand on this, but since we’re in an ELI5 context: the attack vector is brute forcing hash collisions. Making it computationally slow for attackers is a hindrance because the relative value of a collision diminishes over time (depending on the value of their target).
It's not really hash collisions: SHA256 is still too secure for that, you're unlikely to find a value which isn't the password used to generate it. It's just brute forcing the password
If your (binary, derived 128 bit) encryption key is the key to your treasure chest - then the "derivation" is a map from your password ("hunter2") to where the key (011101...) is buried.
With plain derivation, the map takes you directly from the password to the key. With an extra iteration, the map just points to a place on the map where a trail starts.
If you want to guess the key, by guessing the password - now you have to first walk to where the password points on the map, then follow the path (the iterations) - then dig and see if the key is there. Then you can try the key in the chest (the encrypted data).
The iterations (length of path) adds a certain, predictable, amount of work in order to find the key. It does not make it harder to guess the password, just harder to get the key from the password - and so it makes it harder to check if you've guessed the right password (try the key in the lock) - because there's extra work to be done.
Now you could compare memory-hard paths and compute-hard paths by adding elevation and distance to the analogy (different types of hard work).
Tangentially related: why would a password manager provide a configurable iteration count? This is a number whose purpose is fairly hard to understand for many people, and yet it's an important cornerstone for password security, especially for those who do not grasp the concept of an iteration count.
This should absolutely be application managed and gradually increased over time.
Also: while I understand that FIPS is the reason why we are stuck with PBKDF2 in the case of the more enterprisy password managers, wouldn’t it still be FIPS compliant to do some scrypt or argon rounds on top as a means of not constantly having to update the PBKDF2 iteration count (assuming that scrypt and argon are more resilient to hardware brute-forcing)?
> This should absolutely be application managed and gradually increased over time.
One issue I could see with that is that because it’s the encryption key it’s going to lock out all your “live” devices, so an explicit step is an easy opportunity to warn them.
The second issue is that the transcryption would have to be done on login, which is a pretty shit UX as the user logs in then immediately gets locked out for however long it takes to convert the store (then again for most people I’d assume the payload is not enormous).
> assuming that scrypt and argon are more resilient to hardware brute-forcing
They are but needing to update the work factor as hardware progresses remains. In fact scrypt and argon have more work factor knobs than pbkdf2, which only has the iterations count.
> One issue I could see with that is that because it’s the encryption key it’s going to lock out all your “live” devices, so an explicit step is an easy opportunity to warn them.
how so? The iteration count must be part of the non-encrypted parts of the vault data. If a client is offline, it will use its locally stored vault with the old (lower) iteration count. If it's online, it will have the updated vault with the higher iteration count.
> The second issue is that the transcryption would have to be done on login, which is a pretty shit UX as the user logs in then immediately gets locked out for however long it takes to convert the store (then again for most people I’d assume the payload is not enormous).
You could do this asynchronously in the background: decrypt the vault, store it in memory (which all password managers do for some amount of time in order to provide any UI), re-encrypt, store to disk, send the blob (which will contain an unencrypted iteration count) to the server.
But this is the complicated case where the vault is re-keyed. What would totally be sufficient is to re-encrypt the same vault key using a new hash derived from the same password, only with more rounds, which means that the bulk of the vault blob won't change - only the password-derived key and the iteration count.
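A sketch of that cheaper path, assuming the vault key is wrapped by a password-derived key (AES-KW and the names here are illustrative, not Bitwarden's actual scheme):

```typescript
// Re-wrap the existing vault key under a key derived with more iterations.
// The vault blob itself never changes; only the wrapped key and the stored
// iteration count do.
async function rewrapVaultKey(
  vaultKey: CryptoKey, // must be extractable for "raw" wrapping
  password: string,
  salt: Uint8Array,
  newIterations: number
): Promise<ArrayBuffer> {
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(password), "PBKDF2", false, ["deriveKey"]
  );
  const wrappingKey = await crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, hash: "SHA-256", iterations: newIterations },
    material,
    { name: "AES-KW", length: 256 },
    false,
    ["wrapKey"]
  );
  return crypto.subtle.wrapKey("raw", vaultKey, wrappingKey, "AES-KW");
}
```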
If any of this happens simultaneously on multiple machines, treat it the same way as you already treat editing conflicts (I'm not offering guidance there - this is a hard problem that each cloud provider is already solving one way or another).
> They are but needing to update the work factor as hardware progresses remains. In fact scrypt and argon have more work factor knobs than pbkdf2, which only has the iterations count.
Given the current state of the art and given these two algorithms, I think it would need to happen significantly less often than with PBKDF2, so if there's something that would need to cause the UI to re-lock immediately after unlock as you think (and I'm not sure about) then having argon or scrypt in the loop means you have more time between causes of shitty UX.
> how so? The iteration count must be part of the non-encrypted parts of the vault data. If a client is offline, it will use its locally stored vault with the old (lower) iteration count. If it's online, it will have the updated vault with the higher iteration count.
The iteration count affects the encryption key, and Bitwarden has neither the old encryption key nor the actual password to derive it.
So the vault has to be updated at the first device connection after updating the iterations count, and any other device will have to derive the new encryption key and log back in.
So it would log other devices out, but not the device you're currently looking at. I think that's still acceptable behavior compared to having people stuck with iteration counts of 500 or even 1, as we had seen in LastPass.
Benchmark it once on each device. Then have a user-friendly slider.
"Do you want your security to be:"
a) "It only secures pr0n from my aunt" (1s for fetching a password)
b) "Not great, not terrible": (5s for fetching a password)
c) "Pretty Good Protectivity": (10s for fetching a password)
d) "The CIA haunts me and my name is Edward: (24 minutes for fetching a password)
And even then, there should be no way to offer a completely insecure iteration count just because one of a user's devices is slow, because the attacker's devices won't be.
Even on slow devices, password managers can employ techniques to help, like only using the full count of rounds for a cold start, but then re-encrypting the key for the vault with fewer rounds and only keeping that copy locally.
That way a user of a very slow device only needs to wait for, say, 10s once on first unlock.
While this is a downgrade in security, it's still better because now the key with the small amount of iterations is confined to the one device, not available on the server where an attacker can get bulk access.
Of course, devices where 1M PBKDF2 iterations take so long that it's noticeable are probably also old enough to be full of unpatched (due to EOL) security holes which makes such devices the weakest link anyways, but this would still be a better situation because this way not all users are punished because of one user's slow device.
keepassxc does this the right way — you're not picking the "iteration count" (which is hard enough to understand even for someone relatively technically inclined), but the time it takes to open the database. The default is 1 second, with the minimum of 100 ms.
"Higher values offer more protection, but opening the database will take longer".
I highly recommend keepassxc to everyone instead of these password-solutions-of-the-day that are coming and going so fast it's hard to remember all of them.
Recommend away, but ease of use matters. I can use, and am comfortable with, keepassxc, but there is no way in he* my wife, daughter or parents would be. It was hard enough getting them used to using Bitwarden.
One issue with "gradually increasing over time" is that, without a master password change, the old hashes are still "out there" and potentially available for inspection.
I don't think "on top" gets explicit guidance, but it's also almost never needed. In this case for example scrypt and yescrypt are fine to use directly. NIST has had a strong leaning toward memory-hard functions for coming up on six years now. See §5.1.1.2 in https://pages.nist.gov/800-63-3/sp800-63b.html#sec5.
I'm not familiar with FIPS. Is this NIST document part of requirements outlined in the FIPS publication? Which according to 20 seconds on Google, appear to be numbered 140, 180, 186, 197, 198, 199, 200, 201, and 202.
Password hashing is controlled by NIST SP 800-63B, not FIPS, but FIPS supplies the approved primitives. When NIST says:
> The key derivation function SHALL use an approved one-way function such as Keyed Hash Message Authentication Code (HMAC) [FIPS 198-1], any approved hash function in SP 800-107, Secure Hash Algorithm 3 (SHA-3) [FIPS 202], CMAC [SP 800-38B] or Keccak Message Authentication Code (KMAC), Customizable SHAKE (cSHAKE), or ParallelHash [SP 800-185].
Those options are authorized by FIPS. The main consequence of this is that there are FIPS-validated implementations available, which are what you want if you're selling to the government.
You are quite right. Open source software is always riddled with complicated and unintuitive UX like this. It's created by developers for developers. It's only when product owners, designers, and commercial managers get involved that the UX begins making sense.
"But Apple is trying to control me!" screams every dev who doesn't understand this, not realizing they are signaling their inability to empathize with normal end users and misattributing why Apple does what it does.
Apple gets it, devs don't. In this space, 1Password is least worst, yet is still more confusing than the average user quite understands.
Apple isn't a most valuable company because they want to control you. They're a most valuable company because their engineers blend software and hardware into experiences for end users not for engineers.
So much more software would be so much more successful if usability and adoption were as prioritized as utility and configurability.
I have 524 passwords in my vault. That's a list large enough to require a big sheet of paper that's very inconvenient to carry around. I'm also changing passwords often enough such that keeping track of which password I changed on what machine in order to just carry deltas with me and updating my hand-written lists manually is way too inconvenient.
Syncing password managers solve all these issues and, if they do encryption right, there is nothing that could possibly happen even if they are hacked and an attacker gains access to my encrypted data.
The doing it right part is why I was asking my questions with regards to PBKDF2.
I once leaned heavily upon Google Chrome as my password manager, but then I discovered that you could view the passwords in Chrome for Windows by knowing my Windows login password, instead of my Google password.
This feels off topic a little, but in all the discussion of password managers lately, I seldom hear people talk about the web browser being a good/bad idea. It almost feels like they are slipping through the cracks of the conversation.
For the record, I no longer use that platform for important passwords or secrets, ("driver carries no cash")
Note that if you have autofill enabled on website login pages, then password-protecting the browser's password store doesn't do anything. Anyone with your Windows password can just go to any website, autofill the password, and then copy the password out of the page itself. Try it on HN, go to the login page and run `document.getElementsByName("pw")[0].value`.
To say nothing of the fact that, if they used some method to divine your Windows password, then they've probably already done the same for your password manager's password. And even if they only had your Windows password somehow, they could just install a keylogger to get your master password anyway. And even if you 2FA your password manager, the keylogger can still intercept any other password and take over any non-2FA'd accounts.
I noticed this as well; I used to do the same and so did everyone I know. I stopped using any Chrome-based solution after seeing that if my Chrome was synced to my phone, and my phone was unlocked, someone could open Chrome on my phone and, with only my phone's PIN to unlock the vault, view all my passwords. This seemed super weak. So I switched to a different method of storing and generating passwords. But as far as I know, most people I know just use Chrome's password manager. And you know... I haven't ever heard of a Google breach where vault databases have been breached...
I've bought a few online businesses; and when doing so - we of course transfer various online accounts.
Once, I was given the primary Google account and as I was going through and updating security items on that account I discovered the previous owner had been using Google's password feature and I could login to a whole slew of his personal accounts (Not just the "Login with Google" ones). Of course, I just deleted all those - but the risk of centralization was certainly highlighted in that moment.
I still haven't seen a clear explanation of how the # of iterations scales in relation to password length.
If it is true that a few extra characters is as good as having sky-high iterations, the guidance should be on 'forcing' users to choose long-enough passwords, not on this nitpicking over the 'right' # of iterations.
If your first two statements are correct I can't see how the third can't be.
If we choose our password from only ~24 chars then you can get the same effect as 100K iterations from just 4 more characters, or ~1 more dictionary word.
Imagine you're an adversary, and you've just stolen a list of hashed passwords.
The values you see are:
ae1fb1a0b0ee --> ???
f10abddc10a0 --> ???
What you're going to do is start bruteforcing letters until you find the matching passwords. You know that the passwords are hashed with 100k iterations, but you don't know how long the password is.
First you start with a, b, c, d, ... z. That's 26 combinations.
Then you do aa, ab, ac, ad, ... zz. That's another 26^2 (676) combinations.
The next length of aaa, aab, aac, ... zzz is >17k combinations and it keeps increasing exponentially.
Increasing the number of iterations applies a linear multiplier to the time. If it takes 26ms to bruteforce all the one character hashes with 1000 iterations, it will take 2600ms with 100,000 iterations.
But increasing the password length adds a multiplier of 26 with each new character (and that's only assuming single case letters). Adding 4 extra letters is actually an improvement of over 450,000x (26^4).
(Assuming you are using the printable ASCII character set, that's actually 95^4 = 81,450,625x)
Increasing the iterations is basically free (up to the point where it requires human-noticable amounts of time to calculate), so there's no need to require 4 more characters.
Also, teaching users enough of this to do it well is basically impossible. You have to have a threat model that includes your users being fairly shit at passwords. Especially when your product is solving the problem of users being shit at passwords.
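The asymmetry the thread is describing fits in three lines (sketch):

```typescript
// Iterations scale attacker cost linearly; password length scales it exponentially.
const iterationGain = 100_000 / 1_000; // 100x from 100x the iterations
const lowercaseGain = 26 ** 4;         // 456,976x from 4 extra lowercase letters
const asciiGain = 95 ** 4;             // 81,450,625x from 4 extra printable ASCII chars
```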
I don't have great values for the constants, and it's a bit of a question of who your expected adversary is, but assuming a randomly generated alphanumeric password, the equation should be something like:
Expected time to break = 1/2 * 36^L * iterations / (hashes per second your adversary can do)
So you'd want to pick how long you want this to remain secure (~50 years is probably beyond good enough). For hashes, a _very_ conservative choice might be something like the hashes per second done in all of Bitcoin, worldwide. One arbitrary result in Google suggests that that is 286,767,038,956,306,900,000 H/s.
If I did the math right, that works out to 17-18 characters long for 1000 iterations. The number of hashes per second there is obviously way too big, but I'm uncomfortable picking a lower one without doing more research than I'm interested in doing. There's probably a recommendation from some security experts out there somewhere, but I'd imagine it's going to be a bit of a struggle to get one of them to tell you anything except "use more than 1000 iterations".
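A sketch of that back-of-envelope calculation, using the Bitcoin-scale hash rate quoted above (all constants are the arbitrary ones from this comment):

```typescript
const HASHES_PER_SEC = 2.8677e20;               // the Bitcoin-wide figure above
const TARGET_SECONDS = 50 * 365.25 * 24 * 3600; // ~50 years
const ITERATIONS = 1_000;

// Smallest alphanumeric (36-symbol) length whose expected crack time
// exceeds the target; returns 18 with these constants.
function minPasswordLength(): number {
  for (let len = 1; ; len++) {
    const expectedSeconds = (0.5 * 36 ** len * ITERATIONS) / HASHES_PER_SEC;
    if (expectedSeconds > TARGET_SECONDS) return len;
  }
}
```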
But what if you need to enter it somewhere that doesn't support it? A physical device, a VM that doesn't allow copy and paste, a mobile app without support for copy/paste or password managers...
All those scenarios happen for me every couple of weeks and it's what's keeping me from using really long passwords with high complexity.
Popular movie quotes or lines from books with minor alterations are bad choices. They are somewhere out there and not as safe as one might think. Completely random choice of words is good, but it is not feasible to remember random passphrases for all of your accounts.
Other common methods include appending a particular character to each word or alternating words... creating a pattern of sorts, but this again makes it difficult to remember, which was the reason why we preferred passphrases instead of passwords in the first place.
> Not all books in all languages ever published are "somewhere out there".
I mean, they mostly are or can be. What's the point of relying on "nobody happened to catalog the book I copied my passphrase from"? Are you going to check every week that nobody uploaded it to an archive site?
For smaller languages the steps would be:
- Somebody would have to digitize an old book without mistakes.
- Somebody would have to publish it online.
- Somebody would have to scrape and archive that.
- Somebody would have to transliterate it to Latin script.
- That transliteration would also have be the same transliteration I'm using.
It's unlikely it will be done for a lot of languages.
> There's easier schemes that don't rely on that.
Remembering random words is hard. This is how we got into this in the first place.
> Remembering random words is hard. This is how we got into this in the first place.
It's really not. You just make a story out of it. My memory is quite crap, I'm still able to remember the ~3 passphrases I actually need, and I'm able to rotate them as required.
There's some things that are obviously bad: popular movie quotes, slightly less bad (but still bad): any quote from anything ever produced in any medium.
Some things that are obviously good (you can calculate the entropy easily): diceware style schemes, generated with dice or a secure random generator.
Anything in the middle it's quite hard to say. Humans are really bad at being random, so words you pick out of your head I'd be fairly suspicious of. But it's hard to prove it's a bad idea.
Considering length is key in computing “strength”, I’m curious how using a long dialog from a movie might make it bad? Presuming you account for the full 95-character set (numbers, upper/lower letters, special characters) and padding¹, then how would an attacker know that a failed phrase failed because it was the wrong phrase or because they forgot to add some padding that is still unknown.
From a dictionary/rainbow table perspective I'm curious how they would know to include the following in their lookup tables before going full number-crunching mode:
TO be or NOT two be - that is the question!!!!!!!!!!7872665398
Bitwarden suggests this is strong, as does GRC Haystacks¹. Thoughts?
1) the choice of quote. Say that's in the top ten quotes ever, so something like 3 or so bits of entropy.
2) the modifications and additions to the quote. Really depends what the scheme is, but few bits for which words are capitalized (~4), few bits for where the hyphen is (~3), few bits for how many bangs (~4), and a bunch of bits for which number goes on the end, (~30ish). Some bits to account for the scheme itself and its choices too, but I don't know how to put a number on that.
Do you see how little is actually coming from the quote? Your passphrase might as well just be "95!!!!78726653980" and if anything that's _easier_ to remember.
Compare against something like a diceware passphrase. _All_ of the entropy comes from the passphrase part, the part that's easy to remember and trivial to calculate how secure it is.
So a quote is bad because you can _make_ it secure, but you making it secure is just throwing crap at it until it's no longer functionally a quote in any real way. It's secure the same way a blank password is.
What I don't get with this argument: why does the quote only give 3 bits of entropy?
Are the cracking algorithms so good that they know to try "or not to be" after they get to "to be"? Also, as far as I remember you can't get a "you are partially there" result. Either you get the password or not.
So they wouldn't know that "to be" are the first five chars.
Even for bad password parts which could be traced back to me.
Let’s say I use my girlfriend’s name, surname and birthdate.
If someone targets me directly, definitely a bad idea. For a random bruteforcer or even a dictionary attack with rockyou.txt, as an example, it wouldn't change a thing.
> what I don't get with this argument, why does the quote only give 3 bits of entropy?
Good question. 3 bits is based on the part I mentioned where "to be or not to be" is one of the top 10 quotes. log2(10) is about 3. The reasoning for this is that this quote is going to be in a "dictionary" your attacker has. 10 is probably a bit unfair on my part, because an attacker is probably really going to be guessing from a larger pool of quotes, but it ends up not mattering _too_ much. If their pool of quotes is 1000 long, that's more like 10 bits of entropy (still far, far too little on its own).
> Are the cracking algorithms so good that they know to try "or not to be" after they get to "to be". Also, as far as I remember you can't get a "you are partially there" result. Either you get the password or not. So they wouldn't know that "to be" are the first five chars.
Yeah, it's not based on anything like this. Assuming whoever implemented the password input (bitwarden in this case) isn't _maliciously_ incompetent, an attacker would get no information from a partially-correct password guess.
> Even for badly pw parts which could traced back to me. Let’s say I use my girlfriends name, surname and birthdate. If someone targets me directly, definitely a bad idea. For a random bruteforcer or even a dictionary attack with rockyou.txt, as an example, it wouldn't change a thing.
This is not completely wrong, but somewhat incomplete. Names and birthdates/years (or dates in general) are both really common parts of passwords. So an attacker will have a dictionary of common names (or ~all names, there's not that many of us), and every date that's possible to be important to someone.
So that already reduces the entropy a lot. And yeah it's bad enough if someone targets you directly that it's just a horrible idea.
The other problem with schemes like this: if you're using a password of that form, you're probably reusing it multiple places. This allows any site you have an account at to trivially access any _other_ place you have an account at. Really, really bad news.
Bitwarden also allows you to generate a random passphrase, which is pretty nice for those situations where you want to be able to manually type in the password.
Number of iterations being discussed is how many times the password is hashed. It is a setting the system chooses and is independent of the password length the user chooses.
If you are asking if the length of the password by itself be sufficient to create a secure password, then the answer is mostly no. You need many iterations of the hashing process otherwise brute force attacks become trivial given today's hardware.
Unless you have a high-entropy long password. 10 Diceware words (words chosen uniformly at random from a list of 7776 words) is over 128 bits of entropy, even a very fast hash would be enough for such a passphrase. Of course at that point you've essentially memorized a cryptographic key, not a traditional low-security password. Good for the master password of a password database, not so usable anywhere else.
I'm not super practiced in hashing theory. If the 10 words were usually longer than the hash function output (say, starting at an average of 7 words), would adding more characters (words) still increase the entropy or would the entropy get truncated?
It will theoretically increase the entropy until the total entropy exceeds the length of the hash (not the total length of the input).
What we really care about is how hard it is to determine the passphrase given the hash. With a 128 bit hash, an attacker requires an average of 2^127 guesses if they are guessing completely randomly. So as long as your passphrase is well before the first 2^128 guesses an attacker is likely to make, making it harder to guess is theoretically useful.
For example, "AAAAAAAAAAAAAAAAAAAAAAAAAAA" has more than 128 bits, but it's also going to be (relatively) easy to guess. As shorthand we say "it has less than 128 bits of entropy"
In the example that GP gave, you could advertise "I used 10 diceware words for my passphrase" and it would still be as hard for the attacker as attacking a 128 bit hash. 7 diceware words would be much longer than 128 bits, but if you advertised "I used 7 diceware words" it would give the attacker a significant advantage, since there are much less than 2^128 possibilities.
The way we solve server-side iterations with Standard Notes (which uses Argon2 and not PBKDF2) is to tie the derivation parameters (iterations, bytes, etc) to a hard-coded protocol version number. Accounts which register today for example have a protocol version of 004, which corresponds to specific, immutable derivation parameters.
For a given user, the client then receives from the server not key derivation parameters, but the version of the account. The client then maps that version to the precompiled derivation parameters.
Of course a server can then misreport a user's account version to something lower than it actually is. There are two solutions we implement here:
1. Deprecate older versions as quickly as possible after new protocol version rollouts. Older versions begin to get rejected by clients and clients will not allow sign in to proceed.
2. Allow an optional sign-in flag users can check called "Strict sign in" that forces the client to reject any server provided version that is not specifically the latest version. This means that if a user checks this option and the server reports a version != 004, the sign in will be rejected and the client will not perform any sort of handshake with the server.
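In sketch form, the client-side shape of that scheme looks something like this (not Standard Notes' actual code; the parameter values are illustrative):

```typescript
// Hard-coded, immutable KDF parameters per protocol version.
const PROTOCOL_PARAMS = {
  "003": { kdf: "pbkdf2", iterations: 110_000 },
  "004": { kdf: "argon2id", memoryKiB: 65_536, passes: 5 },
} as const;

const LATEST = "004";
const DEPRECATED = new Set(["001", "002"]);

function resolveKdfParams(serverReportedVersion: string, strictSignIn: boolean) {
  if (DEPRECATED.has(serverReportedVersion)) {
    throw new Error("Account protocol version no longer accepted by this client.");
  }
  if (strictSignIn && serverReportedVersion !== LATEST) {
    throw new Error("Strict sign-in: server reported a non-latest protocol version.");
  }
  const params =
    PROTOCOL_PARAMS[serverReportedVersion as keyof typeof PROTOCOL_PARAMS];
  if (!params) throw new Error(`Unknown protocol version: ${serverReportedVersion}`);
  return params; // client maps version -> precompiled parameters, never trusts raw params
}
```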
> Even if you configure your account with 1,000,000 iterations, a compromised Bitwarden server can always tell the client to apply merely 5,000 PBKDF2 iterations to the master password before sending it to the server. The client has to rely on the server to tell it the correct value, and as long as low settings like 5,000 iterations are supported this issue will remain.
This seems like a serious flaw that completely undermines setting a custom value, no? If an attacker gets temporary control of a bitwarden server then they can get your password in a more easily crackable form no matter what you set.
The only way around this would be to require a human to set the # of iterations independently on all client devices. Because if you change the # of iterations using your laptop, the server wouldn't be able to tell your phone that the # of iterations has changed. Instead you just wouldn't be able to log in until you ALSO manually changed that setting on your phone to match the new server configuration that you set with your laptop.
That would be pretty poor UX.
So yes, it's a flaw. But I don't see an acceptable mitigation for it, as all clients will have to be compatible with passwords for accounts that were last logged into 10 years ago when [super_low_iteration_number] was still considered an acceptable value. And the server will have to be able to tell the client "This account was last logged into 10 years ago. We're going to increase the number of iterations right after this, but just one last time, send me the password after hashing it 5,000 times."
No, because today you pick a number of iterations that takes some "long but reasonable" amount of time. Maybe that's 2 seconds, maybe that's 2 minutes...but it's the longest your customers will put up with waiting at login.
Then 10 years from now, CPUs will have sped up a LOT, so that will only take 0.2 seconds or whatever. With normal users, you could update the # of iterations every year, but that requires a login to occur so that the "new" password hash can be sent by the client. With a user who only logs in every 10 years, you have to support the # of iterations that existed when the user last logged in.
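A hedged sketch of that "upgrade iterations at login" flow; the in-memory server below is a stand-in invented for illustration (a real server would hash the client's hash again rather than compare it directly):

```python
import hashlib

CURRENT_TARGET = 600_000

class FakeServer:
    def __init__(self):
        self.accounts = {}  # email -> (iterations, stored_hash)

    def register(self, email, password, iterations):
        h = hashlib.pbkdf2_hmac("sha256", password, email.encode(), iterations)
        self.accounts[email] = (iterations, h)

    def get_iterations(self, email):
        return self.accounts[email][0]

    def authenticate(self, email, client_hash):
        assert self.accounts[email][1] == client_hash

    def update_hash(self, email, client_hash, iterations):
        self.accounts[email] = (iterations, client_hash)

def login_and_upgrade(server, email, password):
    # The client must trust the server's reported count -- the exact
    # problem discussed in this thread.
    old_iters = server.get_iterations(email)
    old_hash = hashlib.pbkdf2_hmac("sha256", password, email.encode(), old_iters)
    server.authenticate(email, old_hash)
    if old_iters < CURRENT_TARGET:
        # Only possible while the client holds the plaintext password,
        # i.e. at login -- so dormant accounts stay on their old counts.
        new_hash = hashlib.pbkdf2_hmac("sha256", password, email.encode(), CURRENT_TARGET)
        server.update_hash(email, new_hash, CURRENT_TARGET)

server = FakeServer()
server.register("user@example.com", b"hunter2", 5_000)  # a long-dormant account
login_and_upgrade(server, "user@example.com", b"hunter2")
print(server.get_iterations("user@example.com"))         # 600000
```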
How would the server know what the hash is? So you log in with your phone and it's 5,000 iterations, then you log in with your desktop and it's 500,000 iterations. Then you get an updated driver for your GPU that fixes a performance bug and now it's 550,000 iterations... how the fuck would the server know what the matching hash should be when it never got sent to it?
It sounds like the server would have to store the lowest possible hash (your phone), which defeats the purpose of the larger iterations on the desktop machine.
Yes, but the server has to have something stored. If the stored data was last encrypted with a hash that has fewer iterations than the client side, then
1) the data cannot be decrypted with a hash of more iterations because the server can’t undo client side hash iterations.
2) if the encrypted data is exfiltrated it will still be as easy to crack as the iterations performed for the original encryption of the data at last login ten years ago.
Other than the low amount of default iterations (at least compared to the OWASP recommendation [0]) the article doesn't explain why the server-side hashing is "useless" and what the design flaw actually is. Am I missing something?
I disagree with the article that server-side iterations in this case are useless. They are used for access control.
Bitwarden's API likely doesn't permit just anyone to access anyone else's encrypted blobs. You have to authenticate with the server to be able to access your blob. Since the iteration count might be low for producing the master key (and therefore the master password hash), the server must treat the master password hash as just another password and therefore iterate the hash quite often (100,000x).
Assuming no malicious insider or an outside attacker gets their hands on the encrypted blobs this is the most important attack prevention.
> the 100,000 PBKDF2 iterations on the server side are only applied to the master password hash, not to the encryption key
So only the password hash itself gets the extra iterations. The diagram and the text of the whitepaper seem to be at odds, though. The diagram doesn't show extra iterations for the encryption key, but the whitepaper says:
> PBKDF-SHA256 is used to derive the encryption key from your Master Password. Then this key is salted and hashed for authenticating with the Bitwarden servers. The default iteration count used with PBKDF2 is 100,001 iterations on the client (this client-side iteration count is configurable from your account settings), and then an additional 100,000 iterations when stored on our servers (for a total of 200,001 iterations by default).
If I understand correctly, the problem is that the hash created on the client side is used to create the encryption key before the server side hashes are applied. Only the master password uses the extra server side hashes.
It's useless in most circumstances: it depends on what the attacker has access to. If the attacker only has the master password hashes the server uses to gate access to the encrypted database, then they would need to go through the full 200,000 iterations. But if they have the encrypted database then they can run 100,000 iterations and just try to decrypt the database (assuming this check is cheaper than the 100,000 iterations, which it likely is). And it's far more likely the attacker has the encrypted database (which can be pulled off of any machine which has been logged in, as well as the same database which contains the master password hash, if Bitwarden's servers were to be breached) than the master password hash alone.
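A simplified sketch of the derivation chain being discussed (salts and parameter details simplified; not Bitwarden's exact implementation) makes the asymmetry concrete:

```python
import hashlib

def derive(master_password: bytes, email: bytes, client_iters: int = 100_000):
    # Client side: the encryption key is derived from the master password,
    # salted with the account email (simplified).
    master_key = hashlib.pbkdf2_hmac("sha256", master_password, email, client_iters)
    # A further hash of that key is what gets sent to the server for login.
    auth_hash = hashlib.pbkdf2_hmac("sha256", master_key, master_password, 1)
    # Server side: 100,000 more iterations before storage -- but the vault is
    # encrypted under master_key, which never benefits from these iterations.
    stored_hash = hashlib.pbkdf2_hmac("sha256", auth_hash, b"server-salt", 100_000)
    return master_key, stored_hash

# An attacker holding the encrypted vault tests each guess against master_key
# directly (client_iters of work per guess); the extra server-side iterations
# only protect stored_hash, which that attacker doesn't need at all.
```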
> Testing the guesses against the master password hash would be fairly slow: 200,001 PBKDF2 iterations here. But the attackers wouldn’t waste time doing that of course. Instead, for each guess they would derive an encryption key (100,000 PBKDF2 iterations) and check whether this one can decrypt the data.
I don't understand. As far as I know the key space of PBKDF2-SHA256 is 256 bits and the vaults are encrypted with 256 bit AES. Is the author arguing that Bitwarden is insecure because the attacker could (in a roundabout way) bruteforce 256 bit AES?
edit: I think I understand, the text didn't make it immediately obvious but I believe the author is talking about (configurable) 100k client-side iterations which are then used to obtain the "stretched master key" (from the diagram). This would render the 100k iterations done on the server pointless if an attacker already has a copy of the data, they only protect (slow down) the normal authentication flow.
- PBKDF2 (2.0 2000 RFC 2898, 2.1 2017 RFC 8018) <- This is here, although it was revised in 2017.
- bcrypt (1999)
- scrypt (2009)
- argon2id (2015) <- Have we not yet evolved to address threats here?
What about minimum resource complexity (mem-/CPU-/GPU-/FPGA-/ASIC-hard) guarantees on the client (assumed trusted, as much as one can trust)?
Picking one number out of the sky for today that doesn't evolve with technology doesn't make sense. Plus, it isn't something a human should be choosing for every use-case without a risk assessment. There should be a sanity-check lower bound that evolves with best-case performance, coupled with a specific threat environment.
Calculator website with:
1. "Which algorithm?" (some choices)
2. "What type of data is it?" (Level 1 - 6 with familiar descriptions)
3. "How long does it need to be protected?" (1...100 years in almost log progression)
4. "How much is the data worth?" (some choices, or 1e2 ... 1e11 USD / other currencies)
5. "What would be the consequences of its disclosure?" (with familiar descriptions)
6. "What model of device will slower users have?" (some choices of new to old laptops and touch devices)
7. "Funding amount of highest reasonable threat actor?" (some choices, or 1e5 ... 1e12 USD / other currencies)
8.-11. "What is a(n) {un,}reasonable {un,}lock delay?" (ms)
And then output parameters (n, salt/nonce sizes, factors) and password complexity requirements valid for implementation now.
It would also be nice to output an algorithm generated to forecast values needed X years in the future with similar guarantees.
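A minimal sketch of that forecasting idea, assuming iteration counts should scale with a fixed hardware doubling period (both numbers below are assumptions, not recommendations):

```python
# Scale today's iteration count by an assumed hardware speed doubling period.
def forecast_iterations(current: int, years: float, doubling_years: float = 2.0) -> int:
    return int(current * 2 ** (years / doubling_years))

print(forecast_iterations(600_000, years=10))  # 19_200_000 if speed doubles every 2 years
```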
One way that I've many people using is to have a long random password stored on a yubikey that will be entered on long press, then you have a shorter password that you remember and type in.
So when you enter your masterpassword, you first type the part of the password you remember, then long press the yubikey to get it to enter the long static password.
If one is looking for an alternative, may I propose Psono? (I am the main developer behind Psono.) It doesn't suffer from some of the reported issues: it uses, for example, scrypt instead of PBKDF2 for hashing the master password, and the URLs are also encrypted. Noteworthy, it's open source, so everyone can take a look at the source code or ask questions in our Discord channel. https://discord.gg/RuSvEjj
The real vulnerability with password managers is autoupdating browser extensions. That's the disaster on the horizon. They update on their own, all the time. You can turn updates completely off but otherwise cannot control this behavior.
Eventually, a password manager's development environment will be compromised and bad actors will sneak a trojaned extension in, submitting it to Google and Mozilla and Microsoft and the rest. Your browser will update to that extension and the bad guys will get everything.
Self-hosting won't help, your homelab running vaultwarden in a docker container not exposed to the internet is no protection from this at all. 2FA won't help either when the extension itself is your enemy, it'll simply upload your entire unencrypted vault off to the bad guy on IRC or discord or whatever.
(2FA will help on individual sites supporting it, assuming you don't use your password manager to store 2FA tokens too. They all support that functionality, but you don't do that, right?)
The only way to avoid this outcome (which, again, is inevitable: it WILL happen eventually) is to laboriously copy/paste passwords from a separate password vault program that is not configured to autoupdate.
The thought of actually doing that makes me cringe, so I'm still using Bitwarden. I know it's coming, and hope it hits another more popular password vault first so everybody gets wise and figures out some solution to this problem. I also use Firefox which seems to have humans reviewing extensions, at least sometimes, and is less popular than Chrome.
I have been thinking about this and one way to resolve it is if the encryption data and the login cannot be linked without going through the server iterations.
The reason this matters is because the salt for the hashing is the email address. Attacking the encryption key directly requires knowing the corresponding email. So denying attackers that knowledge adds to their burden.
If the customer/vault records can only be linked via the server-side encryption, then attackers with the vaults and a list of user emails will have to test every email as a salt (this also requires some care to ensure logs can't be used to correlate vaults and emails, but all the scenarios I've played out make me think this is possible). In fact, I think they could physically separate the customer and vault databases entirely, onto different servers or datacenters.
Initially I thought this might be what Bitwarden does (it seemed pretty clever), but the database schema on github does indeed directly/explicitly link customers and vaults.
If the customer and vault datasets were independent, Bitwarden could further complicate attacks by filling the customer and vault databases with convincingly fake entries since the basic database attack surface would scale as the product of customers and vaults.
> In case you are wondering whether it is even possible to implement server-side iterations mechanism correctly: yes, it is. One example is the onepw protocol Mozilla introduced for Firefox Sync in 2014. While the description is fairly complicated, the important part is: the password hash received by the server is not used for anything before it passes through additional scrypt hashing.
> Firefox Sync has a different flaw: its client-side password hashing uses merely 1,000 PBKDF2 iterations, a ridiculously low setting. So if someone compromises the production servers rather than merely the stored data, they will be able to intercept password hashes that are barely protected. The corresponding bug report has been open for the past six years and is still unresolved.
Is this not always going to be a flaw with server-side iterations? And therefore knocks down the first paragraph's contention that it's possible to do server-side iterations correctly?
It is largely unsolvable if you rely fully on the server. I describe here how we handle this at Standard Notes where the client can reject weak parameters from the server: https://news.ycombinator.com/item?id=34506062
This is extremely irresponsible advice for anyone who isn't already skilled in securing their systems and keeping their software up to date. Skimming through that video, there's no thought given to securing the OS, "pi" account is effectively given root access (through "docker" group) with a default password of "raspberry", network access is unrestricted, there's no thought given to secure remote access (e.g. a VPN), there are no auto-updates for either the OS or vaultwarden, leaving you exposed in case of future vulnerabilities.
I've given up trying to self host stuff like that, it's always a nightmare compared to paying for the service. Plausible Analytics being a prime example.
I would say that in the case of analytics, there are more things to consider than price and ease of installation when considering self-hosting. By not self-hosting you are sending your users data to a 3rd party, which might have impact on their privacy or on the law compliance.
What you are referring to is essentially a compatible server implementation called "Vaultwarden" (formerly Bitwarden_rs), where the original company will see no money whatsoever.
I am not sure I get the flaw. The author says that the problem is an attacker only needs 100,000 iterations to get the master password hash, instead of doing the 100,000+100,000 iterations to get the master password and the master password hash.
Wouldn't though the master password hash be so long, that 100,000 iterations would be really hard to brute-force?
The point is it goes master password -> encryption key -> master password hash. The master password hash is only important if you want to download the database from Bitwarden's server; the real valuable part is the encryption key, and the attacker is extremely unlikely to have the master password hash but not the encrypted database, which they can use to check the encryption key.
Question: Would there be any benefit, not just to increasing the number of hash iterations but also changing it to a random non-round number? I would have thought if you were brute-forcing the hashing key it would be worth limiting it to round numbers (e.g increments of 10,000 or 50,000) to increase the efficiency.
Good to have the heads up. I just bumped my KDF iterations from 100000 to 600000.
One thing that is also worth mentioning for anyone nervous about their password security is that you can use a physical security key with the paid version of Bitwarden. I need to use a yubikey to log in to any new instance of Bitwarden and it's been working well.
Yes, 2FA/MFA only serves as access control, to limit who can retrieve a copy of your encrypted vault from the server (to then decrypt locally).
Like in the Lastpass scenario, if someone gets a hold of your vault from a server side backup (or compromise), then your access control is bypassed, and won't make your vault harder to decrypt.
Using MFA is definitely good practice though, as in normal circumstances an attacker will be trying to get to your vault without server side access.
I, personally, think that password managers contradict the idea of passwords.
A password is something that YOU know. You and nobody else. Ideally not even the system you access.
If you write it down, give it to somebody else, put it in the cloud, etc., the password isn't safe anymore.
If you can't remember your passwords, then use something else.
> I, personally, think that password managers contradict the idea of passwords. A password is something that YOU know. You and nobody else. Ideally not even the system you access.
That's the theory of passwords, but it has been demonstrated that most people simply can't manage their passwords.
I have about 600 passwords stored in my password manager.
Without a password manager, I'd have to reuse passwords (either partially or fully) to manage all of that.
A much better option is to get my password manager to generate a random password for each of those sites.
My Hacker News password, for example, is 60 characters long and contains upper and lower case letters, numbers, and symbols.
Not that it matters much in practice, but a 60-character uniformly random password is overkill. Given that a 128 bit key is considered secure and one may occasionally need to type a password due to technical constraints, 21 randomly selected characters from a 72-character alphabet is enough. Double it if you want to target 256-bit security, but the threat model here doesn’t really support that. Are you expecting a large-scale quantum computer attack on the HN password hash database?
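The arithmetic, as a quick sketch:

```python
import math

ALPHABET_SIZE = 72   # e.g. letters, digits, and a handful of symbols
TARGET_BITS = 128

bits_per_char = math.log2(ALPHABET_SIZE)          # ~6.17 bits per character
chars_needed = math.ceil(TARGET_BITS / bits_per_char)
print(chars_needed)  # 21 -- so 21 random characters already hit 128-bit strength
```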
I think that the idea of passwords has shifted over time.
Today we are using more and more accounts, almost every website or service seems to require an account for something. It's impossible to remember strong unique passwords for 300 different websites. Anyone with that many accounts who isn't using a password management system is almost guaranteed to be re-using the same passwords or patterns.
Data breaches have also become more common and accessible to bad actors, to the point a script kiddie or hacker could look up your email, see much of your old passwords, and use that to help bruteforce your current password for some important account.
Password management defends against this by allowing you to use random meaningless passwords for each website without needing to remember each one. There is no more human element in picking your password, and your old passwords become useless for any would-be intruders.
But you really can't know if the system handles passwords correctly or just stores them as plain text into a database. And memorizing a unique password for each system you want to access seems like a hard task.
KeePass works great! Nobody knows or has your passwords except you, and you still only have to remember one complex password. Password managers aren't the problem; it's the idea that other companies should have access to all your stuff.
It's not really needed if your password has a lot of entropy. 40 random characters is like 256 bits or something so that's crazy overkill and would be safe regardless of how many iterations.
I use pass <https://www.passwordstore.org/> to store and get my BitWarden master PW. Pass is encrypted by a PGP key residing on my HW token/smart card and encrypted with a good but (for me) memorable PW.
In my case it's not really due to paranoia. I already had pass in use for critical and very important credentials, and since I 1) was only evaluating BitWarden at first and 2) did not want to remember two master passwords, I went for this approach. It works out quite well for how I use BitWarden (basically only on my workstation, where I require my HW token anyway).
Yeah, a developer owning a YubiKey and using a password manager, that's really revealing...
Unless you're in for criminal acts (and their repercussions), i.e., stealing my token and my keys to wherever my workstation stands, and then torturing both my workstation's and key's passwords out of my mind, it won't really help you...
Or what did I really reveal with what actual real world implication that can compromise the security of (which?) systems I can access?
IMO, if your "security pipeline" can be compromised due to documenting it (not the credentials used to access it!), even publicly, it wasn't that good of a "security pipeline" in the first place.
Leaning towards self-hosting at this stage. Sure, my security skills are no match for Bitwarden's engineers… but I also don't have a giant state-actor-sized bullseye painted on my back.
At least I know, so I manually increased the iterations on my account above 600,000 as a safeguard. I'll have to keep in mind to update it in the future.
I switched to Bitwarden in early 2018. Just checked and my PBKDF2 iterations were 300k, so everything was fine. I increased them to 600k just for the heck of it.
More confirmation that KeePass was a good choice vs popular alternatives. It lets you choose how many iterations you want in the settings, and even that won't matter unless someone has access to your offline database.
Not to excuse the behavior in that bug, but the situation is very different for Firefox than for BitWarden - as the blog post notes, 1000 iterations is only used for the key as it is in-flight via https to Mozilla's production servers, not when it is at rest. An attacker getting access to any encrypted databases would need to deal with scrypt, not these 1000 PBKDF2 iterations.
tl;dr rant, and not an exaggeration: the time I've spent skimming these (edit to be nicer) exhausting comment threads about password managers is 10x the time it took to set up and use a pass-compatible or age-based password manager.
Okay, here we go, let me be explicit: there's a Venn diagram I imagine in my head of two circles - first is "password managers that require a web ui and browser integration", second is "password managers written in a language and with an overall complexity that I'm comfortable with".
For well-informed users, there's ZERO overlap in 2023. What's super fun is that this comment would have gotten entirely different scores every 2 years for the past decade+ now. (Think about Rust circa 2021, 2019, 2017; I feel confident saying that, for most, opinion has been shifting in a certain direction.)
Luckily, there is now a tool which is:
1. rust-written
2. pass, pass-totp, and pass-tomb compatible
3. hasn't yet broken the CLI UX in nearly every release, unlike another prominent "safe-lang" rewrite of pass. That's all I'll say, because it turned into a many-paragraph rant otherwise.
I'm trying so hard to watch myself here, but it's just not that hard. Let's imagine what needs to break for me to compromise your WV account versus what you'd need to compromise my publicly-hosted, git-repo-backed, yubikey-hardened, gpg-encrypted pass accounts. "I'll post mine if you post yours?"
Embarrassments will continue until everyone realizes all the "security experts" recommending password managers have marketing deals with them. Password managers are an awful antipattern, I've been saying it for years, and it's absolutely comical to me that people do not get the message. When one falls, it's "oh that one sucked, use this other one instead".
An Internet-connected data vault is subject to attack from anyone on the planet. A Post-It stuck to your monitor is only subject to attack from people who can visually look inside your office. Guess which is safer? Once you realize a Post-It is a better choice, and you can think of trivial ways to improve on that... why are people storing their passwords online?
Bitwarden didn't fail, there was no embarrassment. They actually encrypt the vault unlike Lastpass. There is heightened awareness around the various issues that came to light following the Lastpass attack, and PBKDF2 is one of them. In Bitwarden the iteration count is user configurable.
Note also that Bitwarden provides its server software (and there is a great alternate implementation in vaultwarden); it doesn't have to be an "internet-connected data vault".
Your position is not vindicated by TFA, but I applaud your caution and you are like 10% less of an old man howling at the moon.
Is it stored on a device which connects to the Internet? It's online!
I think there's a role to play for things like KeePass when you need to share some secret values with family or team members, but it shouldn't be for high security things.
Two-factor authentication is probably our best practical defense right now, and generally the best way to do that is not to have your password saved anywhere: Two-factor is "something you know and something you have". If your passwords are stored somewhere, it's just two things you have.
If, like some Bitwarden users, you back up your 2FA tokens in your password manager, 2FA is just one thing you have. And using a password poor enough for you to remember, plus a two-factor token, is better than just two separate apps on your phone.
Ehhhh, I dunno on that one. I think "stored on an end user device" is quite different from "stored on a server with a public address". Firewall rules differ, for one. ISPs are more restrictive on what they flag as suspicious for domestic connections for two.
Yes it would be better if the password manager were airgapped but that's a bad trade off in terms of user inefficiency and risk reduction IMO. In the same way, the reduction of attack surface by using post-its seems dulled by the consequent decrease in password complexity caused by people deciding their own passwords (since people subconsciously apply patterns to "random" strings they generate).
This isn't to say "post its are bad" or "password managers good" but I am pushing back at the categorical statement of "post its good, managers bad". It seems contingent on risk profile.
I will agree with you end user devices are generally safer. No points for ISPs being useful for anything, because they tend not to be, but firewalls for sure, and of course, the best accidental security protection ever developed: NAT. The other big difference is that end users have a single user's credentials, which is way less exciting than popping a large provider which can compromise millions of users at once.
That being said regarding Vaultwarden, as someone who contributes to a self-hosting platform, I interact with a lot of self-hosters. And self-hosters do a lot of really dumb things that aren't secure, and, of course, tend to add a public DNS endpoint to their password manager. :P
People put a lot of investment into the concept of making passwords super secure. For most passwords, that is silly and probably does more to increase risk. I would argue a password you can remember + 2FA is much safer than a password generated by a password manager, and any platform smart enough to support 2FA is also not going to give you unlimited password attempts.
But the biggest issue I have with people's views on password complexity and password managers is the idea that all passwords should be equally secure. (Or even, that you "must use a unique password on every site".) I sign up for a lot of crud. Usually it's because something made me sign in to read or comment or something, or a one-off purchase where I'm not even storing my payment credentials. If we're talking about risk profile, these aren't passwords that need to be heavily secured. But if you treat them like they must be, you'll end up using a password manager, likely for all of your accounts, including making your more important accounts, like your email and bank, less secure.
Understand the risk of an account getting compromised, and set its password accordingly. Absolutely use bad, worthless passwords on one-off sites that can't impact you much. Heck, forget those passwords, and reset them if you ever need to come back to the site. Password resets are cheap for things you rarely go to.
Turn 2FA on everywhere, and ensure your important passwords are high quality and unique. If you have a bad memory, create some sort of portable reminder, or if you have to write your passwords down on a card or something... lie on it in a consistent, easy to remember way.
There's an interesting general argument you're making here and I'm not prepared to immediately reject it, but one detail I will push back on hard is "use low complexity passwords for unimportant accounts".
This is inadvisable. While the account's direct utility may not be high, and it increases user overhead, a malicious party can accumulate access to tens of a user's "low value" accounts to farm metadata or incidentally relevant data.
Additionally, the end user is not always the best judge of which accounts are even high value. My aunt insists, for example, that her Amazon account does not need a complex password because she "only" buys cookware from it. A silly example, but it illustrates the point.
I'd generally say anything you save payment info in for general physical purchases should probably be secured decently. But consider: Social media accounts used for public posting present no additional metadata. The risk profile to many accounts being stolen is "they can see my already public content, and also pretend to be me on that site". Which is of limited value. I'd really hope nobody trusted a sensitive transaction solely based on my HN posts, for instance. (It's definitely fair though that many people are not a good judge of this particular risk assessment.)
And I'd say for many sites, using a one-time password that you immediately don't bother to save is also probably a reasonable step up from this. If it remembers you on all your computers for a while... just lose the credentials and reset it later.
The _pass_ tool is a nice, simple offline password manager, based on PGP, which works well for me. But I don't expect to be the victim of a targeted attack.
If you think it's ok to be putting all your passwords on some random server owned by a random company then I don't know if you care about any other design flaws. This also includes the websites you visited and just happened to either accidentally or on purpose save your login.
Now multiply the privacy/security implications of that when said company is pumped and dumped by a major VC.
> consider the fact that the threat model for a cloud-based password management solution should start with the vault being compromised. In fact, if password management is done correctly, I should be able to host my vault anywhere, even openly downloadable (open S3 bucket, unauthenticated HTTPS, etc.) without concern. I wouldn't do that, of course, but the point is the vault should be just that -- a vault, not a lockbox.
I keep making this same point in various HN threads. It should be trivially obvious to anyone who understands cryptography, but I guess lots of people really just don't.
Pick a good password, pick good algorithms, and you should feel very comfortable about hosting an encrypted blob of data anywhere. Maybe you should worry a little if you're at risk of being specifically targeted by the NSA, but I doubt they've seriously broken any state-of-the-art crypto. At that point OS exploits and trojans are your real concern.
Even a fairly middling password manager implementation is better than just about any other strategy that anyone is likely to use.
Especially because for the vast majority, the other strategy is going to be reusing the same password ~everywhere, and if you're lucky they might use a special password for their bank or something.
I guess for most people online, writing all their passwords down in a notebook is more secure than using a password manager. It’s just less convenient.
A notebook is better in some ways, but worse in important ones. It can't save you from phishing, and your password strength is going to be relatively poor (since a notebook can't generate them for you, and if they get long it'll be annoying to type). Also, a notebook is easier to copy, steal, or lose (though this is a fairly minor consideration for most people).
I would say a notebook is worse than a password manager. It's not strictly worse in every way, but on the balance it's not a hard choice.
A notebook is better than most other not-a-password-manager solutions though. So it has that going for it.
If your notebook is destroyed (e.g. dog eats it, fire, water damage, et al) then all your passwords are gone. With most good password managers you can actually backup and store a copy of your vault data locally.
I installed Backblaze years ago for my 88-year-old mother-in-law. She has a binder beside her computer with a sheet for each account, some with 7 or 8 passwords scratched out and replaced.
I really should have her write out a few key passwords and put them in an envelope for me to keep.
My biggest complaint is that password managers treat passwords as something precious. They're the opposite of that, in most cases they don't even have to be remembered at all, because there are easy password reset flows and long session times. Just get a new password if you need to log in from a new device or the session ended.
Sure, you need to know how to log into your email, but that isn't any more passwords to remember than the password manager master password.
I don't rely on just that, but between the reset flows and the browsers built-in password store, I don't really see what I gain by adding an external point of failure.
> I don't rely on just that, but between the reset flows and the browsers built-in password store, I don't really see what I gain by adding an external point of failure.
I mean, a browser "password store" _is_ a password manager. It's just usually not a very featureful one.
There's a version of this sort of password manager that is safe. Not saying Bitwarden is it, just that strong, well-implemented cryptography can get us there.
If it's not open source, you cannot know whether the claim of it being end-to-end encrypted is actually true. The third party can disable the encryption with the click of a button.
https://neilmadden.blog/2023/01/09/on-pbkdf2-iterations/