
Why not just time the checking function and pad it out to 0.5 seconds? So, regardless of how long the actual check took, the function would add a sleep() call to bring the total up to 0.5 seconds. This sounds like it would work for everything...
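A minimal sketch of the idea in Python (check_password here is a hypothetical stand-in for whatever does the real bcrypt comparison):

    import time

    PAD_SECONDS = 0.5  # every login attempt takes at least this long

    def padded_login(username, password, check_password):
        # check_password(username, password) -> bool does the real bcrypt check
        start = time.monotonic()
        ok = check_password(username, password)
        elapsed = time.monotonic() - start
        if elapsed < PAD_SECONDS:
            time.sleep(PAD_SECONDS - elapsed)  # pad out to 0.5 s total
        return ok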



Why bother? Odds are that a call to sleep(500) vs. sleep(500 - bcrypt_time) is going to be subtly different depending on timer resolution for your platform and so on. An attacker might be able to bog down your system through other means such that the difference becomes statistically significant, or such that bcrypt takes longer than 500 ms; then you're not only worse off, but you've also wasted the time it took to implement something that only sounded like it was going to be good enough.

Avoid the pain and burn the CPU cycles for all code paths. If you find that bcrypt takes up proportionally too much CPU, reduce the work factor by 1 until it's acceptable. Keep in mind that the vast majority of logins will be legitimate. For the small number that aren't and that persist with failed attempts, let their IP get banned; then you don't need to spend any CPU cycles dealing with them -- in that case, it would be fine to sleep an arbitrary amount of time and then display the banned notification.
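Concretely, "burn the cycles for all code paths" usually means hashing against a dummy value when the username doesn't exist. A sketch using the Python bcrypt library (lookup_hash is a hypothetical user-store lookup):

    import bcrypt

    # Generated once at startup so unknown usernames still pay the full bcrypt cost.
    DUMMY_HASH = bcrypt.hashpw(b"not a real password", bcrypt.gensalt(rounds=12))

    def verify_login(username, password, lookup_hash):
        # lookup_hash(username) -> stored bcrypt hash bytes, or None if no such user
        stored = lookup_hash(username)
        if stored is None:
            bcrypt.checkpw(password.encode(), DUMMY_HASH)  # same work, then fail
            return False
        return bcrypt.checkpw(password.encode(), stored)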


My earlier comment (which was for some reason downvoted) also solves this problem: you will always sleep for half a second, timer resolution is irrelevant, and if an attacker bogs down the system such that bcrypt takes longer than half a second, then when the timer finishes the code would notice that bcrypt wasn't done and treat the attempt as a failure.

It would be a bit of a hack compared to just using bcrypt every time, but it would save processing power, which may be useful in some contexts, like very energy-efficient devices, etc.
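Roughly like this (a sketch of the timer-plus-failure idea; again, check_password is a hypothetical bcrypt check):

    import threading
    import time

    WINDOW = 0.5  # every attempt responds after exactly this long

    def timed_login(username, password, check_password):
        result = {"ok": False}

        def worker():
            result["ok"] = check_password(username, password)

        t = threading.Thread(target=worker, daemon=True)
        start = time.monotonic()
        t.start()
        t.join(WINDOW)  # wait at most half a second for bcrypt
        remaining = WINDOW - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)  # pad to the full window if bcrypt finished early
        # If the check still isn't done, treat the attempt as a failure.
        return result["ok"] if not t.is_alive() else False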


So if your system is heavily loaded, nobody will be able to log in if bcrypt takes longer than half a second? If processing power is an issue, there are alternative solutions, such as dynamically lowering the work factor (upon successful login, rehash the plaintext password they supplied using a decreased work factor) or offloading logins to a pool of servers.
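The rehash-on-login trick works because the plaintext is only available at login time. A sketch with the Python bcrypt library (save_hash is a hypothetical callback that persists the new hash, and the cost parsing assumes the usual $2b$NN$ prefix):

    import bcrypt

    def check_and_lower_cost(password, stored_hash, target_rounds, save_hash):
        if not bcrypt.checkpw(password.encode(), stored_hash):
            return False
        # Hashes look like b"$2b$12$...": the cost factor sits in bytes 4-6.
        current_rounds = int(stored_hash[4:6])
        if current_rounds > target_rounds:
            # We only see the plaintext at login, so rehash with the lower cost now.
            new_hash = bcrypt.hashpw(password.encode(),
                                     bcrypt.gensalt(rounds=target_rounds))
            save_hash(new_hash)
        return True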


Well, in this instance you wouldn't need any work factor, since the response would always take half a second. And if a zero-work-factor hash is taking half a second to complete, something is very wrong... But granted, in almost all cases it would be best just to use bcrypt every time.


A bcrypt work factor of 10 or 11 on a recent CPU will take about a tenth of a second to hash. Assuming little other workload, you can get about 10 valid logins/second before you'll hit a backlog. If your app is busy and the logins/second increases you'll eventually reach half a second per hash time. If you always return failure when the timer expires and bcrypt hasn't finished, you're doing a denial of service on your own users.
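Those timings vary a lot by CPU; the ~0.1 s figure is only an estimate. A quick way to measure it yourself with the Python bcrypt library:

    import time
    import bcrypt

    # Rough per-hash timing at a few work factors on this machine.
    for rounds in (10, 11, 12):
        salt = bcrypt.gensalt(rounds=rounds)
        start = time.monotonic()
        bcrypt.hashpw(b"benchmark password", salt)
        print("rounds", rounds, "->", round(time.monotonic() - start, 3), "s")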

You seem to only be considering the case where the only logins you need to deal with are invalid logins. A busy and successful service will see the vast majority of logins being for legitimate, known users where the bcrypt check must consume CPU time. You have to design the system to be able to handle the workload from both good and bad logins without revealing information about a bad login to an attacker. Anything other than going through the same motions every time will leak information in ways you don't expect.


You can limit it to only checking X hashes per second. The attacker can, under heavy load, figure out how many people are trying to log in, but that doesn't help them figure out any passwords.

Also, there is no reason to implement an extensive backlog in the first place. It's far better for a system at 105% load to drop one in ten attempts than to have an infinitely growing queue.
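For example, a simple token bucket that admits at most X bcrypt checks per second and drops the rest instead of queueing them (a sketch; the rate and the surrounding login handler are up to you):

    import threading
    import time

    class LoginRateLimiter:
        # Token bucket: at most `rate` bcrypt checks per second, no queue.
        def __init__(self, rate):
            self.rate = float(rate)
            self.tokens = float(rate)
            self.last = time.monotonic()
            self.lock = threading.Lock()

        def allow(self):
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.rate,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1.0:
                    self.tokens -= 1.0
                    return True
                return False  # over capacity: drop the attempt, don't queue it

A login handler would call allow() before running bcrypt and return a "try again later" response when it says no, so the queue never grows.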



