All you ever wanted to know about building a secure password reset (NSFW) (troyhunt.com)
157 points by ColinWright on July 23, 2012 | 121 comments



This happened to be posted literally, to the minute, while I was ripping out some of the advice from my codebase, so let me take exception with one recommendation:

If someone attempts a password reset on foo@example.com, and there is no record in the DB for foo@example.com, immediately tell them so.

Why?

1) While this does "leak" the fact of foo@example.com not being in the DB (and, by consequence, can be used to verify whether any email is in the DB), naive implementations of the best practice will also leak that via a side channel. If you would not immediately greenlight $20k to have a security consultant spend one week auditing this feature extensively to avoid the timing attack, a) your app is not important enough for this level of security and b) this buys you no real security, just the illusion of it.

1.5) This is totally security theatre if you have a registration process which checks for provided emails already being in the DB, for example for telling people "That email is taken." Different route through the maze but same cheese at the end.

2) You can already spearphish people for their accounts on example.org. Start with an email list of people. Spearphish for their accounts irrespective of whether they have them or not. Collect wins, since losses cost your botnet nothing at the margin.

3) MOST IMPORTANTLY: you're buying yourself six years of frustrated emails from customers, 90% of which have totally automatically discoverable solutions, if you do this the "right" way. I'm speaking from experience. Many of your customers have multiple emails they habitually use. If you follow the best practice, every time they misremember which one they signed up with, you get a support email. This will be your #1 cause of support emails, and the cost of dealing with them is astronomically higher than the minimal marginal gain in security for your users.

Edit: Omnibus response for people doubting that a side channel attack is possible:

The attacker wins with microsecond precision over the open Internet and nanosecond precision over the local network, which is plausibly achievable on all of the cloud providers that HNers like to host their apps at. Microsecond precision is enough to discriminate between "record in DB" and "record not in DB" on many plausible application architectures, even if your HTML/headers returned are exactly identical for the two cases (and they frequently won't be out of the box).

For example, the relevant code for Bingo Card Creator includes the line

user = User.find_by_username(params[:email])

This line takes approximately 30x longer to execute if it finds a user in the DB versus not finding a user. Why? ActiveRecord. You could, if you had a mind to, trivially discriminate between the two cases in under a week over the open Internet. If you really wanted to ruin my day, you could get a prepaid Visa card and get a VPS from Rackspace on the same intranet, then discriminate between the two responses in under an hour, again without tripping anything.
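To illustrate the statistics behind that claim (the latency figures below are invented for the sketch; this is not Bingo Card Creator's actual code): even when per-request network jitter dwarfs the lookup-time difference, averaging enough samples separates the two cases.

```python
import random
import statistics

# Hypothetical latency model: every request pays ~50ms of network round
# trip plus Gaussian jitter; a "user found" lookup pays an extra
# ORM-instantiation cost roughly 30x the "not found" lookup.
def response_time(user_exists, rng):
    base = 0.050 + rng.gauss(0, 0.002)           # network + jitter (seconds)
    lookup = 0.0009 if user_exists else 0.00003  # 30x lookup difference
    return base + lookup

def mean_response(user_exists, rng, trials=500):
    return statistics.mean(response_time(user_exists, rng) for _ in range(trials))

rng = random.Random(42)
gap = mean_response(True, rng) - mean_response(False, rng)
# Per-sample jitter (sigma = 2ms) is far larger than the ~0.87ms lookup
# gap, yet the difference of sample means still recovers it.
```

This is why a single request tells the attacker nothing, but a few hundred requests per candidate email can.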

Some folks think there are trivial ways to defeat timing attacks. You are either mistaken or you have a very different view of the word "trivial" than I do.


I totally agree with you here, I have several email addresses and lack of immediate feedback is extremely frustrating.

However, the original article did raise a point - there are some contexts in which leaking membership might be especially embarrassing. I don't think any of my coworkers would use timing attacks to determine if my email address is in the database of someembarrassingpornsite.com, but typing in my email address and seeing if you get an error is within the intuitive grasp of many preschoolers. Perhaps the guidance should be that for 90% of sites you don't need to worry about disclosing membership, for 9% you should consider a naive implementation of the best practice, and 1% should get in a security consultant.


To take a very recent bitcoin-related example, the owners of bitcoinica were using LastPass to manage their passwords. LastPass leaks membership through the "forgot password" function, and probably months ago an attacker determined that info@bitcoinica.com was a LastPass account. Then, when the bitcoinica source code was leaked, the attacker tried the API key within the code (set as the password by a naive investor) and bam: synced with the LastPass account to get mtgox passwords and cleared out bitcoinica's mtgox account of ~$350k USD.


Has this info leak been fixed? Would you still recommend LastPass in this case?


If you are worried about a simultaneous email and password leak of your LastPass account, they do have several two-factor authentication options that would prevent someone getting into your account.


Valuable reply - thank you.

Have you considered doing what EDW did and compiling a book of your HN replies, expanded by further insights gained while doing so? Would you have time? I'm sure many would find it valuable.


Please correct me if I'm wrong, but shouldn't code of the following form fix the timing attack?

    int MAX_RESPONSE_TIME;
    start = Date.Now();
    user = Database.findUser(email);
    SleepUntil(start + MAX_RESPONSE_TIME);
    return user;


Yes. Now find every place that can be slow in some cases and do the same thing.

Note: just doing this in your app isn't enough, because you run in a framework and on a virtual server, and things like memory usage affect the speed of these components.


OK. Here's another approach. You offload the task into a separate queue that doesn't even run the task immediately.


Yes. That will make it tougher to figure out. Your assumption that timing attacks are impossible because they are hard to trace seems optimistic, though.

This is specifically a security problem. Saying "I'll sweep it under the rug" and considering it solved is historically a bad idea.

This chain of "I'll just do X", "no, X isn't enough", "then I'll do Y", "no, Y isn't enough", "then I'll do Z" is typical of these problems.

Unfortunately, each "I'll just" is code for "I'm not going to think through this problem because I think it's easy." As with X and Y above, Z is likely to have serious problems.

Example problem with the queue approach: hacker puts in lots of password reset requests for his own domain(s) and sees that the queue workers have delays of 0.22 seconds between sending back responses to his domain, with gaps which add 0.37 seconds or 0.22 seconds, to within some error. Inference: a wrong email takes slightly longer because it often has a retry, and the 0.22 second entries are other correct password reset requests to valid domains.

Now he can send back a stream of passwd requests to his domain interspersed with three or four requests to a given email address. Now he knows if it's valid or not. ("I'll just block his domain?" "He'll use a botnet, he is anyway" "I'll just rate-limit password resets!" "Good idea - now, do you really believe that that was the last hole in your swiss-cheese security?")

Now you could fix this, too. But see how we've gone through yet another "I'll just"? You can make sure to do the delay in lookup and email sending on each entry in the queue (by the way, you've slowed down all your queue workers and likely increased your hardware costs), but trying to not think about this and sweep it under the rug is really hard.

When patio11 says that timing attacks aren't trivial to fix, this is what he means.

Is the "put it in a queue, add a delay to the queue" method perfect? That is, is "I'll just do Z" the answer? No. Having the queue sleep() is less load on other components, which an attacker can check for by pinging other services. Having it busy-wait is a different load profile than receiving an email. That's the microsecond-precision thing that patio11 mentions where an attacker can get another instance on the same Rackspace network and measure with a side-channel attack.

This is a really hard problem because there are an almost unlimited number of side channels here, and your OS doesn't try to keep information from leaking out on them.


Would it be possible to limit these via an outbound filter? I'm thinking something like a 'do not respond before' header and a web server plugin.

That would be relatively expensive, but if you just do it for login/registration/password resets it might go a fair way towards mitigating those sorts of attacks.


How is your "do not respond before" implemented? Does it sleep? Or busy-wait? Either is basically detectable because it affects (or doesn't) the performance of everything else on the machine (such as pings, other web requests, SSH login, etc).

But yes. Every layer you add (maybe) makes it a bit harder for an attacker. You simply can't stop a dedicated attacker, reasonably speaking.

"Do not respond before" would at least make an attacker have to use a (somewhat) less reliable second channel to find out. Expensive, but it would do you a bit of good against casual attackers.

Beyond that, you can't really block unless you block all the channels -- i.e. add a "do not respond before" to everything, not just web requests.

Also, when you say "do not respond before" is a header, I assume you mean set by your reverse proxy before the back end handles it. Clearly setting that from the client won't do you any good at all :-)


I have no idea how it would be implemented (other than "some sort of web server plugin"), and I was thinking from the PoV of a web app, so ssh would (might?) be out of scope.

The concept was just that no request for paths related to logins or passwords would take less than a set amount of time, e.g. 0.1s or 0.5s. It could even just be a config option.

Configuring it at the firewall/web server side would be an easy way to make life harder for an attacker, without having to fiddle with (or even understand) the internals of a web app.


SSH isn't out of scope for your attacker. If it's out of your scope as a defender, that's a problem.

That's why this is hard. You don't get to control what channels of information the attacker looks at.


I'm not sure how attacking SSH would help you crack a web app (the users in the app won't be unix accounts), but I'll take your word for it :)


If the web app uses extra memory or CPU, SSH response time may be affected. So delaying in the web app doesn't do a good job of concealing CPU and memory use by the web app, because it's a shared-resource system.


Trivial way to defeat the timing attack: Don't do anything but enqueue the request before returning to the user. Process the request and send the email asynchronously. Which you should be doing anyhow; you're not going to wait for the email to send successfully before displaying the message, are you?
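A minimal sketch of that suggestion (all names hypothetical): the request handler only enqueues, so its response time carries no information about the lookup; a background worker does the variable-time work.

```python
import queue
import threading

reset_queue = queue.Queue()

def request_reset(email):
    """Constant-time handler: no DB lookup happens on this code path."""
    reset_queue.put(email)
    return "If that address is registered, a reset link is on its way."

def reset_worker(lookup_fn, send_fn):
    """Background worker: does the lookup and sends mail asynchronously."""
    while True:
        email = reset_queue.get()
        if lookup_fn(email):       # variable-time work, off the request path
            send_fn(email)
        reset_queue.task_done()
```

As the replies point out, this only moves the timing signal into the worker; it does not eliminate it.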


See below. There are truly no trivial solutions to timing attacks. Patrick knows what he's talking about on that one.


Not showing "user not found" errors may not be a strong security measure, but it's an effective and important privacy measure.

And while you may think your app isn't a risk because it doesn't deal with sensitive issues, your users may not agree.


I have had many users complain about password and login problems. I've had zero users complain about someone being able to figure out they have an account.

Sometimes security is important. Other times it's just a waste of energy.


If the site has a registration form, then hiding e-mail in password reset is pointless.

The site's registration form validation will give the same information, and without side effects.


That's false for well designed signup forms. Signup forms should never leak "email/username does not exist" but only "this combination of email/username and password is unknown". This prevents the trivial attack. Timing attacks might still be possible, but then at least you're on the same level for both attack vectors.


Signup forms should never leak "email/username does not exist" but only "this combination of email/username and password is unknown".

Can you explain this to me? Is the only difference the error message used, or are you suggesting that apps should support multiple accounts with the same email/username but with different passwords? Or something else entirely? :)

Personally, I don't understand what difference a message makes; it'd be trivial for a user to test by signing up for an account themselves. And not having a unique login identifier would be a nightmare. Which makes me think I'm missing something obvious (damn this headcold, it makes me feel so much more stupid than usual!).


I'm sorry, my mistake here. I was thinking login forms. But the same thing can be done with signup forms, as explained in the response to the comment following yours.


You're thinking of a login form.

If you were to try to register a new account with an email address already in the database, you should get some variation of "A user account with that email address already exists," verifying the existence of an account with that email.


You're right, I was thinking login forms.

But the same trick works for signup forms as well: I could get an email to my mailbox saying "hey, thanks for registering again, but you already have an account. If you forgot the password, click here." That plugs that hole. Note that this won't work with usernames, but usernames are far less tied to a specific person than email addresses (apart from some people who have very specific and well-known usernames).
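A sketch of that signup flow (all names hypothetical; a real store would hash the password): the HTTP response is identical either way, and the branch only decides which email gets sent, so only the mailbox owner learns anything.

```python
def sign_up(email, password, accounts, send_fn):
    """Respond identically whether or not the address is already registered."""
    if email in accounts:
        # Existing account: only the mailbox owner ever sees this message.
        send_fn(email, "You already have an account. Forgot your password? Click here.")
    else:
        accounts[email] = password  # hypothetical store; hash in real code
        send_fn(email, "Thanks for registering! Confirm your address by clicking here.")
    # Same response on both paths, so the form itself leaks nothing.
    return "Check your inbox to continue."
```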


But the same trick works for signup forms as well: I could get an email to my mailbox "hey, thanks for registering again but you already have an account. If you forgot the password, click here." That plugs that hole.

But how would that work for users who aren't you? My name is - surprisingly, to me - quite common, and the number of registration signups I receive at my [firstname][lastname]@gmail.com email address is really quite surprising.


The point is that you know that you own your email address. Nobody else could have registered with that address and (rightfully) expect a confirmation email. If somebody else tries to register in error, it's the same as with a fresh account at service x: the mail goes to digital nirvana.

So if you suddenly get a flood of "hey, thanks for registering again" mails you'd at least know that somebody is trying to tamper with your account. The email could even say so and add a "please notify us if you think somebody is trying to play tricks on you" link.


I'm sorry, I think that I may not have communicated my point as clearly as I would have liked :)

Nobody else could have registered with that address and (rightfully) expect a confirmation email.

I regularly receive confirmation emails from websites where the user believes their email address to be john.doe at gmail, instead of johnathan.doe at gmail. If this is common enough for my name, it must be really common for more popular names.

So, following your example through, john.doe receives the "Hey, you're already registered!" email, and johnathan.doe first thinks they have registered successfully, and later on thinks that my service sucks because they can't log in, reset their password... and registering appears to do nothing at all. User confusion - and support headaches - ensue.


You communicated your intent clearly; I guess I failed on that. Let's take HN as an example. john.doe actually owns johnathan.doe at gmail. He now registers at HN with john.doe at gmail, where no john.doe at gmail is currently registered. The real john.doe gets an email saying "oh, you're now registered to HN" and poor johnathan waits for his confirmation and won't ever get a password reset mail. At some point he'll remember that he actually doesn't own john.doe at gmail and register again, with the correct email. That means the issue you're bringing up already exists, independent of how you design your signup process. [1]

So the variant of always sending an email and always accepting the registration provides the required benefit with a minor drawback.

[1] Unless you don't send confirmation emails at all, which would be pretty much illegal for most services in Germany, since double opt in is required for pretty much everything of interest.

edit: Since this was regarded as a statement on legal matters I herein clarify to mean "pretty much anything of interest": I loosely intended to say "most things a commercial service might want to do with data, including but not limited to sending me emails which might be regarded as an offer or an incentive to buy any paid service or any promotional email." As has been stated further down it's not a legal requirement to confirm email-addresses in all cases.


So the variant of always sending an email and always accepting the registration provides the required benefit with a minor drawback.

Ah OK thanks, I understand now. (have a headcold that is confusing me right now, so if in doubt, it's my fault ;)

I think that the only thing we're quibbling about is what a "minor drawback" is to each of us. For me, it's not such a minor issue, but it's been an enlightening conversation with you, so thanks :)


> I think that the only thing we're quibbling about is what a "minor drawback" is to each one of us.

I agree. But that's always the case with security, and I think in this case you can easily fix the drawback with clear messaging such as "This is what you entered: (replay form data). You should receive a confirmation email within (x) minutes. If you don't, make sure the email you entered is correct." You'll need that message anyway to catch those users who enter a completely false email address.


You don't need double opt in unless you are sending them emails. Assuming we're talking about a typical web app, this isn't the case. The method you propose actually has a pretty massive usability flaw. When someone signs up on my site, they are immediately logged in and free to use it. Content they upload/create isn't made visible to others until they confirm their account registration via emailed link. The email verification is to prevent spammers. Signups occur almost always directly because "I want to do X", and if you put a barrier between the signup and the "do X", like waiting a half hour for an email confirmation, then a significant number of people will simply give up and never complete the signup.


Yes, true, there are some cases where you don't need double opt-in, but we're in trade-off territory here [1]. If you decide that your customers value privacy less than usability, fine with me. The point is: It can be done.

[1] Most services that I've signed up to lately log you in once you confirm the email. That's what I regard as the best compromise and for that case, the scheme works perfectly without leaking information.


You are confusing two issues. The first is that you don't need double opt in. This is simply the reality, you don't. There is no such German law. The only law like that is that you need to prove people opted in if you send them commercial email.

The second is the usability question. Sites where you can't login until you have received and clicked the activation link are throwing away signups. The usability of "wait a half hour before you can do anything" is really, really poor. You can certainly argue that having fewer signups is a worthwhile trade-off to gain some privacy, and in some cases I might even agree. But I don't think that is true in most cases. As others have pointed out, you get thousands of emails about signup/login/reset related issues when you try not to leak this info. You get zero emails about leaking it.


I'm not confusing anything. Yes, technically you're right: only if I send them commercial emails do I need their explicit consent. However, pretty much any email a commercial service sends may be treated as a commercial email by a court - even the reminder "you signed up here". There's a whole bunch of special cases where that doesn't hold; however, if you're ever planning on sending emails, you're making your company's lawyer sleep better if you have double opt-in.

There are other reasons to use double opt in. Say I register for your service with no double opt in and I have a typo in my email address. I then log out and forget about it. I just lost my account. Double opt-in prevents that. Or think of a forum where you can register with an email address and make public statements - if said forum has no double opt in and you register with my address and slander someone, I'd take that forum to court since they neglected to prevent that. I might not win, but the forum would be drawn into the fight.

I know that most corporate lawyers I've worked with get twitchy if you propose removing double opt in - even in cases where it's technically not required. I guess lawyers are more the "play it safe" kind of people.

I agree with you that double opt in is not the silver bullet that magically fixes everything, but as I said - we're deep in trade-off territory here.

I also dispute the point that you get zero emails about leaking the information that someone is registered. I have worked on projects where that information was absolutely privileged and it was of utmost importance that no info about who's registered could be leaked.


You clearly are confusing things, because you're acting as if I am arguing against email confirmations. Your response basically makes no sense, as you are ignoring the method I use and creating a false dichotomy of "double opt in or not".

Your view of the legal situation is laughable. You are welcome to do whatever you like, but don't try to claim it is a legal requirement unless you are going to back that up with facts.

>I also dispute the point that you get zero emails about leaking the information that someone is registered. I have worked on projects where that information was absolutely privileged and it was of utmost importance that no info about who's registered could be leaked.

It doesn't seem like you are trying to discuss this in good faith. Read my post again, I was pretty clear that privacy matters in some cases, but that I do not think it is the common case.


> It doesn't seem like you are trying to discuss this in good faith. Read my post again, I was pretty clear that privacy matters in some cases, but that I do not think it is the common case.

I am actually trying to discuss in good faith, but the last sentence in your post is:

> "You get zero emails about leaking it."

That's what I dispute. If information would have leaked on that project I'd have had a very angry email from my customer in the inbox. Probably rather a written letter in the letterbox ;)

I guess basically we're both in agreement. If you go back and re-read my statements, I do agree with you that double opt in is

a) not the golden end of it all, and the one-size-fits-all approach won't work

b) legally not required in some cases (though we differ on how many cases there are)

However, I argue that in my experience most projects will end up with double opt in because

a) they're legally required

b) they might be legally required to do so in the future (like when they plan to send advertisement emails)

c) they have risk-averse stakeholders that want every anchor they can have in a (potential, probably imaginary) lawsuit that some bone-headed user might trigger.

In any case you're kinda missing my original point: The starting point of the discussion was not that you're required to have a privacy protecting signup scheme. My only point is that it's possible to have one. If you don't need one, that's fine with me.


>That's what I dispute.

Because you are taking it out of the context of the implicit "for a typical web app" that had already been established in the previous post.

>I do agree with you that double opt in is

You are still arguing a false dichotomy of "double opt in" vs "not double opt in". Double opt in is entirely irrelevant. The only time I mentioned it was pointing out that it is not in any way a legal requirement.

>My only point is that it's possible to have one

Nobody said it wasn't possible. People said it is a huge usability flaw.


> Because you are taking it out of the context of the implicit "for a typical web app" that had already been established in the previous post.

I don't get that point. What's a typical web app? Most of those that I've built had in some way or another email connectivity. Many had a newsletter component somewhere that was used to inform users about new features/offers/whatever promotional content. Even more had the tentative idea of at least keeping the option open. And if you do that, you need to confirm the email address. So probably we differ on the notion of "typical" here and I guess that's a point that can't be resolved.

> You are still arguing a false dichotomy of "double opt in" vs "not double opt in". Double opt is entirely irrelevant. The only time I mentioned it was pointing out that it is not in any way a legal requirement.

Sorry, you kinda lost me here. I don't understand what point you're trying to make.

> People said it is a huge usability flaw.

That's the whole point. IMHO it isn't that "huge" when you have double opt-in anyway. And as I pointed out, there are some reasons to have double opt-in regardless of legal requirements; in fact, most services that I signed up for use it. That might be different for you, but it's certainly not a minority or a freak occurrence if you encounter some service that uses double opt-in. So it can't be that bad either.

I fully acknowledge that you have a different view here, that's completely fine with me.


>Sorry, you kinda lost me here. I don't understand what point you're trying to make

This is what I have been saying for the whole thread. And you are only getting worse. I do not know how I can make myself any clearer, sorry.


I have _never_ seen a site do that. I think your original point was weak and now you're needing to resort to weak attempts to support it. The original point stands: it's almost always unnecessary during password recovery to try to hide that an email address is in the DB.


Timing attacks like you mention here are trivially easy to prevent.

Step 1: Determine how long a query for a not found user may take... Let's say 1-2 seconds since there may be many rows to deal with.

Step 2: Multiply the value in step 1 by two or at least add a comfortable margin, perhaps 5 seconds total.

Step 3: At the start of the request performing the lookup, spawn a separate timing thread.

Step 4: At the end of the request join the timing thread.

This will tie the processing time of the request to the uniform time out value instead of your database processing delay.
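The four steps above amount to sleeping until a fixed deadline rather than padding by a measured difference. A sketch, assuming the comment's 5-second figure (function names are invented):

```python
import time

UNIFORM_RESPONSE_SECONDS = 5.0  # comfortable margin above the worst-case lookup

def timed_lookup(lookup_fn, email, pad=UNIFORM_RESPONSE_SECONDS):
    """Run the variable-time lookup, then sleep until a uniform deadline."""
    deadline = time.monotonic() + pad
    user = lookup_fn(email)                  # fast or slow, found or not
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)                # total time is ~pad either way
    return user
```

This ties the response to the timer instead of the database, at the cost of pinning a worker for the full pad on every request (and, as noted below, it still leaks through load-based side channels).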


I think Patrick's point is still valid here. Even if you try to even out the time it takes for a valid login and an invalid login, there will still be a detectable difference under certain circumstances, even based on site traffic.

Let's say a valid login takes 4.9 seconds, and you are able to rig an invalid login to take 5.0 seconds. You might think that is an imperceptible difference. But a determined hacker with a week can try your script a billion times, and get to statistical certainty that the email doesn't exist.

Not sure WHY they'd ever want to go through so much effort for that info, but it's always going to be possible.


No, trying to even out times by matching is not what was suggested. Instead you make a sleep mechanism that ends uniformly. In this example you would always sleep until the ten second mark, valid or invalid.

Trying to make two code paths take the same time without waiting on an external timer is deep voodoo and should be left to experts.


I think the idea is to rig both cases to return after the set timeout to make them indistinguishable (so both return after 5.0 in your example). Of course the timeout has to be set above the worst-case timing for a valid login.


Why would they differ? Both take exactly 5 seconds, since you pad the difference anyway.


> naive implementations of the best practice will also leak that via a side channel

Shouldn't the best practice of sending emails include using at least some sort of message queue? You're probably right that most web apps will use a synchronous call for this, which is easy to run a timing attack against, but it should be easy to fix this without spending $20k on a security consultant.


I don't think he means somebody is going to do the timing attack through emails. Much easier to do on the login screen or registration. You can measure the time of response much better than with email.


I thought he meant the response time of the site when an email is sent. If you synchronously send the mail within the request-response cycle without precautionary measures, there will be a measurable difference compared to when no mail is actually sent.


Omnibus response to the omnibus response: Certainly a determined attacker can find the timing difference in under a week over the internet. However, now he needs a week of preparation and leaves a trace I can at least attempt to detect. Or I can insert a random microsecond delay, throwing his measurements off. He could then make more measurements to factor out my random delay. Security is a layer cake game. You penetrate my first layer, I add another one, until either you or I get bored. Point is: not leaving an obvious indicator in place is the first layer. It defeats anyone not willing to rent a Rackspace server or spend a week figuring out how my app works and behaves in each case. That's probably good enough for most cases.

(btw: 30x longer is not, by itself, of interest for timing attacks. If it's 1ns vs 30ns then it's still 30 times longer, but that doesn't buy the attacker anything when he can only measure with ms precision.)


Random delay will not help, it will get averaged out.

Which is why you are not really adding any security with this stuff by just plugging the obvious holes.


Maybe I'm missing something here, but wouldn't it be relatively easy to defeat a timing attack by sending an HTTP response with the standard "Email has been sent" message BEFORE the db lookup, irrespective of the data received?


Yes. Now if you have a problem and the email isn't sent, you have still told the user it has been.

Also, you have to fix this problem every place else that you touch the user database, such as your signup process. There are normally many places you would be affected by the user database, because often the service you provide is slightly different for each user.


I fail to see how point 1) applies. The advantage of the naive implementation of "look it up in the db and send a message in any case" is that the basic amount of work done is pretty much the same in any case. There might be some cases where the database query returns faster if the email is (not) in the database, depending on how the app is structured (index, query schema) but I'd bet that mounting a side-channel attack on that difference either requires a deep knowledge of the app or a long experimental phase. Adding some metrics that trigger an alarm if suspiciously many requests to the lost password page happen should be trivial.


I would think the bigger time difference would come from sending an e-mail message (or not), not the database look-up itself.

But I'm still not convinced that's easily measurable in practice, considering most sendmail implementations dump messages into a queue rather than trying to deliver them immediately.


But that's the whole point: you send an email in any case to the given email address. If the address is registered, you send a "this is your password reset link" email, and if not you send an "ok, somebody requested a password reset for this email address and we don't know it" email. This sort of reduces pain point 3) as well, since you can offer some explanation to the user [1], but I guess there's always someone who doesn't get it, so it's not a complete solution to that problem.

[1] "If you're registered here, you might be registered with a different email address. If you're not, ignore this one. etc."
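A minimal sketch of that flow (the in-memory user set and outbox are toy stand-ins I made up; a real app would look up its user table and hand off to a mailer):

```python
USERS = {"alice@example.com"}   # toy stand-in for the user table
OUTBOX = []                     # toy stand-in for a real mailer

def handle_reset_request(email):
    # Send *some* email either way; the HTTP response is identical,
    # so the page itself no longer reveals which addresses exist.
    if email in USERS:
        OUTBOX.append((email, "Here is your password reset link: ..."))
    else:
        OUTBOX.append((email, "A reset was requested for this address, "
                              "but no account uses it. If you have an "
                              "account here, try another email address."))
    return "If that address is registered, an email is on its way."
```

The point is that both branches do comparable work and return the same page, so only the recipient of the email learns which case applied.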


The inclusion of the large graphic taken from a pornographic web site is not necessary (he could just write a single paragraph describing the potential to discover subscribers of porn sites) and shows poor judgement by the writer.


The style of the article was showing screenshot examples of relevant sites. That may or may not be the most productive use of screen space, however the porn screenshot fits in with the other ones.

Just to be clear, I think the inclusion of screen shots is an excellent use of screen space.


Screenshots are useful. But it would have been trivial to keep a technical article SFW.

As-is, this is a solid article that I can't send to my team because of that one poor choice.

Here's why:

1) I don't know the personal histories of everyone I work with, and it's none of my business (they aren't expected to tell me).

2) Some people would be seriously distressed to have their boss or co-worker email them anything with remotely sexual content.

3) I can't reliably identify who these people are, and so I simply shouldn't send anything with sexual content to co-workers/employees.

Doesn't this same logic apply to just about everyone?


Screenshots are great; however, that particular screenshot could have been heavily redacted for the benefit of those reading in a professional environment.


I don't usually do this, but I am genuinely curious about this happening. At about 10:31 BST this morning the following item was submitted (not by me):

http://news.ycombinator.com/item?id=4280213

It quickly gained an additional 5 upvotes, and I was looking forward to the HN community's discussion, because I am about to revamp a certain site, and was looking to see what people thought about the specifics listed on that page.

It got to number 7 on the front page at about 11:10. Then it disappeared. It's now down between 900 and 950 or so. Yes, I went and checked.

I was disappointed - I always look forward to in-depth and/or illuminating discussions on HN - but more than that, I'm really curious to know why this item has been flagged so heavily, or, alternatively, buried by the moderators/admins.

So please, here it is again. You're (obviously) free to flag it again if you feel the need, but please, let me know why.

Thanks.

Added in edit: If you upvote this submission then you should probably go upvote grn's original: http://news.ycombinator.com/item?id=4280213 .


I didn't flag it, but I remember reading about it some time ago. Probably here http://news.ycombinator.com/item?id=4005239


P.S. This article is a) very good even if I disagree with the advice and b) NSFW due to an unfortunately chosen example using a large image sourced from a pornographic site.


His flowchart is missing an edge, between "user changes their password for any other reason other than password reset" and "password resets corresponding to that user are deleted from storage".

It's a minor detail, but an easy one to get right.


Something like this?

http://www.solipsys.co.uk/images/PasswordResetFlow.png

I agree that there's more going on, but this is beyond the original remit. The flowchart can be made seriously more complex by adding in the flow for the simple changing of a password, which needs similar attention to details.

Would you care to assist in producing such a chart? I can provide the DOT source for this image if you like.


Can the porn picture be removed so people who want to keep their jobs can read it? Really, most of us keep up with HN from work and would violate workplace policy by reading this article due to the choice of picture.


Actually, I'm a bit curious (even though it's off topic), and since I'm from Europe I'd love to use this to get a peek into American workplace ethics:

Why is this NSFW? The picture does not show nude girls; actually, the level of nudity is about what I can see in a public pool or on the street on a hot day. The picture certainly is suggestive and hints at something more. The article does not link to a porn site, load images from a porn site, or even discuss a porn site. Is the hint that porn sites (or this specific porn site) exist sufficient for the NSFW marker?

I'm asking since this is certainly not considered NSFW for anyone I know - I can see "worse" in respectable TV documentaries which air in the regular evening programme. And I guess there's little chance of minors being around here either.


I'm also from Europe, and find it shows a poor sense of judgement, mostly because the image is not necessary to prove the points being made.

The other problem I have is, as you say, the image is suggestive. Imagine someone at your office glancing at your monitor whilst walking past. Would they bat an eyelid? I can't think of many workplaces outside of the porn industry where your professionalism wouldn't be called into question, however briefly.


Actually, yes. I'd be absolutely unconcerned to have the picture on my screen since it's obviously a screen cap in some article. In some workplaces I've worked, I'd be more concerned about coworkers/clients seeing me reading a blog on paid time instead of coding than about that screencap.


Actually, yes. I'd be absolutely unconcerned to have the picture on my screen since it's obviously a screen cap in some article.

Wow, that's interesting and surprising; where are you based, and what types of companies have you been working for?

I definitely haven't worked in lots of cultures, but I have worked in offices in the UK, Poland, Belgium and Germany, for major companies and for small ones, and in each one of them Questions Would Be Asked were I to have that image on my screen, and in most cases - but admittedly not all - my professionalism would be called into question.

In some workplaces I've worked, I'd be more concerned about coworkers/clients seeing me reading a blog on paid time instead of coding than about that screencap.

Hehe, well that's a very different argument, and one that you and I agree on :)


I'm currently Berlin-based, but have worked all around Germany: in Munich in Bavaria, Stuttgart, Karlsruhe and Cologne. I worked for all kinds of companies that revolve around web stuff, was the tech lead of a top-25-ranked web development shop, and had customers ranging from tiny shops through government jobs to enterprise-class clients. However, I didn't do short-term jobs with changing coworkers, so I guess it's partly about how well you know your coworkers and how much trust you have in them. It might draw a tongue-in-cheek remark, but not a serious questioning of my professional abilities.

I'm more curious about the fact that people could obviously be afraid that this might be a firing offense - and I can't think of a single place where this could be possibly used to construct a case.


Just for another data point, I'm in Vancouver, Canada and have never worked somewhere where I would be concerned having that article on my screen.

That includes working at a very socially conservative workplace run by an evangelical christian and consulting for a government agency in a fairly corporate environment.

Some people might do a double take because of the subject matter of that image I suppose but nothing I would need to worry about, it's embedded in a technical article and it's PG-13 material.

I would worry far more about reading blogs than the PG-13 screencap that only really references the existence of porn sites.

I find it interesting and surprising that there are this many people with such a different experience. Are you working in very corporate environments?


>mostly because the image is not necessary to prove the points being made

That's not totally true. The screenshot fits perfectly well, because it not only shows the effect of a now known-to-be-registered email, it also shows the context in which this might be an issue.


Why is this NSFW?

Did you not see the word Porn in large, bold white letters against a black background with three young women just underneath it showing ample cleavage in suggestive poses?

There's a time and place for that, work is not one of them. And reading blogs (that are work related) is fine and even encouraged, but visiting sites with graphics like that is not.


I'm actually totally unimpressed by that picture. To me the word and even the logo are known. You probably can't get around on the internet without ever seeing it. Yes, I know what porn is and yes, I know it exists on the internet. And yes, we do agree that work is absolutely not a place to actually watch porn if you have a shred of decency left. However, seeing this screenshot on a screen of a coworker would not trigger the "oh, he's watching porn" reflex for me [1]. That seems to be different for a lot of people which kinda makes me curious of "why is that so and what's the trigger." So I gather from your response that it's mainly the logo since it's well recognizable (a good brand logo :)

[1] For me the "it's interesting and educating" trumps "there's naked women in suggestive poses" any time.


It is NSFW because it is a screen capture of a porn web site. Not because of nudity.

Some US employees avoid anything that could be used against them.


So the screencap of a (rather boring part of a) porn site is actually enough? I remember a blog a while back that discussed how porn sites push account selling to the max with full-page-unredacted screengrabs. It was an interesting example, but I can't imagine that anybody would be punished for that here. Some people might look twice at the screen to confirm that it's not actual porn, but otherwise I wouldn't expect any consequences. So where's the border drawn here? Would the screencap in this case be fine if the girls were painted over? Or would that still be NSFW? (I'm aware that that's a fine line that can't be answered with a definite "here's the line and don't cross it")


I don't think it is a question of a forbidden or crossed line. It is a question of personal brand (reputation) that may be tarnished.

In some situations, when they have to pick employee A or B for a bonus, a promotion, or to be fired, and there is no clear difference between the two, the choice will rely on subjective details, like being once caught with a porn page on one's screen. Some employees could jump on such opportunities to smear their competitors.

Thus, since it was not required to use this type of screen capture to illustrate the problem, and it is not the usual type of content we read here, it makes sense that it was initially flagged and frowned upon. Though I don't think it's fair to say the author is lacking judgment. To me it is the result of a culture difference. Neither good nor bad.


Ok, thank you for your insight. No offense intended, but I still think it's narrow-minded if people (or rather their peers) think along those lines, but alas, that's probably a culture thing. I think that a (work) culture that punishes reading this article loses out on some valuable insights.

As a completely unrelated side note: I think that picture was an excellent choice. It's a graphic example that pretty much everyone can relate to. I'm not embarrassed by having that picture on my screen but I'd be embarrassed if my email address could be verified to belong to a registered porn site account [1]. It gets the authors point across so much better than showing an MSDN account.

[1] It doesn't. Don't bother looking ;)


I'm from Canada and this wouldn't be considered NSFW even at an old workplace where my boss was an evangelical christian and very conservative.

But I am close enough to the US to see why people are nervous. Even acknowledging the existence of porn seems to be enough to make some Americans nervous.

I'm surprised at the number of people here making that many complaints about a PG-13 image, I honestly came here thinking I would see more people making fun of the NSFW warning for being completely reactionary.


Have you considered contacting the blog owner, rather than posting here?


I posted a comment on the blog, asking politely for a SFW version so that I can share it with co-workers.

Though: when a post from a personal blog makes it to the front page on HN, the author usually finds out fairly quickly, and will read the feedback here.


Even if the password-reset page doesn't leak information about which e-mail addresses have accounts associated with them, the register-new-account page usually does. Of course websites could work around this by sending an e-mail message to the address to confirm ownership first, but this is rarely done in practice.

An easier fix is for users to randomize the email addresses they use. For example, if you have a Google Mail account, you can receive messages at username+randomdata@gmail.com; if you use different random data for each site, nobody will be able to probe for your account as described in the article, even on poorly designed websites.

The advantage of this approach is that it's something that privacy-minded users can do themselves, without having to rely on website developers to get things right.
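The per-site random tag is easy to generate; here's a sketch in Python (the function name and tag length are arbitrary choices of mine):

```python
import secrets

def site_specific_address(username, site, domain="gmail.com"):
    # Gmail delivers username+anything@gmail.com to username@gmail.com,
    # so each site can be given a unique, unguessable address.
    tag = secrets.token_hex(4)   # 8 random hex characters per site
    return f"{username}+{site}-{tag}@{domain}"
```

Record the generated address in your password manager alongside the password for that site, or probing (and leaked address lists) become a non-issue only until you forget which tag you used.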


As requested, marked NSFW. It will be interesting to see if the mods change it back, or to something else.


is it NSFW because of graphic or nude images, or because of language?


It is NSFW due to the fact that it contains an image of the password reset screen at a porn site.

The reason it contains this is to illustrate that if your service says "We have sent an email" vs. "User not found!" then it can leak the information that an account exists for a given email, which may perhaps be embarrassing given the content of the service -- especially if it tells you your User ID.


screenshot of porn site with nudes.


Er, except they're not nude. Suggestive, yes, but not nude.


I was in my garden, the sun was on the screen. Looked more-or-less nude from where I sat.


Really good article, well written. Easy to read.

>> Everything i believe is important

In the spirit of completeness, you could expand the details of this paragraph:

>> What we want to do is create a unique token which can be sent in an email

There should also be a Time to Live on the token, and it should only be usable once. On the landing page for the link, the user needs to enter* their email address to avoid someone arriving at the url from a means other than the original email.

I feel like I should give an example here. Let's say the attacker figured out your random ID generation method; say, for simplicity, it's a hash of a timestamp. If they generate a few thousand links for times around 6pm, they may get lucky.

* They don't actually need to enter it, UX guys would be having minor heart attacks at that suggestion, but they could choose their email address from a small table of plausible-looking but made-up addresses, or something to that effect; e.g. Bank of America uses a photograph the user set up beforehand.


Rather than force people to do an extra step, why not just use a secure method of generating reset IDs?
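E.g., in Python, something like this (the one-hour TTL is just an example value):

```python
import secrets
import time

TOKEN_TTL = 60 * 60   # reset tokens expire after one hour

def new_reset_token():
    # token_urlsafe pulls from the OS CSPRNG, so unlike a hashed
    # timestamp there is no pattern for an attacker to enumerate.
    return secrets.token_urlsafe(32), time.time() + TOKEN_TTL

def token_still_valid(expires_at):
    return time.time() < expires_at
```

Store the token (hashed, ideally) with its expiry, and the "guess a few thousand timestamps" attack goes away entirely.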


I'm late to this party, but one thing I take minor issue with in this article is the "log everything" section. I agree that logging is good, and don't want to give a wrong impression on that, but logging everything can have some problems. Users regularly mistype in the username/password fields, and you'll get exploitable logs of: odd username from an IP, followed shortly by the real username from the same IP with a successful log-in. If you don't secure your logs well, you now have a potential security breach, and those log systems can be vulnerable to all the stuff you went through that password-securing rigmarole to avoid (bad software, SQL injection, etc., etc.). I bring it up because a lot of the time people don't think to secure the logging system as much as the other stuff, partly because it is regularly behind the DMZ, and partly because it is just off the standard security map for user security issues.

Just $.02 for this conversation :)


> Bottom line: treat secret answers as secret!

Bottom line: never give real answers to secret questions! I can change my password, I can change my credit card if it's compromised, I can even change my social security number if need be, but I can't change the name of my first pet or where I first met my spouse. Never give real answers to secret questions.
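If you want to automate that advice, treat the "secret answer" as just another throwaway secret (the helper name here is made up):

```python
import secrets

def fake_secret_answer():
    # Treat the "secret answer" as a second password: random, unique
    # per site, and stored in a password manager rather than memorized.
    return secrets.token_urlsafe(16)
```

Paste the result into the "mother's maiden name" box and save it with the account's password; now a data broker who knows your real history learns nothing.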


The one thing I've seen mentioned before that wasn't in here is to delete the reset token if the user successfully logs in.


As expected, this submission has been flagged.

As expected, no one has said why it's been flagged.

Unexpected bonus, the original submission has gained enough points to be back on the front page despite having been flagged off it. So that's a win.

Added in edit: Now it's been flagged off again. There are times I really don't like the dynamic on HN.


I've seen this happening on a couple of occasions with stories that are uncomfortable for HN. E.g. see http://news.ycombinator.com/item?id=4160121

It might be interesting for one of the sites that scrape HN to reverse engineer the decay factor for different type of stories and find out which stories are "disappeared" quicker.


Colin, it's probably been flagged for the image in question. (I haven't seen it, nor did I flag it.)


Don't store reset tokens in the database; it's a PITA and doesn't scale. Instead, HMAC-sign the reset URL, using Rails' MessageVerifier class for example.

The reset URL should also expire in about a day, so include and sign a timestamp as well.

And the way we know this is good advice is that most large sites do it this way already.
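For those not on Rails, a rough stdlib equivalent in Python might look like this (the secret, token layout, and "|" separator are illustrative assumptions of mine, not MessageVerifier's actual wire format):

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret key"   # keep this out of the database
MAX_AGE = 24 * 60 * 60               # links expire after about a day

def sign_reset(email, now=None):
    # Token is "email|timestamp|mac"; the MAC covers email AND timestamp,
    # so neither can be altered without invalidating the signature.
    ts = str(int(now if now is not None else time.time()))
    payload = f"{email}|{ts}"
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{mac}"

def verify_reset(token, now=None):
    email, ts, mac = token.rsplit("|", 2)
    expected = hmac.new(SECRET, f"{email}|{ts}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return None                  # forged or tampered
    if (now if now is not None else time.time()) - int(ts) > MAX_AGE:
        return None                  # expired
    return email
```

No per-token database row needed: the signature proves the server minted the link, and the signed timestamp enforces the expiry.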


You'll also want to implement some defense against short-term replay attacks (i.e. before the token expires).

You could include a sequence number in the token, but this, of course, involves a database write, which is what you were trying to avoid in the first place.

A better approach would be to store in the database the time that the user's password was last changed, and refuse to honor any reset tokens that are timestamped prior to that time.
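The check itself is trivial (field names assumed, timestamps as epoch seconds):

```python
def token_usable(token_issued_at, password_changed_at):
    # Refuse any token minted before the latest password change. This
    # also makes a successfully used reset link one-shot: using it
    # changes the password, which invalidates the token itself.
    return token_issued_at > password_changed_at
```

And `last_changed_password_at` is a single column you'd likely want anyway, versus a row per outstanding token.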


Better yet, mix a random data field from the user record into the reset link (signed with the HMAC), and regenerate the random value each time a reset is done. This way no reset link can be used twice, and no DB writes happen when a reset link is created.


That's a good point. You want the signed url to be one-use only. last_changed_password_at is a good solution, and a useful field to keep anyway.


How about this:

    import hashlib

    if email in database or hashlib.md5((secret_salt + email).encode()).hexdigest().startswith('4'):
        print("An email has been sent to this address with further instructions")
    else:
        print("Sorry, this address does not exist in our database")
Now one in 16 (pseudo-randomly selected) emails will appear to be in the database, whether or not it is really there.

The attacker will still see the "confirmation" when they enter the victim's email address, but they cannot know if it was really in the database or if it was one of those random false positives.


Timing attack vulnerability. A negative containment check is probably faster (at least statistically measurably so) than a positive one, so you'll see three peaks:

- contained

- false positive

- not contained

Probably with the first two being close together and the latter two being close together.



There's still a timing attack there, just compare the time it takes for the server to send the email, instead of the time it takes to return the message.


What? How does the attacker know when the server sends mail?

Nice summary, sandstrom. Not as useful as reading the whole article, but much higher value-per-effort. ;)


You can leak that data in a number of ways. Trivially, the server might timestamp the email. Less trivially, you could position yourself on the same network as the server and just time how long it takes to receive the email (I'd opt for this technique personally). You might be able to monitor IP IDs of the mailserver to see when it sends an email (technique: http://nmap.org/book/idlescan.html). The emails themselves might have some kind of predictable sequence/message ID in the headers so you can cause the server to send many many emails and watch for the sequence increment. You could create a custom DNS server for your email domain that causes the email server to hit your DNS and log when the lookup was made. Probably other techniques.

edit: Not to mention that this chart still leaks the fact that joe@example.com is or is not a member of the site in question.


This is welcome, useful and covers many of the salient points. However, without wanting to detract from the content, this is 2012. None of these issues are new. For people who aren't doing some or all of this, isn't it time to consider not reinventing this well-worn wheel? Libraries of reusable code; now there's a good idea! Now I know what mom meant when she talked about the OO silver bullet.

I'm off to invent distributed processing and BSD sockets.


You have libraries that do all this? Fantastic - please submit a reference so we can all benefit from it - thanks.


I have to disagree with this point

    Despite plenty of guidance to the contrary, the first point is really not where we want to be. The problem with doing this is that it means a persistent password – one you can go back with and use any time
If a developer can't work out how to implement a one-use password that forces a password change after login, they shouldn't be writing code.


That puts the onus on the user to immediately log in after receiving the temporary password. Until they do (which could be months later), they have their password to the site stored in email. The DoS issue that was mentioned in the next paragraph is also a pretty good reason to use a reset link instead.


So you're suggesting that a user who goes to a site to login, and can't (forgotten password) will then completely lose any and all reason to login to the site, AFTER doing what's required for a new random password to be generated and sent to them?

Sounds pretty far fetched to me.


While it's not common practice, I strongly believe that for particularly important services, there should be a time delay built into the reset process, so that if a user's email account is compromised in such a way that both the attacker and the victim receive the emails, the victim gets a chance to stop the reset process before any damage is done.


Please mark this article as NSFW!


Now we know what kind of sites this Microsoft MVP likes to peruse. :-)


I just came to say I am really looking forward to reading this... when I get home. Why did you post porn in this?


Just wanted to ask: Are you one of these people who can't distinguish between the author of a post, and someone who posts a link to it?


It was more of a generalized comment on the post than an actual question. I promise to be more careful next time, ColinWright.


So I read it at work because it was marked NSFW


National Science Fair at Wisconsin?



