Hack of Cupid Media dating website exposes 42 million plaintext passwords (arstechnica.com)
181 points by fmavituna on Nov 20, 2013 | hide | past | favorite | 166 comments



Before the bcrypt/scrypt advocacy and general shaming starts... I'll just make the same comment I always do when this happens: the answer is not more server-side hashing.

Trusting remote services with plaintext passwords is broken to begin with. We shouldn't give them the chance to mess this up. We need client side hashing and key-stretching that only something like SRP can provide:

https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...

The sooner we stop pretending there are no better answers than sending the contents of a password field raw over the wire, over SSL or not, and the sooner the web browser vendors and W3C start fixing this, the better. TLS-SRP is a ray of hope, but we need lighter, easier to deploy solutions that work at the application level rather than below HTTP.

In what alternate reality are we living where the W3C is working on Javascript cryptography before improving basic, fundamental, built-in authentication?


This is flatly wrong. SRP stores serverside verifiers that are derived from plaintext passwords. Stealing an SRP verifier is equivalent to stealing a password hash.

The math for the brute force attack on SRP verifiers is slightly more elaborate than that of a salted hash (it involves a modexp), but is significantly cheaper than bcrypt. Using SRP is, from the perspective of a compromised server, worse than using bcrypt.

You may have become confused about SRP because Thomas Wu goes through some effort to explain how SRP is resistant to dictionary attacks. If so, you've mistaken which specific dictionary attacks he was talking about: previous challenge-response protocols were dictionary attackable off the wire, meaning that a passive attacker could grab the challenge-response sequence and crack that as if it was a password hash.


> This is flatly wrong. SRP stores serverside verifiers that are derived from plaintext passwords. Stealing an SRP verifier is equivalent to stealing a password hash.

I'm not wrong, you're just missing the point. The derivation process happens on the client's machine, so they get to 'choose', and verify, how it's performed, and can therefore ensure the KDF is extremely strong. The server doesn't need to know how it's strengthened and cannot, in fact, even discern whether the verifier it sees is derived from a random value or through some purely deterministic process.

The objective here is to stop trusting the server with our passwords, because servers are prone to being lax, not to prevent every conceivable technical attack from NSA supercomputers and algorithms known only to them.

> Using SRP is, from the perspective of a compromised server, worse than using bcrypt.

First of all, you can use Bcrypt inside SRP, so the password cracking route is a wash. It's the same, by definition. Secondly, I never said SRP6a as currently specified is what we should use. I said a protocol like SRP and even put emphasis on like. Elliptic curves are the way to go for compactness and speed these days.

An attack on the verifier is going to be vastly less feasible than attacking the KDF. The current best algorithms for breaking a good asymmetric verifier would be many orders of magnitude more complex than just bruteforcing any real world password input, even if it's heavily strengthened with Bcrypt or Scrypt.

100ms of PBKDF2/SHA-256 on a modern server CPU gives you maybe ~20 bits of strengthening on top of a weak-ass ~30-40 bit password. Scrypt will give you maybe ~16 more bits, with a significantly higher hardware (memory) cost. In total, that's still a long way off the ~120 bits achievable trying to solve the DLP on an elliptic curve. In the end, cracking that verifier is more expensive.
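Rough arithmetic behind those figures, as a sketch. The ~10M hashes/sec rate is an assumed attacker speed for illustration, not a number from this thread:

```python
import math

def stretch_bits(attacker_hashes_per_sec: float, kdf_seconds: float) -> float:
    """Extra bits of effective entropy a KDF adds: the log2 of how many
    raw hash guesses fit into one KDF evaluation at the attacker's speed."""
    return math.log2(attacker_hashes_per_sec * kdf_seconds)

# Assumed: ~10M SHA-256/sec per core for an attacker on comparable
# hardware, and 100 ms of PBKDF2 on the defender's side.
print(round(stretch_bits(10_000_000, 0.1)))  # 20
```

With those assumptions, a 35-bit password plus 20 bits of stretching still totals ~55 bits, well short of the ~120-bit security level of a decent elliptic-curve DLP.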


If the client can choose any mechanism it wants to create a verifier, the system you propose is (a) not SRP, as you concede, and (b) really just a clunkier way of doing 2FA†.

I like 2FA. Most people like 2FA. If you had started out by saying "passwords suck, the answer isn't bcrypt, it's 2FA", I'd have agreed with you. Instead, you said "SRP", which simply doesn't solve the problem you're purporting to solve.

To see why: imagine your SRP-derived client-specified verifier generation protocol was adopted: you'd have three mainstream implementations (NSS, Safari, and Internet Explorer) based on passwords all of which would have the same security as simply doing scrypt on the serverside; the additional "win" of your proposal would be the oddball clients that did something other than scrypt, where SRP would give them a challenge-response framework they could layer their system on. Awesome. That's what 2FA systems do. And they don't rely on passwords at all.

If you're going to reimagine all of HTTP authentication, why on earth would you drag passwords along?


For reference, iirc last time I ran the numbers, SRP brute forcing has a work factor of 1000 compared to a single round of md5. Aka, not enough.


Oh, neat. I'm happy to have a number to file away. Thanks!


Is there a variant of SRP that would provide the same level of protection against a server hack as bcrypt?

Edit: it looks like it should be possible to replace H() in SRP with bcrypt or scrypt. Wouldn't that mitigate brute forcing from server hashes?


Yes, in the same way that simply using scrypt would.

But, of course, the point isn't that SRP is a weak construction. The point is "client side stretching", whatever that was intended to mean, has nothing to do with the problem of losing a database of password "verifiers", be they plaintext, salted hashes, password hashes, or SRP v-values.


"The answer" doesn't exist and it never will. Everyone has to do their part. Services that store passwords in plaintext should definitely be publicly shamed, every single time.


The problem is that "publicly shamed" means "shamed amongst security geeks". Most websites' main demographic is not security geeks.


I dunno, things like this make their way to more mainstream media too, making "the general public" more conscious of security, and making them wonder whether their passwords will be secure with the parties they leave them with.


The spin in mainstream media always puts the blame squarely on the hackers, and in my anecdotal experience typical users have no interest in the argument that the companies that were hacked are to blame.


Because most people have not even thought of the possibility that passwords can be stored in anything but plaintext, and take it for granted that all websites do. Thus the websites can't possibly be blamed, and the hackers must be the bad guys.


Not necessarily. If a relatively mainstream news site reported the event and quoted a security professional saying: "Website X didn't even try to keep your love life and dating profile a secret. Every single user's password is now publicly available. If you use the same password on multiple sites, be sure to change it quickly!"

I think even the most non-tech people would react to such a news story.


The Adobe breach was fairly widely reported, but how many designers have sworn off Adobe products? The same thing could be said about LinkedIn, or Microsoft products in the 90s.


It means being shamed among tech professionals and that's exactly the community in which those responsible for the disaster have to spend the rest of their professional lives. Peer pressure is very effective.


We don't even know specifically who is responsible for this particular thing; as I've mentioned in other comments, it's likely this came from some non-tech person anyway.


True, but I think this is one of those instances where you kind of have to take what you can get in terms of negative PR for Cupid Media and other companies that store passwords stupidly.


A browser plugin that pre-hashes anything in a hidden text field (with a user secret key and the password origination domain) could mimic this as a layer on top of existing technology.

It would have the obvious portability issues, and I'm sure other implementation issues.
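A hypothetical sketch of what such a plugin might compute per field: an HMAC of the origination domain keyed by the user secret. The function name and output length here are made up for illustration:

```python
import base64
import hashlib
import hmac

def site_password(master_secret: bytes, domain: str, length: int = 16) -> str:
    """Derive a per-site password from a user secret and the page's
    origin domain, so a leak at one site reveals nothing about others."""
    digest = hmac.new(master_secret, domain.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest)[:length].decode()

print(site_password(b"correct horse battery staple", "example.com"))
```

The server only ever sees the derived value, so even a plaintext-storing site leaks nothing reusable elsewhere.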


Your idea sounds like an automatic version of SuperGenPass: http://supergenpass.com

I've been running it for years, which feels quite nice when sites start leaking passwords left and right.


How have I functioned as an adult for so long without knowing that this plugin exists?

Thank you for sharing, I am going to download this immediately. My current algorithm for passwords is human based so it is an unfortunately simple algo. It provides some additional security over raw reuse but not enough for me to be comfortable.


This is (hopefully) a problem that platform owners can solve for us, and if they do I'll love them for it, but they very likely won't take it on.

Apple/Google/Microsoft can bake in cross-device syncing into the browser/OS and apply this sort of thing on form auto-fill.

It'll break down when:

a) Hashes don't meet site-specific password requirements (too long, missing special characters, etc.)

b) Sites store passwords in plaintext and you forgot your password and ask for them to email it to you (user: What the hell is this garbage of text, I didn't type this in!)

Unfortunately the above problems usually occur on sites where you need this most, and are very hard to solve, and so this problem is therefore unlikely to be solved "neatly" by anybody big.


If I understand you correctly, that exists already and is called pwdhash: https://www.pwdhash.com/.

Available online and as browser plugin for the major browsers.

The already mentioned supergenpass seems nice as well.


well, you could even do it manually

    echo 'this is my pass' | shasum
    3b0c5dc943cd30dcd2ca1ff760145f219d3ba3f3

And use that as the password. Of course, this is a very basic example; you should make it safer by adding a salt and running more iterations.

May be easier (and safer) than installing "One password" kind of software.
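The hardening step mentioned above (a salt plus more iterations) might look like the following sketch, using Python's stdlib PBKDF2 in place of a single shasum pass. The iteration count and output length are illustrative:

```python
import base64
import hashlib

def derive(master: str, site: str, iterations: int = 200_000) -> str:
    """Salted, iterated derivation: the site name doubles as the salt,
    so every site gets a distinct, slow-to-brute-force password."""
    dk = hashlib.pbkdf2_hmac("sha256", master.encode(), site.encode(), iterations)
    return base64.b64encode(dk).decode()[:20]

print(derive("this is my pass", "news.ycombinator.com"))
```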


That doesn't quite work in the unfortunately common case of user password re-use. If your hash is constant and stored in plain text somewhere and that place gets hacked -- then your password at every site is compromised.


If you use an actual bash script instead of a one-liner, you can do things like 'silently' read in the password with `read -s` per the standard *nix convention, and even read it twice to avoid mistyping your master key and temporarily locking yourself out of an account you just created.


Here's a tentative crude solution

http://pastebin.com/MuV8vtcR

you can pass a "salt" as the first argument as well (it will merely be concatenated with the password)


You should quote your vars in the if check, otherwise it doesn't handle spaces very well. (And really the master key should be a master passphrase.)

A few years ago I whipped this up: http://www.thejach.com/public/pw2 (I don't recommend other people use it but it works for me.) I type in something along the lines of "my secret passphrase ycombinator.com". It doesn't do hash iteration and uses the hash as a seed to Python's RNG which I use to get random bytes and then have a password character-space of any printable character -- it also outputs an alpha-numeric version along with different string sizes to handle those dumb sites that put restrictions on your password.


Yes, you need to add a salt, it can be the URL (and something else added)


    echo "this is my pass for $URL" | shasum


Awesome idea, should randomly salt them for users who insist on using the same password for different sites.


"i created my account on machine x, now i can't log in from machine y".


Might be an extreme comparison (beg your pardon if I'm being naive), but it's a similar hindrance to using SSH keys. Carry it around on a USB key, or deal with the security outweighing convenience.


I like that. Could even be coupled with a portable browser.


What about determining the salt from the domain for the given login? (perhaps slightly more refined than that to mitigate cases like *.google.com)


I think that's right. It would also need some ability to add mappings like "when I log in from gmail.com, hash like it is google.com"
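A crude sketch of such a mapping. The alias table is illustrative, and a real implementation would consult the Public Suffix List rather than the naive fallback here:

```python
# Illustrative alias table: login domains that should hash identically.
ALIASES = {
    "gmail.com": "google.com",
    "accounts.google.com": "google.com",
}

def canonical_domain(host: str) -> str:
    """Normalize a login host so aliased domains derive the same password."""
    host = host.lower()
    if host in ALIASES:
        return ALIASES[host]
    # Crude registrable-domain fallback; a real plugin would use the
    # Public Suffix List to handle cases like .co.uk correctly.
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) > 2 else host

print(canonical_domain("mail.google.com"))  # google.com
```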


That being the obvious portability issue.


I propose a scarlet letter "P" (for plaintext).


How does SRP help? With SRP, you still have a server-side brute-forceable table of password hashes. The server stores v = g^x, x = H(s, p), for each user, where s is the salt and p is the password. g is known to the server (and hence the attacker), so if the attacker downloads the database, he can go through the dictionary and brute force. If you're rolling out SRP, you ought to use scrypt for H.
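The brute force described above can be sketched in a few lines. The modulus here is a toy value chosen for illustration; real SRP uses a large safe prime (e.g. the RFC 5054 groups) and the protocol's standard hash layout:

```python
import hashlib

# Toy modulus for illustration only; real SRP uses a large safe prime.
N = 2**127 - 1
g = 2

def verifier(salt: bytes, password: bytes) -> int:
    """v = g^x mod N, with x = H(s, p), as in the parent comment."""
    x = int.from_bytes(hashlib.sha256(salt + password).digest(), "big")
    return pow(g, x, N)

# An attacker who stole (salt, v) from the database replays the same
# computation over a dictionary: one hash plus one modexp per guess.
salt, v = b"s0", verifier(b"s0", b"hunter2")
cracked = next(p for p in [b"letmein", b"password", b"hunter2"]
               if verifier(salt, p) == v)
print(cracked)  # b'hunter2'
```

Swapping the SHA-256 in `verifier` for scrypt is exactly the mitigation suggested above: each dictionary guess then costs a full scrypt evaluation.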

Edit: and BTW, Cupid.com has nothing to do with OkCupid. They were the cause of much confusion throughout our history. We'd tell people to sign up for OkCupid, and they'd go to Cupid.com. Eventually, Cupid.com had to say, in their radio ads, "Not OKCupid.com, go to Cupid.com!"


> The server stores v = g^x, x = H(s, p)

Where H() is chosen by, and only ever performed by, the client. As you suggested, I'd recommend using Scrypt for H. The poor evolution of SRP is tangential to the concepts and the protocol.


Realistically we need HTTP digest authentication [0] to use a better hash function than MD5, and we need it to be deployed by websites.

[0] http://en.wikipedia.org/wiki/Digest_access_authentication


The trouble with digest auth is it requires the password or a usable representation thereof to be stored in recoverable form. This means that if you can get a dump of a site's database, you can use the stored credential data to authenticate to that site, which isn't possible when you store hashed passwords.

This is better than transmitting passwords in the clear, but worse than transmitting them over an encrypted link.


Your point is correct but AFAIK your facts are incorrect.

Digest authentication can indeed store the password in hashed form. The problem is that the client doesn't need the plaintext password; this hashed form suffices.

See <http://en.wikipedia.org/wiki/Digest_access_authentication#Ad....
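To illustrate the point: under RFC 2617, the server may store HA1 = MD5(username:realm:password) instead of the password, but HA1 alone is enough to answer any challenge. The credentials and nonce below are made up:

```python
import hashlib

def md5hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(ha1: str, nonce: str, method: str, uri: str) -> str:
    """RFC 2617 response computation: note it starts from HA1, not the
    password, so a stolen HA1 authenticates just as well."""
    ha2 = md5hex(f"{method}:{uri}")
    return md5hex(f"{ha1}:{nonce}:{ha2}")

ha1 = md5hex("alice:example.com:s3cret")  # what the server may store
print(digest_response(ha1, "abc123", "GET", "/index.html"))
```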


That's what I meant by 'a usable representation thereof' -- sufficient data for a client to be able to use it to authenticate.


Ya that wasn't very clear. Thanks for the downvote.


As far as I can tell, it's not possible for someone to down-vote a response to a comment they've made -- certainly I'm not able to.


Additionally, you'd need new browser APIs (AFAIK) to detect failed logins, render the login UI, etc. Otherwise, it's going to look out of place and confusing. Product managers are unlikely to accept the default browser/OS handling.

And digest auth means a DB dump has enough information to authenticate as a client, which is considerably worse for security if a server gets compromised.


Realistically the problem with that isn't even MD5, it's the modal popup that hasn't changed in any browser since 1999.

It's so unbelievably entrenched that Chrome copied the behavior despite not even existing in 1999. It's all over mobile, too...


I wonder if the original reason for using a modal OS-level popup was to prove that it's not a fake prompt displayed by a malicious site.


Kinda. The important thing to note is that back in 1997, when the digest auth RFC was written, we recognised the value of hashing before going over the wire... and somehow we lost sight of that.


The browser implementers completely punted on making HTTP authentication usable: there's no friendly way for a user to log out, for the server to force a logout, for the user to change a password, etc. All of these were well known and discussed from day one, but they weren't seen as a priority by any vendor, and most web developers went with usability over security.


It's interesting that no one's pointed out that SRP is trivially breakable by having the client send zero as its "A" value.

Of course, you can guard against that particular case. But the point is that SRP has pitfalls, just like every other solution. And those pitfalls aren't well known; the Wikipedia pseudocode makes no mention of that exploit, for example.


Yes it does [1]. In the narrative example with Steve and Carol, it states 3 safeguards.

1. Carol will abort if she receives B == 0 (mod N) or u == 0.

2. Steve will abort if he receives A (mod N) == 0.

3. Carol must show her proof of K first. If Steve detects that Carol's proof is incorrect, he must abort without showing his own proof of K.

[1] Ok, the python code doesn't seem to, you're correct. However, that's less a demonstration of a protocol implementation, and more a demonstration of the protocol's math. The page does mention it though in the protocol section. It would be appropriate (and maybe later I'll do this) to break that out so it's more obvious.


"Up next on VH1's Where Are They Now?, Alice and Bob!"

    Alice: At the time I thought Eve was the only one I had to worry about. Little
    did I know, Carol would be the one who'd really replace me in the end.
Edit: Hmm, downvoted. I guess humor isn't welcome here?


Unfortunately, no.


A bit of transparency: Before the edit I received one downvote. After I received 5 upvotes, which is on the higher end of most of my comments. Either I reacted prematurely, or calling it out reversed the perception bias to the positive. I think it's some of both, as the joke really wasn't funny enough to warrant the score.


I agree, trusting remote services (and the communication infrastructure in between) is naïve.

Meanwhile, I use KeePass and generate a different key for each service.


I also use Keepass....but why the smeg do I need to do this? My browser should be deriving per-site passwords for me at a minimum


LastPass has browser plugins that do this.

Edit: Additionally LastPass supports login to your LastPass account via password + OTP combination such as Google Authenticator and Yubikeys.


LastPass is awesome.


Installing lastpass on every machine I happen to stroll by and want to use isn't the cleanest of solutions... and I shouldn't have to trust my keys to one closed, proprietary application. With a standard protocol at least I'd have a choice about my client.

There are also peripheral issues with password databases, like the fact that they make it obvious to anyone investigating your activities that you're using one.


You can of course log in to lastpass.com on any other machine you are sure doesn't have a keylogger (well, they provide a virtual keyboard if that's a concern).

But of course, that's a matter of trust, even when they say the data is encrypted client-side and they store only a blob of gibberish. However, I feel so relieved using LastPass, not having to worry about remembering yet another password.


...(well they provide virtual keyboard also if thats the case).

If you assume a key logger, you should also assume a mouse logger that captures a partial screenshot for every mouse click, as well as the possibility of capturing the contents of password fields (malware in the Windows 9x era would iterate through all OS widgets to find password fields and save their contents).


Right, but even if we can check all the boxes and trust LP, there's still the practical matter of getting everyone to use it. Generally it's going to be easier to convince users to use something that's built-in rather than 3rd party, right?

And one solution, whether it's backdoored or not, is still one target for bad actors to focus on (viruses, spoofing, etc).


Safari on 10.9 does this.


Deriving has a problem if you have a site with a login system across multiple domains. You can't salt with the domain if the domain is inconsistent.


Another alternative is https://OneShallPass.com. Free, auditable and open-source.


This is an is/ought fallacy. As professionals we don't get to propose the ideal universe as the solution to the problems of the actual universe. We have to take what we can get right now.


I don't think nly was proposing an ideal universe, just an improvement to the current one.

It's not beyond the realm of possibility that browsers might be updated to support some form of Secure Remote Protocol standard. And to encourage web sites to use it browsers could display a little 'padlock' icon similar to the HTTPS icon.


Right. But in the meantime?


And that's a false dichotomy. Are you saying we should dismiss dangerous flaws in the trust model, just because we have some workarounds for bad industry practice, and can 'make do'? because I'm not saying we shouldn't advocate strengthening existing databases right now. To me, the two concerns are completely separate.


The is/ought fallacy isn't about saying that the actual world and the ideal world are mutually exclusive; that you can't get to the ideal world from the actual world.

It's that dismissing actual options because they are not as good as hypothetical options is fallacious reasoning.

It's about taking the actual world seriously, on its own terms, without getting tied up in knots about the parameters of the ideal world.

Right now we don't live in a world with a better browser security model. Regardless of whether any one of us individually argues for such a world, the current world is where our professional duties must be discharged.


The false dichotomy I was referring to isn't the 'introduce bcrypt' vs 'introduce a new model' thing, those things clearly aren't mutually exclusive, it's that you inferred that me pushing for one means that I'm totally against the other in the present day. In any case I have no desire to join a thread about fallacious reasoning and semantics.

> Regardless of whether any one of us individually argues for such a world, the current world is where our professional duties must be discharged.

And there are plenty of people pushing for bcrypt/scrypt and such every time this happens. My duty in this case is to point out that this will never end as long as we allow the possibility of recurrence.

There's a real danger in going too far and making bcrypt/scrypt solutions doctrine. There are still plenty of people out there who continue to tout hashing with SHA-1 and salts of a certain construct, because at some point they understood why it was important, continue to have the security conscience, but are not up to date with the new realities.

This is why solutions at the architectural level, and not in the application or framework are so so important. Why oh why oh why, don't we have a column type in SQL databases specifically for storing passwords?


If that column type existed, it would be a database-resident implementation of scrypt.


What would a sample query look like with a password column?


What about Facebook login (or other oauth based systems)? Seems to me this solves most of the problem.


It solves the problem with offline password cracking, but it introduces the need to trust a third party.

Since the client side of SRP can be implemented in Javascript [1,2,3], there's not really much in the way of compelling reasons not to use it.

[1]: There's a client in the SRP bundle at http://srp.stanford.edu/download.html

[2]: https://github.com/symeapp/srp-client

[3]: https://github.com/clipperz/javascript-crypto-library


One of the key aspects of SRP is it provides mutual authentication. The server is considered untrusted until authenticated. If you run it in Javascript you lose the benefits it has over simple challenge-response client authentication.

TLS-SRP could replace ordinary CA-based server authentication, but I think there's a middle ground somewhere. Both are complementary. CAs should be authenticating servers to my browser, and SRP should be authenticating servers to me.


This isn't quite right: you still keep the benefit that the server in no sense has a copy of your password, and it is publicly visible what program implements the client-side authentication.

That said, I agree that there are strong benefits to having support for SRP in the browser.


Yes, I love trusting the keys to all my accounts to one corporation.

Less tongue-in-cheek, would you trust Facebook login for banking?


Well, I do actually think the fb engineers working on login and password security are more competent than the ones that my bank currently has (8 character MAX length....). The only edge that banks have is that they know to scrutinize suspicious logins more heavily, which is something that fb would certainly do if their login was ever used for bank logins.


No friggin way. I disabled that for a reason. I won't use Facebook login for anything else except Facebook. I don't want my Facebook identity being shared with a third party.

Isn't good for the tin foil hats.


Until the site that you use for OAuth gets hacked.


Security is all about pragmatism: is it more likely that Facebook / Google will be hacked and not deal with it quickly or that users will react to incessant password nagging by using weak ones? When someone's account is compromised, is it better if they have to change one password or hundreds?


I agree with you that a technical solution is needed. Experience shows that forcing developers to store passwords securely doesn't work. But how exactly would your solution work? If I'm at a computer I haven't used previously, how am I supposed to authenticate to a site without using a password?


The first step in the SRP protocol is to retrieve the cryptographic salt from the server, so you're essentially free to roam.

This step has concerns of its own though... for one thing the request itself reveals that the username/id is valid, and if you cache the userid / salt pair on the client machine it's vulnerable to snooping by other people with physical access. There are some fairly straightforward tweaks that can be made to the protocol to work around these issues though, since the salt isn't sensitive. In fact, I'm pretty confident storing the salt server side can be eliminated.


> This step has concerns of its own though... for one thing the request itself reveals that the username/id is valid

Does it have to? Can you not have an implementation that always responds with a fake salt if the username is not valid?


Yes, or you can just use a deterministic salt like H(service id || user id)
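A minimal sketch of that idea; the identifiers and the `||` encoding are illustrative:

```python
import hashlib

def deterministic_salt(service_id: str, user_id: str) -> bytes:
    """Salt the client can derive by itself, so the salt-lookup round
    trip (and the username probing it enables) can be skipped."""
    return hashlib.sha256(f"{service_id}||{user_id}".encode()).digest()

salt = deterministic_salt("cupidmedia.com", "alice")
```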


I think the idea is that the browser would implement SRP. That means you would still type a password into your browser, but it wouldn't be sent to the server. Instead, the browser and server would authenticate you by means of exchanging specially-designed hash codes.


What's the point of key stretching on the client side if your stretched key gets sent to the server and stored in plaintext?

It adds no security.

Server-side salting and hashing is the answer.


It's not a vanilla stretched key that gets sent to the server. It is, in effect, the equivalent of your public key (one that just happens to be kept secret by the server so it can use it to prove its identity, as the bearer, to you).


I think the use of that is really just isolating the cross-site hacking at that point. You are absolutely correct that if they have your password, it's game over on that particular site.


Had to look it up - unrelated to okcupid. For those interested, here's a list of their web properties: http://www.cupidmedia.com/services.cfm


Ironically, one of their sites is "OnlineDatingSafetyTips.com".


Pretty sure it's not ironic since dating safety != system security.


I don't think that is the kind of safety they are talking about


> .cfm

Looks like ColdFusion to me, which has had a host of vulnerabilities pop up in the past couple of years. Wouldn't be surprised if that was an easy entry point.


Two thoughts: 1. Thanks for the tip. I came to the comments mainly to see if I needed to change my OkCupid password.

2. There are a lot of specialty dating sites out there.


I can't get my head around how this still happens.

A few years back I took over development of an old PHP website, which had a horrible code base (no framework or library, not even MVC). This site had around 30,000 users, all with plaintext passwords.

It took me all of a couple of hours to get the site using bcrypt.

I'm not saying I'm some kind of super-rock-ninja-star developer, just that this is so easy to fix, even on monstrous legacy code bases.

There really is no excuse.
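For a plaintext database the migration really is mechanical, since every password is available to rehash in one offline pass. A sketch using stdlib scrypt in place of bcrypt (parameters and the in-memory "table" are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, key) for storage; scrypt parameters are illustrative."""
    salt = os.urandom(16)
    return salt, hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

def verify(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, key)

# One offline pass: plaintext rows go straight to salted scrypt hashes.
users = {"alice": "hunter2"}  # stand-in for the plaintext password column
hashed = {u: hash_password(p) for u, p in users.items()}
assert verify("hunter2", *hashed["alice"])
```

The only remaining work is swapping the login check over to `verify` and dropping the plaintext column.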


Because of this:

If a site is using plaintext passwords, often the owner asked for it. They wanted their users to be able to recover their passwords, and the dev didn't understand why this was a bad idea. They need to be educated, convinced, and then convinced that the time you're about to spend on fixing this is more important than the 101 other things going wrong because the original dev wasn't very good. And it isn't actually causing a single visible problem right now.

That also means that the password recovery function uses this code. Maybe there's an auto-user generator when you sign up for the newsletter. The email system obviously also uses that code. That also sometimes even means the password is stored twice, once in the framework's user mgmt tables and once in a user or person or the tblPRSN_UPDATED_OPTIMIZED_mc table. And just deleting that column might cause all sorts of other problems. Setting it null might cause bugs. Setting it empty string might cause a massive security hole as there's a login mechanism you haven't even found yet. (remember the dropbox breach?)

It's never a 3 hour job to fix unless you're very bad at estimating, are working on something extremely trivial or do a half-ass job, potentially introducing a far worse security hole.


It's quite possible the dev knew it was a bad idea and maybe even argued against it but was told to implement it this way anyway.

The problem a dating site probably has is people who sign up accounts and then stop using them. They want to send these users reminder emails in the hope that some of them re-engage.

Problem is that some of these users have probably forgotten which password they use for that website, and some % of those will not bother using the password reset mechanism.

So someone in marketing has the bright idea of sending emails that include the username/password combo, the dev explains why this is a terrible idea and then gets overruled.


I include an "Instant Login" link in each mail so the users don't need to remember their password. It contains a unique time-sensitive token to identify the user and instantly sign them in (much like a password reset). I learned this technique from OKCupid, so no idea why they still had plaintext passwords.


It turns out Cupid Media is unrelated to OkCupid.


Maybe simply emailing the password still has a higher conversion rate than the one-time link, for example if the user does not see or understand the link, or perhaps they want to log in later when their email is not open.

It can be difficult to argue for security in cases where even small % of short term revenue might be affected.


I hope you educated your users not to forward any of that mail.


After how much time will the token expire?


Yeah, I'm being a bit hard on the original dev!


Most of the time I think it's due to the owner and/or developer wanting to harvest passwords. I've seen it happen.


Sure, it's not hard to use bcrypt for passwords. It's also not hard to make your login forms use https, or to do weekly backups, or to use source control, or to keep Wordpress up to date. Yet many many websites/apps/companies do not get these right.

The barrier is not difficulty. The barriers are lack of time to spend on infrastructure/security improvements, lack of motivation, and distraction due to new feature requests.

Of course forgoing these things will bite you in the end, but this is the internet age; people don't tend to plan that far in advance.


Well they were lucky you fixed it.

There are loads of sites built years ago that just work, so nobody ever looks at the code base again.


What gets me is that security professionals keep talking about layers of security. I don't understand how so many recent attacks have resulted in complete breaches.

Adobe had source code taken, vB gave over pretty much complete server access.

You now have Cupid Media not even hashing passwords. The final defense of user information, ignored.

It took me 3 days to implement password security on a legacy system. I implemented password strength requirements; users trying to sign in with weak passwords were flagged and forced to change their password to meet the new requirements; plaintext passwords were hashed with bcrypt. One guy, 3 days.
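That kind of migration can be done lazily at login time. A hedged sketch below, using Python's stdlib `hashlib.scrypt` as a stand-in for bcrypt (the comment above used bcrypt, which follows the same hash/check/upgrade pattern; the `db` dict and cost parameters here are illustrative assumptions):

```python
import hashlib
import hmac
import os

# Illustrative scrypt cost parameters (16 MiB of memory per hash).
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(password):
    """Return salt + derived key; the salt is stored alongside the hash."""
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt + key

def check_password(password, stored):
    """Re-derive the key with the stored salt and compare in constant time."""
    salt, key = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, key)

def login_and_migrate(db, user, password):
    """Verify a login; if the stored value is legacy plaintext, upgrade it."""
    stored = db[user]
    if isinstance(stored, str):  # legacy plaintext row
        if hmac.compare_digest(stored.encode(), password.encode()):
            db[user] = hash_password(password)  # transparent upgrade
            return True
        return False
    return check_password(password, stored)
```

Every successful login silently replaces a plaintext row with a salted hash, so the old data disappears without forcing a mass reset.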

The UK has the ICO. I would like to see them get involved in cases like this, with the power to fine websites catering to UK users that show negligence when storing user information. If that is not currently within their powers, I would like to see the law changed. There should be more accountability for website owners.

http://www.ico.org.uk/


Often times, the hack is through a web front-end. Back-end systems (such as DBs) are heavily firewalled, logged, monitored, etc. and are generally very well protected. Systems guys (OS and DB) know security pretty well and have been doing it for a long time now.

Much of the web software that powers the front-end is complex (PHP, Java, .Net, JS, CSS, SQL, includes, 3rd-party libraries from everywhere, etc). That complexity has a broad attack surface that is difficult and time consuming to test. And many devs are late to the security party (unless we're talking OpenBSD developers).

Management wants to push out new features by X date. Devs have very little time to test and are behind on security anyway. Hackers have all the time in the world to poke at the web front-end and test every possible combination of things until they finally get in.

In a nutshell, that's the problem as I've seen it.


What this story shows is that sometimes '12345' makes sense as a password - i.e. when credential security doesn't matter to the user. If I use '11111' to sign up for a one-time visit to a website, then there's no nexus with my online banking account other than an email address - assuming even the most feeble attempt at picking a 'secure' password for my banking.

This is why it is often silly when articles condemn users for weak passwords when a password list is stolen. The proper assumption is that any password I use is stored and transmitted in plain text and just now falling into the hands of bad people.

This is the reason that, until I started expressing this idea on HN, my HN password was "hackernews". If HN was breached, I was no less secure. Sans the pursuit of lolz, it wasn't even worth trying to guess.

Of course, I changed it to something harder to prevent mischief since some individuals might have seen my comments as a challenge.


This is a good argument for unique passwords, not for weak passwords. Weak passwords only "make sense" if you really don't care whether your account is compromised due to a very weak password.


But often you don't care. The value of a throwaway account you made to download a file is practically zero to you or an attacker. In the tradeoff in simplicity (all my crappy throwaway accounts have the password 12345... easy!) against security, simplicity wins.

If I only made an account on one of Cupid media's sites because I wanted to see a picture, I wouldn't care whether my password was easily guessable. Additionally, I'm fairly sure that an easy to make account with no access privileges is completely worthless to an attacker as well, and so the likelihood of anyone even attempting to compromise it is next to nothing.


This is 100% true in my case. If I value the service, it gets a unique password. If it's Gawker Media, well here's a hash for 12345 and have fun with it.


You're not completely in the clear if you still use the same username throughout the web or as the start of your email address. If your account gets compromised on a site that allows users to make public posts (comments, articles, whatever) your online identity is now linked to whatever crap the compromiser posts under your username.



People who read HN know this. People who know nothing about security don't. Most people are lazy about things they don't know or care about. Thus it is a problem.


Use a password manager that generates a random password for every site.
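For the curious, what such a generator does under the hood is simple; a sketch using Python's `secrets` module (the alphabet and length here are arbitrary choices, and real managers let you tune both):

```python
import secrets
import string

# Arbitrary symbol set; many sites restrict which specials they accept.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=20):
    """Pick each character with a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

The important part is `secrets` (a CSPRNG) rather than `random`, whose Mersenne Twister output is predictable once enough of it is observed.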


Providing strong passwords on throwaway accounts provides useful information to an attacker, i.e. that I am using password manager "foo". If "foo" is compromised, then all my passwords generated from it are at risk.

To put it another way, where there is no reasonable chain of trust, I provide useless data for determining my strategy where there is a reasonable chain of trust. Revealing '11111' does not improve the probability of an attacker generating my bank password.


>Providing strong passwords on throwaway accounts provides useful information to an attacker, i.e. that I am using password manager "foo".

What? The attacker could infer:

* that you consider this account to be valuable and have created your own strong password for it.

* that you are a security conscious individual and keep a list of hand-generated passwords written down in a safe by your desk.

* that you may be running one of any number of password managers.

... in other words, it tells the attacker nothing.

That's not to say that using a junk password for a junk account is bad, just that your logic doesn't follow.

>Revealing '11111' does not improve the probability of an attacker generating my bank password.

Neither would revealing that my password for a throw-away service is "Sfx16tJ{=DK=x4A0v". What can you infer about my bank password from that?


Just a random question: is there anything that gives companies an incentive to prevent such hacks? It seems that there are no consequences at all, except for some loss of reputation in the tech community. Is there a way to put legal pressure on tightening up security?


You are looking at necessity products built by large corporations whose products are usually regarded as the best of breed in the market.

If you are a designer, for instance, you'll most likely come to depend on Adobe Photoshop, which means at some point you've created an account. Adobe got breached and your data got leaked; you'll likely whine a little about it online, but unless you are willing to:

- shift your work to, and relearn, a tool other than Photoshop

- navigate your way through closing your account (on the assumption that your data is actually deleted after account closure), which is rather hard in most cases; no one likes losing users.

Then you'll likely just suck it up, do what you can and hope for the best.

On the other hand, you have small-time (but growing, not web-scale yet) services/products that can't really afford to lose a large number of users. Those worry most about security. Ironically, they stay off the grid long enough that they won't become attack targets until they make it big.

But that's just the security industry: no system is 100% secure, and you never know if you've tightened your security enough until someone drills a hole. Then you patch it.

Any self-respecting corporation will have a security auditing policy: the so-called white-hat hackers or pen-testers. Good companies will run security audits every now and then in the hope of discovering new security holes introduced by software updates, system policy changes, etc.

As for legal pressure, it depends on what we are talking about. If you are a payment processing company then any data breach is a violation of your PCI compliance, which leads to a lot of bad PR and legal consequences.

If Facebook got breached and data was exposed, I doubt there is anything in the law that addresses such an issue. Unless someone sues Facebook for damages; then that's a whole different ball game.

The incentives are there for businesses of all sizes. Legally? It depends. It's those schmucks that screw us all, with their plaintext passwords and shit.

Edit: Fixed formatting.


That's pretty much country-dependent; local legislation on data protection varies. (And even then, AFAIK it is limited to sensitive data, such as race/religion/banking, not the password to some website...)


In the European Union we have the Data Protection Directive, which means that organisations that store personal data are legally required to protect that data.


I often wonder, when there is a massive password breach, whether anyone outside the tech world even really knows about it, even if they are affected by the breach.

I mean, most people don't really understand web security or computers, right? So if their email gets "hacked" they think "hackers" magically got access to their computer/email/etc., and that Adobe etc. isn't at fault.


> Making matters worse, many of the Cupid Media users are precisely the kinds of people who might be receptive to content frequently advertised in spam messages, including male enhancement products, services for singles, and diet pills.

Oh wow. So Internet dating users are generally stupid, under-endowed, desperate and overweight?


I was blown away this was included in the article. There's no citation, no explanation or anything. It's just blatant.


They wrote "many", not "generally".

And yes.


[citation needed]


And now we wait for the dump to show up... it will be useful, and larger than the RockYou plaintext list.


We can help in some small way. Advocate for the use of password managers like LastPass and KeePass that use a different securely generated pw for each site.


Still in web 0.0 storing passwords in plain text? Awful.


Anyone know where to find the password dump?


While this might not sound appropriate at first, I too would like to have the dump. It could result in some interesting password research.


This was already very clear. The whole website was flawed, and that was probably known by individuals for a long time... Bypass payments, change other people's profiles, read other people's messages. It does not stop there.


This is getting ridiculous.

When are we going to see legislation enacted to take these people to task?

Surely there is a case to be made that their negligence causes (or has the potential to cause) real harm to their users.

We need a Saul Goodman to put together a class action.


Yes, the government would surely do a great job legislating development standards. Just look how terrifically they've handled software patents.


Free markets are currently doing a pretty awful job of it, as we keep learning. Perhaps some government legislation would help.


All legislation would do is put a big layer of ineffective bureaucracy between developers and getting work done.

Get the government involved and it won't be long before you need to fill out 15 forms and hire a lawyer to put your weekend project online. Is that the kind of internet you want?


Yeah, all those building codes do is put a big layer of ineffective bureaucracy between the people who architect and do construction and getting work done.

Yeah, all those sanitation codes do is put a big layer of ineffective bureaucracy between the people who prepare food and getting work done.


You can't legislate away stupid behavior.

There will always be an endless parade of poor choices. If it's not 'this,' it's 'that' and then the next 'thing.' There is no scenario under which a government entity can enforce, keep up with, or properly control such.


That's like arguing that there shouldn't be penalties for drink driving (called DUI in the US) because people will always make poor choices.

There needs to be a strong deterrent.


Who are you responding to? It can't have been me, because I didn't say anything about the government legislating development standards.


> When are we going to see legislation enacted to take these people to task?

And how would you enforce this? Mandated paid audits provided by companies that have lobbyists and friends in Washington? Enough with the laws; laws are not the answer to every problem. If there is harm, let the users sue, but stop with your laws...


Pro-active enforcement is not always necessary. The possibility of a large fine if found in breach of the law is usually enough for responsible companies to take the matter seriously.

Of course, you also need guidelines for implementation.


> And how would you enforce this ? ... mandated paid audits

No, that would be quite silly and wouldn't work.

It could simply be reactive rather than proactive. When an incident occurs where sensitive user data is exposed, simply launch an investigation into whether there were "adequate" protections in place. If it is found that sensitive data was stored unencrypted, for example, put the directors of the company behind bars for negligence.

I'm dreaming of course. Steal a loaf of bread, life in jail without parole. Expose the private data of millions... have a strong whisky, put out a press release, head to the golf course.


Even simpler:

If "adequate" protections are missing: Pay every breached user $10.

That way such a breach gets a hefty price tag, and devs/PMs could argue to management that it is economically feasible to implement these measures.

I know of a PM who tells devs that report security problems inside his product that they should not care and instead finish "that new shiny little thing"; in his opinion that is their job, not detecting some strange security problem.


Well, what would happen is that any sane business would take out insurance, and the insurance companies would mandate and audit security best practices.

If a shop wants to insure its stock, the insurance company tells them what they need. An alarm? Big metal shutters? A night guard? It depends what you are guarding.

In practice, a password is pretty valuable. Not to the company, but the loss to the user can be pretty significant.

In the UK, the Data Protection Act does allow fines against companies that fail to encrypt specific information (including passwords). They handed out 2.5MM in fines last year, but I think they mostly go after people selling data rather than just messing up and losing it.

Recent UK fines: http://www.ico.org.uk/enforcement/fines

OH WOW, they fined Sony after the geohot hack http://www.ico.org.uk/enforcement/~/media/documents/library/... (only £200,000, but still)


It would be next to impossible to track down the majority of users, and I doubt each user would want to fork over more personal info for just $10. The money paid out would be very small relative to the number of accounts.


I agree that there should be legislation against saving plaintext passwords. Otherwise there are no repercussions for these kinds of parties that put confidential information at risk.

Honestly, people are not going to change their password habits, i.e. we can't expect users not to reuse the same password at different sites. Moreover, I would wager that most people trust that websites are inherently secure and that password-related functions are safe.

The party to blame, then, is squarely the website that allows the leak of unhashed, unsalted passwords.

I believe we should seek legislation similar to HIPAA wherein the damages are inversely proportional to the ignorance and mitigation preparedness of the party to blame. (Damages might also be directly proportional to number of accounts breached, but only after the former is considered.)

As an example, imagine that a kid acting alone gets his forum or website broken into. There's probably not much that a teenager would have known about password security. Additionally, they probably weren't servicing millions of users as a part of a commercial service. You can apply this same kind of situational blamelessness to small businesses, clubs, churches, and so forth.

If, however, a multi-million dollar company gets breached, it's a likely different story. Such a company has employed engineers that are familiar with such topics as scaling and cross-browser support. If these types of business concerns are known and handled, then it's almost a certainty that they also know about password hashing. (If not, I would bet that new legislation would result in widespread education on the issue.)

If the cost of changes is estimated to be too high, we could even go as far as to lower or absolve damages if the 3rd party were to inform its users that its passwords were not hashed or not salted; a "use at your own risk" notice, if you will.

I feel strongly that we need to do something drastic about unhashed/unsalted passwords. This is becoming absolutely ridiculous; it makes our profession look like a circus show, and all for something that can be easily avoided.


Time for civil liability for these breaches. At this point the risk of storing plaintext passwords is known enough that it should qualify as negligence.


Next article on Hacker News: "Hack of Cupid Media dating website exposes xx million fake dating profiles"?


So that's what - 30 million male users, 1 million female, and 11 million spam-and-scam bots?


Tangentially related: I am a LifeLock member (not sure if it's worth it, but it gives me some peace of mind), and recently got email alerts from them saying my Adobe login info was found for sale on several black-market sites.


After the adobe/cupid breaches, it is high time some governing body mandates every website to reveal on their privacy policy page how passwords are stored on their servers.


Anyone have a copy of the password list? 42 million passwords would be fun to analyze.


maybe i am just stupid, but how are password managers secure?

i've seen people using them, and if i were of a less honourable persuasion i could abuse that quite easily... on the other hand, it's impossible for me to steal information out of their brain (so far at least).


I've used 1Password (a popular password manager for the Mac) for several years now. How could you "quite easily" hack those passwords, assuming that's what you're implying?


Keylogger.


Sure, but if you have that level of access to the machine, what's the point you're making?


I dunno, he asked a question and I answered it. I'm making no point.


> its impossible for me to steal information from out of their brain (so far at least).

It's easier than you think to steal passwords from people. Just ask for it!

http://www.veracode.com/blog/2013/03/hacking-the-mind-how-wh...


In reality, unless you are using services that do not allow any kind of email-based "forgot my password" recovery/reset, your email account is your password manager.

I use passpack to store different random passwords on all my online services, EXCEPT for my email account. That one is stored only in my brain.


"how are password managers secure"

Because now you need to steal two things: my password file and my password.

Now admittedly the one weakness is that once you have both those things you have access to all my passwords, but on balance I believe it is an acceptable compromise.
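The "two things" property holds because the password file is encrypted under a key stretched from the master password. A minimal sketch of that derivation step with stdlib PBKDF2 (the iteration count and output size here are illustrative; actual managers such as KeePass pick their own KDFs and parameters):

```python
import hashlib

def derive_vault_key(master_password, salt, iterations=600_000):
    """Stretch the master password into a 32-byte key for the vault cipher."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)
```

Stealing the file alone yields ciphertext, and each guess at the master password costs the attacker the full iteration count, which is the point of the stretching.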



