> I think this article doesn't just blame the victims of those attacks
That's a grossly uncalled for rewriting of the reality involved.
The victims are NOT the company whose servers are compromised. The victims are the customers of the company, whose data has been lost to the wilds.
Edit:
> It is far more probable that all of the companies cited in the article have expended massive efforts to protect themselves
Another misunderstanding of reality. I've worked with and in quite a few companies, from tiny to impressively large, and in all instances so far security has been a non-topic, or at best a bullet point on a slide. In every case so far it has been utterly trivial for literally anyone inside the company, and only mildly less trivial for people outside it, to create data-loss scenarios of disastrous scale for many, many customers.
Maybe he has worked exclusively with companies who have crack security teams and never even thought of using MD5 to hash passwords. In my, and many others', experience, such companies are as rare as unicorns, though.
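To make the MD5 point above concrete, here is a minimal sketch (function names are ours, and the iteration count is illustrative) of the difference between an unsalted fast hash and a salted, deliberately slow KDF. PBKDF2 is used only because it ships in Python's stdlib; bcrypt, scrypt, or Argon2 are the usual production choices.

```python
import hashlib
import os
import secrets

ITERATIONS = 200_000  # illustrative; tune to current guidance

def md5_store(password: str) -> str:
    # What the comment above warns about: fast and unsalted, so identical
    # passwords produce identical hashes, and GPU brute force is cheap.
    return hashlib.md5(password.encode()).hexdigest()

def pbkdf2_store(password: str) -> tuple[bytes, bytes]:
    # Per-user random salt plus an iterated hash that is expensive to guess against.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def pbkdf2_verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return secrets.compare_digest(candidate, digest)
```

Note that two users with the same password end up with different digests, which is exactly the property the unsalted MD5 scheme fails to provide.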
> These commentators have presumably not been the victims of a breach
> themselves. I have trouble swallowing that anyone who's been through the
> terrifying experience of being breached, seeing a breach up close or
> even just witnessing a hairy situation being defused could air those thoughts.
I do emergency incident response for companies that get breached, and this is off the mark. Without going into detail, an ounce of prevention is worth a pound of cure.
It is entirely standard operating procedure in the startup industry to play fast and loose with customer data, ignore best practices, get breached, make a blog post about how "we take security seriously", fix one or two things, and continue like nothing happened. It's the users that suffer, not the startups that get breached.
It's bullshit, and Troy is precisely on the mark for calling it out.
I don't think we disagree. I'm certainly in full agreement that companies should take it seriously and often don't. However, being breached does not mean that you did not take action. The article I was responding to doesn't distinguish between a breach after an ounce of prevention, or no prevention at all. Is it realistic to expect a health company to have prevented a breach if that breach is a consequence of an Exchange 0-day?
If you are hiring security professionals, it might be helpful to keep in mind, for your new hires and for your customers, that there is a new metric underscoring the thesis of Troy Hunt, who is quite experienced in this industry: MTBCA, mean time before CEO apology: http://blogs.forrester.com/rick_holland/15-05-20-introducing...
Is it realistic to expect a health company to have prevented a breach if that breach is a consequence of an Exchange 0-day? The topic of the article is that the breaches have gone on for a significant fraction of a year. If that is the case, and there are intruders waltzing around your network, it is hardly appropriate to say that you take security seriously.
If we extend your logic, the victims aren't the customers whose data has been lost. The victim is the common good of not having to be all that careful about what you do, because you live under the rule of law and people aren't just going to attack you.
Allow me to illustrate. It's not physically difficult to get murdered, but you don't have to walk around in a suit of armor or tank in 2015, because you live under a code of laws. So safety/security is a public good.
By the logic that the operator of the servers that have been illegally attacked aren't the victims, we can say that public safety is the real victim: people who aren't even involved are the real victims, because there are more of them, and they all have to be more careful as a result.
But that reasoning is kind of silly. The victim of the crime is the operator of the server that was illegally accessed and compromised, and if these criminals had something better and more productive to do with their time it wouldn't happen. It's up to laws to make that be the status quo.
It's great when technology can protect us - but let's admit it, it's like going shopping in a tank: a technological solution to a social problem. You can't always do it. (Technological solutions don't always exist.)
You are actually right. Every time a company gets compromised, it is a stark reminder that people DO have to be super careful with their data online.
That they have to choose passwords of strong complexity, that they have to use different passwords for EVERY single thing where a password is used, and optimally even a different login for everything requiring one. That they be very careful about what exact data they share with any given service. Etc., etc.
You may have intended to be snarky in your response, but you have unintentionally hit the nail on the head.
> The victims are the customers of the company, whose data has been lost to the wilds.
Yes, but the article points out two complications:
1. We only know about those cases where the company has publicly disclosed their breach. There may be lots of victims, and just caring about the publicly-known ones seems misplaced.
2. In many cases, the data has not been lost. "Servers compromised, encrypted data stolen, decryption keys not stolen" is generally more-or-less fine under a threat model. Victims of the LastPass attack last month are much less at risk than, say, victims of the 2013 Adobe attack.
It's important that we build a world in which prospective password-managing companies are incentivized to act like LastPass and not like Adobe, and I'm worried that LastPass is only acting like it is because they're good people (leaving room for a smooth-talking amoral competitor to undercut them). My general impression is that the companies that don't care about security are being very rational about it: getting it right is expensive, you're at risk either way, and the PR blowback is not well-correlated with how good your security design was. So you might as well not care, and as long as nobody cares, you can just write "industry-standard security measures" in your blog post once you're hacked.
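The "ciphertext stolen, keys not stolen" threat model can be sketched with a toy client-side scheme (deliberately NOT real crypto; a production design would use an AEAD cipher such as AES-GCM or ChaCha20-Poly1305). The key is derived on the client and never sent anywhere, so a server breach leaks only a salt and a ciphertext:

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Key derivation happens client-side; the server never sees the key.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

def xor_keystream(data: bytes, key: bytes) -> bytes:
    # Hash-counter keystream XOR, purely for illustration of the idea.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
ciphertext = xor_keystream(b"vault contents", key)
# All the server stores, and therefore all a breach can leak:
stolen = (salt, ciphertext)
```

Without the master password, the stolen pair is only as useful as the attacker's ability to guess through the slow key derivation, which is the difference between the LastPass and Adobe outcomes described above.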
I see your point and voted you up, but I think you're wrong.
If I am running a company or even just working there, and someone attacks my servers, even if Customer ABC's and Customer XYZ's data is what they want, as far as I am concerned, it's my system that is under attack.
Customers ABC and XYZ are under attack as well. We all are.
Take the incident at JP Morgan Chase. Chase PR was on point about keeping the public in the know that at no time was their money under threat. You can't tell me this was anything other than coordinated or even orchestrated.
After keeping the breach(es) a secret for at least months, Chase kept strong with its spin on the topic. It is clear that Chase does not consider the breaches very important to its business. As far as Chase was concerned, they dodged a bullet. If there had been any indication that account balances were fudged, I doubt Chase could have kept nearly as calm.
My tired point is that for many companies, they make a distinction between "them" getting breached vs their customers' data getting breached. How can we believe that Adobe does not know the best practices to store passwords?
I think it depends on the company. For a tech-focused company where a breach is a slap in the face of their core competencies, sure. For a megacorp that sees IT as a loss center, it's harder to believe.
I didn't mean to insinuate that the users are not victims; they are certainly the (primary) victims.
I also don't mean to say that the company didn't mess something up. However, that's a completely different story: there are worlds between "made a fatal mistake" and "doesn't care the tiniest bit".
This is my problem with the piece. In general, these stories aren't "zero-day exploit compromises website, company responds quickly".
These are companies failing to enact the basic security practices taught in introductory college courses. They're companies disclosing a breach six months after it happens, just before (or after) independent researchers make it public.
If a few of these companies have used best practices and been caught with new exploits, they deserve to be given kinder news stories than the usual. In practice, though, the story is almost always one of neglecting even basic security.
MD5? Just a couple of years ago I had to "fight" several managers and executives to get our product to hash passwords at all, until then, they had been in the clear.
It takes zero effort to have your public relations department issue a press release claiming that your company "takes something seriously". That claim rings hollow when it comes right after you demonstrate the opposite. As I said in the other thread, these companies falling all over themselves to say how seriously they take security after they've been compromised are like companies gushing about how seriously they take quality--after issuing a major recall (or getting sued) over faulty parts. The proof is in the pudding.
In my experience you are wrong. Two examples, one on security the other on quality and recalls.
On security, we were building a home gateway, the modem you have at home to access the Internet. Before releasing it, we realized the web interface was vulnerable to an XSS (cross-site scripting) attack. Since the attack made it possible to change the Wi-Fi or firewall configuration, we shipped the first version without those features, even though users wanted them. The next release went through a two-week pen test (consider that many competitors had the same XSS issue and were already on the market). Just a couple of days before releasing the new version, after the pen test had come back clean, I noticed a phrase in the release notes; I investigated a little and discovered that the XSS issue was still there, just harder to reproduce (and a two-week penetration test had failed to spot it). What do I mean? That even the best security practices aren't enough to protect against every possible issue.
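For readers unfamiliar with the older "CSS" abbreviation used in stories like this: the bug class is cross-site scripting, where untrusted input is echoed into a page unescaped. A hypothetical sketch (function names and payload invented for illustration) of the vulnerable rendering versus contextual output escaping:

```python
import html

def render_unsafe(ssid: str) -> str:
    # Vulnerable: attacker-controlled markup executes in the admin's browser.
    return f"<p>Wi-Fi network: {ssid}</p>"

def render_escaped(ssid: str) -> str:
    # Contextual output escaping neutralises the payload.
    return f"<p>Wi-Fi network: {html.escape(ssid)}</p>"

payload = "<script>openFirewall()</script>"
```

A malicious SSID like the payload above is exactly how a gateway's configuration page can be turned against its owner.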
On product recalls, same product, but on the hardware side. Our hardware vendor was Chinese, but we inserted strict clauses in the contract on testing, product performance, etc. After around a year we discovered that a huge product lot used a capacitor with problems, and we had to recall all of them. The capacitor was produced by a reputable company and matched all the production standards when new. Lesson learned? Even if you obsess over quality, you can still have problems.
The difference between a mediocre company and a good one is not that the good one doesn't make errors, but that it is quick to fix its mistakes and learn the lesson for the future.
I'm certainly not trying to defend them. I'm just saying that a serious commitment to security doesn't preclude breaches any more than a serious commitment to quality precludes recalls. You only need to screw a minute detail up to get in trouble, and so far no distinction has been made between companies that have actually been incompetent/irresponsible and others.
How about instead of just saying "We take security seriously," disclose some evidence. What specific things do we normally do to keep customer data secure and prepare for attacks? How did that preparation fail this time? What was this particular attack vector? What exactly was compromised, and when? What specific, verifiable steps are we taking to make the victims (customers) whole? What specific, verifiable corrective action are we taking in order to prevent this kind and other kinds of breaches going forward?
Just saying "We take security seriously" is like saying (to quote Chris Rock) "I take CARE of my kids!" What do you want, a cookie? That's what you're supposed to do.
Most corporate communication is hollow, but these PR platitudes are especially absurd when juxtaposed with evidence to the contrary. It's like AT&T telling me "Your call is important to us" every 5 minutes for half an hour while I wait on hold.
On the other hand, I can think of a number of companies that have a track record of making high quality products, and none of them regularly feel the need to issue press releases touting their commitment to quality.
Firstly, there's one thing all of the victims being ostracized have in common: they disclosed the details of the breach. That is exactly what they should have done; punishing them creates a perverse incentive for victims to hide breaches in the future, a decidedly worse end-user outcome.
Then how do users ask for anything better than the status quo?
> How can the security industry build deep relationships with clients when we publicly ridicule them when the inevitable happens?
Simple: Call out the competitors of the clients you seek. There, now it's positive PR for your clients and security researchers aren't practicing self-censorship. Win-win.
Congrats. Now recognize that you're an outlier. Most companies don't even think about these things, much less do anything to mitigate the risks.
It's not a complicated procedure to keep most things reasonably secure. Best practices aren't that hard - it's just that most startups don't bother with them.
The comment I think you were referring to was talking about an unlimited-budget, as-secure-as-possible scenario. It's a great thought experiment, but the bar doesn't need to be that high to stop almost all real-world attacks (as was also addressed in that comment).
You don't need to run faster than the lion, just faster than your companion.
Even state-of-the-art cryptography (e.g. Ed448-Goldilocks + ChaCha20-Poly1305) offers a finite security margin (though probably one that is never going to be reached).
You work in a company whose trade is credit cards. Security is 90% of what you must do, not only because it's credit cards, but because you're legally forbidden from doing your trade if you don't fulfill certain requirements.
So, not to talk your team down: you likely are really interested in security. However, keep in mind that that stems from the fact that you must be, and that your hiring is likely influenced by that as well.
Has the same kind of focus been true for all other companies you worked with as well?
Also, out of sheer curiosity, how do you encrypt the passwords on your accounts?
> The explicit assumption is that these companies wouldn't have gotten in trouble if only they had taken security more seriously.
They didn't care about security at all. That's the assumption. And it seems I'm not the only one to think that, because that's how things are generally.
Although I don't agree with everything said in the article, I find that it makes a nuanced and well-structured point.
Depending on what the security industry wants to achieve, they can either ridicule (punish) or ignore (reward) companies that at least publicize they've been breached.
Keeping in mind that companies have a certain tendency to work short-term angles over longer-term alternatives, I think the carrot is more likely to achieve better security than the stick.
Wow, as someone who sells proactive security software and has to constantly explain why we are hampering user experience, I have to say this post is deeply unhelpful. Troy is just pointing out what we all understand - these people simply don't take security seriously and need to WAKE UP. If public ridicule helps, great, all I can really say is nothing else has...even the parade of compromises that has occurred.
This is precisely the issue I'm referring to. The assumption is made that if you got breached, you must not have known what you were doing, and that's bullshit.
> I think this article doesn't just blame the victims of those attacks, but subjects them to public ridicule. Neither helps anyone, least of all end users.
I think public ridicule is necessary as basically the only way users have to encourage security. Users are being harmed by a company penny pinching on security and virtually never receive any compensation.
I've no pity for devs that end up getting ruined because they didn't know what to do. It's like going too fast on a freeway you don't normally take, getting pulled over and trying to explain that you didn't know. You knew better. Ignorance is no excuse.
Ridicule is absolutely an effective response to companies who have put MILLIONS, literally MILLIONS of people in fiscal and possibly even physical danger. There's absolutely no room for error when it comes to safety for the users, and a tweet saying "Oh yeah, well uh we take opsec real real serious" doesn't cut it.
Of course they take security seriously, I mean, obviously. But the point of the article this article is responding to is that it doesn't matter if you apologize after something that could've been prevented; it's too late.
You're assuming that all security issues are obvious and known ahead of time, and that's clearly not true. You can't compare your average remote code execution vulnerability with a speed limit. Speed limits are posted; RCEs typically aren't as clearly documented ;)
I'm assuming that a hundred million users aren't made vulnerable by sheer wizardry. In all of the cases that the original article listed, I'd bet my salary that there was oversight in terms of the critical components or the architecture. RCE should not be enough to make that kind of dent, there should be more security there.