Having spent almost 4 years in healthcare IT, I can say that very few healthcare organizations take security seriously. There is very much a security-by-anonymity mindset. I worked for a small medical company that had access to 20,000 PHI records, and I was explicitly told, "why would anyone want to hack us, we are small potatoes." I left that company shortly thereafter.
The companies I work with now, big and small, look at security as just a bunch of checkboxes on a government audit form. As long as upper management continues to see security as a cost center, and continues to do only the minimum necessary to pass said audits, these breaches will continue to happen.
Exactly my experience. We had all the production passwords for servers and databases in a text file in the repository because the chief architect didn't like to remember passwords. When I pointed this out as a HIPAA violation, the CTO told me they passed their audits, so it didn't matter.
What provision of HIPAA does this actually violate?
It's clearly a bad practice (and obviously increases the risk of a breach, which, if it occurs, becomes an issue under HIPAA and related laws), but AFAIK neither HIPAA and subsequent modifying statutes nor the regulations adopted thereunder actually mandate particular password-handling practices. Or is there something addressing that in the "guidance" issued under the HITECH act (I remember that establishing, by reference, some standards for encryption, and it wouldn't have been out of place for it to establish password-handling practices)?
Covered entities must "[protect] against any reasonably anticipated threats or hazards to the security or integrity of such [electronic protected health information the covered entity creates, receives, maintains, or transmits]" (45 C.F.R. § 164.306(a), http://www.law.cornell.edu/cfr/text/45/164.306). Storing passwords in the clear "obviously increases the risk of a breach", hence this is a reasonably anticipated threat.
HIPAA and similar laws don't codify whatever we think is good computing practice today. Down that path lies madness. Congress would have to re-write the law any time GCPs change, or else the law would become a hindrance to the very goals it's trying to achieve (in this case, healthcare-related information security). Instead, the law is written more generally, with "reasonable" being the keyword that lets the legal system refer to current practice.
> Covered entities must "[protect] against any reasonably anticipated threats or hazards to the security or integrity of such [electronic protected health information the covered entity creates, receives, maintains, or transmits]" (45 C.F.R. § 164.306(a), http://www.law.cornell.edu/cfr/text/45/164.306).
But they also have freedom to select the particular security measures to use, considering: "(i) The size, complexity, and capabilities of the covered entity. (ii) The covered entity's technical infrastructure, hardware, and software security capabilities. (iii) The costs of security measures. (iv) The probability and criticality of potential risks to electronic protected health information." 45 C.F.R. § 164.306(b)
> HIPAA and similar laws don't codify whatever we think is good computing practice today.
No, but that's what implementing regulations usually do. HIPAA regs mostly don't include minimum technical standards (most of the security minimum standards are procedural).
> Congress would have to re-write the law any time GCPs change
Well, sure, if the minimum standards were written into the statute, which is why they are usually in the much-easier-to-change implementing regulations. The guidance under the HITECH act in effect did some of this for HIPAA PHI, as it created minimum standards for PHI to be considered "secured". But, generally, there's not much there, and it's very difficult to make a solid case that any particular technical practice is necessarily a violation of the HIPAA Security Rule.
To be fair, if your systems relied on your chief architect not being hit by a bus, that would probably be worse than having the passwords stored someplace.
During an auto accident and court case, everyone (doctors, lawyers, insurance) used my SS# as a case identifier even though I never gave it out. If a few percent of these are sloppy, then one could be screwed. Some meth people like to dumpster-dive insurance companies and the like.
I'm waiting for a case of plausible deniability through abstraction of identity to cause the whole SSN-based system to collapse. As sloppy as our government and corporations have been about handling our SSNs, there is an ever-increasing lack of confidence that an SSN individually and uniquely identifies anyone.
If you are ever unfortunate enough to be unemployed in California, and claim unemployment benefits, they print your social security number right at the top of every correspondence. Idiotic.
That's what an SSN means; it's literally what it's for. The problem is the idiots who use it as some kind of secret unique identifier for people, when it was never meant to be that.
You should be upvoted way more. I also worked in healthcare IT for three years, and security was basically an exercise in checking boxes to claim HIPAA compliance. As long as everyone could avoid fines or being sued by the Office for Civil Rights, we were considered secure.
Don't lose heart - working in the industrial safety business, the 'just check the boxes' tactic is very familiar. Things are finally coming around after many years (and many, many incidents) though, as companies, and the courts, realise that ticking boxes isn't the be-all and end-all.
At the moment, it's very easy for companies like Anthem to claim that they were the victim of a 'very sophisticated' cyber attack, when in reality they were probably just wilfully negligent. As understanding seeps into the minds of regulators and lawmakers, businesses will start to comply with the spirit of security / HIPAA, not just the boxes. In the meantime, the best you can do is continue to advise clearly and calmly why things should be done. If management doesn't accept your reasoning, at least you have done your due diligence.
Most security decisions aren't made by senior management. I am sure it was not Sony's senior management who decided to store passwords in clear text on the PSN.
Management focus would ensure everyone in the organisation focuses on security, but most security breaches are the result of IT people doing stupid things or making stupid decisions on the ground. It's not senior management's role to check that you didn't introduce a SQL injection risk in your code, just as it is not senior management's role to check that the accounting department properly followed the latest US GAAP guidelines. It comes down to employees being competent at what they do.
That's just not true. The direction of IT certainly is set by upper management, as well as the budget. If IT says 'we need an IDS' and management says 'it's not in the budget', what can IT do about it? If IT says 'it will take this long and this much money to change our password policy' and management says 'work on new things, not changing old things', what can IT do about it?
Senior management might not directly set the password policy, but they do say what you should work on, and by proxy, what you're not going to have the time to work on. And besides, what is the role of management if not keeping track of their employees? If an employee fucks up that badly, it's their manager's fault.
And yes, if the accounting department was that incompetent, SOX says it's senior management's fault. That's what the yearly attestation is for.
Well, somehow engineers and architects manage to resist management pressures in favor of security; you don't see many bridges collapsing, but they have financial constraints too. And accountants resist management pressures to bend the accounting standards, or they go to prison too.
IT is in many respect an unregulated profession. Pretty much anyone can declare himself a programmer. There are some regulations on certain systems but not on people.
I am not a fan of regulation but the current pace of data breaches is just unacceptable. If we don't find a solution, some old lawmaker will.
> Well, somehow engineers and architects manage to resist management pressures in favor of security
When this does happen, it's often because management's "security" request is either nonsensical--based on some puff piece they saw in an airport magazine--or they won't accept the necessary financial-costs/organizational-changes of doing it right.
It's no coincidence that the people with the most exemptions from security policy are usually the upper management.
> Well, somehow engineers and architects manage to resist management pressures in favor of security; you don't see many bridges collapsing, but they have financial constraints too. And accountants resist management pressures to bend the accounting standards, or they go to prison too.
Generally, yes, but not always. Sometimes they get boxed in by management and forced to make bad choices. When this happens, it usually leads to a spectacular failure ... and then the scapegoat is found.
The most famous case is probably the Challenger space shuttle. At Texas A&M they make sure every engineering student reads the story [1] of the Morton Thiokol employees assessing whether or not it was safe to launch at such low temperatures. The engineers had solid doubts, and refused to declare it safe. The managers had pressure from every direction to get to "yes" and launch the bird already.
Finally, during the teleconference the night before, one of the engineers' superiors said "Take off your engineering hat and put on your management hat." A new recommendation was put out (bypassing the engineers who still refused to sign off), and the next morning they got their launch like they wanted.
Just over a minute later the shuttle exploded, killing the crew and putting American manned spaceflight on a multi-year hiatus.
There are always cases like this in every profession. Some dam in Italy which collapsed for a similar reason, incompetent doctors, Enron, etc.
But when you go see your doctor, you do not rely on the fact that he is one of the best doctors in the US. You have no way to tell. You rely on the fact that even an average doctor is good enough to not miss something important.
Well, the problem with IT security as it is now is that we see major breaches almost every week. Some were difficult to avoid, but in many cases it is just because of bad security design, and in some cases because developers ignore security completely.
And I am sure Sony's management didn't box the developers into storing passwords in clear text.
There are certain regulations about IT, enforced not by the government but by private companies (such as PCI).
I'm just going to have to disagree with you and move on about regulating the people, though. I see your point, but I just don't agree. If anything, I feel managers should be regulated, so they are only allowed to oversee positions where they have the knowledge to fully understand what their direct reports are doing, from front-line to c-level. That's what I feel is the problem.
SOX isn't a regulation of the programmer; it's a verification that his management is actually doing its job overseeing him.
I am not saying that regulation is desirable. In fact it is going to be a major obstacle to innovation. What I am saying is that we pretty much see a major data breach every week. There are some instances where one can call them force majeure, like a zero day on a major security component in Windows or Linux. But there is no excuse for SQL injection vulnerabilities, unencrypted personal data, IT professionals logging in to websites without checking the URL, unpatched systems, etc.
Complaining about budgets to fix these issues is like saying that the problem with collapsing bridges is that we don't spend enough fixing the structure. Well, it should have been built properly in the first place.
Yes, resources will have to be allocated to fix existing systems but I think the problem here is more fundamental than a problem of budget and management focus. We need to have a profession competent enough to build a bridge structurally sound even with average engineers.
And this is a general comment. We don't know yet how this particular breach happened.
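To make the SQL injection point concrete, here is a minimal sketch in Python using an in-memory SQLite database (the table, column names, and data are purely hypothetical, not from any real system): parameterized queries keep user input out of the query text entirely, so there is really no excuse for splicing strings into SQL.

```python
import sqlite3

# In-memory database standing in for a real patient-records store;
# the schema here is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO patients VALUES ('Alice', '000-00-0001')")

user_input = "Bob' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is spliced directly into the SQL text,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    "SELECT ssn FROM patients WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver passes the value separately from the query text,
# so the payload is treated as a literal name, not as SQL.
safe = conn.execute(
    "SELECT ssn FROM patients WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('000-00-0001',)] -- the payload leaked the whole table
print(safe)        # [] -- no patient is actually named "Bob' OR '1'='1"
```

The fix costs nothing at runtime and one character of syntax, which is part of why these vulnerabilities are so hard to excuse.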
"Complaining about budgets to fix these issues is like saying that the problems with collapsing bridges is that we don't spend enough fixing the structure. Well, it should have been built properly in the first place."
I'd like to think that engineers do want to build things properly; however, building something properly usually involves more resources. The problem is that when it comes down to brass tacks, the low bidder wins. I've worked in many big companies over many years, and yes, I've hacked things together MANY times because of budget/time constraints. A lot of it was because of management wanting to come in under budget, or because they wanted to rush the product to be a hero.
I don't get where this idea comes from that engineers just want to do things in the worst way. Do you think a doctor wants to kill his patients, or do something that would endanger a patient, if he didn't have to?
Actually, in most cases they are. Most security issues show up on the yearly/quarterly/weekly compliance scans. A majority of the time this gets forwarded to senior management with a request for resources to fix said security issue. I've seen first-hand senior management direct IT teams to sweep it under the rug, so to say. Any competent IT team is well aware of its security concerns, but if management isn't on board it just becomes another skeleton in the closet.
>I worked for a small medical company that had access to 20,000 PHI records, and I was explicitly told, "why would anyone want to hack us, we are small potatoes."
I don't think we proactively pentest our stuff either. I've never heard of any security discussions but that may just mean I'm not being included. We have a few more zeroes after our PHI record count too.
I can't even imagine a healthcare company acceding to a proactive pentest. Even if it was compared to a vaccination or a routine health check, they still wouldn't do it. The gaping holes in security that would be uncovered. Unreal. No way in hipaa hell. lolwut, find problems that exist in our current system? Our system is fine. It's not broken, so don't break it.
I agree. One medical student I know personally (whom I shall not reveal publicly) was interning in a hospital recently and during the course of their learning there had a very good opportunity to just take away tons of private medical data with them. They of course didn't take that opportunity, but from what they told me, it's very, very easy to do that. Frighteningly easy.
>I worked for a small medical company that had access to 20,000 PHI records
I can only imagine the data protection standards at small equipment manufacturers and old-school pharmacies. I'd guess their biggest security measure is keeping paper files in a locked office.
I think that might overall be a better strategy than a really half-assed digitization plan. Paper records aren't that high security, but they are moderately resistant to bulk theft: leaking 20,000 paper patient records takes a lot of physical effort.
A lot of companies that size are offloading their EMR security to larger EMR cloud providers. While it helps to protect the smaller companies, and is more cost-effective than trying to manage it internally, I still wonder about the security of those records, given that most of them sit behind a web login that most if not everybody on the Internet has access to.
> Anthem learned of the hacking last week and called in Mandiant over the weekend. The company was not obligated to report the breach for at least several more weeks but chose to do so now to show that it was treating the matter seriously.
Could this be any more patronizing and offensive? Look, if you are Anthem member, or if you were an Anthem member, you've been doxxed... and quite comprehensively:
have obtained personal information from our current and former members such as their names, birthdays, medical IDs/social security numbers, street addresses, email addresses and employment information, including income data
And you were doxxed nearly two months ago. Or maybe not, because Anthem goes out of its way to NOT tell you when this occurred. If you were affected here's how they will notify you:
We continue working to identify the members who are impacted. We will begin to mail letters to impacted members in the coming weeks.
So sometime within the next month you will get a snail mail telling you that you were doxxed... and that letter will probably be extremely vague about the details, but will be quite heavy on the PR and perhaps even have a nice picture of Grandpa CEO at the top.
Anthem is not taking this seriously. No matter what they are trying to communicate with their PR gloss, they seem to care about covering their asses first and really don't seem to give a hoot about all your personal data that is out there in the wild.
In modern usage of the term they don't. The term originated in underground circles where anonymity by all participants was assumed, and where there were probably legal or criminal revenge consequences for tying a pseudonym to a real identity.
Kind of like how troll now means 'person who is an asshole on the internet' instead of 'post designed to rile up and elicit frivolous responses'. The meaning has changed over time for better or worse.
What are you talking about? I never said this was "doxing"; I know there was no reveal. I was referring to the definition of "doxing" which I felt didn't fit.
Did you read the comments completely? If Randi, Anita, and Brianna weren't anonymous but they were "doxed" it seems to me that "doxing" doesn't have to refer to revealing info of an anonymous person.
This is really frightening. Honestly, what are these people supposed to do now? I'm not even really sure how to vet insurance companies' data privacy, since most get their insurance through their employer.
The domain could have been created as part of a crisis preparedness plan. I know a lot of boards were scrambling to beef up their incident response plans after the Target and Home Depot incidents.
You seem to be implying that the domain was registered in response to the breach.
Could it be that the anthemfacts.com domain was intended for a different use, or to prevent someone else from registering it, and was re-purposed after the intrusion to present Anthem's case? I don't know much about SEO, but quarantining negative information on a separate, immediately available domain might be the motivation here.
Perhaps? In August 2014, they registered stanfordanthemfacts.com, on which they've posted the November 11, 2014 announcement that they have continued their contract with Stanford Health Care: http://stanfordanthemfacts.com/
Maybe after the Stanford (and other such) announcements, they had decided in mid-December to snag anthemfacts.com and then, after learning of the breach, decided to put it into action for this monumental event. However, what are the chances that it took one week for a health insurer, upon discovering the breach, to launch its PR campaign, never mind fully understand the nature of the breach well enough to publicly announce it? Given the delicate nature of the situation, as well as its historic size, this is not something that a health insurer would want to prematurely announce without being very sure that the damage is contained. And they contained it within a week? I realize that I'm slightly begging the question here, but yes, part of my skepticism comes from how quickly they were able to move... One week would make it one of the fastest discoveries-to-announcements, which, given the scope of the breach, is pretty amazing.
Edit: It's worth pointing out though that there would be records of them contacting the FBI and Mandiant, and I would give them the benefit of the doubt that they would make such contacts upon discovery of the breach...so if the FBI confirms that the contact happened a week ago, I would take Anthem at their word.
A domain name that is being used exclusively to address the data breach, and was registered within the past two months. And it's just a coincidence?
Seems very unlikely.
And I'm sure they're quarantining negative info on an unrelated domain, but why would they even need to consider repurposing an existing domain name, instead of buying one? We're not talking about somebody doing a side project and hoping to save a few bucks by repurposing another domain name. And it takes all of a few hours to buy a domain name and have it propagate.
"A domain name that is being used exclusively to address to data breach..."
My point is that while it is used for that purpose now, that doesn't mean that it was registered for that purpose back in mid-December. Your theory about the breach occurring earlier and being concealed until now is certainly possible, but the domain registration date on its own is not supporting evidence.
What's unlikely about it? Maybe if the domain name was "anthemdatabreachinfo.com" or something more specific, but "anthemfacts.com"? Many companies register lots of variations of domain names that they aren't using. I don't think it's unlikely at all that this came up, and there was a meeting where they said "OK, do we have any existing domain names we can use for this?"
So what have they been using the domain for in the few weeks since registration? The domain doesn't appear in web.archive.org until today, and searching Google for the domain between December and the end of January shows nothing.
The website itself says "we have created a dedicated website ... anthemfacts.com" for this incident.
[Edit: replaced two egregious uses of "website" with "domain"]
If they misled the New York Times, then they've also misled the Wall Street Journal[1]:
> "Anthem’s Mr. Miller said the first sign of the attack came in the middle of last week, when a systems administrator noticed that a database query was being run using his identifier code although he hadn’t initiated it."
AnthemFacts was registered 54 days ago, which would be within the legal timeframe for disclosure that the Wall Street Journal notes in their article:
> "Federal law requires health-care companies to inform consumers and regulators when they suffer a data breach involving personally identifiable information, but they have as many as 60 days after the discovery of an attack to report it."
Lastly, some more "specifics" that NY Times didn't mention:
> "Investigators tracked the hacked data to an outside Web-storage service and were able to freeze it there, but it isn't yet clear if the hackers were able to earlier remove it to another location, Mr. Miller said. The Web storage service used by the hackers, which Mr. Miller declined to name, was one that is commonly used by U.S. companies, which may have made the initial data theft harder to detect."
Domain registration doesn't necessarily mean anything; my employer owns similar domains, just in case we need them. This is standard crisis plan stuff these days.
I feel most for those who have young children. If you consider the long-term viability of the SSN over the lifespan of a person who is under the age of 5 today, they'll likely have been exposed to a breach containing their dox a few times over by the time they reach legal age, and that is likely a conservative estimate given the frequency of these events. The SSN is broken, and we're going to see a lot of pushback going forward as these people come of age.
TL;DR
If you're a parent, monitor your child's SSN for activity. Especially considering this is a healthcare breach, nobody is immune.
SSN is not some secret number - they're actually public information and can be obtained through legal channels with minimal effort. SSN is simply used as a "primary key" to differentiate one John Smith from another; it's not a private passcode or anything (even though many places treat it as one). The main benefit of an SSN is that it's a unique identifier of a person, but it's not sufficient for establishing identity and should not be used for that purpose.
But it is private, and it does unlock lines of credit. It is not simply a "primary key" as stated; whether or not that was the original intent is not the argument here, however. Recall that the LifeLock CEO plastered his SSN publicly and felt the repercussions. While I won't suggest you do that here, just know that if you did, the assumption is that bad things would happen in due time. Keeping SSNs private today is a risk organizations have to deal with, and it does impact people, sometimes to an extensive length.
Also, for clarification, the breach involved all of the information required to establish identity - which was my main point in the protection and monitoring of the SSN, with special regard to children/minors.
I heard about that, but when you publicly tell a bunch of hackers "come at me bro", you have to expect that kind of reaction.
But realistically, the cat is out of the bag with regards to SSNs. Legally you can obtain someone's SSN for very little money. If you go the illegal route, I'd be willing to bet that there is black-market identity data on over half of Americans. We really need to treat SSNs as about as secret as your e-mail address, because for all intents and purposes they are already. I wouldn't be surprised if online ad networks were using your SSN as a primary key in the background - the information is so easy to get and it would solve a lot of problems.
I guess I'm saying that sticking your head in the sand and pretending that SSNs are secure won't make them any more so. I'd doubt that a whole lot of SSNs were gathered in this hack that weren't already effectively disseminated widely in black market circles or marketing databases already.
Conversely, getting 100k SSNs vs. spending a significant amount of time and/or money on a handful is a different story. I'm not disputing that SSNs might be easy to obtain, but obtaining them at scale via breaches like this is apples and oranges. One is targeting a company because it is known to handle this information; the other is targeting an individual.
The difference is simply data dispersion. If a breach dump ends up on the public Internet everyone has access to that data, worst case scenario, infinitely. Individual targeting has a similar risk but the overall impact is smaller.
Not sure what your point is with the "head in the sand" comment - I happen to work for a security company in an engineering role. I'm not, in any way, defending security through obscurity or the way SSNs are used or (mis)handled. Reading through these comments it is apparent credit agencies don't even get it - and that is disturbing in itself.
But stating that you "doubt a whole lot of SSNs were gathered in this hack that weren't already effectively disseminated widely" is, in fact, a head-in-sand approach compared to doing everything you can to preserve and prevent in the meantime. I, personally, don't agree.
edit: added "compared" to second to last sentence for clarification
A company and its customers are both victims when it gets hacked, but when it has millions of customers, the external cost of poor security is so great that the bad outcomes seem inevitable.
However, there would be less harm from these kinds of breaches if consumers were not obliged to prove their own innocence whenever someone loaned money in their name without rigorously verifying their identity. If someone claims to have loaned a bunch of money to me without ever interacting with me, the recovery of that foolish loan should really not be my problem. It would still be bad for an insurance company to expose private information, but there wouldn't be such a tremendous incentive to steal, aggregate, and distribute this kind of data if there wasn't so much easy money in it.
Stolen credentials of the kind described in this breach are valuable largely because there is an asymmetry of effort favoring thieves: it's so much easier to borrow money in my name than it is for me to prove my innocence that the process of borrowing money with other people's identities can be done in bulk, and to some extent automated. This situation is only sustainable because the lenders have shifted the responsibility of authentication onto their customers, retroactive to the issuance of credit. Identity verification prior to extending credit to a debtor is trivial and automated, while retroactively proving fraud has a large cost to the debtor in actual human labor.
It seems like payment systems and consumer creditors have colluded to force a Faustian bargain on us: to gain access to utilities and payment systems you have to use credit, even if you don't want it. Therefore, if you want to be able to have municipal water, a place to live, or a phone, all of which are practically contingent on your credit rating even if you pay with cash, you have to protect your credit rating.
It would be nice to decouple payment systems from consumer credit, but we won't. Nobody, whether they are a business or the state, can afford to cross the credit card companies or the ratings agencies. They are business titans with big lobbying clout. If you get taken by thieves, it doesn't matter if you're a consumer, a big corporation like Target, or a government agency like the VA; you're going under the bus, because the status quo is too profitable to fix, and security is your problem. Nothing can be allowed to slow down the issuance of easy credit, or to create the slightest friction in CC transactions. Look at what just happened with chip and PIN: we can't even _opt into_ a PIN for CC transactions because it might confuse us. While we're on the subject, go read about what happens to people who try to build alternative payment systems that cut out MCVISA...
How many data breaches would there be if bad actors had to take the trouble to personally hassle each of the millions of people they had data on before they could take our money?
Probably some, but how much would we care who knew our SSNs or addresses if they couldn't easily be monetized?
SSN should not be worth anything because it's really not different from a name. Instead of saying "hi my name is exelius", you're saying "hi my name is 302-45-9522". You wouldn't trust me if I said the former, so why the latter?
I don't know any solution to this problem that would realistically be any better. Crypto isn't a good long-term solution -- any crypto we use today will be trivially cracked by a cell phone 20 years from now. Trust mechanisms seem better, but even then they can be simulated (see: twitter bots, facebook bots, click fraud, etc.)
Identity theft is far too easy today, but even if we had an effective system that could prove identity... I'm not sure we would want that societally. It basically guarantees big brother and wraps it in the guise of security.
Tldr: this is a tricky problem where the situation caused by the solution may actually be worse than the original situation.
> any crypto we use today will be trivially cracked by a cell phone 20 years from now.
This is completely false for correctly implemented crypto, unless mobile phones of the future are made of something other than matter and occupy something other than space. It could also be that our fundamental understanding of math and physics is incorrect. But the idea that just because of improved technology we'll be able to crack today's crypto is ludicrous.
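Rough numbers back this up. Here is a sketch of the arithmetic, assuming (very generously) a future phone that tries a trillion keys per second against a 128-bit key:

```python
# Expected time to exhaust a 128-bit key space by brute force.
keyspace = 2 ** 128                  # possible keys
guesses_per_second = 10 ** 12        # an extremely generous trillion/sec
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / guesses_per_second / seconds_per_year
print(f"{years:.2e} years")  # on the order of 10^19 years
```

Even sweeping only half the space on average, that is roughly a billion times the age of the universe, which is why breaks come from implementation flaws or mathematical advances, not raw speed.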
We really do need to find a better way of authenticating and identifying people. SSNs were never meant for this and they clearly don't fill the role successfully.
I've long been a proponent of the government announcing that they will publish everyone's SSN 2 years from now. Banks, insurance companies, the govt, etc have until then to figure better methods.
There are plenty of better ways already. A simple public/private keypair would go a long way towards this goal.
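As a sketch of how keypair-based identification could work, here's a toy Schnorr-style challenge-response in pure Python. The group parameters are illustrative only (a convenient Mersenne prime, generator 3), not production-grade crypto:

```python
import secrets

# Toy Schnorr-style identification with a public/private keypair.
# Parameters are for illustration only, NOT production use.
p = 2**127 - 1          # a Mersenne prime; real systems use vetted groups
g = 3                   # generator (assumed suitable for this demo)

# Key generation: the private key never leaves its holder.
x = secrets.randbelow(p - 2) + 1     # private key
y = pow(g, x, p)                     # public key, safe to publish anywhere

# Identification round: prover commits, verifier challenges, prover responds.
r = secrets.randbelow(p - 2) + 1
t = pow(g, r, p)                     # commitment sent to verifier
c = secrets.randbelow(p - 2) + 1     # verifier's random challenge
s = (r + c * x) % (p - 1)            # response; x is masked by the random r

# Verifier checks the response against the *public* key only:
# g^s == g^(r + c*x) == t * y^c  (mod p)
ok = pow(g, s, p) == (t * pow(y, c, p)) % p
print(ok)  # True: identity proven without ever revealing the secret
```

The point of the sketch: nothing secret is ever transmitted, so a database breach at the verifier leaks nothing an impersonator can replay, which is exactly the property SSNs lack.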
The problem is that everyone working on crypto products focuses on just developing technology, often attempting to make existing crypto systems easier to use for ordinary people. This is fine, but it's only a partial solution. We need to educate people who don't know and don't care about proper security. Nobody is going to use the most secure and easy to use crypto system if they don't see the benefit and think that a SSN or a driver's license is a good way to show their identity.
There is a lot of hand wringing about how hard it is to get ordinary people to take security seriously, but honestly this is a problem that will solve itself given enough time and enough breaches such as this. Until people understand that only secret information--which they and only they know--can be used to authenticate them and protect their information, this will just keep happening.
SSNs are already public information. There are numerous legal ways online to enter a person's name and 1 or 2 past addresses and get their SSN back.
Their main purpose is to serve as a primary key - many people have the same name, but SSN is unique. It should never be used for establishing identity - it's about as effective as asking someone for their middle name.
> We really do need to find a better way of authenticating and identifying people
What about not doing that at all? Hear me out. Not relying on "identity" would cost many orders of magnitude less. And besides, why should I care who you are-- what does your identity matter to me? And why should anyone else care?
It sounds like you're putting the bait out so somebody will disagree with you and then you'll explain the alternative to using identity as a form of authorization.
Can you save us a long and stupid discussion and simply explain your plan to practically deploy a better authorization system that will cost many orders of magnitude less?
How in the world did society even function before we had a unique number to identify people? It must have been utter chaos! Before 1935 (when the first SSN was issued), there was no way to borrow money, go to college, buy land, open any kind of account anywhere, or do any of the things that somehow we need a unique number for today.
I couldn't care less, but someone who is granting you credit has a legitimate reason to know who they're giving money to and how likely you are to pay it back, and who to chase after if you fail to do so.
Let's not pretend there aren't valid reasons for identity to be established.
Identity is also essential to verifying education. Colleges need to know who you are, and potential employers need to be able to identify that yes, this John Smith is the one that graduated from Stanford w/a 3.8 GPA and a degree in computer science.
If you live in California, you have the right to put a security freeze on your child's credit file. This will prevent one of the most serious types of identity theft with a stolen SSN. Other states might have similar laws.
I'd just like to point out that you can put a freeze on your and your children's credit files in any state.
In most states it will cost in the neighborhood of $10 for each of the three bureaus, unless you're already the documented victim of identity theft (ask me how I know this).
It is not possible to put a freeze on a child's SSN until they have a credit file. So unless they have been a victim of identity theft or already have a credit file (because they have a credit card from their parent, for example) this won't work. The only thing to do for parents with minor children is to monitor their SSN for activity.
Start with Trans Union, they have a child-specific application so you can find out if your child's SSN has been used by identity thieves: http://www.transunion.com/corporate/personal/fraudIdentityTh... If they don't have any reports, there's a good chance you're ok.
Just filled out the Trans Union site with dummy data to check, and none of the transactions are over SSL. So, to find out if my child's identity has been stolen I have to expose them to identity theft....
That shocked me as well. Doesn't exactly give you a warm fuzzy feeling about a major credit reporting agency's security measures. Do some of these companies just view these breaches/vulnerabilities as a joke?
Had never really thought of that angle. Being a father it does shake me up a bit more than usual as it seems my peers and I are quite used to these headlines being the norm. Imagining a future in which you have already been "doxed" prior to elementary school is a bit disturbing.
"A very sophisticated external cyber attack" which is a "security vulnerability"... The more "sophisticated" they claim this "cyber attack" is, the more I think it's a garden-variety SQL injection fuck-up.
They've done a bad job of protecting their customers' data, and an even worse job of explaining what actually happened.
That's really my problem -- that they're leaving their victims to speculate.
It's great that they "made every effort to close the security vulnerability". How's that going?
They hired Mandiant to "evaluate our systems and identify solutions based on the evolving landscape." Is "evolving landscape" CEO-speak for "Oh, god, we're still leaking customer data like a sieve, make it stop!"?
I'm just going to keep speculating, because if Anthem's not going to bother speaking plainly, I'm just going to assume the worst.
>It's great that they "made every effort to close the security vulnerability".
I love that quote; they try to cover their asses by saying "we closed the vulnerability." My question is: why did you wait until it was taken advantage of?
If we combine the Check Point firewall job posted on the Anthem Inc's website on 1/30/2015, add in the "discovery" on 1/29/2015, and think about Check Point's vulnerability to Heartbleed and Shellshock last year, one might also guess that a VPN stolen-credential compromise (like the major CHS breach last year) or a generic firewall compromise (via shellshock) are in the running as possibilities.
I hate the tone of that letter, has the typical PR tone all over it.
Basically to sum it up: "Your Social Security Number, Name, Birthdate, Address, and everything else needed to steal your identity is at risk. But don't worry! Your credit card number is safe."
The whois[1] record for http://anthemfacts.com shows the domain was registered in December. It took them months to create that PR report and prepare for damage control. They should have notified victims much earlier.
To give 'em the benefit of the doubt, perhaps they needed that particular domain in anticipation of some other instance where they dropped the ball, but your conclusion is more compelling.
So, today it was announced that they knew something was up as early as 12/10:
"The company also confirmed Friday that it found that unauthorized data queries with similar hallmarks started as early as Dec. 10 and continued sporadically until Jan. 27.
...
The hackers succeeded in penetrating the system and stealing customer data sometime after Dec. 10 and before Jan. 27, Binns said."
Who cares about credit card numbers when you are protected for free and your credit card can be reissued, unlike your SSN. I can't believe that in 2015 there's no modern way to verify and protect your identity! There are still so many stupid systems relying on the last 4 of your SSN or your DoB as authentication!
In Sweden we have a personal number. It's unique to every person but it's not secret at all. You use an official identity card or passport, or the electronic variant, to identify yourself. I'm guessing it's some kind of privacy issue behind there not being a similar system in the US? Because it works pretty well.
Just rename it Patriot ID and I'm sure people will come around :/
Seriously, if California is giving driver's licenses to whoever wants them (and who's 16 and can learn to drive), I don't see the harm sending out centrally verifiable identity cards. The costs of implementing such a system have gone way down over the years, but to be sure, bid out the job and finance it with surcharges on credit report checks, and any other transaction that involves verifying identity. There are surcharges everywhere else in the transaction. What probably concerns a lot of people is that they don't want the government to know every time they get a credit check. Not sure how you solve that, other than making this a GSE or legal monopoly.
Sweden's entire population is about the size of the Chicagoland area. Now imagine 320+ million people all living in different semi-autonomous states, all with their own bureaucracies and hundreds of taxing authorities. Now imagine proposing a national ID card to these people. Yeah, it's not that easy. The US isn't centralized like a lot of European nations. Governance of very critical things is done at the state level, and that isn't going to change anytime soon.
>I'm guessing its some kind of privacy issue behind there not being a similar system in US?
The social security system, which is a federal program, produced a unique number for all citizens. The states quickly started using this number in their own bureaucracy and everyone else followed (banks, etc). Now it's a de facto numeric identifier.
The big problem here is how easy it is to get credit in my name if you have my SSN, like it's the root password to my finances. Credit is far too easy to get in the States from a paperwork perspective. I should not fear other people getting my SSN. Banks and other organizations need to realize that if someone presents my SSN, that doesn't mean it's me. More numbers or pseudo-SSNs aren't the fix here. The fix is due diligence and better fraud protections.
Not to mention everyone already carries a unique identifier that's easy to verify - your fingerprint. I think SSN + fingerprint plus a letter sent to my home that needs to be signed should be the minimum to open any line of credit. SSN alone should be worthless.
It's also bothersome that PCI-DSS and other regulations treat credit cards like NSA secrets, which is fine as they should be encrypted, but there's no legislation or guidelines to make SSNs encrypted. SSNs sit as plain text in every database in the US. That's kind of scary and probably invites hacks.
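Even the standard library is enough to sketch one mitigation: store a keyed hash of the SSN rather than the raw digits, so it still works as a unique database key but a stolen table dump alone doesn't expose the numbers. The key material and SSN below are made up for illustration:

```python
import hashlib
import hmac

# Sketch: keep the SSN usable as a unique join key without storing it
# in the clear. SERVER_KEY is a hypothetical secret held outside the DB
# (e.g. in an HSM or secrets manager), never alongside the data.
SERVER_KEY = b"stored-in-an-hsm-not-in-the-db"

def ssn_token(ssn: str) -> str:
    """Deterministic keyed hash: same SSN -> same token, so it can be
    indexed and joined on, but without the key the token reveals nothing."""
    return hmac.new(SERVER_KEY, ssn.encode(), hashlib.sha256).hexdigest()

token = ssn_token("302-45-9522")   # made-up SSN from the comment above
print(len(token))                  # 64 hex chars, stable per input
```

This doesn't solve the deeper problem (an SSN still shouldn't grant credit by itself), but it would at least stop the "plain text in every database" failure mode.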
> Not to mention everyone already carries a unique identifier that's easy to verify - your fingerprint.
Actually a surprising number of people don't. For either medical (dermatitis), work (manual laborer wearing them out, operation room personnel scrubbing them out...) or age related reasons.
Well, the US has states, and that complicates things quite a bit for this type of thing. Most states will give you a driver's license number as an ID (with the appropriate "ID Only" mark).
There is also the Real ID Act[1], which is trying to establish federal ID requirements. This is going to cause some problems; look for it in the news. It is a DHS-enforced national ID law.
And yes, some of the folks in the US believe a national ID that is needed to buy, sell, or get a job would be a little too close to the Bible's mark of the beast. That gives quite a lot of friction to any national ID.
> some of the folks in the US believe a national ID that is needed to buy, sell, or get a job would be a little too close to the Bible's mark of the beast.
You're exaggerating a bit into a strawman.
I strongly oppose REAL ID (which, by the way, was around for a while before the DHS existed). And as a "tooth fairy agnostic" as Dawkins would say, I'm not the least bit concerned about the number of the beast.
What I am concerned about - and this goes the same for anyone else with whom I've discussed the issue - is, why is it the federal government's business at all when and how I "buy, sell, or get a job"? This seems like a tool for the federal government to get its grubby mitts into more stuff that's not within its enumerated powers.
Perhaps I should have separated that from the DHS stuff, but it is a belief some folks hold (enough who vote to have made a difference), and it goes to why we don't currently have a national ID. It is part of the history in the US, and the original poster is not from the US and wanted some reasons.
DHS is the agency currently charged with Real ID Act oversight. I'm not sure the who is important before the law is implemented.
A social security number (SSN) is the same idea. The difference is that not every citizen has one.
The problem with this number is that, similar to Sweden, it can be used as both an identity number and a password. This is a terrible thing to do. In your small country of homogeneous, socially protected people, you may not have a widespread problem of theft. In the US, however, there is an entire industry of stealing these numbers in order to take out new lines of credit, buy items at stores, and then not pay them off.
Privacy Rights Clearinghouse has a couple of excellent fact sheets on identity theft
https://www.privacyrights.org/how-to-deal-security-breach covers situations like this where there's been a security breach - how to order and monitor credit reports, put in a security freeze (which makes it harder to open up new credit cards or credit lines in your name), etc.
Good job issuing the release in the middle of the night to try to avoid the PR, too. What a trainwreck. Anthem basically passed out identity theft kits, and you can even sort by income to go after the rich ones first! (Why does Anthem know your income? It doesn't seem relevant to offer you health insurance products.)
> Why does Anthem know your income? It doesn't seem relevant to offer you health insurance products.
Your income is strongly correlated with your health. The lower your income the more likely you are to suffer from conditions such as obesity and diabetes, and the higher your mortality rate will be. Health insurers can use income figures as one factor when calculating the overall risk of a policy.
Yes, but they can use that as grounds to not pay a claim if they find out.
It's not illegal, but it violates the contract you sign with them and lets them off the hook for paying for things. Mind you, they'll still keep the money you paid them.
It makes me wonder. For several years the US government, Medicare, and private insurers have been pushing hard for health care providers to adopt Electronic Health Record systems. Now in the current phase "interoperability" of EHR systems is the catchword.
A question to ask is how secure is a large network of EHRs going to be? I don't know of data showing the frequency or severity of EHR security breaches but it would be surprising if there were not at least some. In any case, this kind of info would probably not be made available to the public, even though it should be.
Anthem's poor job of keeping confidential info private is especially distressing given the fact that many health insurers are also health care providers (e.g., hospital systems). Computer systems are very hard to operate securely, and after what happened, it's hard to trust these corporations will take the task seriously.
I've been quietly predicting that security of health information is going to become the Next Big Privacy Issue as the Internet of Medical Records grows ever larger.
This is why, increasingly, my view is that people should be in charge of their own data, and only what is specifically required to complete a transaction should be disclosed.
How to implement that technically becomes an interesting question, but between pocket spies with storage measured in tens of GB to TB, and various forms of key authentication, it seems that there are several possible options.
The whole discussion above regarding the false crime of "identity theft" (it's impersonation fraud facilitated by the data holder's negligence) is another point of increasing frustration for me.
I've been having a few related discussions with David Brin (a data cornucopian) on Google+. Brin, hardly to my surprise, responds with extreme derision.
Ultimately, the web is an attack vector that no one is immune to. Did you read about the Syria hack recently? Just a Skype chat with an attractive member of the opposite gender is enough to download a piece of malware masquerading as a picture you really want to see. While the human aspect has always been a key element of getting hacked, products that claim to distinguish the good from the bad are failing big time. And this has been the pillar of enterprise security (classifying good against bad) for the last 20 years, and it is starting to show its age.
"A question to ask is how secure is a large network of EHRs going to be?"
LOL, everyone 'on the inside' (by that I mean: at least anyone who works on computers, software or networks professionally) knows the answer to that question: it's going to be a train wreck. There is not a single person on this planet who really understands just 1% of the software, hardware and network infrastructure they/we work on every day; let alone how all of these interact. Computers, in 2015, are so complex, and our 'engineering' is so shoddy, that there is no way to safeguard networked data for anyone but the most determined and resourceful parties (by which I mean organizations of which there are but a handful in the whole world, and even those can't seem to keep secrets really secret.) Either way, there is no way at all that a non-IT focused organization like a healthcare insurer or provider will be able to keep data secure, and it's only a matter of time before incidents like this will become commonplace.
Consider: I have an in-law who is a partner in a largish practice in my area. We talked a bit about the business aspects of the practice when she became a partner because she had to put up with all the management crap all of a sudden and it was nice for her to vent to people who had similar issues. Anyway, point being I know a bit about the finance and management of a rather typical organization like that. These people will in the next 5 years somehow get access to our, by then, country-wide EHR system. They work on computers they buy from the local computer shop because the prices 'seem reasonable' and Jimmy who works there dates the secretary or whatever; so Jimmy (whose training was in swapping out hard disks and reinstalling Windows) is the one who 'maintains' their systems, too. Their cash flow is so precarious that some months they can't pay full wages to the partners. How will an organization like that ever be able to secure their network? Their 'security' consists of the cable guy setting a non-default WPA key on their wireless router.
And of course, they're required by the organization that maintains the EHR system to have 'regular auditing of their systems' to ensure security. Which consists of a couple of big 4 consultants who interview the management, tick some boxes on their checklist and make a 50-page CYA report out of that, without ever having touched a server or network.
I got out of the security game 10 years ago, and it was already scary back then. Maybe somebody who still works there will feel otherwise, but computer security (on the blue team) is like FEMA sending two guys with a shovel and a Walmart plastic bucket to a dike breach. (whereas on the red team it's shooting fish in a barrel, of course.) We are truly fucked, because too few people understand the magnitude of the problem and as long as there are no problems and you don't look too closely at the robustness of things, using computers is much cheaper than the alternatives.
No, you're about right. On the bigger corporate side, security is at least the big buzzword. The VP- and C-level positions want to be sure that action is being taken to improve security, but day to day requests to poke holes in the walls come in. That is not to even mention the huge, ancient systems that are in the middle of multi-year replacement processes that began before security was so important. That means at best the replacement will have the security best practices of the last few years stapled on awkwardly, but more likely nothing will change given the millions poured in already.
Why were they storing sensitive data of former customers?
It seems like a risk with no benefit, with the only justification being "all data could be valuable eventually so let's never delete even the personal sensitive data." Ironically, the data did eventually become valuable - to someone else.
It used to be common for insurance companies to look carefully at your coverage record, and if you had any time during which you were not covered, they'd say stuff like "Oh, that horrible cancer you have? Yeah, we're not paying for it because it was a 'pre-existing condition' that you got during that weekend you had between two jobs six years ago." And the law let them do that.
Health care in the US is . . . the phrase "utterly broken" isn't strong enough. We need a good fifteen syllable German word for how fantastically fucked up it is.
Of course, I'm only guessing at why Anthem hung onto the data. Probably it was totally selfish ("we can send them spam") or sheer laziness.
> they'd say stuff like "Oh, that horrible cancer you have?
> Yeah, we're not paying for it because it was a 'pre-
> existing condition' that you got during that weekend you
> had between two jobs six years ago."
Can you give a link to an article about this? I didn't know "pre-existing condition" worked like that.
The example is a little bit exaggerated, but basically if you have a major medical problem with huge bills, the insurance companies will look for ways to get out of paying. It may not be right or even legal, but the process of disputing claims is a confusing hassle. I can tell you from personal experience that it takes a lot of determination to dispute with an insurance company, and I can imagine a lot of people just give up.
Between jobs I had my wife's insurance cover me for a weekend. The HR types I talked to made very sure that I had documentation of unbroken coverage, saying that it was pretty common for insurance companies to argue pre-existing conditions for even a day of not being under some insurance umbrella.
I believe this is no longer allowed under relatively recent law.
IIUC, not since Obamacare went into full effect in 2014. One of the main provisions of it was that it became illegal to deny coverage based on pre-existing conditions.
They still need the records because one of the other effects of Obamacare is that it became illegal to not have health insurance, but it's broken in a different way now.
Don't forget having to deal with billing nightmares even after you're no longer using an insurance company. You still could end up fighting with them over their failure to pay for something.
I wonder why they needed to store SSNs online. They use SSNs to run a credit check and identify a person. Why then is it not stored encrypted and behind an air gap? They can use email and phone numbers to recover passwords. This is absolutely ridiculous.
They said in an email that they would pay for one year of credit protection for all those that they say were victimized. I don't think that they are capable or trustworthy enough to state who was victimized. It looks to me that they are just ignoring their responsibility for this attack. They also stated that they do not think health records have been compromised. I believe that they are just trying to avoid HIPAA fines. If so much personal data was stolen, it is likely that health information was also stolen. Generally, the patient's personally identifiable information is stored more securely than their actual health record.
Now I'm off to get credit protection for me, my wife, and my one year old. Does anyone have any advice on where to begin?
That's very vicious PR to me. By acknowledging that some other guys were hacked too, they implicitly say: "we're in the same boat, Anthem and their customers; we'll fight together." Which, at least for me, is completely wrong. They fucked up and they put the customers in the shit.
This is so infuriating. Good luck trying to do anything sensible like freezing your credit. Each credit bureau competes with the next for making the process as painful as possible: 500 errors, timeouts, invalid challenge questions, ambiguous or just broken password requirements. They don't give a fuck - you're not the customer. The customer is the debt industry that pays them for your info. Oh, and they each charge $10 to freeze your credit, but hey, you can mail them a copy of a police report and they might waive it. I have to shell out $30 because Anthem fucked up, assuming I can even get their broken-ass web applications to take my money.
Enterprise hacks are sadly becoming more common, and more sadly, it appears security is abysmal in all cases of large scale hacks. Many attacks of the past 24 months included simple exploits, social engineering or both. These are the kind of attacks a small group of rogue individuals can accomplish from computers anywhere in the world.
If small groups of individual "hackers" are capable of executing high-profile operations, just imagine the capabilities of nation-state cyberwarfare forces. The intelligence agencies of large governments employ thousands of professionals, all at least as qualified as the hackers behind these attacks. The difference is that government employees (or contractors!!) have no fear of legal repercussion restraining their operational activities.
When attacks like this move the market, any scrutiny of the attack must include analysis of market trading in the days following. Who profits from the drop in Anthem stock price? I imagine the SEC investigates this as part of due course, but one should consider that nation states are active investors in the stock market, whether directly or through hedge fund proxies. If a nation state can hack a large enterprise, and a nation state can trade large volumes of securities against that enterprise, then it follows that nation states can profit from cyber warfare.
The next five years are going to be very interesting.
Greeeeeeeeat. Anthem just became my health care provider. This fills me with confidence.
I'm especially unimpressed by Anthem's failure to hire a good copy editor for such a vital message, as evidenced by the painfully obvious error at the end of the penultimate paragraph: "share that information you" should read "share that information with you".
It's important to remember that many of the security folks at these companies are actually pretty good. This is more of a C-Suite problem than a security team problem - security people can't get much done if senior management doesn't prioritize a good information security program.
Makes you wonder, doesn't it: all the dollars that got spent, only to have something this blatant happen. It's not an industry I'd like to be in, with everything so compromised.
This is a big company, publicly embarrassed by a breach in data security and worried about their stock price. Now they're in damage control mode.
Call me a cynic, but my intuition says the whole page is a lie. My guess is the data was simply pilfered and copied to a USB stick by a disgruntled ex-employee or even a corruptible current one.
Curious if the HN community has any recommendations for identity-theft monitoring services?
Each time this happens, the breached company partners with some firm or another to offer "one free year of identity monitoring" or somesuch. e.g. ProtectMyID after the Target breach.
Go to any of the three credit reporting agencies and fill out the "fraud alert" form. That will place a hold on your credit report at all three credit agencies and anyone applying for credit under your name will be blocked. The entity that the person is applying for credit with has to contact you using the contact information you provide to verify that it is indeed you that's applying for credit.
It looks like it's sufficient to do it with one as the alert propagates to the other two. And it lasts 90 days.
"Ask 1 of the 3 credit reporting companies to put a fraud alert on your credit report. They must tell the other 2 companies. An initial fraud alert can make it harder for an identity thief to open more accounts in your name. The alert lasts 90 days but you can renew it."
I have had several scares, and each time I just call them and they give me the steps to verify if it has been breached. I like the terms of their contract better as well. Just be advised that this is identity insurance, not protection. It is designed to be reactive rather than proactive. I feel that everybody will have their identity stolen at some point, so instead of trying to prevent it, I chose to insure against the consequences of it happening. I feel it's a much better return on my investment, as a lot of the protection companies don't do much for you if they miss a theft.
P.S. A million dollar reimbursement clause really helps me sleep at night.
What a great deal for them. They know that in almost all cases you won't be liable for the losses suffered due to identity theft (e.g. loans, credit cards, etc), so they "insure" you that if, somehow, you ever are liable they'll pay it...
However, what are the situations where the person whose identity was stolen is asked to pay back the fraudulently obtained goods? I can think of no examples.
I believe the breached company partners with some firm and offers one year of identity monitoring because it is what is usually offered with the new "data breach" insurance available. By paying for the (opt-in) credit monitoring they are actually limiting their own liability.
Boy it sure does fill me with confidence to know that I am hearing about my personal information having been compromised through a news website rather than through the incompetent organization that allowed my information to be leaked in the first place...
When I woke up this morning they had sent me and my spouse an email overnight with the same letter that's posted on the anthemfacts.com site. Maybe they don't have your email address?
It would be responsible of them to alert their current students and alumni of the breach, because as of now, I don't think they have. At UCB, there is a medical facility on campus, and when you have SHIP insurance it almost feels as if your provider is the school itself. Dues are paid as part of tuition, most services can be rendered on campus, and most questions about your insurance can be answered at their front desk. Easy to forget that you're actually a client of Anthem.
How about if companies holding sensitive data were required to subject themselves to pen test attacks by properly incentivized third parties? Even if an attack were not successful the deliverables would quickly tell an experienced hand whether the attempt had been sufficiently rigorous. And that would allow for a good audit mechanism.
I know my credit card company allows me to set a password to prevent unauthorized access from someone who might have stolen this kind of data. Is there a similar system in place to make it harder for an identity thief to open accounts in my name or do other things that might damage my reputation?
You can freeze your credit. I don't claim it to be a comprehensive solution to a complicated topic like identity theft, but it helps, and is fairly easy and inexpensive to do.
Lifelock doesn't catch everything though. Neal Boortz, a former syndicated radio talk show host, had someone open a bank account and take Social Security distributions for several months, and Lifelock didn't catch it. http://www.wsbradio.com/weblogs/nealz-nuze/2014/sep/04/my-so...
Most companies focus only on perimeter defense and are soft bellies once opened up, or to an inside job #sonylearning
And as long as it is not common practice to sue companies and CxOs for negligence when they do not internally protect the data (no unencrypted data at rest), this will not change.
I'm in the process of getting Anthem to pay for my credit monitoring now. If you're in the same boat of not wanting to wait for a snail mail letter, call 1-877-263-7995 and escalate twice.
They were supposed to call me back and didn't. I ended up just setting a 90 day fraud alert on my credit profile[1] and with ChexSystems[2]. Both are free for people who believe their identity may be compromised; you don't need a police report. They also give you a link to get a free credit report. Both may be renewed after the 90 days expires.
So, to stress that they are not morons, they call this "sophisticated". You can safeguard your personal info as much as you want, but these big data warehouses will always leak it!
They're fantastically better than any other insurance I've had. What they cover for my family is easily another income every year. What did you have before?
edit: I had group insurance with Unity for like 8 years. Never once had a scrap of paper to review or a bill to squabble over; everything always covered. Now I'm on a group plan for Anthem and I almost choked when I received the summary of benefits which was greatly reduced in scope.
Anthem is very schizophrenic about their group vs. individual plans. I was covered by them under Google's group plan and they were easily the best insurance company I've had. They paid for all sorts of things that other insurers wouldn't bother for, no questions asked, and were great to deal with.
Then I tried continuing with one of their individual plans after leaving, and they were easily the worst insurer I've ever dealt with. Things like not informing me that my PCP (who'd certainly been part of the group plan) was not part of the individual plan's network, or finding out that the nearest available PCP who was is 40 miles away (I live in a major metropolitan area with several million inhabitants). Not being able to change my address through the website - they have a form up that doesn't work, along with a message saying "If this form doesn't work, please call ..." Taking hours to get ahold of a human on the phone. Billing hassles. Sending out "your coverage is ending in 30 days because of non-payment" notices even though I'd faithfully paid online on-time.

I'm actually quite glad that their terms are "Your policy ends automatically when you don't pay", because they've made it pretty much impossible for me to pay them: their online billpay refuses to take my payment (failing with no error message), which I suspect is because my address changed, but their website makes it impossible for me to update my address; calling them takes more time than I'm willing to invest; and I don't have any trust that if I send them a check it will actually be credited to my account. I just started a policy with Blue Cross Blue Shield instead, which has been a joy in comparison, and let Anthem lapse.
If you read the Yelp reviews, they're far worse than my situation - folks being promised coverage for hospital stays and then denied coverage afterwards, and multiple lawsuits outstanding against them.
The cynic in me thinks that Anthem is basically unable to continue as an operating business, and so they're triaging accounts. The big group accounts like Google get top-of-the-line service, so that they can keep them and hopefully bring in enough revenue to tide the company over. The individual accounts - anything that's small enough to (presumably) not have many other options and unable to sue - get screwed. So if you're in one of those groups, be thankful; if you're an individual, start looking elsewhere.
The question is not whether it will be efficient but whether it will happen.
It will mean licenses and certifications for the right to store personal data, regulations to comply with in terms of system architecture, with audits and penalties for breaches. More bureaucracy and processes. You won't create a website over a weekend.
Currently any idiot can create a database and store sensitive information without even knowing what a SQL injection or a rainbow table is.
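For the curious, both of those failure modes have cheap, well-known fixes: parameterized queries for SQL injection, and per-user salted key stretching for rainbow tables. A minimal sketch using only the Python standard library (the table and column names are illustrative, not from any real product):

```python
import hashlib
import hmac
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY, pw_hash BLOB, salt BLOB)")

def add_user(name, password):
    # A random per-user salt means a precomputed rainbow table is useless:
    # the attacker would need a separate table per salt value.
    salt = os.urandom(16)
    # Key stretching (many PBKDF2 iterations) further slows brute force.
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # The "?" placeholders are parameterized: the driver passes values
    # separately from the SQL text, so injection in `name` can't alter the query.
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, pw_hash, salt))

def check_user(name, password):
    row = conn.execute(
        "SELECT pw_hash, salt FROM users WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        return False
    stored_hash, salt = row
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored_hash)

add_user("alice", "correct horse")
```

With parameterization, a classic payload like `alice' OR '1'='1` is treated as a literal (nonexistent) username rather than as SQL, and the stored hashes are worthless without also cracking each salt individually.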
Most professions are regulated: architects, doctors, pilots, farmers, bankers, even restaurants! And each time, regulations came as a result of fk-ups: banks or homes collapsing, conmen selling snake oil, food poisoning, etc. IT is the only sector where mild amateurism is not only acceptable but the norm rather than the exception.
The security products aren't great, true, but the people working as security engineers in companies are often quite decent.
It seems to me that it's the usual issue: people don't see the need for protection until they've been hit. Security seems like a cost that doesn't make sense to them. They don't even care anymore.
I've actually had the exact opposite experience. Security Engineers at most companies have no idea what they're doing beyond running the scanner and parroting whatever it spits out.
"The scanner says your server is vulnerable"
"Ya, we patched that vulnerability weeks ago"
"The scanner says it's vulnerable"
"OK...." *looks at scanner* "Oh, it's just reading the banner, and not taking into account that the major rev didn't change. It's patched."
"The scanner says it's vulnerable"
"OK... so what if I change the banner so it doesn't pick it up as vulnerable?"
"The scanner says it's secure now, thanks!!"
The guys who know their stuff in security generally have a desire to actually get paid well, and have time to do legitimate research. They don't really have a desire to sit in a corporate job dealing with the mountains of bureaucratic bullshit that goes along with security in a corporation. Do you really want to be the guy who gets thrown under the bus because you had to disable strong passwords because the CEO was angry he needed both upper and lower case letters in his AD password?
>Do you really want to be the guy who gets thrown under the bus because you had to disable strong passwords because the CEO was angry he needed both upper and lower case letters in his AD password?
Except those strong password policies don't strengthen security at all, neither in theory nor practice. Congratulations, the CEO's password is now "qweRTY" and it's written on a yellow sticky-note on his monitor.
A post-it note on his monitor with a secure password (they generally require a number or special character, as well as being at least 8 characters long) is actually better security than an extremely simple password. I can have him lock his office door; I can't prevent someone from brute-forcing the password he's re-used on every site on the internet.
I literally tell my parents to have a secure password they write on a post-it note. The odds of someone breaking into their house for their password is about 1/10000th the odds of someone cracking their simple password on a website and getting the keys to the kingdom.
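The back-of-the-envelope arithmetic bears that out. A rough sketch (the guesses-per-second figure is an assumed, illustrative rate for offline cracking of a fast hash; real rates vary widely by algorithm and hardware):

```python
# Rough keyspace arithmetic: worst-case time to exhaust a password space.
GUESSES_PER_SEC = 1e10          # assumed offline cracking rate (illustrative)
SECONDS_PER_YEAR = 3600 * 24 * 365

def years_to_exhaust(alphabet_size, length):
    """Years to try every password of the given alphabet and length."""
    return alphabet_size ** length / GUESSES_PER_SEC / SECONDS_PER_YEAR

# "qweRTY"-style: 6 characters of mixed case (52 symbols) falls in seconds.
weak = years_to_exhaust(52, 6)

# A random 12-character password from ~72 printable symbols -- the kind you'd
# have to write on a post-it -- holds out for tens of thousands of years.
strong = years_to_exhaust(72, 12)
```

The gap is around twelve orders of magnitude, which is why "random password on a post-it in a locked house" beats "memorable password reused everywhere" for most threat models.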
True that about the security engineers, but they are at the mercy of products that claim to distinguish good from bad and this has never worked, IMHO. How the hell can you write signatures against malware/documents/web-sites/files/attacks/blah when there's so much diversity and quantity of stuff to keep up with?
Disclaimer: I built the first IPS to be commercialized and yes we used signatures amongst other things.
Turned 26 in January. Purchased Anthem medical insurance so I don't get penalized under Obamacare. I was surprised how expensive it is, but bit my tongue and continued. Anthem gets hacked. My name + SSN is probably somewhere it shouldn't be; ugh.
You are required to give your SSN if accepting the Obamacare subsidy; look for it in the "fine print," which doesn't come close to meeting the intent of the Federal Privacy Act of 1974 (Public Law 93-579). Unfortunately, the Obamacare subsidy is a form of government assistance, and the healthcare.gov operation is a joint venture between the government and non-government entities. If you are personally paying for your coverage, you "voluntarily" gave it to them when you filled out their application.

I personally haven't been known to any insurance company, especially health and life, by my SSN since 1979! Remember, they can't lose (or be hacked out of), misplace, abuse, or misuse what they don't have. All government agencies (but not anyone else) are required to follow the Federal Privacy Act of 1974 and its requirements prior to you disclosing your SSN to them. Unless someone is paying you a salary, wages, or interest, don't give up your SSN! Don't accept a lifetime of liability and potential ID theft for someone else's 3 seconds of convenience. You are not numbered like a head of livestock. Stand your ground and take your business elsewhere when dealing with a non-government entity that insists on having your SSN! Information travels in one direction, and you're not going to get it back.