To be honest, I feel bad for the engineering team at Equifax. The vulnerability that compromised their system was a bug in an open-source Java library, Apache Struts, and security researchers only noticed it a few days ago. It seems that the Equifax team had very little time to react and update their software. In some sense, I feel that more blame should be placed on the engineers who built the highly popular open-source software, not the Equifax team. A large number of Fortune 100 companies were also exposed to the same vulnerability, simply because they trusted a widely used library.
Makes me wary of trusting other big open-source libraries, but since rebuilding every part of the stack from scratch is infeasible and unproductive, we don't have much choice but to use them.
Technical announcement:
Severe security vulnerability found in Apache Struts using lgtm.com (CVE-2017-9805):
https://lgtm.com/blog/apache_struts_CVE-2017-9805_announceme...
There is some debate as to which Struts exploit was used. If it was the one from September 2017, then you make a valid argument. However, if the exploit used was years old, then the fault clearly lies with Equifax for not keeping their servers up to date.
Also, didn't the Equifax breach happen in May 2017? If so, I fail to see how the September 2017 exploit plays into this, unless it was in the wild months before it was published in September 2017, which I find hard to believe.
> In some sense, I feel that more blame should be placed on the engineers who built the highly popular open-source software, not the Equifax team.
I completely disagree. It is open source for a reason. If you find a bug in it, fix it and everybody wins. Otherwise, nobody would ever publish any code or software, because you would get sued if you made any mistake. On top of that, the software is free. So you basically want to blame a group that gave you something for free, which you used to make big money, and also hold them liable for the consequences of their mistakes.
I also feel bad for the engineering team at Equifax. But on the other hand, you have to take into account that any software you employ could have a security flaw in it. That is why you should have additional safeguards in place and no single point of failure. And this is especially true if your whole business depends on that data!
But why were 143 million records of personal consumer information stored in a way that they could be accessed via a vulnerability in a web server in the first place?
I would have expected this type of data to be stored in such a way that even if someone got access to one of their web/application servers they wouldn't be able to dump 143 million records from it without serious red flags going off.
It doesn't help if that data is being accessed all the time by applications. You just have to break into one application in order to exfiltrate the data, or to get the decryption method along with the encrypted data.
'Encryption at rest' only works for data that is not actively used, like backups or if a physical storage device is stolen.
A better additional safeguard is to have quotas and alarms in place for data access. Is data being accessed sequentially in an application environment where data is usually accessed randomly? Is data access bound to individual credentials, and do individuals access more data than usual?
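A minimal sketch of what such quota and anomaly alarms could look like. Everything here is made up for illustration (the class name, the thresholds, the two detection heuristics); a real deployment would hook into the database's audit log rather than live in application code:

```python
from collections import defaultdict

class AccessMonitor:
    """Toy sketch: flag a credential that reads more records per window
    than its quota allows, or that scans record ids sequentially in a
    workload that is normally random-access. Illustrative only."""

    def __init__(self, quota, seq_threshold=5):
        self.quota = quota                   # max records per credential per window
        self.seq_threshold = seq_threshold   # consecutive ids before a scan alarm
        self.counts = defaultdict(int)       # records read per credential
        self.last_id = {}                    # last record id seen per credential
        self.seq_run = defaultdict(int)      # length of current sequential run
        self.alarms = []                     # (credential, reason) pairs

    def record_access(self, credential, record_id):
        # Quota check: alarm once, the first time the quota is exceeded.
        self.counts[credential] += 1
        if self.counts[credential] == self.quota + 1:
            self.alarms.append((credential, "quota exceeded"))

        # Sequential-scan check: record ids increasing by exactly 1.
        if self.last_id.get(credential) == record_id - 1:
            self.seq_run[credential] += 1
            if self.seq_run[credential] == self.seq_threshold:
                self.alarms.append((credential, "sequential scan"))
        else:
            self.seq_run[credential] = 0
        self.last_id[credential] = record_id


# Example: one compromised app credential dumping records in order.
mon = AccessMonitor(quota=1000, seq_threshold=5)
for rid in range(1, 11):
    mon.record_access("app-server-7", rid)
# mon.alarms now contains ("app-server-7", "sequential scan")
```

The point isn't the specific heuristics; it's that the database layer, not the web application, is where "143 million rows just left the building" should trip an alarm.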
I think there is actually potential for new database products or add-ons that can reduce the impact of breaches in the vicinity of these 'core databases'.
So writing your own software is "unproductive", but you also want to put the blame on the people who made a framework available? Do you want open source to go away? Or do you think that companies that protect such valuable information should be spending more on security assurances?