
As someone who works in this space, I can tell you: it's because big companies buy Cyber Security Insurance, and the insurance forms have a checkbox along the lines of "do you run Endpoint Security Software on all devices connected to your network?", and if you check the box you save millions of dollars on the insurance (no exaggeration here). Similarly, if you sell software services to enterprises, the buyers send out similar due diligence forms which require you, as a vendor, to attest that you run Endpoint Security Software on all devices, or else you won't make the sale. This propagates down the whole supply chain, with the instigator being cyber security insurance costs, regulation, or simply perceived competence, depending on the situation.

So it's not necessarily government regulation per se, but a combination of things:

1. It's much safer (in terms of personal liability) for the decision makers at large companies to follow "standard industry practices" (however ridiculous they are). For example, no one outside of CrowdStrike will get fired for this incident, precisely because everyone was affected: "How could we have foreseen this when no one else did?"

2. The Cyber Security Insurance provider may not cover this kind of incident, given there was no breach, so as far as they are concerned, installing something like CrowdStrike is always profitable.

3. The insurance provider has no way to effectively evaluate the security posture of the enterprise they are insuring, so they rely on basic indicators such as this checkbox, which completely eliminates any nuance and leads to worse outcomes (but not for the insurance provider!)

4. "Bad checkboxes" propagate down the supply chain the same way that "good checkboxes" do (eg. there are generally sections on these due diligence questionnaires about modern slavery regulation, and that's something you really want to propagate down the supply chain!)

Overall I would say the main cause of this issue is simply "big organisation problems". At a certain scale it seems to become impossible for everyone within the organisation to communicate effectively and to make correct, nuanced decisions. This leads to the people at the top seeing huge (and potentially real) risks to the business because of their lack of information. The person ultimately in charge of security can't scale to understand every piece of software, and so ends up having to make organisation-wide decisions with next to no information. The entire thing is a house of cards that no one can let fall down because it's simply too big to fail.

Making these large organisations work effectively is a very hard problem, but I think any solution must involve mechanisms that allow parts of the business to fail without taking everything else down: allowing more decisions to be taken locally, but also allowing the responsibilities and repercussions of those decisions to be felt locally.




Yes, "cyber insurance" is a common driver behind these awful security and system decisions. For example, my company requires password changes every 90-days even though NIST recommends against that. But hey, we're meeting insurance requirements!



