CrowdStrike, though, is not part of a system of engineered design.
It's a half-baked rootkit sold as a fig leaf for incompetent IT managers so they can implement "best practices" on their company's PCs.
The people purchasing it don't actually know what it does; they just know it's something they can invest their cybersecurity budget into and an easy way to fulfill their "implement cybersecurity" KPIs without needing to do anything themselves.
Exactly, and this is why I've heard the take that the companies who integrate this software need to be held responsible for not having proper redundancy. And while it's a fine take, we need to keep laying blame squarely on CrowdStrike and even Microsoft. They're the companies that drum the beat of war every chance they get, scaring otherwise reasonable people into thinking that the cyber world is ending and only their software can save them, pushing stupid compliance and security frameworks, and straight-up lying to their prospects about the capabilities and stability of their product. Microsoft sets the absolute dog-water standard of "you get updates, you can't turn them off, you can't stagger them, you can't delay them, you get no control, fuck you".
Perhaps true in some cases, but in regulated industries (e.g. federally regulated banks) a tool like CrowdStrike addresses several controls that, if left unaddressed, result in regulatory fines. Regulated companies rarely employ home-grown tools due to maintenance risk. But now, as we see, these rootkit-like (or even just agent-based) security tools bring their own risks.
I'm not arguing against the need to follow regulations. I'm not familiar with what specifically is required of banks. All I'm saying is that CrowdStrike sucks as a specific offering. I'm sure there are worse ways to check the boxes (there always are), but that's not much of an endorsement.
My rant is from the perspective of an org that most certainly was not a bank (B2B software/hardware), and there was enough of a ruckus to tell it was not mandated there by any specific regulation (hence incompetence).
A properly used endpoint protection system is a powerful tool for security.
It's just that you can game compliance by claiming you have certain controls handled because you purchased CrowdStrike... then leave it improperly deployed and without an actual security team in control of it (maybe there will be a few underpaid and overworked people getting pestered with BS from management).
I think a lot about software that is fundamentally flawed but gets propelled up in value due to great sales and marketing. It makes me question the industry.
It's interesting that this is being referred to as a black swan event in the markets. If you look at the SolarWinds fiasco from a few years ago, there are some differences, but it boils down to problems with shitty software having too many privileges being deployed all over the place. It's a weak monoculture, and eventually a plague will cause devastation. I think a screw-up for these sorts of software models shouldn't really be thought of as a black swan event, but instead an inevitability.
That is how all of these tools are. I have always told people that third-party virus scanners are just viruses that we are ok with having. They slow down our computers, reduce our security, many of them have keyloggers in them (to detect other keyloggers). We just trust them more than we trust unknown ones so we give it over to them.
CrowdStrike is a little broader of course. But yeah, it's a rootkit that we trust to protect us from other rootkits. It's like fighting fire with fire.
That's my experience as a software engineer and unfortunate PC user in an org where every PC was mandated to run CrowdStrike. Fortune 1000.
It ran amok on every PC it was installed on. Nobody could tell exactly what it did, or why.
Engineering management attempted to argue against it. This resulted in quite a public discussion, which made obvious the incompetence of the relevant parties in IT management regarding its implementation.
Not _negligently_ incompetent. Just incompetent enough that it was obvious they did not understand the system they administered from any set of core principles.
It was also obvious it was implemented only because "it was a product you could buy to implement cybersecurity". What this actually meant from a systems architecture point of view was apparently irrelevant.
One could argue the only task of IT management is to act as a dumb middleman between the budget and service providers. So if it's acceptable that IT managers don't actually need to know anything about computers, then the charge of incompetence can of course be dropped.
If you realize something horrific, your options are to decide it's not your problem (and feel guilty when it blows up), carefully forget you learned it, or try to do something to get it changed.
Since the last of these involves overcoming everyone else's shared distress in admitting the emperor has no clothes, and the first of these involves a lot of distress for you personally, a lot of people opt for option B.
> overcoming everyone else's shared distress in admitting the emperor has no clothes
I don't disagree, but why do we react this way? Doesn't knowing the emperor has no clothes instill a bit of hope that things can change? I feel for the people who were impacted by this, but I'm also a little bit excited. Like... NOW can we fix it? Please?
The higher up in large organizations you go, in politics or employment or w/e, the more what matters is not facts but avoiding being seen to have made a mistake. So you become risk-averse and just ride the status quo, unless it's an existential threat to you or a misjudgment of someone else's you can capitalize on.
So if you can't directly gain from pointing out the emperor's missing clothes, there's no incentive to call it out, and there's active risk in calling it out if other people won't agree. Moreover, this gives those with political capital in the organization an active incentive to suppress anyone whose pointing it out would embarrass them for never admitting the problem everyone knew was there.
(This is basically how you get the "actively suppress any exceptions to people collectively treating something as a missing stair" behaviors.)
I've not seen that at my Fortune 100. I found others willing to agree, and we walked it up to the most senior EVP in the corporation. Got face time and we weren't punished. Just, nothing changed. Some of the directors who helped walk it up the chain eventually became more powerful, and the suggested actions took place about 15 years later.
Sure, I've certainly seen exceptions, and valued them a lot.
But often, at least in my experience, exceptions are limited in scope to whatever part of the org chart the exceptional person is in charge of, and the usual dynamic still governs everything outside of that box...
It's a nice idea, but has that worked historically? Some people will make changes, but I think we'd be naive to think that things will change in any large and meaningful way.
Having another I-told-you-so isn't so bad, though - it does give us IT people a little more latitude when we tell people that buying the insecurity fix du jour increases work and adds more problems than it addresses.
Sure, on long enough timescales. I mean, there's less lead in the environment than there used to be. We don't practice bloodletting anymore. Things change. Eventually enough will be enough and we'll start using systems that are transparent about what their inputs are and have a way of operating when the user disables one of those inputs because it's causing problems (e.g. CrowdStrike updates); something like the sketch below.
I'd just like it to be soon because I'm interested in building such systems and I'd rather be paid to do so instead of doing it on my off time.
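To make "transparent about inputs" concrete, here's a toy sketch of the idea; every name in it is made up for illustration, not taken from any real product: each external feed the system depends on is declared explicitly, and the operator can disable any one of them while the rest keeps running on last-known-good state.

```python
# Toy sketch, hypothetical names throughout: a host declares its external
# input feeds and degrades gracefully when the operator disables one.
from dataclasses import dataclass, field


@dataclass
class InputFeed:
    name: str             # e.g. "endpoint_agent_channel_updates" (hypothetical)
    enabled: bool = True  # operator can flip this when a feed misbehaves


@dataclass
class Host:
    feeds: list[InputFeed] = field(default_factory=list)

    def apply_updates(self) -> None:
        for feed in self.feeds:
            if not feed.enabled:
                # Skip the disabled input and keep running on last-known-good state.
                print(f"skipping disabled feed: {feed.name}")
                continue
            print(f"applying updates from: {feed.name}")


# Usage: after a bad channel file, the operator disables that one feed
# instead of the update being force-pushed with no control.
host = Host(feeds=[InputFeed("os_patches"), InputFeed("endpoint_agent_channel_updates")])
host.feeds[1].enabled = False
host.apply_updates()
```

The point isn't the code itself; it's that the list of inputs is visible and each one is individually controllable, which is exactly what the current forced-update model denies you.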
There are way too many horrific things in the world to learn about... and then you realize you can't do something about every one of them. But at least you can tackle one of them! (In my case, antibiotic resistance.)
My issue is WTF do sooooooo many companies trust this 1 fucking company lol, like it's always some obscure company that every major corporation is trusting lol. All because CrowdStrike apparently throws good parties for C-level execs lol
https://www.youtube.com/watch?v=NcOb3Dilzjc
Interconnected systems can fail spectacularly in unforeseen ways. Strange that something so obvious is so often dismissed or overlooked.