It's not just CrowdStrike – the cyber sector is vulnerable (ft.com)
123 points by jmsflknr 84 days ago | hide | past | favorite | 127 comments



Security: You must install Microsoft Defender on all Linux VMs

Devs: Ugh...why?

Security: For safety!

Devs: Fine, we won't argue. Deploy it if you must.

A few moments later...

Devs: All of our VMs are slow as crap! Defender is using 100% of the CPU!

Security: Add another core to your VMs. Ticket closed.

Management: Why are our developers up 30% on their cloud spend!?


We had cylance take out all of our kubernetes clusters a few years ago.

The whole cybersecurity concept of installing third party mystery meat in the kernel controllable over the internet by a different company seems contrary both to good security practices and software quality assurance, immutable production architecture and repeatable builds.


Third party mystery meat is mostly intended for scapegoating if a problem does occur.


But sometimes the goat escapes and then breaks everything. Then somebody's got your goat.


Escape Goat!


and all microsoft have to do to increase cloud revenue is make defender chew up 5% more CPU every few months

directly incentivised to make shit software


Isn't this kinda the hardware model for Apple devices too? Eg batterygate


Batterygate was just making sure the phone doesn't shut down suddenly as the battery deteriorates and becomes less capable.


People say this like it lets Apple off the hook. Let me explain why it doesn't.

Apple had full control over the whole phone's software stack, in a very good way, meaning they built a good mobile OS that had good systems for power management and an app lifecycle that could actually kill apps at will to maintain efficiency, without disrupting the user.

With this, they decided to ship smaller batteries so they could make slimmer phones.

Except, they used garbage batteries. They were so small (1600mAh on the iPhone 6) that normal wear and tear of a few years degraded them to the point that the battery chemistry could not keep up with normal processor frequency and power ramping.

Apple started getting a lot of complaints because people were understandably upset that their 2-3 year old phone couldn't run for more than an hour off the charger. Apple didn't like increasing support load, even though they weren't covering anyone's battery replacement. Instead of putting out a press release that they had shipped sub-standard batteries in their phones, and offering free battery replacements with a new battery that wouldn't have the same problem in another 2-3 years, they included code in the new version of iOS to SIGNIFICANTLY slow down your 3 year old or less phone.

Apple made a product that deteriorated way too quickly, and then tried to hide it. That's batterygate. If LG sold a fridge that would die after five years because of compressor fatigue and then silently updated their fridges to not operate colder than 45 degrees F to extend the life of the compressor, I would hope you would be pissed at that, right?

A reminder that the iPhone 6 was also "Bendgate", which internal apple memos showed they knew was a serious problem before they sold it, and then claimed two years after release they only had 9 complaints of phone bending and that it wouldn't bend in normal use.


> If LG sold a fridge that would die after five years because of compressor fatigue and then silently updated their fridges to not operate colder than 45 degrees F to extend the life of the compressor, I would hope you would be pissed at that, right?

Apple sycophants are willing to put up with any bullshit from Apple. It is very tiresome to argue against blind faith.

Any other company would face incredible scrutiny if that happened. Imagine if MS did that to their surface devices. And this level of scrutiny from consumers is healthy.


It has a simple reason: every other device is much worse. All my Lenovo laptop batteries died within 2 years, while my MacBook from 2015 still gets 3 hours of battery life. It was expensive, but now with Apple Silicon the MacBook has the best power-to-performance ratio by far.

And it's not like other vendors aren't full of crap either. I had a Dell laptop with a clearly broken display that they never acknowledged or repaired - and many other problems of all kinds. Apple always had the fewest (but obviously not zero) problems and the best build quality.


> It has a simple reason, any other device is much worse. All my Lenovo laptop batteries died in 2 years

Pff. I had a macbook battery (2018 model, brand new, issued by my employer at the time) that died in 1.5 years. Died in the sense that I couldn't use that crap unplugged for more than 10 minutes.

Since every place I work issues me a MacBook, I am very experienced with these luxury toys, and I wouldn't ever buy one for myself. I actually think Thinkpads are much better.


As I said, it's obviously not zero issues with Apple either. But this particular issue is an exception imho; my own experience, and that of everyone around me with a Mac, is that battery lifetime is much better than any other brand they tried. Also, 1.5 years is within the 2-year warranty (in the EU) - if you're around here, try to have it replaced. I've had only good experiences with Apple customer care - much better than HP, Dell and Lenovo. Again, while it wasn't always perfect and sometimes required visiting again, at least they really wanted to help - unlike the other vendors.

BTW you're saying it was 2018 model, and employer issued, so if I'm correct in assuming it was a top model Intel CPU, these really were chewing through the batteries because of the heat. It's very different with i5, less powerful i7 and Apple Silicon.

I really don't think anyone is claiming that Apple is perfect - it's just that the experience with other vendors is so, so utterly bad. For example ThinkPads - nice performance and cheap, I give you that. But the non-existent customer care (for consumers, not enterprise), the build quality, the bad sound and displays and the absolutely terrible touchpad make me avoid it. Also Windows - and I never got Linux properly working on a ThinkPad as well as MacOS does on a MacBook, even though they claim it's Linux certified.


You’re the first person I’ve ever seen making these claims. Do you have anything to back them up?


To a point. After a while they're wasting hardware and another vendor will undercut them. After some decades people will start leaving


That is very long term. Not interesting to shareholders.


I work supporting software that processes millions of small files a day, with a lot of these scripting languages. The speed difference in total iops where AV is installed vs not installed is huge. 30-50% loss is no joke.


The problem is that the AV does not increase security.


Even worse, at least aesthetically, than the AV is the pacct stuff to log 1 line for every single syscall. Talk about sublinear scaling. And for what, no one can say.

Actually, even worse than that: we had to install AV on all the images for the ephemeral map-reduce/Hadoop clusters we spun up, but the instances were gone by the time the AV's registration process for new computes had completed. And in AWS accounts with maybe 63 IPs and around 600 EC2 instances a day, the AV used the IP address as the primary key for its "list of compute in the VPC," so it stitched together totally unrelated instances as if they were one continuous machine. I guess it would eventually have been fixed, but the bad security data wasn't a real concern for a devops team building things out as rapidly as possible, nor were EC2 instances created in a sealed-off VPC, living only a few tens of minutes (or hours at best), a serious concern for the actual security people. Just a checklist solution hitting a novel environment.


That's a management problem. IT security didn't communicate that to the finance folks. Microsoft didn't communicate that to IT security. And if they're on Azure, it's more money for Microsoft.


Oh finance doesn't care. Security is paramount and not something you can save money on, is the mindset. Besides, the real costs of this is allocated to many individual cost centers so in the bigger picture you won't see it.


Speaking of which, does anyone use offline Microsoft Defender updates (maybe to have direct control of canary/staggered updates)? https://learn.microsoft.com/en-us/defender-endpoint/linux-su...


there was a production incident at a customer that was this exact scenario


Cyber Security is a matter of national security, but currently we sacrifice our national security for the convenience of companies.

The disconnect is that companies are both (1) the only entity in control of their system and how it is tested and (2) not liable if a security breach does happen.

I believe we need to enable red teams (security researchers) to test the security of any system, with or without permission, so long as they report responsibly and avoid obviously destructive behavior such as sustained DDoS attacks.

A branch of the government, possibly of the military (the Space Force?) could constantly be trying to hack the most important systems in our nation (individuals and private companies too). The bad guys are doing this anyway, but hopefully the good guys could find the security holes first and report them responsibly.

Again, currently this doesn't happen because it would be embarrassing and inconvenient for powerful companies. We threaten researchers who do nothing more than press F12 (view HTML source) with jail time and then have our best surprised Pikachu faces ready for when half the nations data is stolen every week or major systems go down. Actually, we don't make faces at all, half the nation's data is stolen every week--no, actually we don't even take notice, we just accept it as the way things have to be. Because, after all, we can't expect companies to be liable, but we can trust companies to have exclusive control over the testing of their security. How convenient for them.


CISA offers services to public and private providers of infrastructure deemed critical that include pen testing, but they don't have the resources to offer it to all who want it.


Isn't this what the NSA is for? Also, I think we have plenty of reason to believe they regularly try to penetrate powerful companies, they just don't necessarily tell us when they do.


I've never heard anything about the NSA telling a company they have a security vulnerability. Have you?


Not the NSA, but I know of at least one time the FBI did: https://arstechnica.com/security/2024/01/chinese-malware-rem...



That was probably because the NSA and other critical government agencies use Microsoft Exchange and it was a bug found in the wild.

But if it wasn't a bug found in the wild, can you imagine the fights between the NSA red and blue teams on whether to alert Microsoft about it?


Probably not a lot at all tbf


I don't have citations on hand, but it's commonly held that NSA fixed the S-boxes in IBM's "Lucifer" cipher design for DES to improve its resistance to (then publicly-unknown) differential cryptanalysis.

Of course they also crippled the key length to 56 bits...


They absolutely have bugs up their sleeve, but if they tell the companies to allow them to fix them then they can't use the bugs for spying (or at least, not as effectively)


they're correct, all the others are similarly shit

sentinelone, tanium, guardicore, defender endpoint, delina

all running as root (or worse), sucking up absurd amounts of resources, often more than the software running on the machine (but advertised as "LOW IMPACT")

they also cause reliable software to break due to bugs in e.g. their EBPF

also often serialises all network and disk on the machine through to one single thread (so much for multi-queue NVMe/NICs)

the risk and compliance attitude that results in this corporate mandated malware being required needs to go

this software creates more risk than it prevents


So whats the alternative? Have no endpoint protection? Have nothing in place to warn you when malware ends up in your system?

(Just playing devils advocate. I hate Crowdstrike as much as anyone here :)


One option may be to use locked read-only systems. Many of these computers at airports etc do not need a writeable local filesystem.
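One way to check that such a lockdown actually took effect is to inspect the mount table. A minimal sketch (the parsing follows the `/proc/mounts` format on Linux; the sample data and helper name are illustrative):

```python
# Hypothetical helper: decide whether a mount point is read-only by parsing
# /proc/mounts-style lines. Field 2 is the mount point, field 4 holds the
# comma-separated mount options ("ro" means read-only).
def is_readonly(mounts_text: str, mount_point: str) -> bool:
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mount_point:
            return "ro" in fields[3].split(",")
    raise ValueError(f"mount point {mount_point!r} not found")

# Sample data standing in for the contents of /proc/mounts.
sample = "/dev/sda1 / ext4 ro,relatime 0 0\ntmpfs /tmp tmpfs rw,nosuid 0 0"
print(is_readonly(sample, "/"))     # True: root is mounted read-only
print(is_readonly(sample, "/tmp"))  # False: /tmp is still writable
```

On a real kiosk you would read `/proc/mounts` itself rather than a sample string.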


Does it actually work?


Yes it works very well for the intended purpose (which isn't actually security). The intended purpose is CYA. As head of security, if you install CrowdStrike or some other vendor, then a compromise becomes that vendor's problem, not yours.


When has Crowdstrike taken responsibility for a hack?

I think it's more like, security is heavily check mark based. Crowdstrike and friends have managed to get "endpoint security"[1] added as a "standard security best practice" which every CSO knows they must follow or get labeled incompetent. Therefore "endpoint security" must be installed everywhere with no real proof that it makes things more secure, an arguable case that it makes things less secure, and an undeniable case that it makes things less reliable.

[1] I also never understood how "endpoints" somehow are defined as "any computer connected to any network." I tried to fight security against installing this crap on our database servers with the argument that they are not endpoints. Did not work.


When has that ever worked? Cloudflare blamed some no name vendor for their broken design [1]

People and companies that hide behind this bullshit don’t deserve to be in leadership positions. Cowards

[1] https://www.datacenterdynamics.com/en/news/cloudflare-claims...


The obvious alternative is to build secure systems instead of making them insecure first and then trying to fix the inevitable problems post hoc.


Or maybe switch to an operating system that isn't a security dumpster fire?


How do you objectively assess an operating system's security? I wanted to convince friends that Windows is insecure but I couldn't find unassailable evidence. Got some? There are confounding variables like the age of the operating system and size of the userbase (distorting the event volume), its attractiveness to attackers, and the tendency of organizations of different levels of technical ability to prefer different operating systems...


I'm a pretty die hard linux guy, and I think Windows is a bloated nightmare, but it's not insecure IMHO (unless you consider "privacy" to be security, but most people do not (even though I think they should)). There was a time when that wasn't as true, though. If Windows were rewritten from scratch today, I'm certain there would be some different architectural/design decisions made, but that's true for pretty much every piece of software ever written.


None of this matters. For example, you could build an operating system with security signatures that are generated by the intrusion detection system and only executables with valid signatures can be executed. This would get rid of a lot of pointless online security scans since a secure system mostly consists of already vetted executables. Interpreters must let the operating system verify signatures of the source files.

Note how the intrusion detection system here only needs to do offline scans that are unaffected by security updates.
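The scheme the comment describes can be sketched in a few lines (everything here is illustrative: the manifest format is invented, and an HMAC stands in for the asymmetric signature a real system would use). Executables are vetted offline, their digests recorded in a signed manifest, and anything not in the manifest is refused:

```python
import hashlib
import hmac

SECRET = b"offline-signing-key"  # stand-in for a real signing key pair

def sign_manifest(digests: dict[str, str]) -> str:
    # Authenticate the whole allowlist so it can't be edited undetected.
    body = "\n".join(f"{name} {digest}" for name, digest in sorted(digests.items()))
    return hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()

def may_execute(data: bytes, name: str, digests: dict[str, str], tag: str) -> bool:
    if not hmac.compare_digest(tag, sign_manifest(digests)):
        return False                     # manifest was tampered with
    digest = hashlib.sha256(data).hexdigest()
    return digests.get(name) == digest   # unknown or modified binaries refused

binary = b"\x7fELF...vetted build"
manifest = {"ls": hashlib.sha256(binary).hexdigest()}
tag = sign_manifest(manifest)
print(may_execute(binary, "ls", manifest, tag))      # True
print(may_execute(b"malware", "ls", manifest, tag))  # False
```

Note that the verification here is pure offline computation: no scanner has to run alongside the workload, which is the comment's point.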


Here is the official Windows security certification page [1]. They certify against this standard [2]. The maximum level of security they certify is:

Page 53: “The evaluator will conduct penetration testing, based on the identified potential vulnerabilities, to determine that the OS is resistant to attacks performed by an attacker possessing Basic attack potential.”

That is the lowest level of security certification outlined in the standard. The elementary school diploma of security.

To see what that means, here is a sample of the certification report [3].

Page 14: “The evaluator has performed a search of public sources to discover known vulnerabilities of the TOE.

Using the obtained results, the evaluator has performed a sampling approach to verify if exists applicable public exploits for any of the identified public vulnerabilities and verify whether the security updates published by the vendor are effective. The evaluator has ensured that for all the public vulnerabilities identified in vulnerability assessment report belonging to the period from June 8, 2021 to July 12, 2022, the vendor has published the corresponding update fixing the vulnerabilities.“

The "hardcore" certification process they subject themselves to is effectively doing a Google search for: “Windows vulnerabilities” and checking all the public ones have fixes. That is all the security they promise you in their headline, mandatory security certification that is the only general security certification listed and advertised on their official security page.

When a company puts their elementary school diploma on their resume for “highest education received”, you should listen.

That is not to say any of the names in general purpose operating systems such as MacOS, Linux, Android, etc. are meaningfully better. They are all inadequate for the task of protecting against moderately skilled commercially minded attackers. None of them have been able to achieve levels of certification that provide confidence against such attackers.

This is actually a good sign, because those systems are objectively and experimentally incapable of reaching that standard of security. That they have been unable to force a false-positive certification that incorrectly states they have reached that standard demonstrates the certification at least has a low false-positive rate.

All of the standard stuff is inadequate in much the same way that all known materials are inadequate for making a space elevator. None of it works, so if you do want to use it, you must assume they are deficient and work around it. That or you could use the actual high quality stuff.

[1] https://learn.microsoft.com/en-us/windows/security/security-...

[2] https://www.commoncriteriaportal.org/files/ppfiles/PP_OS_V4....

[3] https://download.microsoft.com/download/6/9/1/69101f35-1373-...


Unreasonably idealistic solutions are some of the worst kind of solutions because they make you feel like you have the answer but the benefits never materialize. The moment you pick any other OS to be the "80% of the world" one, reality will quickly deflate any sense of superiority.

And whether you can see it or not, they're all still some form of dumpster fire, be it security, usability, price.


We have had kernel exploits like dirty copy on write that got you root, but got blocked by selinux.


And what if this bug happened to affect Linux somehow too? What then?


What makes you think Windows is "a security dumpster fire"? The fact that most infections are on Windows machines doesn't really count, because most machines are also Windows machines.


for one, a normal person can't even install it with a local account


low permission systems

allow nothing and then gradually allow some activities that are deemed safe

do not allow software to be installed from arbitrary locations

app sandboxing and third-party vendors cannot break their sandbox

basically, iOS, Android, ChromeOS

50% of the people impacted today probably only need a browser
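The default-deny model in the list above can be sketched as a tiny policy object (names are illustrative, not any real OS API): nothing is permitted until something explicitly grants it.

```python
# Default-deny sketch, mirroring the iOS/Android/ChromeOS model described
# above: an app starts with zero permissions and each capability must be
# explicitly granted before the corresponding action is allowed.
class Sandbox:
    def __init__(self) -> None:
        self.granted: set[str] = set()   # empty: everything denied by default

    def grant(self, permission: str) -> None:
        self.granted.add(permission)

    def check(self, permission: str) -> bool:
        return permission in self.granted

app = Sandbox()
print(app.check("install-from-arbitrary-url"))  # False: denied by default
app.grant("network.https")
print(app.check("network.https"))               # True: explicitly allowed
```

The important property is the direction of the default: forgetting a rule fails closed rather than open.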


> also often serialises all network and disk on the machine through to one single thread

Do you have more info about this? I'm very interested. Does it also impact SAN FC storage?


yes but, did it help us meet the compliance targets for this year?

keep'er running...


Any experience with Wazuh?


For most corporations, security and robustness are -- and for a long time have been -- an afterthought.

Making systems hard to hack and robust to rare events:

* is really hard,

* costs a lot of money, and

* reduces earnings in the short term.

Faced with these inconvenient facts, many executives who want to see stock prices go up prioritize... other things.


To them, they are thinking about it though. They installed this ultra secure thing called CrowdStrike that checked a regulatory box for cybersecurity


CrowdStrike is a textbook example of a single point of failure:

https://en.wikipedia.org/wiki/Single_point_of_failure


To be fair, most corporations signed up for Crowdstrike as a way to address some issues. I'm sure it wasn't cheap and CS was probably better at security than an IT admin at a 50 person shop.


But what's worse, hundreds of maybe insecure companies or creating a big single point of failure?


Globally or locally?

To each individual company, it’s better to have the big single point of failure. That’s the problem.


Much like the RSA attack, now we get to see how Crowdstrike handles damage control.


Yeah, it wasn't cheap.

It's still not enough.


I feel like I would be doing everything in my power to de-Windows my operation.


It's rapidly getting to the point where the cure is worse than the disease, when it comes to this kind of product.


It isn't a cure in the first place it just fights the symptoms.


it's a kind of palliative care if you extend the analogy


It's almost as if we're seeing the downsides to our cloud based decisions. Uncontrolled costs, lack of visibility, placing control of critical processes in the hands of other groups...that also have control of critical processes globally.

Am I bitter at losing the business decisions that push ease of management by sending control to service providers? Not really. It's been dozens of times, and I lose every time.

I can raise the concerns to make sure the decisions are educated ones, and then let the decisions be made.


If you managed your own servers, you would still need some sort of endpoint security solution too tho…


Not saying outages wouldn't happen, just that they might not happen on a _global_ scale...and it's not the only downside. There are pros and cons to each solution.



For a business that relies on SaaS applications over cloud and uses dumb machines (Windows, iPad, whatever) as client terminals, can someone please explain what actual threat vectors these EDR tools like CrowdStrike Falcon address? And if SaaS applications can restrict access, detect anomalies in user behavior, have MFA for auth, etc., will that mitigate these risks? I'd guess common issues like keyloggers, malware, and virus attacks have much simpler solutions than a complex EDR, which apparently needs root access!! Someone, please educate.


Cyber was a 90s buzz word that died out and became vogue when cyber security became cool. I cringe every time I hear it drop.


I still think of how 'cyber' was used in AOL Chatrooms back in the 90's...


I forgot about that until this mention. Definitely not relegated to AOL chats.


government and military loves that word and will probably never let it go.


I roasted some Booz Allen booth people when they asked me about cyber with a "the '90s called and wants its buzzword back."

The look they gave was priceless.


Air Force. Space Force. ...Cyber Force? Inevitable.


We do actually have US Cyber Command, but it isn't a branch of the military; it's a unified combatant command inside the DoD.


"Cyber sector" is reeeeally pushing it. Full body cringe.


I remember being proud of the fact that I had an intimate knowledge and understanding of every single process running on my dev machine. Things felt sane. I could fully comprehend what was happening on my system at all times. Then the button pusher configurator class got called a new name, "DevOps" and started pushing all this crap on us. I'm ready to just start doing work on a private machine at this point.


Don't do automatic updates....roll updates manually. That would be a nice thing for the beginning.


In any sizable organization, you can't get around automatic updates.

But updates should be rolled out slowly and you need enough telemetry to detect problems as it's rolled out. Reboots, crashes, cpu/memory use, end user reports, etc should all be used to detect issues and pause the rollout.


Then you end up like some of our customers with Log4j. We're consultants, and we notice a CVE for Log4j comes out. We inform our customers that we've detected an issue under active exploit, that we've already updated to non-vulnerable versions, and that we want to deploy. The customer waffles for days and gets exploited before deciding to upgrade. Threats are often only minutes away. We're generally already too slow, and manual updates slow you down even more.


So do automated canarying! Deploy to 1000 machines, wait 60 seconds, deploy to 10,000, etc. All done in under 5 minutes while automatically rolling back or at least halting if any metrics look bad.

Canarying is by now not a very new practice. This is like doctors not washing their hands.
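The loop described above is short enough to sketch directly (the batch sizes, growth factor, and health-check hook are all assumptions for illustration, not any vendor's actual process):

```python
# Canary rollout sketch: deploy in exponentially growing waves and halt as
# soon as the health signal (crash/reboot telemetry, etc.) goes bad, so a bad
# update stops well before it reaches the whole fleet.
def canary_rollout(hosts, deploy, healthy, first_batch=1000, factor=10):
    done = 0
    batch = first_batch
    while done < len(hosts):
        wave = hosts[done:done + batch]
        for host in wave:
            deploy(host)
        if not healthy(wave):       # e.g. wait 60s, then check metrics
            return done, "halted"   # done = hosts confirmed good so far
        done += len(wave)
        batch *= factor             # 1,000 -> 10,000 -> 100,000 ...
    return done, "complete"

# Toy run: the update breaks "host-3", so the rollout halts after the
# first (healthy) wave of 2 instead of reaching all 25 hosts.
hosts = [f"host-{i}" for i in range(25)]
deployed = []
result = canary_rollout(hosts, deployed.append,
                        lambda wave: "host-3" not in wave, first_batch=2)
print(result)  # (2, 'halted')
```

Everything after the halt (rollback, paging a human) is deliberately left out; the point is that the blast radius is bounded by the wave size.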


Testing in software dev is taken completely for granted by many companies on mission-critical updates... Back in the day, our deployments would get tested on configs new & old and with several different variables, we always made sure deployments went smoothly. Now it seems as if most of these companies hire junior devs and skip testing to cut cost and then just put the blame of failure all on them. Burnout levels are high in these settings.

This whole incident would not have happened if just a basic deployment test had been conducted. The failure is so widespread that it would have been impossible to miss.


It's easy to make comments like this against automatic updates, but then you get popped because something that would have been automatically updated misses a patch because it was too critical to risk automatic updates.

In practice, failing closed (or crashing) is probably fine for most businesses, and lower cost than a breach, but the correct solution is automated testing across a broad spectrum of devices, staged and rolling updates to prevent entire fleets going down at once, and ensuring that there is an effective, tested rollback mechanism.

But that shit's expensive, so shrug :/


> but then you get popped because something that would have been automatically updated misses a patch because it was too critical to risk automatic updates.

This kind of logic only works if you ignore any kind of possible nuances in the problem and just insist on throwing the baby out with the bathwater. Just because someone let you do automatic updates (or let's be real, you probably didn't give them much of an option) that doesn't mean you should use it for everything.

Automatic update of data (like virus definitions) != automatic update of code (like kernel driver)

And really, the only time you could justify doing automatic updates on other people's machines is when have reason to believe the risk of waiting for the user to get around to it is larger than the damage you might do in the process... which doesn't seem to have been the case here.


> This kind of logic only works if you ignore any kind of possible nuances in the problem and just insist on throwing the baby out with the bathwater. Just because someone let you do automatic updates (or let's be real, you probably didn't give them much of an option) that doesn't mean you should use it for everything.

Oh, I agree - automatic updates are nuanced in many cases. Generally speaking, automatic updates are a good thing, but they offer trade-offs; the main trade-off is rapidly receiving security updates, at the risk of encountering new features, which can include new bugs. This is kind of a big reason why folks who buy systems should be requiring that updates offer a distinction between Security/Long Term Support, and Feature updates. It allows the person who buys the product to make an effective decision about the level of risk they want to assume from those updates.

> Automatic update of data (like virus definitions) != automatic update of code (like kernel driver)

Yep, absolutely, except for the case where the virus definitions (or security checks) are written in a language that gets interpreted in a kernel driver, presumably in languages that don't necessarily have memory safety guarantees. It really depends on how the security technology implements its checks, and on the facilities the operating system provides for instrumentation and monitoring.


From what I read they automatically updated data. But the pre-existing code had a bug, which crashed on reading the updated data.

Even if this is not what happened, it is possible, and shows the data/code update separation does not prevent problems.
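Data/code separation does still help if the data path is defensive: a consumer that validates a definitions update before loading it rejects a malformed file instead of crashing on it. A minimal sketch (the `name,hex-digest` line format is invented for illustration):

```python
# Validate a definitions update before the consumer uses it: every record
# must be "name,hex-digest", and anything malformed (e.g. an all-zero junk
# record) is rejected up front rather than crashing whatever reads it later.
def load_definitions(text: str) -> dict[str, str]:
    defs = {}
    for lineno, line in enumerate(text.splitlines(), 1):
        if not line.strip():
            continue                     # ignore blank lines
        parts = line.split(",")
        if len(parts) != 2 or not parts[1].strip():
            raise ValueError(f"malformed definition on line {lineno}")
        name, digest = (p.strip() for p in parts)
        int(digest, 16)                  # digest must be valid hex
        defs[name] = digest
    return defs

good = "worm.a,deadbeef\ntrojan.b,c0ffee"
print(load_definitions(good))  # {'worm.a': 'deadbeef', 'trojan.b': 'c0ffee'}

try:
    load_definitions("worm.a,deadbeef\n\x00\x00\x00\x00")  # junk record
except ValueError as e:
    print("rejected:", e)
```

It doesn't make the code/data split a complete fix, but it narrows "bad data crashes the driver" to "bad data is refused at the door."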


> shows the data/code update separation does not prevent problems.

Sure they do? This is like saying seatbelts don't prevent injuries because people still die even while wearing them.

I never said that one weird trick would solve every problem, or even this particular one for that matter. What I was saying was that if you look for ways to add nuance... you can find better solutions than if you throw the baby out with the bathwater. I just gave two examples of how you could do that in this problem space. That doesn't mean those are the only two things you can do, or that either would've single handedly solved this problem.

The problem in your scenario is that kernel mode behavior is being auto updated globally (via data or code is irrelevant), and that should require a damn high bar. You don't do it just because you can. There's got to be a lower bar for user mode updates than kernel, etc.


That's definitely what I meant - just because you wear a seatbelt does not mean it is now impossible to get hurt.

You still need to drive carefully. To me, it looks like these people relied on the safety of seatbelts and drove really fast, and there was, predictably, a horrible crash with a lot of damage.

Crowdstrike themselves seem to have missed the nuance.


Can auto updates be turned off on the crowdstrike falcon client?


Centralisation is really the core of the problem here.

Take ZScaler, which is a service that proxies all network connections of a computer to a central cloud proxy server, mitms it (decrypt, inspect, log, and encrypt), and then forwards it to the target server. Imagine that this is hacked, and this isn't immediately discovered. Hackers listening in and being able to tap off cookies, bearer tokens and other confidential information for weeks. That would affect so many companies. And if they would want to cause a DoS, many computers and servers would be left without an operational internet connection.


Yes, Zscaler or any other Zscaler clone (e.g., Netskope, Cato, etc) -- they're all just sitting ducks, and once they are compromised, what happens to all the customers? It doesn't make any sense and shows how much we're willing to give up for convenience.


I've been trialling application allowlisting, but wow is it ever frustrating. So much stuff isn't signed, and when it is, the accompanying DLLs aren't. Or the signature is invalid. Or some of Windows' own executables/DLLs aren't signed (why?? you make AppLocker??). Or the installer is, but none of the resulting files are.

Is it just me?


It's not just you. Windows software management sucks and people just find excuses for it. WDAC is really difficult to use directly because of it.

OEM software is usually the worst offender here, all these installers and support utilities should be fully abolished. Drivers should stop loading if they aren't coming from Windows Update and haven't passed some quality control.


We build our "cyber fortress" out of the Turing Complete analog of Crates of C4... and wonder why things go wrong all the time.

As I say every time this happens (and it will keep happening for the next decade or so)... Ambient Authority systems can't be secured, we need to switch to Operating Systems designed around Capability Based Security.

We need at least 2 of them, from competing projects.


The real financial problem is that cybersecurity is mostly box checking. It's an industry that is open to commoditization, as startups in lower-cost global regions manage to check the box as well as the next-most-expensive region, and cost conscious companies keep migrating. But the power of the box checking is strong.

I do not invest in cybersecurity companies, it is very risky IMO


The problem with cybersecurity is that there are hundreds of attack vectors; you can get pwned by supply chain attack or by some random zero-day exploit or by an insider....It is literally impossible to 100% prevent breaching of your computer network.


It is impossible to write bug-free/exploit-free code.

But some companies are using this as an excuse to not care about the chances of an exploit at all, and just write code in a cheaper way.

We need a middle ground, where there is at least a reasonable effort towards security.


"It is impossible to write bug-free/exploit-free code."

Right, and this should be the single deciding factor for most system programming and core infrastructure development. One doesn't throw away 20+-year-old battle-tested code simply because it's grown ugly bug fixes for edge conditions no one wants to worry about. The idea that it's possible to throw away, say 30-year-old font rendering code and replace it without revisiting a lot of the problems along the way is peak hubris.

And the same goes for choosing and building internal IT systems; KISS should rule those choices because each layer adds additional code, additional updating, etc. Monolithic general-purpose software is not only a waste of resources (nine-tenths of it just takes up disk/memory/cache space because only 10% of its features are used), but it's also a maintenance and security nightmare.

This is the problem with much of the open-source world, too. Having 20 different Linux filesystem drivers or whatever is just adding code that will contain bugs, exploits, and a monthly kernel update containing 80 KLOC of changes is just asking for problems. Faster processes, updates, and development velocity in projects that were "solved" decades ago are just a playground for bad actors.

So, to go back to Andrew Tanenbaum and many others, no one in their right mind should be writing or using OSs and software that aren't built from first principles with clearly defined modularity and security boundaries. A disk driver update should be 100% separate and compatible with not just the latest OS kernel but ones from 10+ years ago. A database update shouldn't require the latest version of python "just because".

Most software is garbage quality written by a bunch of people who are all convinced they are better than their peers. And yet another code review, or CI loop, isn't going to solve this, although it might stop a maintainer from throwing poorly tested code over the fence instead of subjecting it to the same levels of scrutiny they give 3rd party contributors.


> A disk driver update should be 100% separate and compatible with not just the latest OS kernel but ones from 10+ years ago.

People, companies, countries that do this, will be overtaken technologically by others that accept the brittleness and move faster.

I think the solution is to have a balanced approach, both to advance relatively fast and keep things relatively robust. Who knows, in the end, maybe this crash is a reasonable price to pay for all the security Crowdstrike has provided over some time. It's not at all easy to tell.


It is certainly possible to write bug-free code, in terms of meeting a formal specification of behavior, and guaranteeing no behavior outside that specification. It requires formal methods, and it's much more expensive than ordinary software development.

Creating exploit-free code is another matter - you have to be able to craft exploit-free specifications, and there's no real understanding what that might even mean. But bug-free software would be a start.
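A toy illustration of what "meeting a formal specification" means: state the spec as a predicate, then show the code satisfies it. Checking a small finite domain exhaustively, as below, is a far cry from real formal methods (seL4, CompCert, SPARK-style proofs cover *all* inputs), but it shows the shape of the claim:

```python
# Spec for clamp(x, lo, hi), given lo <= hi:
#   result == lo if x < lo, hi if x > hi, otherwise x,
#   and the result always lies in [lo, hi].

def clamp(x: int, lo: int, hi: int) -> int:
    return max(lo, min(x, hi))

def spec_holds(x: int, lo: int, hi: int) -> bool:
    r = clamp(x, lo, hi)
    in_range = lo <= r <= hi
    exact = (r == lo if x < lo else r == hi if x > hi else r == x)
    return in_range and exact

# Exhaustive check over a small domain; a proof assistant would
# discharge the same obligation for all integers.
assert all(spec_holds(x, lo, hi)
           for x in range(-10, 11)
           for lo in range(-5, 6)
           for hi in range(lo, 6))
print("spec holds on the whole test domain")
```

Note the spec says nothing about whether clamping was the *right* behavior for the caller; that gap between "correct per spec" and "exploit-free" is exactly the parent's second point.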


You're just moving the bugs or exploits to the specification then. Cool trick, what can I say.


Also, very often software quality is absolute trash... Developers spend no time thinking about the most basic things, like applying access control to reading/editing data, or which fields a request should be allowed to update and which not...

And these are the simple parts. Not even talking about operating systems, networking and so on... If even the easy stuff is wrong, what hope is there for the complex?
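The "which fields may this request update" check really is only a few lines, which is what makes skipping it so damning. A minimal sketch of a mass-assignment guard, with hypothetical field names:

```python
# Only let a request update explicitly allowed fields. Without this,
# a PATCH body like {"is_admin": true} silently escalates privileges.

ALLOWED_FIELDS = {"display_name", "email"}

def apply_update(record: dict, request_body: dict) -> dict:
    updated = dict(record)  # leave the original record untouched
    for field, value in request_body.items():
        if field not in ALLOWED_FIELDS:
            raise PermissionError(f"field not updatable: {field}")
        updated[field] = value
    return updated

user = {"display_name": "alice", "email": "a@example.com", "is_admin": False}
print(apply_update(user, {"display_name": "Alice"}))
# apply_update(user, {"is_admin": True}) would raise PermissionError
```

Most web frameworks offer some variant of this (strong parameters, serializer field lists); the failure mode is not knowing it's needed.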


Most software is indeed trash. There's neither the budget nor the will to fix it. The existence of "security" software is a symptom of systemic sickness, not the underlying disease.


The problem with cybersecurity is that it's impossible to prove a negative. So it's easy to sell products which produce tangible downsides in return for hypothetical upsides.


That's because reviewing the checklists can then be - no offence intended - offloaded to cheap workers in 2nd and 3rd world countries who are judged by the checklists they sign off. There is no room for critical thinking or adapting to the particular situation. I see this happening daily in a large company where the enforcement of infosec is offshored to east Asia.


That's true. It is disheartening when security controls are only seen as a checklist to comply with some framework, and not actually implemented. https://medium.com/@confusedcyberwarrior/what-is-soc2-how-to... This gives a false sense of security, which is further bad for cyberspace. The CrowdStrike incident, on the other hand, shows how we still have single points of failure in our supposedly secure and safe systems. https://medium.com/@confusedcyberwarrior/when-security-becom...




So how do you actually cybersecure a company in a compliant and practical way?


Today, we comply by ticking all the boxes in a checklist; it takes care of the most obvious hacks. Is it good enough? Today, we got our answer.

Practically speaking, that is all the end user can do with Windows machines. My point is that Windows is fundamentally insecure. It is a dike with thousands of holes, some of which are not even visible to Microsoft themselves. The reason is that security has been an afterthought: band-aids/plasters put on top of other plasters.


Because security experts seem to have this slight dismissive attitude about companies' and individuals' attempts to do security, while not usually having answers or providing secure systems.


Sadly most so-called security experts are not hands-on professionals but hands-off "cybersecurity persons". They do not do any real work themselves, they only generate useless busywork for others.

There are people in that category who are not hands-on themselves but still have sufficiently deep understanding of technical details. But as one might guess, they are about as common as four-leaf clovers.


windows security experts are as useful as astrologists, but astrologists are cheaper


I firmly believe that most routine security issues are really just operations issues, and vulns are just bugs, and security largely doesn't need to be its own category at all.

I know everybody hates the C-word but if I look at 27001 requirements or the CIS benchmarks, there is nothing in there that I do not want for myself. If you can keep a list of the products and services you are running, have actually put the time into implementing it correctly, and have an ongoing maintenance plan then you are probably in the top 1% of networks.


Probably get a lot more of this when the full force of the cyber resilience act kicks in.


Honestly, until we can get rid of the perception that SOC2/Sarbanes-Oxley/HiTrust provides any meaningful security, we're stuck.


The "cyber sector" is... awful? Nah... irresponsible? Nah... immature? Yeah, probably!

Right now, pretty much everyone is looking to outsource their "security" to a single vendor, disregarding the fact that security is not a product, but a process.

That... won't change! And incumbents will get less-awful about their impact on "protected" systems.

And yet, there's an opportunity here! Do you truly understand Windows? And whatever happens on that platform? And how to monitor that activity for adverse actions? Without taking down your customers on a regular/observable basis?

Step right up! There are a lot of incumbents facing imminent replacement...


There can't be a person alive who "truly understands" Windows; though made by humans (allegedly), any modern OS is going to be beyond the understanding of any individual. This is the fundamental problem of managing modern systems.


I can name two: Raymond Chen and Mark Russinovich! I don't know if Mark is still up to date on the latest Windows internals now that he is the CTO of Azure, but Mr Chen sure is.


I was thinking Linus for Linux, but in spite of how talented these people are, it's still hard to imagine them having a detailed grasp of the entire codebase at this point.


Linus would definitely defer to lieutenants when it comes to understanding the specifics of code in specialized parts of the kernel.

Even these experts would probably want to consult with others instead of trying to just grok what's going on on their own.


Yes, we're in a complexity crisis.


> immature? Yeah, probably!

I find it funny that adult security researchers still get away with identifying themselves with hacking monikers in public as if they were teenagers probing the local telco back in the 1980's.



