Can someone explain to me why the protections that Falcon provides are not provided by the OS itself? I'm not completely naive; I've secured quite a few critical Linux servers. But with Windows, there don't seem to be the same clear security roles. Contrast with Red Hat or even Canonical, where it feels like I'm (correctly) fighting the security of the system to get it into a state where my users can use my applications.
I read an article stating that Microsoft lost an antitrust case against the EU, in which the EU mandated that they allow third-party competitors to provide this service. Microsoft has its own solution, Windows Defender.
It's more nuanced than that. They have to provide the same APIs to third party security vendors that they use themselves.
They can come up with something more shielded as Apple has done, they just have to eat their own dog food and can't make an exception for defender. That's all.
Yes (it's a spin). Also, e.g., on Linux, Falcon could conceptually have created the same kind of driver as on Windows, but opted to use eBPF.
For a lot of things on Windows there isn't anything like eBPF (yet; it's a work in progress, but it will likely still take quite a while until it's usable).
The EU spin would only work if CrowdStrike were as fully incompetent as a lot of people want you to believe, i.e., they don't do any testing, don't do any config validation, and don't know what they are doing at all.
but that simply isn't true at all
This doesn't mean that they didn't act negligently. As far as we can tell, they relied on data-format validation done by their server (plus signing, or something similar) instead of _also_ having robust parsing on the client, and that goes far enough against best practices to be called negligence. Other points have also bubbled up in the last week that suggest negligent behavior unrelated to the bug. But a company ending up with some negligent behavior and a company being fully incompetent are very far apart. Let's be honest: most IT companies today have ended up with some negligent behavior that they have little direct, short-term, fast-feedback motivation to fix (hence it doesn't happen).
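To make "robust parsing" concrete: it just means treating the update file as hostile input and bounds-checking every field before use, so a malformed file is rejected instead of crashing the parser. A minimal sketch in Python (the format here is made up for illustration; it is not CrowdStrike's actual channel-file format):

```python
import struct

MAX_ENTRIES = 10_000  # arbitrary sanity limit for this sketch


def parse_channel_file(data: bytes) -> list:
    """Parse a hypothetical update file: a 4-byte little-endian entry
    count, then for each entry a 2-byte length followed by that many
    bytes. Every field is validated before use; malformed input raises
    ValueError instead of crashing or reading out of bounds."""
    if len(data) < 4:
        raise ValueError("truncated header")
    (count,) = struct.unpack_from("<I", data, 0)
    if count > MAX_ENTRIES:
        raise ValueError("implausible entry count")
    entries, offset = [], 4
    for _ in range(count):
        if offset + 2 > len(data):
            raise ValueError("truncated entry header")
        (length,) = struct.unpack_from("<H", data, offset)
        offset += 2
        if offset + length > len(data):
            raise ValueError("entry overruns buffer")
        entries.append(data[offset:offset + length])
        offset += length
    return entries
```

The point is that every length and count is checked against the buffer before it is dereferenced. Server-side validation plus signing catches corruption in the delivery pipeline, but only client-side checks like these catch a file that was signed and shipped while still malformed.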
And Microsoft doesn't even offer the option of userspace anti-malware hooks, which they could easily do in conjunction with the kernel stuff. I think all they have is AMSI, which is only for scanning PowerShell scripts and such.
If you want to hook process execution or file access, you're writing a kernel driver.
Yes indeed. But the point they keep making is that the agreement with the EU somehow stopped them from doing this. Which is BS.
They could easily have added a userspace API if they wanted to. It could have existed side by side with the kernel option, as long as they keep using that for Defender too. Only once they stop using kernel access in their own security products can they force the other vendors to use a new API, which makes sense. Otherwise they'd use it as a sales bullet point ("Our product has full system access, others don't"). Which would destroy the antimalware market. The US benefits from this too.
Falcon provides many levels of protection (in principle; in practice, given the extreme incompetence demonstrated in this case, I doubt they do much more than sell snake oil), some of which have OS-native alternatives, some of which do not, and most of which Linux definitely doesn't have built in. For example, the Linux kernel team doesn't maintain a DB of known malware signatures that the kernel, init system, or shell checks any new software component against - Falcon does this. Another example: neither Linux nor any common Linux userspace natively integrates with a fleet management system to check whether the current user is allowed to run a particular piece of software. And there are many other similar questions.
Finally, even when the OS does natively provide services like these (Enterprise versions of Windows do provide all the features I mentioned above), it's perfectly reasonable to prefer a different vendor for those solutions. Maybe people trust CrowdStrike's malware signature lists more than they do Microsoft's, for example: a good reason to buy CrowdStrike instead of using Windows Defender.
I'm not trying to defend CrowdStrike or Windows here. But I think it's obvious that there are many features that fall under the umbrella of security that you wouldn't want to build into the OS itself, and even when a version of them exists built-in, that a company may wish to source from a different vendor.
Windows does have Defender, which does a fair amount of signature tracking and heuristics for various types of malware.
It has not, however, proved enough to fend off different real world problems like ransomware.
Hence, the market for 3rd party solutions that are more aggressive. And to keep up with real world threats, they have to update often. And have to run at high privilege levels. So now you have the situation where those third-party solutions have the ability to create a bsod and/or a boot loop. Which should mean that they have a very well thought out way to roll out updates.
Pretty much every third-party antivirus product I tried (and paid for) caused data loss or other problems (a few catastrophic) in the long run. One product didn't even stop a virus from getting in.
Since then I just use Defender and never had any trouble or a virus or ransomware. Only issue is that sometimes the antimalware service takes a lot of CPU.
Microsoft has a high share in this area, but enterprise security is generally a very competitive market. Microsoft may even move into the #1 position as fallout from this debacle, because the market-share gap between them and #1 CrowdStrike is very small (that does not mean people actually buy more MS, by the way, if that needs to be said ;)
This is not necessarily a good thing for MSFT, as it will 100% trigger regulator rage in the EU.
CrowdStrike Falcon EDR is in some sense an AV on steroids, and CrowdStrike does more than just EDR. While they are obviously deployed on lots of systems, less than 1% of Windows systems means it still operates in an absolute niche. Most people didn't know CS; even fewer know any of the competitors.
I think one massive difference between CS and AV is also that you don't expect a human to be in the loop, because it would be too expensive. Nor would it be feasible for consumer software, because of privacy.
Also, even within this small niche, the solutions are very heterogeneous and make little sense for single boxes; in fact, they may even be designed to run at the network level.
How do you actively detect a malware agent running in user space using stealth, or in the kernel? Authors of such malware are fully aware of Linux hardening like SELinux / AppArmor and work around it.
> How do you actively detect a malware agent running in user space using stealth or a kernel.
You start with correct design.
The system has a root of trust (ideally you skip the insane level of complexity that is Secure Boot + TPM and use something simple, testable, and verifiable — this isn’t actually that hard). Only authorized images will boot, and, more importantly, nothing else on the network trusts the machine until it proves it’s running the right image.
Then you make the image immutable. Want to edit a system file? You can’t. Maybe in developer mode you can edit an overlay.
All configuration is stored in a designated place, and that configuration is minimized. A stock image from the distro vendor has zero configuration, so there is no incomprehensible soup in /etc to audit. Configuration is also attested.
Persistent data is separate from configuration. All persistent data is considered suspect. Any bug that allows malicious persistent data to compromise anything is a blocker, including corrupt filesystem metadata.
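The "attested configuration / suspect persistent data" idea boils down to comparing what's on disk against a signed manifest. A rough sketch in Python (signature verification is omitted, and the manifest format is invented for illustration):

```python
import hashlib
import os


def verify_against_manifest(root: str, manifest: dict) -> list:
    """Compare every file under `root` against a manifest mapping
    relative path -> expected SHA-256 hex digest. Returns the paths
    that deviate: modified, unexpected, or missing files."""
    problems = []
    seen = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            seen.add(rel)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if manifest.get(rel) != digest:
                problems.append(rel)  # modified or not in manifest at all
    # Files the manifest expects but that are absent from disk.
    problems.extend(rel for rel in manifest if rel not in seen)
    return problems
```

A real implementation would verify the manifest's own signature against the root of trust, and would enforce this at the block level (along the lines of dm-verity) rather than rescanning files after boot.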
A root-of-trust attestation has limited lifetime. The system forcibly re-verifies periodically. This means either rebooting or doing a runtime “dynamic root of trust” attestation. The latter is complex.
Complicated messes like kernel “lockdown” and the stock Secure Boot signatures have no place. Usermode root and the kernel are approximately equally trusted. SELinux is barely necessary, if at all, unless the actual user code wants it to control access to persistent data. But there are simpler, better schemes that are easier to reason about.
Sadly the industry doesn’t think this way. I’m regularly surprised that Apple hasn’t gone in this direction more aggressively than they are with their MacOS products.
You haven't answered anything interesting. Any software system that anyone cares about operates on state: user documents, a database, other bespoke systems, etc. If the operator of that system accidentally deploys malware to it, how do you ensure that this malware doesn't destroy, replace, or exfiltrate the state that the system normally operates on?
Malicious code doesn't need to run as root in order to completely destroy a business.
Not to mention, all of the things you describe are very nice if the kernel is perfectly secure. But it's not, so it's always possible and even likely that compromising any user on the system is equivalent to compromising root. And if you compromise one system, you can then exploit bugs in other systems' kernels that might allow RCE through well-crafted packets or other exploits that gain access without running through any user-space code that might validate those attestations.
And finally, when a vulnerability is found allowing such exploits, you now need to update all of these readonly systems - and this happens at least once a month. Do you go with a USB stick to each of 10k systems on five continents to update them?
This kind of smug "I know better than the rest of the industry, security is easy if you do things my way" is rarely productive or applicable.
What you've described is a great (if not the best) way to defend against attackers, but it's not what was asked. They asked how to detect.
I'll strongly disagree on SELinux. I have seen it work in practice to defeat attackers many times; it provides features that seccomp, cgroups, etc. do not.
If your system is a flight information display, then you may well have two userspace processes that do anything of significance: the display manager and the actual app. There is no persistent state. At this point, SELinux is purely overhead and extra attack surface — what would it even protect?
If your payload is a container (database server, microservice, whatever), and you’re doing some form of best-practice volume management, then only the database’s own data is mounted for it. SELinux is a real PITA to get working in a context like this, and it’s not really clear what it would add. (Okay, maybe you get fancy and use it to restrict what can talk to the microservice. Or maybe you use network namespaces.)
If you’re running a desktop or a more conventional server setup, then, sure, MAC policy has its place.
It absolutely depends on the use case; I'm attempting to talk about the generic case. If you limit policy to the minimum attack surface from outside the process, including permissions and capabilities, which are significantly more fine-grained in SELinux than in normal Unix permissions, you reduce the capability of the attacker once they gain access to the system.
Imagine they got local code execution. Binding to the SCTP protocol would instantiate the whole protocol in the kernel, effectively opening up whole new attack vectors. I can't see any other technique (other than SELinux-like access control) that enables this kind of attack-surface reduction as easily.
I am aware that you can blacklist modules, etc., but this is just one of many examples.
> How do you do this on modern commodity hardware without secure boot?
It’s not necessarily easy without Secure Boot, sadly. The actual straightforward solution is boot ROM. It would be nifty if someone made SD cards, eMMC devices and such meant for this use case for independent use. Most Android vendors manage to use boot ROM.
If you have good examples, I'd love to see them; a writeup on the techniques they used even more so. My findings so far in the wild (and on my honeypot) are really amateur-level garbage.
I spent a weekend and abused a c&c infrastructure server to fix the clients and remove the flaw and malware. I see very little sophistication there.
> How do you actively detect a malware agent running in user space using stealth
Depending on how advanced the attacker is: check that the executing binary maps back to the actual expected name and location on disk. Make sure the executable and libraries used at runtime are the correct ones, matching hashes of known-good copies.
Ensure the process tree has an expected structure, i.e., "bash" isn't starting a process called apache.
Make sure the SELinux policy is correct for the process that is running. (I have no idea about AppArmor.)
Check whether it's linking to the expected binaries, and that it's not using 'hidden' files (starting with a dot, or in a directory starting with a dot) or deleted files.
Confirm that the process is opening only the sockets and files that you expect it to (i.e., apache shouldn't open files that are outside its configuration directives).
The process should not be making outgoing socket connections unless it is a client.
It should not be running with capabilities(7) that it does not require. It should not be executing from a setuid binary.
Check the process name; quite often attackers rename the running executable, so you'll see /proc/pid/cmdline with a bunch of null bytes at the end.
Some malware has 'anti-debugging' tactics, i.e., they have traced themselves to prevent you from tracing them; you can find this as one of the lines in /proc/pid/status, IIRC.
There are more, but that's the few off the top of my head.
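A few of the userspace checks above can be sketched against /proc in Python (Linux only; this is illustrative, not a complete or evasion-proof scanner):

```python
import os


def inspect_process(pid: int) -> dict:
    """Run a few of the checks above against /proc (Linux only).
    Returns the resolved executable path and a list of findings."""
    issues = []
    # 1. Does the running executable still map back to a file on disk?
    try:
        exe = os.readlink(f"/proc/{pid}/exe")
        if exe.endswith(" (deleted)"):
            issues.append("executable was deleted from disk")
    except OSError:
        exe = None
        issues.append("cannot resolve /proc/<pid>/exe")
    # 2. Any executable mappings backed by deleted or 'hidden' files?
    try:
        with open(f"/proc/{pid}/maps") as f:
            for line in f:
                fields = line.rstrip("\n").split(None, 5)
                if len(fields) == 6 and "x" in fields[1]:
                    path = fields[5]
                    if path.endswith("(deleted)") or "/." in path:
                        issues.append(f"suspicious executable mapping: {path}")
    except OSError:
        issues.append("cannot read /proc/<pid>/maps")
    # 3. Is the process traced (possible anti-debugging self-trace)?
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("TracerPid:") and line.split()[1] != "0":
                    issues.append("process has a tracer attached")
    except OSError:
        pass
    return {"pid": pid, "exe": exe, "issues": issues}
```

An EDR agent does this kind of thing continuously and correlates the findings across the fleet; a one-shot scan like this only catches the lazy cases.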
> or a kernel.
This is a MUCH harder problem, because attackers can always disable any security mechanism, assuming they have kernel code execution. However, assuming they are not too focused...
If the system is booting in secureboot mode, it should be enabled, and no extra / unused / out of date kernel modules loaded.
I know that code injection at the memory level means attackers can inject unsigned code, so in this case you would want to periodically sample the code and ensure that the execution context only ever has the processor's EIP in known areas where the kernel would map executable code. You could do an additional check to see if those areas are mapped by userspace processes (it might be too late), so you can find offending attackers.
If the host is virtualized, this becomes easier to do; mapping and comparing memory from the guest kernel for the executable code sections means it's harder for an attacker to work around by disabling a mechanism.
Usually attacker kernel exploits do not persist long-term in kernel space (they abuse kernel space to enable userspace privilege escalation, e.g., making a binary setuid or modifying permissions on a /dev/ node), because the longer they are there, the more likely they are to panic the system.
Some of the more advanced attacks I have seen are from people uploading system kernel panic images, where I have a 'snapshot' of the running system and can work around attackers mitigation techniques.
> There are more, but thats the few off the top of my head.
And that is probably like 80% of what an EDR product will be doing: checking that the code that is executing is trustworthy and not doing weird, unexpected things.
Who collects and maintains all these lists of known good/expected configurations? Should the kernel know that apache shouldn't be launched from root? How about autocad, is that ok to be launched from bash? What directories should autocad be reading/writing?
Seeing how on users' machines the most interesting data to read is in the user's home folder, I'd argue it's actually pretty easy to partition these. Autocad should read and write in ~/autocad. Maybe in ~/Downloads? But definitely not in ~/.ssh or ~/.aws.
Stock Windows actually implements something along these lines, called "protected folders" or similar. It's inactive by default (meaning every program can access every folder). It's quite easy to define a list of "protected" folders. But the implementation is quite stupid: if a program asks for access to one of the folders on the list, you can either refuse, or allow it to access it... as well as everything else on that list!
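A less all-or-nothing scheme would scope access per program rather than per folder list. A hypothetical sketch in Python (the policy format and program names are invented; this is not how Windows' protected-folders feature actually works):

```python
from pathlib import Path

# Hypothetical per-program policy: each program may only touch the
# folders listed for it. Everything else is denied by default.
POLICY = {
    "autocad": ["~/autocad", "~/Downloads"],
    "ssh":     ["~/.ssh"],
}


def access_allowed(program: str, target: str) -> bool:
    """Allow access only if `target` resolves to a path inside one of
    the program's permitted folders (after expanding ~ and resolving
    symlinks, so '~/autocad/../.ssh' tricks don't work)."""
    target_path = Path(target).expanduser().resolve()
    for folder in POLICY.get(program, []):
        root = Path(folder).expanduser().resolve()
        try:
            target_path.relative_to(root)
            return True
        except ValueError:
            continue
    return False
```

With per-program scoping, granting AutoCAD access to ~/autocad says nothing about ~/.ssh, which avoids the "allow one, allow all" problem described above.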
Linux can't be secured out of the box to do anything that Falcon does. If you use auditd, eBPF, and things like grsecurity patches, you might get into a good state, but it's still not the same thing at all. It might be secure depending on your Linux-fu, but it's not the same as running EDR, which helps correlate system behavior across different systems and looks with much more depth into process behaviors and system interactions.
Also, you don't want operating systems to provide the actual EDR program. They need to provide the facilities for EDR vendors and creators to tap into and do their work properly. You don't want a butcher to rate their own meat; you want a third party to do it. As an example: MS Defender is totally rubbish at defending Windows (the general sentiment among a lot of people in security, hence they run Falcon or Cortex XDR, etc.)... and it's by Microsoft. They should focus on building an auditable OS and let auditors do the auditing...
The best thing, IMHO, is a tool like CSF but integrated with network appliances (which CS doesn't do, I think), which is where the strength of such tooling really comes together: correlating network data and behaviors with endpoint behaviors and having a full 'causality chain' of the processes, systems, and network traffic involved in an attack.
And you are right about the balance of security being dramatic: using crypto is as hard as ever, and allowing external parties to interact with your users is just impossible to do right (let alone keeping users in the right awareness mode). This last one is a failing of the security industry, IMHO: making tools so difficult.
Someday, maybe, rather than EDR tools and firewalls, cybersecurity companies will deliver 'secure business services': easy-to-use, user-friendly services that are secure by default. Maybe in, like, the year 3042.
Not to defend its "you must accept updates" insane/inane fail, but the suite of CrowdStrike Falcon stuff we have enables the response side of EDR pretty well, and for a mixed Windows, Linux, and Mac shop where we would like the same agent on all systems, it does a better job than most. Not as good as Jamf on Mac, mind you, but better than most of the "Windows ecosystem". And if you run Jamf for policy and detection, but not response, you sort of get it all. So that's why not "just Defender": at 10k+ systems, the anti-malware is just the beginning. What do you do when that fails and... yeah... anyway, there is more to it.
As to why Windows is not more locked down: that's on the shoulders of the admins. But out of the box, you are right, it is too permissive. Apparently users and management like it that way.
(1) Why is a crutch like "anti-virus" software needed? Essentially, it's trying to reactively play cat-and-mouse with hostile software that the OS has let execute on the computer.
(2) Why doesn't Windows provide AV?
Question (1) is more interesting - and (2) is addressed by other comments.
I think both MS and their customers have very seldom prioritized security over even small compromises in functionality. We loudly blame MS, but they are the vendor MS customers deserve. While it's not a democracy, there are parallels to the popular sport of blaming politicians for, e.g., not making hard choices against climate change, while holding the voters innocent.
The cat-and-mouse game is between OS security features and hackers. AV software is not a crutch, it's an extra level of defense. All OS kernels are vulnerable to malware - this is a 100% given at this moment in history. The question is how to mitigate this problem, and AV is one component of that, as are firewalls, network-level intrusion prevention systems, and a whole host of other security software.
Maybe some day someone will write an OS that is "fully secure" and then they'll be able to confidently run a system whose users can confidently click a link in an email, download an .exe from there, and run it, without fear of losing or leaking a single bit of data. That day is definitely not here, and until then, we all do the best we can through education and security appliances.
> AV software is not a crutch, it's an extra level of defense.
The issue is that it's the only "level of defense" which introduces arbitrary non-deterministic behavior. An executable which correctly follows all the APIs as documented and implemented, and which does nothing malicious, might arbitrarily be denied or even erased, and this behavior changes daily or even hourly due to factors outside the control of the computer's user. Even ASLR, which uses non-determinism in its implementation, doesn't cause non-deterministic behavior when an executable correctly follows the API.
And it's also a "level of defense" which famously causes frequent performance issues, to the point that "tell the AV to ignore that folder" is a common recommendation. I wonder how many gigawatts of electricity are wasted daily due to AV software slowing things down.
Finally, it's been reported several times that this "level of defense" is often poorly implemented, to the point that it can act as a backdoor to bypass other levels of defense. If you can compromise a parser running as SYSTEM, or even within the kernel, you don't have to worry about all the normal rules which prevent you from running code as SYSTEM or within the kernel.
People's dislike of AV software does not come only from some abstract purity ideal; it also comes from plenty of negative experiences with it.
All of these are legitimate issues with AV software. However, they don't mean that AV is a crutch, or that it could easily be supplanted by other security features. There is simply no good alternative to AV for systems where the user is likely to interact with untrusted input: receiving documents, receiving email, browsing the internet, downloading code from GitHub, etc.
Of course, when you have a locked down system such as a server or an embedded device, the need for AV protection drops down significantly. But on a wide open system, there's really no alternative.
Is there an example of a real OS for desktops and servers that is secure from this point of view?
I think seL4 might qualify, but it can only realistically be used for embedded applications; it doesn't have, at this time, many of the features you'd need to build, say, an HTTP API server on it.
I think the absence of real world usable secure alternatives is not really strong evidence, operating systems are like web browsers, there's such huge inertia and network effects in the apps that competition doesn't tend to spring up, "build it and they will come" doesn't work.
On the research side there's lots of stuff. Singularity, the various capability based systems, Qubes (granted more towards the adding-features dimension), etc.
I agree to some extent, but still: if you were starting your own company, would you wait until someone wrote a secure OS? Or would you provide your developers and sales people etc. with an existing OS, and run your servers on an existing OS, and deploy other security tools to mitigate the bugs in those existing OSs?
I don't want an OS that lets me run executables from email; I've never actually had to do that. I do want an OS that I can tell, once, to run "Firefox, Anki, Thunderbird", and nothing else will run.
Ok, how about an image in an email? Or a PDF receipt? How about clicking a link online? All of these have a serious potential to infect your system with malware.
PDF parsers, and really all complex format parsers, are very often exploitable. Maliciously crafted documents trigger a buffer overflow, and now they can take control of the process and execute arbitrary code, code that almost certainly has access to your other documents as well.
Also, how about malicious scripts that I convince you to explicitly give execute permissions to and run? How about Git repos that I convince someone to clone, compile, and run, that have malicious code?
Signature-based heuristics can help protect from all of these things that the OS is powerless to help against with only traditional security measures.
Actually, arguably Windows has some impressive security features unseen on any other mainstream OS, they're just not used by default and - realistically - would be hard to enable on general purpose / non-corporate computers.
For example, by comparison, Linux is in the stone age here.
Do you even need AV if untrusted code can't run in the first place?
* Application whitelisting - with just plain old AppLocker, Windows can be configured to only allow execution of trusted executables, DLLs, and scripts by path, hash, or software vendor (digital signature). Now, technically AppLocker is not a security feature, i.e., not a hard security boundary.
The next level functionality, Windows Defender Application Control (WDAC) [1], however, is. I believe Microsoft was offering up to a $1M bug bounty for WDAC bypasses?
With WDAC kernel mode code integrity enabled, only trusted digitally signed kernel modules can be loaded into the OS kernel [2]. WDAC user mode code integrity provides the aforementioned protection AppLocker provides.
With AppLocker / WDAC enabled, the OS built-in script interpreters (Windows Script Host, PowerShell) either refuse to execute unsigned scripts completely or operate in restricted mode with reduced functionality.
- By comparison, Linux only has fapolicyd, which is only supported on Red Hat and can only rely on path-based rules, because binaries are not directly signed on Linux. None of the common interpreted languages (Python, Perl, Ruby, Bash) on Linux support digitally signed scripts and locking down interpretation, as far as I know.
* Authentication material protection - Windows has Credential Guard [3] for protection of authentication material - Kerberos tickets and other material are placed in a separate container protected by hardware virtualization [2] and accessed via RPC so you can't dump process memory to compromise them. Even kernel level compromise is not enough.
- By comparison, Kerberos tickets on Linux reside as files on disk, SSH user & host keys reside as files on disk and loaded into sshd/gpg-agent memory, x.509 keypairs reside as files on disk & process memory etc etc. Wouldn't it be nice to have them protected somehow? To my knowledge, nothing exists for this on Linux.
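The hash-rule flavor of allowlisting described above (AppLocker/WDAC-style) is conceptually simple; here is just the checking logic, sketched in Python for illustration (real enforcement has to live in the OS or kernel, which is the hard part, and real deployments mostly use signer rules rather than per-file hashes):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def may_execute(path: str, allowlist: set) -> bool:
    """Default-deny allowlisting: a binary or script may run only if
    its exact hash is on the list. Any modification to the file
    changes the hash and therefore blocks execution."""
    return sha256_of(path) in allowlist
```

The default-deny stance is the whole point: instead of enumerating known-bad signatures like an AV, you enumerate known-good binaries, and anything unrecognized simply never runs.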
>- By comparison, Kerberos tickets on Linux reside as files on disk, SSH user & host keys reside as files on disk and loaded into sshd/gpg-agent memory, x.509 keypairs reside as files on disk & process memory etc etc. Wouldn't it be nice to have them protected somehow? To my knowledge, nothing exists for this on Linux.
I have always wondered about that; there has to be a more secure control method for those secrets.
> Can someone explain to me why the protections that Falcon provides, are not provided by the OS itself?
They are. It doesn't, y'know, do anything. It ticks the box for your auditors and occasionally makes your computers stop running, which is par for the course in regulated environments.