Linux runtime security agent powered by eBPF (github.com/exein-io)
193 points by ExeinTech 10 months ago | 69 comments



Relevant read from TrailOfBits on the pitfalls of using eBPF for security monitoring: https://blog.trailofbits.com/2023/09/25/pitfalls-of-relying-... (Pitfalls of relying on eBPF for security monitoring (and some solutions))

> eBPF is a powerful tool for Linux observability and monitoring, but it was not designed for security and comes with inherent limitations. Developers need to be aware of pitfalls like probe unreliability, data truncation, instruction limits, concurrency issues, event overload, and page faults. Workarounds exist, but they are imperfect and often add complexity.

Maybe eBPF tooling should come with warning labels on what pitfalls are accounted for.


This is a great post, but there aren't non-problematic alternatives to eBPF for most of these pitfalls. If you're building a reference monitor out of tricky eBPF primitives --- if you're relying on eBPF to somehow make go/no-go decisions about system usage --- those pitfalls are super-problematic, the same way seccomp filters are.

But if you're just generating events for monitoring... well, it's good to keep in mind the limitations of all monitoring systems.


As someone who uses BPF for security monitoring every day: every one of those problems has a workaround. And the only real competitor is writing a kernel module, which has serious safety issues.


Some of these apply to the audit framework as well.


While the TrailOfBits post is informative, they extrapolate the wrong premise from Brendan Gregg's original post[1]. Brendan's statement was originally about the particular set of eBPF tools created for observability (especially his own). These tools (like bcc and bpftrace) are specifically geared towards observability and were not built with security in mind, nor will they be "patched" or fixed for these purposes.

Does this imply that eBPF is a bad foundation for security-focused tooling? Brendan states the following at the close of his blog post:

> There is potential for an awesome eBPF security product, and it's not just the visibility that's valuable (all those arrows) it's also the low overhead. These slides included our overhead evaluation showing bcc/eBPF was far more efficient than auditd or go-audit. (It was pioneering work, but unfortunately the slides are all we have: Alex, I, and others left Netflix before open sourcing it.) There are now other eBPF security products, including open source projects (e.g., tetragon), but I don't know enough about them all to have a recommendation.

> Note that I'm talking about the observability tools here and not the eBPF kernel runtime itself, which has been designed as a secure sandbox. Nor am I talking about privilege escalation, since to run the tools you already need root access (that car has sailed!).

This reads to me like security tooling built on top of eBPF is possible, and various organizations are actively making it happen (such as Falco[2], Tetragon[3], and Tracee[4]). These teams have recognized the shortcomings of eBPF and are layering other kernel-instrumentation capabilities, such as kprobes and LSM hooks, into their solutions.

Additionally, the TrailOfBits blog post states:

> Developers need to be aware of pitfalls like probe unreliability, data truncation, instruction limits, concurrency issues, event overload, and page faults. Workarounds exist, but they are imperfect and often add complexity.

These inherent limitations exist primarily because eBPF is a virtual machine within kernel space. Many of these constraints exist because eBPF programs should _never_ lock up the kernel. The eBPF verifier[5] does some checks on the possible code paths the program can take, such as finite bounded loops, null checks on variables, etc. The foundational aspect here is that the eBPF virtual machine is designed to protect the kernel while running programs in kernel space, and that imperfect/complex workarounds may be needed by security-focused projects to respect that foundation.
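To make the constraint concrete, here's a minimal sketch (libbpf-style C; the map and probe names are made up, not from Pulsar) of the patterns the verifier insists on: loop bounds it can prove finite, and NULL checks on map lookups before any dereference:

    // Illustrative only: count openat() entries into an array map.
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 16);
        __type(key, __u32);
        __type(value, __u64);
    } counts SEC(".maps");

    SEC("tracepoint/syscalls/sys_enter_openat")
    int count_openat(void *ctx)
    {
        // Bound is a compile-time constant, so termination is provable.
        for (__u32 i = 0; i < 16; i++) {
            __u64 *val = bpf_map_lookup_elem(&counts, &i);
            if (!val)                 // mandatory NULL check before dereference
                continue;
            __sync_fetch_and_add(val, 1);
        }
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";

An unbounded loop or a missing NULL check here gets the program rejected at load time, not at runtime.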

[1] https://www.brendangregg.com/blog/2023-04-28/ebpf-security-i... [2] https://github.com/falcosecurity/falco/ [3] https://github.com/cilium/tetragon [4] https://github.com/aquasecurity/tracee [5] https://docs.kernel.org/bpf/verifier.html


None of that sounds worse than commercial offerings in the space.


The idea is quite interesting, but it would be more useful if we could perform an action when a threat is detected. For example, in the network module we could use the eBPF-provided actions: drop, forward, or log.
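For the network path specifically, XDP already exposes this shape of verdict. A minimal sketch (generic XDP, not necessarily how Pulsar's network module is wired): XDP_DROP discards the packet in-kernel and XDP_PASS forwards it up the stack:

    // Illustrative only: drop ICMP, pass everything else.
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int drop_icmp(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)   // bounds checks the verifier requires
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;

        if (ip->protocol == IPPROTO_ICMP)
            return XDP_DROP;                // the "drop" action
        return XDP_PASS;                    // the "forward" action
    }

    char LICENSE[] SEC("license") = "GPL";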


Someone remarked upthread about Artem's very excellent Trail of Bits post, which you can sum up as "attackers can create conditions where eBPF events might not be generated, or where eBPF probes won't be able to see the requisite data". In a pure logging/monitoring situation, those problems aren't that big of a deal (the things you'd have to do to force those conditions will be noisier than the events you'd be trying to generate, and, more importantly, the things you'd do instead of eBPF to do that kind of monitoring have their own gruesome limitations).

But as soon as you add a "drop" response to events, you've switched from a monitoring tool to a reference monitor; at that point, this stuff has to work.


Yep, and the same from my experience (I made a tool that monitors network traffic with eBPF [1]): in addition to those issues, there is also a sizable latency hit.

[1] https://github.com/elesiuta/picosnitch


Usually any policy you'd be able to implement as a reaction can just be implemented as a pre-emptive policy to prevent the action.


True! Would be a very interesting addition to the project


Interesting, it's a bit like Aqua Security's Tracee open source project. The policy engine and set-of-rules approach to this problem is an interesting one in the age of AI malware. I'll be surprised if this can stay ahead of the latest flavors of malware as they evolve.

The NSA and Johns Hopkins + MITRE have a project like this, but it goes above and beyond the rules/policy approach and is more about measuring the integrity of the Linux kernel at runtime using eBPF.


> Check out our security tool!

Okay.

> Install via `curl | sh` (and requires sudo!)

Are you fucking serious?


A lot of the requirements that come with typical frameworks are installed via shell script due to the complexity of installing and configuring them. From my own eBPF work, I'd guess that's why they had to ship it with a bash/shell script.

I specifically chose Go because it allows me to abstract that away: I can tell my customers to just download the binary from xyz.url and execute "sudo binary install", and it does the rest, with zero dependencies.

Doing that in, say, Rust or C++ might be possible, but it's often not trivial once you take a closer look at which library needs which other linked library. Including the "internet of headers" usually explodes the size of the binary and is very hard to manage, strip down, update, and keep in sync with upstream.

I think that is also why I prefer CGo over other FFI-based bindings or the Rust binding ecosystem. Purego is just way easier to use than anything else in practice.

But yeah, that's just my two cents. Nobody that uses Rust seems to care about "installation experience" these days. But if you tell your customers to install an SDK first, they'll likely nope out of your product before they've even tried it.


Rust ships static binaries, fwiw. You can even statically link your libc.



Are there any advantages to BPF over the built-in kernel auditing functionality, i.e. the audit framework (https://wiki.archlinux.org/title/Audit_framework)?

It'd be nice if these types of things focused less on {performance, security, observability} and could just export data into whatever system you want.


The audit framework is less flexible in what you can get events for. eBPF gives you basically unlimited options, though you have to implement it yourself. Additionally the audit framework is wonky. There's the kernel side plus a userspace daemon that logs the audit data to disk. If you want to implement your own daemon to act as the audit agent, maybe to do something more sophisticated than just logging to disk, it gets complicated. There's very little documentation on how to implement such a custom daemon, you basically have to just copy what the existing agent does and hope you aren't subtly breaking things.


eBPF allows for some in-kernel logic, which can allow for filtering of data before it ever gets to userland. This can make it much faster than audit. eBPF is also much easier to extend with more complex logic since you can write near-arbitrary programs.
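As a sketch of what that in-kernel filtering looks like (the event struct and map names here are made up), only events that survive the predicate ever cross into userspace:

    // Illustrative: report execve events, but only for root, via a ring buffer.
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct event {
        __u32 pid;
        __u32 uid;
    };

    struct {
        __uint(type, BPF_MAP_TYPE_RINGBUF);
        __uint(max_entries, 1 << 20);
    } events SEC(".maps");

    SEC("tracepoint/syscalls/sys_enter_execve")
    int trace_execve(void *ctx)
    {
        __u32 uid = (__u32)bpf_get_current_uid_gid();  // uid is the low 32 bits
        if (uid != 0)                  // filtered in-kernel; never hits userspace
            return 0;

        struct event *e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
        if (!e)
            return 0;
        e->pid = bpf_get_current_pid_tgid() >> 32;
        e->uid = uid;
        bpf_ringbuf_submit(e, 0);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";

Audit's rule language can filter in-kernel too, but it's far less expressive than an arbitrary program.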


Also eBPF does not seem to require a reboot with a special kernel parameter.


The problem with eBPF is there are tons of old Linux servers without eBPF support. I had to go with auditbeat instead of Sysmon for Linux because of this. Not only that, from the quick glance I gave their readme, I think auditbeat (which includes more than just the auditd subsystem) collects more security-relevant logs.

The huge problem with all these solutions, including Sysmon for Windows, is that they don't let you define throttling at a usefully granular level. Which means you either log and risk unexpected performance issues in some situations, or lose visibility.

Even with commercial EDRs, depending on kernel versions or other system properties is really bad and makes them hard to manage. They should "just work" and lend themselves to external config/runtime management.

Of course, log storage is another nightmare but if you can easily tune configs and selectively drop or throttle at the edge you can save a lot of money.


> Behind the scenes, when an application performs an operation, it gets intercepted at kernel level by the Pulsar BPF probes, turned into a unique event object and sent to the userspace.

It would be nice if we could dereference pointers in eBPF. I think this is maybe a thing soon? I know that landlock is doing something there, I believe with eBPF + LSM.

Anyway, this looks very cool and interesting. The FOSS instrumentation space is not amazing right now but the underlying capabilities are rapidly improving. I'm quite excited for this generation of tools.

edit: Ah nice, and it's in Rust. I hope that that's the future for eBPF and userland agents - performance, stability, and security are critical for this domain.


Landlock doesn't use eBPF; it has its own syscalls and supporting infrastructure in the kernel proper. I think the relationship to LSM is simply that the LSM component is what ties other syscalls into the subsystem, otherwise the Landlock infrastructure is never hit when operating on files.

AFAICT, Landlock works much the same way as OpenBSD unveil--it doesn't compare path strings at the point of open, but rather opens and caches dentry objects when you install the rule. Rules are basically attached to inodes. So, for example, if you permit read access at or under /some/path, then when installing the rule a Landlock application opens /some/path to acquire a file descriptor to the then-existing directory or file from which the Landlock syscall will retrieve and cache the dentry object. (OpenBSD unveil takes a path string, not a file descriptor, but the process is otherwise identical--the unveil syscall effectively performs the open, but there's no need to actually create a file descriptor.)

On Unix, when a file open is performed the open syscall traverses the string path, iteratively looking up (or creating) dentry objects for each path component. What Landlock does is that as each dentry object is traversed, it looks for Landlock rulesets attached to that dentry and accumulates a bitmask of permitted operations across all traversed dentries. When it reaches the terminal dentry object, if the requested operation (e.g. read-only, read-write, execute) isn't in the accumulated bitmask, then the open is rejected.

On OpenBSD, if the directory or file with the attached rule is unlinked then you effectively lose access to that path (including any subpath), as any newly created file or directory at that path will have a different inode. Notably, a rule effectively pins a directory or file for the life of the process, so it's possible to temporarily leak disk space, same as unlinking something while retaining a previously opened file descriptor. I assume (without looking deeper into the code) the same is true with Landlock, but it's possible it might prevent unlinking or do some extra gymnastics to provide slightly more intuitive behavior.

Attaching rules to dentries neatly avoids fatal TOCTTOU security issues from which path name string comparison-based access restrictions have typically suffered. But the mechanism leaks Unix VFS and FS semantics--existence of and relationship between dentries, inodes, and path components.
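For anyone who hasn't seen the userspace side, the whole flow is three syscalls plus a prctl. A minimal sketch (assumes Linux 5.13+ headers; error handling omitted; /some/path is illustrative):

    // Landlock API sketch: allow read-only access under /some/path.
    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        struct landlock_ruleset_attr ruleset_attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                                 LANDLOCK_ACCESS_FS_WRITE_FILE,
        };
        int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                                 &ruleset_attr, sizeof(ruleset_attr), 0);

        // Note the O_PATH open: the rule attaches to the *current* object
        // behind /some/path, which is exactly the dentry-pinning behavior
        // described above.
        struct landlock_path_beneath_attr path_beneath = {
            .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE,
            .parent_fd = open("/some/path", O_PATH | O_CLOEXEC),
        };
        syscall(SYS_landlock_add_rule, ruleset_fd,
                LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0);
        close(path_beneath.parent_fd);

        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);  // required before restricting
        syscall(SYS_landlock_restrict_self, ruleset_fd, 0);
        close(ruleset_fd);

        // From here on, handled accesses outside /some/path are denied.
        return 0;
    }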


Thanks for the detailed information, this was very clarifying for me.


You can access memory in ebpf.


I don't think that's been true historically. You can access maps that you provide, but dereferencing userland pointers (like path names passed into an 'open' call) has not been available. Or has that changed recently?



I don’t know about the past, but you can absolutely dereference userspace memory in ebpf.
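bpf_probe_read_user_str() (and bpf_probe_read_user()) landed in Linux 5.5 for exactly this. A sketch, assuming a BTF-generated vmlinux.h for the context type:

    // Illustrative: copy the path argument of openat() out of userspace.
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    SEC("tracepoint/syscalls/sys_enter_openat")
    int trace_openat(struct trace_event_raw_sys_enter *ctx)
    {
        const char *user_path = (const char *)ctx->args[1];  // userland pointer
        char path[256];

        // Copies from userspace safely; returns <0 if the page isn't
        // resident, which is one of the pitfalls the Trail of Bits
        // post calls out.
        long n = bpf_probe_read_user_str(path, sizeof(path), user_path);
        if (n > 0)
            bpf_printk("openat: %s", path);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";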


Gotta love security theater like this:

    curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Exein-io/pulsar/main/pulsar-install.sh | sh
So secure, I mean it's forcing https AND tls1.2 - maximum security! Especially when you just pipe a random script into shell.

Also, no information on its authors, who audited the code, what makes it "secure" enough to warrant giving an administrative entitlement over my system. Hard pass.


Fair point on the security theatre, but I don't get this:

> Also, no information on its authors, who audited the code, what makes it "secure" enough to warrant giving an administrative entitlement over my system. Hard pass.

The code is hosted by an organization on GitHub, who have clear links to their website, email, twitter. On their website they have their office addresses and it seems to me that they are registered with the Italian government as a business.

They don't mention anything about it being audited or make any claims that it is secure. Both of these would be something to follow up on if someone is planning to use it within their org.

I'm not sure what it is you want from them. They've released something, and made it open source. It is a little over a year old, so they're probably still getting their footing and figuring things out.


I want "security" software authors to stop doing harm. Just as physicians take up the Hippocratic Oath, I think we need to set higher standards for programs, applications and processes which avow to improve security, because too often they seem to have the opposite effect. Traditional anti-virus tools are a classic example of this.

I do see their website now, but I'm still confused:

> We are a team of visioneers, engineers, designers, and security experts backed by top-tier VCs who believe in making security work for everyone.

Great. What makes you security experts? In other words, why should we trust you?

You say the code is open source and that makes it trustworthy, but source availability is not a panacea. It must really be vigorously stressed, tested and abused by competent hackers (as a consensual contracted service) - again, if we are to call it "security" software.

Plainly, this is a beta test for a company that wants to make some money and has generously externalized bugs, issues and other nuances of development onto the open source community, which through some sort of Stockholm Syndrome has actually convinced itself that more code - even if it's not good code - is still a good thing.


This is such a ridiculous sentiment. You're blaming these people for Antivirus now? Your whole post is just a nonsensical rant.

If you want to say "I'd like to see software like this audited for security" just say so.


Then there's the followup question: Who trusts the auditors?


Well - since it's open source, theoretically you can build it yourself and "trust but verify" the audit, although there we're also assuming you trust your own judgement or that of your security team.


Well, right, of course. My comment is more along the lines of "paying for an audit implies putting faith in those auditors to do a good enough job"


It’s open source. I’ll inspect the source myself. Cross check against auditor findings. Build from source


Consider this a medical trial


The curl | bash thing gets talked about so much here, but I wonder who’s stupid enough to perform their malicious tasks in the shell script (which can be inspected at runtime with `bash -x`) rather than in the binary itself.


curl|bash is possibly worse in one minor way: the payload can be dynamically generated or swapped out based on analysis of the target.

Downloading directly from GitHub or apt-get is a tiny bit safer in that regard.


That's not security theater at all. It prevents unsafe redirects (--proto '=https'), which I've found in the wild exactly because I pass that flag. And specifying a minimum TLS version seems like good practice in general.


just because you got the resource from GitHub successfully and securely doesn’t mean it’s a safe resource to execute.


If you don't trust the installation script not to be malware, why are you trusting the binary that the script installs?

It's not as if the instructions would allow an attacker to pass you another binary, after all.


I didn't say it was. The poster was commenting on the arguments that enforce HTTPS and TLS1.2, and they were wrong - those arguments are not security theater at all.

As for 'is curl | sh safe?' I am done arguing with people who don't understand that it's absolutely fine.


The poster was not suggesting those things are not legitimate and secure maneuvers ... the poster is suggesting that you can have the most rock-solid and authentic transport mechanism and, in spite of that, still end up compromising your system by running the wrong thing.

> As for 'is curl | sh safe?' I am done arguing with people who don't understand that it's absolutely fine.

eep.


None of this is security theatre.

It's ensuring you'll get the exact script they provided and run that to install on your system. It will install something with really high privilege, so if you trust them enough to install that, surely you trust them enough to run their install script.

If you can't trust their install script, I'd say you should probably trust them even less with running something in-kernel so none of this is an actual issue.


> It's ensuring you'll get the exact script they provided and run that to install on your system.

Cryptographic signatures can prove that. Curl bash'ing really doesn't.

For the very few pieces of software I install that aren't shipped with Debian (and hence signed and, for the most part, also bit-for-bit reproducible, btw), I do verify their signatures (when they're signed).

For that one program I really need whose authors think bash curl'ing is somehow as secure as Debian-signed (and reproducible) packages (it really isn't), or that https offers as much guarantee as signed software (it really doesn't), what I do is download the binary myself, take its cryptographic hash, and store that hash for later use.

So, at least, if I need to reinstall it I know I'm reinstalling the exact same binary I curl bash'ed last time.

Curl bash'ing is really a pathetic way to ship software.


> Cryptographic signatures can prove that.

Assuming you have a fully trusted and bootstrapped side channel to get the public key from. And assuming that the compromise that resulted in this maliciously published binary also didn't compromise the private key.

Both are tall orders.


I see people rant about this, but not about other equivalent ways of getting scripts from the author to the system and running them.

If the instructions were "download, unpack, and run ./configure; make; make install" would you post the same reaction?


It’s a fair rant since it is their recommended installation method, and it’s a security project.


Your comment is confusing because you are quoting from how to install Rust, instead of quoting the OP readme.

The OP readme is asking to run a different install script using the same method.

    curl --proto '=https' --tlsv1.2 -sSf hxxps://raw.githubusercontent.com/Exein-io/pulsar/main/pulsar-install.sh | sh


Is this a joke that I don’t Linux enough to get? That’s the same command with https redacted for some reason.


They edited their comment.

When I responded their comment read

  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Which is from https://www.rust-lang.org/tools/install and therefore was confusing


OP edited their comment.


Shell security is a joke anyway, there's plenty of other things to be cynical about than an install script. Such as how I can elevate privileges on your machine by phishing your password with a fake sudo binary in $PATH.


Probably a copy-and-paste from the docs for installing the Rust toolchain

https://www.rust-lang.org/tools/install

> curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Sure it’s sketchy, but if you are doing your DD, you are inspecting the script itself.

The script itself (https://raw.githubusercontent.com/Exein-io/pulsar/main/pulsa...) doesn’t appear to do anything out of the ordinary. It downloads a few helper scripts to temp directory and installs to /usr/bin.

Haven’t looked at the Pulsar code yet since I’m on mobile.

If you find anything abnormal then yea I wouldn’t install it either. Especially if it needs escalated privileges.


It's missing a sudo.


> We do not recommend building Pulsar from source. Building from source is only necessary if you wish to make modifications. If you want to play with the source code check the Developers section of the documentation.

Suspicious.

> The recommended approach to getting started with Pulsar is by using the official installation script. Follow the guide in the Quickstart section.

Doubly suspicious.


Security stuff should probably have a hermetic build. And it should be a one-liner to do the build and take a hash of the resulting binary. They should publish source, binaries, and the resulting hash, allowing anyone to easily verify and call them out if the hash doesn't match.

Bazel lets you easily do this.


> Suspicious.

Why do you think that should be suspicious? Most open source software comes with multiple installation options: some faster ones if you only need to use it as is (installation script), some more flexible ones if you intend to modify it (build from source).


It's fine to have options, but telling people not to build from source is suss.


They don't tell people not to build from source; they just don't recommend it. And not recommending it isn't the same as recommending against it: they just aren't saying it's the blessed way to get up and running. They have a whole "Developers" document about it.


Most vendors refer customers to their compiled releases, as support is easier than having everyone compile their own binary.

The fact that they stated this overtly does come off sounding a little bit strange.


What's hilarious is that their build process is just `cargo build --release`, which is exactly like every other Rust project. I do not see how they expect people to prefer using an installation script when `cargo install` exists and people generally prefer to build Rust utilities from source anyway because of how painless the build system is.

My best guess is that they use the container as a deterministic build environment and don't want to deal with a billion different LLVM point-releases being used in the wild.


I don't see what's hilarious about that. You justified their choice pretty much perfectly - they want to ensure that when someone reports a bug that they're running exactly the software that's expected.


> You justified their choice pretty much perfectly - they want to ensure that when someone reports a bug that they're running exactly the software that's expected.

As a user, I'd rather build it against my local environment so that it works on my machine. I can understand their perspective, but it's still hilarious to me because the Rust community is generally build-oriented, as opposed to binary-oriented. So you get comments like OP's which balk at the idea of being so strongly discouraged from building.


> the Rust community is generally build-oriented, as opposed to binary-oriented

That is true for developer tools and libraries, but is it also true for end-user applications?

I've noticed many Rust projects, e.g. on GitHub, use CI to provide proper releases with binaries.

> As a user, I'd rather build it against my local environment so that it works on my machine.

To me it sounds likely that this particular user is also a developer. I don't think people compile that much stuff these days, even on unixy operating systems. Developers often prefer to just cargo install things because they have everything set up for it to work, but expecting users to first set things up with rustup sounds like a tall order to me.


But then just do it? They have a whole doc on how to build it locally.


I was replying to a comment with agreement, not saying that this is an obstacle for me.





