
In the era of modern CI and build infrastructure, I don't really think that's materially an issue.



Those CI and build infrastructures rely on Debian and Red Hat being able to build system packages.

How would an automated CI or build infrastructure stop this attack? It was stopped because a competent package maintainer noticed a performance regression.

In this case, this imagined build system would have to track every Rust library used in every package in order to know which packages need an emergency release.


I... don't see your point. Tracking the dependencies a static binary is built with is already a feature of build systems, maybe just not the ones Debian and RH are using now; I imagine they would add it if they were shipping static binaries.

Rust isn't really the point here, it's the age old static vs dynamic linking argument. Rust (or rather, Cargo) already tracks which version of a dependency a library depends on (or a pattern to resolve one), but it's beside the point.
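For illustration, a Cargo.toml entry is a semver requirement rather than an exact pin (the crate name here is made up); the exact version gets recorded in Cargo.lock when the project is built:

    [dependencies]
    # "1.4" means "any compatible 1.x release >= 1.4", resolved to one exact version in Cargo.lock
    some-compression-crate = "1.4"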


Rust is the issue here because it doesn't give you much of an option. And that option is the wrong one if you need to do an emergency upgrade of a particular library system-wide.


It's really not. It isn't hard to do a reverse search of [broken lib] <= depends on <= [Rust application] and then rebuild everything that matches. You might have to rebuild more, but that's not really hard with modern build infrastructure.
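For a single project, Cargo can already answer the reverse question; something like this (hypothetical crate name) lists everything in the dependency graph that depends on it:

    # invert the dependency tree: show which packages depend on the given crate
    cargo tree --invert some-compression-crate

A distro build system would run the same kind of query across its whole package set using its own metadata.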

Not to mention if you have a Rust application that depends on C libraries, it already dynamically links on most platforms. You only need to rebuild if a Rust crate needs to be updated.
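That's easy to check; a sketch, with a hypothetical binary path:

    # list the shared C libraries a compiled Rust binary links against
    ldd target/release/myapp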


> imagined

Cargo already has this information for every project it builds. That other systems do not is their issue, but it’s not a theoretical design.


So, I know that librustxz has been compromised. I'm Debian. I must dive into each rust binary I distribute as part of my system and inspect their Cargo.toml files. Then what? Do I fork each one, bump the version, hope it doesn't break everything, and then push an emergency release!??!


> I must dive into each rust binary I distribute as part of my system and inspect their Cargo.toml

A few things:

1. It'd be Cargo.lock, not Cargo.toml; the lock file records the exact resolved version of every dependency (see the excerpt after this list).

2. Debian, in particular, processes Cargo's output here and makes individual debs, so they already have this information available through their regular package manager tooling.

3. You wouldn't dive into and look through these by hand; you'd have it as a first-class concept. "Which packages use this package" should be table stakes for a package manager.
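To make 1. concrete, a Cargo.lock entry pins an exact version and a checksum of the published source; a hypothetical excerpt:

    [[package]]
    name = "some-compression-crate"
    version = "1.4.2"
    source = "registry+https://github.com/rust-lang/crates.io-index"
    checksum = "<sha256 of the crate as published>"

So "which of my packages pinned the bad version" is a metadata query over these files, which is exactly the kind of thing distro tooling can automate.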

> Then what? Do I fork each one, bump the version, hope it doesn't break everything, and then push an emergency release!??!

The exact same thing you do in this current situation? It depends on what the issue is. Cargo isn't magic.

The point is just that "which libraries does the binary depend on" isn't a problem once you have actual tooling.

People already run tools like cargo-vet in CI to catch versions of packages that may have issues they care about.
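A minimal CI step, assuming cargo-vet is installed and the repository has been initialized with `cargo vet init`, is just:

    # fails if any dependency in Cargo.lock isn't covered by a recorded audit or exemption
    cargo vet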


> The exact same thing you do in this current situation? It depends on what the issue is. Cargo isn't magic.

False. In the current situation, you just release a new shared library that is used system-wide.


Okay, so the analogous situation here is that you release a new version of the library, and rebuild. Done.
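Roughly, with a hypothetical crate name and version:

    # pull in the fixed release of just the affected crate, then rebuild
    cargo update -p some-compression-crate --precise 1.4.3
    cargo build --release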


Except that's not the case at all with Rust.


Except it is. The system package maintainers release a new build of the package in question and then you install it. There's not really anything else to do here. There's nothing special about Rust in this context, it would be exactly the same scenario on, for example, Musl libc based distros with any C application.


Fundamentally there is no difference. In practice, Rust makes things a lot worse. It encourages the use of dependencies from random (i.e. published with cargo) sources without much quality control. It is really a supply chain disaster waiting to happen. A problem like this would propagate much faster. Here the threat actor had to work hard to get his library updated in distributions, and at each step there was a chance that this would be detected. Now think about a Rust package automatically pulling in transitively 100s of crates. Sure, a distribution can later figure out what was affected and push upgrades to all the packages. But fundamentally, we should minimize dependencies and we should have quality control at each level (and ideally we should not run code at build time). Cargo goes in exactly the opposite direction. Rust got this wrong.


Whether a hypothetical alternate world in which Rust didn't have a package manager or didn't make sharing code easy would be better or worse than the world we live in isn't an interesting question, because in that world nobody would use Rust to begin with. Developers have expected to be able to share code with package managers ever since Perl 5 and CPAN took off. Like it or not, supply chain attacks are things we have to confront and take steps to solve. Telling developers to avoid dependencies just isn't realistic.


> It encourages the use of dependencies from random (i.e. published with cargo) sources without much quality control. It is really a supply chain disaster waiting to happen.

Oh I 100% agree with this, but that's not what was being talked about. That being said, I don't think the distribution model is perfect either: it just has a different set of tradeoffs. Not all software has the same risk profile, and not all software is a security boundary between a system and the internet. I 100% agree that the sheer number of crates the average Rust program pulls in is... not good, but it's also not the only language/platform that does this (npm, pypi, pick-your-favorite-text-editor, etc.), so singling out Rust in that context doesn't make sense either; it only makes sense when comparing it to the C/C++ "ecosystem".

I'm also somewhat surprised that the conclusion people come to here is that dynamic linking is a solution to the problem at hand, or even a strong source of mitigation: it's really, really not. The ability to, at almost any time, swap out what version of a dependency something is running is what allowed this exploit to happen in the first place. The fact that there was dynamic linking at all dramatically increased the blast radius of what was affected by this, not decreased it. It only provides a benefit once the problem is discovered, and that benefit is mostly in terms of fewer packages needing to be rebuilt and updated by distro maintainers and users. Ultimately, supply-chain security is an incredibly tough problem that is far more nuanced than valueless "dynamic linking is better than static linking" statements can even come close to communicating.

> A problem like this would propagate much faster. Here the threat actor had to work hard to get his library updated in distributions, and at each step there was a chance that this would be detected.

It wouldn't though, because programs would have had to be rebuilt with the backdoored versions. The bookkeeping would be harder, but the blast radius would probably have been smaller with static linking, except where a package is meticulously maintained by someone who bumps their dependencies constantly, or where the exploit goes unnoticed for a long period of time. That's trouble no matter what.

> Now think about a Rust package automatically pulling in transitively 100s of crates.

Yup, but it only happens at build time. The blast radius has different time-domain properties than with shared libraries. See above. 100s of crates is ridiculous, and IMO the community could (and should) do a lot more to establish which crates are maintained appropriately and are actually being monitored.

> Sure, a distribution can later figure out what was affected and push upgrades to all the packages.

This is trivial to do with build system automation and a modicum of effort. It's also what already happens, no?

> But fundamentally, we should minimize dependencies and we should have quality control at each level

Agreed; the Rust ecosystem has its own tooling for quality control. Just because it's not maintained by the distro maintainers doesn't mean it's not there. There is a lot of room for improvement though.

> (and ideally we should not run code at build time). Cargo goes in exactly the opposite direction. Rust got this wrong.

Hard, hard, hard disagree. Nearly every language requires executing arbitrary code at compile time; yes, even a good chunk of C/C++. A strong and consistent build system is a positive in this regard: it would be much harder to obfuscate an attack like this in a Rust build.rs because there aren't multiple layers of abstraction with an arbitrary number of ways to do it. As it stands, part of the reason the xz exploit was even possible is the disaster that is autotools. I would argue the Rust build story is significantly better than the average C/C++ build story. Look at all the comments here describing the "autotools gunk" that is used to obfuscate what is actually going on. Sure, you could do something similar for Rust, but it would look weird, not "huh, I don't understand this, but that's autotools for ya, eh?"
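For reference, a Cargo build script is an ordinary Rust program named build.rs at the crate root, compiled and run before the crate itself; a minimal sketch (the cfg name here is made up):

    // build.rs: plain Rust, no generated m4/shell layers in between
    fn main() {
        // re-run this script only when it changes
        println!("cargo:rerun-if-changed=build.rs");
        // hypothetical: enable a cfg flag for the main compilation
        println!("cargo:rustc-cfg=has_fast_path");
    }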

To be clear, I agree with you that the state of Rust and its packaging is not ideal, but I don't think it necessarily made wrong decisions; it's just immature as a platform, which is something that can and will be addressed.


And Alpine Linux is largely a mistake.


That's not an argument, nor is it productive. Nobody even mentioned Alpine. Go away.


Ok well have a nice day I guess.



