
I am very concerned about Rust.

Rust’s “decision” to have a very slim standard library has advantages, but it severely amplifies other issues. In Go, I pull in zero dependencies to make an HTTP request. In Rust, pulling in reqwest drags in at least 30 distinct packages (https://lib.rs/crates/reqwest). Date/time handling, “basic” base64, common hashing and checksums, etc.: they all become supply-chain vectors.
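For comparison, a one-liner of “my” code sits on top of that whole tree (a minimal sketch; the version number and feature flag are illustrative):

    // Cargo.toml: reqwest = { version = "0.12", features = ["blocking"] }
    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // One line of application code, ~30 transitive crates underneath it.
        let body = reqwest::blocking::get("https://example.com")?.text()?;
        println!("{body}");
        Ok(())
    }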

The Rust ecosystem’s collective refusal to land stable major versions is one of the amplifying issues. “Upgrade fatigue” hits me, at least: “sure, upgrade ring to 0.17” (which is effectively the 16th major version). And because 0.x versions are allowed to be incompatible, it’s not really possible to opt out of upgrading; it only takes a short while before some other transitive dependency breaks because you were slow to upgrade. I recently spent a while rewriting my code to support running multiple versions of the `http` library, for example (which, to be fair, did just land version 1.0). My NATS library (https://lib.rs/crates/async-nats) is at version 34. My transitive base64 dependency is at version 22 (https://lib.rs/crates/base64).
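(For what it’s worth, the multi-version `http` support boils down to Cargo’s dependency renaming, which lets two incompatible majors coexist; the alias names here are my own:)

    # Cargo.toml
    [dependencies]
    http_old = { package = "http", version = "0.2" }
    http_new = { package = "http", version = "1" }

You then get to hand-write conversion shims between http_old and http_new types.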

This makes it nearly impossible for me to review these libraries and pin them: if I pin foo@0.41.7 and bar needs foo@0.42.1, I just get both. bar can’t declare >=0.41, because the whole point of a 0.x series is that it is not backwards compatible. The process becomes so time-consuming that I expect people will either just stop reviewing their dependencies (as if they ever did), or accept that they have to reinvent everything from URL parsing to constructing HTTP headers to doing CRC checks.
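A sketch with the hypothetical foo/bar from above; cargo tree’s --duplicates flag makes the double inclusion visible:

    # Cargo.toml
    [dependencies]
    foo = "=0.41.7"   # my reviewed, pinned version
    bar = "2"         # transitively requires foo 0.42.x

    # `cargo tree --duplicates` will now list both foo 0.41.7 and foo 0.42.1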

Combine this with a build- and compile-time system that allows completely arbitrary code execution and is routinely used to wrap native code, the exact vector of the xz attack (look at the low-level libs you inevitably pull in). Sure, build scripts and the macro system enable stuff like the amazing sqlx library, but that build and macro code is already so hard to read that it takes proper wizardry to properly understand.
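To make the arbitrary-code-execution point concrete: a build.rs like this runs on every machine that compiles the crate, long before anyone audits the library code itself (a deliberately tame sketch):

    // build.rs -- executed by cargo at compile time
    use std::process::Command;

    fn main() {
        // Nothing stops a build script from reading files, spawning
        // processes, or phoning home. This one is harmless only by
        // convention:
        let _ = Command::new("uname").arg("-a").output();
        println!("cargo:rerun-if-changed=build.rs");
    }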




You have perfectly put into words all my thoughts.

I have been thinking about ways to secure myself, as it is exhausting to think about it every time there is an update or some new dependency.

After this attack, I think the only sure way is to unplug the computer and go buy goats.

The next best thing? Probably ephemeral VMs or some Codespaces / "cloud dev env" thingy (except neither would have saved me in the xz case).


> In Rust, pulling reqwest pulls in at least 30 distinct packages

This would be less of a problem if each dependency (and, in turn, its dependencies) were individually sandboxed and only allowed to access specific inputs/files at runtime, in the capability-based security (https://en.wikipedia.org/wiki/Capability-based_security) fashion.

This way the attack surface would be cut down as much as possible, and exploits limited to the (sub)program's output or the specific (writable) files it was granted access to.
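Crates like cap-std already experiment with this model. A hand-wavy sketch of the idea, with made-up types:

    // Hypothetical capability-style API: the dependency never gets
    // ambient authority, only a handle scoped to one directory.
    use std::{fs, io, path::PathBuf};

    struct DirCap { root: PathBuf }

    impl DirCap {
        fn open(&self, rel: &str) -> io::Result<fs::File> {
            // A real implementation would also reject ".." traversal.
            fs::File::open(self.root.join(rel))
        }
    }

    // The untrusted parser can only touch what it was explicitly handed.
    fn untrusted_parse(_input: fs::File) -> io::Result<()> { Ok(()) }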


Or you vendor everything.

You don't automatically download anything at build or install time; you just update your local source copies when you want to. Which, to be clear, I know means rarely.
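For Rust specifically, cargo vendor does most of the mechanical work (the path is illustrative; the config stanza is what the command itself tells you to add):

    $ cargo vendor third-party/vendor

    # .cargo/config.toml
    [source.crates-io]
    replace-with = "vendored-sources"

    [source.vendored-sources]
    directory = "third-party/vendor"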

It's 1970 all over again!


Vendoring is nice, and I usually prefer it, but you don't always have the time or people for it.

Vendoring + a custom build system (Bazel?) for everything is basically Google's approach, if what I have read is correct. Definitely better than anything else we have, but the resources for it are not something most can afford.

P.S. Also what mrcus said: if we trust the upstream build process, we may as well trust their binaries.


That was what the 1970 crack was about.


Yes, but this doesn’t prevent attacks like the xz one, where the source code looks fine but the build scripts alter it.



