
Not all distributions suffer from this, because they have the tooling to patch all crate versions given a name. Compile times are actually the worst part, but they can be mitigated. These distros also fundamentally accept the compilation model of the language in question, so they have to develop solutions if they want to supply software to their users rather than make up excuses. Most distros don't like vendoring, for reasons. But many code maintainers do like it, for reasons. So it is what it is.

There are also more tools than ever to correlate versions of OSS packages with reported vulnerabilities, e.g. Google's osv.dev. 95% or more of Rust vendoring isn't actually "vendoring" in the sense that you have to figure anything out. The dependencies are literally written in the lock file with their versions, and each one can be traced back to a Git hash automatically in almost all cases. Some crates that bind C libraries might require real work, where you have to look at the code and ask "What version of the code did they `cp -R` into here?", but those are limited in number in practice, and detectable. You aren't doing any detective work for 98% of this stuff. It scales pretty well with a few (good) heuristics.

Rust even has the advantage that vast amounts of code are actually statically analyzable under known sets of configurations/features. In the future I suspect a tool like `govulncheck` will be developed for Rust, making accurate vuln tracing even easier.

While I'm admittedly biased because I work on a distro where this problem isn't as directly relevant, my answer is that in 2024 if you don't have the tooling to do this stuff, it's absolutely on you. Whether as a policy decision or a technical limitation, whatever.




What do you mean that they can patch all crate versions given a name? Sometimes the same patch can be trivially applied to many different versions of a library, but in the general case this is not true: the amount of work is in direct proportion to the number of library versions you have to patch, even if you know exactly which versions those are. So the difference between maintaining, say, 3 versions of a library and 30 is going to be ~10× the work, even with tools that instantly identify the exact versions that need to be patched.


Tooling to apply the patches is trivial. This isn't the problem I'm describing.

As the sibling comment says, the issue is when the patch doesn't apply cleanly to some subset of versions in use, or even worse if they appear to apply cleanly but leave behind logic bugs.

If distributions did approach this by "distro-patching" specific versions, then that would reintroduce the upstream complaint that distros are using versions of dependencies not sanctioned by them. So your approach, even if it did work in the general case, would take you back to square one.


Yeah, I don't think either of these is really a big problem. People identify the vuln and the fix, and they tend to share the patches for backports across versions and across distros, and/or immediately move downstream users to fixed versions because, again, the nifty lockfiles literally tell you whether they are vulnerable, since they contain the dep graph. You query every package's dep graph and look at the bounds. Maintainers with RUSTSEC advisories will yank the crate on crates.io, forcing upgrades across the stack. You can apply patches to most versions of a crate (again: you can find all of them!) and loosen the bounds or make more targeted fixes if needed. But even if 30 versions are vulnerable, in practice 30 patches don't need to be written in the p99 case. So I just disagree with the sibling on that, based on my experience. Most of these packages don't move at the velocity of the Linux kernel. They aren't rewriting 10,000 lines of code per version, so you don't have to re-interrogate the fix each time.
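The "query the bounds" step above amounts to a semver range check: RUSTSEC/OSV advisories typically describe affected ranges as introduced/fixed version pairs. A minimal sketch, using a hypothetical advisory and ignoring pre-release/build metadata:

```python
def parse_semver(v: str) -> tuple[int, int, int]:
    """Split "MAJOR.MINOR.PATCH" into a comparable tuple.

    Pre-release tags and build metadata are stripped, which is a
    simplification; real semver ordering treats pre-releases specially.
    """
    core = v.split("+")[0].split("-")[0]
    major, minor, patch = core.split(".")
    return int(major), int(minor), int(patch)

def is_vulnerable(version: str, introduced: str, fixed: str) -> bool:
    """True if introduced <= version < fixed, the shape advisory ranges take."""
    v = parse_semver(version)
    return parse_semver(introduced) <= v < parse_semver(fixed)

# Hypothetical advisory: vulnerability introduced in 0.8.0, fixed in 0.8.4.
locked = ["0.7.9", "0.8.3", "0.8.4"]
flagged = [v for v in locked if is_vulnerable(v, "0.8.0", "0.8.4")]
print(flagged)  # ["0.8.3"]
```

Run over every lockfile in the distro's package set, this is the whole "which of our packages ship a vulnerable version?" query; no detective work, just tuple comparisons.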

But most importantly, I don't see any argument that this is meaningfully more work than all the manual shit Debian maintainers do to get all this stuff working, on top of the fact that they try to ship Frankenstein versions of packages that silently break and piss off users (every distro ships some busted packages, but it's a matter of give and take). Maintainers of other distros don't spend time on all the stuff the OP spent time on. They don't spend time loosening lockfile bounds on Rust crates for filesystem-sensitive tooling(!!!) in an effort to make a 35-year-old cultural rule about C/C++'s compilation model apply to them. Mostly, they run some script like 'import crates.io package XYZ version 1.2.3', commit the result, and get it code reviewed and merged. They can then spend time on other things. And often these same processes are used for all updates, including many security updates. This stuff is insanely well-oiled in other projects.
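As a sketch of that 'import crates.io package XYZ version 1.2.3' step: crates.io serves every published version as an immutable tarball from its static CDN, so the script mostly needs to construct a URL, fetch, unpack, and commit. Assuming the documented static.crates.io layout (fetch/unpack/commit omitted):

```python
from urllib.parse import quote

def crate_download_url(name: str, version: str) -> str:
    """URL of the exact .crate tarball for a published version.

    Assumes crates.io's static CDN layout:
    https://static.crates.io/crates/{name}/{name}-{version}.crate
    """
    n, v = quote(name), quote(version)
    return f"https://static.crates.io/crates/{n}/{n}-{v}.crate"

# An import script would download this tarball, unpack it into the
# package tree, and commit the result for code review.
print(crate_download_url("serde", "1.0.203"))
```

The tarball is content-addressed by the checksum already recorded in downstream lockfiles, so the import can be verified end to end before it lands.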

Sorry to be the hater, but this is squarely a Debian dysfunction as far as I can see? If they spent more time actually solving infrastructural problems they had, they would be able to spend time on actually delivering more working software to their users in the world they actually exist in.


> in an effort to make a 35-year-old cultural rule about C/C++'s compilation model apply to them

> this is squarely a Debian dysfunction as far as I can see

Nothing I have said is specific to Debian or to C/C++. I've presented general arguments that hold regardless of distribution policies or the language toolchain used. Nothing you have said demonstrates why the general points I have made would apply to Debian or to C/C++ but not to Rust-based projects.

If you still believe that this is the case, then I think that demonstrates that you do not understand the matter deeply enough.

The only thing that Debian is insisting on is basic software supply chain hygiene. Every so often, there is a very public failure in ecosystems that don't bother with this hygiene, while Debian continues unaffected.

> You can apply patches to most versions of a crate

The point of the complaint that spawned this discussion is that there's an upstream that considers it unacceptable for downstreams to be doing this patching (or indeed running any dependency version that is not sanctioned by upstream). It is a contradictory position to justify downstream patches as a supposedly easy solution while at the same time complaining that downstreams patch at all.

The fact is that distribution users do not expect to be beholden to laggard upstreams while awaiting security fixes. This requires distributions to modify the dependency versions used to build their packages (or to patch, which is the equivalent "off-piste" behaviour).



