
How crazy would it be to have a package repository that also builds the artifacts it distributes? You’d need a high barrier to entry to save on costs and time sifting through garbage. Perhaps it’s this high barrier that would prevent such a repository from taking off, though. Perhaps this is just a really dumb step on a path leading back to simple checksum validations… though with those, you’re only validating that whatever was uploaded is what you downloaded; it doesn’t ensure that it was built from a known set of source files… hard problems.



Distro repositories (like the ones you have on Debian / Ubuntu / Red Hat etc.) do this.

They work on a different model, where only packages that are deemed "worthy" are included, and there's a small-ish set of maintainers that are authorized to make changes and/or accept change requests from the community. In contrast, programming language package managers like cargo, pip or npm let anybody upload new packages with little to no prior verification, and place the responsibility of maintaining them solely on their author.

The distribution way of doing things is sometimes necessary, as different distributions have different policies on what they allow in their repositories, might want to change compilation options or installation paths, backport bug and security fixes from newer project versions for compatibility, or even introduce small code changes to make the program work better (or work at all) on that system.

One example of such a repository, for the Alpine Linux distribution, is at https://github.com/alpinelinux/aports


That's what nixpkgs does for Nix/NixOS. The package set is continuously built by a CI system and made publicly available: https://github.com/NixOS/nixpkgs#continuous-integration-and-...


Go kind of solves that by making the git repo the source of truth for a package, and hosting a cache for it.

The problem with it is that you need the full git URL in every file where you import it, which is a pain if the repo changes location, or if you want to use a fork or a local version. Versioning is also tricky, to the point that Go recommends giving each major/breaking version its own import path suffix (often maintained on a separate branch), which requires updating every import statement.
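Concretely, that looks something like this (module paths are hypothetical): once a library publishes a breaking v2, the version suffix becomes part of its import path, so every consumer has to touch every file that imports it.

```
// go.mod of a consumer (hypothetical module paths)
module example.com/myapp

require github.com/example/mylib/v2 v2.3.0

// and in every source file:
//     import "github.com/example/mylib/v2/parser"  // was .../mylib/parser for v1
```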

I think a good middle ground would be to have a central repository and/or package configuration file that maps package names to git repos and versions to commits (possibly via tags). And of course use hashes to lock the version to specific contents.
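Sketching that idea out (a hypothetical format, not any existing tool's): a registry file maps a short package name to a repo, and each version to a commit plus a content hash that pins it.

```
# registry.toml (hypothetical; commit and hash values are placeholders)
[packages.foolib]
repo = "https://github.com/example/foolib"

[packages.foolib.versions."1.2.0"]
tag    = "v1.2.0"
commit = "3f9c2ab0d1e4f567"
sha256 = "9b8f1c..."  # hash of the tree contents, locking the version to specific bytes
```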

Bazel kind of does this, but it doesn't have any built in version resolution or transitive dependency resolution (although in some cases there are other tools that help). And it can add a lot of complexity that you may not need.


Bazel has modules now: https://bazel.build/external/module

Not tried them, but they look like a reasonable dependency-handling solution on paper: each module can declare its own dependencies and Bazel will figure it out for you like a package manager. Their old WORKSPACE way of doing it was a nightmare; patterns emerged where repos would export a function to register their dependencies, but the first declaration of any name would win, so you weren't guaranteed to have a compatible set of workspaces at the end.
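For reference, a Bzlmod module declares itself and its direct dependencies in a MODULE.bazel file along these lines (names and versions are illustrative), and Bazel resolves the transitive graph from a registry:

```
# MODULE.bazel (illustrative names and versions)
module(name = "my_project", version = "1.0")

bazel_dep(name = "rules_cc", version = "0.0.9")
bazel_dep(name = "protobuf", version = "23.1")
```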


Isn't this gentoo?


No, Gentoo does something far from it - it builds everything on the host machine every time, more or less.


Bruh. I mean this as a genuine ask. Have you heard of Nix and is there a reason it didn't land on your radar or was rejected?

Because what you want exists and has a thriving community, and a package set that outclasses, well, statistically every other package manager in existence.

I swear, it's a daily occurrence for me to see software engineering challenges posited here as damn near impossible that Nix has been solving for over a decade.

What if you could run a single command and have exact insight into the source you're using for every single package on your system, with the context of the dependency graph it exists in?
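For what it's worth, on a NixOS system that's roughly the following (exact subcommand names vary a bit between Nix versions):

```
# Print the dependency graph (closure) of the running system
nix-store --query --tree /run/current-system

# Show the derivation behind a store path: how it was built,
# from which sources, with which inputs
nix derivation show /run/current-system
```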

I cannot wait for this wave to crash and for people to realize how much engineering effort is saved by using Nix, and that all of these things they've known they wanted for years already exist. But hey, the syntax takes time to get used to, and how do you weigh that against the countless blog posts, hours, and institutional knowledge you need to actually use Docker properly? And then, later on, some Go-based SBOM tool made by a VC-backed startup that fundamentally still does an inferior job to Nix. Sigh.

Well, anyway, I guess Nix will keep being used by hedge funds, algorithmic traders, "advanced defensive capabilities" companies, literal (launched, in space) satellites, wallet manufacturers, etc., while everyone else listens to the syntax decriers.


crates.io already builds the artefacts.

But the source code that is sent to crates.io is not necessarily the same as the one in the public repo linked to the crate.


It's possible that crates.io might attempt to build a crate when published as a sort of sanity check (I don't know if this is true, but it's certainly feasible), but it doesn't distribute binaries, it distributes source code.


> it doesn't distribute binaries, it distributes source code.

It definitely does contain generated files, at least one crate has Rust code generated by a Python script that is not in the crate, only in the upstream Git repository.


Yes, let's clarify: crates.io expects a Rust crate, which itself can contain whatever junk the uploader wants. But crates.io isn't taking your source, building it, and then distributing those executables; at the end of the day it's distributing the source code of a Rust crate as given by whoever published it.
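If you want to see exactly what a crate ships, cargo can show you: `cargo package` builds the same .crate archive that would be uploaded (the crate name and version in the path below are placeholders).

```
# List the files that would be included in the published archive
cargo package --list

# Build the archive locally and inspect its contents
cargo package
tar -tzf target/package/mycrate-0.1.0.crate
```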


Do you have a source for crates.io building artefacts? I have a couple of crates on it and never saw any sign it tried to compile them, even when they were broken.


Ah yeah, I suppose that’s what I really meant: a means of verifying that builds link to source that is publicly available. Sounds like the source repository has to be in on it too.



