Zb: An Early-Stage Build System (zombiezen.com)
254 points by zombiezen 52 days ago | hide | past | favorite | 122 comments



Whoa, nifty. Can you speak more to the interop issues with Nix? I've been working on a pretty large Nix deployment in the robotics space for the past 3ish years, and the infrastructure side is the biggest pain point:

* Running a bare `nix build` in your CI isn't really enough— no hosted logs, lack of proper prioritization, may end up double-building things.

* Running your own instance of Hydra is a gigantic pain; it's a big ball of Perl with compiled components that link right into Nix internals, and an architectural fiasco.

* SaaS solutions are limited and lack maturity (Hercules CI is GitHub-only; nixbuild.net is based in Europe and, last I checked, was still missing some features I needed).

* Tvix is cool but not ready for primetime, and the authors oppose flakes, which is a deal-breaker for me.

Something barebones that's capable of running these builds and could be wrapped in a sane REST API and a simple web frontend would be very appealing.


Tracking issue is https://github.com/256lights/zb/issues/2

The hurdles to interop I see are:

- Nixpkgs is not content-addressed (yet). I made a conscious decision to support only content-addressed derivations in zb to simplify the build model and provide easier-to-understand guarantees to users. As a result, the store paths are different (/zb/store instead of /nix/store). Which leads to...

- Nix store objects have no notion of cross-store references. I'm not sure how many assumptions are made about this in the codebases, but it seems gnarly in general (e.g. how would GC work? how do you download the closure of a cross-store object?).

- In order to obtain Nixpkgs derivations, you need to run a Nix evaluator, which means you still need Nix installed. I'm not sure of a way around this, and it seems like it would be a hassle for users.
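To make the content-addressing point concrete, here's a toy Python illustration (not zb's actual hashing scheme -- the real one involves references and different encodings): the path's digest is derived from the build *output*, so two stores can agree on the object itself while still producing distinct paths, because the store directory is part of the path.

```python
import hashlib

def store_path(store_dir: str, name: str, output: bytes) -> str:
    """Toy content-addressed store path: the digest comes from a hash of
    the build output, not of the inputs or the derivation."""
    digest = hashlib.sha256(output).hexdigest()[:32]
    return f"{store_dir}/{digest}-{name}"

# The same output always yields the same basename, regardless of which
# store produced it -- but /zb/store and /nix/store paths still differ,
# which is why cross-store references get gnarly.
p1 = store_path("/zb/store", "hello-2.12", b"ELF...")
p2 = store_path("/nix/store", "hello-2.12", b"ELF...")
assert p1.split("/")[-1] == p2.split("/")[-1]
assert p1 != p2
```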

I have experienced the same friction in build infra for Nix. My hope is that by reusing the binary cache layer and introducing a JSON-RPC-based public API for running builds (already checked in, but it needs to be documented and cleaned up), the infrastructure ecosystem will become easier to build out.


I've been wondering idly if it's possible for Nix to support the Bazel Remote Execution API that seems to be catching on[1] more generally.

[1] https://github.com/bazelbuild/remote-apis?tab=readme-ov-file...


I'm very interested in better bidirectional interop between Bazel and Nix; it seems a travesty that two projects so ideologically aligned work together so poorly. Nix should be able to run builds on Bazel, and Bazel builds should decompose and cache into multiple store paths in a Nix environment (think of how poetry2nix works).


If you're attending BazelCon I'd love to have a chat with you about this stuff in some more detail. (If you're not I'd still love to have a chat!)


I'm afraid I'm not planning on it; I don't make it to the west coast nearly as often as I should. Feel free to hmu on LinkedIn or something though; I'd love to get plugged into some people interested in this stuff, and I'm about to have a block of time available when I could potentially work on it.


Why are flakes such a deal-breaker? While not ideal, you can still tag your versions in the .nix file instead of the lockfile.

I even had to avoid flakes in a system I developed that's used by ~200 developers, since it involved a non-NixOS OS and user secrets (tokens, etc.). With flakes I had to keep track of the secrets, which was a pain point, since the developers obviously couldn't push them into the git repo -- but flakes don't handle files omitted from git well (they get ignored by nix commands too). In the end, the workarounds were too messy and I had to drop flakes entirely.


As a new user, I learned flakes first, and the tie-in with git tags/branches and the corresponding cli ergonomics aren’t something I’d be able to give up.


How do you handle flakes pushing an entire copy of the repo into the nix store? Is this not an issue for you somehow?


The repo being evaluated? Not an issue for me. In the dev scenario disk is plentiful; in production it can be garbage collected out or avoided by using copy-closure type workflows or nix2container (eg just not running nix evaluations directly in the target environment).


That makes sense. I naively tried flakes for a nix-shell replacement for a couple environments, one was a node app with large node_modules dependencies and the other was a windows app I was running in a local wine root. In both cases re-evaluating the flake was very slow because of the volume of data being copied. I want to do more with flakes but I’m skeptical that they end up being a good per-app workflow when the whole app isn’t using nix for build and dependencies end-to-end.


Yes, that's fair. I wouldn't be trying to use Nix at all if I was wanting to depend on stuff outside of it— fortunately nixpkgs already has decent coverage in the scientific computing space, so most of what we needed for a ROS/robotics application was present, and the remainder wasn't too bad to just package ourselves.

That said, I think the node story overall is still pretty rough. There are several competing tools/workflows each with different tradeoffs. My frontend team did a brief evaluation and basically just noped out of it— there's so much tooling in the node world that just gets you directly from your lockfiles right to a deployable container, it's harder to see the value of a Nix closure as an intermediate build object; the case is a lot less clear for it than for, say, Python or C++ stuff.


https://github.com/edolstra/flake-compat should make flakes work with tvix


I'd like to know more about the "Support for non-determinism" and how that differs from other build systems. Usually, build systems rerun actions when at least one of the inputs has changed. Are non-deterministic targets rerun all the time?

Also, I'm curious to know if you've considered using Starlark or the build file syntax used in multiple other recent build systems (Bazel, Buck, Please, Pants).


(Hi! I recognize your name from Bazel mailing lists but I forget whether we've talked before.)

I'm mostly contrasting with Nix, which has difficulty with cache poisoning when faced with non-deterministic build steps under input-addressing (the default mode). If zb encounters a build target with multiple cached outputs for the same inputs, it rebuilds and then relies on content-addressing to obtain build outputs for subsequent steps if possible. (I have an open issue for marking a target as intentionally non-deterministic and always triggering this re-run behavior: https://github.com/256lights/zb/issues/33)
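A toy sketch of that strategy (illustrative Python, not zb's actual code): reuse a cached output only when it's unambiguous, rebuild when the cache holds conflicting outputs for the same inputs, and key downstream steps on content hashes so a rebuild that reproduces the same bytes still hits the cache.

```python
import hashlib

def build(target, inputs, cache, run_builder):
    """Toy model of rebuild-on-observed-non-determinism.
    `cache` maps (target, inputs) -> set of outputs seen so far."""
    key = (target, tuple(sorted(inputs)))
    outputs = cache.setdefault(key, set())
    if len(outputs) == 1:
        # Exactly one cached output: the step looks deterministic, reuse it.
        output = next(iter(outputs))
    else:
        # Missing, or multiple conflicting outputs (non-determinism
        # observed): rebuild rather than trust the cache.
        output = run_builder(target, inputs)
        outputs.add(output)
    # Downstream steps are keyed on the content hash of this output, so
    # identical rebuilds keep their caches warm.
    return hashlib.sha256(output).hexdigest()
```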

I'll admit I haven't done my research into how Bazel handles non-determinism, especially nowadays, so I can't remark there. I know from my Google days that even writing genrules you had to be careful about introducing non-determinism, but I forget how that failure mode plays out. If you have a good link (or don't mind giving a quick summary), I'd love to read up.

I have considered Starlark, and still might end up using it. The critical feature I wanted to bolt in from Nix was having strings carrying dependency information (see https://github.com/NixOS/nix/blob/2f678331d59451dd6f1d9512cb... for a description of the feature). In my prototyping, this was pretty simple to bolt on to Lua, but I'm not sure how disruptive that would be to Starlark. Nix configurations tend to be a bit more complex than Bazel ones, so having a more full-featured language felt more appropriate. Still exploring the design space!
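For the curious, here's a toy Python model of the strings-carrying-dependency-information idea (Nix calls these "string contexts"); the names are hypothetical, but the core is just that concatenation unions the dependency sets, so interpolating a store path into a build script records the edge automatically:

```python
class DepString(str):
    """A string that also carries the set of store objects it references."""
    def __new__(cls, text, deps=frozenset()):
        s = super().__new__(cls, text)
        s.deps = frozenset(deps)
        return s

    def __add__(self, other):
        # Concatenation unions dependency sets: using a dependency's
        # path in a command string implicitly declares the dependency.
        deps = self.deps | getattr(other, "deps", frozenset())
        return DepString(str(self) + str(other), deps)

gcc = DepString("/zb/store/abc-gcc/bin/gcc", {"/zb/store/abc-gcc"})
cmd = DepString("exec ") + gcc + DepString(" main.c")
assert str(cmd) == "exec /zb/store/abc-gcc/bin/gcc main.c"
assert cmd.deps == {"/zb/store/abc-gcc"}
```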


I mean, to be fair, Nix is nothing more than a big ass pile of genrule() calls, at the end of the day. Everything is really just genrule. Nix just makes it all work with the sandbox it puts all builds in. Bazel has an equivalent sandbox and I'm pretty sure you can sandbox genrule so it's in a nice, hermetic container. (Side note, but one of my biggest pet peeves is that Nix without the sandbox is actually fundamentally _broken_, yet we let people install it without the sandbox. I have no idea why "Install this thing in a broken way!" is even offered as an option. Ridiculous.)

The way Nix-like systems achieve hermetic sandboxing isn't so much a technical feat, in my mind. That's part of it -- sure, you need to get rid of /dev devices, and every build always has to look like it happens at /tmp/build within a mount namespace, and you need to set SOURCE_DATE_EPOCH and blah blah, stuff like that.

But it's also a social one, because with Nix you are expected to wrap arbitrary build systems and package mechanisms and "go where they are." That means you have to bludgeon every random hostile badly written thing into working inside the sandbox you designed, carve out exceptions, and write patches for things that don't -- and get them working in a deterministic way. For example, you have to change the default search paths for nearly every single tool to look inside calculated Nix store paths. That's not a technical feat; it's mostly just a huge amount of hard work to write all the abstractions, like buildRustPackage or mkDerivation. You need to patch every build system like CMake or SCons to alleviate some of their assumptions, and so on and so forth.

Bazel- and Buck-like systems do not avoid this pain, but they do pay for it in a different way. They don't "go where they are"; they expect everyone to "come to them." Culturally, Bazel users do not accept "just run Make under a sandbox" nearly as much. The idea is to write everything as a BUILD file rule, rewriting the build system from scratch, and those BUILD files should instead perform the build "natively" in a way that is designed to work hermetically. So you don't run ./configure; you actually pick an exact set of configuration options and build with that 100% of the time. Therefore, the impurities in the build are removed "by design," which makes the strict requirements on a sandbox somewhat more lenient. You still need the sandbox, but by definition your builds are much more robust anyway. So you are trading the pain of wrapping every system for the pain of integrating every system manually. They're not the same thing, but they have a lot of overlap.

So the answer is: yes, you can write impure genrules, but the vast majority of impurity is totally encapsulated in a way that forces it to be pure, just like Nix, so it's mostly just a small nit rather than truly fundamental. The real question is a matter of when you want to pay the piper.


You (plural) seem to know a great deal about build systems, so I figured I would ask - what’s your opinion about Mill? It’s a not so well known build tool written in scala, but I find its underlying primitives are absolutely on point.

For those who don’t know, its build descriptors are just Scala classes with functions. A function calling another function denotes a dependency, and that’s pretty much it. The build tool will automatically take care of parallelizing build steps and caching them.
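A toy Python model of that idea (hypothetical names; Mill itself is Scala and adds persistent caching, parallelism, etc.), just to illustrate how a plain function call doubles as a dependency edge once memoization is layered on:

```python
import functools

def target(fn):
    """A build step is just a function; wrapping it in a cache means
    each target runs at most once per build."""
    return functools.lru_cache(maxsize=None)(fn)

calls = []

@target
def compile():
    calls.append("compile")
    return "classes/"

@target
def jar():
    calls.append("jar")
    # Calling compile() *is* the dependency declaration.
    return f"app.jar <- {compile()}"

jar()
jar()  # cached: no targets re-run
assert calls == ["jar", "compile"]
```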

How do you think it relates to Nix et al. on a technical level?


As a current Nix user, what I would really like is a statically typed language to define builds. Recreating Nix without addressing that feels like a missed opportunity.


The Lua VSCode extension adds a type system that works really well IME


There are Lua flavors with typing. Teal is one I've heard of; it compiles down to regular Lua, much like TypeScript does to JavaScript.


For me the killer feature is Windows support. Ericsson is doing a great job bringing Nix to Windows, but the process is understandably slow. If this project is similar enough to Nix that I can easily translate zb derivations to Nix derivations, I'm willing to use it on Windows (it's not like Nix has Windows programs in nixpkgs anyway, so either way I have to bring those in on my own).

The problem for me is that I see no benefit in using this over the Nix language (which I kinda like a lot right now).


We're working on rattler-build (https://github.com/prefix-dev/rattler-build/) - which is a build system inspired by Apko / conda-build and uses YAML files to statically define dependencies. It works really well with pixi (our package manager) but also any other conda compatible package managers (mamba, conda).

And it has Windows support, of course. It can also be used to build your own distribution (e.g. here is one for a bunch of Rust utilities: https://github.com/wolfv/rust-forge)


> Ericsson is doing a great job bringing nix into Windows

Is this Ericsson... the corporation? Windows support for nix is something I don't hear much about, but if there is progress being made (even slowly) I'd love to know more.


John Ericson (@Ericson2314)

You can read a post on that here: https://lastlog.de/blog/libnix_roadmap.html


From the Build Systems à la Carte paper:

Topological. The topological scheduler pre-computes a linear order of tasks, which when followed, ensures the build result is correct regardless of the initial store. Given a task description and the output key, you can compute the linear order by first finding the (acyclic) graph of the key’s reachable dependencies, and then computing a topological sort. However this rules out dynamic dependencies.

Restarting. To handle dynamic dependencies we can use the following approach: build tasks in an arbitrary initial order, discovering their dependencies on the fly; whenever a task calls fetch on an out-of-date key dep, abort the task, and switch to building the dependency dep; eventually the previously aborted task is restarted and makes further progress thanks to dep now being up to date. This approach requires a way to abort tasks that have failed due to out-of-date dependencies. It is also not minimal in the sense that a task may start, do some meaningful work, and then abort.

Suspending. An alternative approach, utilised by the busy build system and Shake, is to simply build dependencies when they are requested, suspending the currently running task. By combining that with tracking the keys that have already been built, one can obtain a minimal build system with dynamic dependencies. This approach requires that a task may be started and then suspended until another task is complete. Suspending can be done with cheap green threads and blocking (the original approach of Shake) or using continuation-passing style (what Shake currently does).
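A minimal sketch of the suspending approach in Python (illustrative only; Shake uses green threads or continuation-passing, as the quote says). Each task is a function of `fetch`; calling `fetch` on an unbuilt key suspends the task (here, via plain recursion) until the dependency is built, and tracking built keys keeps the build minimal:

```python
def make_builder(tasks, store):
    """`tasks` maps key -> function(fetch); `store` holds source inputs
    and, after building, computed values."""
    built = set()

    def fetch(key):
        if key not in built and key in tasks:
            # Suspend the caller and build the dependency first.
            store[key] = tasks[key](fetch)
            built.add(key)
        return store[key]

    return fetch

# Dynamic dependencies fall out naturally: a task can decide what to
# fetch based on a value it has already fetched.
store = {"which": "b", "a": 1, "b": 2}
tasks = {"out": lambda fetch: fetch(fetch("which")) * 10}
fetch = make_builder(tasks, store)
assert fetch("out") == 20
```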


I've been using WAF for ages, so naturally I wonder how this system compares to WAF. My experience with build systems is that they all get the easy parts right. You can compile C and C++ code, and they successfully scan header files for dependencies. But FEW get the hard parts right. E.g., compiling LaTeX with multiple figures, custom fonts and classes, and multiple bib files requires correctly interfacing with pdflatex, which is a complete PITA, as it spews intermediate files everywhere and puts constraints on the current directory. Most build tools can't.

What I want in a build tool is universality. Sometimes a whole directory tree is the dependency of a target. Sometimes it's a URL, and the build tool should correctly download and cache that URL. Sometimes the prerequisite is training an ML model.


I wrote an experimental make replacement some years ago that understands that not every target is a file. eg. You can have targets be a remote URL (for an action of uploading to a fileserver).

http://git.annexia.org/?p=goals.git;a=summary

http://oirase.annexia.org/2020-02-rjones-goals-tech-talk.mp4


latexrun does a pretty reasonable job with LaTeX files, and only runs when needed, etc. Would be nice to have this integrated into a build system for plots, data generation, etc.


is it impossible to fix that issue in pdflatex?


My point is that build systems must be able to deal with tools with insanely stupid interfaces like pdflatex. Btw, WAF's strategy of dealing with pdflatex is to rerun the command "until output files stop changing". That's how dumb it is.


what is WAF?



Definitely interesting, but it's flat-out wrong about the limitations of `make`.

In particular, the `release.txt` task is trivial to handle by adding a dummy rule to generate and include dependencies; see https://www.gnu.org/software/make/manual/html_node/Remaking-... (be sure to add empty rules to handle the case of deleted dynamic dependencies). You can use hashes instead of file modification times by adding a different kind of dummy rule. The only downside is that you have to think about the performance a little.

I imagine it's possible for a project to have some kind of dynamic dependencies that GNU make can't handle, but I dare say that any such dependency tree is hard to understand for humans too, and thus should be avoided regardless. By contrast, in many other build tools it is impossible to handle some of the things that are trivial in `make`.

(if you're not using GNU make, you are the problem; do not blame `make`)


I guess you aren't keen on Java then? Complex dynamic dependency graphs aren't difficult for humans to handle, nor for many build tools other than make.


I'm not keen on Java for other reasons. The fact that a single .java file can generate multiple .class files is annoying but not something tools can't handle (it's actually similar to .h files for C/C++ - remember, we only need the complete dependency graph for the rebuild, not the current build).

The main part that's difficult for humans is if there's a non-public class at top level rather than nested (I forget all the Java-specific terminology for the various kinds of class nesting).


    > I guess you aren't keen on Java then?
Can you explain more? I don't follow.


Java dependencies are too complicated for make. See https://www.oreilly.com/library/view/managing-projects-with/...


This looks really exciting and I absolutely must give it a try. Well done! At face value the vision and design choices appear to be great.


Thank you! <3


Nice to see Windows support. We/I are working on that with upstream Nix too.

Also I hope we can keep the store layer compatible. It would be good to replace ATerm with JSON, for example. We should coordinate that!


Rad! Yes, please keep me in the loop!


Will do!


This looks awesome. I've had this same exact idea for a build system, but I haven't had the time to build it yet. Cool to see someone basically build what I had imagined!


I can't help but wonder whether the major problem is actually APIs changing from version to version of software and keeping everything compatible.

If the build language is Lua, doesn't it support top-level variables? It probably just takes a few folks manipulating top-level variables before the build steps and build logic are no longer hermetic, but instead plagued by side effects.

I think you need to build inside very effective sandboxes to stop build side effects and then you need your sandboxes to be very fast.

Anyway, nice to see attempts at more innovation in the build space.

I imagine a kind of merging between build systems, deployment systems, and running systems. Somehow a manageable sea of distributed processes running on a distributed operating system. I suspect Alan Kay thought that Smalltalk might evolve in that direction, but there are many things to solve, including billing, security, and somehow making the sea of objects comprehensible. It has the hope of everything being data-driven, aka structured, schema'd, versioned, JSON-like data rather than the horrendous mess that is Unix configuration files and system information.

There was an interesting talk on Developer Voices, perhaps related to a merger of OCaml and Erlang, that moved a little in that direction.


One request that I would make of a project like this is to support distributed builds out of the box. Like, really basic support for identical builder hosts (this is much easier now than in the past with containers) and caching of targets. Otherwise, this looks great! Big fan of the choice of Lua, though the modifications to strings might make it difficult to onboard new users depending on how the modification was made.


Yup, remote building and caching is on my radar. I expect it will work much in the same way Nix does now, although I'm being a bit more deliberate in creating an RPC layer so build coordinators and other such tools are more straightforward to build.

The string tweak is broadly transparent to users. IME with Nix, this works the way people expect (i.e., if you use a dependency variable in your build target, it adds a dependency).


Xmake?


Interesting. I feel like I would have gone with Starlark over Lua, but I guess it's good to have options.

Does it support sandboxing?


Not yet, but I've hacked up most of the Linux sandboxing: https://github.com/256lights/zb/issues/29

I want to introduce Windows sandboxing, too, but I'm not as confident about how to do that: https://github.com/256lights/zb/issues/31


Oh and as for Starlark, I went into more detail over in this thread: https://news.ycombinator.com/item?id=41596426


You need bazel if you need starlark & sandboxing


Well yeah. Starlark and sandboxing are the best things about Bazel, but it could still definitely be improved. So I'm still curious about other build systems.

I think making a new build system without sandboxing (or at least a plan for it) would be pretty stupid.

Fortunately he is planning it.


Cool that this space is getting more attention - I just came from the reproducible builds summit in Hamburg. We're working on similar low level build system tools with rattler-build and pixi. Would love to have a chat and potentially figure out if collaboration is possible.


Cool! Contact info is in my profile and on my website. :)


Great idea. Just a tip: you can wrap your Lua part in Cosmopolitan C. This way you get Lua on many architectures and OSes. Also, Cosmopolitan can be bootstrapped with TinyCC, I guess. And personally, wrapping your Lua code in https://fennel-lang.org/ would be nice.

This way, with libcosmopolitan, you could just check in a copy of your build tool in a project, to be self-sufficient. Think of it like gradlew (the Gradle bash/bat wrapper) but completely self-contained and air-gapped.

https://github.com/jart/cosmopolitan


+1 for fennel


Looks great, Nix-with-Lua that also supports Windows would be amazing. Two questions if I may

- Does this sandbox builds the way flakes do?

- What is MinGW used for on Windows? Does this rely on the MinGW userland, or is it just because it would be painful to write a full bootstrap for a windows compiler while also developing Zb?

Also, it's great to see live-bootstrap in there. I love the purity of how Guix's packages are built, and I like the idea that Zb will be that way from the start.


Nix sandboxes derivation runs on Linux even without flakes, and I'm planning on implementing that, yes: https://github.com/256lights/zb/issues/29 and https://github.com/256lights/zb/issues/31

MinGW is used to build Lua using cgo. I'd like to remove that part, see https://github.com/256lights/zb/issues/28 I haven't started the userspace for Windows yet (https://github.com/256lights/zb/issues/6), but I suspect that it will be more "download the Visual C++ compiler binary from this URL" than the Linux source bootstrap.

Yeah, I'm happy with live-bootstrap, too! I tried emulating Guix's bootstrap, but it depended a little too much on Scheme for me to use as-is. live-bootstrap has mostly worked out-of-the-box, which was a great validation test for this approach.


Thanks for answering and I really hope it works out. A Nix alternative with less friction would be very welcome!


I see the instructions discuss $(mkdir /zb) <https://github.com/256lights/zb#linux-or-macos>, and after seeing references to Nix I wanted to make sure this wasn't a hard-and-fast directory choice, since macOS has an immutable / and it causes no end of Nix stupidity on macOS.


Good point. Opened https://github.com/256lights/zb/issues/47 to track this idea.


I made a graph-based orchestrator - https://github.com/jjuliano/runner - which uses declarative YAML with preflight, postflight, and skip conditions. I think it could also be a full-fledged build system.


I'm excited by this!

Quick question: if the build graph can be dynamic (I think they call it monadic in the paper), then does it become impossible to reason about the build statically? I think this is why Bazel has a static graph and why it scales so well.


According to Build systems à la carte, "it is not possible to express dynamic dependencies in [Bazel's] user-defined build rules; however some of the pre-defined build rules require dynamic dependencies and the internal build engine can cope with them by using a restarting task scheduler, which is similar to that of Excel but does not use the calc chain." (p6)

IME import-from-derivation and similar in Nix is usually used for importing build configurations from remote repositories. Bazel has a repository rule system that is similar: https://bazel.build/extending/repo

So to answer your question: yes from the strictest possible definition, but in practice, I believe the tradeoffs are acceptable.


You should look at Nix's experimental dynamic derivations, which provide functionality entirely at the level of derivation language / store layer.


Interesting! Thanks, hadn't seen that yet. (For anyone else curious, the RFC is here: https://github.com/NixOS/rfcs/blob/master/rfcs/0092-plan-dyn...)


Buck2 can express dynamic dependencies, so it can capture dynamic compilation problems like C++ modules, OCaml/Fortran modules, etc. in "user space," without the built-in support Bazel requires. The secret to why is twofold. One, your internal build graph can be fully dynamic at the implementation level; it's a matter of how much expressivity you expose to the user in letting them leverage and control the dynamic graph. Just because you have a Monad doesn't mean you have to expose it. You can just expose an Applicative.

And actually, if you take the view that build systems are a form of staged programming, then all build systems are monadic, because the first stage is building the graph at all, and the second stage is evaluating it. Make, for example, has to parse the Makefiles, and during this phase it constructs the graph... dynamically! Based on the input source code! It is during the second phase, when rules are evaluated, that the graph is static and all edges must be known. See some notes from Neil Mitchell about that.[1]

The other key is in a system like Buck or Bazel, there are actually two graphs that are clearly defined. There is the target graph where you have abstract dependencies between things (a cxx_binary depends on a cxx_library), and there is the action graph (the command gcc must run before the ld command can run).

You cannot have dynamic nodes in the target graph. Target graph construction MUST be deterministic and "complete" in the sense that it captures all nodes. This is really important because dynamism breaks features like target determination: given a list of changed files, what changed targets need to be rebuilt? You cannot know the complete list of targets when the target graph is dynamic and evaluation can produce new nodes. That's what everyone means when they say it's "scalable": that you can detect, given only a list of input files from version control, what the difference between the two build graphs is. And then you can go build those targets exactly and skip everything else. So, if you make a small change to a monumentally sized codebase, you don't have to rebuild everything -- just a very small, locally impacted part of the whole pie.

In other words, "small changes to the code should have small changes in the resulting build." That's incremental programming in a nutshell.
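A toy sketch of target determination over a static graph (illustrative Python; real systems like Bazel do this by querying the build graph, but the reverse-reachability idea is the same):

```python
from collections import deque

def affected_targets(changed_files, srcs, deps):
    """Given changed files, walk the static target graph in reverse to
    find every target needing a rebuild. `srcs` maps target -> source
    files; `deps` maps target -> targets it depends on."""
    rdeps = {}
    for t, ds in deps.items():
        for d in ds:
            rdeps.setdefault(d, set()).add(t)
    # Targets whose own sources changed are the BFS seeds.
    dirty = {t for t, fs in srcs.items() if set(fs) & set(changed_files)}
    queue, seen = deque(dirty), set(dirty)
    while queue:
        t = queue.popleft()
        for parent in rdeps.get(t, ()):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

srcs = {"lib": ["lib.c"], "bin": ["main.c"], "other": ["other.c"]}
deps = {"bin": ["lib"], "other": []}
# Touching lib.c rebuilds lib and bin; "other" is untouched. This only
# works because every node and edge is known before any rule runs.
assert affected_targets(["lib.c"], srcs, deps) == {"lib", "bin"}
```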

OK, so there's no target graph dynamism. But you can have dynamic actions in the action graph, where the edges to those dynamic actions are well defined. For example, compiling an OCaml module first requires you to build a .m file, then read it, then run some set of commands in an order dictated by the .m file. The output is an .a file. So you always know the in/out edges for these actions, but you just don't know what order you need to run compiler commands in. That dynamic action can be captured without breaking the other stuff. There are some more notes from Neil about this.[2]

Under this interpretation, Nix also defines a static target graph in the sense that every store path/derivation is a node represented as term in the pure, lazy lambda calculus (with records). When you evaluate a Nix expression, it produces a fully closed term, and terms that are already evaluated previously (packaged and built) are shared and reused. The sharing is how "target determination" is achieved; you actually evaluate everything and anything that is shared is "free."

And under this same interpretation, the pure subset of Zb programs should, by definition, also construct a static target graph. It's not enough to just sandbox I/O; other things matter too: for example, if you construct hash tables with undefined iteration order, you might screw the pooch somewhere down the line. Or you could just make things up out of thin air, I guess. But if you restrict yourself to the pure subset of Zb programs, you should in theory be fine (and that pure subset is arguably the actually valuable, useful subset, so it's maybe fine).

[1] https://ndmitchell.com/downloads/paper-implementing_applicat...

[2] https://ndmitchell.com/downloads/slides-somewhat_dynamic_bui...


Austin, I think some of these distinctions are not necessary for the theory.

In https://github.com/NixOS/rfcs/blob/master/rfcs/0092-plan-dyn... there is only an action graph, but it is dynamic. Dynamic derivations would depend on an entire directory, and thus need to be rebuilt a lot. But when individual files are projected out, there is a new opportunity for early cut-off.


You had my interest at Windows support! I'll carve out some time this weekend to see if I can write a build for komorebi


Nice! It might be a little too rough until I've got a working C compiler for Windows: https://github.com/256lights/zb/issues/6 (and Linux for that matter: https://github.com/256lights/zb/issues/30)


Did you consider writing a nicer language that compiles to Nix? A "friendly" tool on the outside with Nix inside.


Yup, that was how I built the prototype: https://www.zombiezen.com/blog/2024/06/zb-build-system-proto...

The last commit using that approach was https://github.com/256lights/zb/tree/558c6f52b7ef915428c9af9... if you want to try it out. And actually, I haven't touched the Lua frontend much since I swapped out the backend: the .drv files it writes are the same.

The motivation behind replacing the backend was content addressability and Windows support, which have been slow to be adopted in Nix core.


I don't think Nix is that awful. While there are some tasks that are more difficult or can be a little verbose (if you want to play a lot with attribute sets/lists or string manipulation), most of the time when using Nix you'll end up just writing bash or using it as a templating language.


How do you pronounce "Zb"? Zee-bee?


Heh, I think I need to add something to the README. I've been pronouncing it as "zeeb" in my head, as in the first syllable of "Zebesian," the Space Pirate species from Metroid, but TIL that that's canonically "Zay-bay-zee-uhn," so idk.

Naming is hard.


I kinda dig zeeb! Naming is hard. Really awesome project by the way! Should have mentioned that first. Build systems are neat. I've always wanted to try building a build system, in a "learn how it works" sense, not so much "yet another build tool".


Thanks! And go for it, it's a good learning experience! It's a really interesting problem domain and there's a lot of different directions you can take it.


It’s amazing the lengths some people will go to in order to avoid scary parentheses.


It's not Guile I want to avoid, it's GNU ideologues who insist on every freedom except "use proprietary software and hardware" and shame people for doing so.


Russ? Roxy? is an outstanding developer. Keen to see how this goes.



I'd definitely write a build system in Lua, looks promising!


This looks awesome....


Happy to see someone inspired by Nix, but wanting to carve their own path. Nix popularized some powerful ideas in the Linux world, but it has a steep learning curve and a very unfriendly UI, so there is plenty of room for improvement there.

I'm not sure if Lua is the right choice, though. A declarative language seems like a better fit for reproducibility. The goal of supporting non-deterministic builds also seems to go against this. But I'm interested to know how this would work in practice. Good luck!


If you design it like SCons, it'll look imperative but behave more declaratively.

If I understand the architecture correctly, the imperative calls in the config file don't actually run the build process. They run a Builder Pattern that sets up the state machine necessary for the builds to happen. So it's a bit like LINQ in C# (but older).
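To make the builder-pattern analogy concrete, here's a hypothetical sketch (not zb's actual internals; all names invented): calls in the config file only record nodes in a graph, and nothing is built until something later forces evaluation, much like deferred execution in LINQ.

```lua
-- Hypothetical sketch of the deferred-build style; not zb's real API.
local graph = {}

-- Calling `derivation` only records a node; no build runs here.
local function derivation(spec)
  local node = { spec = spec }
  graph[#graph + 1] = node
  return node  -- a handle, usable as an input to later derivations
end

local hello = derivation { name = "hello"; builder = "/bin/sh" }
local app   = derivation { name = "app"; input = hello }

-- Only an explicit "force" (e.g. asking for an output path) would
-- walk `graph` and actually execute builders, LINQ-style.
```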

I have no idea how that plays out for single-step debugging of build problems, though. That depends on how it's implemented, and a lot of abstractions (especially frameworks) seem to forget that breakpoints are things other people want to use as well.


That's accurate (unless the config file attempts to read something from the build process, that will trigger a build).

It's a good point about debugging build problems. This is an issue I've experienced in Nix and Bazel as well. I'm not convinced that I have a great solution yet, but at least for my own debugging while using the system, I've included a `zb derivation env` command which spits out a .env file that matches the environment the builder runs under. I'd like to extend that to pop open a shell.


Surface-level feedback: get rid of the word "derivation". Surely there must be a better way to describe the underlying thing...


Agreed! It's such an alien term to describe something quite mundane. Language clarity is a big part of a friendly UI.


what word would you fit to what a nix derivation is?


I'm not sure, I'm not a Nix expert. The comments here also refer to it as both instructions to build something, as well as the intermediate build artifact. This discussion[1] on the NixOS forums explains it as a "blueprint" or "recipe". So there's clearly a lot of confusion about what it is, yet everyone understands "blueprint", "recipe", or even "intermediate build artifact" if you want to be technical.

The same is true for "flakes". It's a uniquely Nix term with no previous technical usage AFAIK.

Ideally you want to avoid using specialized terms if possible. But if you do use them, then your documentation needs to be very clear and precise, which is another thing that Nix(OS) spectacularly fumbles. Take this page[2] that's supposed to explain derivations, for example. The first sentence has a circular reference to the term, only mentioning in parentheses that it's a "build task". So why not call it that? And why not start with the definition first, before bringing up technical terms like functions and attributes? There are examples like this in many places, even setting aside the general problems of the documentation being outdated or incomplete.

Though I don't think going the other way and overloading general terms is a good idea either. For example, Homebrew likes to use terms like "tap" and "bottle" to describe technical concepts, which has a similar effect: you still have to explain what the thing actually is.

Docker is a good example of getting this right: containers, images, layers, build stages, intermediate images, etc. It uses already familiar technical terms and adopts them where they make most sense. When you additionally have excellent documentation, all these things come together for a good user experience, and become essential to a widespread adoption of the tool.

[1]: https://discourse.nixos.org/t/what-is-a-derivation/28311/6

[2]: https://nix.dev/manual/nix/2.18/language/derivations


yes, i agree, nix should be considered the bible of bad documentation. it's very bad at spotlighting the essentials and putting the non-essentials aside. it's especially surprising for derivations, because nix is really, in the end, a frontend for building derivations. everything else converges on it.

and then i go to nix.dev and derivations are presented after fetchers? no surprise it’s so confusing, even though the concept is quite simple.

a derivation is a dict that is composed of (1) a shell script and (2) environment parameters it will have access to. a nix command will read the derivation, create the environment with only these parameters and execute the script. that’s it.

everything else about nix language is about building derivations. like copying files into its store. for example, evaluating “${pkgs.hello}” will be interpolated into a path. so in your derivation, you can define an env variable “hello = ${pkgs.hello}/bin” and it will be available in your script as “$hello” and will have the value of “/nix/store/<hash>-hello/bin”. nix will do the fetching and storing for you. so you can have “command $hello” in your script. neat!

play around with evaluating the ‘derivation’ built-in function.
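Rendered as a zb-style Lua table (a sketch with illustrative names, mirroring the same shape the Nix `derivation` built-in expects), the "dict" described above is just environment parameters plus the script that will see them:

```lua
-- A derivation in miniature: env parameters plus the builder invocation.
-- (zb-style Lua; the attribute values here are illustrative.)
derivation {
  name = "greet";
  greeting = "hello world";   -- exported to the script as $greeting
  builder = "/bin/sh";
  system = "x86_64-linux";
  args = { "-c", "echo $greeting > $out" };
}
```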


What’s wrong with it? It’s a term of art that means a specific thing in both nix and guix; it’d just be confusing if zb renamed it to something else.


I'm 80% finished moving all of my servers from NixOS to Debian. I used NixOS for 3 years (even wrote some custom flakes) before finally giving up (for the final year I was just too scared to touch it, and then said "I shouldn't be scared of my OS"). I should know what "derivation" means, but I can't for the life of me remember...


I don’t know Nix, but I’ll describe how Guix works, and hopefully it will be obvious what the corresponding Nix concepts are.

A “package” is a high-level description (written in scheme) of how to build something, like: “using the GNU build system with inputs a, b, c, and configure flags x, y, z, build the source available at https://github.com/foo/bar”

The actual builder daemon doesn’t know about the GNU build system, or how to fetch things from GitHub, or how to compute nested dependencies, etc.; it is very simple. All it knows is how to build derivations, which are low-level descriptions of how to build something: “create a container that can see paths a, b, and c (which are themselves other derivations or files stored in the store and addressed by their hash), then invoke the builder script x.”

So when you ask guix to build something, it reads the package definition, finds the source and stores it in the store, generates the builder script (which is by convention usually also written in scheme, though theoretically nothing stops you from defining a package whose builder was written in some other language), computes the input derivation paths, etc., and ultimately generates a derivation which it then asks the daemon to build.

I believe in Nix, rather than scheme, packages are written in nix lang and builder scripts can be written in any language but by convention are usually bash.

So basically long story short, the package is the higher-level representation on the guix side, and the derivation is the lower-level representation on the guix-daemon side.


"Derivation" refers to the nix intermediate build artifact, a .drv file, which contains the instructions to perform the build itself. Basically a nix program compiles to a derivation file which gets run to produce the build outputs. The hash in the /nix/store for a dependency is the hash of the derivation. Conveniently if the hash is already in a build cache, you can download the cached build outputs instead of building it yourself.


Ah OK, then I'd actually never understood what a derivation is. But then again, the name "derivation" doesn't at all lead to guessing at such a definition, either.


“Build plan” would maybe be a more obvious name, but it’d still be confusing to deviate from what Nix uses, IMO.


Yeah I ended up with the same issue. While I’m technically inclined, I’m not nearly to the point where I can handle the fire hose of (badly named) abstraction at all levels like some people.

I could never have pulled off what this guy did https://roscidus.com/blog/blog/2021/03/07/qubes-lite-with-kv..., though ironically his journal is probably one of the best “how nix actually works” tutorials I’ve ever seen, even though it isn’t intended for that or complete for such a purpose. He’s the only reason I know that a derivation is basically an intermediate build object.


I'm about to start a project where I thought Nix might be useful. What do I need to watch out for? Where is it going to piss me off and send me back to Docker?


There are no guardrails. Whenever something goes wrong, you'll get weird cryptic errors in a seemingly unrelated area and have no clue how to fix it until you post to a support group to discover that you put a comma in the wrong place.

You'll spend a LOT of time fighting the system, which gets old fast. Docker may have a sucky plan format (and they STILL won't let you set your goddamn MAC address), but it's good enough for most things and not too terrible to learn.


Oh, and my personal favorite: Programming by maps.

Any time you put in a key that it doesn't recognize, it just gets ignored. So you get to spend hours/days trying to figure out why the hell your system won't do what you told it to, and won't even validate that what you put in makes sense.


It is the name of a feature in Nix. This is as obfuscated as calling a rock a rock.


Strange thing to say but you do you.

I tried to dabble in Nix several times and the term never stuck.

I suppose for you it's impossible to accept that the term is just bad and unintuitive. And other comments here say the same.


I mean, it has variable names, configurations, documentation, a file extension, lots of code, and a history behind it. So the strange thing to me is suggesting a replacement phrase as if you don't know what it is, acting like it's some high-brow term used in a blog to look smart, complaining about how this makes it less accessible (paraphrasing a little), then rolling back, saying you dabbled in Nix and acting like you know what it is.

But then, you do you.


The part you seem to deliberately miss is that what is obvious to people deeply invested in Nix is not obvious to anyone else.

I for one can't trace the train of thought that is going from "intermediate build artifact" and somehow arrives at "derivation".

I found out just enough about Nix to reject it. My take is still informed, I simply didn't buy its pitch.


I genuinely thought you knew nothing about derivations and were criticizing the blogger for writing the term in their blog, not the term standard to Nix itself. Which is just as weird to me as complaining about std::string, well why call it a string? it is obviously text!


> Which is just as weird to me as complaining about std::string, well why call it a string? it is obviously text!

It's really not, though. String is a common technical term used in programming languages for many decades. If a new language decided to call them "textrons", _that_ would be weird. And this is the exact thing Nix did with "derivations", "flakes", etc. There is no precedent for these terms in other software, so they're unfamiliar even to its core audience.

It would be different if Nix invented an entirely new branch of technology that didn't have any known precedent. But for a reproducible build system that uses a declarative language? C'mon.


No need to resort to obvious straw man arguments, you can just accept some people dislike the dev UX of Nix and move on, which is basically what me and others have been trying to say in this entire sub-thread, some much more detailed than me.

No idea why you keep digging at this, the takeaway was clear at least three comments ago.


FYI "here's what I genuinely thought" is not a straw man. Now I am genuinely sorry for ever responding to you. Say hello to others for me.


The straw man was your std::string example. It was nowhere near the same as you claimed.

Say hi to the others in your club of "I'm gonna pretend I didn't get it for no reason whatsoever" for me.


It was an example, you thought it was a bad example, and the rest were just inane accusations.


One thing I'd like to see is a 'dry run', like 'make -n'. Although maybe that's not possible in all cases.

Another possibility might be to output something like a shell script that would redo the build the same way, so you can see what it did and hack on it when debugging.


Yes. Dry runs at least, and better yet terraform-style planning that produces an artifact that can be applied. These should really be more common with all kinds of software


I would like to see more tools iterate on trying to do terraform-like output because while terraform diffs are interesting, practically most of my teammates couldn’t tell what the fuck they said and a couple times I missed important lines that caused us prod issues. I think we can do a better job than showing a wall of text.


Presentation is a separate matter though; just like with git diffs, ideally you could choose a wall of text or a side-by-side UI, see things at a high level or drill down line by line. A tag layer plus custom validation between plan/apply gives you an automatic way to short-circuit things. But none of it can work without a plan as a first-class object.

Thing is, the plan/apply split isn't even primarily for users; it's just good design. It makes testing easier, and leaves open the possibility of plugging in totally different resolution strategies without rewriting the whole core. The benefits are so big that I'd strongly prefer more software use it more often, even if I'm going to shut my eyes and auto-apply every time without glancing at the plan.
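A minimal sketch of that split (hypothetical Lua, not any particular tool's API): planning returns plain data that can be inspected, serialized, or validated before a separate apply step runs it.

```lua
-- Hypothetical plan/apply split: plan is pure data, apply executes it.
local function plan(desired, current)
  local steps = {}
  for name, want in pairs(desired) do
    if current[name] ~= want then
      steps[#steps + 1] = { action = "set", name = name, value = want }
    end
  end
  return steps  -- first-class artifact: save it, diff it, validate it
end

local function apply(state, steps)
  for _, s in ipairs(steps) do state[s.name] = s.value end
  return state
end

local current = { replicas = 1 }
local steps = plan({ replicas = 3 }, current)
-- a validation layer could inspect `steps` here before committing
apply(current, steps)
assert(current.replicas == 3)
```

Because `steps` is just a table, tests can assert on the plan itself without ever touching real infrastructure.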


> The goal of supporting non-deterministic builds also seems to go against this.

I think this is actually a great escape hatch. Supporting non-deterministic builds means more folks will be able to migrate their existing build to zb. Postel's law and all that.


Right, could be.

One of the insane things with Nix is that the suggested workflow is to manage _everything_ with it. This means that it wants to replace every package manager in existence, so you see Python, Emacs, and other dependency trees entirely replicated in Nix, as well as every possible configuration format. It's craziness... Now I don't just depend on the upstream package, I also have to wait for changes to propagate to Nix packages. And sometimes I just want to do things manually as a quick fix, instead of spending hours figuring out why the Nix implementation doesn't work.

So, yeah, having an escape hatch that allows easier integration with other ecosystems or doing things manually in some cases, would be nice to have.


I thought of creating something similar, and I was going to use a personal fork of the Go compiler with some mods, anko (which is a really cool Go binding language), or write my own DSL. It's quite the undertaking.

I like Nix and NixOS a lot, it's really cool, but it has some really odd management issues and the language IMO is horrendous. I used NixOS for around a year; I was changing my Nixpkgs version and got that same generic nonsense error with no semantic meaning, and I was just over it. I'm not too fond of commenting out random parts of code to figure out where something minor and obscure failed. Sometimes it tells you the module it had a problem with, or will point out an out-of-place comma, and other times it's just like "idk bruh ¯\_(ツ)_/¯ "failed at builtin 'seq'" is the best I can do"

the paradigm is a million dollar idea though. I have no doubt it's the future for a large portion of computing, both for programming and generic systems. I just wish it weren't such a pain to write and had some sensible error handling.


The language has grown on me a bit. I initially hated it but a lot of my pain was not actually the language but the lack of good docs for the standard library.

Still struggle with the tracebacks though. It's painful when things go wrong.


Whatever choices this project makes (I have some opinions, but I think they're not too important), I don't see it mentioning one of the most critical choices Nix made, one that was absolutely key to its insane success (at least IMO, as a hardcore contributor and user for 10+ years): the monorepo, containing all of the packages and all the libraries for use by everyone downstream, with all contributions trying to go there.

Please do not give in to the temptation to just write a version manager, stitch together some hodgepodge, and throw the hard problem over the fence to the "community" as a set of balkanized repositories expected to make everything work. It is really, really hard to overstate how much value Nixpkgs gets from going the monorepo route, and how much the project has been able to improve, adapt, and overcome thanks to it. Nixpkgs regularly pulls off major code-wide changes on an average Tuesday that other projects would balk at.

(It's actually a benefit early on to just keep everything in one repo too, because you can just... clean up all the code in one spot if you do something like make a major breaking change. Huge huge benefit!)

Finally: as a die hard Nix user, I also have been using Buck2 as a kind of thing-that-is-hermetic-cloud-based-and-supports-Windows tool, and it competes in the same space as Zb; a monorepo containing all BUILD files is incredibly important for things to work reliably and it's what I'm exploring right now and seeing if that can be viable. I'm even exploring the possibility of starting from stage0-posix as well. Good luck! There's still work to be done in this space and Nix isn't the final answer, even if I love it.


Buck2 looks very principled. Will definitely be interesting as it gets mature in the open source world.

I'm personally convinced monorepo is strictly superior (provided you have the right tooling to support it).


https://github.com/256lights/zb/blob/102795d6cb383a919dd378d...

TIL I can also use semicolons in Lua tables, not just commas:

  return derivation {
    name = "hello.txt";
    ["in"] = path "hello.txt";
    builder = "/bin/sh";
    system = "x86_64-linux";
    args = {"-c", "while read line; do echo \"$line\"; done < $in > $out"};
  }
I like using Lua as a DSL, now I like it even more! I've been using Lua as an HTML templating language that looks like this:

  DIV {
   id="id";
   class="class";
   H1 "heading";
   P [[
    Lorem ipsum dolor sit amet, consectetur adipiscing elit, 
    sed do eiusmod tempor ]] / EM incididunt / [[ ut labore et 
    dolore magna aliqua.
   ]];
   PRE ^ CODE [[ this is <code> tag inside <pre> ]];
  }
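For the curious, here's a guess at how such a tag DSL might be wired up (this is not the commenter's actual code; `tag`, `DIV`, and `H1` are invented for illustration): each tag name is a function that accepts either a string or a table, with string keys becoming attributes and the array part becoming children.

```lua
-- Sketch of a Lua HTML-tag DSL; all names here are hypothetical.
local function tag(name)
  return function(arg)
    if type(arg) == "string" then
      return string.format("<%s>%s</%s>", name, arg, name)
    end
    -- table form: string keys become attributes, array part becomes children
    local attrs, children = {}, {}
    for k, v in pairs(arg) do
      if type(k) == "string" then
        attrs[#attrs + 1] = string.format(' %s="%s"', k, v)
      end
    end
    for _, v in ipairs(arg) do children[#children + 1] = v end
    return string.format("<%s%s>%s</%s>",
      name, table.concat(attrs), table.concat(children), name)
  end
end

local DIV, H1 = tag "div", tag "h1"
print(DIV { id = "id"; H1 "heading" })
-- e.g. <div id="id"><h1>heading</h1></div>
```

The `/` and `^` combinators in the original would be metamethods (`__div`, `__pow`) on a wrapper type rather than plain functions.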



