I still don't see the light. I'll try to lay out my reservations, starting with the pain points of the docker image (out of order, to coalesce some points):
> The package manager’s database is included in the image
> Most of the files in the docker image are unrelated to my website’s functionality and are involved with the normal functioning of Linux systems
So remove it...? Docker multi-stage builds[0] make this really easy, though I still use separate containers for different tasks (building, testing, etc.), and sometimes even multiple "stages" of containers for languages that take a long time to build (Haskell).
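As a minimal sketch of the multi-stage approach (the binary name, paths, and base-image tags here are hypothetical stand-ins), only the artifact copied out of the build stage ends up in the final image -- the toolchain, package manager database, and source never ship:

```dockerfile
# Build stage: full toolchain, never shipped
FROM golang:1.21-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/server .

# Final stage: only the copied binary (plus the base image) ships
FROM alpine:3.19
COPY --from=build /out/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```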
> The package manager is included in the image
Assuming you don't want the package manager itself, why not just use a scratch[1] or distroless[2] image?
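To make the comparison concrete, a final stage can be based on scratch or distroless instead of a full distro. This sketch assumes a fully static binary produced in an earlier stage named `build` (the file names are hypothetical):

```dockerfile
# With a fully static binary, the final image contains almost nothing else
FROM scratch
COPY --from=build /out/server /server
# TLS verification still needs trust roots, even in a scratch image
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
ENTRYPOINT ["/server"]
```

Distroless (e.g. a base like `gcr.io/distroless/static-debian12`) handles details like CA certificates and a nonroot user for you, at the cost of a slightly larger image than scratch.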
Of course, it's harder than one might expect to use those options (distroless is actually fine), because "statically building" anything is something of a lie on most distros other than Alpine, which brings us to the next pain point...
> An entire copy of the C library is included in the image (even though the binary was statically linked to specifically avoid this)
Using Alpine gets you much closer (if not all the way there) to a proper static binary with musl libc. You may have to look into things like replacing the use of libnss or certain system calls with pure-Go libraries, however[3].
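For Go specifically, the usual fiddling looks roughly like this (a sketch of the relevant flags, not a complete recipe): disabling cgo sidesteps libc and libnss entirely, and the `netgo`/`osusergo` build tags force the pure-Go DNS resolver and user lookups.

```shell
# No cgo means no libc dependency; netgo/osusergo select pure-Go fallbacks
CGO_ENABLED=0 go build -tags netgo,osusergo -ldflags='-s -w' -o server .

# Sanity-check that nothing dynamic slipped through
file server   # should report "statically linked"
```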
My biggest problem with nix is that it just isn't worth the effort -- the ~5+ paragraphs of nix tools and scripts are just not at all attractive to me personally when I know roughly a day's worth of fiddling to get a proper static build is good enough most of the time. I've heard amazing things about nix, nixops, and all the tools therein, but there just isn't enough pain IMO to warrant completely changing how I do things. Never mind the fact that disks get cheaper and cheaper, networks get faster and faster, and people are starting to look into pre-emptively sharing container images across meshes of nodes with tools like Dragonfly[4].
Another point that is somewhat related -- if nix is good, guix has to be better on some level (excluding factors like ecosystem), purely because it uses a full-blown language for the config (cf. Terraform vs. Pulumi).
It doesn't seem like nix will hold enough value before unikernels "arrive"; at that point, the problem of taking your distribution along with the program you want to deploy disappears altogether.
> It doesn't seem like nix will hold enough value before unikernels "arrive", then the problem of taking your distribution along with the program you want to deploy disappears altogether.
Unikernels still don't solve the build problem, only the distribution. And you can get that workflow today, by shipping around container or VM images. And Nix can even build either for you!
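For reference, building a container image from a Nix expression looks roughly like this (a sketch using nixpkgs' `dockerTools`; the image name and the `hello` package are stand-ins for a real application):

```nix
# default.nix -- produces a Docker image tarball via `nix-build`
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildLayeredImage {
  name = "my-server";
  contents = [ pkgs.hello ];              # stand-in for the real package
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

The resulting image contains only the package's closure -- no package manager, no distro base layer.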
Correct me if I'm wrong, but they do solve the build problem -- you can build static binaries far more easily if you replace all the system-level work libc was doing for you with approaches like unikraft[0] or rumprun[1].
I don't have time to look at rumprun right now, but from the Unikraft page:
> The Unikraft build tool is in charge of compiling the application and the selected libraries together to create a binary for a specific platform and architecture (e.g., Xen on x86_64). The tool is currently inspired by Linux’s kconfig system and consists of a set of Makefiles. It allows users to select libraries, to configure them, and to warn users when library dependencies are not met. In addition, the tool can also simultaneously generate binaries for multiple platforms.
So it doesn't solve getting the makefiles (or presumably the source code, depending on how it's organized), the compiler, or assembling multiple projects into one coherent build.
Using Make as the big unifier also sounds a bit scary, since it's so easy to screw up dependency lists or introduce accidental impurities, because it has no way to verify either.
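As a contrived illustration of that failure mode (hypothetical file names), a rule that forgets a header dependency will silently serve stale objects, and Make has no mechanism to detect the omission:

```make
# BUG: server.c #includes config.h, but config.h is not listed as a
# prerequisite -- editing config.h never triggers a rebuild of server.o
server.o: server.c
	cc -c -o server.o server.c

# The correct rule lists every real input:
# server.o: server.c config.h
```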
> So it doesn't solve getting the makefiles (or presumably the source code depending on how it's organized), the compiler, or assembling multiple projects into one coherent build.
Ahh, I see what you mean -- I was thinking more about the library issue, but yeah, building is definitely still really hard.
I think good support for unikernels is a long time off, and will obviously vary by language -- maybe the usage that finally breaks through will be tight integration with some lower-level compile tools. For example, if a company were to target GraalVM (which is really trying to position itself as the better-than-stock VM for a bunch of languages) and LLVM as integration points for unikernels, I think they could make a convincingly easy toolchain (without modifying developers' current toolchains).
[0]: https://docs.docker.com/develop/develop-images/multistage-bu...
[1]: https://hub.docker.com/_/scratch/?tab=description
[2]: https://github.com/GoogleContainerTools/distroless
[3]: https://github.com/golang/go/commit/62f0127d8104d8266d9a3fb5...
[4]: https://d7y.io/en-us/docs/userguide/download_files.html