Packaging Kubernetes for Debian (lwn.net)
112 points by pabs3 on Oct 31, 2020 | 109 comments



When we implemented Kubernetes at my employer, one of the things we made clear from the outset was that there is no LTS release of Kubernetes. Keeping k8s and friends up to date is at least one full-time job (ideally a half-pizza team).

This also heavily influenced our choice of OS away from ones that emphasized LTS package management and towards more recent kernels with better BPF and kTLS support.

(We also intentionally used short-lived TLS certificates to prevent anyone from running a cluster longer than several months without updates. Worked great. There are workarounds of course but anyone aware of them is the kind of person who updates anyway.)
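For reference, a quick way to check how long a cluster has left before its serving certificate expires, with a placeholder hostname and port, is something like:

    # Print the notAfter date of the API server's serving certificate
    echo | openssl s_client -connect k8s-api.example.internal:6443 2>/dev/null \
      | openssl x509 -noout -enddate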


That's why k8s is not a good fit for early startups. It will consume too much time that would be better spent developing a product the public wants.


imho if you're an early stage startup and you don't have an operations or devops person, you're better off either running everything off a single machine or doing devops the old way (as in baking machine images, updating launch configurations, and doing service rollouts via autoscaling groups).

Once you get more revenue, hiring an operations/devops person might be a good idea. A good operations/devops person might save you their own salary in infrastructure optimizations (depending on the scale, of course).

Another good pattern I've seen in action is contracting operations out to an external company that specializes in doing this. The approach is to let the external company design and build the cluster for you, define responsibility boundaries between your company and theirs, and let them take care of daily operations and cluster upgrades, possibly with on-call availability. A small support/consulting package (say, 15-20 hours/month) can get you very far.

We've been doing something similar with SigHup (sighup.io) and their Kubernetes Fury distribution, it's been working very well so far.

(Disclaimer: I'm not employed by SigHup, just pleased to work with them.)


Agreed. That's exactly what I have done with a number of startups. In one or two weeks, they get a full CI/CD pipeline that allows them to be productive. All of them are very happy with the result.

It's so important to be deploying early on. Get the kinks out of the system and the code aligned for production.

If you hire a full-time devops person, they will most likely build too much infrastructure to keep themselves busy. And if you don't do any devops, the transition can be quite painful.


> baking machine images, updating launch configurations and doing service rollouts via autoscaling groups

I tried this approach, using Packer with Ansible to build the AMIs. It took 15-20 minutes to build an AMI. Would you suggest a different way of building the AMIs?


That was really just meant as an example: aim at something less automatic and complex than Kubernetes, but still automated.

You might also consider using containers without kubernetes. Keep the same AMI, package your app as a docker container, make VMs download and run the latest release from a private docker registry on startup. Publish a new docker image tagged as latest.

Now you can roll out your autoscaling groups without running Packer and Ansible.
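As a rough sketch, the boot-time part can be as small as this piece of EC2 user data (registry and image names are made up, and credential handling is left out):

    #!/bin/bash
    # Runs once at instance boot: pull whatever is currently tagged "latest" and start it.
    docker pull registry.example.com/myapp:latest
    docker run -d --restart=always --name myapp -p 80:8080 registry.example.com/myapp:latest

Rolling out a new version is then just pushing a new "latest" image and cycling the instances in the autoscaling group.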


In that case use a managed cluster and it’s less work.


It has about the same issues, but because it's managed and because you are a small company, nobody cares about your issues and you can do nothing about them.


zimbatm mentioned "early startups"; I take that to mean you've just started the company and you're working on getting product-market fit right. There is absolutely no need for any clusters for that, unless you're anticipating Google-level traffic on launch day.

Most companies today are better off NOT running distributed anything, as everything will get harder to do, from logging to deployments. People start with clusters when they really should start with centralized services and only go distributed when they need to.


Indeed. A single machine is often more than enough for early startups. It's easy for developers to understand and debug.

What's important is to put the tooling in place to make the deployment reproducible. Deploy using Terraform and Docker to the single VM. Avoid modifying things manually. That's what provides the agility to then scale the infrastructure into bigger shapes as needed.

That's why we developed https://github.com/numtide/terraform-provider-linuxbox , to make that use-case even easier.


You still have to do upgrades every 3 months, sometimes with breaking changes.

Because the underlying infrastructure gets replaced, that means spinning up a new cluster and deploying the application on it to make sure that everything is working, then switching the traffic over to it. If the cluster holds any state, it can be quite a bit of work to replicate all of the data.


Kubernetes clusters can be upgraded in place, though? GKE definitely allows you to do that.


It works most of the time. I have also seen multiple in-place upgrades end up in weird places, or some resource's API change so that we had to rush to update the code.

What will you do if that happens?

It's a bit like upgrading Postgres. It's a good idea to at least make a backup before changing the underlying storage.


Yes, if you’re careful and do your research. Once you start extending Kubernetes you can back yourself into corners.

We’ve also found breaking bugs in Kubernetes that prevented in-place upgrades in some cases (which we reported upstream and were patched).


Running several production clusters on GKE, I can say that it is definitely not a full-time job to keep it up to date and healthy in that setting. I would imagine it's a completely different story on-prem.


> half-pizza team

Could you define this term? I tried Googling, but it looks like there are just two references and neither is clear to me.


2 or 3 people. It's a reference to Amazon's 2-pizza teams. The idea is your team is too big if 2 large pizzas aren't enough to feed everyone.


When you buy food for the meeting, how many pizzas do you need? If it's one pizza that's a one pizza team.


It's also a specifically American reference. A 1-pizza team in most of Europe, including Italy, would be a 2, max 3 person team. Often it would be a 1 person team :-)


Same in parts of America but for very different reasons.


In Italy a 1-pizza team is absolutely a single person. We don't share pizzas :-)


Do you use managed Kubernetes or kubeadm?

I think if one uses a managed Kubernetes keeping it up-to-date is pretty easy. I did an AKS update in 10 minutes.


Neither. Both the managed offerings and kubeadm don’t meet our requirements. (Largely related to security, compliance, scale and multi-tenancy.)


What distro did you end up with?


CoreOS, then Flatcar.


Did you have a look at Fedora CoreOS?


There's another interpretation available: kubernetes is not a mature system, and that's why it's too much work to package properly.

200 library dependencies is not too many for Debian to handle, as long as they don't change very often and the work is useful for other packages. But Kubernetes' dependencies do change often, and there are not enough other Go projects in the archive to spread the work across. This is not a stable system, therefore Debian can't ship it in a stable release.


I think there’s a difference between “stable” and “evolving”. Software with a much lower release cadence than k8s works well with Debian’s distribution model, but it falls down with frequent releases and a large dependency tree. I guess you’d see that with NodeJS applications as well - I don’t think it’s practical for Debian to re-invent NPM and try to distribute and ship the large and changing dependency trees you find in those projects. However that doesn’t mean the software is not _stable_, for the typical definition of stable.


> I think there’s a difference between “stable” and “evolving”.

No, that's pretty much the Debian definition of "stable": software that doesn't need changing for multiple years. The Debian term "stable" has nothing to do with whether or not a program crashes regularly, but how often it requires maintenance. In that definition, "evolving" software indeed isn't "stable" (yet).


But by that logic no web browser can be in Debian stable.


From the article:

>Beyond Kubernetes, web browsers clearly fall into this category. Distributors have generally given up on trying to backport patches to older browser releases; they just move their users forward to new releases when they happen. The resources to do things any other way just do not exist.

Exceptions are already made for browsers, but they're browsers. They're practically essential to 99% of graphical Debian installs and don't expose the really nasty unstable bits (like V8's API surface) to the world. I doubt the Debian TC will make that exception for devops software with much less mindshare and a public API surface that is the software, at least not on the stable channel.


Debian just makes an exception for the Firefox and Chromium packages.

You can tell that is what the current maintainer was hoping for here. Instead, the previous maintainer-- who literally wrote that it would probably take two full-time devs to properly package this and maintain the package-- goes full Vogon and summons the great Debian bureaucracy to solve this with their poetry.


Note that there is no "firefox" package in Debian stable -- only "firefox-esr".


It kind of feels like the same problem OpenStack had, and failed due to.

That being said, OpenStack is technically still around, so it's not completely "dead". ;)


Also I don't get it: what's the problem with shipping a statically linked binary? Yeah, it goes a bit against the Debian way, but it will facilitate things like out-of-the-box init scripts etc.


There's no problem with shipping a static binary, except that it means that the binary must be updated specifically every time a security issue or major bug is fixed in a dependency. For a normal major Debian package, a fix to openssl or libc or libgtk-something automatically fixes that bug in the major package as well. For a static binary, Debian has to notice the change, rebuild the static binary, and ship it out.

The benefit of a distro is shared, trustable work. Increasing the workload on the distro maintainers is not great, and Debian is an all-volunteer organization, especially sensitive to that workload.


> a fix to openssl or libc [..] automatically fixes that bug in the major package as well. For a static binary, Debian has to notice the change

Because this argument is often made, I think it's worth pointing out:

Just shipping a fixed dependency like `libssl.so` isn't enough to make the fix effective on the end users' machines. You also have to restart all the running executables that link in that dependency.

As far as I'm aware, Debian does not handle that for you. So even if it ships that small, nice dependency-only package update via manual or unattended upgrades, your long-running nginx will still be vulnerable until you manually restart it.
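A rough way to spot such processes (tools like needrestart or debian-goodies' checkrestart do this more thoroughly) is to look for deleted shared objects still mapped by running processes:

    # Processes still holding shared libraries that have been replaced on disk
    sudo lsof -d DEL | grep '\.so'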


And that is actually how it's supposed to work. The last thing you want is unattended-upgrades silently restarting services behind your back (restarts which can fail).

If the upgrade fails to restart nginx properly, your customers won't be seeing the pages they need to. If the upgrade fails to start sshd, you have just lost access to the system(s) you would need to fix. Plus my personal favourite: if the upgrade-and-restart breaks your message broker, EVERYTHING is on fire.

In most non-orchestrated, non-cloud-native environments the right way for security upgrades is to have them available, preinstalled and configured, but not yet active. What you do need is monitoring to tell you these things are waiting so you can apply them as soon as feasible.

Although to be fair, once you have orchestration, robust zero-downtime rollouts and a good CI to rebuild new versions as upgrades become available... that's a different story.
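A minimal sketch of the "monitoring to tell you these things are waiting" part (the alerting side is left out, and /var/run/reboot-required is an Ubuntu-ism):

    #!/bin/sh
    # Report pending upgrades without applying anything; feed the output into monitoring.
    apt-get update -qq
    pending=$(apt-get -s dist-upgrade | grep -c '^Inst')
    [ "$pending" -gt 0 ] && echo "$(hostname): $pending package upgrades waiting"
    [ -f /var/run/reboot-required ] && echo "$(hostname): reboot required"
    exit 0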


> The last thing you want is unattended-upgrades silently restarting services

I don't buy this argument: It makes an arbitrary distinction between software in which the fix is reflected, and software where it isn't.

For example, if said upgrade breaks an on-demand job which isn't a permanently running process but invoked by your customers through nginx, your customers also won't be seeing the results they need to.

If you wanted unattended-upgrades' changes not to take effect in the running system, you might as well configure it into the mode where it only notifies you, instead of installing the new version automatically.


Needrestart handles that for you: https://tracker.debian.org/pkg/needrestart


Apt occasionally prompts to restart the ssh server, so it does have some knowledge of this.


Correct.

Furthermore, rebuilding and distributing a large number of large binaries every time a vulnerability is fixed is harmful!

- It encourages users to delay security updates. Hundreds of millions of people in the world have slow, expensive, or capped Internet connectivity.

- It makes the distribution unsuitable for many embedded/IoT/industrial/old devices with very limited storage.

- It gives the impression that the distribution is bloated.


Debian has an escape valve from its fundamental, spirit-defining policies even for non-free software. And if Debian didn't have that escape valve for non-free, I could easily make a post as general as yours listing the very real dangers of shipping blobs, which probably carry more weight than the dangers of vendoring you outline.

So, why can't there be a repo for the all the vendored things, and make a policy for the maintainers there?

Without that escape valve, the pressure lands on the maintainers-- in this example the maintainer's work is made impossible, as evidenced by the previous maintainer's statement that it would take two full-time devs to package this according to the current policies. So you encourage burnout in your volunteers.

This also encourages passive-aggressive software design criticism. Look at the very first comment here that shifts to talking about the "maturity" of the software under discussion. I'd be willing to bet I'd see similar flip judgments in the list discussion of this-- all of which completely ignore the monstrous build systems of the two browsers that Debian ships. So apparently there's an escape valve for exactly two packages, but no policy to generalize that solution for other packages that are the most complex and therefore most likely to burn out your volunteer packagers.

Keep in mind you are already on maintainer #2 for this package that still does not ship with Debian because shipping it per current policy is too burdensome. Also notice that you've got a previous maintainer on the list-- who already said this is a two-person job at least-- calling out the current maintainer for being lazy. It seems like a pretty lousy culture if the policy guidelines put policy adherence above respect for your volunteer maintainers.


> the very real dangers of shipping blobs, which probably carries more weight than the dangers of vendoring you outline.

This is a false dichotomy.

> By not having that escape valve

Please do your research before posting. Building packages with bundled dependencies is allowed, actually.

Having a handful of small files from 3rd parties bundled in a few packages is relatively harmless (if they are not security critical) and allowed.

Having 200 dependencies with hundreds of thousands of SLOC creates a significant burden for security updates.

Put security-critical code in some of the dependencies and the burden becomes big. Make the dependencies unstable and it gets worse.

Now create a similar issue for many other packages doing the same and the burden becomes huge, for the whole community.

> This also encourages passive aggressive software design criticisms.

I would call it outspoken criticism of bad software design.


Static linking means that a vulnerability in one of the libraries requires all statically linked binaries using that library to be patched.

In case of shared libraries, only the affected library package has to be patched.


Tracking security incidents. If every piece of software uses the system-wide copy of a vulnerable library, you only need to patch that library and the entire system is safe. For every embedded copy of that library, a separate patch and release must be made. That's a lot of management overhead.


You are confusing static linking with embedding library sources.


Both static linking and embedded copies have similar issues.

With static linking you have to track where the static library ended up (recursively) and then issue rebuilds in the correct order for all packages that contain the static library (directly or indirectly).

With embedded copies you have to search the source code archive for the copies and then patch and rebuild each copy.

In some ways static linking is more complicated to deal with, because a static library can end up inside some other static library.
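For the shared-library case, at least, the bookkeeping is visible with standard tools (the libssl1.1 package name matches Debian buster; adjust for your release):

    # Which shared libssl does this binary actually load?
    ldd /usr/sbin/nginx | grep libssl
    # Installed packages depending on that library, i.e. everything that picks up a fix automatically
    apt-cache rdepends --installed libssl1.1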


> There's another interpretation available: kubernetes is not a mature system, and that's why it's too much work to package properly.

If that were true, one would expect the packagers at Arch to be leveling similar complaints about the "maturity" of the software or the "awful" build system used by Kubernetes.

I'm going to rankly speculate that:

1. That is not the case.

2. Kubernetes was packaged in Arch in less time than it's taken for Debian to discuss the policy disagreement between the previous and current package maintainers, neither of which have been able to deliver a maintainable Kubernetes package in Debian yet.

3. This says more about the efficacy of Debian's packaging policies and package dev UX than it does about Kubernetes.


It's funny. I was discussing looking into packaging kubernetes with another Arch developer yesterday, and noticed this discussion today.

I'm the package maintainer for go and maintain a number of go packages in Arch Linux, along with writing the current guidelines and looking into packaging strategies for golang.

I'll offer a more cynical view than the one presented: golang is not a mature packaging ecosystem.

So 1) is partially correct; Go just sucks from a packaging perspective. It's not really an issue with Kubernetes itself.

We can probably package up kubernetes within the next hour and drop it into `[community]`, as Arch has less structure and QA around the packaging process. However, our largest hurdle is that we package the latest version. Is kubernetes going to work properly on Go `1.15.3`? In my experience, container software brings out the worst of the Go runtime, and any changes to syscalls, goroutines or memory management are a cause for concern.

The other hurdle is the cadence of when kubernetes decides to support which container runtime. Docker is wonky at best, but I haven't seen a lot of details on cri-o nor containerd.

So frankly our problems are not really related to packaging, but the challenges of providing recent versions of packages and making sure they work.


Also-- I'm willing to bet there are similar charges somewhere in a Debian discussion list wrt Virtualbox and its packaging. And I'm willing to bet that there's currently a virtualbox package in Arch and little to no discussion about how "unstable" or whatever virtualbox is.


Kubernetes binaries are meant to be statically linked. I can understand the licensing concern, but if each of those libraries was packaged separately the experience of using Kubernetes on Debian would be markedly worse.


In which way would it be worse?


The packaging nightmare is why I left Debian and Ubuntu for Arch. I now maintain multiple packages in the AUR with relative ease, and I see how easily other maintainers manage multiple packages like Kubernetes without issue, but if they were deb packages it would be a full-time job.


> The packaging nightmare is why I left Debian and Ubuntu for Arch.

I find this comment odd and highly dubious. There is absolutely nothing wrong with Debian's packaging system. At most, some individual packages could have been packaged differently, which is arguably a matter of personal taste. Normally, everything just works, and works very well.

Either you shed some additional light on your personal struggles with Debian's packaging, which you did not do at all, or I have to write off your baseless assertion as akin to someone on the internet shouting at clouds.


In my experience, once you have a .deb package, it does indeed work perfectly. However, getting to that stage, unless you're packaging a collection of plain files with very simple dependencies, is the hard part.

Ignoring the weird scaffolding you need just to package a static "hello world" binary, there are also all the dh_ scripts which you should use for your package to be "well-made".

I remember looking up how to properly package a Python application, found like 5 different ways documented on the debian wiki, couldn't get any of them to work, gave up and just shipped a whole virtualenv.


This mirrors my experience, too. Building a simple RPM isn't too bad, even if spec files are a little arcane. Packaging for the AUR isn't super complicated as far as I've seen.

The "standard" way to build even the simplest deb file seems super overcomplicated whenever I've tried to do it, with layers of complexity, shims and fudges.

For most simple stuff you just want to specify a bit of metadata, a couple of install scripts and specify "X goes in /usr/bin/X, Y goes in /usr/share/...".

I tend to find that fpm (https://fpm.readthedocs.io/en/latest/) does that for me, and I now use it for building my own deb files pretty much everywhere.
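For example (paths, names and version are illustrative), something along these lines yields a usable .deb in one command:

    # Stage the files where they should land on the target system, then let fpm wrap them up
    mkdir -p pkgroot/usr/bin
    cp myapp pkgroot/usr/bin/myapp
    fpm -s dir -t deb -n myapp -v 1.2.3 --description "My app" -C pkgroot .

The same invocation with -t rpm produces an RPM from the same staging directory.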


>>> gave up and just shipped a whole virtualenv.

That is incidentally the most reasonable way to ship a Python application. :D


Debian and Python purists will say otherwise, and there are some reasons not to do it - e.g. wanting to support multiple Python versions across multiple distributions (such as Debian, Ubuntu, and CentOS). But I agree, in my opinion it is the most reasonable way.


Not to ship, but to install using pip if you're doing it outside your system packaging. Here is the recommended approach for Arch:

https://wiki.archlinux.org/index.php/Python_package_guidelin...


> However, getting to that stage, unless you're packaging a collection of plain files with very simple dependencies, is the hard part.

What do you mean, plain files with simple dependencies? A deb file is just an archive that stores metadata plus your artifacts exactly where you want them to be, in whatever directory structure you see fit. You specify the packages you depend on in the metadata, pack it up, and that's it.

If you need something fancy you can add shell scripts that are triggered in parts of the installation lifecycle, but those are far from being mandatory.

Hell, if you prefer to take the lazy way out, you can even get build systems to generate your deb files for you. CMake handles this out-of-the-box.

> Ignoring the weird scaffolding you need just to package a static "hello world" binary

What? That "weird scaffolding" you're referring to is literally the directory you archive with the artifacts in their destination and the metadata! Is it too weird for you to deploy libraries to /usr/local/lib before you package them? What are you talking about? You build your static "hello world" binary, you place it in the destination folder, you fill in the metadata to specify dependencies and stuff like your name and email address, and you proceed to package it up. Done.

> (...) found like 5 different ways documented on the debian wiki

No you really didn't. At most you found references to 5 different tools that help you do the same exact thing.

Here's a link to a process you tried to describe as indecipherable:

https://ubuntuforums.org/showthread.php?t=910717
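For what it's worth, the bare-bones version of the process described above looks roughly like this (names and fields are illustrative):

    # Lay out the files exactly as they should appear on the installed system
    mkdir -p hello-pkg/usr/local/bin hello-pkg/DEBIAN
    cp hello hello-pkg/usr/local/bin/
    # Minimal metadata; Depends and friends go in the same control file
    printf '%s\n' \
      'Package: hello' \
      'Version: 1.0' \
      'Architecture: amd64' \
      'Maintainer: You <you@example.com>' \
      'Description: Static hello world' \
      > hello-pkg/DEBIAN/control
    dpkg-deb --build hello-pkg hello_1.0_amd64.deb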


> You build your static "hello world" binary, you place it in the destination folder, you fill in the metadata to specify dependencies and stuff like your name and email address, and you proceed to package it up.

That small "fill in the metadata" part is already half an afternoon of reading and understanding all the concepts and mandatory files involved: https://www.debian.org/doc/manuals/maint-guide/dreq.en.html ... and that's only one small part of the manual.

An Arch package, in comparison, can be a single 15-20 line PKGBUILD shell script.
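For comparison, a complete PKGBUILD for a small make-based project is about this size (an illustrative sketch, not an official package):

    # Maintainer: You <you@example.com>
    pkgname=hello
    pkgver=1.0
    pkgrel=1
    pkgdesc="Static hello world"
    arch=('x86_64')
    url="https://example.com/hello"
    license=('MIT')
    source=("https://example.com/hello-$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
      cd "hello-$pkgver"
      make
    }

    package() {
      cd "hello-$pkgver"
      install -Dm755 hello "$pkgdir/usr/bin/hello"
    }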


The scaffolding I'm referring to is the "debian" directory containing rules, controls etc..

The built-in Debian packaging tools are already wrappers on top of wrappers on top of wrappers; adding another layer on top of that (with CMake or whatever) only adds to the confusion and goes to show how over-complicated it is.

For example, the article you linked uses dpkg-deb --build while I've seen alternatives use dpkg-buildpackage and debuild.

The 5 different documented ways was specifically referring to Python packages.

There are also helpers like dh_systemd which I could never get to work so just ran systemctl commands in postinst and such.


It is a recurring theme that whenever Arch proponents mention that they don't like Debian packaging, there is a frustrated reaction to it. But I don't understand what people want to hear: there's clearly a reason why people choose Arch over Debian, Arch users say packaging is a big part of it, and it seems like nobody can accept that this is right.

I can at least tell you that people don't choose Arch over Debian for the installation experience. :)


Debian Developer here. Some people take a very cursory look at Debian and assume the packaging is an exercise in masochism that we inflict on ourselves.

And yet the number of DDs keeps increasing (and Debian is one of most successful projects).

Indeed it takes time to do packaging, and this is by design. Packagers are expected to thoroughly review the code they are packaging and smooth out various sharp corners.

Many times I've found bugs in the upstream code while packaging. Sometimes around security and privacy, often around documentation, usability or non-x86 architectures.

Every time I check if other distributions opened bugs or applied patches for the same issues. It almost never happens.

This is why I might spend 2 hours on a package instead of 10 minutes.


It isn't a cursory look. I worked at Canonical and used to do lots of Debian packaging. My packages on Arch aren't low-quality. Yet the packaging process was a lot simpler and a lot more fun on Arch. Plus my package and its dependencies are actually based on the current stable upstream packages, not a 4-year-old package patched 10 times to maintain compatibility with a 3-year-old kernel.


> I can at least tell you that people don't choose Arch over Debian for the installation experience

Well, I do (sample of one). The Arch installation has been streamlined a lot. Now it’s all about

0) boot the image (cd, usb, PXE), which is actually an Arch install

a) creating your filesystem (pick your poison), and mounting it

b) telling pacman to install base, base-devel, and a bootloader on the target fs

c) installing and configuring the bootloader

d) rebooting

Done.
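Spelled out as commands (a BIOS boot and /dev/sda are assumed; pacstrap is the wrapper around pacman used for step b):

    # From the booted Arch live image:
    mkfs.ext4 /dev/sda1 && mount /dev/sda1 /mnt          # a) filesystem of your choice
    pacstrap /mnt base base-devel linux grub             # b) base system onto the target fs
    genfstab -U /mnt >> /mnt/etc/fstab
    arch-chroot /mnt grub-install /dev/sda               # c) install and configure the bootloader
    arch-chroot /mnt grub-mkconfig -o /boot/grub/grub.cfg
    reboot                                               # d)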

d-i barely takes care of more than that for you; it's "just" wrapping it behind a UI (which is sort of useful and sure saves one from reading docs, but has been an annoying abstraction/obfuscation/magic layer for me more often than not).

The remainder (setting up X/Wayland/whatever) is no different on Debian than on Arch, as d-i does not help much there.


Debian supports the same install workflow via debootstrap. You don't have to use d-i.
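In the same spirit as the steps above, roughly:

    # Bootstrap a minimal Debian stable into an already-mounted target filesystem
    debootstrap stable /mnt http://deb.debian.org/debian
    # then chroot in, install a kernel and a bootloader, and reboot
    chroot /mnt /bin/bash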


The parent was a bit aggressive, but I think asking why the first poster likes Arch packages better than Debian ones is a fair question, since that's the interesting part.


Debian packaging is so arcane that I use Arch PKGBUILDs to build the filesystem tree and one of the deb tools that builds a package from a file tree.


I'm a DD, and I can see his point of view. The Debian build system started from the simplest thing that would work and grew organically from there. Building in the same area as the source, for example, and then having to clean the mess up, as opposed to just building in a clean temporary directory like rpm did from day 1. We only got clean separation of Debian patches from the upstream sources a while ago. Kinda. Sometimes Debian has to repackage the upstream sources due to copyright problems. uscan is hardly a nice solution syntax-wise, nor is it policy, so there doesn't have to be a fully scripted way of going from upstream source to Debian package. And there is the dh sequencer, which splits the configuration into a zillion tiny little files. It goes on.

Like any organic solution it works, and somewhat impressively works for all the use cases created by debian's 60k odd packages. However, pretty or simple it isn't. Learning the interactions of the various dh_ helpers with the build systems they try to automate, the "guesses" (term they use in the manual) they make and when they are applicable is a huge job. The learning hill is consequently steep, and I would say unpleasant.

However, I don't want to undersell the end result. The QA checks done by the FTP masters, lintian, the discipline imposed by testing and the long transition from testing to stable mean the end result is very, very good. Better than I've seen in any other distro, including the .rpm ones I used to use. From a sysadmin point of view, that trumps everything. From what I can tell, Debian could be described as a bunch of programmers beavering away for free to produce something that none of them could produce on their own - a Linux distribution so good they base their day jobs on it.


I'm not sure this is a fair comparison. The AUR has next to no rules. You could just as easily maintain a PPA. As described in the article, the package is available via Debian testing.


I encourage everyone to go evaluate building and submitting a package using the AUR vs Launchpad (PPA). I worked at Canonical and would never go back to Dpkg. I recommend users try it out themselves and see how much easier and cleaner the PKGBUILD approach is.

AUR does have a process to rate, flag and comment on packages. It is also much easier to review the installation script using PKGBUILD. There are a lot of high quality packages in the AUR but you should review everything you install unofficially, just like installing packages from a PPA.


Arch/Manjaro is often praised as simpler, more powerful, etc., but there are also complaints about instability. Mint has several layers of QA in its chain: upstream, Debian, Ubuntu, Mint. And Mint sanitizes the corporate influence, e.g. snap being off by default. I don't need bleeding edge, just a stable desktop. So I've found my optimum in Mint, though nothing is perfect.

Vendoring is shifting focus back to individual projects, though. K8s could almost replace an OS in many regards. It is made to be modular, but only within its own framework. So it is changing the way software is run, similarly to initd and systemd. Such systems, as we've seen, tend to "take over" rather than being made to be part of an OS and integrate well from inside it. Vendoring is a really old practice, but it also makes testing and support simpler for upstream. We all know the dynamic approach tends to be slower-moving, or associated with several circles of dependency hell.

It's just different end goals for software development and deployment that affect how software is distributed. Kind of similar to the touted goals of JEE of Java fame, which is also too monolithic to integrate well within complex systems.


Arch doesn’t have to support multiple architectures and releases.

Also, Arch usually puts all files into a single binary package while Debian splits arch-dependent and arch-independent files into separate packages. Also, Debian separates packages into runtime and development libraries, another thing Arch doesn’t do either.


This is incorrect


How is the previous statement incorrect?


> Also, Debian separates packages into runtime and development libraries, another thing Arch doesn’t do either.

Arch definitely has separate packages for runtime and development libraries. It doesn't have as many, but they exist and can be found by simply searching for `-dev`.

> Arch usually puts all files into a single binary package while Debian splits arch-dependent and arch-independent files into separate packages

Arch has `any` and then architecture specific packages:

https://wiki.archlinux.org/index.php/Arch_package_guidelines...

> Arch doesn’t have to support multiple architectures and releases

https://wiki.archlinux.org/index.php/32-bit_package_guidelin...

There are also distros of Arch for ARM and 32-bit; however, if you're looking for a more integrated multi-arch, PKGBUILD-comparable distro, then Alpine is it, and probably what I'll be migrating to at some point.


> Arch definitely has separate packages for runtime and development libraries. It doesn't have as many, but they exist and can be found by simply searching for `-dev`.

They are an exception to the rule, where the benefits outweigh the negatives. It's been done to ensure we have smaller container images, or where the maintainer thinks it makes sense. But as a rule, we do not care, while Debian does.

>Arch has `any` and then architecture specific packages:

How is the `any` arch related to package splitting?

>https://wiki.archlinux.org/index.php/32-bit_package_guidelin...

> There are also distros of Arch for ARM and 32bit, however if you're looking for a more integrated multi-arch PKGBUILD-comparable distro then Alpine is it and probably what I'll be migrating to at some point.

Those package guidelines exist, but are dated and not used by us, the packagers.

Arch Linux ARM and the 32-bit port are also separate distributions that aren't affiliated with the 64-bit Arch Linux distribution.


The packaging nightmare also means that when you stand up a system, it won't need upgrades for a decade.

Leave an Arch system without an upgrade for a month and you're playing Russian roulette.


It will need upgrades over that decade; you're just hoping someone remembers how to apply them 9 years after they're in widespread adoption.


No, it will need security updates, which Debian stable currently provides for seven years.


People say this all the time but it isn’t true. Occasionally a package will require manual intervention to update properly. The manual intervention steps are generally pretty easy and documented on the Arch homepage, so if you haven’t updated in a while and are concerned, all you really have to do is check the homepage. But I don’t even do that. I just pacman -Syu and check if it failed. For the past few years this has worked out just fine. Most of the packages, I don’t even have installed, so I hardly ever hit a manual intervention step.

Of course, Debian doesn’t require manual intervention for most updates, but it also isn’t rolling release. If you want packages that are actually recent you have to sit on Sid, which is a lot less stable than Arch. And if you have a Debian machine that’s a couple years old and you want to upgrade it to the latest Debian version? Good luck. Some of that isn’t Debian’s fault, but whereas on Arch major breaking package changes are rare but irregular, on Debian they tend to hit you simultaneously in one major upgrade. For home machines, I definitely prefer an occasional manual intervention and continuous fixes over major breakages every couple years where I often just give up and reinstall from scratch.

That said I use Arch a lot less nowadays as I’ve moved onto NixOS. Nix is clearly headed somewhere new, though it remains to be seen if the complexity of the approach is actually maintainable.

If I want to make an idiomatic Arch package, it’s usually easy: PKGBUILDs are intuitive and simple. Nix is much the same for most stuff, especially since most of the boilerplate for various build systems has been automated; though for a program as complex as Kubernetes it’s a nightmare, to be sure. For Dpkg though, it seems like so much complexity, and I don’t think I ever really had a good experience.

And honestly, I have no idea how to make an RPM package anymore.


> And if you have a Debian machine that’s a couple years old and you want to upgrade it to the latest Debian version? Good luck.

To counter that with my anecdata, my Debian installations usually outlast the hardware they've been built on. I've found that it's much easier to start new builds with a backup (or the original disk) of the machine they're replacing; that's how easy it is to maintain a Debian system for me.

Heck, my first Linux desktop started out as Ubuntu Breezy, and ended its life as Debian Squeeze. Its replacement started out as Debian Wheezy, and is now on Bullseye.


I'm typing on a machine that started life as Debian Etch, and has moved from a desktop to a laptop and back, gotten cloned, and is now three machines. It still doesn't use systemd.


> If you want packages that are actually recent you have to sit on Sid

There are other options. For example, you can sit on stable and get specific packages from unstable, so long as they don't have dependency conflicts with the rest of your system. It requires a little more thought than just running with the defaults, but the docs are pretty clear and detailed (man apt_preferences), and it's mostly a one-time operation. I do this with docker, the kernel, and a maybe a couple other things.
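A sketch of that setup (priority values are a matter of taste):

    # /etc/apt/sources.list.d/unstable.list would contain:
    #   deb http://deb.debian.org/debian unstable main
    # /etc/apt/preferences.d/99-unstable keeps unstable de-prioritized by default:
    #   Package: *
    #   Pin: release a=unstable
    #   Pin-Priority: 100
    # Individual packages are then pulled in explicitly:
    apt-get install -t unstable docker.io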

> And if you have a Debian machine that’s a couple years old and you want to upgrade it to the latest Debian version? Good luck.

I guess I'm among the others here who have had good luck, then. The first counterexample that comes to mind is a NAS appliance that is now around ten years old. I replaced its stock OS with debian stable (Squeeze, maybe?) and ran it that way until a year or so past the next stable release, then upgraded. It went fine. I did the same thing repeatedly over the years, until finally decommissioning it last month. It now runs Buster, so that's at least four major version upgrades done past their expiration dates. None of them gave me any trouble. The only thing wrong with it is that my needs have outgrown the hardware.


I’ve successfully upgraded two or three version out of date Debian machines: it isn’t usually a huge deal, unless you use software that is careless about backwards compatibility.


Say what you will about Gentoo, but I've actually really liked making some small packages for personal use. It's the closest system I've seen to just writing down the steps you'd normally take to compile the software manually.


That's how Arch Linux packages work: just a shell file with some metadata.


> There are other packages in Debian with vendored dependencies, though none, he acknowledged, have anywhere near the 200 found in his Kubernetes package.

How many are vendored in Chromium/Firefox?


What's the driver for people who actually want an OS-provided k8s? What holds them back from using upstream releases instead? (Or, given a large enough team, building/patching it themselves.) I guess this also applies to Ubuntu/OpenStack to a lesser extent.

If you're planning to run those as a platform, it sounds like more pain than necessary.


I prefer to use Debian packages because they conform to conventions that make managing systems easier: predictable naming, integration with service management, logging, docs, manpages.

(I use NixOS for personal use.)


I worked on one of the large commercial projects that offers OS-packaged k8s.

The customers were screaming for it. They want a simple and reliable way to deploy k8s without a team of specialists, and the guarantee of receiving security updates for years without having to upgrade to a new version.


If you use microk8s from Ubuntu, you know that there is a team behind it who takes care of updating it, upgrading it, and making it simple for you to do so.

You also know that your issue with that setup might get noticed by someone else using the same thing, who will fix it for you.

You get sane defaults and you get k8s optimized for the OS.

I run microk8s at home. It's simple, fast, easy, and it works.

Not seeing any issue with it.
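For anyone curious, the whole setup really is a couple of commands (you may need to add your user to the microk8s group afterwards):

    sudo snap install microk8s --classic
    microk8s status --wait-ready     # wait until the node reports ready
    microk8s kubectl get nodes       # bundled kubectl, no separate install needed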


For many it could just be a convenience, assuming the distro way just works. With conflicting goals and approaches, you might end up worse off though. Ideally, you mostly want to spend time and effort on higher pursuits than managing or building your own platform.


> Ideally, you mostly want to spend time and effort on higher pursuits than managing or building your own platform.

I've run OpenStack for a while and... realistically, if you run a platform, you need to have the people/time to manage it. I understand from conversations about k8s that it is not that much different.


Thinking more about the dev end and one-man shops than enterprise production. With CI/CD, the landscape is shifting once more, though.


The decision to share libraries between many apps has some benefit, especially on desktops. It also has costs. It's a decision you can make.

The article uses so many convoluted phrasings and arguments to avoid saying that. A bureaucracy of *.deb.


Avoid saying what? That it has pros and cons? What a shocking revelation you made there!

It's implied in a few paragraphs. It's not that hard to infer if you take off your hat. Those who read Corbet often know that a thing having pros and cons is implied in his reasoning.


> Those who read Corbet often know that a thing having pros and cons is implied in his reasoning.

His skill is that the warts are laid bare, but not unkindly, and often with humour.


Does anyone know of any packaging efforts to run microk8s without snap? I love microk8s, but running snap just for that one package feels like overkill.


Naive question: I package small and midsize Go projects as deb packages all the time and find it relatively easy. All you need is the binaries and some config. Cross-compilation is easy with Go, so what exactly is the problem with k8s? Isn't it just a bunch of executables for kubectl, kubelet and the api-server?


The problem is building and packaging (and versioning) all the dependencies. Debian has a reasonable policy of not allowing bundling, which conflicts with the usual Go development model.


kubeadm kubelet kubectl + socat and a few more standard utils. Where are the 200+ dependencies they mention?


Build-time dependencies. Normally the distribution would put every library from the dependency tree into its own package and build it, then install all those dependencies on the build host and produce the Kubernetes package.
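A rough illustration of the difference (the golang-*-dev names are examples of Debian's Go library packaging convention, not the actual Kubernetes build-dependency list):

    # Upstream-style build: the Go dependencies are vendored in the source tree, so this just works
    make
    # Debian's preferred approach: each Go library is its own package, installed on the build host first
    apt-get install golang-github-spf13-cobra-dev golang-github-spf13-pflag-dev   # ...and so on, ~200 times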


Good to know, thanks for the clarification. I can see now I oversimplified the packaging process.


Lots of people present their observations and opinions, but this problem is not hard, and the technically best solution is obvious: Debian curbs its ambitions and just distributes the damn thing, with all the vendored stuff, as upstream intended.

Some Debian people don't like it. Users don't care, as long as they can apt install it.


Debian users care. Otherwise they would use some other distro.

The good thing about distro fragmentation is precisely that. If Debian curbs its ambitions then it is no longer Debian, and those who prefer the Debian way over Ubuntu or Fedora or Arch would get screwed in the process.

It’s fine to not like it and not use it, but I do like projects that keep their ways.


Well, some may care, I agree with that. But Debian already changed its ways in the controversial shift to systemd that left many users disappointed - pragmatism and modernisation prevailed over the traditional values of composability, stability and predictability. Why shouldn't this happen for packaging policy as well? The Debian way is hitting its limits and is ripe for some updates. The amount of work needed to maintain foreign, hard-to-comprehend software has to be curbed to keep it manageable in the future. If enough people say "yeah, let's package it as is and be done with it", why shouldn't Debian go that way? It is natural to change and adapt. Big software with a fast release cadence is only going to get more complex and harder to package with each release.




