Show HN: A Build System for Packaging Applications in LXC Containers (flockport.com)
105 points by tobbyb on July 7, 2018 | 72 comments



Rudimentary info for newbies: LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers. It has more capability than a chroot environment but less than a full virtual machine environment.


We have a container basics article [1] that provides a quick overview of Linux containers including differences between LXC and Docker containers.

Linux containers are made possible by the addition of Linux namespaces to the kernel in the 2.6 series. A namespace gives a process an isolated view of some system resource. There are six main namespaces (mount, PID, network, UTS, IPC and user), and container managers basically launch the container's process in a new set of namespaces.
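You can see this without any container manager at all, e.g. with util-linux's unshare (a rough sketch; the flags assume a reasonably recent util-linux):

  # launch a shell in new PID, mount, network, UTS and IPC namespaces
  sudo unshare --pid --fork --mount-proc --mount --net --uts --ipc /bin/bash
  ps aux     # shows only this shell and ps itself
  ip link    # shows only a lone loopback device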

LXC is a userland container manager in development since 2008. Docker was initially based on LXC when it launched in 2013, and later developed its own container runtime in Go.

LXC launches an OS init in the namespace, so you get a standard multi-process OS environment like a VM. Docker launches the application process directly, so you get a single-process container. Docker also uses layers to build containers and has ephemeral storage.

So LXC containers behave more or less like lightweight VMs. Docker is doing a few more things that need to be understood.

[1] https://www.flockport.com/guides/container-basics


The salient point: both LXC and Docker, and any other "container" solution, use the same Linux kernel features to implement containment: chroot and filesystem mounting to build the container's view of the filesystem; namespaces to build the container's view of uids/gids, processes, and other resources around it; and virtual network interfaces plus packet filtering to produce the container's view of the networking environment.

On top of this, Docker and LXC offer different ways to build, run, and orchestrate containers. So do other container engines, such as rkt.


Add cgroups to your list of kernel features, for resource metering and limiting, and device access control.
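For instance, with the v1 memory controller you can cap a process tree without any container tooling (a minimal sketch; assumes the controller is mounted at the usual /sys/fs/cgroup/memory):

  mkdir /sys/fs/cgroup/memory/demo
  echo $((256*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
  echo $$ > /sys/fs/cgroup/memory/demo/cgroup.procs    # this shell and its children are now capped at 256 MB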


Great. I run everything on LXC and this is a step in the right direction. LXC is so much better to work with than Docker.


Same here. Docker is a collection of anti-patterns while LXC is pretty lean, and we have used it in production for a long time without issues.


A screwdriver seems like an anti-pattern when you are trying to hammer in a nail with it. Because of the buzz around it, Docker suffers from people trying to use it to solve problems it wasn't intended to solve.

I use both Docker and LXC but for very different use cases. I find both to be great tools when used to solve the problems they were intended for.


Can you elaborate on the different use cases? I've seen a bunch of LXC/Docker comparisons, but they focused on features.


LXC is a system container; Docker is an application container.

Use LXC in place of a VM, where you might want to log in or even have others log in. It has the same problem as a regular system: snowflakes. Changes can be made that cause an application to behave differently across multiple deployments or when you have to rebuild.

Docker is for having a consistent application environment so your app behaves exactly the same every time it's deployed.

I use LXC and did so before Docker. It took a while for me to accept Docker; it takes understanding the difference.


This does not make sense. There's no such thing as a system container. You can absolutely leverage LXC instead of Docker if you want to.

The major difference is that Docker gives you one process per container and has 1) nailed down a container definition format and 2) the registry for images.

So Docker makes the whole container thing more accessible, but in the end both rely on the same kernel features plus overlay filesystems.
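The definition format is just a small Dockerfile; a minimal, hypothetical example (image tag and app name are made up):

  FROM debian:stretch-slim
  COPY myapp /usr/local/bin/myapp
  CMD ["myapp"]

and the registry side is just docker push/pull against a hosted or self-hosted registry.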


A “system” container here means a container running an init, so it can be multi-process and operate like a lightweight VM.


The fact that there are so many base images including an init system, and that even Docker now includes a small init you can activate with a command-line switch, might give an indication that many people are using Docker for "system containers".
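That switch is --init, which runs a tiny init as PID 1 to reap zombies and forward signals, e.g.:

  docker run --init -d some-image    # 'some-image' is a placeholder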


If you're looking at the kernel features used (namespaces, cgroups, etc.), containers are multi-process (even with Docker you can attach into the container and look at things).

This may be semantics, but the first process in the container is an "init" regardless of whether it's a proper init or just a process.

As for lightweight VMs: containers are supposed to be lightweight VMs (but defining "lightweight" can be challenging).


Docker is designed to be an 'application' container - one process per container.

LXC is designed to be a 'system' container - an entire OS, except the kernel, per container.

People misuse tools all the time - that doesn't change their original design or intended use.


Let's agree to disagree on this one. I am not sure that system and application containers are or should be different, but it's okay if we have different opinions.


https://linuxcontainers.org

You might disagree, but the distinction is meaningful within the community.


Without giving us a hint of what those anti-patterns are, your comment is quite useless. There seems to be tacit agreement among most participants of this discussion that both Docker and LXC have their place, for different use cases. You seem to be saying something different. Could you elucidate?


Can you give examples as to why it’s better vs Docker?


What's "better" is too ill-defined to have an opinion on, but I can say from experience that I've had to hunt more than a few intermittent problems not only in Docker but in Linux itself due to the bizarre ways Docker tries to reinvent the world, while LXC has been mostly solid even under load.


LXC is more natural if you expect docker to behave like a VM.


Yeah, this is the key thing. People think Docker is the only way to run a container and they do all kinds of silly hacks to try to keep Docker containers alive and to get them to behave like normal VMs. There's no reason for that: you can use LXC, or better, illumos zones or BSD jails.

In the real world, Docker's limited-lifetime execution paradigm is the niche requirement. Everyone else just wanted lightweight VMs.
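And plain LXC really is just a few commands to get a lightweight-VM-style container (assuming the LXC tools and the download template are installed):

  lxc-create -n web -t download -- -d debian -r stretch -a amd64
  lxc-start -n web
  lxc-attach -n web    # a full multi-process OS environment, minus the kernel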


Everyone’s “real world” is different. Docker’s model works great for distributing heterogeneous tools. At work, we have teams shipping python, ruby, and nodejs CLI programs inside Docker wrappers. Greatly reduces packaging frustration on end-user systems.

I run most of my home services in jails, but I am eager to rebuild them as Docker containers, because I'd rather have a single init on the host system run several containerized processes than my current setup, which is a tree of inits that makes monitoring more difficult than a single `sv status /service/*`.


CoreOS rkt also looks like a good competitor to Docker.


Weren't early versions of Docker based on LXC? Not that it matters for the point you are making, but it's just interesting that Docker decided to drop LXC, and I'm glad LXC was able to survive (assuming the Docker folks contributed meaningfully to LXC when they used it...)


Not sure about all the hate for Docker. It works and has a good ecosystem.

The DB command in the first approach seems like something Docker got rid of a long time back when they deprecated the --link stuff. Just create a network and attach containers to it, and then you get DNS for free.
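i.e. something like this (names are placeholders):

  docker network create appnet
  docker run -d --name db --network appnet postgres
  docker run -d --name web --network appnet mywebapp    # 'web' resolves the database as plain 'db'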


Flockport uses a standard networking bridge served by a Dnsmasq instance that all containers connect to. They all get their IPs by DHCP unless you set static IPs, and they can be discovered by name on local systems.

The DB command basically rolls out a fresh MySQL or PostgreSQL container instance and sets up the databases. The discovery happens over DNS.


Positive: it's LXC-based.

Negative: it seems to focus a lot on the “I got hello world running in x minutes”. A dedicated keyword for “database”? What crazy logic is that??


Hi, the build system is quite flexible. It's used to build all the open source apps currently available in the app store.

That DB keyword basically allows you to roll out a linked database container for your app if required. Only MySQL and PostgreSQL are currently supported.

A lot of apps require databases, and instead of configuring one manually, this allows some degree of automation: a linked database container can be easily deployed if required.


I would have thought a more generic "requires" or "depends" property (ideally multi-value) would be more useful, no?


From the docs (wtf?):

> Please disable Selinux or any firewalls before configuring containers, networks, storage and cluster services. They can interfere in unpredictable ways. Once configured services are working you can add the relevant exceptions and enable them again.


Unfortunately SELinux can interfere with processes in weird ways without clear messages to end users. When a container starts, networking devices are created; if you're using layers or btrfs/zfs, overlays or snapshots may be created; bind mounts are activated. There is a lot of potential for permission issues.

Similarly, when creating overlay networks, ports across systems need to be open. The idea behind this is that users can ensure the functionality is working as desired before enabling firewalls and other security features, so they can debug issues effectively.

We have tried to provide a lot of documentation so new users can get started and get comfortable with containers and networking. Users often get discouraged if they run into issues even after following the docs.


Makes sense...


There are different layers of security. Presumably, when your hardware is provisioning a new operating system, it is protected by a Layer 3 firewall.

But, yeah - people tend to only skim the manuals.


I wrote something similar, but much more barebones: https://github.com/kstenerud/virtual-builders

It only offers a deterministic build and install system. The rest is pure LXC.


You say ‘Builds somewhat opinionated virtual environments using KVM and LXC/LXD’ ... what do you mean when you say ‘an opinionated environment’ ?


Opinionated, in general, means that one person or team made as many decisions as possible up front and built the system to use those decisions rather than requiring the end user to configure things themselves. It's great when you agree with the person(s) making the decisions because you don't have to configure it yourself, and it's terrible when you disagree with the decisions that someone else made for you and set in stone.


Opinionated - His configuration preferences to achieve a certain goal.


On the home page it says "Load-balancing and ha". Maybe consider capitalizing HA so it is more obvious it is an acronym. Took me a second. Just a thought.


Thanks for that, that's a mistake, will get it corrected asap.


Loadbalancing and hahaha


Why doesn't flockport use layers? I would think it beneficial on many fronts:

1. If a user's machine already has some of the layers from other images, there is no need to download them

2. Updating an app becomes an actual update to just the app's own layer, reusing the underlying infrastructure layers

3. Layering enables hosting commonly used layers on faster CDNs, making downloads faster.

I hope layering is on your roadmap.


Flockport supports LXC, so both aufs and overlayfs are available for use but not enforced. Flockport lets you launch containers in a layer, so layers are used at run time if required, but not to build containers.

Layers are interesting, but they are still maturing and have hard-to-detect bugs and incompatibilities [1]. The more layers, the worse it becomes, so they can add management overhead. Containers and layers are separate technologies, so it's useful to leave layers as a choice.

The benefits have also often been oversold. Take reuse, for instance: all container platforms provide a library of base OS images, which can simply be used as required instead of layers. How many upper layers on top of the base OS are actually going to be reused? It sounds good as an idea but often does not pan out; usually it's just the base OS, or the base OS plus a dev environment, being reused, so why use layers?

And if there are updates to any of the lower layers, for instance security or version updates, the container usually needs to be rebuilt, so again you are not benefiting from using layers to build containers.

Using layers at run time, like running a copy of a container to keep the original intact, still makes sense, but using them to build containers adds a lot of complexity and management overhead.

[1] https://www.flockport.com/guides/understanding-layers


> It sounds good as an idea but often does not pan out; usually it's just the base OS, or the base OS plus a dev environment, being reused, so why use layers?

When the build process is predictable or plannable in advance, this can turn out well. I'm familiar with buildpacks in this respect -- the basic order of operations and layout of the filesystem is the same for all software that passes through a buildpack.

> And if there are updates to any of the lower layers, for instance security or version updates, the container usually needs to be rebuilt, so again you are not benefiting from using layers to build containers.

Layer rebasing will change this pretty dramatically. From "I need to rebuild and roll all my apps" to rebasing the image on new layers in seconds and rolling them out across a fleet in seconds to minutes.


While those are true in some cases, in others layers slow down the build process and increase the total download size. Whether layers have an advantage, make no difference, or are at a disadvantage really depends on what is being built and how it is distributed.


I wish the industry had settled on binary diffs (xdelta3 works well) over squashfs for distribution and storage instead of this overwrought layering paradigm.


But then you have to unpack it, don't you? Like, diffs are great for downloading images, but once you've downloaded and want to run it, you still have to construct the final filesystem.


SquashFS is mountable directly as read-only. You can use Overlay just like Docker does to create a read-write layer on top. This combination of SquashFS+Overlay -- plus Xdelta3 for distribution -- has worked extremely well for the internal project I use it for.
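Roughly, the moving parts look like this (a sketch; filenames are illustrative):

  # distribute updates as binary diffs and reconstruct the new image
  xdelta3 -e -s app-v1.sqsh app-v2.sqsh v1-to-v2.xd3    # producer side
  xdelta3 -d -s app-v1.sqsh v1-to-v2.xd3 app-v2.sqsh    # consumer side

  # mount the image read-only, with a writable overlay on top
  mount -o loop -t squashfs app-v2.sqsh /mnt/ro
  mount -t overlay overlay -o lowerdir=/mnt/ro,upperdir=/mnt/rw,workdir=/mnt/work /mnt/app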


I only recently discovered machine containers i.e. LXD. The Try It section of their site is excellent to get a quick understanding. https://linuxcontainers.org/lxd/try-it/


I'm just learning about LXC containers because LXC containers are part of the Chromebook Linux breakthrough.


What does this offer beyond what LXD currently provides?


An app store, provisioning servers, overlay networks with VXLAN, BGP and WireGuard, distributed storage, service discovery, and a build and packaging system.

We have tried to provide a lot of documentation so do visit if you want to learn more.

LXD is excellent and is by the authors of LXC. A lot of users may not need a lot of the functionality Flockport provides.


“Container builds simply automate the process of installing and configuring an application in a container that you would do manually. It is a set of instructions to install and configure the application.”

I'm constantly amazed by the lengths people will go to in order to avoid mastering OS packaging. Coming up with these elaborate schemes - that makes no sense to me.


I'm at least as amazed that you can't fathom why people would see benefits in applications being self-contained. For as long as software has been installed, anyone installing anything has appreciated applications that have very few dependencies. At scale, much of the Ops complexity of many organizations comes from matching the production environment with the development environment. That goes away when applications are self-contained.

I don't mean to argue that one way is better than another. Of course there are downsides to things being self-contained. Just that if you can't understand the benefits of things being self-contained, you might not be thinking hard enough.


I don’t need to break my head with such things because I’ve mastered OS packaging on several operating systems.


OS packaging itself is an overcomplicated scheme for installing applications, which is why people keep trying to fix it with stuff like this or inventing yet another package manager.


Yet it is the one solution proven scalable with every distro release that uses it.


OS packaging is the simplest of them all and ideal for large scale configuration management. As another person mentioned, that is the proven technology that has consistently provided the best results over the last 40 years. That’s why I’m completely baffled by these irrational, massive, complex efforts to avoid mastering OS packaging. It’s a joy to build one’s own packages and for the OS to recognize them!
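To illustrate how little it takes: a bare-bones Debian package is one directory tree plus a control file (a minimal sketch; all names and fields are illustrative):

  mkdir -p myapp/DEBIAN myapp/opt/myapp
  printf 'Package: myapp\nVersion: 1.0\nArchitecture: amd64\nMaintainer: You <you@example.com>\nDescription: Example app\n' > myapp/DEBIAN/control
  cp build/myapp myapp/opt/myapp/
  dpkg-deb --build myapp myapp_1.0_amd64.deb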


You know what's even simpler? Just put your application in a folder and distribute it.


There have been several approaches to distributing applications with their dependencies, such as AppImage or most recently Snap (or Application Bundles on the Mac). For some reason none of them really took off. Ironically, the official way of installing LXC and LXD is now through Snap packages.


Indeed there have, and they haven't taken off... in Linux. Which is one of the reasons I think the Linux Desktop is unsalvageable. If the community would rather keep recreating the package manager, and never fixing any of its problems as a distribution mechanism, than go with the obvious and simple solution, then it is no surprise they have such a small share of the Desktop.

Just use folders, guys. Classic Mac OS used to essentially do that (it was technically a single file with a resource fork), DOS did that, RISC OS did that, NeXTSTEP did that and modern macOS inherited from it and still does that, and a lot of Windows applications still work like that even if they don't advertise it; I'm sure there's a bunch I'm forgetting. The Linux Desktop seems like the outlier here, insisting on spreading everything over the file hierarchy and interlocking it all like it's still a server from the 70s.


And then every package comes with its own libraries, which don't get updated and end up with duplicates everywhere. It's the same reason that Linux (the kernel) emphatically refuses to support out of tree drivers. It means that you have to make the effort to package it, yes, but once you've done that you get dependencies essentially for free. And as the end-user, I can update EVERYTHING on my system with one command, rather than the Windows hell of a dozen updaters running in the background constantly.


> And then every package comes with its own libraries

Only if they aren't part of the base OS set. This is how basically every operating system except BSD and Linux does things, and they have an order of magnitude more adoption than the Linux Desktop. Hell, Android even uses the Linux kernel and has an app store, and it still does that.

> It's the same reason that Linux (the kernel) emphatically refuses to support out of tree drivers.

Well no, that's because they insist that drivers can be better maintained (because it forces them to be open) and don't have to tie their hands supporting an ABI. As an example of the downside of this policy, see nVidia drivers on Linux.

Yes, it's a tradeoff, but there are a lot of downsides to package management that its proponents completely ignore. Case in point: the prevalence of using containers to run software without having to deal with conflicts created by trying to intermingle everyone's dependencies, or to install up-to-date software without having to go through some repo, or to distribute for multiple distros without having to maintain packages in two dozen repositories.

Even Linus distributes with AppImage. Probably just a stupid Windows user.


"Case in point: the prevalence of using containers to run software without having to deal with conflicts created by trying to intermingle everyone's dependencies,"

That's a problem on GNU/Linux; it's not a problem on illumos or BSD based operating systems. Either don't use GNU/Linux, or package 3rd party and unbundled software in /opt, configuration in /etc/opt, and configure the software to use /var/opt (as per the FHS specification), and the problem goes away.

It's the clueless developer problem, not an OS packaging problem.


Maybe for that particular problem, it still doesn't do anything for many others. For instance, what if I want to install an application on a different disk? In grand UNIX tradition, the scheme you outlined still spreads an application's files all over the tree.

I suppose you'll call that a packaging problem too, and I agree: you should package applications as relocatable directories that contain all their non-OS-provided dependencies.


"still spreads an application's files all over the tree"

No, only three directories: /opt for the application, /etc/opt for configuration, and /var/opt for the application's data. Please read the specification, either FHS[1] or the AT&T original[2] whence FHS came. Good engineers seek out and read specifications before they start any planning and work.

When you package applications in this way, only /var/opt needs to be backed up.
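For a hypothetical application "myapp" that means:

  /opt/myapp/        # the packaged, static application files
  /etc/opt/myapp/    # host-specific configuration
  /var/opt/myapp/    # variable data - the only part that needs backing up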

[1] http://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html

[2] https://smartos.org/man/5/filesystem


Right, so instead of your entire application being in one directory, it is in fact spread across 3 disparate ones. Why not /opt/<APP>/(var|etc)? Would make too much sense I guess.

I've read the spec, it's crap. There is no value to following a crap spec.


You might have read it, but you didn’t understand it, and the reason you didn’t understand it is because you don’t understand the concepts behind UNIX. No matter; here is your next stop:

“The art of UNIX programming”

...punch that into a search engine, read the book. Then we shall continue.


I understand the concepts just fine. They're from the 1970s, and probably made more sense then, but it isn't the 70s anymore. Hell, the people who made it moved on and improved it with Plan9 and even that was decades ago.

Stop treating UNIX and posix like they're some kind of religion.


If you had understood them, you wouldn't have made the statements you made. The delineation between /opt, /var/opt and /etc/opt is intentional: when the content in /opt and /etc/opt is packaged, only /var/opt/application needs to be backed up, because that is the variable portion, the data. There are other factors, like the linker mapping and ABI versioning consumed by the runtime linker, as well as a separate stack of shared object libraries, that play into this scheme, since except for libc and libstdc++ the libraries shipped with the OS aren't supposed to be linked with. That's how I can see you haven't grasped the entirety of the subject matter at hand, which is why you were told to go read some more. Packing each application in her own directory with her own libraries might be convenient, but it's dumb because of all the library code duplication, storage consumption and the nightmare which will ensue come time to patch the software. These kinds of stupidities are reserved for Microsoft® Windows® but have no place on UNIX®, where operational maintainability and stability are the highest of priorities. I'm running infrastructure across datacenters here, not putzing around with a lone application. My worries are ever so slightly broader than the concerns of individual lone desktop PC developers with only convenience in mind.


No. That’s so very wrong on so many levels. And UNIX systems have no “folders”, only directories. One can tell you’ve grown up on Windows.


Containers have much greater flexibility than OS packaging: use a different libc easily, install the same versions of the same package from the official sources without needing to re-package, use a different distro's packages for a single use case, isolate permissions and users along with the software, ...

Containers are much easier to use than OS packaging: Docker documentation is easily readable online, there are tons of Stack Overflow answers, it makes complex processes like multi-stage chroot builds trivial, it works the same on every OS (including Windows and macOS), running a custom package repo is a single command, ...

With a tool that's so powerful yet so easy to use, it's no wonder that users avoid single-OS skills like Debian or RPM packaging.
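The multi-stage builds mentioned above, for example, collapse into a single file (a sketch; images and paths are illustrative):

  FROM golang:1.10 AS build
  COPY . /src
  RUN cd /src && go build -o /app .

  FROM debian:stretch-slim
  COPY --from=build /app /usr/local/bin/app
  CMD ["app"]

and the single command for a throwaway local registry is docker run -d -p 5000:5000 registry:2.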


"Containers have much greater flexibility than OS packaging: use different libc library easily,"

If you have to use a different libc, that's a kernel engineering problem. On a real UNIX, libc is an integral part of the entire system, is not required to come from another party and is carefully engineered as part of a whole. A good libc requires no alternatives. Case in point: BSD or illumos based operating systems.

"Containers are much easier to use than OS packaging:"

They might be, but that does not make them better, nor does it make them a correct solution, especially if one is running on an illumos based operating system, which actually has true containers in the form of Solaris zones. Docker is a solution to a non-existent problem, a problem which wouldn't be there if one of the illumos-based operating systems were used as a substrate (refer to the vmadm(1M) and imgadm(1M) manual pages for a detailed explanation of why that is so[1][2]).

"there are tons of Stack Overflow answers,"

That is symptomatic of poor or lacking manual pages in the system, which in turn is symptomatic of poor or non-existent system engineering practices. Either way, it's an indicator of insufficiently documented as well as insufficiently integrated software: any time one mentions "Stack Overflow", one has lost, because Stack Overflow is full of answers which work but aren't correct on a system engineering or architectural level, and most who use it to solve their problems don't have the wherewithal to judge that, or they wouldn't be there in the first place. It's a very vicious cycle and a serious, systemic problem with long-term consequences detrimental to the IT industry.

[1] https://smartos.org/man/1m/vmadm

[2] https://smartos.org/man/1m/imgadm



