
The whole notion of containers is basically this. That's why I'm not sure why we don't just fix the OS, if there's anything to fix in the first place.



I always felt, perhaps uncharitably, that the point of containers was "those other programmers are idiots so we need to encapsulate everything for the sake of defense"


One huge benefit of containers is that you can treat a program as something atomic: Delete the container and it's gone, as if it were never installed.

Modern package management systems like APT spend a lot of effort installing and removing files, and they don't do it completely; any file created by a program after it is installed will not be tracked.
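
Roughly what I mean, as a sketch (image and package names are just placeholders; named volumes are a separate story):

    # everything the container wrote to its filesystem disappears with it
    docker run -d --name scratchpad some-image
    docker rm -f scratchpad

    # a package manager only knows about the files it installed itself
    dpkg -L some-package   # lists shipped files, not logs or state created later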

You could accomplish the same thing in other ways (as Apple's sandboxing tech does), of course.


"Modern package management systems like APT spend a lot of effort installing and removing files, and they don't do it completely"

Well, there /is/ another way to do it.

STATIC LINK ALL THE THINGS

Which would work if licenses and copyrights didn't exist.


That's one of the main touted benefits of containers (a.k.a. build reproducibility). You can view containers as an overly complicated way to make software with complicated deployment brain-dead easy to deploy.

We're at the point in the hype cycle where it starts getting fashionable to dismiss that as overkill, but the reality for most of us out there is that most software is way more complicated than a single executable, and containers make it easier to deploy complicated software.


I'm talking about mutations to the file system. Things like database files, logs, /var/run, etc.

Managing internal dependencies (like libraries) is another concern entirely. But containers are good for that, too.


> Which would work if licenses and copyrights didn't exist.

I don't think it would.

Dynamic linking allows a library to be patched once and have the patch apply to all the programs using it. If every program was statically linked, you would have to update each one individually.
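
For instance (binary, library, and paths are just examples and vary by distro):

    # every dynamically linked consumer picks up the patched library on restart
    ldd /usr/bin/curl | grep libssl
    # with static linking, each of those binaries would need its own rebuild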

Not to mention the waste of space.

I'm guessing much of that is moot these days, but IMHO it's still something to aim for.


Patch a library and perhaps you end up breaking some programs that rely on that library.

The benefit of that goes away with containers anyway: you don't share libraries; every instance gets its own install.


Could have sworn that _nix already had mechanisms for loading different lib versions side by side...
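
e.g. sonames already let several versions of a library sit side by side (paths vary by distro):

    ls /usr/lib/x86_64-linux-gnu/libssl.so.*   # e.g. libssl.so.1.1 and libssl.so.3 can coexist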


I think GP is being snarky/sarcastic.


Programs either shouldn't create such files, or, if the user created them, they shouldn't be removed.


Are you saying programs cannot create files? That's nonsensical. /var exists for this purpose.


tbh, I see the security point in Docker as a huge risk. Basically you're depending on everyone in the chain to regularly rebuild their images, or you will get compromised eventually. There is no such thing as an apt-get dist-upgrade, or any way to create really useful audit logs, with Docker.

On the other hand, deploying stuff is dead easy now. You just tell the hoster "deploy this Docker package, expose port X as HTTP, and put an SSL offloader in front" and that's it; no more "we need <insert long list> to deploy this" or countless hours spent with the hoster on how to get weird-framework-x to behave correctly.
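
Roughly (image name and ports are placeholders):

    docker run -d --name app -p 8080:80 some-image
    # plus whatever reverse proxy / SSL offloader the hoster puts in front of port 8080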


This is a good reason to build your own rootfs for containers and have as little as possible in them.


I work in a big company. It's much easier for us to spin up a container to run whatever experimental program we've thought might be useful to help us do our job than to provision a real box for it to run on or fit it into the whole bureaucracy.

If it's actually useful we'll find someplace for it to live (or just in containers if that's all that's needed). If not, finding that out was cheap.


This is a more charitable interpretation of what I meant: "we can quickly test the value of an idea via a prototype without the cost of making it super well behaved in other areas." Which is an excellent application of containers!

I just sometimes wonder if the cart hasn't gotten in front of the horse on the whole container front. The fact that the term "bare iron" has been hijacked to mean "not under virtualization" made me start to think that something is seriously fucked up.


It very much has. Mix it in with devops and agile, and before you know it everyone is throwing hot code directly to "prod"...


If you have machines with unused capacity, why would deploying a docker container image be easier than deploying a RPM package?


Because either there is no RPM or the RPM conflicts with other RPMs on the machine.


There are Software Collections for RPMs, which is basically namespacing. It can have quite an operational overhead if the thing you want to install has a lot of dependencies, though.
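
For example (collection name is just an example, and it assumes the SCL repos are enabled for your distro):

    sudo yum install rh-python36
    scl enable rh-python36 bash   # shell with the collection's python on PATH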


Sounds like you should learn what a chroot is, then. All the existing Linux platforms already provide the solution to your exact problem with much less overhead than Docker.


I agree this is possible, but the tools are somewhat obscure and slow. You can debootstrap in a chroot and then install packages, but it takes a long time and likely re-downloads hundreds of megabytes of packages you already have on the system.
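
Roughly the manual route I'm describing (suite, path, and mirror are just examples):

    sudo debootstrap --variant=minbase bookworm /srv/chroots/app http://deb.debian.org/debian
    sudo chroot /srv/chroots/app apt-get install -y some-package
    # every new chroot re-downloads its packages unless you point it at a local cache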

I don't like Docker, but I think it does some differential compression with the layers when you are modifying the image. So you don't have to re-do this install from scratch.

You may also run into issues with user IDs and various system config files in the chroot. Configuring the service with flags, env vars, and config files is a bit of a pain.

Docker is essentially a glorified chroot... I've tried to rebuild it myself, and it is unfortunately a lot of work.


Except that the chroot won't also run on the development machine, which is running a totally different OS.


This is a self-inflicted problem. 787 engineers don't have to test their work on a Learjet.


You do realize that the scenario you have described could be carried out with chroot just as easily, right? The only way "containers" help (and not containers as is, but Docker specifically) is that they ship the whole chroot OS image with all the dependencies.


I think the "just as easily" is highly debatable. I'm not a container evangelist by any stretch, but if you were going to take a "just chroot it" approach, the very first thing you'd want to do to ease the operational burden is define some kind of standard app packaging format that defines what's in the chroot and an entry point and an environment, and maybe some scheme for mapping external data and various other niceties you get with containers, and at that point, congratulations, you've reinvented containers.


> I think the "just as easily" is highly debatable.

Heads up, you're talking about a different thing. You want to run these things as normal operations, but gunnihinn was talking about deploying an application to check if it is of any value. Chroot is just enough to make a mess as the application's developer instructed in INSTALL.txt without the need to worry about cleaning up afterwards.

And by the way, you seem to be confusing containers and Docker.


I disagree. Just google "doesn't work in chroot" and you'll be reminded of a litany of issues that come up when trying to build/run things in a chroot, and a container containing a linux distro makes a tidy little sandbox which generally avoids those issues. It's somewhere on a spectrum between a chroot and a VM, which I think a lot of people find value in.

And I'm not confusing containers and Docker, I'm just speaking a bit imprecisely. In my experience, conversations about "containers" are rarely about raw containers, but rather about some specific containerization scheme and tools (e.g. Docker). I suspect everyone in this entire subthread means "Docker containers" when they say "containers."


> Just google "doesn't work in chroot" and you'll be reminded of a litany of issues that come up when trying to build/run things in a chroot

Yeah, some newbie forgot to bind-mount a necessary directory like /proc or /dev, didn't provide a sensible /etc/resolv.conf, or mixed up the host's and the chroot's paths, either in a request or in configuration. Nothing that would render chroot unviable. Is this what you meant?
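
The usual setup being referred to, as a sketch (paths are examples):

    sudo mount --bind /proc /srv/chroots/app/proc
    sudo mount --bind /dev  /srv/chroots/app/dev
    sudo cp /etc/resolv.conf /srv/chroots/app/etc/resolv.conf
    sudo chroot /srv/chroots/app /bin/bash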


Why would anyone want to get all that right manually when containers manage all that automatically?


Good question. Why would anyone presented with fancy and fashionable third-party software use a mechanism that has been present for decades, ships with the operating system, works reliably and predictably, doesn't change substantially every quarter, doesn't do any magical things to network configuration, and is easy for an outsider to inspect, debug, and adjust? Why indeed?


So you have an organizational problem and you're hacking one problem with another hack. Clearly nothing can go wrong with this approach.


I don't see that as an organizational problem, but a way to try something out safely and faster.


I was referring to how they are unable to provision the right resources to run experiments because of all the red tape, and they are getting around the issue by using Docker. What happens when the red-tape guards get wind of this?


I don't think so. Consider that computers are generally so powerful these days that when running a single application stack they're seriously underutilized. The first way people went about addressing this was virtual machines (made popular by IBM with its VM/370 OS :-), and that works well, but when all of your clients are running the exact same OS, down to the same version, it is kind of a waste to have 'n' copies of the OS loaded, so containers are a 'semantic' fractioning of the resources, where the OS is common but the set of processes is unique to a client. You still need a way to allocate from the single set of resources, and containers provide that abstraction.


What exactly is the problem with process sandboxing and language level VMs? The industry is tackling all the wrong problems. So have we given up on process sandboxing with capabilities? The whole containerization movement is one giant hammer to kill a fly kinda business these days. While you guys are figuring all this out I'm gonna stick with BEAM, JVM, and other tried and true methods. I'll check back in another 5 years to see what this mess has turned into. Maybe that's enough time to figure out how all the overlay networking is working out because, you know, we need a few more overlays between the hardware and the VM.


Because then each piece of software you develop is limited to a single process written in a single language. How do you use "process sandboxing with capabilities" to isolate, say, an Erlang process that uses a NIF embedding a shared-memory section to interact with a companion OpenCL C process? With containers (or VMs), the answer is obvious; with POSIX-level isolation primitives, not so much.


http://man7.org/linux/man-pages/man7/capabilities.7.html

There are several capabilities in that list to address your problem. Since your question is concerned with memory, you can search for "memory" on that page and note the exact capabilities you will need.
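
The general pattern, as a sketch (the daemon path is a placeholder; see setpriv(1) from util-linux):

    # run a service as an unprivileged user with an emptied capability bounding set
    sudo setpriv --reuid=1000 --regid=1000 --init-groups \
        --inh-caps=-all --bounding-set=-all /usr/local/bin/some-daemon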

OpenBSD has even more features (https://en.wikipedia.org/wiki/OpenBSD_security_features) alongside the capability model.

My gripe is that instead of learning how to properly use their tools and platforms, people have started looking for golden hammers like Docker, and now we have a mess like RancherOS. If someone can explain to me how isolating the system services in docker containers (which by the way are not isolated since they are privileged) does anything above and beyond what capabilities and the security features in OpenBSD provide then I'll concede the point. My guess is that there are no advantages, that people are jumping on bandwagons they know nothing about, and that since the previous generation of tools was never utilized to its full extent, neither will all the new docker hotness be. People are hammering square pegs into round holes.


Note that my point wasn't about how "your process" can be non-isolated in certain ways using capabilities, such that you could, in theory, sandbox both of those processes individually and have them still do whatever IPC you want; yes, this is certainly possible, and obviously more sensible if one of those processes isn't so much "your" process as it is some other process managed by some other party that you're interacting with.

My point was more about the "developer UX" of needing to isolate things that way. Containers have the semantics of isolating a group of processes, but not performing any internal isolation between the processes in the container. This is almost always what you want—you want "your app" to be able to have multiple processes, and to not have any security boundary between the parts of "your app", just between "your app" and "other apps" or the OS. In other words, you want to have a single "process"—from the OS security subsystem's perspective—whose threads happen to be composed of multiple PIDs, multiple binaries, and multiple virtual-memory mappings. You want to fork(2) + exec(2) without creating a new security context in the process.

Sandboxing would make perfect sense if single processes were always the granularity that "apps" existed at. Sometimes they are. Sometimes they're not, and people do complex things with IPC-capability-objects. And sometimes, when they're not, that fact pushes people toward avoiding multi-process architectures in favor of monolithic apps that reimplement, in themselves, functionality that already exists in some other program, in order to bring that functionality into their platform/runtime so it can live in their process.

Containers let people avoid this decision, by just applying things like capabilities at the container level, rather than at the process level.
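
Concretely, the kind of grouping I mean, as a sketch (needs a kernel with unprivileged user namespaces enabled):

    # one shared set of namespaces for a small group of processes,
    # with no security boundary between them
    unshare --map-root-user --pid --fork --mount-proc \
        sh -c 'sleep 300 & exec ps ax'   # ps sees only the processes in this group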

> If someone can explain to me how isolating the system services in docker containers (which by the way are not isolated since they are privileged) does anything above and beyond what capabilities and the security features in OpenBSD provide then I'll concede the point.

Completely apart from the above, my understanding of Rancher is that it's the Docker part of "Docker container", not the container part, that provides the benefit there. Docker is a packaging and service-management system; that its packages use containers is frequently beside the point. Rancher's system services are Docker images (i.e. Docker "packages"), and so you use Docker tooling to create, distribute, manage and upgrade them. If your own application on such a system is managed through Docker, this provides a neat solution to unifying your operations—you just do everything through the docker(1) command.


Ugh, "apps". If ever there is a tortured term in computing these days it's that one.


Okay, how about "a purchased software product launched through a GUI"? There's no guarantee that it's a single process, but there's an assumption that it's a single security context.


Well, at Google, where a lot of the development on containers took place, the problem with process sandboxing and language-level VMs was machine resource allocation, in both memory and I/O bandwidth. Let's say you have three "systems" on the box: one is a collection of processes providing elements of a file system, one is a collection of processes providing computation, and one is a collection of processes providing chat services. Now you want to allocate half of the disk I/O to the file services and half to the compute system; then 75 percent of the network bandwidth to the chat system, 20 percent to the file system, and the remaining share plus any "unused" bandwidth to the compute system. You want half of the memory in the system to go to the compute system and the rest to the network file system processes.

That is a complex mix of services running on a machine, some sharing the same flavor of VM, and you're allocating fractions of the total available resource capacity to different components. If you cannot make hard allocations that are enforced by the kernel, you cannot accurately reason about how the system will perform under load, and if one of your missions is to get your total system utilization quite high, you have to run things at the edge.
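
The kernel mechanism containers lean on for this is cgroups; a rough cgroup-v2 sketch (run as root; device numbers and limits are placeholders):

    mkdir /sys/fs/cgroup/fileserver /sys/fs/cgroup/compute
    echo "+memory +io" > /sys/fs/cgroup/cgroup.subtree_control
    echo 4G > /sys/fs/cgroup/compute/memory.max                   # hard memory cap
    echo "8:0 rbps=104857600" > /sys/fs/cgroup/fileserver/io.max  # ~100 MB/s read cap on disk 8:0
    echo $FILESERVER_PID > /sys/fs/cgroup/fileserver/cgroup.procs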


Because process sandboxing turned out to be insufficient, and we need layered sandboxing, with several levels of protection, for security.


"it is kind of waste to have 'n' copies of the OS loaded, so containers are a 'semantic' fractioning of the resources where the OS is common but the set of processes are unique to a client. You still need a way to allocate from the single set of resources so containers provide that abstraction."

Agreed. What I find so odd is that this problem was simply and elegantly solved in BSD with 'jail' and everyone went about their business.

I do not understand why (what appears to be) the Linux answer to 'jail' is so complicated and fraught, and the subject of so much discussion.

I am not sure that containers and their build scripts represent the $huge_profit_potential that people think they do ...


Do you feel the same way about visibility in OOP? E.g. private data and methods?


Containers are just unifying the concepts of namespaces and cgroups into environments -- I'm not sure that there's a better way to do that at the OS level, as I enjoy the primitives being separate and having the potential to remix them as my understanding of containerization evolves. (Okay, so there are a few other things mixed in, like SELinux settings.)

I think there's interesting ideas of environments in other operating systems, but I'm not seeing how you'd make things better by flattening containers into the OS, per se.


One obvious tweak would be to make it so that every bit of isolation that containers now get by default, you instead get per POSIX process-group/session or somesuch, so you don't have to think in terms of containers to get the benefit of containers—they're just something the OS does transparently whenever you make it clear that a set of processes forms a distinct, separate cluster.

Making existing programs compatible with such a paradigm probably wouldn't be any more work than e.g. adding SELinux/AppArmor support.


Nah, that would be too easy. Let's instead introduce yet another notion of sessions (logind).


(I'm not saying this is a good thing but ...) Containers are often used where you want to run several programs, but each depends on a mutually incompatible set of libraries. Two programs need python-somelib-1.0 and python-somelib-2.5, but both versions can't be installed at the same time. Or you need to upgrade the programs at different times and during the upgrade window they'd depend on different versions of python-somelib.

Now of course the correct way to solve this would be (a) to make both programs use the same version of python-somelib, (b) make upgrades happen atomically, (c) for Python modules to actually have some API backwards compatibility. But in the absence of doing the right thing you can use a container to effectively static-link everything instead. And worry about the security/bloat/management another time.


You forgot possibility (d) allow multiple library versions to be present simultaneously.


See: Virtualenv (for Python), Snappy, Flatpak

The main advantage of docker, as far as my understanding goes, is more of a prebuilt system configuration thing. Need a database? Load the prebuilt PostgreSQL image onto the respective machine.

One thing I think should get more use in general is that Dockerfiles are essentially completely reproducible scripts[1]. Too many companies I've seen still use Word documents full of manual steps one can easily get wrong for all their machine setups (especially in the Windows world). If you want to test something quickly, you're bogged down for a day.

[1]: example: https://github.com/kstaken/dockerfile-examples/blob/master/m...
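
As a rough illustration of that reproducibility (image name is a placeholder):

    docker build -t myapp:1.0 .        # rebuilds the same environment from the Dockerfile
    docker run --rm -p 8080:8080 myapp:1.0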


Over the last 15 years I've built and maintained Linux distros using apt, with custom debs and preseed files which handle configuration, automatic upgrades for most systems, and a small shell script and ssh for the rest.

Now it's all Docker rather than packages, and Ansible (which leaves no trace of what it's doing on the target machine) rather than a "for i in `cat hosts`" loop. Fine, but where's the benefit?


Note that I didn't make any value statement towards Docker - I don't have that much experience with it, and I found its infrastructure rather clunky when I tried it. So I'm not sure I'm qualified enough to make definitive statements about its usefulness.

I can see, however, a benefit in encapsulating different services in different containers, as this potentially gives you some control over them (available disk space, network usage, etc.)*. On top of that, I imagine starting out with a "machine agnostic" approach can be rather useful if you have to change your network landscape further down the line: if your database is already configured as if it were running on its own server, there's certainly a bunch of unintended coupling effects you can avoid.
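
For example, the kind of per-container limits I mean, as a sketch (values and image are arbitrary):

    docker run -d --name db --memory=2g --cpus=1.5 postgres
    # disk and network limits can be layered on via storage drivers / network plugins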

That said, I can't see Docker being the silver bullet it gets hyped up to be sometimes. But that goes for most new and shiny things in the tech space...

* And yes, that's already feasible without containers. Docker's approach to this just seems much more automated than most alternatives, though.



> The main advantage of docker, as far as my understanding goes, is more of a prebuilt system configuration thing

I think the main advantage is that it standardizes the interface around the application image/container. This allows powerful abstractions to be written once rather than requiring a bespoke implementation for each way that the application is structured. Imagine writing the equivalent of Kubernetes around some hacked-together allows-multiple-versions-of-a-dependency solution. It would be a nightmare. But because the Docker image/container interfaces are codified, you can build powerful logic around those boundaries without needing to understand what's inside those images/containers. Dynamically shifting load, recovering from failures, automatic deployments and scaling are all much easier when you don't have to worry about what language the application is written in or how the application is structured.


There is a way to do this without the security/bloat/management issues. Take a look at the approach of GNU Guix: multiple versions of a package can coexist on the system, libraries are dynamically linked, and security updates can be applied quickly (without rebuilding the world) using the "grafting" system.
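
A sketch of what that looks like in practice (package names and versions are just for illustration; availability depends on your channels):

    guix package -i python@3.9  -p ~/profiles/app1
    guix package -i python@3.10 -p ~/profiles/app2   # both live side by side under /gnu/store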


Actually, Lennart proposed some years ago exactly how you could "fix" the OS to apply the ideas from containerisation. http://0pointer.net/blog/revisiting-how-we-put-together-linu...


This seems horribly complicated and ridiculous, especially since, when it was written, btrfs (which was a requirement) was pretty terrible.


At the time of writing, btrfs was seen as the hero-to-be. Things have changed quite a lot. Currently we are solving this problem with a combination of containers and an immutable filesystem (ostree). Anyway, solutions discussed in the open are always a good idea, however ridiculous they may be perceived. Lennart, a Club Mate to you...


Well, the author is Lennart Poettering, the guy that "gave" the Linux world PulseAudio, Avahi and systemd...



