LXC in Ubuntu 12.04 LTS (stgraber.org)
89 points by dylanvee on May 23, 2012 | 34 comments



One of the unsung advantages of LXC and OpenVZ is that the disk cache is unified.

Full virtualization like KVM or VMware requires you to give each VM extra RAM for use as disk cache. For instance, if a VM had a typical set of processes using 1.5GB and you gave it 1.7GB, that would hardly be enough, as you want more than 200MB of disk cache.

Under LXC and OpenVZ, any unused RAM becomes globally available for disk caching, giving a decent performance boost and further reducing the per-VM resource commitments.

One example: a customer had some lousy queries in their SQL, but they really needed to have a good demo of their site. We moved them to a 32GB RAM system and gave the container 8GB.

As a result, nearly the entire 20GB database (or at least the parts that were needed) got loaded into the disk cache after the first batch of queries was run. It was enough to get them over the hump (they later figured out the nasty SQL that was getting them into trouble), and they had a good demo. After that, we live-migrated back to their regular server.
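To make the mechanics concrete: under LXC, "giving the container 8GB" is just a memory cgroup limit, which covers both process memory and the page cache the container touches; whatever the container doesn't use stays available to the host's global cache. A minimal sketch, with a hypothetical container name:

  # cap the container at 8GB (counts RSS plus its share of the page cache)
  sudo lxc-cgroup -n customer-demo memory.limit_in_bytes 8G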


I get the flexibility that gives you, but in that instance you've evidently got a 32GB machine sitting there, unused. For my money it's just as valid to move their guest to the 32GB machine, balloon up to allocate the space and allow the guest to use the extra space as disc cache as required, then balloon down afterwards and migrate off as normal.


True, but then the live-migrate feature would not be possible. Also, even in Xen, ballooning can only be done within a limited range (I think less than 4x), so you can't start at 1GB, balloon up to 28GB, and then go down to 1GB again.


> True, but then the live-migrate feature would not be possible.

Why not?

> Also, even in Xen, ballooning can only be done within a limited range (I think less than 4x), so you can't start at 1GB, balloon up to 28GB, and then go down to 1GB again.

I don't think kvm has that limitation.


Does anyone have any good resources on how Linux LXC compares to BSD jails from a security perspective? I've long been a fan of BSD jails because of how simple the security model is to understand, and how secure they've been in practice. Jail has long been a killer feature for BSD and a very, very good reason to use a BSD derivative for web servers, etc., as you can run each service that has the potential to be compromised in its own jail to minimize the overall risk. The best Linux has traditionally had to offer is a chroot'd environment, which, while good, has absolutely nothing on a BSD jail.

I'd imagine that LXC has the potential to change that, though I presume it'll take some time for a) adoption to increase and b) for it to prove itself after that.


LXC is just a set of scripts/interfaces on top of Linux's namespaces.

Namespaces are what is actually used. There are namespaces for mounts (disk), network, PIDs, etc. They are not very widely tested, albeit supposed to be relatively secure.
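A quick way to get a feel for one namespace in isolation, assuming util-linux's unshare(1) and iproute2 are available:

  # run a command in a fresh network namespace: it sees only a down
  # loopback interface, regardless of what the host has configured
  sudo unshare -n -- ip addr show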

FreeBSD jail provides an all-in-one integration instead. LXC provides the glue to achieve similar integration.

There is also rsbac_jail, which provides an integration more similar to what FreeBSD does.

The major issue with LXC so far has been that it's not well integrated/easy to use.


> The major issue with LXC so far has been that it's not well integrated/easy to use.

That's a bit of an understatement.

LXC is to namespaces as "tc" is to traffic shaping.

It should be replaced.


I didn't want to be that blunt ;-)


How is that so when the article says "NOTE: Until we have user namespaces implemented in the kernel and used by the LXC we will NOT say that LXC is root safe"?


Root in a namespace is root only within that namespace. If it's a fs namespace, it can't chroot out; if it's also a network namespace, it can't use network resources that aren't allowed by the namespace; and so on.

But a regular user can't create a namespace with root in it. That's what's missing, for example.

Note that, again, RSBAC does support fully virtual users, for example (akin to namespaces for users).


I've never heard of LXC before. What does this technology let me do that I can't do with stuff like virtualbox? Is it the same, but lighter weight?


LXC is OS-level virtualization (similar to OpenVZ and plain old chroot), which has much lower overhead compared to full virtualization (e.g. VirtualBox, KVM, Xen HVM) but requires that the guest share the host's kernel. You get easier setup (untar the filesystem and tweak a few config settings) with something like LXC, but you obviously cannot run Windows/*BSD/whatever.
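For example, with the LXC tools in 12.04, the basic lifecycle looks roughly like this (the container name is just an example):

  sudo lxc-create -t ubuntu -n demo   # build a rootfs under /var/lib/lxc/demo
  sudo lxc-start -n demo -d           # boot the container's init on the host kernel
  sudo lxc-console -n demo            # attach to its console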

> What does this technology let me do that I can't do with stuff like virtualbox?

Fit more containers on your host :)

Share (disk and memory) resources among your containers

Make the same partition/directory/files available to a few containers at the same time without using ssh/nfs/smb/etc (see the sketch below)

... and so on.
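For the shared-directory case, a bind mount entry in a container's config is enough, no network filesystem needed. A hedged sketch, where the paths are hypothetical and the destination is relative to the container's rootfs:

  # in /var/lib/lxc/demo/config
  lxc.mount.entry = /srv/shared srv/shared none bind 0 0

The same host directory can be listed in several containers' configs at once.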


Does LXC have a way to control allocation of the RAM-based block cache on a container-by-container basis, or can a very active container end up monopolizing it?


> of the RAM-based block cache

What? cgroups provides resource limits.
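Concretely: the memory cgroup charges page cache to the container that touches it, so a container's memory limit also bounds how much cache it can pin. A sketch, assuming a container named "busy":

  sudo lxc-cgroup -n busy memory.limit_in_bytes 2G
  # memory.stat breaks usage down; the "cache" line is that container's page cache
  sudo lxc-cgroup -n busy memory.stat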


I believe not. The disk cache is shared, too.


VMs each have their own kernel, but containers share the host kernel. This tends to be more efficient and doesn't require you to statically allocate vCPUs and vRAM upfront when you create a container.

LXC also has an under-appreciated mode where you can run some processes (but not a full OS) in a container.
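That mode is lxc-execute(1), which runs a command under a minimal lxc-init rather than booting a distro's init. A hedged example (container name is made up):

  # run a single process tree in fresh namespaces, no full OS boot;
  # ps inside shows little more than lxc-init and the command itself
  sudo lxc-execute -n sandbox -- /bin/sh -c 'hostname; ps aux'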


> LXC also has an under-appreciated mode where you can run some processes (but not a full OS) in a container.

Please elaborate, I'd like to try this out. I've been playing around with LXC for years, but assumed /sbin/init always had to be the root process.


/sbin/init never needs to be the root process in a Linux system. You will end up in a messy situation if your init replacement spawns processes but doesn't wait() for children now and again (e.g. some daemons will "double fork" and wait for the immediate child to die, and then expect init to take care of the real worker later), but that's pretty much it.



Unfortunately I've been running into a bug with lxc-execute (https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/986956) that prevents it from "just working" with interactive programs such as bash or python.


For some context, I believe many/most people doing PaaS (e.g. Heroku, DotCloud, CloudBees, Node*) are using LXC to create slices/dynos/shards/whatever.


Correct. DotCloud started using LXC in May 2010. Before that it used OpenVZ in 2009 (http://openvz.org), and even before that it used VServer (http://linux-vserver.org) in 2008. Back then the notion of stacking 2 types of virtualization was incredibly weird and remote.

In the 2nd half of 2011 other PaaS players caught on to the wonders of container virtualization - Heroku for example started using it with their Cedar stack in April of last year.


Do you know how LXC compares with OpenVZ and VServer?

I am considering using one of these for securing/hardening a set of VPSs I run. Since I have no prior investment in any of these tools, I would like to start with just one. The question is: which one?


I would go with LXC simply because it's part of the upstream kernel so you can expect it to evolve faster - not to mention you won't have to deal with patch management, and your setup will be more portable.

I would only use OpenVZ if there is a particular feature that it does better, and you can't afford to wait for LXC to catch up.


Can't vouch for LXC or VServer, but OpenVZ is well documented and I was up and running with it within days. I still have a dev server that's been happily running for about 5 years. Users get root access to a 'container' which has some limits imposed upon it. They can do as they please. I don't have to worry about them messing up the host (they are trusted users, though). The file system is easy to back up, as it's reachable from the host. The server itself doesn't support hardware virtualisation, so it's a nice fit.


Some questions about LXC:

- Is LXC friendly with IDS/IPS and the like?

- If I place a webserver or a database in a container, what would be the implications in terms of set-up?

- Networking: how would it interact with iptables? iptables only on the host, or is it possible to set up separate iptables in each container?

- How is logging dealt with?

- Can a system user sitting in the container escalate to root?

I am looking for a solution to further harden a set of VPSs for a web site/app.

Is LXC a good fit for that? Or might something else be a better fit?

Thank you.

P.S.: my CFO experience cannot help me here :-(


Any comparisons between LXC and Solaris's Zones feature?

They sound quite similar in concept/execution.


Linux containers are a fantastic development tool and ready for production prime time. BSD has long had really good jails, and having now implemented containers for (development) purposes at cnx.org, I can recommend them.


It's also great to partition a large machine into smaller ones. Containers are not yet completely isolated from each other, but, if you own all the containers, it's a perfectly good solution and more flexible than virtualized hardware.

If you plan on separating your app and database servers onto different machines, doing so from the start may be a clever idea.


I believe the only remaining hurdle for full isolation is user namespacing, which is slated to be implemented by the next LTS release.


How are you using them in production? Don't you need your deployment and development stacks to be using more or less the same software versions?


I am not yet using LXC in production, although I have used BSD jails in production for some time.

I think their great value is that it is soooo simple to create a new "host" that one is willing to try out server configurations and develop on clusters early, and so find the problems early.

I would credit virtualisation with the rise of devops - seriously.


Does anyone know technically how LXC compares to OpenVZ?


Is this Ubuntu-only, or is it integrated into Debian too?



