
Let's hope it will get to docker 'oneliner' level.

Last time I tried LXD it was not at that level.




I've been a big advocate for LXD, running multiple servers at work where I set up containers that I gave colleagues access to and such. I find the "VMs but not" approach quite nice.

I've since moved away completely, just before the recent moves by Canonical. Most, if not all, of my reasons to move away from LXD relate to Snap or other aspects of Canonical's leadership.


> Running multiple servers at work where I set up containers that I gave colleagues access to and such. I find the "VMs but not" approach quite nice.

Out of curiosity, why? Wouldn't it be easier to give them a URL and tell them to execute "docker run ..." to get the same environment?
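
Something like this, where the registry URL is just an example:

    docker run -it registry.example.com/team/dev-env:latest bash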


My use case is to let others use the HW I manage, not to have a known Linux environment.

Either someone wants to run something as an internal service (logging, etc.) that shouldn't depend on their desktop being online, or they need beefy HW.


I'm not sure what is meant by oneliner level here.

LXD is meant to run system containers via LXC. It's meant to feel like a virtual machine in the sense that you log in, install software, maintain it, etc.
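
Roughly, the day-to-day workflow looks like this (the image alias and container name are just examples):

    lxc launch ubuntu:22.04 mybox              # create and start a system container
    lxc exec mybox -- bash                     # "log in" and get a shell
    lxc exec mybox -- apt install -y nginx     # install/maintain software inside it
    lxc snapshot mybox before-upgrade          # snapshot it like you would a VM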

A similar concept might be images from TurnKey Linux. They're at least available in Proxmox, and are prebuilt system container images, but I can't see them being very popular compared to Docker itself.


Considering I can run Docker containers, log in, install software manually and then snapshot the images, I'm not sure what I get from this other than a terrible sounding workflow.
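
That is, roughly this (names are illustrative; it works, even if a Dockerfile is the more usual route):

    docker run -it --name scratchpad ubuntu bash     # log in and install things by hand
    docker commit scratchpad my-hand-built-image     # "snapshot" the result as an image
    docker run -it my-hand-built-image bash          # reuse it later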


Docker and OCI containers, with their layers and such, are (in my experience) best used as a way to turn any environment into a "static binary": you're pickling all the dependencies into a single artifact, and the storage layers and network port mapping provide the abstraction.

LXC is best thought of as a way to make a VM, except each of your VMs shares a kernel and filesystem cache. Each of these "VMs" can have a unique IP address, and even a totally different userland, but is otherwise best treated (for better or worse, depending on your needs) as a VM / computer of its own.

LXD is an orchestration mechanism to provision / manage these LXC containers -- it's similar to, but not really like, Nomad or Kubernetes.

For stateless things, I tend to use docker / OCI containers; for stateful things I tend to use LXC because the volume mounting abstractions in OCI containers just get in the way.
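
For example (paths and names made up), the difference in how the state ends up attached:

    # OCI: thread the state through a bind mount / volume at run time
    docker run -v /srv/pgdata:/var/lib/postgresql/data postgres:16

    # LXD: attach the host directory to the "VM-like" container and manage it in place
    lxc config device add db pgdata disk source=/srv/pgdata path=/var/lib/postgresql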

But that's me. I'm sure I'm doing it wrong in a variety of ways.


Let's say one has a Debian server with 256 cores and 100 users. If a user wants to run Fedora, LXD makes that possible: not just build and run once, but keep it running for 365 days, and inside that Fedora container hand out 200 user accounts or run 200 web servers. I know some hosting companies use it. Mix and match. All of these run as non-root, and you can apply RAM/CPU/bandwidth/IO controls everywhere.
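
A sketch of what that looks like per container (the image alias, name, and limits are just examples):

    lxc launch images:fedora/39 user42-box        # Fedora system container on the Debian host
    lxc config set user42-box limits.cpu 4        # cap it to 4 cores
    lxc config set user42-box limits.memory 8GiB  # cap memory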


I already run all my containers as "non-root", I can run 200 Fedora images, and I can apply RAM/CPU/bandwidth limits.

I can also put that user's home on the SAN so their data is retained if the container has to move servers. I'm unclear how any of that amounts to an LXD advantage.


“Manually” - you’re using docker wrong.


No, LXC/D doesn't make sense.

I "can" do what they are describing in OCI, but it's bad practice.

I don't do it manually.



