Stéphane Graber's work is fantastic and LXD had a lot of potential. But Canonical managed to drive me away from it with that abomination called Snap and its auto updates that could bring down a cluster without warning.
It is really sad that more people aren't aware of LXD, because it is a really good solution for a number of projects and it's really well done.
On Snap... Even if you like Snap packages, I can't imagine you'd feel it's a good choice for LXD. For a browser or email client, sure, it's a solution, I suppose, but for something like LXD it just adds an entire layer of complexity that doesn't help. I'd go so far as to say that Snap packaging makes LXD unusable; it's simply too complex, too messy. Canonical also has MicroK8s (I think), which is packaged as a Snap, rendering it completely useless. The mental overhead required to make it function completely eliminates the point.
It also completely ignores highly regulated businesses, because Snap does not care and will just roll out an update whenever it feels like it.
I don't particularly like Snap, but I can see the use on desktops. For the server side... I don't know anyone who thinks it's neat.
It's also annoyingly opinionated in stupid ways. There's no supported way, for example, to adjust which services are enabled or not, and in particular there's no concept of 'started but not autostarted', which can be important on a server (it's obnoxious on the desktop as well, since a lot of the search results when you try to fix this are people asking 'how do I stop snap slowing down boot').
There are annoying 'Turn off Firefox so Snap can update it' messages. Nothing happens when you close Firefox; you'll just get the same message a few hours later.
Shutdown is going to take ages for some reason because it's waiting for snap to do something.
I don't give two shits about whatever snap is trying to do, but having to wait 10 minutes for my workstation to be useful whenever I want to reboot is not what I'm about.
Professionally, the company I'm at ditched ubuntu in our CI builds because of snaps.
The packaging is very limiting, and I don't want to have to fight against the system to get a version of software installed that works.
I personally used to run MicroK8s (snap based). I finally gave up about 6 months ago. Were snaps the root cause? Eh... no, the root cause was dqlite synchronization problems that I was tired of dealing with, but snaps made the whole process so much worse. They're hard to automate, they're hard to interact with using normal tooling. They just aren't part of the system in the same way, and while I understand there's value there (hell - I'm running containers and microk8s for a reason) there's also pain. K8s was an okish trade on that front: I felt a lot of pain and gained a lot of value.
Snaps seem to be falling into the worst possible spot: I'm feeling a lot of pain and the product is usually not any better at all (and sometimes much worse).
HA Microk8s with snaps and nvidia-container-toolkit (The GPU addon) just isn't all that stable (I would consistently get hard lockups on some hardware). That same hardware running Arch and k3s is chugging away beautifully.
So maybe it is an echo chamber - but the echoes I keep hearing are system admins saying this is making life harder and they don't understand the upside, and Canonical pushing forward anyways.
---
So maybe there's light at the end of the tunnel somewhere for snaps. Maybe the grand vision will eventually come and folks will be fine with it. But my suspicion is that Ubuntu is being replaced at most orgs before then.
Steam is throwing weight behind Arch (LOTS of new linux users coming in here). Alpine is a strong contender in containers. Nix is interesting.
Both Flatpak and AppImage seem to achieve the "easy user installs" part without all the pain of snaps, and they're neutral on whether I use them or not.
Basically, if snaps were so good (even if they're only good in some cases), I would expect a conversation about snaps similar to the one around systemd: strong arguments on both sides, real advocacy from the community for the product, even if it's painful to adopt or doesn't meet all user needs.
Instead snaps are just "meh" all around. There's a steady stream of folks leaving, and no real advocates outside of Canonical. At most - you have neutral folks like yourself who just don't mind snaps.
To me - that's a bad sign. I bailed. It wasn't worth betting my personal projects on. I've been pretty happy with the decision too.
Well, it depends. I liked upstart, I liked Unity (I still use them as long as I can!) and I didn't dislike Mir, although it never came to anything. But I don't think snaps are good, at least for me. What bothers me the most about snaps isn't what I consider to be technical deficiencies or annoyances, but the fact that it's becoming the only way to install some software packages in Ubuntu, with no alternative other than leaving Ubuntu for good.
It seems like it was only introduced in December, so I can see why they haven't made it the default. It would be rather surprising to have run a Snap package for the past six years and then suddenly have it stop updating.
You've been able to schedule specific refresh times and intervals for a while now. We did it monthly at a specific time and then set a calendar event with a couple of notifications to keep track of it. IMO the "I can't turn off the updates" concern is overblown. If you aren't patching your systems ever, you have problems anyway.
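For reference, the mechanism is snapd's refresh timer; something like this (the timer string is just an example, check `snap help refresh` for the exact syntax):

    # refresh only on the first Monday of each month, 02:00-04:00
    sudo snap set system refresh.timer=mon1,02:00-04:00
    # show the schedule snapd computed and the next refresh time
    snap refresh --time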
Overblown? Tell that to people that were hit with production outages. The whole idea of automatic updates for software running critical workloads is so stupid that I lost all respect for Canonical as a vendor.
The problem with auto updates for daemons and services is that the maintainer cannot possibly account for all the different ways their software is used to be able to guarantee that the new version doesn’t break some critical service. It’s akin to a team outsourcing their CI/CD pipeline to some 3rd party who have no idea they exist!
Yeah I'm setting up a Debian 12 Bookworm machine right now after ~13 years of being an Ubuntu user
So far it's very snappy! My Ubuntu 18.04 machine somehow "rotted" in ways that previous installs didn't -- everything became slow and janky, sorta like Windows
And I specifically installed 18.04 in 2021, to avoid Snaps
> My Ubuntu 18.04 machine somehow "rotted" in ways that previous installs didn't -- everything became slow and janky, sorta like Windows
Oh man, this reminds me of two years ago. During Covid lockdowns, I figured I'd try Ubuntu, see what's up with gnome. This was 20.04. I was pretty meh, but it was okay. Then I found out about Regolith [0], which is a "spin" or whatever it's called, that comes with i3 (like Kubuntu has KDE instead of Gnome).
I don't know exactly why, and I was too lazy to dig, but that thing was one of the most sluggish experiences I've ever had. At the time, I'd daily drive an i5-6500 with integrated graphics, and my regular Arch with i3 and light Picom effects (blurring for notifications and dimming inactive windows) worked great on the integrated GPU. Somehow, the Ubuntu version was worse on a newer i5-8500.
I'm in literally the same boat, on Debian now because of snaps. Their other homegrown stuff was mostly out of the way, like Unity or upstart, but snap was just really annoying with how it's integrated into the system.
I switched to Debian 11 for the same reasons around Christmas. I also found it faster than Ubuntu 20.04, and the fan starts spinning less often. I've been on Ubuntu since 8.04.
When I can I'm creating servers with Debian too, personal ones and for customers. The only problem there is that Letsencrypt uses a snap to update the certificates, even on Debian [1]. Not all my servers need a web server, but when they do I usually use nginx. I'm investigating alternative update clients, with no hurry. When that is solved, goodbye to Ubuntu.
I'm afraid that this is only a wrapper around the snap. I quote from that page
> If you have any Certbot packages installed using an OS package manager like apt, dnf, or yum, you should remove them before installing the Certbot snap to ensure that when you run the command certbot the snap is used rather than the installation from your OS package manager.
That seems to be a doc issue. It certainly did not install a snap when I just tested this, and the certbot python package it installs explicitly checks whether it is running in a snap or not, which would make no sense if running it outside one weren't an option.
That doesn't mean the other methods wrap the snap. It just means that in order to truly install via snap, you should first remove the same package if it was installed via other means.
My understanding is that Pop uses Flatpak sparingly, whereas Canonical is pushing Snaps for an increasing amount of software.
That is, IIRC, you get (say) Firefox via Debs in Pop. Canonical wants to install it via Snap.
IMO Flatpak is a great option for a set of desktop software where packaging across distros may be problematic for some reason. I use the Firefox Flatpak on Fedora / RHEL because it doesn't disable the video codecs, for instance. The native package doesn't have some codecs enabled.
Canonical is (AIUI) pushing Snap for desktop and server software. And Canonical is the only source of Snaps, I believe? With Flatpak you get Flathub but AFAIK anybody could set up a repo of Flatpaks.
Flatpaks have some startup time penalty, but it's an order of magnitude better than Snaps. They also have way better disk usage characteristics per installed package than Snaps do (they scale better).
You can use them side-by-side on the same system. Install a dozen of each and take some measurements.
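If you want a quick-and-dirty comparison, something like this works (the app names are just examples; use whatever you have installed in both formats):

    # rough cold-start comparison
    time flatpak run org.mozilla.firefox --version
    time snap run firefox --version
    # rough disk-usage comparison
    du -sh /var/lib/flatpak
    du -sh /snap /var/lib/snapd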
Ubuntu, with the smallest, cleanest, yet richest desktop on Linux -- Xfce -- and neither Snap nor Flatpak. Instead, `deb-get`, which finds and installs native DEB packages, configures the repos for you, manages updates and so on.
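The day-to-day flow looks roughly like this (the package name is just an example; `deb-get list` shows what's available):

    deb-get update
    deb-get install google-chrome-stable
    deb-get upgrade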
I was on KDE for a bit; love it on my ultrawide monitor with tiling. But on my laptop Gnome is so sweet. I have a 2nd-hand HP ProBook and I swear, for the very first time ever in my Linux life the trackpad feels like my MacBook trackpad. 3-finger swipe up, overview of windows, another swipe is the app grid, swipe left/right, move to other desktops. I really find the whole experience very smooth and I enjoy it a lot (all Wayland, on NixOS).
I have suspend/wake working well so far, USB-C charging + screen + all peripherals via 1 USB-C cable, all "media" buttons work. And this is still on 8 GB of ram (soon the 2x32 GB will arrive ;)), I haven't even heard the fans so far! I'm on Teams, camera works, nobody would even know I'm on Linux if it wasn't for my constant evangelizing!
I'm really impressed (yeah, I know, Linux people easily are when it comes to desktops, basically we're impressed if things work).
I did enable AppIndicator. Solaar, NextCloud and Tailscale all really require it to be considered functional. I don't understand why that is not a standard thing (it is on many distros though).
I don't like GNOME. I don't like GNOME accessories; I hate CSD and hamburger menus and that big empty wasted top panel. I want more things vertical, while GNOME is moving its vertical workspace-switcher to horizontal.
The Cosmic tiling is good. That's an improvement. GNOME's window management sucks, and this is better.
I also really don't like systemd-boot and it broke one of my laptops severely.
So, I am happy that it is good for you, but it is not something I'd choose.
And I do agree on the very high number of pixels wasted in the top panel, especially on an ultrawide monitor, something macOS does better. On a laptop screen it's OK for me.
Not yet, but it's fairly simple; I now have 1 week of NixOS experience ;) I installed with the graphical installer, chose Gnome, and added some specific things like dark mode and many packages to the configuration.nix. Then I read about Home Manager and put packages under there.
OOTB the Gnome install was as good as can be. It's a nice way to play with Nix, batteries included.
Right now I need to deploy a server used for bioinformatics, and I need Conda... And that is a pain [0], so again I'm on the fence: Deploy Ubuntu or NixOS and persevere... I'm thinking Ubuntu, then perhaps later in the learning curve I go for Nix or perhaps just make a container to use on Nix.
I will use them, at least for a while, when their currently in development Rust desktop env is out. Super curious how that pans out (as traditionally all popular GUI libs are very OO).
I had numerous issues with it, like the inputs would not focus in firefox-esr. Unfortunately I had to install Ubuntu 22.04 which works fine but has all sorts of annoyances like snaps and text ads in apt. Ubuntu didn't even set up grub right after I used custom partitioning to avoid nuking my /home partition, it only booted after I did a manual grub-install.
I did the same by the time Jessie was released. I realised the XFCE version was good enough, and I could use a few tools from the vendor binaries plus a couple of things I could compile.
Yes, it is more work, but since then I've been upgrading Debian and it just gets better!
Did the same ubuntu > debian transition a while ago… but then got fed up with old packages and other stuff. Nowadays I’m on fedora (xfce spin) and it’s just perfect.
I agree. Snap is the weak point of LXD. I have been using it in production servers and until
snap refresh --hold
was made available, my team went through the pains of snap workarounds [1].
The LXD team was very proactive on the support side, but the truth is that Snap is a pain, and without --hold it was possible for production servers to break all of a sudden due to uncontrolled updates (who ever thought this was a good idea?).
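For anyone who hasn't seen it, the hold can be global or per snap (snapd 2.58+); roughly:

    sudo snap refresh --hold            # hold all refreshes indefinitely
    sudo snap refresh --hold=720h lxd   # or hold just lxd, for a fixed window
    sudo snap refresh --unhold lxd      # release it again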
LXD itself is a very good project because it unifies the use of containers and VMs behind a friendly CLI - the syntax is more intuitive than Docker's, by the way. We recently managed to combine LXD and Puppet to manage containers and VMs in a declarative way. I am a big fan of the LXD project.
Somebody else commented here that the latest Debian makes LXD available in the normal distribution packages - that is something worth checking.
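Worth checking indeed. If the bookworm packaging behaves like upstream LXD (an assumption; I haven't tried it), it should be as simple as:

    # on Debian 12 (bookworm)
    sudo apt install lxd
    sudo lxd init    # interactive first-time setup (storage, network, etc.)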
Moved completely off ubuntu after snaps were introduced. Sticking with gentoo now because it’s one of a few distros that doesn’t try to control what software you have to use.
To this day about 90% of discourse links I click do not load. Seems to be some adversarial interaction with uBlock origin. As usual Canonical is on the wrong side of history.
Have you tried NixOS? It's a totally different paradigm and definitely an acquired taste, but I used Gentoo for ~2 years, then used Arch for ~5, and now am switching everything to NixOS.
The main benefits are that setting up a new machine is much easier, and updates are easier, because almost all the normal configuration is managed by NixOS (in the Nix language) as opposed to regular files floating around etc.
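A minimal sketch of that workflow, for anyone who hasn't seen it:

    # declare changes in /etc/nixos/configuration.nix
    # (e.g. add a package to environment.systemPackages), then:
    sudo nixos-rebuild switch              # build and activate the new generation
    sudo nixos-rebuild switch --rollback   # if it breaks, go back to the previous one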
Me too. Turns out Alpine is better for server VMs for my use case, and CachyOS is better for desktop for my use case as well (at least possibly until I figure out NixOS).
I recently transitioned on the desktop from Debian then-testing to Alpine edge. I switched for a newer kernel (4.9 to 6.1 for recent fixes to busted microcode -- little annoyances around charging and the touchpad) but I like a lot of things about it: quick install, up-to-date repos, easy-to-understand init, battery lasts longer...
Can confirm alpine works fine as a libvirtd/kvm/qemu hypervisor, fwiw.
The one gotcha to be aware of here is that libvirtd as of now has two ways to be operated: Either run just the monolithic libvirtd service, or individual subsystems like virtstoraged. Trying to do both at the same time will cause issues.
Due to the limitations of OpenRC compared to systemd for interdependent and interoperating system services like this, I stuck with the monolithic libvirtd approach and it's been running fine so far.
Oh yeah, I hated when systemd would kill (restart) my libvirtd daemon (and my various VMs/containers) whenever I adjusted my systemd-controlled nftables.
Monolithic libvirtd (which means disabling the modular systemd libvirt services) is the way to go.
I was also aiming for Xen initially. However, my host just crashed (can't recall the exact dump) when I tried running a Xen kernel - Alpine or not. I never confirmed if it's due to the Intel N5105 lacking the proper CPU extensions to support Xen or if I could hope to hassle the machine vendor for a firmware update to get it working.
I would expect musl to be less of a problem for the host than guests, though, since a VM/container host only needs to run packaged software from the distro's official repos (mostly libvirt/docker, probably). What problem would you expect to hit?
LXD is really great software. The majority of the contributions already come from canonical, so I doubt this will make much difference in the trajectory of the project.
I will say that I prefer running LXD on NixOS hosts where it isn't packaged as a snap. Hopefully canonical doesn't somehow break that.
LXD isn't an alternative to podman. Podman is meant to run 'application containers', where each container has just one running process. LXD is meant to run 'system containers' where each container is a full Linux distribution with an init system and (possibly) multiple daemons. LXD containers are like light-weight VMs. Unlike VMs, LXD containers share the host kernel.
You could run podman or other OCI containers inside LXD. I use LXD to test multi node K8s (K3s) on my desktop system.
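To make the distinction concrete, the LXD workflow looks roughly like this (the image alias and instance name are just examples):

    lxc launch ubuntu:22.04 sys1       # a full distro; systemd runs as PID 1 inside
    lxc exec sys1 -- systemctl status  # a whole init tree, not a single process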
I briefly used LXD once when I needed a full system inside a container.
But podman also supports systemd inside a container and along with macvlan networking you can pretty much build an "independent" container acting almost as a VM.
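A rough sketch of that setup, assuming an image that ships systemd and adjusting the interface/subnet to your LAN (all names here are placeholders):

    # macvlan network bridged onto the host NIC (needs root)
    sudo podman network create -d macvlan -o parent=eth0 \
        --subnet 192.168.1.0/24 --gateway 192.168.1.1 lan
    # run systemd as PID 1 so services inside behave like on a real host
    sudo podman run -d --name vmish --network lan \
        --systemd=always registry.fedoraproject.org/fedora /sbin/init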
Would LXD provide any other advantages/differences to that?
Vanilla LXD containers can't run podman inside them. You need privileged LXD containers. There were quite a few settings I had to figure out before I could get K3s to run on it. I'm considering publishing an article about it.
There are a few settings to figure out before LXD containers can host K8s. It's mainly about running the LXD containers in privileged mode. I have the settings written down somewhere. I think I should publish it somewhere, considering the expressed interest in it.
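From memory, the core of it is the privileged/nesting knobs, roughly as below; the extras (kernel modules and such) vary by K8s distribution:

    lxc launch ubuntu:22.04 k3s-node \
        -c security.privileged=true \
        -c security.nesting=true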
I haven't read much about lxd until just now but based on that link, the differentiating feature between lxd and podman seems to be that lxd can manage full virtual machines (using qemu as backend, according to [1] which was linked elsewhere in this thread). Whatever this distinction is between application and system container, it doesn't appear to be a technical distinction nor a feature that lxd has that podman lacks, unless I'm wildly misunderstanding it. Containers you run with docker and podman are fully capable of running multiple processes (in my experience it's quite common to do so) and Red Hat has blog posts from years ago specifically discussing running systemd in podman, eg [2]. Managing VMs is indeed an additional feature though.
> the differentiating feature between lxd and podman seems to be that lxd can manage full virtual machines
LXD is a management layer over LXC and QEMU (KVM?). LXC is all about system containers. The QEMU support is a recent addition [1]. LXD supported only LXC system containers until then.
> Containers you run with docker and podman are fully capable of running multiple processes
Yes. I have done this. But it was very unwieldy - probably because docker, podman etc weren't designed to run system containers.
K8s doesn't support orchestrating LXC/LXD containers as far as I know. What I did was to use LXD containers as hosts/nodes for K8s. So, it was basically application containers/pods and K8s running inside system containers.
In addition, there are orchestrators which can run LXC containers (LXD is a management layer over LXC). Hashicorp Nomad is noteworthy.
Added later: K8s runs OCI containers. All OCI containers I have seen are application containers. I don't know if OCI specification supports system containers.
There's a bit of a Turing Tarpit here because they're both ultimately just namespaces under the hood so you can with effort get them to do equivalent things.
The original idea behind LXD was to build packages in a system equivalent to your real system but without spamming your dpkg database and filesystem with all the required -dev files. The idea behind Docker (which podman is a better implementation of and extension to) was to be able to have three different versions of Apache ephemerally on your system to match the versions used by three different projects today.
Ultimately you can do both with either (and I generally use podman to build packages, as an example) but if you do that you aren't playing to the project's strengths.
Toolbox and distrobox are sort of middle grounds that do both of them well enough to be tolerable.
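The distrobox flow is about this simple (the image name is just an example):

    distrobox create --name dev --image ubuntu:22.04
    distrobox enter dev    # shell inside the container, with your $HOME shared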
LXD's use case is totally different from podman, k8s, or docker.
It is like building your own DigitalOcean, for containers (and VMs). It can be used as a container host for docker/podman containers, or as a virtual machine manager for qemu; it is like having your own VPS to put your servers in.
If you use LXD you don't need Podman, the main difference between LXD and Podman is LXD runs System Containers while Podman runs Application containers.
One advantage of having a system container is you can use a package manager to update your applications. With Podman, you have to replace the entire container, with LXD you can just "PATCH" the container. This results in much faster upgrade / lifecycle management.
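I.e. you update in place with the distro's own package manager; something like this, where web1 is a placeholder name:

    lxc exec web1 -- apt-get update
    lxc exec web1 -- apt-get -y upgrade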
I've been a long time user of LXD, it's an amazing project. It basically served as an alternative to kubernetes / docker for me. Enabled me to launch projects and build companies without being bogged down by the complexity of kubernetes.
I've created a project called instellar https://instellar.app which uses LXD under the hood. I'm working towards open sourcing instellar. It basically automates the continuous deployment pipeline and automatically manages your infrastructure. Basically you get a Heroku in a box.
I see a lot of people complaining about snap here. In the initial days yes, snap's auto upgrade would bring down a cluster, however I believe they've resolved this issue. Currently from my experience things are really smooth between upgrades. Basically invisible to me.
Hope this change brings LXD forward, and while the team may "regret the decision" I do hope they continue to work on LXD.
Canonical could really have swept up a lot of former Red Hat developers if they'd simply retracted and undone the whole Snap fiasco.
For a user-respecting OS, people have few options other than Arch (for quick security updates) or Debian (for LTS) anymore...
I used to think Canonical's tradeoffs were acceptable, but corporate control tends too much toward declaring its position disingenuously via marketing ploys instead of merit.
Snaps "solve" centralized automation but are also the cause of lost trust by making it under their control. Between automation and trust, I would rather they just solved the trust part by trusting the user (by default) to own/control all the automation.
Engineers know better what's actually at stake... it's a shame because this white-knuckle corporate greed is precisely what keeps me from advising anyone to trust this OS - they could have gotten so much growth and profit if they cared more about their real users than they do about their user's management. Instead, people swarm away, telling everybody why (hint: "snaps"), and Mr. Shuttleworth just watches while users drain away, looking for a vendor they can trust and finding, today, none.
Woah I notice that the signatures on that include Stéphane Graber.
It's rather hard to imagine LXD continuing as a going concern without him. Does this mean he regrets the decision, but is going to keep working on LXD, or... does this mean LXD is fucked?
Is LXD's purpose the same as Docker/Kubernetes? I use both LXD and Docker, and to me they are tools that use the same technology (containerization) but for different purposes. Docker is for stateless containers, used to containerize services. LXD is for stateful containers, used to containerize operating systems. LXD can also run VMs, while Docker can't.
I think about LXD as a sort of Proxmox alternative (because you can manage LXC containers and VMs with it) instead of a Docker/Kubernetes alternative. In fact, I actually have migrated my systems from Proxmox to Ubuntu Server with Docker, for stateless applications, and LXD, for when I need to containerize an entire OS or VMs for my friends.
Exactly, we migrated to LXD from Proxmox for stateful containers for various servers (DNS, Zulip, Mastodon) as well as user desktops (e.g. Ubuntu Desktop, Fedora, etc).
LXD is nice because it has a nice management layer. It is really easy to migrate an instance (container or VM) and its state from one physical machine to another on the LAN, even if you don't want to bother with setting up an actual cluster.
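E.g. moving an instance to another machine is basically this (names/addresses are examples; both hosts need the LXD API exposed and trusted, and the instance stopped unless you've set up live migration):

    lxc remote add host2 192.168.1.20
    lxc move c1 host2: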
I think that LXD as a Proxmox alternative is a very underrated idea to be honest.
Don't get me wrong, Proxmox is a pretty good piece of software, but if your workload isn't tailored to VMs + you have private links between your clusters + you have shared storage, you end up adding way too much complexity to your stack even tho you aren't using the real useful features that Proxmox provides.
So if you are in the "I just want to run my services" crowd, an Ubuntu Server (or any other distro, really) running Docker on baremetal + LXD for anything that can't run on Docker is way simpler to manage. Especially because running Docker on Proxmox is not fun (too cumbersome to run it within a LXC container + ZFS, running a big fat VM with Docker defeats the point since you can't backup individual containers with Proxmox anymore, and running Docker on the hypervisor is a big no-no)
At the end of the day, nothing in Proxmox has a special magic sauce that makes it tick, and sometimes that complexity may be super cumbersome when you just want to run some dang Docker containers for your swifty new app.
https://mrpowergamerbr.com/us/blog/2022-11-20-proxmox-isnt-f...
Please note that Docker and LXC (and LXD by extension) are essentially the same technology packaged differently, with different yet overlapping use cases; in fact, early Docker Engine was based on LXC before they switched to containerd.
Therefore I don't think there is anything that can run on LXC/LXD but not on Docker (and vice versa); it's more a matter of preference, whether you want a long-lived persistent virtual system (LXC/LXD) or ephemeral single-application containers with optional persistence (Docker).
Still, nothing stops you from using LXC/LXD for ephemeral containers and Docker for long-lived systems, but you'd be using the non-optimal tool.
This summarizes it well. Sometimes you run into apps that don’t ship Docker containers because they expect to not be behind a reverse proxy. For example, while there are third party Docker containers for the Unifi controller, they only support using their apt repos. The controller expects to be able to send low-level discovery packets and not just HTTP. Lxd makes that really easy.
Also, from a development standpoint sometimes a long-lived container environment is easier. I run zigbee2mqtt inside of lxd because if I want to try a PR, it’s `git checkout … && npm ci` and not building a whole container each time.
For the home NAS and server case, I’ve been really happy with Ubuntu server with zfs + lxd + docker. And, a lxd VM for Home Assistant OS. Basically, the right tool for each job and no worries trying to force software into an environment their developers don’t expect.
Since Linux containers are just namespace tricks, a kind of glorified chroot, what does it mean that you run VMs from within containers? Sincerely curious.
I don't think OP means running VM from within containers (although, who knows, I mean I guess you could). LXD provides a management layer that treats containers and VMs (libvirt/KVM/QEMU VMs) mostly the same.
So commands like this work on both containers and VMs:
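    # (representative examples; instance names are placeholders)
    lxc launch ubuntu:22.04 c1        # a container
    lxc launch ubuntu:22.04 v1 --vm   # a KVM virtual machine, same UX
    lxc exec v1 -- hostname
    lxc snapshot c1 before-upgrade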
Back then, KVM, Proxmox, VMware, OpenVZ, LXD, etc. were all approaches to virtualization.
While it's not exactly the same purpose, most companies have shifted, or are shifting, to Docker and Kubernetes in the cloud vs running their own VM infrastructure.
> Docker is for stateless containers
Tell that to people that run databases with docker and kubernetes. It is a thing and even commercial products are built that way.
> Tell that to people that run databases with docker and kubernetes
To be honest my original message wasn't clear enough: what I meant is that I like using Docker when I have a container image ready to run, "stateless" in the sense that the image itself is unmodifiable and any changes on top of it will be lost after a restart. (This doesn't include bind mounts and stuff like that.)
That confusion makes LXD underrated. LXD isn't like Docker or K8s; it's a totally different use case. Let's see it this way: if Docker and K8s are the fighter jets and bombers, LXD is the aircraft carrier.
The point of them owning LXD is to give them total control of the underlying technology they are using for Snaps. Snaps are the play… it has nothing to do with kube/docker. It's actually more of an analogue of ostree and flatpaks.
> Canonical, the creator and main contributor of the LXD project has decided that after over 8 years as part of the Linux Containers community, the project would now be better served directly under Canonical’s own set of projects.
> While the team behind Linux Containers regrets that decision and will be missing LXD as one of its projects, it does respect Canonical’s decision and is now in the process of moving the project over.
I'm curious how that works; if the owner isn't happy about transferring the project, why go along with it? What leverage or right does Canonical have?
When your employer and the owner of the intellectual property you are employed to work on decide to move that intellectual property to a different server, you either acquiesce or you look for another job doing something else. You have to ask yourself if it's the hill you're willing to die on, and only a few are privileged enough to choose a life of full-time leisure.
We use Canonical tech on 3000 nodes in 4 different data centers. I would be afraid of LXD getting locked in with Juju for orchestration. Juju is not something the industry has been using, and it is the source of most of our problems, e.g. performance overhead, scalability issues, and Juju not being mixable with other orchestration tools.
I wish the LXD team would try to keep it open and stop entangling LXD with the rest of Canonical's tools.
We do not deploy LXD at scale using snap. We build it ourselves and have it as part of our Packer pipeline on Rocky Linux 8.
Kind of sucks to see LXD get pulled from Linux Containers. Stéphane's (and the rest of the Linux Containers team's) work has been incredible, and honestly it gets slept on way too much. LXC containers have been incredibly powerful and useful for me, though I never bothered to learn LXD. I have this bizarre intuition that Canonical is watching Red Hat, and that there's a non-zero possibility of them aping the whole pay-a-subscription-to-get-the-source-and-distro thing Red Hat is currently engaging in.
It was like 15 years ago, but I thought Canonical threw some red flags when they had that terrible Ubuntu GUI update to have a ribbon.
Completely unnecessary, 100% visual, and made everything worse.
I bailed from them shortly after that, even with the option to use other GUIs. As the Linux desktop has matured, it hasn't been necessary to go back to Ubuntu.
I imagine most of Canonical's power comes from users who don't try new things, or corporations who are too big to change anything.
I suspect that OP meant the left sidebar that Ubuntu introduced with Unity. It was definitely ugly and unnecessary, especially with the Amazon button on it by default (if I recall correctly).
Yeah that sounds right. I can't quite remember the details because it has been 15 years and I loved trying different distros in general. I wasn't tied to the Ubuntu platform.
I suppose if I'm tied to anything, it's the Debian branch of distros.
Not surprising, as the LXD team was the only part of the Linux Containers community that actually achieved something. Stéphane Graber presented an exciting new feature every month or so on his YT channel.
With that said, LXD currently builds from source and can be installed on Debian without needing snap. Canonical's VM counterpart "multipass", while on GitHub, cannot (easily) be built from source and requires snap to install.
> Ubuntu powers the overwhelming majority of Azure workloads
I expect this to shift. Some years ago `FROM ubuntu:something` was a go-to for Dockerfiles when I wasn't feeling fancy with `FROM alpine` or `FROM busybox` (or `FROM scratch`). Today if I need a generic distro base, I'm consciously going with Debian-based images and Ubuntu is never an option anymore (it used to be "Debian, but better", it became "Debian, but worse").
And I strongly suspect I'm very much not alone. So give it a few years while people convert and upgrade stuff, and this might no longer be true.
I used to use Debian base images, but now I've switched to Ubuntu LTS base images almost exclusively, due to them offering newer versions of most core libraries and a longer support duration.
> them offering newer versions of most core libraries
Is that actually true? Debian has managed pretty consistent 2 year release cadence, same as Ubuntu LTS, so based on that alone they should on average have similar library freshness. Basically Debian releases on odd years, Ubuntu LTS on even years. I suppose Debian might have slightly longer freeze periods.
Surprisingly enough, if your primary motivation for doing this is to save system resources, it might be misguided. ubuntu-base docker images are slimmer than contemporary debian slim, even, though debian seems to have been closing the gap recently - I recall the discrepancy as larger. Below shows image sizes in MB as of today:
Generally agree: these days almost all of my images are either Alpine-based, or Debian-based. For example, the upstream _/ruby and _/python images offer both of those as the only base OS (well, aside from Windows Server Core apparently?) options, which alone makes up a huge chunk of Docker uses. (this is evidently because buildpack-deps offers Debian and Alpine bases, and these images both base from that to try to share layers, neat)
Unless Microsoft has an Activision-sized amount of cash burning a hole in its pocket, that feels unlikely. Seems far easier/likelier to make a Microsoft-branded Debian than to go all-in on Ubuntu. Maybe I could take the acqui-hire angle, but how many Ubuntu devs would want to stay with Microsoft?
> Maybe I could take the acqui-hire angle, but how many Ubuntu devs would want to stay with Microsoft?
Probably far more devs than they'd get trying to make their own MS-branded Debian from scratch. Even if a bunch of them left after a while, they'd have competent, experienced people working for them for that time, instead of trying to staff up from zero and having a lot of time where there aren't enough people to really get much done; and during that time they can hire new people who aren't going to quit just because it's MS.
I'm not hip to all the big names in open source development, but I know Lennart Poettering and Guido van Rossum both ended up at Microsoft, so they might be able to keep a few other engineers around.
Add a couple from Java side as well, after all the drama, Microsoft is now an OpenJDK contributor and has their own distribution, after acquiring jClarity.
And a couple from Rust side, after the Mozilla layoffs.
And a couple of ISO C++ members.
And their own Azure Linux distribution went out of preview at BUILD 2023.
It feels like once M&A has decided to buy a company, that money is going to be spent. If Microsoft is prevented from purchasing Activision, I expect a foolishly profligate spending spree to follow.
If it gets us closer to being able to run native Office and DirectX apps on Linux, I'm all for it. Snap enforcement already scared me away from Ubuntu for the most part anyway.
Were I Microsoft, a custom Linux would be exclusively for server side benefits. Better/more integrated Azure tooling, native VS Code Remote Development containers, improved telemetry, native account/security provisioning, etc. No reason to tempt fate with better cross platform desktop efforts.
This would be self-sabotage for Microsoft; it erodes their moat. If they actually cared about user productivity and making those specific products as widely used as possible, then it would be reasonable for people to expect this, but that's not their real goal. Acquiring Canonical changes nothing about their ability to make this happen - they could do it today - but their corporate strategy forbids it.
Microsoft already has WSL2, so they can already run Office and DirectX on the same box running Linux. I'm not sure that they have an incentive to make people run Office natively on a real Linux desktop. Wouldn't that lose them some Windows customers? I think Office 365 already runs in browsers on Linux too, but I never had a reason to check.
> Canonical employs nearly all the remaining core enterprise open-source developers after IBM acquired Red Hat
I'm sorry, but what? This is not even remotely close to being true. There's even a pretty good chance that Red Hat employs more "enterprise open source developers" now than in 2018.
I may be wrong, but with Red Hat, IBM was buying all the enterprise customers, whereas there is not much to buy in Canonical. They don't seem to know where they want to go as a business.
For anyone wondering WTF?, Lina Khan is the head of the US FTC, the crowd that reviews large mergers and similar (also antitrust, monopoly abuses, etc).
Under her leadership, the FTC has started doing the right thing (finally!) and saying "No, not happening!" to some of the proposed mergers in very recent times.
They can just wait a year. Either the Republicans will give them what they want, or the Biden Administration won't have to keep Khan around because they made it through the presidential election.
If Biden (or Newsom or whoever) squeaks in, they'll be looking to appeal to the right for the midterms, and lightly smearing and firing Khan might actually help them. The WSJ, in particular, has declared her enemy no.1, and the Biden WH has almost seemed embarrassed about the only person in their administration that could get them unambiguously good press, because they don't actually support her.
Why didn't they make Khan their "Dr. Fauci"? Why didn't they push social media companies to amplify accounts that supported her work, or to suppress accounts that spread misinformation or negativity about it?
I never hopped onto the LXD train because when I needed it years and years ago, it wasn't quite ready for my use case, so I built what I needed around LXC itself and it's worked great.
I wonder how LXC will evolve and if another product will arise from this? It seems like the Linux Containers organization is staying separate with LXC.
I hear a lot of people complaining about Snap. While I haven't been an Ubuntu user for 15+ years, I am curious why Canonical is pushing Snaps so much. There must be positive feedback from their target market. Maybe the traditional Linux desktop user doesn't fit into Canonical's persona anymore.
Canonical wants some corner of the ecosystem where they can say that they are driving the progress instead of being just downstream consumers and (re-)packagers.
And to be honest, Snaps had a fair chance of succeeding; when they were first released, Flatpak was still pretty young. Docker has had its own issues, and podman (etc.) is younger yet. If they had played their cards right, Snap could have been Canonical's chance to shine, and I guess they know it and are still trying to salvage the effort.
LXD is fantastic software, makes spinning up VM-like containers extremely easy. I used it for several years before moving to proxmox for a more mature manageable solution.
Agree it would be vastly more popular and interesting if it didn't require snap.
Ubuntu is already the major contributor to LXD. What's people's aversion to snaps in this thread? For me it makes for a stable system that allows me to run the latest version of apps in a confined space; it's actually perfect.
In my case, because it's being forced and replacing applications that worked out of the box with applications with issues. Firefox, for example. After it became a snap, you'll see a message telling you to close Firefox so it could update itself (?) but simply closing it is not enough and, last time I checked, it looked like you had to run some command line to solve it. Also, due to its sandboxing, external hardware like smart card readers didn't work in the snap version, so I couldn't log into e-government websites anymore.
You mean Firefox is a good fit for a snap due to security reasons? In that case they should've made the experience seamless, and provide the same functionality you get when using Firefox from the deb or the tarball, before pushing it like that.
Because otherwise, the result is people pissed off with that change.
Yeah, I don’t understand the hate in HN. Both snap and LXD are good container technologies. Snap is better than flatpak in several ways, for example, system programs could be confined too.
AFAIK, snaps don't have an (official) way of disabling auto updates, which means anything running as a snap might update at any time, rather than at the best time. This might work fine for a personal computer you use for FaceTime or whatever, but in a professional environment or on servers, you can't just let software update whenever it thinks it's time; you need to pick a good opportunity for it to happen.
Ah, supported in 2.58+ which is the latest stable version, seems I'm up to date when it comes to my snap knowledge. Happy that it finally is supported, but not super happy about it taking ~7 years to get in place...
Have you ever used it in a production / server environment?
One of the problems I have with Snap is that it does auto-updates. So you can't control when updates are performed. Furthermore, you can only use Canonical's repository. So if you want to use a custom build of some sort, you can't - unless you pay big bucks to Canonical.
It's funny, because I found a way to disable auto update for snaps in 2 minutes of Googling, and half of the angry voices in here are because of auto updates. What is going on?
There is this announcement about a new snap release from November 2022 [1] in which they made this possible. So this is a relatively recent addition. Before that, Snap basically did everything in its powers to ensure that all packages were continuously updated.
My objection is that it's simply fragile and overcomplicated software. Operations that one would expect to be instant take a second to run. I attempted to reinstall snap recently so I could use lxd (which seems good itself), and was greeted with inscrutable error messages so gave up and moved it all over to libvirt.
Contrast with something like systemd that also has its haters, but it is solidly written software so seems to be accepted by a lot more people.
Couldn't that description be applied to the existing Linux packaging ecosystem? It seems that "fragile and complicated" dependency graphs are exactly what Snap/Flatpak/AppImage are trying to fix.
The Linux packaging ecosystem works extremely well for open-source software, and it produces a much leaner, more integrated system than bundling approaches like are common on macOS (and in a less extreme way, Windows).
There are two things that Snap and Flatpak are trying to 'fix':
1. The Unix security model, which relies on the user as a main boundary for security/policy. These tools are trying to add sandboxing that makes running untrusted software safer and accounts for the fact that being able to access everything in a user's home dir (and so on; this isn't just about the filesystem) is 'bad enough'.
2. The Linux packaging ecosystem (and whole userland, down to glibc's ABI policies) has never been intended to serve proprietary software developers/publishers/vendors who want to be able to throw their binaries over the wall or test only on an extremely limited range of library versions.
(AppImage only aims to 'fix' the latter, and poorly at that, since it doesn't even attempt to encapsulate huge parts of the runtime environment that vary from distro to distro.)
The only thing 'fragile and complicated' about Linux packaging itself is the practice of installing everything into a shared global namespace on the filesystem, which is already better addressed within the packaging infrastructure itself, as has been done on NixOS, GuixSD, GoboLinux, and Distri. (The latter two being more research projects than practical distros.) Those distros all prove that resource sharing and dynamic linking are actually red herrings when it comes to what is problematic about distributing software on Linux— you can keep both of those things and escape dependency hell forever (with or without containerization).
LXD is awesome. Let package managers handle updates, automatic security updates, high performance compared to docker/kubernetes and the opportunity to run the same thing outside a container if that's needed.
I really liked lxc but then one day it turned out that it was replaced by lxd and you had to install Ubuntu to use it and I switched to docker (now on podman).
Do you really have to use Ubuntu for LXC? My (Debian-based) Proxmox has LXC capability, and it's easy to find instructions to other Linuxes, like so: https://fedoraproject.org/wiki/LXC. Is there something I'm missing?
I think they're referring to the Linux Containers project focusing more heavily on LXD and pushing it as a better "front end" to LXC - but LXD was not available directly on Debian until Bookworm was released... a month ago.