I am very diligent about applying updates as soon as I'm able and generally read the changelogs of the updates I'm applying in Ubuntu's Software Updater.
One thing I will not do is willingly allow somebody else a way to deploy and execute code on my computer without my say-so, which is exactly what snap does.
After reading the whole thread at https://forum.snapcraft.io/t/disabling-automatic-refresh-for... and seeing Gustavo Niemeyer's arrogance (we know better than you when you should be applying updates) I will be voting with my feet and will be installing Pop!_OS instead of Ubuntu, and if snapd is present I will remove it.
The stated goal of Niemeyer, to have users use updated software, would have been fulfilled in my case if I had a way to see what updates would be applied beforehand, instead of the updates being force-installed.
The lengthy dialog with Niemeyer in the forum thread seems to have been a waste of time for all the people who participated trying to convince him to allow disabling of force-installed updates, so I suggest you do the same as me and vote with your feet!
It's probably not a surprise to you, but this is a hotly debated topic inside Canonical. And I apologize for that thread, as it really doesn't represent our best attempt at external debate.
Changing a paradigm usually involves pushing the envelope and breaking some existing assumptions; systemd is everybody's favorite example of that in the Linux world. The root of this issue with snaps is the trade-off between built-in security and user control. Some points to consider:
1. Browsers like FF and Chromium [on Windows] simply self-update, and disabling that requires configuration. So there is at least some precedent for taking the position that user applications should just update themselves. Server apps are more complex and are a strong argument counter to the existing behavior, as is the fact that many apps cannot be refreshed without user impact.
2. Ubuntu, since 16.04 LTS, ships with unattended-upgrades enabled, which means that for debian packages the default behavior is already auto-updating (although automated reboots are not enabled by default, as that would be crazy for the general purpose case). That feels like the correct default, too, given the risk of running code exposed to exploitable, public CVEs — and how reluctant users (like my dad and my wife!) are to click on "Install now" in the update-manager dialog.
3. Debian package updates run as root. Snap updates run in userspace, and confined. So in principle the risk exposure for snap updates is much smaller. And snaps do have an auto-rollback mechanism for failed updates [a]. Counter to that argument is the fact that snaps are meant to be under third-party control, and that there is no clear mechanism to separate security patches vs updates which you get with the debian pocket mechanism (i.e. focal-security vs focal-updates).
The lack of any official [b] means of user control over the snap auto-update mechanism feels wrong to many of us, including me. And while we may seem somewhat opaque in these debates, the feedback we get in threads like this one (and the snapcraft.io one) actually feeds into our decision making. So please do keep pushing on this topic and we'll do our part internally.
[b] There are ways to, hmmm, control auto-updating (e.g. refresh.metered, refresh.hold) if you really want to; that thread has a few. That doesn't help the debate, but I'm sharing in case someone has a technical need for it.
Sorry, but as a fellow Canonical employee, speaking from a throwaway obviously, it's evident to me that you're simply not telling the truth here.
You know just as well as I do that if you criticize Snap within the company, you get fired. Especially if Mark overhears you. There is no room for criticism. You either drink the koolaid or you shut up. So, no, sorry, we're going to keep pumping out Snap and those who don't fall in line will just fall out of the company. This is how we've always done these things, despite it failing repeatedly, and Snap is no exception. Actually, Snap is in particular no exception, given how hard it's being pushed by top level management.
[An aside from the main point of this comment: your point 3 is nonsense, and any security guy will tell you the same. For packages that the main sudo-ing user executes, sandboxed or not, there still is effectively no difference between that and root. Snap's sandbox is alpha quality at best, and major platform hurdles remain to make it capable of doing anything remotely useful. Say no to auto-updating snap backdoors. Please. There's a reason why Linux has thrived and benefited with its vetted-by-distros traditional package managers.]
First, has anybody actually been fired for criticizing snaps? Your comment seems to imply through hyperbole that we don't debate inside Canonical, but in my experience that simply isn't true. In fact, I've seen a lot more intense IC to CEO debate in Canonical than anywhere else I've worked. It's not always super constructive debate, but I don't know how much better it is in any relatively small organization with the broad impact Canonical has.
Second, debate and reflection is how these positions get refined. An idea starts out crazy and radical -- "let's make an OS which costs zero and which anybody on the planet can figure out how to use!", or "Launchpad will only support bzr", or yet again "upstart, not systemd" -- but over time it evolves towards a place of greater consensus. So I don't think we're at the destination for snaps; in fact, if these blog posts are only coming out now, it signals we are rather early in the journey.
Finally, I can understand creating a throwaway account to disclose something you're not comfortable with at your workplace, but it's not cool -- nor constructive, civil or all sorts of C words -- to create one and start with "you're simply not telling the truth". C'mon, I'm your coworker.
I want to just say I appreciate your level-headed response to the anonymous poster.
Separately... "In fact, I've seen a lot more intense IC to CEO debate in Canonical than anywhere else I've worked."
As an outsider, this makes me wonder if the CEO is too involved in day-to-day operations. (And overriding the work of those with more expertise than himself.)
Hmm, let me think a bit about how to respond to your first paragraph.
But meanwhile, I'm curious about point 3 as you seem to have facts that I lack -- when a confined snap refresh runs through snapd, is the upgrade payload not executed entirely in userspace within the sandbox? I haven't looked at the code, but my understanding of the model is that the snap can only modify its own writable areas (and do stuff like add a symlink to /snap/bin, though that's also limited). So a snap update couldn't, for instance, modify arbitrary files, nor read restricted ones. Whereas a dpkg install can do anything as root. Can you help clarify?
> The root of this issue with snaps is the trade-off between built-in security and user control.
What this tells me is that I am not the kind of user you are targeting. I neither need nor want any such trade-off. I'm knowledgeable enough to make my own decisions about security; I don't need a third party to do it for me. So if your distro will end up insisting that I cede any control over what software runs on my machine to a third party, it's a nonstarter as far as I am concerned.
That may or may not change your overall strategy, and I emphasize that it's your decision either way and I would not have a problem if your response is simply: "Well, we are targeting a particular kind of user and that's just the way it is." (I would simply go find another distro to run.) But I think you need to be clear about what kind of users you are targeting and what kinds of control you expect those users to give up to third parties, so users know what they are getting into.
I also think that this idea of having third parties control the security of your computer is against the basic Unix/Linux philosophy, because the whole point of running Linux or some other variety of Unix is to opt out of the walled gardens and third-party controls that other operating systems that I won't name force users to accept. So if you're going that route, IMO you're going to have a tough time explaining to users why they shouldn't just go ahead and run one of those other operating systems instead.
Ubuntu does stand for a certain set of values; security built in and doing The Right Thing on behalf of the end user are part of it. That does lead to certain distro semantics that I'm sure not everyone will like. And that's OK, we do appreciate that some people want to control more deeply what happens in the distro.
There is a whole other set of users who would prefer that Ubuntu keep the administrative actions they trust the machine to handle out of their way. They are probably more silent because they are mostly OK with how things work. And I'll say that I'm not comfortable with everything about snaps, so it's more of a spectrum than binary acceptance.
Snaps were invented because we had a problem. We wanted to make it easier for software publishers -- think games, desktop tools, browsers -- to deliver their software to Ubuntu users, but debs were both too hard and too powerful to make that a viable proposition. In the old days, people relied on archive.canonical.com [1] from which debs like skype were published, but it was unsustainable and led to poorly packaged and out of date software.
Regarding third parties, in the end, nobody in the community reads through the source code of every patch applied to binaries in their systems. There is a degree of assumed trust in any update. I accept that trusting Canonical is one thing and trusting third party publishers is another, but given the motivation I describe in the previous paragraph, the decision to use snaps wasn't made in a vacuum.
[1] Which still exists, if you want to go and check out the pool for third-party binaries. All of those installed as root!
> We wanted to make it easier for software publishers -- think games, desktop tools, browsers -- to deliver their software to Ubuntu users, but debs were both too hard and too powerful to make that a viable proposition.
Is there somewhere--a blog post on Canonical's website, perhaps--that explains this in more detail? Off the top of my head I'm having a hard time seeing how snaps improve the situation, unless it's simply the "dependency hell" problem.
> nobody in the community reads through the source code of every patch applied to binaries in their systems
That's true, but irrelevant to the concerns of users like me. We aren't saying we insist on controlling when all updates are applied to our systems because we want to ensure that we have time to read the source code. We're just saying we insist on controlling when all updates are applied to our systems, period.
> There is a degree of assumed trust in any update.
Trusting that, for example, a particular update doesn't include malware is one thing. Trusting a third party to control when an update is applied to my system is something quite different. It seems to me that you are trying to conflate the two, which might be fine for some set of users (and, again, if that's simply the only set of users you are targeting, that's your call), but is not fine for me, nor I suspect for other users with similar concerns to mine.
> All of those installed as root!
Doesn't any update get installed as root? I certainly don't want system-level binaries installed to my user account; that would be a huge breach of security since my user would then have write access to them.
Let me point out: to me, auto-updating isn't even the crucial issue (I use unattended-upgrades as well, so whatever).
Yes, functionality and workflows around installation and updates are still insufficient for many use cases, but that could have been ironed out given enough time.
But what you messed up badly, IMO, was to force-migrate packages to Snap in an LTS release before it was ready.
Had you waited until Ubuntu 20.10, I'd have been more forgiving. But you (collectively) were so eager to get this in before the window closed for another two years.
If you had made Snap a compelling product, even LTS users might have voluntarily migrated to snaps once they saw how good it was. Now you've kind of pulled off the opposite.
Sadly the ship has sailed: Both in general, since you've pushed so heavily for Snap in a LTS release when it simply wasn't ready yet. And for me personally, where the forced installation of Snaps by some debs (notably Chromium) broke my trust significantly enough that I turned my back on Ubuntu after over a decade.
Not only is the Chromium Snap dog slow, it also can't see my NFS shares. So the snap version is objectively worse, at least for now.
But if I install a deb, I expect to get a deb. You don't want to offer it anymore, fine, take it out of the repo. But sneakily migrating me to a snap, and not even notifying me, is just trust-breaking.
>But if I install a deb, I expect to get a deb. You don't want to offer it anymore, fine, take it out of the repo. But sneakily migrating me to a snap, and not even notifying me, is just trust-breaking
Indeed.
We need to modify the age old saying:
"Computers do what you tell them to do, not what you want them to"
By applying the following patch:
"Computers do what you tell them to do, not what you want them to do - except Ubuntu 20.04 LTS"
At the very least there should be a simple way to turn off snaps entirely. As in: when installing, do we want any snaps? If we don't, don't even install the snap ecosystem. After install, have I decided I don't want snaps? If so, uninstall the entire ecosystem.
The challenge is that a) we don't really want to (can't afford to, etc) maintain a forked package for everything which comes in snaps and b) snaps are really, really, much better for publishing and maintaining certain classes of application, in particular complex ones with hundreds of dependencies and a massive surface area, like a web browser. And users want web browsers, so the default is to include snaps.
In general, you can opt out of snaps entirely and `apt-get remove snapd`, but you'll miss out on potentially critical components that are only available via snaps.
If there are ever any 'potentially critical components ... only available via snaps', many of us will bail to distros that don't have that weakness.
I don't use such packaging myself, but I see its advantages. But auto-updating software (FOSS or blob) brings in its own barrel of concerns. In the event of something 'critical', a -message- alerting the user ... who can then research and decide ... is far preferable.
I've not used Ubuntu but was about to search for a distro to put on a rebuild machine... I was considering Ubuntu up until I heard about this.
As a former Gentoo user, this whole Snap concept is a huge deal-breaker. I wouldn't consider Ubuntu for anything other than some thin-client/smart terminal/kiosk usage, and I'd still be wary of what gets pushed and what sort of potential holes might get opened.
> but you'll miss out on potentially critical components that are only available via snaps
> I wouldn't consider Ubuntu for anything other than some thin-client/smart terminal/kiosk usage
Until the snap maintainer of Qt decides to upgrade you out of version compliance with your compositor and... Well, hope your kiosk doesn't do anything critical...
You mention that snaps are really well suited for installing and updating large surface applications like web browsers. Then you say that by removing snapd you will opt out of critical components. If Canonical is using snaps for large third party applications so their maintainers can push updates without user intervention, why would it EVER use snaps for critical components? Like, wtf is that? If something is critical to my system, I need to decide when it is modified. The maintainers can't possibly know when that's okay. Using snaps for anything important is a serious regression.
I'd argue you can't just "apt-get remove snapd", when you push transitional apt packages that depend on snapd.
I understand the argument with browsers, but that does not justify the default on ubuntu-server.
LXD being Snap-only really feels like force-feeding Snap.
You could give this a try combined with something like the ungoogled-chromium PPA:
sudo apt-get autoremove snapd
sudo apt-mark hold snapd
(I use apt-mark and some custom scripting to defer nVidia driver updates until system boot to ensure that they don't yank libGL function out from under me at an inopportune time.)
If it doesn't work sufficiently well when I'm ready to upgrade from 16.04 LTS, then I'm switching to Debian. (I'd intended to be on 18.04 LTS by now, but life doesn't always cooperate and I haven't found time to risk a day or two of squashing upgrade-induced bugs.)
It sounds to me like the omission of an opt-out configuration is the entire technical problem.
However, it sounds to me like the real irritant here is the presumption that omitting that configuration is in any way something that benefits the linux community.
Defaulting it on is fine. The users who don't care to dive in, will then have it on, and the herd immunity to security problems is achieved.
Telling long-time linux users that "what you want is wrong," is, shall we say, worrisome.
I think this is a great suggestion, and I would also prefer that you could just permanently disable (via config, even if somehow discouraged) the default auto-update behavior.
> Changing a paradigm usually involves pushing the envelope and breaking some existing assumptions; systemd is everybody's favorite example of that in the Linux world.
At the risk of reigniting this particular flamewar, systemd changed that paradigm for the worse, using flimsy arguments that don't hold much water to anyone who knows better. Comparing snap to systemd doesn't exactly do the former a whole lot of favors.
> Browsers like FF and Chromium [on Windows] simply self-update
If I was okay with this being the norm then I'd still be using Windows.
> Ubuntu, since 16.04 LTS, ships with unattended-upgrades enabled
That's horrifying.
> and how reluctant users (like my dad and my wife!) are to click on "Install now" in the update-manager dialog.
Your dad and your wife are reluctant for good reason. Having experienced first-hand how prone Windows updates are to break things, and now seeing an admission here that y'all want to make Ubuntu more like Windows, this doesn't bode well in the slightest for a good user experience.
Like, one of the main points I make to people to convince them to at least try a Linux distro is "well unlike Windows it won't shove updates down your throat, and the updates are quick and easy and painless anyway". And then here comes Canonical wrecking the former assumption (and who knows how it's impacting the latter, but I don't exactly have high hopes).
Not that it matters much since I've given up on Ubuntu (in favor of openSUSE) in my recommendations to others (and switched to Slackware for my own use) ever since God-Emperor Mark chose to double down on the Amazon Lens instead of listening to His users. Every once in a while I take a look at Ubuntu again, hoping that maybe Canonical's figured things out, but always come away disappointed. It's always a bummer, given that my first distro was Ubuntu 7.10, and every once in a while I'll fire that one up in a VM and remind myself what Ubuntu used to be, before it seemingly became a soulless husk of a distro.
> Server apps are more complex and are a strong argument counter to the existing behavior, as is the fact that many apps cannot be refreshed without user impact.
Updates to non-server applications can also have a big user impact and there needs to be a way to avoid/delay them when you know you won't have time to deal with any potential fallout.
I have to use Ubuntu 18.04 for some things and even with a minimal installation, I found some things phoned home or pushed changes.
I could remove some packages, like ubuntu-report or unattended-upgrades, but some seemed to be intertwined with other packages in a (purposeful?) labyrinth of nested dependencies.
They made themselves critical and uninstalling would break or cripple other fundamental system components.
Some I disabled in the config files, like apport, motd_news and kerneloops. Some I disabled and masked in systemd, and for others, like snapd and whoopsie/whoopsie-preferences, I had to do:

# empty out every file shipped by the package (dpkg -L also lists directories, hence the -f test)
dpkg -L snapd |
while read -r f
do
  [ -f "$f" ] && cat /dev/null > "$f"
done
I wonder how this kind of nonsense percolates through a company?
Is it developers from commercial software vendors changing jobs and solving the problems the same way they solved them for other corporate customers? Or is it marketing carefully plotting a release by release path to dominance? Or is it people who truly believe that having a viable market for linux software will be good?
I mean, there might be some truths - people lag with their updates, people don't defend their privacy, and people would like to pay for software but have no avenue to do so.
But accepting those truths and unilaterally forcing "solutions" might find linux is a different sort of animal.
I think there's a proliferation of well meaning but ultimately _uncultured_ developers (perhaps some are lazy, but I don't think that's the main problem). It's almost inevitable as software, and internet culture, has itself proliferated. Seeing how businesses make decisions with such abysmal disregard for anything that puts users in control probably only serves to normalize the mentality further. But at the end of the day the problem really is that people don't vote with their feet. I find it darkly humorous when someone complains about online privacy yet in the next sentence verbs _google_ or is still using their gmail from 15 years ago or hasn't switched to a privacy-respecting browser or would never consider a Librem device because it has previous-gen specs. Software that gives users freedom and respects their privacy is not the default. Good software is hard to find. Too often software is measured solely on how good it looks as an electron app. So how about the next time you hear someone raking the GPL through the mud, you help educate them. If you encounter a business leader deciding to cookie everyone because it lets them target the CISO of Oracle with personalized crafted ads, refuse to acquiesce. Not only donate to the EFF but actually participate. When asked what to do about the widening software culture gap at BSides SF, John Perry Barlow reminded everyone that education is the only real solution.
I was aware of `motd_news` and disable it on each new install, but I somehow completely missed `apport` and `kerneloops`. Trying to keep up with Canonical's attempts is why I switched all my new systems to Debian. Far more stable and predictable.
If they make it automatic and unmanageable by default, they can sell those features back to us tomorrow as an enterprise edition or a private Snap store.
It’s the same thing MS did with Windows 10. You buy the product, but have to pay again if you want any semblance of control. Us normal users are now test subjects for the real customers. Look how non-enterprise Office 365 customers are on a monthly cadence for forced updates and the expensive plans get SAC or better.
> If they make it automatic and unmanageable by default, they can sell those features back to us tomorrow as an enterprise edition or a private Snap store.
I'm not sure whether you're being ironic, but they're doing just that.
This attitude extends to other areas of the project, like the fact that you can't move or rename ~/snap. Developers don't care that not only are there users who would like to be able to change it, but that there is a freedesktop.org spec that they aren't following.
I’ve been running Pop OS on the desktop for a year and I’m very happy with it. It might just be me getting better at running Linux, but this is the first Linux partition I haven’t had to hose after about 6-9 months due to some issue with incompatible libraries/updates/settings getting installed.
With regards to updates, it seems that 20.04 continues in the mold of Pop OS 18.04: You get periodic notifications that updates are available, and can go to the Pop Shop to install them. If there are multiple apps receiving updates, you can review and install them piecemeal (although OS/library updates are bundled together as one item in the UI).
The idea of using it for my servers feels weird at first (when I think of Pop OS the UI comes to mind) but after thinking about it, it is a really solid OS and there’s no reason I can think of that it wouldn’t work.
I'm surprised that something like routing software is/was being distributed via snap instead of as a Docker container; snap seems much more targeted towards end-user workstations than to servers.
I always felt that snap was designed for servers with desktop being an afterthought (thus why so many things still don't work in snap, e.g. system GTK themes). Flatpak on the other hand was designed desktop-first and barely works for server stuff. But the Flatpak experience on the desktop is vastly superior to the snap one IMO.
Actually, snaps inherit from Ubuntu Phone's app packaging mechanism, so it's neither server nor desktop — but certainly closer to the end-user side of the spectrum.
(For server apps, the auto-update mechanism has a really painful consequence, which is that for clustered apps you have a built-in race condition that might kill your cluster)
> Snap applications auto-update and that’s fine if Ubuntu wants to keep systems secure. But it can’t even be turned off manually.
OMG. Is this real? This is the exact reason I use Linux instead of Windows 10 or macOS. I am not a grandma who can't stay up to date on tech news. At the least there should be a toggle for power users. But no, you can only defer it. Am I the only one who doesn't like it when your already slow internet slows down even further? It feels like hell when you are working.
I am not upgrading to this. I have been using Arch Linux as my personal OS. Maybe I should look into Debian for my VMs.
And just read this thread[1]. Is this how they treat their users? Even Reddit is better than this.
We work at remote sites on cell connections. Part of the reason we moved to Ubuntu from Windows was the ability to control data usage, which is expensive. Automatic updates quickly become a significant slice of the bill when random decisions like these get pushed on users. Ubuntu was supposed to help prevent us from needing to chase this.
Exactly. It seems like these days everyone assumes you are on a stable broadband connection. In many parts of the world getting a fast and stable internet connection is literally impossible.
Windows lets you set your network connection as metered, and doing so prevents it from applying automatic updates.
I recently switched back to linux myself, but there are certain utilities and conveniences and options in Windows that linux distros don't yet provide, and ubuntu definitely is not meant to be light weight in any sense of the term. Is switching to something else an option at this point?
I think part of the problem is that many newish users equate Linux with Ubuntu, good and bad. There are many other options and Ubuntu should not be the default anymore.
You aren't wrong, but I hate to say it... there are a lot of things that Ubuntu got right, and generally speaking derivatives of Ubuntu almost always work without any extra fiddling or driver hunting for me.
An experienced user could probably find a nicely tuned arch / manjaro setup to work better for them than Ubuntu, but if someone is just first getting their toes wet and learning, Ubuntu isn't a bad recommendation for a first go-round.
I've tried several Linux distros, but always go back to Ubuntu as it tends to work with the least fuss. Still a lot of fuss compared to Windows and Mac for desktop software, but less for development, so it balances out. Ubuntu 20.04 is super snappy (catch the pun?) and really has been working well for me. I strongly recommend it. It feels like a new computer.
My experience is the exact opposite. I recently built a desktop (the motherboard is an ASUS ROG STRIX Z390 GAMING), and despite all my efforts Ubuntu wasn't able to even boot, while Fedora not only booted but was "super snappy".
I have a much, much better idea of what process is using network traffic or other resources on Linux than on Windows (I don't know how often tracing tools pointed fingers at the "system" process for weird magic like CPU or network usage). Unless you mean the 20.04 LTS thing specifically (it sounds like you had this issue for longer), it should be exceedingly easy to turn off anything that runs automatically.
Someone on reddit stated that snapd will update snaps regardless of what value `refresh.metered` holds when updates are postponed long enough. Unfortunately, I haven't been able to verify whether this claim is true or false.
Canonical should be up front about this type of information.
I agree entirely. Unavoidable updates were one of the key factors in my choice to avoid Windows 10 for business-critical computing. I standardized on Ubuntu instead, but this could be a deal-breaker for me.
I hope Canonical fixes this immediately. I'm not eager to spend time re-researching the market for a suitable OS.
The problem with that is you are fighting the platform. That's not a great place to be. Unless your disagreement with the platform's design is small enough you are likely to be better choosing a more appropriate platform.
I'm currently on Kubuntu 19.10. I don't have any snaps. So they can't autoupdate. BTW: After installation of Kubuntu there were automatic security updates. But it was possible to turn these off in the Muon Package Manager (Settings | Software Sources | Updates | Automatic Updates)
Another huge differentiator for Ubuntu over Windows was that I didn't think the OS vendor was trying to seize control of my computer. Canonical jeopardized that trust with this choice. I truly hope they take steps to restore it. I don't want the added work of switching OSs.
> GNOME Calculator was put on the ISO as a snap to help us test the whole “seeding snaps” process, not because it was a fast-moving, CVE-prone applications. Chromium, Firefox and LibreOffice fall more into that category.
Ok so the whole snap thing comes down to updating browsers. Is this for real? I want the web, not the browser, to change daily, or to consume more bandwidth than my www usage :)
The browser is actually the number one component you should update as soon as a security fix comes out. If you don't want new features ("more free stuff!"), use an LTS version that only includes the security updates?
> The browser is actually the number one component you should update as soon as a security fix comes out.
The problem is that there is no way to have a browser that only pushes updates for security fixes. They're always mixed in with changes to the UI that force people to re-learn workflows.
> If you don't want new features ("more free stuff!"), use an LTS version that only includes the security updates?
There is no such thing. I run Ubuntu 16.04 LTS on all my computers at home and I'm posting this on, IIRC, the fifth or sixth new version of Firefox I've had to accept (and that's only counting major version changes), because, as noted above, there was no way to just get the security updates and leave out the others.
You mean the Ubuntu LTS, I meant a browser's LTS version (a sibling comment just mentioned it's called ESR in Firefox instead of LTS... I meant the concept, not the specific name for Firefox, but my bad) so that you don't keep getting browser feature updates but only get the backported fixes.
At least when I encountered it a few years ago, if you went to Help -> About Firefox in the menu, it would check for updates in the background, download the most recent version, and upgrade itself the next time you restarted the browser.
And yes, this was on Ubuntu.
So the capability is there, even if it's not generally turned on.
Which is really frustrating as a user; I don't want to wait 5 seconds for a calculator to launch; it's a simple app, it should launch instantly, like it does on any other Linux distro.
Point taken on the shoddy behaviour, but if you'd like to try it out there's this helpful post on disabling snap[1] shared here[2] when I installed 20.04. Quick and painless!
I've used Debian at home for at least two decades now. It's excellent. Debian is basically Ubuntu minus a lot of user-hostile crap, so if you are familiar with Ubuntu, it should be a fairly smooth transition to Debian.
Watching this snap thing play out, and in the past, watching Mir, Unity, and Amazon Lens, has provided steady confirmation that I've made the right decision to stay away from Ubuntu.
It's dawning on me that it's likely to only become more of a pain with each iteration of upgrades (e.g. install tweaks, synaptic, remove apport... And now remove snaps).
Other threads have suggested various relatively-new distros as alternatives when stuff like this keeps coming up with Ubuntu. The two I have in mind to check out at some point in the future are Pop!_OS and Void Linux.
Pop! is Ubuntu-based, so no idea of the situation with all these other problems, but it intrigues me because they're doing tiling windows first-class.
My understanding of Void is that it doesn't use snaps or systemd, making the system as a whole significantly easier to understand, and simply sounds much much closer to what I want out of a computer (and much like 8.04 was when I first switched to Ubuntu).
Debian used to not work easily on hardware that requires proprietary drivers; did that change recently?
I left Ubuntu almost ten years ago, after 5 years of using it, when they started using Mir instead of Gnome 2. I replaced it with Linux Mint and I haven't looked back. This whole snap thing looks like the latest weird Canonical decision that will make their faithful users leave :/
Debian runs on everything I've come in contact with, or virtualized.
Debian's problem is that its stodgy update policy means 'Stable' is still on 4.19, things like Wireguard require a simple but odd procedure to make apt pull packages from newer releases, and most of the copy/pasteable examples out there assume Ubuntu and its versions of/customizations to critical infrastructure packages.
IMHO, the stodgy updates make it a perfect candidate for server-based software. Personally, my Debian know-how makes it great for my desktop, and it has not failed for my use case: development, sysadmin, browsers, Steam (or any other games releasing Linux versions).
> things like Wireguard require a simple, but odd procedure to request apt pull packages from newer releases
That's not a good idea, as it breaks the assurance that Debian Stable provides. Using the backports repository is the recommended approach if you need a newer version of some clearly-defined piece of software. It will pull the newer dependencies it requires from backports, while still relying on stock-provided packages as far as practicable.
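To make that concrete, the usual recipe for WireGuard on Debian 10 looks roughly like this (assuming buster; adjust the release name for your system):

  echo 'deb http://deb.debian.org/debian buster-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
  sudo apt update
  sudo apt install -t buster-backports wireguard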
It has been decades since I had to provide extra drivers to a Debian install.
It is true that the first-presented installer ISO images on Debian's downloads page lack the worst proprietary drivers, but another couple of clicks takes you to images with them included. So, worst case, you find that the image you have lacks such a needed driver, and you use another image. In practice, I just start with the latter, and have not encountered hardware not covered. For the absolute newest equipment, a "testing" installer may be the right version to use.
The Debian download pages provide installer images for all needs. I have not needed to look at secondary sites, which also exist for specialized needs.
> you'll need to prepare a USB stick with them downloaded onto it
Not really. Debian also offers one with all the firmware included but explicitly labels it "unofficial" (though very much official in practice and hosted on debian servers).
The "pain" is thus literally to click on another download link.
Generally it's not the drivers but the firmware for those devices, i.e. code that runs inside the device.
I think it's an over-zealous position from Debian not to redistribute firmware. Even systems that are very strict about licensing, like OpenBSD, redistribute firmware, because they have some common sense.
> I think it's an over-zealous position from Debian not to redistribute firmware. Even systems that are very strict about licensing, like OpenBSD, redistribute firmware, because they have some common sense.
OTOH I believe it's a position fully aligned with their ethical standpoint. Equating common sense with your personal preference isn't very gracious.
If you want something that's less zealous about respecting (and eschewing) stupid licensing, but is more zealous about randomly upgrading all your software packages unexpectedly, there's always Ubuntu.
I don't see how it aligns with their ethical standpoint.
Firmware is just a blob you load into the device. The alternative is to have it already burned into ROM.
What exactly do you achieve by refusing to load it? Are you more free in one case and not the other?
For all intents and purposes, firmware is like a key or a password you must supply to the device to make it work. The driver, which is indeed open-source, just says: "here, device, is the firmware you need". That's it. You are not achieving anything useful at all by making people go through some ceremony to download it separately. Maybe they just want to send the signal that people should buy devices where the firmware is already burned into ROM or ASIC or whatever?
Firmware is typically copyrighted, large, obfuscated, and executable on your system.
A password is a string that you can examine and offers no intrinsic threat - either exploit, or legal.
As per the link I provided to you, Debian's policy is that free firmware is shipped in the distribution -- non-free firmware requires you to add the 'non-free' and/or 'contrib' parameters to your repository lists.
There is no need to wildly speculate about the motivations of the Debian team -- eg 'send a signal people should buy certain devices' -- when their motivation is explicitly stated.
The DFSG dictates that non-free software will not be part of the standard distribution. But they've made it easy to pull those files in (as above) via a one-word addition to one line of your sources.list file.
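Concretely, that means a sources.list line along these lines, after which the non-free firmware packages become installable (firmware-iwlwifi is just one example):

  # /etc/apt/sources.list
  deb http://deb.debian.org/debian buster main non-free

  sudo apt update
  sudo apt install firmware-iwlwifi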
While this is indeed helpful, why do I want a version of Linux that has to be decrapified like Windows immediately after install, and that a future update may break again? If you use a non-LTS release you will have to "fix" it every 6 months.
I just upgraded to 20.04, and minutes later my machine is on its knees OOMing and unable to process remote connections. Apparently there is now yet another new file system indexer to play whack-a-mole with, like updatedb in the old days, except this one is hooked into systemd and harder to stop. Search for "tracker-extract disable" if you want the full details.
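If you'd rather not go searching, the usual workaround is to mask the tracker user services; the exact unit names depend on the tracker version shipped, so treat this as a sketch:

  # stop and mask the indexer for the current user (unit names may differ by release)
  systemctl --user stop tracker-extract.service tracker-miner-fs.service tracker-store.service
  systemctl --user mask tracker-extract.service tracker-miner-fs.service tracker-store.service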
If you're running Ubuntu's server, odds are that you're SSH-ing into the box instead of running graphical interface. Unless you specifically install snaps, I don't see how this would affect you.
>This is the exact reason I use Linux instead of Windows 10 or macOS
Not sure about win10, but macOS won't autoupdate apps if you turn it off.
If the app is not from an app store, it's up to the devs to offer an option to (auto) update. Most apps allow you to turn autoupdate off (in fact I can't think of a single one without this option).
> I've used both Windows and OSX for my professional work and while Windows is the worst offender when it comes to automatic updates, OSX is pretty horrible as well. At least with Windows you can expect some sort of backwards compatibility, while on OSX, one day you have to upgrade your entire OS, otherwise Notes or some stupid application won't launch.
- capableweb
I used to run OS X some time ago, back when even Windows supported turning off auto updates. These days I am seeing GitHub issues saying that people can't use brew, clang etc. because there is an update pending. And most of the time the updates are just huge (even compared to Windows).
Is this not true? Can you put off OS updates for some time (a few hours is enough for me) and keep using XCode, brew etc?
You can turn off auto updates of macOS and Mac App Store apps completely, yes.
I stayed on Mojave for months after Catalina was released, and I had a MAS app that broke compatibility with the same company's own (abandoned) self-hosted server software, so I just didn't update it. I've since resorted to running that single app (and the abandoned server app) in a High Sierra VM.
The only version issues I know of that sound like what that other person referenced are:
If you update eg iOS to a new major version, sometimes iCloud-linked apps will say they need to upgrade something for new functionality (Notes specifically did this at least once in the last couple of years and iCloud Drive did it a few years ago).
But that is (a) not forced and (b) you’re told exactly what will happen (ie that older macs/iPhones won’t be able to use iCloud until they update too).
Some third party apps will set minimum required OS version (ie to use a new framework or api) but that doesn’t sound like what the other post was talking about?
You can still postpone updates, but yes, they've become pushier than in the past. The issue is that iOS is the priority, and that has to be updated every year to support new models; so macOS is also pushed to update in order to keep integrated systems (e.g. Notes) in sync. This said, one can simply postpone upgrades indefinitely and just ignore the bits that break. I don't really use most of them, so I'm still not on the latest release despite it having been released some 6 months ago.
We have already installed two new Debian 10.3 VMs instead of Ubuntu. It's quite a breath of fresh air compared to Ubuntu >16.04, which I had to fight all the time to do things my way. Still running 18.04 on the dev boxes though.
Yeah, lol, looks like I'm no longer an Ubuntu user. Now I have to figure out how to force-disable this for server software and corporate Linux desktop users. Jesus.
I wonder how this affects offshoot distros like kubuntu?
I'm currently on Debian with KDE, but I think I might need to move to a rolling release distro due to some issues with SMB/CIFS (that have already been fixed in newest builds of KDE) that probably won't be fixed in Debian until the next release.
Maybe I should start looking at distros in general-- but Ubuntu is definitely out of the picture.
snaps are intended for non-power-users that don't want to deal with dependencies. Those users want things to mostly work without worrying about murky downsides. Auto-updating is exactly the right behavior.
If this is of concern to you, why are you using snaps? And why Ubuntu? What's the value-add over Debian?
Ubuntu (and GNOME, who seem to have the same mindset) have no clue in hell how to actually get the billions of proverbial grandmothers to use their software. All they seem to manage is to poorly ape Microsoft and Apple, which will never get them what they want.
Regular people wouldn't use windows if using it required them to understand the concept of an OS and install it for themselves.
People will never buy Linux, but they might buy computers with Linux at some point, just like they have bought phones with it. Android had things that the iPhone didn't have, and at a much cheaper price point; thus Linux-based phones are everywhere. I wish Steam boxes had taken off.
If you agree that it's not the current userbase of Ubuntu, then you're saying people should quit using Ubuntu en masse, only a fraction staying behind.
The idea that you "misjudged who actually uses Linux" was based on the assumption that you think a product should generally cater to its users. If instead you think most of the users should leave, then okay, that's a valid opinion, it's just surprising.
As I understand it, that one is Linux Mint. For windows users, it just looks like a slightly older windows.
That said, I don't think Newbie friendly and power user friendly must be at odds with each other. If you can figure out what the sensible defaults are, and provide simple toggles to customize things, you can cater to newbies, average users, and power users alike.
So I run Kubuntu on my work laptop (X1 Carbon) and just upgraded to 20.04 last weekend. I had a vague idea there were different competing standards for "linux apps that work across distribution" but didn't know people had such a problem with snap. It just seemed like a useful tool for installing proprietary stuff that wouldn't normally be packaged by the distribution. I just checked and the snaps I have that aren't from canonical are: datagrip, slack, discord, and spotify. I haven't noticed any slow app boot times and I think it's great that it's so easy to install third party software. Is snap somehow user-hostile?
There are some downsides (footprint, forced updates, speed, etc.), though depending on what you're installing those may not be deal-breakers. I'm using plain Ubuntu 20.04 and I tend to install stuff via apt and not snap in general (but I am fine with installing non-essential things via snap). The software store has a subtle toggle in the upper right for choosing to install a package as a snap or via apt when both are available.
They are making a very big bet on snap, I would expect most of their desktop apps to be slowly moved there in the next few years. At that point apt would be kept around strictly for essential system packages and (hopefully) for server usage.
Isn't using the internet insecure then? I can typo bankname.example.nl as well.
Not that I don't see your point: a curated list like the repositories is preferable to a system where anyone can claim any name, but I am not sure that this extrapolates to the statement that "it's insecure" as a whole.
Out of interest (I don't use Ubuntu/snaps myself), is that really the case? Can I actually publish <insert popular package> without any checks and, once I got half a million users by repackaging the deb file in snap, add some subtle malware? There is no review process or anything?
> Isn't using the internet insecure then? I can typo bankname.example.nl as well.
Sweet strawman argument dude. "That's bad, but this other unrelated thing is also bad, so the first thing can't be THAT bad in comparison". For real though, this is like a picture perfect example. I may put it on the Wikipedia page.
> Can I actually publish <insert popular package> without any checks and, once I got half a million users by repackaging the deb file in snap, add some subtle malware? There is no review process or anything?
This is already possible with every other distribution method. If you host your own debs, then you can easily get them to do whatever you want. Even relying on the main archive isn't great - apt is typically delivered over HTTP to make mirroring easier, for example.
This is a large misunderstanding of how it works. You can't MITM millions of servers around the world just because they use HTTP for downloading their apt archives.
It verifies the cryptographic signatures. That's why you need to "apt-key add" when you add a custom repository. It doesn't rely on the transport method for integrity.
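To make the trust model concrete: adding a third-party repository means importing its signing key, and from then on apt checks the archive's signed Release/InRelease data against that keyring regardless of whether the transport is HTTP or HTTPS. Roughly (example.com is obviously a placeholder):

  # trust the publisher's signing key (this is the real trust decision)
  wget -qO- https://example.com/archive-key.asc | sudo apt-key add -

  # add the repo; apt update will reject the index if the signature doesn't verify
  echo 'deb http://example.com/apt stable main' | sudo tee /etc/apt/sources.list.d/example.list
  sudo apt update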
> [Typo-squatting] is already possible with every other distribution method.
No, the counterpoint we're talking about is apt. In apt, not anyone can just register any package name. My question was whether that's really a thing in snap.
> If you host your own debs, then you can easily get them to do whatever you want.
I'm not quite sure what you're trying to say here. Why would I host my own deb files (in the first place, but even if I did) only to hack myself? I could just install the modified deb files directly or modify the files on-disk, no hosting needed?
> This is a large misunderstanding of how it works. You can't MITM millions of servers around the world just because they use HTTP for downloading their apt archives.
If they used HTTPS then an attacker would have to control the mirror instead of being able to perform the attack as a MITM.
Also, using HTTP allows someone in the middle to know what software is installed on a server, which, while not critical if the system is kept up to date, leaks some information.
I wasn't aware that plain HTTP was what made that attack practical; that's good to know. And while not disagreeing with your point, I meant that it isn't supposed to be interceptable despite being plain text, due to the integrity check, so it isn't inherently insecure, just (significantly?) less hardened than if it used HTTPS only.
For the 'which software is installed' argument (confidentiality in addition to integrity), I agree, but your first link actually argues this:
> the privacy gains [of using https] are minimal, because the sizes of packages are well-known
> In apt, not anyone can just register any package name.
Yes they can. Yes distributions maintain archives and are able to decide what to include. But there's nothing special about apt per se that prevents namesquatting.
If you create a well-used PPA, for example, you could easily add extra packages, e.g. "firefix", later on. The next time someone runs `apt update`, they'll be exposed to the new package.
> My question was whether that's really a thing in snap.
It looks like that from the outside, but in practice that's not possible. All snap package names are tightly managed by Canonical staff.
> I'm not quite sure what you're trying to say here.
Lots of software vendors host their own archives. It's very difficult to get new packages into distros, and they're often updated infrequently once they're there. For example, the OSGeo project distributes its suite of packages this way.
That's not insecure-by-default, that's just you having a (legitimate) issue with the distribution chain. Insecure-by-default is installing software that has known weaknesses. That the process doesn't work for you doesn't mean the software is weak.
Are they sandboxed individually? If not, it's insecure by default. I mostly don't mind auto-updates on iOS and maybe Android, since each app there is at least supposed to be sandboxed by default. macOS is getting better at this. Windows sucks at it. Where are snaps on that spectrum?
Are you alluding to the possibility of an update containing malicious code?
Is that because the update's authenticity is in question, or because the original developer went rogue? Leaving the system unupdated is insecure. I do not see how auto-updates, by themselves, make it insecure.
It depends on what we're updating. ATM I suppose I generally trust a core OS feature's devs to be trustworthy. On the other hand I don't trust a random app that was trustworthy to stay trustworthy. Maybe they decided to add an analytics library, or maybe it's a game and they decided to add an anti-cheat kit, both of which are essentially rootkits spying on me. Maybe one of the libraries they use decided it would be good to do something similar, so the app devs are trustworthy but the library devs are not.
I find it strange that we trust as much as we do on our computers. Would you let 1000 people walk through your house unsupervised and unannounced? Would you expect everything to be ok after? Yet so many apps are created either directly or more likely indirectly via its dependencies by 1000s of people. Each one of those people has to be trusted. That's insane IMO. And as we get more and more connected the incentives to do evil rise.
From say 1993 to 2010 I mostly didn't care that Windows wasn't sandboxed. Now every app I download is trying to spy on everything I do either for marketing directly or for analytics which is then shared with "business partners" who then share it with others.
I wouldn't have to worry as much that a random library is going to do something bad if it weren't possible for it to do something bad.
We are in complete agreement - the current situation is untenable. However, what can you do about it as an end-user?
By the time you are deciding whether to update Candy Crush or whatever application, you can't actually check all of its dependencies. You can postpone updates, or only update manually, but what difference does it make?
Maybe I'm in the minority but I like Snaps. I wish all software would auto-update silently in the background -- when's the last time you even thought about upgrading Chrome?
The author of this article claims it's too difficult to find Flatpak apps and that the Ubuntu software center prioritizes Snaps over .deb. Are platforms never allowed to migrate to a new standard? Why is it Canonical's fault that authors of individual applications have yet to migrate to Snaps?
If we all agree that on the whole auto-updating software is generally better and more secure than manually updated software, why not single out the applications that haven't migrated instead of blaming the whole standard?
Maybe I'm just naive and not doing the advanced super-user stuff these Snap haters are doing, but from a distance this resembles the systemd vs init controversy to me. One in which, IMHO, Linux super users seemed unusually attached to an older standard for reasons that weren't always clear. Snaps offer real benefits: maybe instead of complaining that 'this sucks' users could offer constructive criticism about how to improve the new standard.
My two cents, you don't have to agree but thought I'd just add a different perspective.
I prefer my own package manager (pacman) over snaps for the following reasons:
1) I like to upgrade on my own schedule. I use my computer for work, and I cannot have things break in the middle of the day or in the morning, just as I get started with work. I usually save upgrades for when I have fewer things to do, so that in case stuff breaks, I can spend time fixing it. This has happened twice during the last 2 years, or something like that. One time Firefox got broken (or rather, the version upgrade broke an extension I use) and the second time some API changes in neovim broke a couple of plugins. If this breakage just happened by itself, it would break the two most common tools I use on a day-to-day basis. (A pacman sketch for holding packages back is at the end of this comment.)
I've used both Windows and OSX for my professional work and while Windows is the worst offender when it comes to automatic updates, OSX is pretty horrible as well. At least with Windows you can expect some sort of backwards compatibility, while on OSX, one day you have to upgrade your entire OS, otherwise Notes or some stupid application won't launch. On the other hand, Windows eventually forces you to upgrade no matter if you like it or not. So both of them suck equally, but in different ways.
2) snap seems to create mountpoints for the applications and never removes them. When trying snap apps I always end up with a bunch of pollution in my environment. Could be that I'm using snap/snapd wrong, but it left a sour taste in my mouth, as I saw snap as something that wants to solve a problem that has existed for a long time. Instead, it looks a bit amateurish because of this.
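(For what it's worth, the pacman way to do point 1 selectively, i.e. upgrade everything except a package you're not ready to touch, is roughly this; firefox is just an example package:)

  # one-off: upgrade everything except firefox
  sudo pacman -Syu --ignore firefox

  # persistent equivalent: add to /etc/pacman.conf
  # IgnorePkg = firefox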
My position is not that no users would prefer to update on their own schedule; of course some do.
Nor is it that new updates never break things, or that that isn't a hassle; of course they do, and it is.
The problem is if you want to distribute an important security update, what do you do? Ask everyone nicely to upgrade? How? Again, what % of users will manually update their software? Not a lot.
For #2, that seems like a resolvable problem that can be brought to Canonical. I'd prefer to see auto-updates fine-tuned rather than have super users immediately dismiss the idea in general.
It's just my opinion but I think the greater good of the Linux community is served by auto updates, even if occasionally it means an update to an individual application has a bug here or there.
Maybe this doesn't apply to you, but I wonder how much of the Linux community just doesn't like change. Sometimes Canonical does crazy stuff (Mir?), but auto-updates seem like a noble principle worth attempting to adopt.
> The problem is if you want to distribute an important security update, what do you do? Ask everyone nicely to upgrade? How? Again, what % of users will manually update their software? Not a lot.
The thing is, I do not give a rats ass what the maintainer of the software wants.
I’m sure they’d like their software to be patched quickly on all the PC’s that use it, and more power to them. But I do not want their decision to patch something to affect my system unless I explicitly tell it to, period.
If I do want my software to update automatically, I’ll enable that. Just don’t force it onto me.
> The problem is if you want to distribute an important security update, what do you do? Ask everyone nicely to upgrade?
What the application author wants isn't all that matters. It's the user's system, so they install what they want when they want to.
Ensuring that the user can easily install important updates while preserving the overall order of the system is the job of the distro maintainers for the distro the user has chosen. This is the whole point of distros. The alternative, where each individual application author has free rein to jam their app into the system without coordination, gets you the kind of mess Windows has.
This. It is of no concern what the developer wants -- it is the USER's system. If there is a reason I want to be back-revved on app or library X, Y, or Z, it is my concern. This is really the most boneheaded change I have seen from Ubuntu.
It is "fine" to make auto updates the default.
It is "not fine" to make it the only option.
Have some respect for your userbase. Who was the bonehead that made this call -- so stupid.
I guess that, exactly yes, they should ask! The problem people have is that there is no way to turn them off, not that they exist. This is nothing to do with change and everything to do with taking control away from the user. For some users, yes, that may be beneficial. But without at the very least giving the option you're also alienating many more. I want to update my system when it's convenient for me to do so (particularly because I'm currently in a location with very poor internet), not when I'm in the middle of important work, at the whim of my operating system.
Edit: Apparently you can set a preferred schedule for the updates (from another comment)? That's still one more thing I need to think about, that I shouldn't have to. Just make it optional and everyone is happy.
The problem is that you want to control your users, but when and how they upgrade is frankly none of your damned business. This busybody attitude has proliferated in software and it's an unfortunate direction our culture is going in. FLOSS software should be where we fight that the most.
Auto-updating isn’t the only issue. I’m a stickler for security updates; I’m that crazy guy who always reboots his computer immediately whenever there’s an update. I like that Windows forces updates.
Even I recognize that this doesn’t make sense in the Linux world, though. Ubuntu is trying to be something it’s not—they’re trying to appeal to a new demographic, and, in doing so, driving away their existing users.
Even with my stance on auto-updating, snaps are a problem for me because I mostly use Linux in the context of servers. Like it or not, that’s where Linux has the largest market share; Android aside, Linux’s consumer market share is negligible.
In that context, snaps have problems:
- I can’t have my servers updating on their own. Security updates rarely break things, but most other updates need testing.
- I use auto-scaling. That means servers need to come up quickly when load increases. If a bunch of new servers come online and all decide to update, that’s worse than no servers coming online.
- I don’t want or need a sandbox. In a cloud environment, the server is the sandbox.
- Environments and server states need to be reproducible for testing and auditing. If I’m doing a post-mortem, I need the software on the relevant image to be in the exact same state as when the problem occurred.
- Performance is critical. I’m already paying AWS for sandboxing in the form of many small EC2 instances; I don’t need the additional overhead of snaps. I’m not working with bare metal.
All of these issues could be resolved, and I wouldn’t object to this experiment running in a non-LTS or desktop-only release. But it is truly an experiment: snaps aren’t ready for prime time. My options are to pay Canonical for extended support for old software, wait it out and hope the issues are sorted before I stop receiving security updates, or switch to something like Alpine or Debian.
Keep in mind, this is coming from someone who loves automatic updates, generally prefers systemd, rather liked Unity, and didn’t see what all the fuss was with Upstart and Mir:
- systemd works pretty damn well and is a big improvement, although it has its hiccups
- Unity was fine. It looked nice out-of-the-box and wasn’t a resource hog.
- Upstart usually worked well enough, though it sometimes had reliability issues.
- Mir never really saw the light of day, so it didn’t matter.
Snaps are where I draw the line. They might be the future, but they’re not ready for the present. And that’s not for lack of trying on my part—I had no trouble embracing Upstart and later systemd.
All valid points and as you say it yourself, all addressable. The thing is though, Canonical can control this experiment by deciding on what debs get migrated to snaps. They can easily conduct this experiment in an LTS so long as they only keep it to desktop packages like GNOME components, browsers, third party software, etc. That way they don't have to wait yet more years before providing this system for use by all users and developers. For example I love the fact that VS Code and Spotify update on their own with no interaction required. I wouldn't love it if something I don't want to update in our server fleet gets updated, but I don't see many snaps in that area. But if we do see that, I'm sure both of us can come up with a one liner stopping snap from updating if and until that use case is supported. Besides, the server space is gravitating towards immutability anyways, so doing something like `chmod -x $(which snapd)` or `chmod -x $(which apt)` on a production machine shouldn't be a big problem in that context. In fact that's one foolproof way to make sure packages are what you want them to be after installing them. Or read-only file systems.
I honestly haven't used snaps in a server context. I get them from the desktop context. I would probably use docker containers & docker-compose before using snaps. I honestly have more control in pushing updates and the like in that situation.
I think there would be a lot less complaining about the existing unattended-upgrades functionality being enabled by default on desktop than about the new self-updating capabilities of snaps.
Honestly, this wasn't a real problem. If automatic upgrades were simply turned on by default in apt/Ubuntu updates/whatever, a user could change things and all would be well.
I have this vague feeling snap is here to stay but I don't like it.
I get your points. They apply in situations where Ubuntu 20.04 machines are on airgapped networks or corporate networks that use mirrors and the like. For personal and other situations I prefer flatpak and snaps...
> I like to upgrade on my own schedule. I use my computer for work, and I cannot have things break in the middle of the day or in the morning, just as I get started with work.
You can arrange snapd to update on your preferred schedule. What you cannot do is defer updates forever.
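For reference, the knobs for this are exposed through `snap set system` (a sketch; exact option names and the maximum hold period depend on the snapd version):

```
# Show when snapd last refreshed and when the next refresh window is
snap refresh --time

# Pick your own refresh window, e.g. Friday night
sudo snap set system refresh.timer=fri,23:00-01:00

# Postpone refreshes for a while (snapd caps how far out this can go,
# roughly 60 days at the time of writing)
sudo snap set system refresh.hold="$(date --iso-8601=seconds -d '+30 days')"
```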
That is pretty much the thrust of the OP itself, and most of the agreeing comments. So while true, it doesn't seem to be much of an observation.
At the same time, it's also perfectly valid to instead, express what you want and don't want, and generally try to fix something that has broken or correct the aim of something that has veered off course.
Approving of the new change is also valid for that matter.
The only invalid thing is telling whichever camp you're not in that they can leave if they don't like it.
Remember, this is a change, not just the way something has always been.
How about, if you like the idea of everything packaged in the form of snaps, and all those snaps updating themselves outside of your control, you can just go find, or create, some new distro that works that way, instead of changing one that already exists and forcing all its existing users to either accept the change or move, to accommodate a change you like?
"Take it or leave it" are not the only options, and it says something unflattering about anyone who tries to suggest that they are.
> I wish all software would auto-update silently in the background -- when's the last time you even thought about upgrading Chrome?
Updating everything has always been one click in Ubuntu (and I'm sure there's an option to have it go automatically).
> The author of this article claims it's too difficult to find Flatpak apps and that the Ubuntu software center prioritizes Snaps over .deb. Are platforms never allowed to migrate to a new standard? Why is it Canonical's fault that authors of individual applications have yet to migrate Snaps?
Churn is bad, and having to migrate your application is burdensome. Maybe the benefits justify it, but what are those benefits supposed to be?
> Maybe I'm just naive and not doing advanced super user stuff these Snap haters are doing but from a distance to me this resembles the systemd vs init controversy. One in which, IMHO, Linux super users seemed unusually attached to an older standard for reasons that were not always clear. Snaps offer real benefits: maybe instead of complaining that 'this sucks' users could offer constructive criticism about how to improve the new standard.
I hate systemd because it breaks a bunch of stuff but I'm still forced to use it. So far that's been my experience of snap as well (specifically, it breaks Japanese input for some applications).
What are those "real benefits"? You've only talked about auto-update, which was already working fine thank you very much. Snap, like systemd, seems to be more a case of https://www.jwz.org/doc/cadt.html than something that actually makes my system better.
This is getting OT, but could you elaborate on your issues with systemd? I’m genuinely curious. I often see people complain but haven’t seen or experienced specifics apart from it being complex and having higher learning curve than initd.
(I don’t know if you’ve tried MX Linux BTW; Debian derivative without systemd by default)
I will never defend pulseaudio though, that’s a horrible mess.
My first experience with systemd was when they implemented a default that would kill processes when a user logs off. This may be acceptable in some single-user desktop environments, but it is absolutely unacceptable in any server environment. If I am using tmux, emacs --daemon, nohup, or any other custom program that catches SIGHUP, then it is inexcusable for systemd to escalate to sending SIGKILL.
I know that there is a separate command that can be used to tell systemd to allow a program to live. I know that there are systemd libraries that an executable can link against in order to opt out of the new behavior. These do not matter, because they show that systemd is willing to break existing programs, and to break specified conventions. Systemd developers cannot be trusted to provide a foundation to build upon.
I know that this default setting can be overridden at the distribution level, or at the system level. This doesn't matter, because it shows that systemd developers do not know how to choose appropriate defaults, and that any changes that are made in systemd need to be continually monitored for stupidity.
Maybe this is just me being soured by a very poor first impression of systemd, but I haven't seen anything since to dissuade me from this impression.
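For anyone bitten by this, the setting can be overridden system-wide; a minimal sketch, assuming a stock systemd-logind install:

```
# Override KillUserProcesses via a logind drop-in, so tmux/nohup/emacs --daemon
# survive logout again; takes effect after restarting systemd-logind or rebooting
sudo mkdir -p /etc/systemd/logind.conf.d
sudo tee /etc/systemd/logind.conf.d/no-kill-user-processes.conf >/dev/null <<'EOF'
[Login]
KillUserProcesses=no
EOF
```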
As of this thread [1] in January 2019, yes. The user poettering is Lennart Poettering, the original Creator and lead developer of systemd, and doesn't show any signs of coming to the light.
At this point, my standard .bashrc includes a check of whether systemd is running, and whether this absurd setting is set, so at least I will get some warning, and can either fix it or complain to the sysadmin.
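A check along those lines might look roughly like this (a sketch; it only inspects the on-disk config, not the compiled-in default):

```
# .bashrc snippet: warn if logind is explicitly configured to kill user
# processes (tmux, nohup, etc.) on logout
if [ -d /run/systemd/system ]; then  # systemd is PID 1
  if grep -qsri '^[[:space:]]*KillUserProcesses[[:space:]]*=[[:space:]]*\(yes\|1\|true\)' \
      /etc/systemd/logind.conf /etc/systemd/logind.conf.d/ 2>/dev/null; then
    echo "WARNING: systemd-logind will SIGKILL your processes on logout" >&2
  fi
fi
```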
It doesn’t seem to do that on my Ubuntu 18.04 / 20.04 servers; I haven’t tried on desktop though as I don’t use Ubuntu there, would indeed be a huge violation of my expectations of how a system should work if so.
Most distributions have taken the sane route and changed this option at the distribution level. The point is more that any decisions systemd makes may be absolutely nuts, and need to be audited in detail by anybody choosing to use systemd. It is extra work that should never have been necessary, because systemd is untrustworthy.
Systemd gives you less control. I used to pipe stderr to email; I can't do that with systemd. The journaling system is very slow compared to plain text files. Systemd overwrites mounts made by ip netns. Network configuration is complicated with systemd. It's not all bad though: setting up services is easy, it has never failed to start a service for me so far, and it does start services in the correct order, which is all it should do IMHO.
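(To be fair to the "setting up services is easy" part, a unit is only a handful of lines; a minimal sketch with made-up names:)

```
# Hypothetical service -- stdout/stderr go to the journal by default
# and can be read with `journalctl -u myapp`
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=My app
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now myapp.service
```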
As for networking, it's not like you have to buy in to systemd-networkd, systemd-resolved et al - or am I missing something?
What I've definitely had issues with is the way networking services are configured in recent releases of Debian, but that's mostly from several of the network subsystems being in different degrees of weird limbo with "the new way" and "the old" interfering with each other. For example how resolv.conf is managed. And the whole back-and-forth with network names. Come to think of it, it's a bit reminiscent of snap/deb in Ubuntu 20.04 ;)
I like the init system (admittedly, I have not been using Linux long enough to remember a time before systemd), but components like networkd and resolved are a pain to work with in general. I've had the networkd DHCP client fail in situations that dhcpcd and dhclient handled with grace. It is not clear to me what benefit said systemd components provide over the more traditional solutions.
I've had systems become unbootable, services not start, that sort of thing. All perfectly understandable bugs that have been fixed, I'm sure, but fundamentally my experience is that it's this intrusive thing that I never asked for that's broken a bunch of stuff that was working fine before.
"Less work" undersells it, in my opinion. What snap (and things like it) do is promise to remove the support headache upstream providers get from users still using a version that was current 5 years ago which happened to make it into an LTS distro release because that happened to be the most recent build a packager had working at the time.
Love it when vendors shift their responsibilities onto users. Silently updating some piece of software on that 5-year-old LTS release might break the workflows of hundreds of people. But it's all good! It saves the vendor some "support headaches".
> Updating everything has always been one click in Ubuntu (and I'm sure there's an option to have it go automatically).
And what percentage of users do that? From your experience in software in general how much of the general population manually updates their software? It's almost always a low number and that creates problems. A different set of problems than new updates that cause bugs, but IMHO worse ones.
> Maybe the benefits justify it, but what are those benefits supposed to be?
Security, compatibility, uniformity. Not having to support 18 different versions.
> I hate systemd because it breaks a bunch of stuff but I'm still forced to use it.
Exactly what stuff does it break? And are those things more important than the benefits of systemd?
Auto-updating because users supposedly can't make the decision themselves is just copying the Windows 10 model.
Do applications break after auto-updates? Of course they do, and that matters to Ubuntu users because they have to fix the breakage manually. Given the choice I would rather choose when to upgrade so I could set aside time for fixes.
Ubuntu users are not windows 10 users. Why treat them in the same way?
It's great that you are diligent about updating your software regularly. But how many people do? If you agree that it's a low percentage, then why is it not better for the ecosystem as a whole to improve security and compatibility?
If you're making up percentages of people who update when they see an icon and a message telling them updates are available, asking other people to make up their own numbers, and then deciding that we should make decisions based on these made-up numbers, I think you're on the wrong track.
How often Ubuntu users who turn auto-update off actually update manually is something that can be researched. It's disappointing how many developers just assume the worst based on imagination and ego, then use that to justify taking control away from users whenever they can get away with it, as a safety measure.
Without commenting on the quality of snaps, I find it extremely user hostile that Canonical replaced the app store that supported all 3 distribution methods with one that drops support for flatpak and prioritizes snaps over .deb, to the extent that it will display a snap result over an exactly matching .deb result in a search.
And the fact that they first did it in an LTS release seriously jeopardises the trust I have in Canonical. This is a deprecation which may have significant undetermined consequences. This is exactly the kind of change they should have introduced in 19.04 and then used the ensuing year to iron out any issues before 20.04 LTS.
What this tells me, however, is that for Canonical their business interests are placed above ensuring the stability of the LTS release, and that’s extremely disappointing to say the least.
You claim that snap offers real benefits but list none, dismiss detractors as irrational, and suggest that instead of complaining about Snap, users who have no desire to use it should invest their own effort in improving something they have no desire to use, rather than being critical.
You are correct that this appears to be EXACTLY like the systemd debate.
Snap HOPES to provide an easier environment for developers to target and thus provide a richer ecosystem for users to enjoy. This, like trickle-down economics, probably isn't real. As with literally every other time Canonical decided to go its own way, it will provide an inferior option that isn't taken up outside its own ecosystem before eventually giving up and joining the crowd. Unless it attracts highly hypothetical new developers to Linux, it offers nothing but downsides to users.
- It's tied to a closed-source server run by and solely controlled by Canonical, with no ability to add software channels like virtually every other major software distribution model for Linux. This means not only that Canonical could exercise undue control over how their users use software on their platform, but that others, including repressive governments, could force it to do so on their behalf.
- Users may only install the most recent version of software and will be updated to the most recent as soon as it comes out.
-- This means that if devs push a buggy version you are stuck with it until it's fixed. If it isn't fixed for months, you just can't use the software. Bugs that affect everyone will probably get fixed immediately. Bugs that affect niche features or a smaller number of users are liable to go unfixed for longer. See bugs that stay open for years at a time.
-- If a developer gets compromised, the ability to push updates to all users as soon as their machines come online means that a substantial portion of the user base can be hit within minutes and almost all of it within hours. If a new version had to be pulled in and then distributed at irregular intervals, it would take at least weeks for a compromised version to reach most users. That would give users/packagers/distros/developers time to realize what is going on before all users are affected.
- For some reason snaps are slow to start.
- They waste users' bandwidth and storage, even when one or the other is dear.
- They result in 17 different apps having 17 different versions of a dependency, 16 of which have known security vulnerabilities, because apps don't use the system libraries that get updates.
The difference between systemd and snap is that systemd has been validated by almost all the major distro makers. The exceptions are largely people who don’t want systemd precisely because of systemd’s popularity, and they want to retain some diversity in the ecosystem.
Canonical could barely get its own users to prefer snaps over the alternatives, which is why they are forcing it onto them.
More people using a system doesn't validate anything; by that definition we ought to deprecate Linux and Mac on the desktop and all just use Windows. Concepts and tools require objective validation, not popularity.
> Why is it Canonical's fault that authors of individual applications have yet to migrate Snaps?
Canonical is the one pushing for snaps, and they own the centralized Snap store. On most other distros snap support is either non-existent or much weaker than support for the native package manager or even flatpak. Let me turn that question around. Why should individual application developers have to package their apps as snaps, which are mostly just used on Ubuntu?
> Maybe I'm just naive and not doing advanced super user stuff these Snap haters are doing
Most of the complaints here have been about the automatic updates (and specifically that you can't disable it, not that they are on by default). But, personally, I am more concerned with the fact that snap apps run slower. Snap uses squashfs for the program and any associated files, and squashfs is not designed to be fast, it's designed to store a file system in a small amount of (usually read-only) space, such as on a Live CD. Besides slower startup times, snaps also take up more space on disk (which may not be a huge issue for most people), and more time to download (a bigger issue) since each snap has to include its own copy of all its dependencies.
The containerization of snaps provides some security benefits, but there are also a couple security concerns with snaps:
- Unlike the official apt repos, the snap store is not curated. It is much easier to put malicious software on the snap store than get in the official apt repos.
- Since all dependencies are bundled with the snap, if there is a vulnerability in a common dependency, such as libc or openssl, then instead of updating a single package on your system to get a fix, you need to update all of the snaps. And you are dependent on the maintainers of all of those snaps to watch for such vulnerabilities and make sure their dependencies are kept up-to-date.
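On the squashfs point above, this is easy to see for yourself; a sketch, assuming squashfs-tools is installed and snaps live in the default location:

```
# Installed snaps are plain squashfs images under /var/lib/snapd/snaps
ls -lh /var/lib/snapd/snaps/*.snap

# Pick one and inspect it: the superblock shows compression algorithm and
# block size, and the file listing shows the bundled copies of dependencies
unsquashfs -s /var/lib/snapd/snaps/core18_1705.snap   # filename will differ
unsquashfs -l /var/lib/snapd/snaps/core18_1705.snap | head
```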
> If we all agree that on the whole auto-updating software is generally better and more secure than manually updated software, why not single out the applications that haven't migrated instead of blaming the whole standard?
Everyone certainly doesn’t agree on that. It really depends on the situation. And if that’s what you want, you had that already with unattended-upgrades. I really prefer to manage what updates, how, when, and under what conditions myself.
What’s next, forced unscheduled reboots a la Windows 10?
Unattended upgrades is a great example. You must manually install it, it's difficult to configure (you must edit some complex text file) and even then with it on and set to update all types of packages it often still doesn't. Maybe I'm doing something wrong but in my long experience with Ubuntu I've found unattended-upgrades very unreliable.
I get that some people prefer a less secure ecosystem and never want to update their software. But it seems like the greater community is better served by auto updates.
The solution to that would be to fix unattended-upgrades and ship it as working by default, with an easy-to-use script to remove it. I bet that would have been orders of magnitude easier than developing snap.
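For comparison, turning the existing deb mechanism on is already just a couple of files; a minimal sketch, assuming stock Ubuntu paths:

```
sudo apt install unattended-upgrades

# Enable the periodic apt jobs
sudo tee /etc/apt/apt.conf.d/20auto-upgrades >/dev/null <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF

# Which origins get applied automatically (security-only by default) is
# configured in /etc/apt/apt.conf.d/50unattended-upgrades
```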
But that would have kept the onus of packaging, testing, and delivering updates on Ubuntu. Instead, with snaps, they can offload all that to upstream developers. That is really the endgame here. Snap is a play for developers, not for users.
Ubuntu are saying to developers "if you build a snap, you don't have to worry ever again about distro differences! And you can update anything you need, at will!" and in exchange Ubuntu get to reduce their support costs. Win-win, right? And it is... except for power-users, who will get autoupdates shoved down their throats and their mount tables polluted up the wazoo. But nobody in Ubuntu ever cared about power-users on the desktop, really, so no news there.
> when's the last time you even thought about upgrading Chrome?
But that's exactly what apt does. I last thought about updating Firefox (to use a more fitting example in the context of FOSS) around the same time as I thought about updating GIMP: not that I can remember.
> Maybe I'm in the minority but I like Snaps. I wish all software would auto-update silently in the background -- when's the last time you even thought about upgrading Chrome?
FWIW Snap isn't a requirement to do that. You can set Ubuntu to update .deb packaged software automatically.
Snap is not auto-updating, auto-updating is not snap.
Snap is more than just an update system. Even if snap's only concern was auto-updating, it would still carry a set of implementation decisions regarding auto-updating and it's irresponsible to rhetorically treat criticism/praise of an implementation as inseparable from the concept in general.
>from a distance to me this resembles the systemd vs init controversy.
Yes, people are again playing fast and loose with the distinction between features, and holistic analysis of systems that implement those features.
Chrome, no. But I am running FRR on Ubuntu server, and it's also distributed as a Snap - in fact, that has spurred the most known discussion about Snap autoupdates[1].
Of course I can - and do - use the deb version, but it's just one of the critical-always-on things that can creep onto a system as a Snap. For example, LXD is moving with Snap as the default way to distribute on Ubuntu[2].
OK, but why is the problem the entire principle of auto-updates in general, rather than FRR not addressing the bugs it may be auto-releasing?
If we agree that auto-updates on the whole improve security for the platform, why is that not a goal worth pursuing? Why are application devs totally blameless in releasing buggy software?
All software has bugs. Also, how are the application developers supposed to test for every single environment?
In an ideal world, you could get away with pushing all updates automatically, but I for one would rather not have my production server get totalled because of a botched update.
I think Windows updates illustrates your point nicely.
Microsoft are a vast organisation and do a huge amount of testing before pushing updates, yet time after time, there are reports of serious issues with updates.
As a software developer myself, I completely understand the desire to have consumers running the latest version - but I also recognise that real world users have different workloads, different levels of acceptable risk, and different consequences when things go wrong.
I think automatic updates probably are the best thing for most desktop users, and for some server users but certainly not for everyone all of the time - I don't even mind if auto update is the default, but make it clear and give people a config option where they can control updates themselves!
The problem with snaps is that they are a stupid way to distribute software that doesn't solve any problem, while introducing many others.
Basically a snap package is a container, that is, an image of an operating system just to run one piece of software. Just this idea should be considered stupid; it's like saying every piece of software should be distributed in a Docker container. It's a great way to waste disk space, and also RAM, since shared libraries are no longer really shared...
You can have software that updates automatically with debs too, so where is the problem? Unattended-upgrades has existed for decades: you install it, and it updates all your packages automatically.
You can even have proprietary software packaged as .deb packages, why not, if that is a concern for Canonical. You can even have software that runs in a container packaged as a .deb package, why not?
Snap has no real purpose to me.
Flatpak is something that makes more sense, since it aims at providing a way to package software for multiple distributions. It doesn't really need a runtime or a daemon like snap does; it builds application bundles that you double-click and run, without installing them.
There's flat out wrong stuff in this post. First off snap packages don't waste a lot of disk space because they avoid duplication of shared dependencies by using file hashes. Snap packages that need the same dependencies won't duplicate them.
One big advantage that snap has over flatpak is the "--classic" option to allow non-sandboxed applications given that some applications are hard to ship completely sandboxed without getting into some serious usability issues.
> There's flat out wrong stuff in this post. First off snap packages don't waste a lot of disk space because they avoid duplication of shared dependencies by using file hashes. Snap packages that need the same dependencies won't duplicate them.
This is not true. Snaps are just squashfs images, there's nothing fancy there. No deduplication or anything. You're thinking of Flatpaks with OSTree, which does do this.
Squashfs has a sorted order to files and LZ compression is stable (change a byte and everything will be the same after the dictionary window if not sooner). So it should be really easy to make very small update deltas for snaps without any kind of complicated GIT-like infrastructure at all.
I've only glanced at the docs but Flatpak looks very complicated, with lots of infrastructure and things that can go wrong; use Git to extract the app into a local repository with hard links to resources? It sounds like typical Linux centralized overcomplication.
Snaps may be slow, there may be a lot of machinations going on to make it happen, but at the end of the day it's just a file. You have the file, your program runs. That's a big advantage.
Snap doesn’t work well in distributions that don’t support AppArmor, so you will run into various issues.
The biggest concern I have with Snap is that it’s hardcoded to a store controlled by Canonical. The store itself is closed source. Snaps can be side-loaded, but doing so is a huge pain. Snap also requires you to sign a CLA with Canonical, allowing them to relicense the ecosystem however they see fit.
> Snap packages that need the same dependencies won't duplicate them.
As long as they depend on exactly the same versions, right? This seems unlikely to happen by chance, without someone there to actively coordinate the versions, so there will still be substantial duplication.
Shared dependencies made sense back in the day, when the world was a simpler place and there was less variety.
We have long since arrived at a point where it's much more sensible to sandbox every application with the majority of its dependencies - fewer things break, fewer compatibility problems, easier updates, greater reliability.
All major operating systems have done this now, Windows, Mac, etc. There is no turning back now.
Is it? On the Linux desktop there haven't been any new desktop apps for well over a decade (apart from maybe Krita and the Blender redesign). Inkscape has been 20 years in the making and just released 1.0. These apps are basically developed against the X Windows API from 1983 or so. So exactly which hypothetical apps need these enormous container formats isn't clear at all. It's not that the existing desktop apps like Libre/OpenOffice (also from the late 1980s/early 1990s) have grand plans for new components, or run better all of a sudden.
Is it browser-/Electron-based apps that need constant updates? Then the developers really should consider their choices; why would I download a webapp along with a whole browser runtime repeatedly rather than simply run the app from their website, especially when the target environment is also sandboxed like a browser? That simply doesn't make sense. At a certain point, after over 25 years of attempting to shoehorn the web into an app delivery platform, things get absurd.
It's true though that shared libs have caused more trouble than they're worth, and are the root of this mess. But the solution is simply to not use them and just ship statically linked binaries, rather than put a layer of abstraction over them. Even on DOS/Windows back in the day users were able to download an .EXE.
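To make the static-linking alternative concrete, it's a one-liner in several toolchains; a sketch assuming a Go program (any language with decent static linking works):

```
# Build a single self-contained binary with no shared-library dependencies
CGO_ENABLED=0 go build -o myapp .

# Sanity checks
file myapp   # should say "statically linked"
ldd myapp    # should say "not a dynamic executable"
```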
It is wrong to frame this as people not liking auto-updates. That is not the issue. The issue is FORCED auto-updates. Or even just obfuscating how to turn off updates.
I've always said that if your updates are such a benefit, then surely users would almost never turn them off without a good reason, so why not give them the option? Most of the time this happens, it is because a company is doing it to maintain their platform, at the expense of their users.
Don't say it is just for safety. Why is it easier to install an outdated kernel than an outdated web browser?
Operating systems and toolchains are FRAGILE. If I have a computer doing anything important, I have to be vigilant about keeping rolling images of it. Updates break things all the time, if you are doing more than just the basics.
I travel a lot. Sometimes I have to reschedule a flight from a 2G cellular connection and can't share that bandwidth with updates. I have computers that run proprietary CNC machines, use specialized musical hardware, or need to have ancient toolchains to build highly specialized software (like J2ME and other embedded toolchains) for internal use for some of my clients. This self serving evergreen mentality is filled with contradictions. Like that I can't use an insecure version of SSL to fix a SCADA device on a secured closed network because they would be insecure, but nobody has a problem with me using telnet or HTTP, without any warnings whatsoever.
It's my computer, I don't have to justify why I want to say no to updates! We should not even be having this conversation about why it is not ok for Google or Microsoft to make permanent changes to my data when I have said no.
Yes, I probably should make more backups, and I have had to become way more careful about that. But the response to a lot of botched updates is just to blame users for trusting them and not making backups. I shouldn't have to worry about data loss or loss of functionality from updates -- it used to be unheard of for updates not to have built-in rollback functionality.
And don't get me started on Google. They are the worst offender. And what bothers me even more is that they lie about why they are doing it. They've treated their users as unwilling beta testers for years. They installed a persistent menu bar widget on my Mac without my consent, which you could either hide, or disable using obscure, hidden flags.[1]
It is only because I got fed up and made Chrome.app immutable and completely removed and locked their Keystone updater, that my Mac wasn't rendered unbootable by Google's recent involuntary update.[2]
This should tell you everything you need to know. They have such hubris that not only are they modifying their users' computers, but they are making it nearly impossible for the average user to say no.
If companies were truly being honest and stood behind their updates, then there would be a clearly labeled and discoverable checkbox to disable updates, like what OSX has. I'm fine with putting that checkbox behind a bunch of scary warnings, and having the OS check back to make sure you really want to keep updates disabled. But what Google and Microsoft are doing with updates is blatantly dishonest and immoral.
OK, what issues does it cause? And are those issues outweighed by the security and standardization issues raised by non-updating software?
I don't know who all remembers having to develop websites compatible with outdated versions of Internet Explorer but I do and still have nightmares about it.
Software companies are not infallible, and I've received updates in the past that have broken things. Enterprises don't get compensated for lost productivity when an auto-updating app results in broken workflows. Also, we recently got hit with a bug in a new auto-updated version. [0] It's great that this was finally fixed but silently rolling out updates like this make it harder to catch these issues before they are problems in the wild.
OK, so there was an image orientation bug in that specific release of Chrome.
But it's estimated there are 1 billion users of Chrome. One billion! How wide of an attack surface does that present? Or how much of a nightmare would it be if they were all on wildly different versions?
I get that this specific bug may have caused problems for you. But if I had to choose between security and compatibility for 1 billion software users vs an occasional image orientation bug, I'll choose the former, personally.
> But if I had to choose between security and compatibility for 1 billion software users vs an occasional image orientation bug,
This is a disingenuous characterization of my argument. The image orientation bug was a simple example. Further, why can't there be some kind of compromise where security updates are automatically applied and feature updates are not (of course I understand that the line can get blurry)?
Lastly, in a philosophical sense, I don't want to cede control of my machine to a third party. Automatically updating apps removes the chance for me to consent to changes and puts me at the mercy of a third party. It removes my ability to make an informed choice.
It's a simple example but that's the kind of tradeoffs platforms are trying to make. Small bugs in favor of security and compatibility. It could be a thousand image orientation bugs. Same arguments still apply.
And yes, you can't cordon off security updates from everything else. They accidentally break other things. But this is a general problem of software development.
If you're using a third-party's software, aren't you already consenting to their "control"? How much control do you have over someone else's software?
No, using third-party software doesn't mean you are consenting to their control. Why would you believe that?
For open source you can read the source and decide whether to install or change it. You can limit permissions by assigning different user groups. You can disallow firewall access. You can choose to use SELinux and have additional restrictions.
My phone living on an older version of chrome is included in that number. It won't be updated.
You are looking at 20 million at most. Most are just running a server. You'd be lucky if 1% are affected. It doesn't really close the loop, and in the worst cases it breaks things for more experienced computer users.
Crashes, freezes, stopping systems from sleeping... I don't want Google's (or anyone else's) code running in the background doing whatever it wants. Want to check for an update and notify me? Great. But don't do anything else.
You might be on the wrong operating system if you don't like "code running in the background doing whatever it wants" as Apple has never shied away from doing just so. Try inspecting what your computer does when you've been away from it for 30 or so minutes, and you'll see a lot of chatter from Apple apps.
> I don't want Google's (or anyone else's) code running in the background doing whatever it wants.
Is it really "doing whatever it wants" or just updating itself? Doing whatever it wants seems like a much broader range of activities.
Crashing or freezing is a problem, but isn't all software susceptible to those bugs? What if Chrome was crashing or freezing on other people's computers and the latest update fixed it for them?
The problem with your preferred methodology of opting in to updates is that 90% of users won't do it, which leads to security and compatibility nightmares.
I get that an update can cause problems. But to say that all auto-updating is terrible and it just breaks things and software is doing "whatever it wants" in the background seems like an exaggeration and misses the larger benefits.
Google has confirmed a Chrome update is at fault for a mysterious reported wave of unbootable Macs. The update was causing issues by corrupting an operating system folder, which prevented those impacted from being able to log in to macOS.
Some of us also remember Internet Explorer 'updates' breaking printing and having to deal with upset PHBs.
Microsoft itself ended up developing a somewhat more nuanced understanding of autoupdates than the current Canonical standard - business clients have various ways to override autoupdates. Surely Canonical can improve upon this standard, rather than learn the hard way the same lessons.
> when's the last time you even thought about upgrading Chrome?
On Linux, Chrome does not autoupdate as it does on Windows or Mac. It installs an apt or yum repository and is then updated together with other packages, when YOU run the update using apt/yum/dnf/whatever frontend you use.
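Concretely, the Chrome .deb just drops a normal apt source; roughly like this (contents from a typical install, check your own system):

```
# Registered by the google-chrome .deb; from then on Chrome rides along
# with every ordinary `apt update && apt upgrade`
cat /etc/apt/sources.list.d/google-chrome.list
# typically: deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main
```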
You're absolutely right. Snaps do solve legitimate problems that we've struggled with for years and are the right thing to do architecturally. Corner cases that might affect certain users, like holding versions or disabling updates entirely "when the user really knows what they're doing", should just be fixed as they arise. But again, snaps or any equivalent system that allows stupid-simple packaging, bundling dependencies, doing fail-proof transactional updates and running in per-app confinement has been proven valuable by the mobile world and not only there (automotive and embedded come to mind). It's high time the typical desktop/server Linux operating system got such a system by default. What we need is to iterate over it to cover its gaps. For example, the one sticking point I see in this thread when people really get pressed to articulate their unhappiness with snap seems to come down to being unable to stop the auto-updating behavior. I could easily see Canonical implementing a feature to control that and putting it behind an explanation of the downsides and a "Disable this only if you know what you're doing!" warning. And so on for the rest of the corner cases.
And from a developer's point of view, I can attest that if I'm to package and publish something for Linux, I'd certainly use snap or Flatpak or both before using deb or rpm, so long as they're not affecting me in some major negative way. I assure the reader this is not the minority opinion, and the software catalog is only going to grow because of it. This is our way out of PPAs and dependency problems (among other things) for publishing up-to-date out-of-distro software. I have no problem with a particular set of users not wanting the latest version if they know what they're doing, but that doesn't negate the benefits this kind of a system brings.
We've seen this play out with other new and needed Linux systems before and that's OK.
I'm still on 18.04, but I agree. Discord and VSCode keep occasionally prompting me to manually download and install .deb files. I'd really prefer if they just updated automatically. Discord gives a full-page "download this update to keep doing things" message that interrupts everything. (I think only on larger updates? I haven't seen it in a while.) A quick google is telling me that both have snap packages - I guess I'll probably be switching to those when I set up 20.04. Hopefully the snap versions work as well as the non-snaps.
I think it's interesting that they are pushing snaps so hard on this LTS release though. I always thought of the LTS versions as getting stuck with older versions and being very strict about updates not breaking things. I guess perhaps with the snap sandboxing this should work more smoothly? Personally, as long as my system works I don't care. If the software gets updated, that's fine with me.
> Discord and VSCode keep occasionally prompting me to manually download and install .deb files. I'd really prefer if they just updated automatically
I installed VSCode from the .deb package on the VSCode website and it automatically added the update repository so that it auto updates via apt.
"Installing the .deb package will automatically install the apt repository and signing key to enable auto-updating using the system's package manager"
See https://code.visualstudio.com/docs/setup/linux
Oh cool! I do see the repository in the list. Come to think of it, I haven't noticed the popup recently. It used to basically be a thing in the corner of VSCode with a link to the download page. I would usually do the update manually when I saw it, because I thought it couldn't update on its own.
We tried to make an internal IoT device using Ubuntu Core and snaps because its capabilities were very promising. We started a PoC and about halfway through we hit a major roadblock. Our enterprise network does certificate substitution, and Ubuntu Core absolutely does not allow you to install your own certificates globally, so our devices would never receive updates. We tried EVERY hack we could think up, short of making our own core snap. We talked to Canonical about it, and they didn't seem interested in fixing our complaints without a massive amount of money, so our PoC died, and we dropped Ubuntu entirely because of it.
It seems irresponsible to inject devices into your network that indiscriminately MITM all traffic and can easily be configured to log passwords and auth cookies, no matter what setting you're in.
You and I agree. Unfortunately most large corporations, and US Government agencies like to be able to see and inspect network traffic. Mostly to prevent the theft of confidential data. The fact that the MITM proxies hoover up passwords and auth cookies still bothers me quite a bit.
It's basically the TSA of corporate networks. They need to inspect traffic because they can't control what devices show up in their environments and what malware might ride along side legitimate traffic.
Plus which, it allows me to check what black box software is doing. Certificate pinning is great and all, but it also makes it way harder to know what data "huawei mobile services", "google play services", or a random mobile game for that matter, is phoning home about.
I'm not a big fan of these corporate MITM boxes that contain the keys to the TLS traffic of the whole company (which additionally often double as employees' private phones and laptops), but I do like to look at my own device's traffic.
Actually most of these corporations have plenty of controls on their networks preventing the random plugging in of devices into networks. Most of the time they are using something that involves 802.1X.
Not gonna disagree at all, but I don't see any widespread adoption from enterprises because of it. It's disappointing because Ubuntu Core is actually quite secure, and we were really impressed with it... we just couldn't use it.
Grandparent comment by beckler says they were trying to make some IoT product. That will be deployed in situations where that happens; if your customer has a MITM set up, you just nod your head and sell them something that works in that setup. You can't say, "MITM should be illegal, please buy my non-auto-updating solution anyway and stop it with your MITM."
Good thing beckler found this while eating their own dogfood, due to their own network being that way. Imagine that everything had worked fine in their environment and then some customers came back with this issue. Then they would be beavering away hacking up their own core snap or whatever.
There are different value tradeoffs in different countries.
The US says it is okay to spy on employees for no reason at all as long as you use company equipment.
The EU says that employees like every other human being have rights and you better have a good reason and do so in a respectful way and be clear about it.
In your own company you're free to do what you want.
I can understand the reason for this. Now that most suppliers treat their devices as 'black boxes' and call home to install updates whenever they want, the security team no longer has visibility nor control over this. So much stuff runs Linux which we don't manage but still has to have full access to our network.
And public repositories have been compromised and spread malware in the past. So yeah I totally understand this, even though as an enterprise Admin it's a total PITA to manage the root CAs.
For some situations, it's called for, but it's a huge pain in the ass. I am in a similar situation, and I need to patch every docker image I use. It's terrible to deal with, as an engineer, but the information security team does catch and eliminate a lot of content-based attacks.
I agree it's a pain. It also makes things like working with other private certificate authorities (DoD cert authority, other private certs) a pain. I spent a decent amount of time trying to get certain work/project related sites whitelisted from our MITM proxy because it didn't recognize the certificate chain...
A colleague of mine was also looking at Ubuntu Core for an IoT project recently, but Ubuntu wanted $15k/y to run a private, branded Snap store - erm.. no.
If they really want snaps to succeed, there should be an open source snap store protocol, and 3rd parties should be allowed to run their own stores, just like you can add 3rd part apt repos, for example.
We decided on Photon OS, BTW. It's tiny, and perfect for use as a Docker host.
Isn't there some old adage about how, if you can't afford something, you aren't the target audience? At the larger companies I've worked at you didn't even need approval for $15k/y.
This was for a company with just under 300k employees - you need approval from multiple people for everything.
From the marketing, blogs etc, Ubuntu Core does seem to be targeted at everyone, not just people that would drop $15k/y like it was nothing.
It's almost like a trap - it sounds perfect for IoT, so you start wasting your time building a PoC, and then much later you find out about the costs. And as another commenter mentioned, they also charge you for doing updates on top!
At the two larger companies I've worked at (~1,000 and ~50,000), I've been explicitly told that I cannot sign any contracts without getting it approved by the legal department. Furthermore, all software purchases must go through the approval process.
Wow, it's gone up! When we talked, it was $10k/year, and then there was an additional cost for every update we pushed out. It depended on the speed, size and number of devices receiving the update.
Just to be sure, installing the CA from that MITM box didn't work? Because that should be the generally recommended solution and I can't see why snap would have a hardcoded CA list separate from the system. If that didn't work, it's indeed a bug, but a rather weird one; definitely worth posting to the bug tracker.
The CAs are hard-embedded in the core snap. They're pulled from some specific package when built, but snaps themselves are immutable. We attempted to overwrite it in several different ways, but the OS is just mounting these folders from the core snap (which is immutable), and then marking those mounted paths as immutable.
I really dislike the way snaps create disk partitions. When I run $ df I want to see what I defined during OS installation, not a dozen nasty snap mounts. An application misusing fundamental system features like this feels like a violation of some UNIX principle.
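For what it's worth, the loop mounts can at least be filtered out of day-to-day output; a small sketch:

```
# Hide the snap squashfs loop mounts from df
df -h -x squashfs

# Or list just the mounts snapd has created
findmnt -t squashfs
```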
Very much agreed; the fact that nobody at Ubuntu noticed that the extra mount points are at best annoying is kinda impressive.
I’m not blaming Ubuntu, nor Snaps, for this issue, but we had a new server come online and our monitoring noticed that two or three partitions were already at 100% usage. Those were snap mount points.
Snap has an issue on certain Linux distributions (including Fedora and Arch) in which many applications render tofu characters (□□□□□) instead of text.
Canonical needs to invest in compatibility if it wants Snap to be adopted in distributions other than Ubuntu. Flatpak doesn't have this issue, and unlike Snap, its server implementation is decentralized, free, and open source.
Isn't Flatpak designed from the point of view of desktop users (GNOME), whereas snaps are designed with a server in mind? That would mean they are very different and you can't just substitute one for the other.
While there are Snaps for server software, Snaps are marketed as a way to publish applications "for desktop, cloud, and Internet of Things" "across Linux on any distribution or version" - and users expect Snap applications to work properly on all Linux distributions, not just Ubuntu. Otherwise, developers could just publish a .deb package for Ubuntu like they did before.
Yes, Flatpak is targeted to desktop applications while Snap has a broader scope, but it's questionable whether Snap's mandatory automatic updates are desirable in a server environment.
My point was about making clear that snap is not a Flatpak clone; the 2 projects were started from 2 different perspectives and have different architectures. I just wanted to inform people who might think they are the exact same thing and that Canonical can JUST drop snap and adopt Flatpak.
Edit: about autoupdates, I agree, the user should always have the choice.
I don’t even know how it’s questionable. Every single update we do to prod gets regression tested, except for “cross yer fingers” 0-days that we are exposed to.
Snap's automatic updates apply to major versions as well as minor versions. Major version upgrades in an automatic update could bring breaking changes or require manual configuration at an inconvenient time, and this is precisely what server administrators want to avoid.
If you're looking for an alternative to Ubuntu but want to stick with a Debian-based distribution, I'll continue to recommend Debian testing.
It's a rolling release, so you don't have to stop what you're doing every 6 months - 3 years to install a huge update that changes the way everything works. It's more stable than the name would suggest, as long as you follow a few reasonable best-practices [1].
Software available on Debian testing is pretty up-to-date... If you've previously tried Debian stable but were put off by ancient packages, you won't see this in testing. Keep in mind that Debian testing (not stable) is upstream for Ubuntu's releases, so Debian testing's packages will be about as new as Ubuntu's packages on release day (but they're updated continuously, so they stay fresh).
I have personal systems running Debian testing or unstable that have been running continuously for 5-10 years without issues. They don't look or feel any different than systems I set up a few months ago.
Does that actually happen in practice though? And as for the severity, we're talking desktops so the most critical piece of software is the browser. Firefox is the only piece of software I download outside of the repositories to make sure the updates come directly from the source, but other than that, openssh very rarely has serious vulnerabilities, to attack Thunderbird you'd already need to mitm my traffic... it's all rather unlikely.
The only guarantee that the Debian project makes is that the stable branch is the security team's main priority. In practice, I've found that unstable and testing usually get patches pretty quickly.
Also security updates? Wouldn't that make the new version insecure at launch until someone pushes a thousand security updates at once (making it kind of 'testing' again because none of these were in testing before and thus haven't been widely tested)?
You raise a good point since I notice I don't know the process as well as I thought I did, but it seems odd that the frozen testing repo would only get all security updates all at once months later.
IIRC, Debian testing doesn't have a separate channel for security updates. Security updates are handled like regular updates: they start on Debian unstable and then flow down to Debian testing.
The Debian wiki mentions that delays can be especially large after a new release comes out. I don't know if I was misremembering it or if it can be problematic both before and after a stable release comes out. Hopefully someone with more Debian experience can clear this up.
I don't know if anything actually shows up here, but you no longer get the error for security.debian.org when you try to upgrade via s/stable/testing/ in sources.list.
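For anyone attempting the same switch, the edit is just the substitution mentioned above; a sketch (back up sources.list first and review the result, since entries like the security line may need adjusting by hand):

```
# Point apt at testing instead of stable, then review before upgrading
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo sed -i 's/stable/testing/g' /etc/apt/sources.list
sudo apt update && sudo apt full-upgrade
```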
I tried going this route and the desktop experience was rough. Debian doesn't seem to have reasonable defaults for fonts so I spent way too much time learning and configuring all the font systems. So literally just the font configuration was enough for me to switch back to Ubuntu and just only install the stuff I like (no snap, gnome).
I ran Debian for maybe 15-20 years before switching to Ubuntu. The first few weeks are rough. But once you get it configured the way you like, it keeps on ticking for a decade with no unannounced changes or surprises. You front-load a lot of the work, but then productivity stays high.
Those 15-20 years, it was the same Debian install. Everything else in the computer changed, but Debian kept on ticking.
There was a while when Debian was behind on supporting things like laptop power management and graphics cards, which is when I switched to Ubuntu. For a while, Ubuntu was a more polished, user-friendly, up-to-date Debian. That was nice.
Now that Ubuntu re-invents the wheel each new release (and quite often, replacing a spoked, pneumatic-tired wheel with a square piece of wood), and hardware support is a little more standardized, I think it's time to switch back. It feels more like a tech gizmo, designed for Ubuntu developers, than an end-user OS.
Yeah, I remember this... fontconfig can be kind of a bear. Once you find a configuration that works for you, save your fonts.conf somewhere so you don't have to do it again.
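If it helps, the save-and-restore part is just a couple of commands; a sketch assuming the default per-user config location:

    # back up a known-good per-user fontconfig once you have one
    cp ~/.config/fontconfig/fonts.conf ~/backups/fonts.conf
    # restore it on a fresh install and rebuild the font cache
    mkdir -p ~/.config/fontconfig
    cp ~/backups/fonts.conf ~/.config/fontconfig/fonts.conf
    fc-cache -f -v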
The Debian wiki page on fonts is helpful. The Arch wiki page is better, and most of it still applies:
> Debian doesn't seem to have reasonable defaults for fonts
When was this? I can confirm that it breaks from time to time but usually that means I haven't rebooted for four months and by restarting everything, it all works again. I never needed to dive into the font system at all while using Debian stable or testing in the past 5+ years with Cinnamon as DE. (Firefox, not from the repositories, being the exception where one might need to toggle gfx.canvas.azure.backends in about:config.)
I find each version of Ubuntu worse than the previous. It's change for the sake of change, and as in this case, something really thoughtful like .deb gets replaced with something which appears slapped together poorly.
I just want a stable, working system, and Ubuntu seems to no longer be the way to have that.
The best way is to back up your home directory and reinstall. Trying to do anything else is going to cause you far more headaches than it could be worth.
You may also want to keep some places like /var/lib/ if you have databases going, etc. I'd just do a complete backup and restore pieces as you find you need them.
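As a rough sketch (the destination path is just an example):

    # full backup of home plus service state you might want later
    sudo rsync -aAX --info=progress2 /home/ /mnt/backup/home/
    sudo rsync -aAX /var/lib/ /mnt/backup/var-lib/
    # after reinstalling, restore only the pieces you find you need
    rsync -aAX /mnt/backup/home/$USER/ /home/$USER/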
Personally, and I'm not new to this, I find these partitioned setups to be quite a pain. On server systems I can see the point of limiting /var/log and /tmp and such, but on simple desktops, I only know of people dreading the decision to go with that historical setup instead of one big root.
It's been completely painless for me for years. I've got it set up pretty much exactly as the grandparent described: 50 GB partition mounted at /, 600 GB partition mounted at /home. The graphical installer has made this easy since ~forever.
The only pain I can come up with is that I'm wasting about 20 GB, because the operating system doesn't actually need 50 GB. So maybe not a great solution for people who have to make do with a laptop with a non-replaceable 128 GB SSD. But 20 GB is not a big deal to me.
The upside is that I've done clean Ubuntu reinstalls two or maybe three times since then, and my data was a non-issue: just reassign the existing partitions and don't format /home. I'd estimate it takes rather less than the 30 minutes the parent refers to, and I always grin despite myself when Firefox restores the browser window as if nothing had happened.
These days when I do it, I refuse to do it with standard partitions. LVM makes this whole setup much more painless in the end, since you can change the splits almost endlessly without copying and moving partitions around when a change is needed. I wouldn't recommend thin provisioning for this kind of setup though, since it carries some burdens when things fill up.
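For illustration, a minimal sketch of that kind of LVM layout (device and volume group names are made up; sizes match the example above):

    # carve the root/home split out of one physical partition
    sudo pvcreate /dev/nvme0n1p2
    sudo vgcreate vg0 /dev/nvme0n1p2
    sudo lvcreate -L 50G -n root vg0
    sudo lvcreate -L 600G -n home vg0
    # later, grow /home without touching partition tables (ext4 can be grown online)
    sudo lvextend -L +100G --resizefs /dev/vg0/home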
I'm avoiding Debian like the plague since finding out that a bunch of packages (client ones too, like nfs-common [1]) pull in rpcbind, which has had an open bug for close to ten years about accidentally binding to all interfaces [2].
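If a dependency drags rpcbind in anyway, the usual stopgap is to keep it from starting at all; a sketch, not a fix for the underlying packaging issue:

    # stop rpcbind and prevent anything from socket-activating it again
    sudo systemctl disable --now rpcbind.service rpcbind.socket
    sudo systemctl mask rpcbind.service rpcbind.socket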
I run the "unstable" branch, personally -- but I've been using Debian for ~22 years now. It's definitely not for everyone (and not for production!) but a lot of the "power users" here on HN would manage just fine with it.
Just make sure to read the tips linked in the parent and the "Don't Break Debian" [0] page.
Most of the package names are the same between Ubuntu/Debian. Honestly, using testing just feels like using a "level headed boring adult" spin of Ubuntu. Instructions for installing or compiling XYZ for Ubuntu will typically work without modification on testing.
Is there a way to get ZFS on Debian the same way as with Ubuntu (for someone that prefers text-based installers and is fine with an unsupported solution)? Debian has always had more ideological purity that got in the way of users being able to actually use their machine efficaciously.
IANAL, but Canonical's justification, that the ZFS kernel module and the Linux kernel are totally separate works and that the distribution itself is not a derivative work under the GPLv2, seems dubious to me.
Seeing as how the Software Freedom Conservancy believes this to be a GPLv2 violation, and Oracle could launch a lawsuit as a result of it also being a CDDLv1 violation, I think it's safe to say that it's not just ideological purity that factored into Debian's decision not to include ZFS.
Saving disk space is certainly useful for rarely used apps; however, your web browser (and any other frequently used apps) shouldn't be compressed, especially if there is ample disk space.
The bottom line is they optimize installation time by amortizing it out over the runtime life of the package, or in other words, optimizing a one time 15 second process to be a 14 second process, in return for making a many-times 1 second process a 30 second process. It makes absolutely no sense.
They do this using a filesystem originally designed for embedded devices, using a driver hacked to disable threading support because the sheer number of filesystems snapd mounts would otherwise consume a huge amount of memory in per-cpu buffers used for decompression. In other words, they broke squashfs for everyone in the process of trying to make it work for snap.
On-demand decompression like this has made very little sense on desktops since the mid 90s, and even if it did, snapd's manifestation of it is particularly terrible.
> On-demand decompression like this has made very little sense on desktops since the mid 90s
Ok, maybe not on desktops? But ZFS on-disk compression is a sysadmin's frickin dream. Just one example: you can access logfiles with plaintext tools like grep while benefiting from the space savings at negligible cost, and LZ4 has basically no overhead at all. https://www.servethehome.com/the-case-for-using-zfs-compress...
I really hope you will try on-disk compression, encryption, deduplication, and that sort of thing sometime; you will see it is so much better than gzip-compressed, gpg-encrypted files.
Filesystem compression is a completely different animal than this. It has to deal with your ability to modify the file at any time. It doesn't compress the whole file together, it does it in blocks. When you launch a binary (and the system mmaps it) it doesn't have to decompress the entire file before you can start using it, only the first compression block.
Compression also typically makes it faster to launch applications from spinning rust, because the bottleneck is the drive and reading 50MB and decompressing it is faster than reading 100MB uncompressed. This would be true of SSDs as well except that most of them already do this internally.
But snap isn't reading e.g. 64kB and then giving you 128kB on demand (and then prefetching the next block) like the filesystem does, it has to read and decompress the entire 100+MB package before you can even open it. That is very silly and adds a perceptible amount of latency.
> But snap isn't reading e.g. 64kB and then giving you 128kB on demand (and then prefetching the next block) like the filesystem does, it has to read and decompress the entire 100+MB package before you can even open it. That is very silly and adds a perceptible amount of latency.
Wait, I could be wrong about this. I was deducing it from other people saying that it has to decompress the package every time you open it plus the empirically long application load times, but it turns out it's using squashfs which at least in principle could be doing the compression the same way as zfs. I haven't checked whether it does or not.
They're doing something wrong though or it wouldn't be this slow. Possibly more than one thing. Unfortunately there are a lot of different ways to screw this up, like not caching the decompressed data so it has to be decompressed again on every read even if it's already in memory, or using too CPU intensive a compression algorithm or too large a block size, or double (or triple or quadruple) caching because it's loop-mounted and then forcing slow disk reads through inefficient cache utilization, or over-aggressive synchronous prefetching, or any combination of these. Or maybe it actually is doing whole-file-level compression.
Can't you just alias cat to zcat and so on? There should be such tools available for just about everything that isn't a container format (zip, 7z, tar).
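For gzip-rotated logs specifically, the z-prefixed tools do cover most of it (file names here are just examples); whether that is as convenient as transparent filesystem compression is the real question:

    # read compressed logs without decompressing them on disk
    zcat /var/log/syslog.2.gz | less
    zgrep -i ssh /var/log/auth.log.*.gz
    zless /var/log/syslog.3.gz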
It's not squashfs's fault; it's the snap people who just have absolutely no clue. squashfs is designed for embedded systems with, say, 8 or 16 MiB of very slow NOR flash, so you maximize compression ratio at the expense of speed (because the flash is probably still slower).
And decompression is typically very fast. What I don’t understand is why they’re not using something like zstd if they care about speed. It’s a supported compression algo for squashfs, but still they insist on using a single threaded compression (xz iirc?) algo.
The kernel code for reading zstd squashfs image has been merged for some time. But zstd is only a recently supported algorithm in upstream squashfs tools for creating the squashfs image.
In my testing with OS installs that depend on squashfs+xz, there is a significant LZMA hit for decompression, resulting in significant latencies. And the higher the compression level used, the bigger the hit when decompressing. While the compression cost for zstd is in the same ballpark as xz at the same compression ratio, (a) the decompression cost is far lower with zstd, translating into faster reads, and (b) it is fairly consistent regardless of compression level.
Another factor for squashfs is the block size. The bigger it is, the better the compression ratio, but the greater the read amplification. I haven't looked at it, but it might be they're overoptimized for space reduction with too little consideration for performance. Since this isn't a one time use image, like for an installation, but intended to be read over and over again, erofs might be an alternative worth benchmarking.
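If anyone wants to benchmark this, the relevant knobs are all on the mksquashfs command line; a sketch (zstd support needs a reasonably recent squashfs-tools, and the paths are placeholders):

    # same tree, different codec and block size, to compare size vs. read latency
    mksquashfs ./app-root app-xz.squashfs -comp xz -b 1M -noappend
    mksquashfs ./app-root app-zstd.squashfs -comp zstd -b 128K -noappend -Xcompression-level 15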
Especially since this has been a solved problem for ages without any real performance penalty. NTFS has had this since 1995, it seems, and ZFS probably since its inception.
Apple has silently compressed files (including executables) since Snow Leopard [1] -- to increase speed. Did Ubuntu pick the wrong compression algorithm?
I see two mentions of "increased speed" in that article:
1. Increased installation speed. This one's obvious; less data takes less time to install. This is mentioned in your parent comment.
2. "But compression isn't just about saving disk space. It's also a classic example of trading CPU cycles for decreased I/O latency and bandwidth. Over the past few decades, CPU performance has gotten better (and computing resources more plentiful—more on that later) at a much faster rate than disk performance has increased. Modern hard disk seek times and rotational delays are still measured in milliseconds. In one millisecond, a 2 GHz CPU goes through two million cycles. And then, of course, there's still the actual data transfer time to consider. [...] Given the almost comical glut of CPU resources on a modern multi-core Mac under normal use, the total time needed to transfer a compressed payload from the disk and use the CPU to decompress its contents into memory will still usually be far less than the time it'd take to transfer the data in uncompressed form."
It's an interesting point, but seek times and rotational delays don't apply to SSDs. This is kind of an uneasy comparison to draw with "I hate that Chromium’s snap takes more than 10 seconds to load on cold boot on a freaking SSD".
> It's an interesting point, but seek times and rotational delays don't apply to SSDs.
There is another reason to do compression on SSDs: you have more storage free, and thus less write amplification, so your SSDs will last longer. In fact, some SSD controllers (SandForce controllers, for example) used to compress data to reduce write amplification.
The trick that they applied is that say, if you had a 500GB SSD and you stored 400GB uncompressed which the controller compressed to 200GB, the drive would still report only having 100GB free, giving it an ample 300GB of free blocks, thus greatly reducing write amplification.
(Of course, the benefit of controller-level compression is gone with full-disk encryption. But I guess FDE was less popular when SandForce SSDs became popular.)
Really? I constantly run into disk space issues. Apple still ships their flagship 13" MacBook Pro with 128 GB of storage, and they charge $200 for another 128 GB. While other manufacturers' laptops charge a lot less for storage these days, most still only come with 256 GB, which is not enough for development these days, IMO.
Even on my desktop, I managed to fill 750GB with various VMs and android development tools (the SDKs, etc). While I am not sure how much compression could have saved me, it could still be worth it (especially since I only use certain VMs or SDK version once a month).
Why would you? lz4 decompresses at ~5GB/s on a modern CPU [1] with good compression ratios; that's still more than most SSDs can push nowadays. Most applications are a small fraction of that size on disk.
The problems arise when you start using xz-compressed squashfs images. LZMA2 is optimized for compression ratio and typically decompresses several times slower than even zlib deflate (which is already ~10x slower than lz4).
Yeah... and when something is truly large it generally doesn't compress well anyway as the large assets are embedded media files; I don't understand the point of compressing stuff like this :/.
I did not notice the load times as I basically never close chromium but I have noticed (not a snap expert):
1) It does not work well with the rest of the OS (e.g. I will pin Chromium to the task bar, but after a bit it will stop using that icon and instead appear as a new one, where clicking the pinned one opens some new instance).
2) It somehow consumes an insane amount of CPU for me. I have noticed my fans going crazy (mind you, I am using a brand new machine with 32 GB of RAM and 12 cores) and all my cores sitting at 60%.
The kicker: I did not even see Chromium running! I had closed it, but the rogue snap processes would not die. I had to SIGTERM everything and uninstall it.
Then I wanted to install chromium without snap but as the post says - YOU CAN'T! At least not easily enough.
So the solution for me: download Google Chrome after years of using Chromium, because you can still easily install it natively (I still use Firefox as my main browser, but sometimes stuff only works in Chromium-based ones).
It's a total disaster as far as I am concerned. Next time I am reinstalling the OS (hopefully not any time soon since I've just upgraded from 19.10 to 20.04) it will not be ubuntu.
I can confirm this. The same app (tested VS Code, IntelliJ, Pycharm, Atom) installed from a deb vs a snap is like 2 seconds vs 8-10 seconds on my beefed up rig.
> Auto-updating of snaps can only be deferred at best, until at some point, like Windows, it auto-updates anyway. Even on metered connections, snaps auto-update anyway after some time.
This attitude is obnoxious. Yes, not everybody is on a metered connection or running a mission-critical system, but some are, and it is hardly unreasonable to accommodate them.
And Microsoft didn't have a choice. Given the option, regular users will never update their computers, perhaps partly due to fear of what they don't understand, fear of change, or maybe past bad update experiences. I witness this with my mom and technology all the time. Every time there's a popup she mini-panics, and she has trained herself to click close every time she sees something she doesn't understand.
Google started the trend of silently updating Chrome and everyone including Microsoft followed after, except upgrading an OS is nothing like updating a browser.
For the most part, I think auto-update is necessary for the tech-illiterate, especially now that everybody's jumping on the Agile bandwagon, including Windows. There needs to be a way to ensure new versions reach their users, given everyone's just churning out barely working software these days.
Honestly I don't have a problem with that. But if they don't give power users the option to opt out, it's just disrespectful.
>Given an option regular users will never update their computers, perhaps partly due to fear of what they don't understand, fear of change, or maybe due to past bad update experiences.
You make it sound like an irrational fear but there's a real cost/benefit ratio to consider when you upgrade a machine. I personally always try to keep my machines up to date but stuff does break from time to time. Like last week an arch upgrade updated some system library to a new major version which forced me to regenerate my python dev env for work (which is not a trivial task because for various reasons our environment is fairly custom).
Windows is even worse in that regard because its upgrades are often significantly more intrusive (and they use automatic updates to push new features and products, which is a great way to make people try not to update just to avoid them).
IMO if OS vendors want their users to update at all costs they shouldn't force it onto them, instead they should develop better transactional update systems that effortlessly and reliably let you revert to a previous version if something goes wrong. Then there would effectively be no reason to fear any update, since you know that at any point if something goes wrong you can just click on "undo update" and you're good to go.
> You make it sound like an irrational fear but there's a real cost/benefit ratio to consider when you upgrade a machine.
95+% of users have no chance of even making this cost-benefit analysis because they cannot assess the scope or risk associated with the upgrade even if they wanted to. They have to rely on vendors making those assessments for them.
> They have to rely on vendors making those assessments for them.
That's why I run Debian stable, which provides security updates and critical bugfixes for 5 years or more post-release with very limited changes otherwise. Maximized benefit, minimal cost.
FYI -- I use Debian almost exclusively, have been doing so for 20+ years, and I love it. (Well, mostly; the last 5-7 years have been rough though.)
What Debian provides is security updates for 1 year after the next stable release. That's it.
'5 years' is via LTS, which is volunteer support provided by corporate and private donations, via a company in France. Their goal is to extend Debian oldstable's lifespan. The entire process is absolutely not the same as the updates you get when running Debian stable.
For example, due to its volunteer nature, companies get to decide where they 'put their money'. What packages are prioritized. Rare / unused packages may never be addressed, depending upon funding.
Use PHP? Apache? The Linux Kernel? Sure, you'll see updates! Use rare package $x, and that may not happen, even though Debian Security would handle it.
I can also tell you from experience that the QA is not quite as good as Debian proper.
Still, is it a good thing? Sure! Is it managed by the Debian security team, 100% embracing all of its methods, and so on? NO.
I felt it was important for you (and others) to know this.
It's not as stable as Debian, not managed by Debian, and should ONLY be used as a stop-gap. You want stable?
I'd argue that almost 100% of users, myself included, can't really make this analysis for any given update. Usually you can't anticipate what will break; that's what makes upgrades scary.
What everybody does however is to use past experiences to evaluate the risks. Who hasn't had a system upgrade break something that took a while to fix? In these conditions, who wants their system to auto-update if the system is critical and it could happen at the worst possible moment?
As I said I try to keep my system up to date, but if I know that I have an important deadline in the near future I'm likely to postpone updates to avoid shooting myself in the foot.
Arch is a rolling release, upgrades are expected to break the system at some point. Windows updates aren't meant to.
Most Windows updates are security or bug fixes only. The exception being the twice yearly feature updates. These have been an issue mostly because Microsoft had been using a random subset of Home users as beta testers for new updates. However, they are being less aggressive with this now.
> These have been an issue mostly because Microsoft had been using a random subset of Home users as beta testers for new updates.
Oh, that might explain my experience a bit. I've never had any issues with automatic Windows updates while managing 50+ machines with a mix of common software. Every one of them was always Pro or Enterprise though.
Ran Arch for a couple years for dev with a nightly cronjob of (iirc) "pacman -Syuw --no-confirm". That broke regularly but I knew it was a bad idea.
> Microsoft didn't have a choice. Given an option regular users will never update their computers
I think this is a misconception.
A few weeks ago I was updating my Windows 7 install that hadn't booted for a year or so. I opened Windows Update. It looked for updates, and found some. I clicked the update button. It proceeded to start downloading, by which I mean it would hang for 1 to 2 minutes and then download very quickly.
When it was done updating, it required a reboot. After the reboot it needed to do something for another 5 minutes. The last 2 of those minutes it showed 100% progress, seemingly stuck again. Then when that was done, it rebooted again.
I started Windows Update again. It looked for updates. There were more updates. I clicked the update button. It proceeded to start downloading, by which I mean it would hang for 1 to 2 minutes and then download very quickly.
When it was done updating, it required a reboot... and so on, and so on, 5 or so times. Every iteration took somewhere between 10 and 45 minutes. Determining how long an iteration would take was impossible, because everything appeared to just get stuck all the time. This has been the Windows update experience through XP, Vista, 7, 8 and now 10.
The result of this madness is of course that nobody updates Windows. To solve that, you can go two different ways: Make updates painless, or force everyone to go through the pain again, and again, and again. Microsoft being Microsoft, they picked the latter.
Meanwhile, on a typical Linux system, I just decide to update every once in a while. I make it look for updates, get a list of everything that will get updated, and accept. It starts downloading, by which I mean it immediately downloads everything very quickly.
When it is done updating, it may require a reboot if the kernel got updated. After the reboot it doesn't have to do anything.
I make it look for updates, and there are none. Updating my system actually brought it up-to-date. This generally takes 2 to 15 minutes, almost entirely dependent on the amount of data that needs to be downloaded.
The result of this is that I update my Linux systems very often. It's painless, so why not?
The update loop on Windows 7 is more an artifact of the traditional Windows servicing model, where updates weren't cumulative and cumulative bundles were only released periodically, with service packs incorporating prior hotfixes.
The Windows 10 servicing model doesn’t have that problem, a new cumulative update gets pushed out every month and can get a machine to the latest update for that branch regardless of how long it’s been out of contact with Windows Update. The semi-annual branches can also be directly upgraded to from any prior branch, as they are essentially a full upgrade of Windows just like moving between releases of Ubuntu/Fedora/etc.
It's great that they fixed that part, but to be honest that was the most excusable of the problems. I understand that maintaining an update path from any old version to the current one can be hard, so updating in steps is fine.
None of the other problems are excusable though.
No download should take 2 minutes to start, especially from a company like Microsoft. Sure, an anomaly is possible, but this update loop took several hours and every iteration was like that.
No update should ever take 5 minutes after the reboot, especially on an SSD.
No progress meter (that isn't dependent on a remote service) should ever be at 100% for 40% of its total runtime.
45 to 90 mins to update my Macs. They might not reboot 5 to 10 times but it's not quick.
19.04 Ubuntu just died on me today (I know some expert could have gotten it working). Apparently 19.04 support ended and someone took down the servers. So trying to update would tell me something about the servers having no release file. And it wouldn't let me upgrade to 20.04 until I had patched 19.04. I never modded anything; whatever broke it broke itself. I searched the net for answers but my search fu sucked. Someone said to download the 19.04 ISO and extract the sources.list file out of it. I did; it had different repos but I got the same errors (with the source URLs pointing to the new places, of course).
So, 8 hours later, I just finished reformatting the drive and installing 20.04.
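For the record, the usual way out of that situation (as far as I know) is to point apt at the old-releases archive rather than reinstalling; a sketch:

    # repoint an end-of-life release at the archive that still hosts its packages
    sudo sed -i 's|archive.ubuntu.com|old-releases.ubuntu.com|g; s|security.ubuntu.com|old-releases.ubuntu.com|g' /etc/apt/sources.list
    sudo apt update && sudo apt dist-upgrade
    sudo do-release-upgrade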
> I searched the net for answers but my search foo sucked.
Do not feel bad about it at all: somehow it seems impossible to search for answers related to Ubuntu. Anything relevant is for something like 14.* or 12.* releases. I do not understand what is going on: are people really not asking the questions for newer versions, are both Google and DDG not ranking them properly, or are the questions/answers somehow getting marked as dupes and deleted?
19.04 isn't an LTS (long term support) release, it has 9 months of support, you're supposed to upgrade to the next release in a timely manner.
If you're not going to do release upgrades frequently you should stick to LTS releases (20.04 is one), which are supported for over 4 years: https://ubuntu.com/about/release-cycle
Canonical should really keep the old repos up for longer, at least the parts necessary for upgrading to the next version. 19.04 is barely over a year old. It's entirely reasonable for users to leave a computer for a few months and expect it to still work when they come back.
Sure they could use LTS but that has the same problem, just on a longer timescale.
I could turn on a PC with the first Windows 10 beta and it would update to the latest Windows 10 no problem. There's no reason Ubuntu can't do the same.
I don't know why major distribution upgrades are so unreliable. I used to be a Fedora user and it was pretty much impossible to upgrade the distribution version without breaking a lot of stuff. The package manager corrupted its own database once. I switched to Arch Linux and never had these problems ever again despite all the memes about Arch being unstable.
Updates are cumulative now to avoid such issues. Prior to this, the update choices were geared towards regular users, so such a case wasn't really accounted for...
> Honestly I don't have a problem with that. But if they don't give power user the option to opt out, this is just disrespectful
The problem is that "power users" can't be trusted to use this power responsibly, or they operate based on assumptions that won't always be true. Either way they end up hurting other people in enough situations that this becomes an ecosystem issue.
Power users often help others set up their systems like they do, but those users can't manage them. Other times, power users will move on. When they were managing the systems, it wasn't a problem. Now that someone else is, the systems stop getting updated and someone has to clear up the mess. Yes, good power users help transition things properly, but don't kid yourself on the number of times this actually happens.
This phenomenon is prevalent in software. Just look at the defaults which "power users" frequently choose (cf. JWT).
The arrogance in this comment astounds me. It's almost never appropriate to assume that people don't have the right to make decisions for themselves, even if you think those decisions harm them. People have different values and weigh their own decisions against their own values, not yours.
> It's almost never appropriate to assume that people don't have the right to make decisions for themselves, even if you think those decisions harm them
But when those decisions affect others? When your computer becomes part of a botnet and is used to attack a target then it's no longer just your problem anymore. The responsibility to prevent this must lie somewhere, either with the user or with the manufacturer. And Microsoft would rather force the updates on users and occasionally be in the news for a botched one than to constantly be in the news for botnets of hundreds of thousands of "abandoned" Windows PCs, or the constant complaint that Windows is insecure because people keep their PCs stuck on the same version they came with.
To use some very current imagery, imagine if someone walked around coughing in your face saying the decision whether to wear a mask or not is theirs.
It would be appropriate if you didn't try to misuse the present crisis to lend undue emotional weight to your argument. This is manipulative.
Further, your argument draws a connection, but it's spurious: a sufficient difference in degree is a difference in kind. The way in which we comport ourselves while sick has the potential, on net, to kill millions of people, whereas the peril implied by users failing to update Windows has never resulted in harm of that magnitude. Further, one can reasonably suppose that, with sufficient care, one can design a system where updates are on by default and we don't create perverse situations that inspire many users to turn them off entirely.
For example, one in which applications are updated without disturbing or interrupting the user's workflow, where updates aren't effective until the user reboots, where applications are isolated from their underlying environments, and where users can roll back and pin a particular version if a new version is buggy or undesirable. Where major changes to entire user interfaces are rare and opt-in for years.
> It would be appropriate [...] This is manipulative.
Please don't make unfounded accusations and personal attacks based on suppositions. It was just an easy to understand analogy of how your decisions can have consequences beyond yourself, and does the job without any hint of "emotional weight". The rest is in the eye of the beholder.
> have the potential on net to kill millions of people
Talk about manipulative and lending undue emotional weight to your argument. Nothing gives weight to your own words like not following them yourself.
> one can reasonably suppose that one can with sufficient care design a system where updates are on by default and we don't create perverse situations which inspire many users to turn them off entirely.
This supposition didn't fare well in reality because it's easy to suppose but hard to implement. Especially when talking about a very complex system that has to be put in the hands of ~87+% of computer users out there, and work with tens of thousands of combinations of hardware, software and different configurations. And perhaps the most critical aspect is that the perverse incentives are left to the judgement of users with little to no understanding of the system or the wider implications of misusing it. They are more likely to follow terrible advice because the explanation for the good advice is too complicated. This is why the easy to understand analogy was useful.
>perhaps the most critical aspect is that the perverse incentives are left to the judgement of users with little to no understanding of the system or the wider implications of misusing it.
This is true of capitalism and democracy: the worst choices for economics and governance, except for all the other choices. I want a system that respects the user's judgement, not yours. No matter how well-meaning you are, you can't adopt the perspective of all users, nor do I desire to see the mediocre results of smart people who know better than their users trying.
It's actually not that hard. If you make updates something that silently happens periodically without interrupting the users or making many changes to the UI that the users rely on they will let you update the parts they don't directly touch all you want to secure their systems.
When you want to make major changes make them opt in and test them to ensure they are actually substantially superior. After a while deprecate the old UI. People will tolerate infrequent major changes far better than constant small breaking changes to their workflow.
Again if you don't make updates suck you don't have to coerce people into doing them. If you are figuring out how to coerce users for their own good you are solving the wrong problem.
I'm sorry that this is uncomfortable reading for you, but not everyone shares your values. What you describe is nice conceptually, but it is not actually true for most of society. You can have opinions that people might bucket as libertarian or similar, but this doesn't mean that the world thinks the same.
So because perhaps some "power users" can't be trusted (in your opinion), that makes it okay to just take away their control of their own computers?
(That kinda reminds me of "trust us, we're the government and we know what's best for you".)
Sorry, but that goes against everything that this whole "free software" thing stands for.
>> (That kinda reminds me of "trust us, we're the government and we know what's best for you".)
Or, why we have regulations to protect workers. Everyone doesn't have to share your worldview. There are plenty of reasons why these changes are happening. The people making the changes are normally aware of their trade-offs.
Change is the key word. If the world was better before from the perspective of the designer, then they are unlikely to have changed anything or introduced an approach you disagree with. The choice is yours: accept it, or find an alternative. That's the way of the market, no?
>> Sorry, but that goes against everything that this whole "free software" thing stands for.
I agree. Ubuntu is a particular vision of free software. They can do it the way they want. There are ways to do free software the way that you want it.
There's a real Dunning-Kruger problem with "power users," where some people know just enough to get themselves into trouble. And I think these D-K power users outnumber the "real" ones.
> especially now that everybody's jumping on the Agile bandwagon, including Windows. There needs to be a way to ensure new versions reach their users given everyone's just churning out barely working software these days.
The fact that everyone is on the agile bandwagon churning out barely working software is the main reason why people do not want new versions of software magically appearing on their computers.
Once you've got a version that actually works for you, allowing an update to come through is like playing russian roulette.
Microsoft did have a choice. They could simply have hidden the disable-auto-update setting (and telemetry, for that matter!) somewhere deeper in the OS where non-techie users would never find it. Or even behind a registry key or whatever.
Windows has a long history of tech-illiterate users downloading optimizer tools from shady websites that mess with registry keys and break stuff in surprising ways. And it's always Microsoft who gets the blame, not the shady utility. From that POV, I can understand Microsoft being reserved about gating this kind of stuff behind registry keys. These days, you can only do this stuff with group policies if I'm not mistaken, a feature that regular home users don't have access to.
Isn't group policy applied by changing registry keys? I've never actually looked it up, but I've always been under the impression that group policy changes just update registry keys that are checked by other software/parts of the os, and that the only advantage to using it (in a non-domain environment) is that it's the official way of doing it.
It does yes, but it also prevents the user changing it (or it changes it right back if they can). But it is possible that the feature requires a domain join to actually work, or perhaps the Enterprise edition of Win10. I didn't look into it.
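If I remember the plumbing right, that group policy ultimately lands in an ordinary registry value, domain join or not; an example, not a recommendation, and worth verifying against current Microsoft docs:

    rem run from an elevated prompt; 1 disables automatic updates via the documented Windows Update policy key
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoUpdate /t REG_DWORD /d 1 /f

Whether Home editions still honour it is another question.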
It's annoying having to set up a whole windows domain just to disable this though :(
I got Windows Long-Term Service Branch (LTSB, now LTSC for Long-Term Service Channel) for precisely this reason.
After having an update happen at a bad time, one closing a file I hadn't yet saved, and another killing a process of computations I had been leaving to run overnight, I decided that I wanted to use my computer the way I saw fit, not the way someone else saw fit. If someone cannot wrap their head around the idea that I might leave a process running while I sleep, or that I might forget to save something in a program that doesn't have an autosave feature, well, they're not considering all of the use cases.
For giggles, after LTSB I did "all the things" to get rid of forced updates on one particularly infrequently-used tablet and, as an experiment, left it unpatched for slightly over a year before seeing what would happen once I finally let it do its thing. For one, the updates took about three and a half hours, which I thought was interesting; just re-installing would likely be faster.
More interesting, though, was that the world did not end because I had not patched this tablet.
I get aiming at the eighty percent of your users who just would not think to update, but being that hostile to the other twenty percent, who are aware of updates and still would like that control, is annoying. It is sad to see that this kind of thinking has made its way to Linuxland, which I had imagined to be the last holdout against "we're doing this for your own good."
True, LTSB/C is really really nice. It doesn't come with any of the crap/bloatware that MS package in the normal editions either. It's just clean and does its job. I love it.
It's just really hard to get officially for consumers. They should sell it to everyone that wants it.
And yeah, Ubuntu Desktop is kinda the "Windows Home Edition" of Linux though. Something like Arch, Debian or Alpine won't do this to you.
I understand that Microsoft see the large number of vulnerable machines on the internet as a problem that they are responsible for, and forced updates as a necessary solution.
Nevertheless, I dual-boot my laptop and use Windows almost exclusively for presentations. Whenever connections to external devices and other OS shenanigans force me to reboot my machine when I'm in the speaker position and the dreaded forced update cycle begins, I swear I'll never touch that ugly OS ever again. I've already developed a routine of checking for updates and booting in advance, but somehow this still happens from time to time.
I guess there needs to be a special "seriously not now, pretty please" button.
Exactly, just without the time limits. Also, make it work for all updates. Oh, and while they're at it, they should also remove the nasty auto-update of Office applications when starting them, which happens without any prior hint or notice.
Options are for everyone, not just power users. Options are a necessary escape hatch for users when you fail to take their needs into account.
Inevitably you will miss some use cases. Given sufficient configurability users faced with substantial challenges will be able to find a way to use your software.
If users can easily search the options, including each option's description, this can still be usable for everyone.
> Given an option regular users will never update their computers
This does not reflect the hundreds of regular users that I have serviced over the years. Perhaps fewer than five would postpone updates beyond a few days for convenience / uncertainty, the rest would always install Windows updates within a day or two of being prompted.
My problem on Windows has never been auto update. It is the forced reboot. One that Microsoft refuses to fix because "muh backward compatibility" and instead forces reboots.
Part of the solution should be that Microsoft Windows refuses to install on any mechanical hard disk or eMMC, and on any SSD slower than a certain benchmark. The other option would be to minimize the obnoxious amount of reading and writing Windows does to disk, but that would make too much sense.
Personally, I don't mind snaps that auto update. Apparently, "modern" snaps are sandboxed but "classic" snaps are not? I think this is the biggest failure of snap. What is a snap? Why couldn't we give legacy snap and real snap two different names? It isn't like there used to be snaps and then we got modern snaps. They both started existing at the same time.
>My problem on Windows has never been auto update. It is the forced reboot.
I agree - when one's workflow must be interrupted by either application or OS restart, then the software must work for the user and wait for a convenient time.
Some of them, very much. Go have a look in academia, where there are hordes of tech-illiterates working on Linux for reasons like some obscure piece of software only working properly on Linux, because the postdoc did it that way, because it's cheaper, because they think it has to be done like that, ... In the last lab I was in, it was just painful to see how much time was completely lost on trying to get things working by people who didn't understand computers or the OS. So if auto-update can alleviate some of those problems, why not. If it breaks things, well, that's something else :)
There's a simple reason: speed. Have you ever tried to use WSL 1 with npm? I doubt it, because it just takes 10 times longer than on "real" Linux.
Also, it's way easier to work with multiple desktops, and windows position better than on Windows, where they always seem to open exactly where you aren't looking.
I have. It wasn't called WSL but "Bash on Windows" back then. You had to disable a bunch of services related to scanning files (Windows Defender?) to make it barely usable, but since v2 they are using a lightweight VM, so the speeds are significantly better compared to what you got on Bash on Windows or WSL 1.
I don't have anything to say about your other nitpick.
A good reason to use windows is games, Adobe and office. Games using anti-cheat will not be available on linux and if you can get them to work, good luck not getting banned.
I agree, gaming and Office are major bonuses for Windows.
And IMHO, the discussion about which OS one should use is probably older than any OS at all. It comes down to personal preferences and experiences. I would not want to use only Windows for any extended period, for a couple of reasons. I use it for some office stuff (we have SharePoint, and OneDrive for Business does not exist for Linux).
I did a quick search and here are two references about the WSL speed:
FWIW I tried Windows as a desktop two weeks ago, and gave up on it precisely _because_ the implementation of window snapping and positioning was so bad. I have a very wide monitor and need a way of splitting the screen into thirds, which Windows seems to lack.
That boat sailed long ago. Windows has a pretty decent linux subsystem now. Even devs that were hardcore linux enthusiasts moved to windows.
The benefit of linux is that it doesn't force stuff on you that you can't disable compared to windows. If you take that away, there are very little reasons for any normal user to move.
Windows Server won’t be getting “sold off.” I’m not sure you understand just how entrenched Windows Server is in the Enterprise and just about every Fortune 500 company out there, including huge telecoms and ISPs. MS still makes a ton of money off Server, and most of the companies/Enterprise using Server aren’t even thinking about Azure.
I think dual booting is still the way. Gets the best out of linux and eliminates headaches. Boot up times are so fast these days, in and out of the OSs almost as fast as logging in and out.
That's not untrue at all. It's just that when I have a user who needs to reset his WiFi adapter, there are nice picture tutorials for Windows, and on Linux you'd better have some idea what a /dev/ is.
Think about explaining to grandma over the phone.
It’s just a non-starter for Linux. Which is fine, it’s what it is for a reason. But it’ll never have mass appeal because of it.
Yea, "First determine the name of your WiFi interface by issuing the command: nmcli d"
I knew that, but only in the back of my head. If the internet wasn’t working, you know, a WiFi problem, I would have a very tough time remembering that. Which is the point, trying to remember the voodoo.
And if I had to ask grandma to type in nmcli d then read me what she sees? Well, if anyone wants to prove me wrong, I’ll set my grandma up on Linux and give her your cell number.
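For reference, the whole "reset the WiFi" dance ends up as something like this, with the interface name being whatever nmcli reports:

    nmcli d                      # list devices and their state
    nmcli radio wifi off
    nmcli radio wifi on
    nmcli device connect wlp3s0  # interface name varies per machine

Try reading that out over the phone.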
I get that there are a lot of Linux fans here, but be realistic: it's not for most people and never will be. You'd have to have never worked outside of tech to lack a sense of how mass computer users interact with their machines.
Disagree. “If I had asked people what they wanted, they would have said faster horses.” - Henry Ford. Customer is mostly right. Not always. Use your best judgement as a business owner. Innovate where necessary and stick to customer feedback where you think it makes sense.
I think there's a pretty big distinction between the customer lacking the imagination to see a better solution until it is presented to them, and being denied a choice while being in full possession of the facts.
Yeah, and this isn’t even an equivalent situation.
A more comparable situation is if Ford had requested his supplier send him a certain type of screw but the supplier sent him a completely different type of screw because it was “better”. It may be better. But it may also be absolutely the wrong thing to build Ford cars out of because they weren’t designed to use it.
And that’s what Canonical often seems to forget. Ubuntu isn’t just a product. It’s also infrastructure, and an individual (although critical) part in many other products and systems.
And AFAICT Ubuntu as a product is far less popular, and pays far less of the bill than Ubuntu as infrastructure, which is why their actions are doubly incomprehensible.
The auto update certainly falls in the former you mean? These (older, less tech savvy) people clearly lack the imagination. That's why auto update has to be mandatory. This is again for consumer facing OS and not for the server infra.
Unfortunately this is an attitude I do not agree with. Especially having worked in the service sector, far too many people use this as an excuse to be rude and unreasonable.
The other customers, impacted when one customer is out of date and their machine turns into part of a massive DDoS Botnet, are also customers. Say that with me. It's a distributed problem that you can't fix or meaningfully reason about just by imagining each user on their own in a vacuum.
I've been a very faithful Ubuntu user for 12 years now but if there's no easy way to disable this abomination then my current one (18.04) will be the last.
Whose idea was it to replicate one of the worst features of Windows?
I think GConf and macOS's plists have proven that Microsoft is at least somewhat right with the registry, or at least the idea behind it is. I don't know why people didn't like it, and honestly I don't have a strong opinion on it, but I have a feeling that in some ways Microsoft didn't have much of a choice, given that they need to maintain backwards compatibility.
it's handy for config, sure. but it has several downsides. very opaque compared to a config file.. keys can't have comments. the average user has no idea what a "dword" is. some registry paths are obscenely long. you can accidentally break one program or windows trying to edit another's settings. and i'm not sure currently, but a big problem in the past was uninstallers didn't remove registry keys reliably.
personally, i think XDG_CONFIG_HOME + a robust config file framework with a standardized format (yaml/toml, whatever) would work as well. being able to specify a schema could be extremely useful, too, to prevent borked configs. as you've said, we've seen a lot of tools/OS's go this direction. and it doesn't have to be perfect, the 80% case would be a huge improvement.
i'm also sure the registry made a lot of sense at the time. it was probably way quicker than reading and parsing files. we didn't care so much about sandboxing/isolation and backing up. people probably had less application installed.
gconf is like the windows registry how? is it just because its a centralized config store? how evil is an annotated, easy to use, non archaic file that some apps put their settings into?
Ubuntu security updates do not cover the universe repository, which contains the largest number of packages. The universe repo is 'community-supported', which means that a lot of CVEs are not fixed in practice.
Like many others, I haven't liked vanilla Ubuntu since they swapped GNOME 2 for (what I think is) a clone of the macOS interface, but my desktop still uses the Ubuntu 18.04 base system under the hood, so this might come in handy for a number of people like me.
The above might come across as flip, but I think it's actually spot on. Why would a mission critical or metered-connection-only machine be running the latest Ubuntu desktop anyway? Match the tool to the job.
Ubuntu has a real problem here; Desktop Linux users are much more mobile than other OS users. If you’ve installed Ubuntu, you’re fully capable of installing at least Debian, and probably any other Linux distro.
I guess it's finally time to switch to Arch. This is an absolute deal breaker for me as well, as I use an ubuntu system for live visual performances, and I need to ensure that no unexpected auto-updates take place.
As a long-time Arch user, I can say that the switch is not for the faint of heart. Arch won't do anything automatically, including installing basic things like a window manager or a browser - Arch is more of a "build your own OS" experience.
On the flip side, this is incredibly empowering. If something breaks, you can just fix it, since you were the one who put it together in the first place. That's a big jump from Ubuntu, where everything just works, but you have less control. Arch is worth it if you think of it as a future investment, rather than a short-term easy fix.
Good warning, and I agree on the learning curve. I've been using ubuntu mainly as a convenience, and it's amusing how frequently I refer to the Arch wiki (which is excellent[0]) to diagnose/fix ubuntu-related issues. I've wanted more fine-grained control over my daily system for a long time, and this snap debacle is the final straw for me.
Just remember that Arch is a rolling release distribution so the updates you do get will be bleeding edge. I recommend setting up Timeshift and always backing up your system before updating; if you’re using BTRFS it’s instant (although ext4 is practically instant on modern NVMe drives).
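A sketch of that habit, assuming Timeshift is already set up with either the rsync or BTRFS backend:

    # snapshot first, then update
    sudo timeshift --create --comments "pre-upgrade $(date +%F)"
    sudo pacman -Syu
    # roll back from a live USB, or directly, if the update misbehaves
    sudo timeshift --restore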
Also consider Manjaro if you want an Arch-based distro but don’t want to go through the trouble of manually setting everything up on first install. I’ve been running the KDE edition as my main OS and it’s fantastic!
It's like Ubuntu if Ubuntu were rolling (so much faster drivers and DE features, but nerfed delivery to minimize breakage and still really user-friendly, with a GUI kernel and driver switcher) and if instead of PPAs it had a big repo that just had everything on it (the AUR, which Manjaro's GUI package manager, Pamac, works with once the setting's toggled).
if you like Ubuntu and stability go debian. edit: the downvotes are fascinating. if you do LIVE performances and even remotely care about stability Arch is the last distro you should use. its a great distro but its bleeding edge by design. part of the fun of arch. contrarian views should be expressed with words if possible because id like to know what universe some of you are from..
I didn't downvote you, but the appeal of Arch to me is the extreme customization. Sometimes I need to do really funky setups. That said, the stability of debian is also very appealing for the very reasons you stated.
i saw it explained once to someone else that arch is more cutting edge, than bleeding edge. it's not like you're getting beta software or anything with arch - it just updates very frequently. it's a hacker's OS at the end of the day; the culture is that you're buying in to the high ownership/control of your machine, ergo you're going to put a higher effort into at least searching wiki/forums/main page for issues.
you can actually get a very stable system in the same way you would any other OS: install only the packages you can live with. you can install, say, xmonad and no DE and have a slick, solid, low resource dev setup, whereas ubuntu comes with some goofy ubuntu software store that's not removable (or wasn't when i last used it) - one wonders what other "goodies" are in there, esp. after amazon search behavior change. remember that deleted code is debugged code: the more spartan, the less chance for issues. so arch's actually a plus there.
FWIW my boomer parents run arch on 2 different desktops solidly, without much sysadmin work from me other than a few pacman upgrades whenever i remember to VPN back in, perhaps biweekly on average. yes the occasional breakage happens, see above re: website/forum/etc.
debian stable was OK as firefox was fairly regularly updated for vulnerabilities, but the actual version number was lagging a fair bit behind. ubuntu LTS just seemed like a worse debian. ubuntu non-LTS is a non-starter: probably 50% of the dist-upgrades i ran ended up failing and basically needing a clean reinstall for anything not in /home, so it's fairly admin hostile in that sense.
I'd recommend NixOS instead. Trying different desktops is easy as changing the config and doing a `sudo nixos-rebuild test`.
Arch with its rolling release model is prone to breakage. Every upgrade gets you closer to having to reinstall everything from scratch. In contrast, NixOS is transactional, and if something breaks you can switch back to a previous version easily.
(I actually went from Arch to NixOS, after using Arch for about 3 years).
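The day-to-day loop is roughly this (a sketch of the standard commands):

    # try a new configuration without making it the boot default
    sudo nixos-rebuild test
    # make it permanent once it looks good
    sudo nixos-rebuild switch
    # or fall back to the previous generation if something broke
    sudo nixos-rebuild switch --rollback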
I've looked into Nix, and I like what it's accomplishing in principle with atomic upgrades/rollbacks, but I felt very frustrated by their clunky package "DSL." I use quotes because it appears that they are trying to encompass an entire programming language worth of constructs. Why they didn't go with an existing real programming/scripting language is beyond me.
Personally I don't want FlatPak either except maybe for 1 or 2 apps where the sandboxing actually makes sense. It's still a waste of resources just to appease lazy maintainers.
Ubuntu went totally overboard with their snaps, putting everything in it trying to force it to become a standard. But FlatPak is not great either even though it is an open standard.
It's nice for the 1 or 2 niche apps that aren't available any other way, but for the most part I prefer normal packages.
I might defend this if the updates were something like a remote wormable hole, but the updates are more likely to be improvements to drawing avatar icons in Dark Mode or something similarly trivial.
How would you indicate to a package manager that something is a critical bugfix release, rather than a non-critical feature release?
Package managers (including snapd) basically just "think" in semver. Semver is one-dimensional: releases are arranged on a big line, and they'll know to auto-update based on e.g. whether a given release is close to your current release on the line.
For the cases where we do distinct security updates (e.g. kernel updates), we seem to do them using all sorts of hacks.
Maybe we just need a superset of semver, that can encode more than just the "newness" dimension, but also the "criticality" dimension?
> Package managers (including snapd) basically just "think" in semver. Semver is one-dimensional: releases are arranged on a big line, and they'll know to auto-update based on e.g. whether a given release is close to your current release on the line.
This is NOT how semver works, though... each section has, well, semantic meaning. Some updaters might linearize, sure, and many developers run fast and loose with versioning, but semver is a graph of major, minor, and patch version numbers, each changing with its own semantics, which can describe more than just newness:
Bump major -> y'all best be careful
Bump minor -> something you care about maybe changed, read the changelog
Bump patch -> we'll probably just fix some bugs
Patch releases could and should be backportable to other minor releases under the same major release if people care about the stability of a module. I think that the "work from master" mentality that npm and GitHub UX have led people to is one of a handful of reasons that people misunderstand versioning strings...
I agree that flagging criticality is useful, though. Linux package managers like YUM/DNF have had this for a while, even the ability to feed a CVE identifier or bug ID into the package manager to resolve it.
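On the DNF side that looks roughly like this (from memory, so check the updateinfo docs; the CVE id is just a placeholder):
$ dnf updateinfo list --security          # list pending advisories flagged as security
$ sudo dnf upgrade --security             # apply only security-relevant updates
$ sudo dnf upgrade --cve CVE-2020-NNNN    # pull in whatever fixes a specific CVE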
I use a commercial OS, so there's a company out there where some well-paid professional decides which updates need to be pushed out ASAP and which ones can wait until next patch Tuesday.
I'm not sure if that person / team can ever be replaced by modifications to a version numbering scheme.
To be fair, Snappy isn't aimed at "mission critical systems", nor even at bandwidth constrained users. It's a straightforward, reasonably robust and obviously used mechanism for pushing routine consumer software. In that realm, the aggregate benefit to society of having everyone running current software outweighs the annoyance of the specialists, sorry. It's always been this way.
If you are running "mission critical" software, you need to be using a packaging mechanism (c.f. Docker) which lives at a lower level and provides harder guarantees about what you're running. Those solutions exist, they just aren't Snappy.
Ubuntu is sending out "mission critical" software as snaps: I think the biggest offender has been LXD, but I'm sure there are others.
That, combined with Ubuntu forcing snaps as the main packaging method (i.e. you have to jump through various hoops to install the .deb versions of packages that are now shipped as snaps), is what a lot of the outrage is about.
(Side note: Something about listing Docker as a packaging mechanism makes me uncomfortable. IMO Docker and containers in general are deployment tools, not packaging systems)
> IMO Docker and containers in general are deployment tools, not packaging systems
Obviously it sits at the boundary, but a "Dockerfile" is (at least when properly used) a recipe for reliably reproducing a specific version of software packaged in a format that can be deployed to all sorts of systems with absolutely minimal dependence on host configuration.
That's what people who want a "mission critical Snap" almost certainly want.
LXD is a weird one; it doesn't seem that suitable for distribution as a snap in the first place. The filesystem becomes a magical bundle of weirdness when you use the snap version. Also, LXD is something I'm not that happy to have auto-updating; I would like to roll it out to a test, preprod or staging environment first.
To me, snaps seem like a desktop solution that Ubuntu forgot to disable on the server edition.
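If you're stuck with the snap, you can at least pin LXD to a specific track so a major-version jump doesn't land unannounced (channel names here are illustrative; check what's actually published):
$ snap info lxd                                  # show the available channels/tracks
$ sudo snap refresh lxd --channel=4.0/stable     # track a specific LTS series instead of latest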
> If you are running "mission critical" software, you need to be using a packaging mechanism (c.f. Docker) which lives at a lower level and provides harder guarantees about what you're running. Those solutions exist, they just aren't Snappy.
What? No, it doesn't live at a lower level. Docker uses the same Linux APIs as Snappy.
The only upside of Docker compared to Snappy is that it doesn't turn your hardware into a zombie execution unit constantly pulling code from the mothership (Canonical).
> In that realm, the aggregate benefit to society of having everyone running current software outweighs the annoyance of the specialists, sorry. It's always been this way.
There's in fact an easy way to satisfy both groups, but for some weird reason Canonical chose to turn every user into a zombie execution unit, no matter their level of proficiency. It's not like a patch to disable this malware behavior would be hard to submit, but it's crystal clear it'd be rejected.
And it's clearly not for the casual user's benefit. They have documented switches to postpone calls to the mothership. That's command-line territory for experienced users.
I honestly wonder what their real motivation is. Those who seek to take back control are people who want to control their own machines. The only thing that comes to mind is that at some point blog posts recommending disabling automatic updates will crop up, and those dumb pesky users will just copy-paste the commands into a console without much thinking.
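For reference, the documented switches mentioned above look roughly like this (values are illustrative, and the hold is capped at around 90 days):
$ sudo snap set system refresh.timer=fri,23:00-01:00         # confine refreshes to a time window
$ sudo snap set system refresh.metered=hold                   # don't refresh on metered connections
$ sudo snap set system refresh.hold="2020-06-20T00:00:00Z"    # postpone refreshes until a date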
I think the issue is we expect Ubuntu to not only design features for common folk but also have an easy to use solution for the type of tech literate people that Linux attracts.
They have prioritized snaps but how have they made life easier for those of us running "mission critical" software?
I'm using opensnitch[1] and it can block snap requests.
Pretty nifty software, but it brings my machine to its knees unless I run `sudo service opensnitchd restart` once in a while.
With that, Ubuntu has become useless trash, joining the ranks of Android and its impossible-to-disable constant Play Store updates funneling endless unknown changes onto a machine you supposedly own.
Try absolutely blocking the play store from updating itself. Even when I've jumped through all the hoops to supposedly stop this from happening, it will still randomly do an update in my experience.
I'm talking about the play store updating itself. It's proven impossible to entirely disable over here, and when I used adb to forcibly uninstall the play store the device stopped booting a week later requiring a factory reset.
Good to see that I am not alone at being upset about the current snap vs. deb vs. flatpak Ubuntu situation. I always considered a unified package management system as a huge plus.
I do not understand that. Why be upset? If you do not like Ubuntu, just install some other distro and be done with it. There are many out there! Fedora, Arch (Manjaro) and even plain Debian are all much better than Ubuntu these days.
Ubuntu is doing more harm than good to desktop linux at this point in time.
Continuing to push Xorg over Wayland.
Removing support for flatpak (cross-distro way of using sandboxed apps)
Horrible PPA system that works much worse in practice than the AUR or other ports systems.
No daemonless docker (podman)
Lots of people try Ubuntu since it is the "most popular" version of Linux, realize it isn't great and think desktop Linux is in bad shape. The reality is Canonical doesn't seem to have good ideas and refuses to incorporate the good ideas from other distros.
Arch is a no-go for most people and any sane IT department. Fedora doesn't offer LTS and RH/CentOS feel quickly outdated on desktops, let alone laptops.
Ubuntu ships a distribution that works on most hardware and provides a reasonable desktop environment. While snaps aren't the most elegant solution, they do the job when your users need to install stuff like Jetbrains IDEs for instance.
Ubuntu tried shipping wayland on by default already and it was a clusterf* and made that release all but unusable. Wayland isn't ready for an LTS release yet.
Wayland is the default display server for Red Hat Enterprise Linux 8. I have been running Wayland for the last 2 years and have never had any problems. (Admittedly I haven't used Nvidia)
It may well be Canonical's fault and they just screwed things up...but regardless, if they haven't managed to ship it in usable form in an off-cycle release, it's not ready for LTS. In fact after the last time, they said they would only try again when they could go a full cycle of 3 non-LTS releases with reasonable results before it would land in LTS. And their expectations before shipping in the off-cycle didn't seem too crazy (ie the X11 compatibility layer not constantly crashing the whole session and taking out all the user's applications) but still weren't met in time for 18.10. I think they're slated to try again now for 20.10.
> Admittedly I haven't used Nvidia
Yeah, that. Not supporting ~half the video cards in the world sounds like a clusterf*. I know it's largely nvidia's fault, but at the same time, I have a lot of investment with them, and none w/ wayland.
So there's one huge issue with Snap no one in my circles is talking about: I don't want my server changing without me controlling it! This seems like a) someone could get an update in without my knowledge and it'll get pushed to me without my consent, b) if the code changes, I want to control that; if a package changes, I also want to control that... it's my server, I want to control it all! On a desktop, maybe I could handle this, but for a server, it's absolutely a no-go. wtf are they thinking?
Yeah, I'm confused as well. I just tried Calc on win10 for the first time in weeks or months and it opened before I could count 2 seconds. This is from a SSD where it won't have been in cache.
I was wondering why the calculator took so long to start on 18.04. I thought there was some problem with my system.
Doesn't bode well for 20.04. I've been with Ubuntu for a very long time, I like that it just works most of the time. May be time to try out another distro if all the applications are this sluggish.
A good time to ask I suppose: Is Devuan stable and does it carry security updates and package repositories equivalent to normal Debian? If so I'll probably switch myself.
I filed a bug a few years back asking why Snap refused to allow any repositories outside of Canonical's and had no open-source server implementation. That blew up.
To disallow control of the updates by the users is probably well-intended as some kind of a trade-off, but what if I don't want a new feature that is coming in the next version? What if I already know it's broken in my system configuration -- I hope for a fix for 60 days, and then what? What if the fix never comes? Does my system stop working?
What if I'm a business relying on that specific version? Do I just say "oh well" and close shop?
And what about airgapped systems?
I understand there's the "security", but then, on the other hand:
1. If snapd gets forked because of this, the snap ecosystem becomes fragmented and Canonical loses control of part of their baby.
2. If snapd stays as-is, and Canonical keeps preventing user control of the update cadence, then people will just run away from using snaps once the magical auto-updates create any high-profile problem. LXD is a snap as well, so containers will be in the crosshairs.
All in all it feels like a silly decision if you ask me.
Ubuntu 20.04 forced me to switch to Fedora Server on my home server. Pretty happy so far. I have significantly fresher package versions and most of the software tools I use offer a RPM package. I think I will wait for snap to mature, before giving it another go.
they didn't say it was ubuntu server, it might just as well be ubuntu desktop, and used as a home server (i.e. a box used mostly for server software, but not necessarily without a desktop environment).
For largely this reason, I've just switched to Manjaro after over two decades (ouch!) of using Debian and Ubuntu. I'm very happy with it!
Package installs are unbelievably fast. But mostly, the AUR repository of user contributed packaging scripts is awesome! Although I'm a bit worried about installing packages from random internet people, they are generally short and very easy to check for unwanted *ware. Haven't been disappointed yet!
Came here to say this. I've been on Manjaro at work and at home for 2 years now. It's the longest running Linux desktop distro I've ever maintained and it's precisely due to the excellent AUR repo system.
I've had to install very few things from source, mostly small, opinionated system tweak utilities. But even installing from source is easier than hunting down a PPA and a key and trying to keep apt from getting corrupted somehow.
Yeah a lot of the software you see in the store is legacy software that seems to be stuck on an older version. Also many of the items are lacking a screenshot and a comprehensive description of what the software does. I find myself using the store to discover software and then go to the software's official website (usually on Github) and install it the oldskool way by doing:
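i.e. the usual clone-and-build dance, roughly (the repo here is made up and the exact steps obviously vary per project; follow its README):
$ git clone https://github.com/someuser/sometool.git
$ cd sometool
$ make && sudo make install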
I was asked by a friend to get Netflix working on his Lubuntu 20.04 Linux laptop. He was running Chromium which comes without Widevine, and the extension wasn't available in the Chrome Web Store, but it can be enabled by simply making libwidevine.so available in Chromium's library path. But how to add a file to the sandboxed lib/ directory of a Snap? You can't without rebuilding the entire Snap as your own custom product, and then it's no longer a standard package from the repo so you lose the blessings of package updates! Maybe there's a better way, but I sure could not find one.
I had to ditch Chromium and unfortunately resort to Chrome directly provided by Google, with all of its privacy problems.
Usually Firefox does not ship with it, but it's a one-click install when needed. It depends on the package though: some packagers build it with it bundled, some don't. Since it's not a FOSS component it can't be bundled in FOSS distros.
I feel like snaps are trying to do too much. Applications are slow to start. Besides, I don't want the underlying system to be changing and updating all the time. For me the stability of RHEL/Centos with flatpak for desktop applications is perfect.
It is worth noting you can basically get all of the benefits of using Ubuntu (a good GNOME experience, PPAs, etc.) but without any of the snap stuff by installing Pop!_OS. It's a bit unfortunate that Pop!_OS is branded as something basically for scrubs and new Linux users, because it is still fully featured and a great overall experience even for a more adept Linux user.
I don't know Pop OS. On the other hand, I have been using Debian on my PC since 1999 and only got Ubuntu for my laptop because I had trouble getting everything to work properly and Ubuntu seemed to do the job.
> Pop!_OS is an operating system for STEM and creative professionals who use their computer as a tool to discover and create.
Okay but who is STEM specifically? Further down it mentions "Deep Learning" "Engineering" (Mentioned apps are all for software dev) "Media Production" "Bioinformatics".
I don't know about you, but to me that gives the exact opposite impression of "something basically for scrubs and new linux users".
It's not about whether it can be disabled or not, but about the direction Canonical is heading to. Also, I wouldn't call installing Chromium from .deb in 20.04 exactly straightforward.
I do not have an informed opinion on the different aspects of snaps, because I haven't had to deal with them at all.
That said, the feature "automatic updates whenever the system feels like it", is an annoyance for me, even if I can defer them. Typing "sudo apt full-upgrade" takes a few seconds.
I just use Ubuntu Server and get only what I need, which is available through the APT repositories. I have not noticed these repositories getting any smaller in favor of snaps and I am already using the 20.10 development branch. The Ubuntu Server installer does not install any snaps by default. However, snapd is there but it is trivial to remove it if one wants to. Chromium is the only "loss" I have witnessed and I also saw this mentioned in other comments. Note that there exists a really nice PPA, maintained by Pop!_OS, that contains quite a few packages, including Chromium.
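Removing it is basically just this (a rough sketch; check first that nothing you actually use is installed as a snap):
$ snap list                # see what, if anything, is installed as a snap
$ sudo apt purge snapd     # removes snapd along with any installed snaps
$ sudo apt autoremove      # clean up now-unneeded dependencies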
I am wondering what security issues snaps address that AppArmor, whose purpose is security, does not. I also think it is unfortunate that we need snaps to deal with different applications / packages needing different library versions...
I know Snap maybe offered tighter sandboxing, but why didn't AppImage take off? It seems like it solves the problems of the deb/rpm/etc. war and is easy to create for app devs...
I much prefer AppImage, just recently I created one that packaged an old windows game and included the required dependencies (wine, libopenal, etc). Starting the AppImage is as simple as "./game.AppImage" and in theory would work on any Linux distribution. I think it has good potential for preservation of old applications and video games.
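For anyone who hasn't used one, running an AppImage really is just this (assuming a recent type-2 AppImage):
$ chmod +x game.AppImage                 # mark it executable once
$ ./game.AppImage                        # run it; no install, no root needed
$ ./game.AppImage --appimage-extract     # optionally unpack the contents to inspect them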
We work on a product where we must deploy different Linux distributions for customers but it’s imperative they’re stable and we’ve tested the image at an exact point in time.
Snap is an extremely big pain when it comes to this. We have to work around the forceful updates by telling it to use a non-existent proxy; it's very dumb.
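For the curious, the workaround is roughly this (ugly, but it does stop snapd from phoning home; the address is just an unreachable placeholder):
$ sudo snap set system proxy.http="http://127.0.0.1:9"     # point snapd at a dead proxy
$ sudo snap set system proxy.https="http://127.0.0.1:9"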
Actually, I'm startled by the religious belief that nothing can go wrong with "snap"! What is so new about snap that makes it so reliable, in contrast to Debian packages? Currently I'm trying to upgrade my 18.04 to 20.04 but get stuck, because the Python 3 version shipped lately seems to have issues. So the upgrade process abruptly stops, leaving the system in a half-baked state. While I do not complain, Ubuntu is free, I'm terrified by the thought that such an upgrade could be initiated without me monitoring it or having the chance to try it on a non-productive system first.
Currently I can live with the snap approach, because I don't have to use it for the time being, but if this is the way Canonical will proceed, I probably will have to vote with my feet too.
Mark (not Shuttleworth :)
I’m sympathetic to most of these arguments, operating systems should typically trend towards more clarity and user choice; especially ones likely to be used by professionals like Linux. But I will say that the Ubuntu software store has sucked for as long as I’ve ever used Ubuntu; they made it pretty years ago, and apparently never once considered performance or behavior. I would regularly search in the store for applications I knew existed, and it would either take forever or not find them. I always gave up and used apt, at least until I gave up on Ubuntu altogether.
I've been using Ubuntu Server non-stop on a wide range of differently configured (real aka bare-metal as well as virtual) servers for over 10 years.
As a server OS I've really found it useful.
(Except for that time when they shipped buggy versions of high-availability tools, warned against by the upstream maintainers, which segfaulted occasionally and showed inconsistent state across different nodes - thanks :/)
I found it an improvement over Debian. And that an improvement over Red Hat (before it was RHEL and Fedora). And that an improvement over Slackware. CentOS and RHEL got used too in parallel on some sites, and were very solid, but with their own annoyances.
But recently, on the server, LXD (aka LXC 2) switched from being a .deb package to being a Snap. They made a policy decision to stop shipping the .deb, to force server admins to start using Snap whether they wanted to or not, I guess.
So I was forced to use Snap just to run LXD, which was annoying as things like configs and paths to the container images are buried inside the Snap somewhere, and various things with container-uids and host-container shared mounts stopped working. At least, the upgrade broke a bunch of working scripts.
But it wasn't too painful, just a bit of work that felt like it was caused by an unnecessary annoyance.
Ironically given other comments, the Snap does not seem to auto-update unlike all the other .debs on the system!
Now, hearing about Ubuntu 20.04 and application snaps, I'm extra cautious before assuming Ubuntu 20.04 Server will be a smooth change. I'll take a look, and if the server software is much like 18.04 (except for annoying LXD) I'll use 20.04.
But if they've moved a lot of things away from .deb to Snaps-only on the server, that's going to be some overhead to deal with, and it will at least result in evaluating whether switching distro is no more effort and a better choice.
I've not been a fan of CentOS, but that time I was burned by the high-availability tools due to Ubuntu (and Debian) shipping a dangerously buggy version in an LTS release did make me realise that RHEL/CentOS are well maintained in that department and just wouldn't do that.
Ubuntu has followed the same path of first building their user base on trust and quality and then monetizing it hard with various shady schemes.
Even coffee shops do it. First they sell high quality coffee for little margin and "don't expect or accept tips". Then they switch to the dark side to monetize the trust: they cut hard on materials, they pay $2/hour to employees and expect the customers to subsidize their minimum wages with tips (that are now accepted and very much expected).
I moved from Ubuntu MATE to Fedora MATE for this reason. One reason I use free software is that I get to have the final say in what my computer does. There's a big thread at the snapcraft forums asking for the ability to shut off updates. Canonical wouldn't budge. So I've moved elsewhere, and I thank the Linux ecosystem for allowing diversity so that I can still use it as I'd like to.
This is terrible. We use Ubuntu on systems where bad things can happen if software becomes unstable due to an unwanted update. In other words, nobody, nobody would ever even dream of installing any software or updating the OS without authorization. And this authorization would require regression testing in order to verify the changes would not compromise the system.
I don't understand how anyone working on Ubuntu could think this is a good idea.
To be clear, if Linux (let's say Ubuntu) is going to become a viable OS for the masses it probably needs something like this. I get it. However, there is no reason to break the traditional stability of Linux and even risk creating danger.
Disabling this mode should be as simple as one setting. In fact, it should be offered as a choice during installation with a full explanation. This is where they could make a pitch for the functionality. Those of us who don't need it (or can't use it for other reasons) can opt out and move on. Consumers, sure, I don't care.
I've been running Debian since ~1994, and never really got the excitement about Ubuntu. Some colleagues did, but it seemed to be more about the marketing.
I've always been a KDE user, which was always a first class citizen option during the Debian installation process. For Gnome / Unity users having that default promoted and baked into the distribution might have been compelling.
The refrain 'but it's so much easier to install' never really sat well with me -- I hand-install my desktop & laptop OS's infrequently, given the upgrade process (with Debian) has always been wonderfully robust. I auto-install and maintain, almost exclusively headless, every other VM, so a nicer installer wasn't relevant there.
Claims of 'stodgy' versions are rebutted by using testing or unstable branches, or even backports. For enterprise, stodgy's often a plus in any case -- look at what you get with RHEL/CentOS.
Look, I'm a huge advocate of KDE, but objectively I'm struggling to see what's ground-breaking and desperately urgent in 5.18.
Ubuntu (or rather Kubuntu's) schedule [1] unsurprisingly isn't faster than the mothership's.
I guess there's been a lot going on in the world since February, so people haven't been 100% focused on getting free software packaged up into a free distribution as fast as normal.
Well, I'm out then. Sorry, but at the end of the day, it's a user's machine, not yours, Snappy/Canonical. I will not use a system which will not let me turn off something that installs shit on MY machine. I can't believe the arrogance of the snappy devs. Just shut up and give the people what they want.
Yeah, startup time is something that you don't really notice until you have a system that is really fast and responsive, and then you simply get annoyed when you're using a system with slow startup times again, or get accustomed to the slowness again.
Startup times didn't bother me too much, but nowadays they do, especially for things that my brain assumes should be fast. This is why, for example, both my Neovim and Zsh configurations are tuned to start in <0.1s:
$ time zsh -c exit
zsh -c exit  0.00s user 0.00s system 84% cpu 0.007 total
$ time nvim -c qall
nvim -c qall  0.07s user 0.02s system 100% cpu 0.081 total
It does make a massive difference; for example, spawning a new shell is instantaneous, and when I know I will make some small changes I much prefer to open a file in Neovim than Emacs (my main editor/IDE nowadays).
There was nothing wrong with Synaptic + .deb files that needed fixing. Same with systemd, and now they're going after our home directories (never mind that it breaks SSH). My home directory is already portable, as in all my config information is in dotfiles. As for snap, yet again a solution in search of a problem: you can download a nightly build as a compressed archive and run it without installing. I too will be moving with my next installation. Any recommendations for a small, fast distro where the GUI doesn't get in your way?
For someone who has been out of the loop on the whole Snap vs Flatpak situation, could anyone give (or point to) a comparison on their technical merits, and why would one be better than the other?
My issue is that they still have not sorted out suspend and resume. Half the time when I reopen my laptop lid my keyboard no longer works. Had the same issue on 18.04.
Ok, a central store is a problem, but I don't get HN's bitterness about auto-updates.
I don't mind autoupdates of apps, especially sandboxed ones like Snaps, and especially when they can be rolled back if issues arise. Snaps do, in fact, allow for easy reverting to a previously used revision:
$ sudo snap revert vlc
vlc reverted to 3.0.5-1
It will even remain on the reverted version without updating until another version is published.
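You can also see which revisions snapd has kept around to revert to:
$ snap list --all vlc     # shows every installed revision, including disabled ones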
I've been using kubuntu for a while and while I like it a lot, it seems very "chatty" to me - way too many notifications which I have to disable one at a time. I only discovered the snap situation when Chromium started prompting before it would open files, and I couldn't find a way to disable it. In the end I installed Chrome, but I am looking around for an alternative distro.
I don't use Software Center so the only thing I should be concerned about is getting the snap flavor of Chrome instead of the .deb. My main browser is Firefox anyway.
I get that the forced direction toward snaps seems premature. I wouldn't stop using it though if there's simple measures to get around it. Also use Xfce/Xubuntu.
I think that Pop has a bit of a branding issue. It has the reputation of an OS with training wheels for newbies and gamer bros, but it is actually an all-around great OS for anyone. It's a great GNOME experience out of the box with all of the benefits of using Ubuntu and none of the nonsense. I'm hoping to see it gain popularity with the broader Linux community. Everyone who used traditional Ubuntu and isn't happy with the direction it's headed should really give Pop OS a shot.
I only use the micro editor as a snap. The chromium snap was forced on me. I keep it around to test and separate web stuff. From that minimal use I get a daemon running and a dozen bullshit mounts. Wish they just statically compiled it.
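If you want to see where those mounts come from (each installed snap revision, plus the core/base snaps, is a squashfs image loop-mounted under /snap):
$ findmnt -t squashfs     # list all the squashfs mounts snaps bring along
$ snap list --all         # one line per installed revision, which maps to those mounts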
I tried snaps and I'm okay with them, at least for non-core apps: games and a few bigger programs I want isolated from the rest of the system. But I wouldn't want my calculator or browser as part of snaps.
So that's why two applications suddenly couldn't install anymore. Aw man, I've been so happy with Xubuntu the last couple of years, I really don't want to have to find another distro.
> I installed Pop!_OS 20.04 LTS just yesterday, hoping to get a better out of the box experience. I’ll log back with thoughts about it next week.
I tried Pop!_OS 20.04 in a VM just yesterday and my experience was dreadful especially with its "store" application. Things like the entire program freezing randomly for seconds while trying to fetch things, images taking ages to show up (btw, i have high speed connection), showing me ridiculous download sizes for flatpak packages (think something like Geany requiring 1.1GB or something like that), not giving any indication about download processes if you navigate away, wasting screen space when showing applications in a category (it uses a list view where each entry gets a huge icon at the left side, its title, some very short and useless description next to the icon like "Goofy is a goof foogbar", an "Install" button at the right side and vast swaths of emptiness in between), etc.
Also had UI screwups everywhere, like the Preferences application cutting out the "Preferences" title in the titlebar to "Preferenc..." when a) those three dots took the same space as the rest of the word and b) that title was the only thing in the entire titlebar with almost 500px of unused space at each side. Or controls moving and windows resizing a few pixels after i start typing something (e.g. trying to type some number in the calculator has the divider between the buttons and the number move down a little the first time i type something). And the entire "let's merge titlebar and toolbar" idea is as awful as it was when GNOME 3 introduced it (well, actually it was iTunes but i do not remember anyone ever saying that they liked iTunes). Trying some of the preinstalled apps, i accidentally clicked some tool buttons trying to move a window around (the UI being slow didn't help).
And it was dog slow. Terribly, irredeemably, laughably slow. Opening the icons had the entire UI chug worse than my 386 running Windows 3.1 (and actually i'm pretty sure if i wrote a program to move icons around in Windows 3.1 it would be faster).
IMO the only time i felt Linux had a high quality desktop environment was actually the first Linux distribution i used: Caldera OpenLinux 2.3 (to the point that in my youthful enthusiasm i force installed it to every relative and friend's computers i encountered :-P). That distribution was very well put together and thought out (try to find it in archive.org and test it in 86box or PCem with an emulated Pentium 200 and 32MB of RAM to see what i'm talking about). It had some issues, but two decades later things should have been much improved - instead everything went downhill after that and for most Linux users their desktop use is mainly a matter of how much tolerance you have for all the issues you encounter (ie. when you see someone having a problem and there is a reply like "i never had that problem" or "i've been using Linux XYZ for years and that has never been my experience", it is usually from someone who had a high enough tolerance that anything non-major does not even register anymore).
Can someone explain why it's fashionable to hate Snap? I am a casual Linux user (Ubuntu VM for schoolwork and projects that WSL can't handle) and don't get the hate.
Nothing about everything being statically linked into a big binary blob with very little connection to sources? It's my main problem with containers in general.
Is there any sensible documentation on this somewhere?
I've been running Debian, and then Ubuntu, for a quarter-century now, largely because things just worked. .deb is nearly perfect.
Having used Docker, it's the last thing I want on my desktop, for a whole slew of reasons.
It's odd, Windows and Linux switched spots in that time. Windows 10 is faster, more stable, and more understandable than Ubuntu. Ubuntu is increasingly layered, convoluted, and bloated. Windows runs on light systems. Ubuntu requires a ton of RAM and CPU.
I think this might be the thing that finally makes me give up Ubuntu.
> It's odd, Windows and Linux switched spots in that time. Windows 10 is faster, more stable, and more understandable than Ubuntu.
I mean, you can claim a lot of stuff about lot of things, but saying Windows is understandable is pretty rich. You can’t even reliably cron a single script to run once an hour without weird, unfathomable, undebuggable bugs starting to pop up.
I have three machines: one is Windows, one is Ubuntu, and one is Mac. The statement was relative. I never said Windows was understandable. I said more understandable. Nineties-era Debian, I felt I understood every piece of, to where I could fill in details with documentation or code in a few minutes.
2020-era Ubuntu developed the same layeritis as Windows, only worse. Microsoft cleaned up a little bit from the bad, old Windows 95 days too. Windows 10 is slightly simpler than Windows 95, but a heck of a lot more stable.
And for some reason the server installer doesn't set up any kind of swap at all.
And forces snaps and lxd on you, which also brings in lovely things like cloud-init...
You spend 10-15 minutes cleaning up a supposedly minimal install.
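For the missing swap, the usual manual fix is something like this (sizes are just an example):
$ sudo fallocate -l 2G /swapfile        # or use dd if the filesystem doesn't support fallocate for swap
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # make it survive reboots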
i.e.: I thought they were more useful for installing a random app that is not supported by your OS... Is it easier to include a snap than it is to create a regular package? Or are there any other advantages?