I upvoted this only because of the discussion it could spark in the HN comments; the article itself is actually very poor. It compares only disk space, memory, and CPU usage, and another user pointed out that even the disk space calculation for snaps is incorrect.
Furthermore, these metrics are among the least important when discussing "next gen" package managers. I am more interested in hearing about developer tools, sandboxing capabilities, the update mechanism, OS support, private distribution, and enterprise pricing. I don't care if one packaging system uses slightly more memory because of how it handles shared libraries, and if I did care I would still want to hear what the trade-off for that increased memory usage is.
Indeed the size calculation is incorrect. It's likely looking at the unpacked size, but snaps are never unpacked - which, ironically, is exactly the kind of difference the article itself misses.
A very important aspect of snap that should have been noted in the article is the lack of user control over a snap's update process: users are not allowed to control when updates are applied, leading to a Windows 10-like user experience: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
From what I could understand, placing the update process under the control of snap app developers - rather than device owners - is a deliberate design decision.
A poor one at that. I don't understand why they couldn't make existing Debian packages portable to a containerized format, where all the packaging work has already been done. They'd be much better off automating the package build process instead of creating a new format for package maintainers that doesn't align with Debian.
At least Arch and Alpine have simple, human-readable build recipes (PKGBUILD and APKBUILD respectively) that only take 15 minutes to understand, so you can update your own packages. They have up-to-date packages for pretty much everything, even though Debian and Ubuntu have been around longer.
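For anyone who hasn't seen one, an Arch PKGBUILD really is just a short shell file. A minimal sketch (package name and URL are made up):

    # Maintainer: you <you@example.com>
    pkgname=hello-example   # hypothetical package
    pkgver=1.0
    pkgrel=1
    pkgdesc="Example program"
    arch=('x86_64')
    url="https://example.com/hello"
    license=('MIT')
    source=("$url/$pkgname-$pkgver.tar.gz")
    sha256sums=('SKIP')     # real packages pin a checksum

    build() {
        cd "$pkgname-$pkgver"
        make
    }

    package() {
        cd "$pkgname-$pkgver"
        make DESTDIR="$pkgdir" install
    }

Run makepkg -si in the same directory and you have an installed, pacman-tracked package.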
I'm all for containerizing a Linux distro, and think it is a great idea, but Canonical hasn't exactly proven to be the best at this. Before he passed away, Ian Murdock, then working at Docker, didn't have many positive things to say about Canonical and their leadership. Most reviews on Glassdoor also talk about the poor leadership there. Therefore I see no reason to use Snap at this time.
For those who understandably won't want to go through it all, the short version is that by design snaps will force the update eventually, so that a system isn't simply left behind, but since snapd came out a few years ago we've been constantly working on multiple methods to offer control over when exactly the update takes place. These are features such as:
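For example, along the lines of snapd's documented refresh settings (option names per snapd; exact values illustrative):

    # choose the window in which refreshes may happen
    sudo snap set system refresh.timer=fri,23:00-01:00
    # hold all refreshes until a given date (RFC 3339 timestamp)
    sudo snap set system refresh.hold=2019-06-01T00:00:00Z
    # avoid refreshing over metered connections
    sudo snap set system refresh.metered=hold
    # see when the next refresh is due
    snap refresh --time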
So, the goal is actually to offer control, but we are indeed trying to prevent systems from getting out of date for good. Maybe that's a bad idea, and if it turns out to be we can change that in the future, but we've been making an honest effort to try to fix the problems of automatic updates instead of simply giving up. Once we give up, there's no going back since the dynamics around package updates will change. We have plenty of experience around these aspects with the traditional systems.
Sorry, but unless there's an option to hold back the update until I explicitly allow it, that's not being in control. This is still my computer and I decide when to update software.
Why is this being downvoted? He is exactly right.
The ability to pin a specific version is needed for a multitude of reasons: the newer version breaks something, you only have a license up to a certain version (think non-subscription JetBrains products), etc.
The inability to disable updates removes that packaging system from further consideration. It is a showstopper.
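With apt, for comparison, holding a package is a one-liner (package name hypothetical):

    # keep the current version through upgrades
    sudo apt-mark hold etl-tool

    # or pin a version range in /etc/apt/preferences.d/etl-tool:
    Package: etl-tool
    Pin: version 4.2.*
    Pin-Priority: 1001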
The comment is mistaken. I suspect it’s being downvoted not because update control isn’t appreciated, but because it’s very much there with snaps. There are several ways to disable snap updates, and they are really quite nicely balanced for modern operations.
For example, if you publish a snap that depends on another snap, say an app which uses a database, you can set things up so the database won’t update until you publish a validation certificate that your app snap version X has been validated with database snap version Y.
Updates can be deferred by anybody, and I think there is a plan for snaps themselves to be able to defer their own updates (for example, a movie player that is playing a movie at the scheduled update time).
Enterprise management systems can also control the flow of updates very nicely. For example, they can mark different snap revisions as ‘stable’ or ‘beta’, which means they decide when a new revision of a snap will be considered for update by all the machines tracking those channels. They can also prevent any updates from happening on specific machines.
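From the client side, channel tracking looks like this (snap name hypothetical):

    # follow the beta channel instead of stable
    sudo snap switch --channel=beta my-app
    sudo snap refresh my-app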
Device manufacturers also get a layer of control, similar to snap publishers with their dependencies. So an appliance that uses snaps might see specific revisions of snaps only once those have been certified on that device.
Considering how rich the actual reality of ‘software update distribution and management’ is in practice, it’s nice to see that level of thinking built into the system. We’ve had simplistic approaches around for decades and the results in practice are poor: there are millions of vulnerable machines out there because of neglect. I’m interested to see if these mechanisms achieve a better result all round, and the simple thing you are focused on is certainly already there.
Tl;dr. Your computer isn't yours, and "We" know better than you, lousy user.
(That's what I took out of the shitty design decision and disgusting justification you wrote. I'll be damned if I have to defend myself against open source apps in some shitty all-in-1 format with fascists at the helm.)
Your opinion is one shared by many on HN regarding forced updates, but the parent comment has made a proper effort to answer your questions and you resort to name calling.
I am all for freedom of speech but if you write it on a brick and throw it through my windows you're going to have a hard time getting me on your side.
Comments like these alienate people from what you're trying to accomplish.
So, a decorum "argument"... Every last thing I said was true.
I'm putting the Snap maintainers in the same bin as Facebook, Microsoft, and Google. They want to idiot-ify the user experience with "We know better than you" style of rules and degrade MY ownership.
For that, yes, I will show anger. Even if this is -1'ed and buried, I'm sure one or more of the maintainers saw it.
If it makes you happy, yes, I see it. But that doesn't make things better for anyone. We've had more interesting discussions around this issue where people could actually present good arguments towards more control, and some of these conversations resulted in the development of more control features, as presented earlier in this thread.
Also, it's important to realize that it's not me or Canonical that has control over the updates, so it's not me knowing better than you. The goal of this exercise is to have good tooling that would allow updates to flow between a publisher and a user with a better overall outcome.
That means, for example, that we are putting more pressure on publishers to get it right, because they will more quickly and obviously break people if they release something broken. There are actual high profile publishers that changed their processes because of that.
We are also putting more pressure on the tooling, because we need to be able to recover gracefully when the update does fail, and that's one of the reasons why we have a more polished transition and rollback mechanism than any package manager out there.
Yes, maybe it won't work, but it's a very interesting problem and is worth solving. Then, even if we don't fully solve it, the exercise will have been worth it, because it improved all those aspects in meaningful ways.
But I hear you... you're mad at me. Point taken. :-)
Hi, I'm a poster somewhere up in the thread, not crankylinuxuser.
However, I still have the impression, from seenitall's comments and yours, that you are solving the wrong problem.
- dependencies - that's something the traditional package managers solve very well :)
- temporarily deferring updates while running: not interrupting the user is a different issue than not updating. (In my opinion, Flatpak solves this elegantly: it can tell the running application that a new update is installed, and when it is convenient the application can restart itself. Until then, both versions are available.)
- not everyone uses an EMS; I'd say that no SOHO and only some SMEs do, so here you are creating a problem for power users and small businesses; notwithstanding that your competition (Flatpak) has the stable/beta/whatever channels too, without needing an EMS or other tooling;
- device manufacturers were always able to do this with traditional package management (using a metapackage that depends on specific versions);
- some of the problems above seem to stem from snap's insistence on a single source of truth, under the control of a single entity. All the other package systems avoid some of these issues by being decentralized. For example, you can have your own apt/yum/flatpak repository and you can control what packages get in. AFAIK, this is impossible with snap, which thus needs to implement its own solutions for predefined scenarios.
- the above scenarios are still missing the crucial component that crankylinuxuser points out: the user's consent to update. Neither poster in this thread addressed this concern.
The issue is that you cannot rely on the vendor or someone upstream to certify the solution for you, because they may be wrong. When they are wrong, it will break your system and the vendor will not be quick enough (or even willing) to un-break it.
My example: we are using a certain well-known ETL tool (I won't name it here, no point in shaming them) that since some version has a bug where it drops characters in a certain Unicode range. Most customers do not have a problem, only those unfortunate enough to process data in a language that falls into this range. The vendor has a bug report in their bug tracking system, they claim to be working on it, and they semi-regularly do new releases - without the bug fixed.
Of course, nobody who needs that specific Unicode range can update. "Nobody" in this context means a handful of users worldwide, i.e. a small fraction of a percent of customers, so the bug fix is not exactly a high priority.
With traditional yum, it is easy enough to solve - just exclude that specific package from updating. With your competition, Flatpak, it would be easy too: just do not update that package. At installation time, they can still install the "old" version when needed.
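For instance (package name hypothetical):

    # yum: skip the package for this update run
    sudo yum update --exclude=etl-tool
    # or permanently, in /etc/yum.conf:
    exclude=etl-tool*

    # flatpak updates are user-invoked anyway; newer releases can also mask an app
    flatpak mask com.example.EtlTool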
Here, the vendor certification would not be enough - the application works for most users; it's just too bad when it doesn't work for you. In the absence of an EMS, there would not be much the users could do.
And that's where the issue "someone else has control over update process, not me" comes from.
> - the above scenarios are still missing the crucial component, that crankylinuxuser points out: the user's consent to update. Neither poster in this thread addressed this concern.
Indeed. Lack of control is my biggest source of anger.
I'm accustomed these days to most devices being under some sort of remote control/ownership from the "mothership". Free Software and friends have made it so we could avoid this and retain ownership of our systems.
On Windows, reboots are a fact of life. I was in an MRI last year that took 1.5 hours. The MRI itself was a half hour, but the mandatory update came in when I showed up.
I also know people who 3D print on Windows, hit an update-reboot cycle, and lost their print.
Long story short, Windows, Mac, iPhones, and Androids are not our devices. We at best rent them. Ownership = control.
So when some open source group wants to centralize and force updates like this, it's because they are trying to fight for ownership of my devices. I've fought long and hard to free myself from the most onerous software. And I see it now popping up in what was once a bastion of freedom.
So yeah, I'm angry. And yes, I'll do what I think is right in terms of impeding this.
Indeed, Spotify (distributed via Snap on Ubuntu at least) doesn't scale properly on HiDPI screens for example, requiring a small tweak to the shortcut[1]. I know when Spotify has updated because I'll launch it and everything will be tiny again!
I really dislike AppImage when it's the only way to get an app. I find it quite annoying to get set up on my system in such a way that it behaves like a normal piece of installed software (I run a fairly basic i3 setup with dmenu for app launching). I mostly just keep an unorganised "AppImage" folder now full of random clicky items I can launch. It's like being on an early 2000s iMac.
Flatpak has been good when I've used it; it works more similarly to how apt functions, and seems to need less faffing to get going on my system.
Snap is almost good. I found the documentation for how to actually make snaps incredibly frustrating (hard to explain, it was like lots of little steps were missing) and I find the permissions model with them more awkward. I tried installing Gitea on my home server via snap the other day, and promptly got annoyed enough to just give up, as I wasn't that invested in it.
You can create your own .desktop files and shove them in ~/.local/share/applications if you want the app to show up when you list apps instead of binaries - for example, when you launch i3-dmenu-desktop instead of dmenu_run.
As an aside, I suggest you check out rofi. It's a better-looking, more featureful alternative to dmenu (example invocation after the list). Compared to dmenu it offers:
- Themes
- The option to show as a window in the middle of the screen, or dmenu-style at the bottom/top
- Optional icons
- Slightly faster at showing apps vs binaries compared to i3-dmenu-desktop
- Configurable hotkeys
- The ability to launch a typed command in a terminal with a hotkey
- The ability to launch a typed command as-is, or complete it from a history much like a shell history (for example, I have it set to run as-is with Enter, or complete the first history match with Ctrl+Enter)
- A built-in tool to list/switch windows, select from a list of applications like i3-dmenu-desktop, or list available ssh connections
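A typical setup (these modes and flags exist in current rofi):

    # show .desktop applications, with icons
    rofi -show drun -show-icons
    # switch between open windows
    rofi -show window
    # pick an ssh host from ~/.ssh/config
    rofi -show ssh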
Are you using dmenu or i3-dmenu-desktop? If you're using the latter, I'm pretty sure you can create .desktop files for AppImage apps so they're added to the list, along these lines:
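A minimal sketch (paths and names hypothetical) - save it as ~/.local/share/applications/myapp.desktop:

    [Desktop Entry]
    Type=Application
    Name=MyApp
    Exec=/home/you/Apps/MyApp.AppImage
    Icon=/home/you/Apps/myapp.png
    Categories=Utility;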
For myself, I've only needed to use AppImage once, and that's for a program that I only bother using from a terminal anyway; other than not showing up in dmenu out-of-the-box, I haven't had any other complaints.
I use i3wm and Rofi (similar to dmenu). Pacman from Arch and apk from Alpine are so much easier at managing packages. Until they can make a containerized distro that works basically the same, I don't see a good use case for changing, unless you want to try something out in a more isolated environment (and even then there is Docker).
My biggest problem with them all is that I don't really have a problem with my apt.
I understand that this situation is problematic for vendors who want to distribute their apps. But to me it feels like the packaging situation has always been an excuse for them to drop Linux support.
If they consider desktop Linux supported during development, providing a couple statically linked deb and rpm files is no big deal. And a lot of vendors are doing that nowadays.
Sandboxing would've been nice for sure. But again, I don't have a security problem with my Linux Desktop as of right now.
> If they consider desktop Linux supported during development, providing a couple statically linked deb and rpm files is no big deal.
Well, yes it is, because it only covers two families of distros out of many. I develop software with a fairly small niche and you wouldn't believe the weird distros on which people test it. With the AppImage, everyone is happy - and the software can also be installed without administrative permissions, which is fairly useful when you want to use it in a classroom: instead of spending a week with the local system administrators to get the .deb deployed, you just have the students download the AppImage and execute it.
It isn't really that hard to repackage it for any packaging format after extracting the deb file. I have used those distros (Arch, Void, now NixOS) for a long time; I never had a problem with anyone providing just a deb file.
With my upstream-developer hat and my Ubuntu-developer-for-about-a-decade hat on, I really call BS on this. It is very hard in practice. It's hard even when you know things very well, just because it is very complex and involves multiple pieces moving across multiple organisations and people.
Snap, Flatpak, and AppImage really make the distribution of free software less crazy. It is a hard problem to solve; we will get some things right and some things wrong, and we will learn in the process. Eventually the platform may become useful for proprietary software, but it will first and foremost improve for FOSS.
If you are on a niche distribution you are missing out on tens of thousands of applications because they are packaged for Debian or Fedora and not for your system. If you are on Debian stable you miss that important update or new feature for the next months or more. If you are on the packaging side you know what it is like in the trenches.
I really cannot believe anyone who has done packaging or upstream development would not acknowledge those real-world issues. Packaging is extremely hard because we all made it so.
Crazy novel idea: the FOSS developer could just give out the source code, maybe with some helper build scripts, and users could build the software on whatever platform they want, perhaps changing which features they need and which external dependencies are required.
Even crazier: what if we had a standard, well-supported build system that was included with every OS and distro?
Nah, that's crazy talk. Let's repackage an entire OS so you can run an OS while you're running your OS, so you can listen to internet radio.
> and then users could just build the software on whatever platform they want.
Most of my users are non-technical (yes, even on Linux). Also, if you want to use a modern development environment this won't work: for instance, I use C++14/17, which restricts me to distros at least as recent as Ubuntu 18.04 for building and cuts out an immense part of the user base. With AppImage it does not matter what OS my user is running - even if the OS isn't able to build my software, it will be able to run it.
> Even crazier; what if we had a standard, well supported build system that we could included with every OS and distro?
Won't happen. I have users on Ubuntu 12.04, which still ships e.g. CMake 2.8 - way too old, and frankly a different language than CMake > 3.0. Even if it were another build system it would be the same problem: you would have to restrict yourself to the oldest released version of the build system still in use, which really, really sucks, because build systems hardly ever "get it right" in their 1.0 version.
It's not a crazy idea, just one that entirely limits the target audience to extremely technical people. Some people would like to make their software available to a wider audience.
> If they consider desktop Linux supported during development, providing a couple statically linked deb and rpm files is no big deal.
There's also the issue of dependencies on other non-default packages. Do you ask the user to install another repository? Do you host your own repository and distribute the dependent packages in it along with yours? (Note this is much more complex than building and hosting a single .deb/.rpm)
Not to mention "desktop Linux" is not a thing. Supporting Fedora, Ubuntu, and openSUSE, for example, are all distinct efforts with only minimal overlap (think not just initial dev, but ongoing maintenance and testing).
Technically, they usually aren't "statically" linked as such.
They usually use dynamic linking, but then ship those dynamically linked libraries in the package.
This has the advantage that the user can still replace the libraries if they want (e.g. for games this is sometimes useful to replace the sdl build with one with more options enabled), and I've heard rumblings that the support for static linking on linux isn't all that great.
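In practice this bundling is often just a wrapper script pointing the loader at the shipped libraries - a minimal sketch (paths hypothetical):

    #!/bin/sh
    # resolve the directory this launcher was installed to
    HERE="$(dirname "$(readlink -f "$0")")"
    # prefer the libraries shipped alongside the binary
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"

Deleting the bundled SDL from that lib directory (or pointing the script elsewhere) is exactly how users swap in their own build.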
Your knowledge is incorrect. Snaps don't have to be statically linked. They use a mount namespace with a predictable set of libraries that are maintained and receive security updates. In addition the application can bring additional libraries but those are on the application developer to maintain (or delegate to a hosted service like build.snapcraft.io to rebuild on security updates).
Sure, you can do static linking, you just don't have to and this is not how snaps work.
> If they consider desktop Linux supported during development, providing a couple statically linked deb and rpm files is no big deal.
Many developers don't have the resources to go through the whole process needed to get their app into the distro repo. In that case, I'd consider these package formats better than deb and rpm.
You can provide a repo hosting your software so that prospective users can add your repo. This is essentially trivial and can even be hosted for free.
After your user adds your repo, updates will be handled automatically with the rest of the system, and installs can be done in the same software management GUI as anything else.
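For apt, the user-facing side is a couple of commands (URLs and names hypothetical):

    # trust the vendor's signing key
    curl -fsSL https://example.com/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/example.gpg
    # register the repository
    echo "deb [signed-by=/usr/share/keyrings/example.gpg] https://example.com/apt stable main" \
        | sudo tee /etc/apt/sources.list.d/example.list
    sudo apt update && sudo apt install example-app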
Schools almost exclusively use Microsoft Windows. Linux is almost exclusively used on servers, technical people's workstations/laptops at work, and interested users' personal computers.
Optimizing for users you wish you had doesn't seem optimal.
In the hypothetical school setting you probably want to actively prevent the user from installing insecure crap in their home directory, to the degree it's possible to do so.
There's a lot of corner cases that break under static linking - anything that dlopen()s stuff using a private API is an obvious one, and that includes glibc.
> I don't have a security problem with my Linux Desktop as of right now.
Matthew Garrett (who does internal Linux desktop security at Google) gave a great talk at GUADEC about the current state of Linux desktops: https://www.youtube.com/watch?v=DUa-nnjjQcc
TLDR: It's not good. User-level processes have free, unrestricted access to all of your data, unless you use Wayland and desktop containerization.
This is a pretty light-weight article - doesn't go into security/isolation differences, or disk usage comparison. Here's something I came across a few days ago while I was researching the differences: https://askubuntu.com/questions/866511/what-are-the-differen...
That comparison looks quite biased. Features and yes/no answers seem to be carefully picked to promote AppImage. Especially "Objectives and governance".
Treating "Can run without sandboxing" as a feature is dubious.
That you have the option to, for applications that won't work without full access? Yes, definitely. I don't want my format to say "well this file manager application needs to access your filesystem so it's not available for your OS because your packaging format doesn't allow us to run unsandboxed, good luck".
I assumed the goal of those systems was to serve end users with no system or programming knowledge. The ability to escape the sandbox will only cause pain and security holes, while serving one or two advanced users.
Do you mean pro-feature as in it's a good feature, or as in (as is the case for snaps) allowed outside of dev-mode only for paying customers of Canonical?
Evidence? I don’t think that’s the case. There are plenty of non-commercial snaps with system access; it’s really a question of the nature of the snap. You would want something like Puppet to be able to read and write files all over the system, so the snap declaration and metadata needs to say that. I think the only interaction with Canonical is that they need to review snaps which do have filesystem access, to check that it makes sense and try to spot Trojan-horse apps being published that way. It’s not perfect but it’s sensible.
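That kind of access is declared in the snap's metadata. A rough sketch of the relevant snapcraft.yaml bits (app name hypothetical; interface names as documented):

    name: my-config-tool
    confinement: strict   # 'classic' grants full system access and triggers review
    apps:
      my-config-tool:
        command: bin/my-config-tool
        plugs:
          - home          # read/write files under the user's home
          - network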
As a developer having these three formats is a real pain. I wish we had a standard already.
The app I'm working on, Polar (https://getpolarized.io/), is a cross-platform document repository. The Windows and MacOS builds are pretty straightforward.
But with Linux I now have Appimage, deb, rpm, snap, flatpak, and of course tar.gz (but maybe that doesn't count).
Just pick one. AppImage fits the bill quite nicely and does not depend on a certain distribution. It's drop-and-forget: no silly mounts or other "infrastructure" needed to deploy.
I haven't used Flatpak, but Snap is quite annoying and it was one of the reasons I switched from Ubuntu to Debian. Even if you uninstall snapd in Ubuntu 18 you are still stuck with the annoying mount points. That and the auto-updates.
Saw a comment from you the other day on HN. Was really excited, even installed snap to try it out.
For some reason the application is not automatically in my path after installation. That’s enough for me to go back to apt, and unfortunately Polarized is collateral damage.
I preferred installing from a ‘repo’ which updates my software automatically, something it seems that your deb does not do.
It will be on the next reboot/login. Due to some bugs, setting up PATH is hard (harder than other variables, and much harder than it should be), so we did the best we could (require a logout/reboot).
I feel this article misses the point entirely. Flatpak runtimes - which unbundle a huge number of dependencies (such as GNOME's entire platform library set) and allow independent security updates for them - aren't even mentioned. Neither is sandboxing.
I was hoping for something interesting about the technical tradeoffs between the three formats, and why I might want to have Fedora support Snap in the future (or why I might not), etc.
Interesting to read the other comments and discover that Flatpak's really oriented towards desktop software only. I didn't know that.
I agree with you that there is nothing new to this sort of thing, but why does there need to be a common standard? Personally I'd prefer a few different standards with their own ideas competing instead of a single one that is just some compromise between the different ones.
"Let a thousand blossoms bloom" isn't the right approach for package management. A package manager is IMHO not the place for creativity, competition, bikeshedding, and fragmentation because that's what we have already. The focus should be on winning devs to actually use a package format, so that users can more easily install software without upstream devs needing to maintain yet another format.
We're not talking about thousands of formats... only a few formats make up the bulk of what's actually in use.
I see nothing special about package managers that makes competition undesirable. I think creativity and competition are good for package managers. Besides why does an upstream dev need to maintain these new formats? They can choose to support whatever they want.
Personally I don't understand why this is a big deal. Any fragmentation is the result of reasonable people disagreeing on the best approach. The plethora of different ideas is a feature of the ecosystem not a bug.
> I see nothing special about package managers that makes competition undesirable.
The most common rationale I hear for using Snap/Flatpak/AppImage over traditional deb/rpm is that the fragmentation of the latter is burdensome [1].
One of the main claims of Flatpak is "The days of chasing multiple Linux distributions are over. [...] Create one app and distribute it to the entire Linux desktop market." [2]
If you are designing something to solve the problem of fragmentation, and by so doing you increase fragmentation, you've achieved the opposite of your goal.
Just to be clear, I see no reason to use Snap/etc. over older packaging solutions. However, I wouldn't necessarily agree that this automatically increases fragmentation. If Snap/etc. were to support Ubuntu, Debian, Red Hat, etc., they could provide maintainers a single target across many systems. Then the maintainer could just target that solution and be done. Of course this requires them to win that developer marketshare, but if they believe they can, then they believe they can achieve their goals without fragmenting the system.
I personally don't see the need for these systems, but I still see absolutely nothing special about packaging. No maintainer has to support these new packaging methods unless they want to. If some developers believe they can introduce new and improved packaging systems, I wish them luck.
Because developers want to target the whole Linux desktop segment without having to reproduce the packaging process several times. Has nothing to do with whether the format is better or worse as long as it is consistent. Fortunately, gems like FPM have attempted to address this issue.
One of the main advantages of Linux and all the different distributions is the freedom users have to run a system that matches their needs. This has been the case for decades. There has never been a single packaging solution and I see no need for one. Of course packagers want their lives to be easier, but the request to target the whole Linux desktop segment without reproducing the packaging process is pretty unreasonable, given that different users simply want different packaging systems. Developers should weigh the advantages/disadvantages of supporting (say) Ubuntu or CentOS just as they do for Mac and Windows.
If you want to innovate, innovate in user-facing areas. This kind of tinkering is the reason Linux didn’t take off anywhere, except for back end servers, where it won because of being free and because of Linus’ community building skills.
Android had to lock it down almost completely to gain acceptance.
They even call themselves "next generation" while encouraging the 1990s "setup.exe" deployment model: no centralized management of security updates, no vetting process from a trusted 3rd party, large applications.
To be fair, that's true for all new-fangled package managers including Docker, and it's exactly what's painful about them: they're trying to solve a problem (that of mixed libraries on Linux distros) by bypassing shared library loading, thereby defeating the purpose of shared library loading (preventing stale/insecure libs) in the first place.
If that was the goal, why not just distribute statically-linked binaries or distribute into /opt package prefixes, which would be the natural solution? I guess there's no problem that can't be solved by another layer of abstraction, except the problem of too many layers of abstractions. Again, https://xkcd.com/927/ comes to mind.
Snaps seem to work well where they work, but they're limited by their systemd dependency. So they're most useful on an LTS Ubuntu or Debian stable, to get a more up-to-date (or missing) version of a package. But if you're in a more esoteric environment with a different init, Snaps do you no good - which is a pity, because that's where they would actually be the most useful to me.
Flatpak and AppImage aren't limited in this fashion, but they have few packaged apps, and those are generally apps I already have access to.
In practice, I've found Guix, Nix, and Docker to be most useful solutions to missing/outdated apps, though these are more complicated than Snap, Flatpak, or AppImage.
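For example, pulling a newer tool onto an older system without touching system packages (package name as in nixpkgs/Guix):

    # Nix: a throwaway shell with the package available
    nix-shell -p ffmpeg
    # Guix: install into your per-user profile
    guix install ffmpeg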
> Snaps seem to work well where they work, but it's limited by their systemd dependency.
A more serious issue with snaps is that they rely on AppArmor as a security mechanism, which is not actually present on most Linux systems (only Ubuntu variants, SUSE and Solus). The snaps will still run elsewhere, but not with the same security as you might think you were getting.
> will installing snapd not install AppArmor as well?
It can't, because AppArmor is a kernel-level feature that also requires some level of integration into the rest of the distribution. Red Hat/Fedora-based distributions already use SELinux in place of AppArmor, so using snap on those systems can't have full security capabilities (making SELinux and snap work together would be non-trivial, and I don't think anyone is motivated to do it).
Flatpak uses other mechanisms for limiting the access that applications have, so does not rely on either AppArmor or SELinux being on the host system.
Something Flatpak does well that wasn't mentioned is sandboxing applications (I think snap can do this as well). I sometimes use proprietary applications via Flatpak so I can feed them only the resources they need. Discord, for example, does not need to see my files or my running processes for its running-games feature, so I just restrict it with Flatpak. Sandboxing is nice for proprietary software, or software that collects info, or really any software that connects to a server: you can snip out access it doesn't need while it hopefully still functions well.
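A sketch of that kind of restriction (app ID as on Flathub):

    # deny Discord access to the home directory
    flatpak override --user --nofilesystem=home com.discordapp.Discord
    # check what the app can currently reach
    flatpak info --show-permissions com.discordapp.Discord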
> Almost all popular applications on flathub come with filesystem=host, filesystem=home or device=all permissions, that is, write permissions to the user home directory (and more), this effectively means that all it takes to "escape the sandbox" is echo download_and_execute_evil >> ~/.bashrc. That's it.
> To make matters worse, the users are misled to believe the apps run sandboxed. For all these apps flatpak shows a reassuring "sandbox" icon when installing the app (things do not get much better even when installing in the command line - you need to know flatpak internals to understand the warnings).
I have not used Flatpak. Is this description accurate? Also:
> Up until 0.8.7 all it took to get root on the host was to install a flatpak package that contains a suid binary (flatpaks are installed to /var/lib/flatpak on your host system). Again, could this be any easier? A high severity CVE-2017-9780 (CVSS Score 7.2) has indeed been assigned to this vulnerability. Flatpak developers consider this a minor security issue.
There was already a post on this. Basically the argument about home is true, but this is because 1) apps should not use filesystem access but rather portals (if they can), and 2) nothing should be executable in the home folder (no .bashrc, no scripts, etc.).
If I remember correctly, the second argument was about updates not being frequent enough.
So nothing fundamental about Flatpak, more about the infrastructure (lack of updates) and how it is used (we should not allow home access and should use portals, or we should disable .bashrc).
Says who? The purpose of a home directory is to contain user-specific files, including executables. Developers compile their software and write their scripts in their home directories. Even if we made the absurd decision that no file may be executed from that directory, there are many ways to cause harm by simply editing user-specific configuration files (e.g. in ~/.config).
Arguing that the problem is with executables in $HOME rather than Flatpak is incredibly delusional.
Had a pretty bad experience with snap. A project I wanted to try was distributed as a snap package; after fixing many download issues I finally got my package. It didn't work, but that was okay - I was going to make some changes anyway. It built from source just fine, but it turned out the project itself depended heavily on snap's sandboxing, so I'd have to create a snap package anyway. Unfortunately at the time (and likely still true), the dev tools didn't like anything that wasn't the latest Ubuntu (recent Debian didn't cut it). Apt purged snapd, but I still had to manually delete a bunch of snapd-related files (systemd units, mostly).
The issue, though, is that we have to make sure everything is safe when passing custom install scripts through bash. In the future the source could get hacked and end up serving malicious commands; the main problem is that it just fetches and runs a script, and people sometimes run these blindly.
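A safer pattern than curl | bash, for what it's worth (URL hypothetical):

    # download first, don't pipe straight into a shell
    curl -fsSLO https://example.com/install.sh
    # verify against the checksum the vendor publishes
    sha256sum install.sh
    # actually read it before running
    less install.sh
    sh install.sh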
I looked at these a tiny bit already and think it'd be better to package in each distro's standard channels. I'd be interested in good reading material about packaging for Debian apt, Ubuntu PPAs, Arch AUR, Red Hat rpm, etc., if anyone's got some.
It's more like a .exe on Windows. It requires more disk space because some dependencies might be installed twice or more, but it saves the end users, the distributors, and the developers of a program an enormous amount of headache in having to support dozens of different Linux distros, each with its own dependencies, etc...