Flatpak: A security nightmare – two years later (flatkill.org)
377 points by krimeo on Oct 2, 2020 | 351 comments



Relative to running the same application without flatpak:

- You'd expect the application to run as your user and have full access to the home directory. Many applications expect to have this access. This is what it means to have user space applications. Flatpak is not about fixing or changing this.

- You'd expect individual applications might have out of date dependencies. Depending on the application this might be completely fine. This is not a security risk of flatpak but of the application. Developers are responsible for releasing updates, not flatpak.

So, works as advertised and intended. If your flatpak install has security bugs, update it. If you use a Linux distribution that ships an out-of-date flatpak whose security issues have already been fixed upstream, consider using a different one or update it yourself. If you use flatpak apps from developers that can't be bothered to release updates or fix bugs, consider using something else.

Basically, flatpak not being a middleman between you and the application developers kind of is the whole point. They don't mention sandboxing on their frontpage, at all. This is not a goal. Instead, providing a consistent cross-distribution installation experience is the main goal. They are not trying to be more secure than manually installing packages on your system.

So, these are not security bugs as far as I can see. It's not a security nightmare. It's fine. It actually looks interesting and I might give it a try if/when I get rid of my Mac.


> They don't mention sandboxing on their frontpage, at all.

But the apps themselves still show "Sandboxed" when they aren't[1]. It's trivial for the flatpak "store" to just hide "sandboxed" if the flatpak requires access in its manifest. The Microsoft/Windows Store does this. UWP apps are sandboxed and permission-managed, but if a store app requires full access, the permissions section[2] says:

> This app can

> Access all your files, peripheral devices, apps, programs and registry

[1] See first screenshot in OP article.

[2] https://www.microsoft.com/en-us/p/microsoft-flight-simulator...
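
For what it's worth, the permission data a store front-end would need is declared statically in the app's metadata, so this is cheap to check. A minimal sketch from the CLI (using GIMP's Flathub ID, which per comments below ships with filesystems=host):

    # Show the permissions a flatpak declares up front; a store UI could
    # simply suppress the "Sandboxed" badge whenever this includes
    # "host" or "home".
    flatpak info --show-permissions org.gimp.GIMP | grep '^filesystems='
    # should print something like: filesystems=host;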


That screenshot is of GNOME Software, not of a Flatpak app. GNOME Software decides how it describes apps.

The GNOME designers are working on a redesign of this area: https://gitlab.gnome.org/Teams/Design/software-mockups/-/blo...

There are other software installers that can install Flatpak apps, e.g. KDE Discover.

So this is a valid criticism of GNOME Software, but not of Flatpak.


You're right, I checked Plasma (Kubuntu) and it doesn't show the same "Sandboxed" indicator for Flathub. This is more of a GNOME issue, even a serious one if it has existed for a whole 2 years.


Great, let's file a bug report; otherwise the GNOME devs may never know about this defect.


> But the apps themselves still show "Sandboxed" when they aren't

Redefining the term and then claiming that what someone else meant by it is not in line with what you meant by it is ... well, it is messed up.

They are sandboxed, but rightly Gimp has access to your whole filesystem because that is what people who install it expect.

Of the 30 apps I have installed via flatpak only 6 have access to the whole FS or my entire home directory (Zotero=home, Krita=host, KolourPaint=host, Inkscape=host, Gimp=host, Geeqie=host), and of those 6 only 1 (Zotero) seems like it may be fine without it.

Having access to the host FS in no way means to me that Geeqie is not "sandboxed". It just means that what I would expect is the case. If you want to change it, go open up flatseal and change it.
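
If you want to reproduce a tally like this from the command line instead of clicking through flatseal, a rough sketch (assumes a flatpak recent enough to have --columns):

    # List every installed app together with its declared filesystem access.
    for app in $(flatpak list --app --columns=application); do
        fs=$(flatpak info --show-permissions "$app" | grep '^filesystems=')
        echo "$app ${fs:-filesystems=(none)}"
    done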


> They are sandboxed, but rightly Gimp has access to your whole filesystem because that is what people who install it expect.

They can be sandboxed, but that doesn't mean every Flatpak package should be labelled "Sandboxed". Especially if said app isn't actually sandboxed and has full filesystem access.


Except that with "old school" package managers you get security updates without the application or the developer having to act on it. I don't know if it's "a nightmare", but it's worse security compared to the previous solution.


Nobody should be relying on dynamic linking as the primary way they keep their home computer secure. Enough applications statically link their libraries that this is not a reliable strategy.

If you can't trust an application's author to keep their code up to date, it's probably also a mistake to trust them to use your system libraries in a way that minimizes their attack surface.

I also have problems with Flatpak's security model, particularly with their UX around permissions. We already learned from Android/iOS that allowing highly permissive manifests out of the box was a bad idea. But at least Flatpak is trying to do something, and that something seems like a tangible improvement to me. The idea that our systems will be secure because we dynamically link everything needs to die; it's not a viable security model.

Even if the only thing that Flatpak does is kill the "dynamic libraries make me secure" perspective, that on its own will be worthwhile.


Distribution packagers often remove statically linked libraries from upstream and make the software dynamically link anyway.

I'm not saying that the UX of native packages is great for end users. I'm just saying that the security model is saner.


That cat is already out of the bag. How many Ubuntu users have custom software sources enabled; how many of them are installing games from Humble Bundle that come as their own `deb` packages with statically linked assets? GoG installed Linux games don't even use the package manager at all.

Heck, there's a decent amount of Linux software being distributed right now as `tar.gz` files with an install script that just extracts the folder someplace and then modifies the user's path.

Flatpak improves on all of that.

It's one thing to look at the Linux security model and say that OSS authors should coordinate with distros, but OSS authors and general Linux developers already don't do that today. So how is Flatpak making that situation any worse?

Developers were already abandoning the sane security model. Flatpak is a response to that trend, not the cause of it.


>How many Ubuntu users have custom software sources enabled; how many of them are installing games from Humble Bundle that come as their own `deb` packages with statically linked assets?

I don't have an answer to this question, but I would never do this. If the game was open source, I would rebuild it to be dynamically linked, and then publish the dynamically linked package.


If this is genuinely your philosophy then Flatpak isn't a real problem for you, because you can take any open source application that's bundled with Flatpak, rebuild it to be dynamically linked, and then publish the new package using whatever package manager you prefer.


Or, alternatively, create a new flatpak runtime and rebuild it for that.


The distribution can recompile dependent packages when required, without the upstream program having to release a new version, no matter how the libraries are linked.


In which case, the problem isn't Flatpak, it's packages that are distributed outside of the distro's control. If Ubuntu was the ultimate distributor of every Flatpak on their system, they could also recompile and update the dependencies for those Flatpaks whenever a security risk was discovered.

Should we get rid of the AUR while we're killing Flatpak?


If applications are distributed by the same people, at the same frequency, linked against the same versions of the dependencies, why have two distribution mechanisms at all? What you're describing sounds like it could be a separate APT channel ("backports", "non-free", "edge", hell call it "flatpak" if you want) but doesn't need another package manager and package format.


That's what I was getting at -- the defense that package managers handle everything in the existing system falls apart as soon as you start compiling packages from source, adding custom repos, or installing proprietary packages, many of which will statically link their dependencies or use custom versions of libraries. There's nothing a distro maintainer can do about them.

The "authors won't keep their own software up to date" point isn't an argument against Flatpak in specific, it's an argument against custom package sources in general.

If we really don't trust authors to update their own software, then we ought to be pushing for even stricter sandboxing and system isolation than what Flatpak currently provides. The cat is already out of the bag: people already run software on their system from the AUR, from private repos, from custom Linux installers that just bundle everything into a tar.gz. Dynamic linking won't save you from them.


> If Ubuntu was the ultimate distributor of every Flatpak on their system, they could also recompile and update the dependencies for those Flatpaks whenever a security risk was discovered.

If Ubuntu was the ultimate distributor of every Flatpak and had the means to recompile and update the dependencies for those Flatpaks whenever a security risk was discovered, they wouldn't need Flatpak. They could just maintain the software the way they've always done before.

Consider what the advantages to Flatpak really are.

1. Anyone can publish them (low standard of inclusion, which I hope isn't Ubuntu's goal)

2. You can easily distribute closed-source software with the exact dependencies they were compiled for (which Ubuntu can't change unless they have access to the source code and its maintainers are willing to provide that charity to closed source software shops)

3. You can easily install the software on any system with a recent enough kernel (Ubuntu being the sole provider wouldn't change that)


It's not as robust as it could be, but Flatpak pretty objectively has better sandboxing than a .deb package once you get rid of the dynamic linking problem. That's a good thing, because not all security bugs come from dependency management. But, ignore the sandboxing aspect for a second:

1. Ease of distribution of software should be a goal on Linux if it's not already. The lack of ease of distribution of software contributes to decreased support from developers on other OSes, increased pressure and stress for distro maintainers, and some truly awful installation methods outside of the official package managers.

2. Proprietary software largely already does this, so granted, using Flatpak doesn't really change anything. But that's also kind of the point; proprietary software already ignores the software center, so adding better sandboxing to those apps is a really good idea.

3. There are reasons to want the ability to link against multiple versions of the same dependency even if you're getting all of your software compiled by a single source upstream. Reconciling dependencies if you're a distro maintainer is hard. Being able to quickly update any app that's using an outdated dependency without worrying that you're going to break another app is kind of nice.

I'm not saying that Flatpaks should be centrally managed. My point was that people are already distributing software outside of app stores. We already have the AUR, we already have `tar.gz` files (even from Open Source projects), we already have custom repos that might break/rename/embed old dependencies.

Flatpak is responding to a trend that already exists; trying to make it slightly better so that instead of a random install script somebody pulls off Github to curl dependencies, they get a well-defined package that's less likely to bork their system or introduce a security hole into their other applications.

And that's good. We want to live in a world where people can install software from any source -- we wouldn't use Linux otherwise. But any system where people can install software from any source is going to have the same issues with dependency management and with trusting the author to keep their app up-to-date. All of the problems people are throwing at Flatpak in that specific area are just criticisms of the fact that Flatpak isn't centrally controlled. They're not really criticisms of the technology itself.


Those problems were brought up as objections the very minute that Flatpak and Snappy arrived.

OP has just provided concrete examples of those problems appearing in the real, existing public repos for Flatpak.

This is not surprising.

It has turned out exactly as expected: Upstream developers cannot be trusted to update dependencies on a timely basis.


> Upstream developers cannot be trusted to update dependencies on a timely basis.

And the current system, even without Flatpak, doesn't force them to. You're not arguing against Flatpak, you're arguing against decentralized app distribution in general.

You are already living in a world where upstream developers can decide to distribute their software using channels that distro maintainers can't control, update, or patch.

Flatpak changes nothing about that arrangement, it merely acknowledges that the problem exists and tries to make it slightly better. The alternative to Flatpak for a lot of Open Source devs isn't an official Debian package, it's a tar.gz file, which is just objectively worse.


Static linking is dead. Most applications cannot be built statically, at all.

Solaris-style RPath is still an option on Linux, but it is rarely used.

As far as “security,” dynamic linking is not an attempt to contain applications, but rather to fix as many things as possible with as few updates as possible.

Flatpak actively encourages upstream developers not to participate in the wider picture, and as it turns out, upstream developers can’t be trusted to fix security holes in a timely manner.

This was a widely foreseen issue that was pooh-poohed in the early days of flatpak, but, well, look how it turned out.


A lot of languages like Rust and Go (but also C++) mostly do static linking.


Golang doesn't really support static binaries. This may be confusing, since it compiles in all golang dependencies, but by default, all output is dynamic, and it's quite difficult to get static binaries out of it at all.

The golang model expects all golang deps to be distributed as source code, and then it expects to dynamically link to all C dependencies. It is technically possible to diverge from the latter set of expectations, but it is not easily or commonly done.

C++ is much like golang in that respect. Template libraries are always "static" in the sense that they are distributed only as source code, but the final binary output is almost always a dynamically linked binary.
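
For what it's worth, the usual way to diverge from those expectations is to turn cgo off, which swaps the C-linked code paths (DNS resolution in net, os/user, ...) for pure-Go fallbacks. A sketch, assuming a program that imports such packages:

    # Default build: dynamically linked against libc via cgo.
    go build -o app .
    ldd app              # lists libc.so.6 and friends

    # Disabling cgo forces the pure-Go fallbacks and yields a static binary:
    CGO_ENABLED=0 go build -o app-static .
    ldd app-static       # "not a dynamic executable"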


> Most applications cannot be built statically, at all.

That’s a pretty bold claim. Can you identify a specific C program that cannot be built statically, no matter how hard you try?


GTK does not support static linking, so that rules out pretty much all GTK applications: https://mail.gnome.org/archives/gtk-list/2018-January/msg000...

The question about it comes up every so often, but nobody has ever put up the effort to maintain it. It appears to be a lot of work for very little gain.


Certain libs use dlopen(), which needs dynamic linking: Mesa, PAM, Qt (static is possible but a lot of stuff will break), glibc (static is possible but nsswitch.conf will break), ALSA. So most graphical apps will fail to be purely static (probably possible, but it will be hardware/config specific). So we need to tackle this and provide alternatives to dlopen.
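
The glibc/nsswitch.conf case is easy to demonstrate (a sketch, assuming gcc and glibc):

    # getpwnam() resolves users through NSS modules that glibc loads with
    # dlopen() at run time, so even a -static binary is not self-contained.
    printf '%s\n' '#include <pwd.h>' \
        'int main(void) { return getpwnam("root") == 0; }' > nss-demo.c
    gcc -static nss-demo.c -o nss-demo
    # gcc warns that using 'getpwnam' in statically linked applications
    # requires at runtime the shared libraries from the glibc version used
    # for linking, i.e. the binary still loads libnss_* dynamically.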


Running NSS and PAM modules in-process was IMO a historical mistake. Convenient in one aspect, rather inconvenient and impractical in many others.

If we were to redo these today, I think we'd do them out-of-process and talk to them over UNIX domain sockets or pipes.

The transitive dependencies of NSS and PAM modules can also wreak havoc with your application. There are many bugs which have arisen because of these limitations. Yes, looking at you, GnuTLS.


OpenBSD chose BSD Auth over PAM for this reason. With BSD Auth the C API invokes binaries in /usr/lib/auth which are SUID or SGID according to which permissions are required for the credential store.


> Qt (static is possible but a lot of stuff will break),

as a user of statically-built Qt I haven't noticed any particular breakage


> identify a specific C program

Well, not sure about C (but I assume something using gtk could be a pain), but if we expand that to C++, I mean, I would expect firefox and chrome to be basically unbuildable in a static manner.

> no matter how hard you try

Well, no. If you are willing to rewrite it (even significant changes), of course it will be possible to built it statically. That does not mean it is doable in practice.


I doubt they meant "no matter how hard you try" nor were limiting this to C programs only. Something the size of Firefox, for example, with hundreds of files, multiple languages and dozens of platform variations, is simply not designed for a fully static build and trying to modify it to build like that would be an absurd amount of work for basically no reason, so it hasn't been done and probably never will be.

Add licensing complications to that (Qt is notorious for this) and OP's statement is for all intents and purposes true.


Perhaps so, but Firefox is but one program among the hundreds or thousands that are available in a typical Linux distribution. "Most" implies a quantitative analysis (i.e., more than half), which I think is disputable.


Firefox has a hard dependency on GTK, so it is literally unbuildable as a static binary.


Firefox. Chromium. KDE. Gnome-shell. Libreoffice.

Static linking is an atavism. Application authors don't spend time thinking about it any more.

The main reason static linking persisted as A Thing into the 1990s was that fossilized vendor UNIXes didn't support dynamic linking, or didn't support it well. Those systems are all extinct.

About the only place you still see static linking is embedded. And that is its own world.


The two weaknesses cited by the OP to support your argument were CVE-2019-17498 and CVE-2020-12284. That means if you use Red Hat's app store to illegally torrent media content and pentest strange ssh servers, you might get pwnd. Tears are flowing to mine eyes.


I think the industry has moved past believing that dynamic linking has a net security and usability benefit over static linking/sandboxing, even if you heavily bias toward preferring security. The three most "modern" and popular linux "package managers" made in the last decade (Snap, Flatpak, Docker) all bias toward static linking and sandboxing (in the case of docker, forced sandboxing, but the others are opt-in).

Sidenote: I include docker as a package manager in that list because recently I actually ran into a very large and popular command line application that, per their documentation, prefers distribution and execution via docker over anything else: aws-cli. They don't keep any other linux package managers up to date, despite having a package in snap and distributing a .tar.gz. They recommend invoking it via docker. Rather interesting, and I'm surprised we haven't seen a frontend for "docker as a pm" gain traction, to increase usability over "go into your .bashrc and alias aws=docker run aws/aws-cli:latest $*"
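
For the curious, such a front-end is mostly just shell plumbing. A hedged sketch (an alias can't actually forward arguments, so you need a function; the image name and bind mounts follow the aws-cli documentation):

    # ~/.bashrc: "docker as a package manager", one function per tool.
    # $HOME/.aws carries credentials; mounting $PWD at /aws (the image's
    # working directory) makes file arguments resolve.
    aws() {
        docker run --rm -it \
            -v "$HOME/.aws:/root/.aws" \
            -v "$PWD:/aws" \
            amazon/aws-cli "$@"
    }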


I think the industry has "moved past this" because for SaaS platforms, it takes a lot less labor to enforce sandboxing in-house applications on the cloud than to maintain dependencies across the shop. However, for an OS distribution to end users devices, where applications inevitably ask for an are granted MUCH more permissions than they need, this is not a safe model. It is CLEARLY superior for users and for operating system maintainers, or others responsible for user security, to update dependencies using the traditional dynamically linked model. No work needs to be done to consider the compilation tooling of each individual application.

The existence of Flatpak etc. is a misguided concession from the FOSS community to private businesses and to software developers who are used to packaging for the cloud. The model is not appropriate for desktop or mobile software -- there is a reason that filesystem access is extremely restricted or impossible on mobile platforms, and that even Android has moved to dynamically prompting for permissions when apps use them rather than bundling them into a single prompt at installation.

For Flatpak to work, we need MUCH heavier sandboxing, which will be a detriment to productivity for desktop users for certain classes of applications, and may prevent many types of applications from being packageable as Flatpak. I think this is a fine compromise. I'm OK with installing Spotify as a Flatpak (given better security, not now) but keeping my webserver, database, file manager, terminal, programming language, and other "system" software in the OS repository.


Snap, Flatpak, and Docker applications almost universally use dynamic linking.

They just use some combination of rpath fields and virtual filesystems to make sure the dynamic linker only inspects the paths mandated by the application author.


Docker dynamically links; you don’t need to recompile the world to update a docker image. You just regenerate the docker image.


It can dynamically link within the container, but as far as I know the container as a whole does not dynamically link with the outside world. It's effectively a chroot with a full system image inside that root filesystem.

And that's what we're talking about, effectively; how self-contained and sandboxed the package itself is.


> Developers are responsible for releasing updates, not flatpak.

That's what distribution maintainers do.

> They are not trying to be more secure than manually installing packages on your system.

That's a downgrade from the distribution experience.


The primary reason I have when considering using a flatpak or appimage is for software either not commonly included in distros (eg. proprietary software like games) or that are updated more often than the distro cares to manage (eg. browsers). It fills the gaps in distros.


You do not have to abandon the distribution model. I've just installed Nix on Arch Linux [1]. It ships all required dependencies (glibc etc.), version specific. Works on Ubuntu/Debian [2].

[1] https://wiki.archlinux.org/index.php/Nix

[2] https://ariya.io/2020/05/nix-package-manager-on-ubuntu-or-de...


> You'd expect individual applications might have out of date dependencies. Depending on the application this might be completely fine. This is not a security risk of flatpak but of the application. Developers are responsible for releasing updates, not flatpak.

Not so much in a system that links executables dynamically to system-provided libraries (e.g. Debian), or provides system-wide maintenance updates to statically linked binaries when their dependencies receive important updates (e.g. Alpine).

...which really covers every sane, actively maintained *nix system with a package manager, so in that sense Flatpak is definitely a step down. Instead of trusting, say, the Debian or OpenBSD maintainers to provide timely security updates, you have to trust each individual vendor of your flatpaks.

The only advantage I can see for them is in providing closed source software, which OS maintainers simply can't maintain and update. A developer advantage may be that the standard of inclusion is much lower than for Debian or OpenBSD, but that's IMO a user disadvantage.


> Instead of trusting, say, the Debian or OpenBSD maintainers to provide timely security updates, you have to trust each individual vendor of your flatpaks.

Well, yes, and it's the most obvious approach. You're trusting the person producing the product to produce a good product.

The distro approach is great, but it doesn't scale well for obvious reasons and it makes software development harder due to downstream issues often being reported as upstream issues.

I appreciate and enjoy the distro approach for what it gives me, but there should always be a way for developers to provide packaging that works everywhere and then distro maintainers can feel free to recompile/repackage as they please.


> You're trusting the person producing the product to produce a good product.

That's a generalization. Specifically, I'm trusting the person producing the product to do something that e.g. Debian has a proven track record of doing quite well: maintaining dependencies so that I have a secure (and consistent) system altogether...so well that I'd rather trust them to do it over any existing flatpak vendor, and definitely more than trusting n vendors for the n flatpaks I have installed.

> The distro approach is great, but it doesn't scale well for obvious reasons

In what aspect does it not achieve the necessary scale? It's not obvious to me what you mean at all.

> and it makes software development harder due to downstream issues often being reported as upstream issues.

A cost of system-scale software maintenance, for sure.

> I appreciate and enjoy the distro approach for what it gives me, but there should always be a way for developers to provide packaging that works everywhere and then distro maintainers can feel free to recompile/repackage as they please.

I don't fundamentally disagree. My point is that I think it's a security risk to leave it to every developer to maintain their dependencies as they see fit, and that this is a problem that flatpaks create that a well maintained distro software repo doesn't have. It's better to acknowledge this risk and consider it to grow as you install more flatpaks than to ignore it, expecting that every flatpak vendor will respond in a timely manner to dependency CVEs; the trust I place in Debian—by necessity very high—now has to be multiplied for each individual flatpak.


> This is not a security risk of flatpak but of the application. Developers are responsible for releasing updates, not flatpak

If it were a regular application installed by a Linux package manager, the dependencies would be automatically updated to fix security issues (at least in most cases).


> You'd expect the application to run as your user and have full access to the home directory. Many applications expect to have this access. This is what it means to have user space applications. Flatpak is not about fixing or changing this.

I as a user could at least expect a sandbox that protects a rogue application from modifying or even reading critical files without user consent: .config of other applications, .ssh/, shell configs (profile, rc). A web browser may require create-write access to ~/Downloads but not arbitrary read or write access anywhere else, with the exception of save-as/upload explicitly requested by the user, and that could be done by the OS file selection dialog giving back the filename + a token. And so on.

Perhaps it could even be abstracted to a common "profile" for application classes, and go even further than that and have stuff like raw USB/Bluetooth not authorized by the browser but by the OS/sandbox layer (to prevent attackers exploiting a browser bug).
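
Incidentally, this is roughly the model the xdg-desktop-portal file chooser already implements for flatpak apps: the dialog runs outside the sandbox and only the user's selection is handed back. A rough sketch of poking it by hand, assuming gdbus and a running portal service:

    # Ask the portal to show a file chooser; the app never browses the
    # filesystem itself.
    gdbus call --session \
        --dest org.freedesktop.portal.Desktop \
        --object-path /org/freedesktop/portal/desktop \
        --method org.freedesktop.portal.FileChooser.OpenFile \
        '' 'Pick a file' '{}'
    # Returns a request handle; the chosen file comes back in a Response
    # signal and, for sandboxed apps, appears under /run/user/$UID/doc/.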


So wait ... It's in my way for no reason?

/snark

I get the idea of Flatpak. I want some kind of app sandboxing. Ideally with a good UX and shared libraries, however.


> Instead, providing a consistent cross-distribution installation experience is the main goal.

Who is asking this exactly?

As an Arch user and technically a Fedora, Gentoo, Slackware, Ubuntu, Debian user, I'm not asking for this.

I like how I install packages.

Installing garbage from a webpage like some windows pleb or having to eject weird desktop download things after drag and dropping things on a Mac is not something I want.

Please stop pushing this idea about flatpak being better; it's not for a lot of users, and for me, security doesn't even enter the equation.

But the idea that a random flatpak maintainer has better security processes in place than the sec team for many distros is laughable at best.

You can wrangle the argument all you like about ideal-world scenarios, but the fact of the matter remains that the suboptimal ideal still works out as more effective in the real world. Tragedy-of-the-commons problems generally do require suboptimal solutions in the small to solve in the large.


There’s a lot of good software that isn’t in package repos. This isn’t a failure on the developers’ part. It takes knowledge and time and effort to package for a repo, let alone all the different repos on each distro. Flatpak gives a single way to distribute something, and on that merit alone it’s valuable.


Almost every package manager on Linux distros allows you to install packages outside of the main repositories and still manage dependencies using the repos.

I agree there are a lot of distros to package for, but I'd say the bonus of not having to maintain and update all the dependencies outweighs that fact.

For example, there are a lot of tools that can generate Debian packages for you if you don't want to spend any time learning the manifests. I assume these exist for most distros.
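
fpm is the usual example of such a tool (a sketch; the package name and paths are hypothetical):

    # Turn a directory tree into a .deb in one shot, no debian/ boilerplate.
    fpm -s dir -t deb -n myapp -v 1.0.0 ./myapp=/usr/bin/myapp
    # Swap "-t deb" for "-t rpm" and similar to target other formats.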


Even if there are easy tools available, I'd still need to learn them for each distro I want to support. Heck, just figuring out the different names each distro uses for a particular package is a fair bit of work.


Just use GNU Stow as a secondary package manager and install into the /usr/local tree.


The discussion is about packaging your software for users, not yourself. Stow is a solution for managing non-packaged software on your personal system, not for distributing it.


Most people don't package their software for distros.

Smart distros keep things close to upstream so the burden is small.

This is a weird argument.

Now you want each developer to manage dependency updates for their application? I don't trust them to develop their own app let alone apply security patches properly.


Again, to whom?

Arch has the AUR, Arch can sideload RPMs/debs, Mac has Homebrew; who needs this?

The only people I see using flatpak are the 50-odd Linux users we have at work; they constantly have dramas trying to make things work around home directory sandboxing, permissions, updating and more.

> Single way

Why is this valuable? Just saying it doesn't make it true.

Why do we need a single way to install?

I'm not saying it's a bad thing, but giving up home dirs and a bunch of other platform specific benefits is a cost I'm not willing to pay for some made up ideal that gives me a worse experience.


> Again, to whom?

Are you trolling? I think the answer is obvious: less duplication of effort may make the Linux ecosystem leaner, which may mean more manpower to... bring the year of the Linux desktop.


If you avoid duplication of effort by doing something worse than existing solutions, is that progress?

Distro package managers have more than 20 years doing this. If you want to replace them I think there should be fewer drawbacks. Otherwise you are throwing out decades of design, thought, bug fixes for something more problematic.


You are assuming that packaging is a substantial burden when compared to making a complex application and that that labor translates ultimately into tangible benefits.

If the packaging is done by the dev, you are assuming that the 1% extra effort won't be spent playing chess instead of adding features.

If the packaging is done by an interested party for that distro, you are assuming that a person competent to write a package file is even competent to, say, write an app, let alone interested in doing so.


To me at least, but I'm sure there are more people than me who want this.


As a Gentoo user, I very much welcome being able to pull in some applications that are closed source and/or not part of the portage tree without the hassle of finding out what exactly the base platform they were compiled against looks like. For a while I had to keep a lot of 32-bit versions of libraries around just to make Steam happy, and build the stupidly large qtwebengine just for the Nextcloud client. Nowadays I get both of them via Flatpak without hassle and can keep a leaner base system.


Why exactly do you run Gentoo in the first place if you don't actually value building the dependencies from source?


That's an odd question. I absolutely value having almost everything built from source and the control I get from that through use-flags.

That doesn't mean I enjoy compiling some incarnation of webkit every few weeks for multiple hours on a laptop, in particular if it's just for a single application (in this case: Nextcloud client). Same for 32bit stuff, I have absolutely zero use for these libraries outside of closed-source games, so I'd rather have them just work than full control over them.


As a Gentoo/Funtoo user, I feel like we can solve this problem with better security by running applications in Docker. See Funtoo's approach to Steam for example.

https://www.funtoo.org/Steam


As a free software developer of Windows, Linux and Mac software, I would not want to need to be "approved by the gods" in order to release an application to the App Store / Snap Store / Windows Store and whatever it is called in the different flavors of Linux.

I put my software on my webpage and whoever wants to download it and use it does so. If they don't trust it, well, then I'm sorry.


You can self-host Flatpak repositories with flat-manager, which differentiates Flatpak from the App/Snap/Windows Stores.

https://docs.flatpak.org/en/latest/hosting-a-repository.html

https://github.com/flatpak/flat-manager


Then don't use Flatpak!

The other side is that Software Developer Foo isn't necessarily going to have the resources/inclination to work with your distro maintainer. You may decide to avoid Foo's software as a result, and that's totally fine—but if I like Foo's software and want to use it, Flatpak gives me that opportunity.


I don't and I won't but I'm still looking to point out its shortcomings whenever it comes up.

Staying silent is as good as encouraging the echo chamber outright.

People are pushing this shit because it's trendy not because it's good.


Well, getting the wrong version (years old, not what the developer designated as stable & supported as of yesterday) of absolutely everything I care about is actually kind of really annoying with most contemporary linux distributions.

But basically, as with anything open source, if you don't like it, don't use it, make some other choices, or build your own. I see a lot of people whining and stamping their feet here but not a lot of people building alternatives. Quit whining & start coding if it matters that much to you. As far as I can see there's Flatpak, Snap, and not much else here, because nobody can be bothered apparently.


Microsoft calls typescript stable. I find bugs in it all the time.

I don't trust developers to designate things as stable.


I can see two opensource groups that benefit from this. Maintainers can produce quality packages that don't vary unintentionally between target systems with less effort to produce. Package consumers can rely on having the same quality packages on varying systems. I use Ubuntu Server mostly because its packages are highly consumed. If I could get the same packaging on a different system I wouldn't have to worry about arbitrary variations not related to the distro but different just because it's produced through a separate workflow.


Flatpak is supposed to be the solution for installing proprietary apps while staying safe. For open source software there is less value in using Flatpak, apart from not needing to package in multiple formats for different distros. AppImage also solves the same problem.

Now, is Flatpak secure for installing closed source apps? At least not when they have full X11 access on an X11 desktop.


> As an Arch user

> like some windows pleb

I'm not sure if you're serious or making fun of stereotypes and thus don't know how to respond.


Seriously making fun of windows plebs? Yes. How anyone trusts Mac/windows these days still amazes me.


Not that I expected different from a "l337" Arch user, but "Windows plebs" have had "chocolatey" and the Windows Store for a while now...


It's for the people who spend their free time writing the software that you use for free, and whom you call plebs.


> They don't mention sandboxing on their frontpage, at all. This is not a goal.

...and without this "fake sandboxing" strawman, flatkill.org is left without a convincing argument.

The security trade-offs that come with flatpak are the same tradeoffs that desktop operating systems like MacOS and Windows have made since their inception.

Considering that these are successful desktop operating systems, whereas Linux is not, one might conclude that these are the right tradeoffs, all things considered. Sure, it's "a security nightmare", but we've been living through it for decades. The alternatives clearly aren't good enough to bring everyone to move over.


Scroll down that page. The UI integration problem is real.


I'm not saying that any of these problems aren't real. Many applications on Windows and some on MacOS disregard theme settings as well. That's not a security issue though, and it's fixable.


I strongly believe that both flatpak and docker went about building images the wrong way.

They should have leveraged the package model that Linux distributions already have and viewed packages as mix-and-matchable layers, with full dependency resolution. I.e., creating an "apache" container image should be as easy as "apt-get install apache" but without actually installing anything: just creating a manifest that references the packages/layers that are needed.
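
Concretely, apt can already compute such a manifest without installing anything; a rough sketch of the idea:

    # Resolve the full dependency closure for "apache2" and print the
    # package URIs -- effectively the manifest of layers an image would
    # reference -- without installing or downloading anything.
    apt-get install --print-uris -qq apache2

The missing piece is a store of those packages pre-converted into shareable layers.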

Upgrading a container image is then just upgrading the set of layers in the image. And unlike docker, where two layers with exactly the same content are still different layers if a layer "above" them is different, in a package -> layer concept that's unnecessary.

In fact, this is what I published: I created the notion of container images defined by a file system composed out of a set of shared layers. But Docker took the easy way out and punted on that.

This would also make it simple(r) to do security analysis on container images, as well as to compose multiple images together.

https://www.usenix.org/legacy/events/atc10/tech/tech.html#Po...

https://www.usenix.org/legacy/events/lisa11/tech/#Potter

(which also goes to my in joke that I'm the only one who can get a job that requires 10+ years of docker experience ;) )


Sometimes the easy way is all that there is capacity for. So I agree there are potentially better ways but IMO the world is better off with flatpak and docker than without them and we are free to still investigate and try better ways with all the time we save with docker and flatpak.

I for example use conan(.io) for some binaries - and there you can compose arbitrary packages - and theoretically it can serve as the basis for a package based approach to composition.


HP-UX Virtual Vault was a thing back in 1999, among other examples I can reach for.

Sometimes before getting the capacity, one should learn from what already was achieved in the past.


I agree; I don't fault them for taking the easy way, though I do think it was wrong. I created tools for converting Debian packages into layers (in my research I stored these layers on a shared file system: either multi-attach on a SAN (in practice this would need an fs that can have a single writer and multiple readers, which might simplify things) or NFS).

But Debian packages aren't really made for this directly, as they have their pre/post install/remove scripts. My "hack" was to treat every package as a dpkg --unpack version (i.e. not configured) and then at image "build" time (i.e. each image is really a layer that is a manifest of dependent layers + data created at image build time), we would dpkg --configure --pending all the composed layers (with a bit of magic happening behind the scenes to make dpkg realize that the packages were unpacked but not configured yet).
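
In stock dpkg terms, the two phases being split apart look roughly like this (a sketch, not the actual tooling; the .deb filename is a placeholder):

    # Per package/layer: lay down the files, defer the configure step.
    dpkg --unpack ./apache2_2.4.x_amd64.deb

    # At image "build" time, after composing the unpacked layers:
    # run all deferred configuration in dependency order.
    dpkg --configure --pending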

This mostly worked well, but you could have some ordering issues. A bigger issue is that because Debian doesn't expect this to happen, it creates encryption keys at install time (say, for ssh), not at first-run time. So for example, if one created an ssh daemon image, any instance run from it would have the same key. Not a great idea. Of course, it's possible to engineer my way out of this (the simple case would be to write tooling to recognize these cases and inject fixes into the created images), but it goes to why a direct conversion from Debian packages into layers wasn't a silver-bullet solution. I had the thought that perhaps an Ubuntu-type approach would work, where some packages are imported wholesale from Debian without any changes, but "important" packages are modified to fit better.

As an aside, the 2 papers referenced above were my "job talk" for my post doc at IBM Research, and got me the job. I tried to get support to continue this line of work there, but that never happened (and then my manager left, and I was left working on unrelated cloud stuff for the last 1.5 years there). Of course, this is now important to IBM (see Red Hat purchase).

I wasn't approaching this from the pet/cattle metaphor, so I also gave my system the ability to upgrade instances in place by marking removed layers as unlinked (i.e. ignored by readdir/lookup), and adding new layers. A big advantage here over traditional package management is that this made upgrades "atomic". For example, when you upgrade libc in the traditional package-managed world, your system is temporarily "broken", in the sense that any executable that tries to run in between old libc removal and new libc installation will fail as it's not on the file system. However, in the cattle world of containers, this is probably unnecessary complexity.


The system you describe seems close to Jib [1] or Nixery [2], which are dependency-aware tools built on top of Docker. I think neither they nor the prototype you've described could be as distro/language-agnostic as Docker and ostree/flatpak are. Debian could offer something like Nixery, any distro could, but then it wouldn't be a cross-distro solution.

I personally like appimage, but I don't mind using flatpak because it is so similar to Docker and is also supported on most major distros. Distro repositories aren't going anywhere, and I think flatpak strikes the right balance for users and developers.

[1] https://github.com/GoogleContainerTools/jib

[2] https://github.com/google/nixery


Yes, it's not distribution agnostic. What I'd argue is that "docker style" containers (i.e. created out of a composed set of layers) would benefit from being constructed out of their own layer distribution instead of a package distribution; the benefits one gets from building images in a way that leverages layers would be a big win.

simple examples:

1) We spend time ordering the layers we install in docker so as not to waste space if an earlier layer changes. In layer-aware building, this isn't an issue.

2) If one does a RUN apt-get install XYZ, unless one nukes one's cache on every build (possibly ruining all efforts to not waste space if data comes out somewhat differently), one will not actually be getting the latest packages.

The primary benefit of leveraging an existing distribution is that you have all the software they have already built (and is built to work on them). Which I don't want to downplay the value of that. I understand why the decisions were made to leverage them, I just feel they come with a cost.

I'd also note that the prototype I described was basically finished being built 4 years prior to docker. We've learned a lot since then, but I think we've also forgotten a bit.


My problem with "the package model that linux distributions already have" is that it's not one model and each one often comes out a bit different for unintended/unknown reasons.


When I say package model, it's shorthand for a dependency-graph model between packages, where packages can easily be installed and upgraded independently, along with whatever dependencies they need.

If one really wants to simplify it, I'd say "Debian's package model" as that's what I have the most direct experience with.


Funny you should mention that, as it's exactly what I've had problems with. Even though Debian and Ubuntu use the same package model/format, their separate repositories contain differently made packages with varying default settings, for example. Because of this I've found that less popular software in Debian repositories is less crowd-tested. Flatpak fixes this. I would like to freely choose the best package and use it on the best distribution for my purposes.


In essence, there are two main issues raised:

1. Apps still use filesystem=host/home instead of the intended portals, bypassing the sandbox.

2. Common runtimes have unpatched security issues.

Re 1) This is a backwards compatibility option until app developers start explicitly coding for flatpak compatibility. Given the money involved (~0) I can understand why it's taking some time. As a user one can address this with flatseal[1].

Re 2) This is more a problem of the ecosystem than a problem of the people developing flatpak. And it's not like this problem is not often encountered in other ecosystems. This just shows that the ecosystem is still in its early stages. Still, it is something worth pointing out, and hopefully people at freedesktop.org look into it.

[1]: https://flathub.org/apps/details/com.github.tchx84.Flatseal


>Common runtimes have unpatched security issues.

This is an area where Ubuntu's snap ecosystem does better. Instead of essentially trying to create a meta-distribution, snaps provide a way to run software built for Ubuntu LTS on other distros, at least in theory. They are compiled against Ubuntu LTS runtimes and are built primarily from packages in the Ubuntu LTS repositories. Further, when any Ubuntu based dependencies of snap packages get security updates in the Ubuntu repositories, the owners of said packages are prompted to rebuild their snaps.

In contrast, it's not clear who is assuming responsibility for maintaining flatpak runtimes like org.gnome.Platform, or what is their support lifecycle. It would be better to promote runtimes based on a well-known distribution's packages, since they would get security updates in accordance with that distribution's security policy. For instance, something like https://developers.redhat.com/blog/2020/08/12/introducing-th... looks more promising from a security point of view.


Isn't Snap actually worse by not having a runtime? While it's indeed hardwired to the Ubuntu packages and infrastructure, the author still has to actively do something. When an app uses one of the major Flatpak runtimes, a vulnerability will be fixed for all apps using the runtime once the runtime is fixed; no action from the app maintainer is necessary.


Snaps are built on a "base snap" which in turn is derived from Ubuntu LTS. Further, they can bundle debs from the Ubuntu repositories using the "build-packages" and "stage-packages" manifest keywords. One bonus of this design is that the people writing the manifest don't have to worry about how the deb dependencies themselves are built. They only need to worry about building any third-party dependencies.

Now, flathub does maintain a collection of "modules" -- basically prewritten manifests for building various common dependencies from source. But that collection is still quite small, and writing a working flatpak manifest can involve duplicating a lot of the hard work of Debian and Ubuntu package maintainers.

Recently, Fedora started something promising from a maintainer's point of view by:

* creating a flatpak runtime based on fedora releases, and

* allowing one to specify existing Fedora packages as dependencies instead of having to figure out how to build all the dependencies from source (https://docs.fedoraproject.org/en-US/flatpak/tutorial/).

But it is still in its early stages.


It seems like 2) gets to the heart of what flatpak is and how it works. The argument in favor of traditional packaging over flatpak is that security vulns in dependencies get fixed for all packages at once. The architecture of flatpak allows / encourages package maintainers to update vulnerable dependencies at their leisure. The fact that this is observed in the wild in the linked article seems a natural consequence. What is the mechanism that would cause this behavior to change as the ecosystem matures?


More available resources to update dependencies and hopefully plug in CI/CD pipelines. Curated runtimes by appstore/OS vendors would also help. Also since most if not all the code/images is open source, automated vulnerability scanning. Using approaches from the docker ecosystem that faces the same problem would also help.

All of the above however need resources (ie money) and that's why things are moving forward so slowly.


> Apps still use filesystem=host/home instead of the intended portals, bypassing the sandbox.

The main issue seems to be the fact that flatpak still claims these apps to be "sandboxed", which is simply a lie. Properly communicating this situation to the user is all the article really asks for (and this seems very reasonable).


It's not flatpak, it's the GNOME software center. Not the same thing. However, removing the sandbox picture in GNOME Software for apps that use the filesystem=host/home permissions would be a better representation of reality.


> But is it just this one application? Let's look at the official runtimes at the heart of Flatpak (org.freedesktop.Platform and org.gnome.Platform 3.36 - as of time of writing used by most of the applications on Flathub).

This is why the idea of "runtimes" is bad at the core: existing applications do not benefit from new features and fixes in the latest version of the runtime. But fixing this means that said "runtimes" (ie Gtk, et al) need to actually give a damn about backwards compatibility, which is certainly not something anyone on the desktop side of Linux above X11 cares about (and they want to break X11 too instead of fixing it because "it is too hard to fix it so let's throw everything away and make something new that for some magical reason won't also become hard to fix in the future").

Why the idea of having a standard set of libraries to handle common GUI tasks that are available "everywhere" (in desktop distros at least), just like the C library and X11 library (so far) is available everywhere and can be relied on in the long term (think decades, not weeks) is something that sounds like science fiction instead of common sense is beyond me.


At least Qt cares more about compatibility: https://wiki.qt.io/Qt-Version-Compatibility - Qt 5.x.y releases are supposed to be binary compatible with each other.


Qt 5.x.y might be compatible, but what about Qt 4.x.y or Qt 6.x.y? If some program is linked against Qt 4.x.y it won't get any benefits and fixes from Qt 5.x.y, and the same will happen with Qt 6.x.y.

And TBH I'm not sure if a C++ library can ever be backwards compatible, at least without major hacks (think of automatically generating a "proxy" library that uses the old ABI to forward calls to the new ABI; I think the only C++ API that ever did something like that is Haiku's, so that they support both G++ 2.95 programs from BeOS and modern G++ programs from the same libraries. Even then this is easier for them since they're the OS and can decide they'll always distribute those proxy libraries, which isn't something you can rely on all Linux distros to do), since C++ does not even pretend to have a stable ABI. I know that the G++ developers are trying to keep things compatible, but I'm not sure the C++ designers care that much.

On the other hand on Windows, just a few hours ago I was playing a recent(ish) game made with RPG Maker 2000, which as the name implies was made two decades ago and yet still works perfectly fine. And I have a bunch of older programs that work fine linking against the latest version of Windows 10's libraries, with all the fixes and new features they've got over the years.


> since C++ does not even pretend to have a stable ABI. I know that the G++ developers are trying to keep things compatible, but I'm not sure the C++ designers care that much.

The Itanium C++ ABI is the stable ABI on Linux, and the C++ standards committee (along with those who maintain the Itanium ABI) cares very much about not breaking the ABI. For example, the C++ committee is not able to make any ABI-breaking changes to the standard library, even where it would be advisable.


The Linux C++ ABI has broken in at least one massive way recently, as mandated by the standard.

https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_a...

I remember this having happened multiple times before, though thankfully it has gotten less common over time.

The C ABI is stable, except when it isn't (largefile was a fun one...)


And keeping Qt up to date the system-packages way works fine without needing flatpak et al. I don't think that's coincidence.


> But fixing this means that said "runtimes" (ie Gtk, et al) need to actually give a damn about backwards compatibility, which is certainly not something anyone on the desktop side of Linux above X11 cares [...]

Actually, I'd say hardly anyone above the kernel in Desktop Linux cares. Which is indeed among Desktop Linux's largest issues that will likely never be solved because its culture is unable to even comprehend the problem.


Nah, X11/Xorg is stable. I remember a talk from Keith Packard about a fix in the Xorg font server (which he assumed would not be used by anyone by now, but he noticed a bug report from 5 years before he made the fix and decided that if someone bothered to report a bug then there are some people out there still using it, so it was worth fixing), where he also mentioned how to ensure that any future change won't break backwards compatibility.

Also, personally, some years ago I tried to build a small GUI library I was working on at the time under a Red Hat distribution from 1997 and then run it on my 2017 Debian[0] (the colors are off because I didn't bother supporting colormaps) and it worked perfectly fine, showing that X11 (which is the only mandatory requirement the library has, under Linux at least) and of course the C library (the binaries are linked dynamically, not statically) do have two decades of backwards binary compatibility.

It is just everything else that is built on top of that that doesn't work. At least everything else that modern desktops use, AFAIK something like Motif is also binary compatible going back decades (which is how Lesstif worked anyway since it was supposed to be binary compatible with Motif).

[0] https://i.imgur.com/YxGNB7h.png


Isn't that what happens in open source projects? Different groups of people with different ideas trying to propel the project in their direction?


"make something new that for some magical reason wont also become hard to fix in the future"

this is really well put. I wish people realized that if they are doing essentially the same thing folks prior did, they really will end up with the same level of results. Unless they really have identified a magical fix to entropy.


It might happen though. Look how Git made a neat change in the version control domain.

Admittedly, it takes people aware of the issues with the current stack, and the ability to take a different successful approach.

Those behind Wayland were competent about X11, as far as I know. But for some reasons, adoption of Wayland is still scarce.


The user-facing impact of switching to git is that committing is faster and you get much better branching and merging. The user-facing impact of switching to Wayland is that you stop being able to run GUI programs over SSH.


What you're saying is not true at all, X11 forwarding still works because of XWayland.


There is no particular reason. Making big changes is never easy.


Obviously both Flatpak and Snap have failed. Everybody hates them for many serious reasons. Yet there are awesome ideas behind them. The Mac way of installing apps (by just putting the packages into a dedicated directory) is cool and intuitive, removing a lot of entities a user has to understand. Sandboxing and being able to fine-tune the interactions with the host system and the Internet an app is allowed to perform is a feature that is absurd to lack. Do we need to invent a new format, taking Flatpak's and Snap's mistakes into account, to get all of that implemented in a working and easily usable way?


How has Flatpak failed?

> Sandboxing, and being able to fine-tune the interactions with the host system and the Internet that an app is allowed to perform, is a feature that is absurd to be lacking.

Are these features lacking? I mean, I can open Flatseal and disable Geeqie's access to the network and filesystem, so I'm not sure how they are lacking - maybe I'm missing something.

Of course if I disable Geeqie's access to the host FS it kind of defeats the purpose of the app, but I guess you can technically say it fails in that, for Geeqie to operate as I expect it to, it needs access to the host fs.

EDIT: Seriously though, I am not sure how most of Gimp, VSCodium, PyCharm, Octave, Inkscape, Audacity or VLC will be useful to me or anybody else without host fs access.

Where the heck do you keep your stuff if not on your fs?

If you want to go fiddle with the specific directories that option is available to you in flatseal.

And I just had a look at some other apps like Bitwarden, Teams and Spotify, and there the permissions are much more constrained; for example, none of those have blanket access to the host fs or home directory. Spotify, for example, is limited to xdg-music and xdg-pictures.

Another EDIT:

Flatpak also supports blacklisting directories; see --nofilesystem in https://docs.flatpak.org/en/latest/flatpak-command-reference...

I think support for this is lacking in flatseal currently but I'm sure it is coming.
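For the command-line inclined, `flatpak override` can already do this without Flatseal; a quick sketch (Geeqie's app ID as I believe it appears on Flathub):

    # revoke the blanket filesystem access for one app, per user
    flatpak override --user --nofilesystem=host org.geeqie.Geeqie
    # cut its network access too
    flatpak override --user --unshare=network org.geeqie.Geeqie
    # then inspect what the app actually ends up with
    flatpak info --show-permissions org.geeqie.Geeqie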


Epistemic Status: I've been thinking about this for a while, but have not decided on any conclusions yet.

More and more, I view flatpak-style sandboxing as something kind of like wine. It's very important that it exists, as a stop-gap measure to allow people to run the programs they currently depend on. With wine, that's Windows-only programs; for Flatpak it's proprietary applications, while retaining some control over their permissions. But it's not an ideal long-term solution.

Using wine in the ideal way requires creating gigabytes of duplicate files (since you want to run one program per prefix, because they may require different and incompatible tweaks). Both it and flatpak make it harder to write shell scripts and generally to hack on your operating system. More importantly, both solve problems that, while they aren't going away any time soon, could be solved just by using high-quality, trustworthy software that targets GNU/Linux natively, without the downsides of these technological solutions.

So, while I do use both of these technologies and appreciate them very much, I prefer native packages and would rather put my effort toward better supporting people who want to write software that is distributed that way.


I run Inkscape from Flatpak in my makefiles, so it is not that bad.


> Are these features lacking?

I mean they are lacking if we don't use Flatpak or an alternative, so we have to either use it or invent something better.

> Seriously though, I am not sure how most of Gimp, VSCodium, PyCharm, Octave, Inkscape, Audacity or VLC will be useful to me or anybody else without host fs access.

I don't need any of these to access anything outside a specific directory (or a small selection of such, but I don't need them to access the full home dir, let alone the full root fs, even for reading). Hopefully flatseal can do this.


> Are these features lacking?

Well, as I said I see them in flatseal so either flatseal is misleading or the features are there, and I have no reason to think flatseal is trying to deceive me so I assume they are there.

> I don't need any of these to access anything outside a specific directory (or a small selection of such, but I don't need them to access the full home dir, let alone the full root fs, even for reading).

Fire up flatseal and change the permissions to what you want. I'm sure you could also petition for an xdg-code directory or something, keep all your code there, and request that the packages be changed to only work under there by default, but I suspect most people would not be happy with this.

I am not sure how you expect the package maintainers to know where exactly on your FS you keep your code, I also don't keep mine in my home directory.

And maybe a blacklist would make sense, but if all that is needed is a blacklist then I would hardly say that flatpak failed, because it is not really that difficult to fix that deficiency.

EDIT: Actually blacklisting is supported, see --nofilesystem in https://docs.flatpak.org/en/latest/flatpak-command-reference...

So really everything is there, maybe everything is not available in a nice neat UI, maybe the UX is not what it should be, but the core underlying system is not "lacking" these capabilities AFAICT.


You've misinterpreted me. I don't say Flatpak has failed because it is "lacking" these capabilities. Quite the contrary: I mean we need Flatpak or something like it precisely because that's what it offers. BUT Flatpak and Snap (and some others like them; I can't remember, there were 2 more) seem to have failed because everybody around seems to hate them. There are just so many negative comments around. Therefore we probably need something like Flatpak but way better, so people wouldn't be dissatisfied.


If you must open flatseal config, it's a bad system. What should happen is that everything should be forbidden by default, and attempts to access files or dirs are individually approved (if the file is opened interactively, it's naturally part of a trusted system file picker that approves the access and gives the filename to the app when you click the selection button), and you can make large config edits if you want to.


Seems like a pretty good idea, and it would be nice if someone added this functionality to flatpak. I am still happy that flatpak is there, because it is better than nothing, and for most apps I use it with, the permissions are exactly as I expect them to be.


> I don't need any of these to access anything outside a specific directory

What's wrong with firejail?


No amount of inventing new formats and technologies will solve any major problem under Linux because the ecosystem is not developed by a single vendor. Any solution will not be uniformly adopted and there will be multiple competing solutions, lacking various levels of polish. Also the money under Linux is with servers, so unless that changes, the desktop landscape is never going to approach MacOS or iOS cohesiveness. Too many ideas, too many projects and efforts, too little top-level organisation and too little time or monetary interest for consumer desktop solutions, especially by the largest players who're all focused on the server and cloud and any benefit for the desktop is incidental.


That's not a very useful way of thinking.

The .deb package format, for example, is quite uniformly established and provides a lot of interoperability across many different systems. Of course you are going to find other obscure systems and even big other ecosystems (Red Hat obviously, and derivatives), but the format we are looking for doesn't have to be the single one either. It just has to work well enough and be somewhat universal. .deb-level universality would suffice just fine.


.deb is a format in the same way that .tar.gz is a format. If Fedora or Arch used .deb the problem would be no better, because each distro has its own package names, its own package versions, and its own downstream patches.


That's subjective. IMHO the Linux UX (including both usability and aesthetics) has been improving steadily during the last 20 years as I've observed it. And the only reason I use Linux (Manjaro with KDE Plasma) is my experience is "like Mac but better" (perhaps it's worse in some aspects I don't know).


True: as a long-time Linux user, I agree that the ergonomics and the state of the art have steadily improved, but its competitors on the desktop have also been steadily improving all the while, and they had a considerable head start, so Linux sadly always tends to be almost there but not quite. That is why Canonical expressly targeting user desktops when they started out was a watershed moment, but sadly, that did not sustain because of failed monetisation.


Many people seem to moan that Windows 7 is still the best Windows and that macOS has been on a downward trajectory since Snow Leopard, or whatever. Is it so obvious that other OSs are actually advancing and leaving Linux in the dust? For example I don’t find Windows new app packaging and sandboxing (Store apps) to be much of an improvement over anything.


Could you outline some of these improvements?

I mostly stopped using other operating systems a long time ago, so my impressions of them are partially out of date, but for my previous job I had to use a Mac for development, and lately I'm experimenting with Windows in some virtual machines in order to automate the setup of certain tools for Windows developers. In neither case have I seen some desktop environment feature and thought ‘Wow, I wish I had something like that on Linux!’.


Linux is not an operating system or a platform, it's a kernel. Ubuntu, RHEL, Suse etc are operating systems, but more importantly they are separate independent operating systems. Their interoperability pretty much starts and ends with POSIX, everything else being more of a happy coincidence. And many of those major Linux-based OSs have strong vendor behind them.

Same way you don't expect cohesiveness between macOS and Windows, I think it is bit unreasonable to expect cohesiveness between RHEL and Ubuntu or whatever.


POSIX had a patch update 3 years ago, a minor update 12 years ago, and a major update 19 years ago.

That's the problem.

The desktop distros could and should have done a lot better to coordinate on fundamentals of user experience in the age of the open/hostile Internet and frequently updated software.


POSIX does not know much about desktops really. If you want coordination in fundamentals of desktops, look at Freedesktop specs https://www.freedesktop.org/wiki/Specifications/


Thanks for the referral!

Unfortunately the website isn't loading for me. Wikipedia shows their projects seem to be focused on lower level graphics/UI issues, but not app packaging and security issues that have become very important in past 10 years.


Flatpak was freedesktop.org's app packaging and security project for a couple of years, when it was still called xdg-app. It's hosted on GitHub today.

https://blogs.gnome.org/alexl/2015/06/24/xdg-app-moving-to-f...

https://freedesktop.org/wiki/Software/Flatpak/


Another way of saying this is that flatpak and snap try to solve what's ultimately a social or policy problem by technical means. Building an abstraction layer like flatpak does necessarily entails a lot of complexity for at best a 90-percent solution; for instance, you still have to worry about what version of flatpak or xdg-desktop-portal each distribution has.


Speaking as a user, not a dev, I don't really understand what you are referring to. There are "multiple" solutions, namely 2 (two). Both of them work on all distros, so I could not care less which one you pick.

Same for desktops, again there are two: gtk-whatever and qt-whatever. All software works on both of them, so again I don't care.

Same for installing software. Installing Wolfram Mathematica is literally downloading one file, clicking on it and pressing Enter a few times.

If anything, iOS and its App Store showed that the repo model is great, and there is a reason why Microsoft developed/copied "winget".


As a user, you're missing details, and completely misunderstand several abstractions.

For instance, gtk and qt are not desktops. They're ui toolkits. Bunches of widgets that can be composed to cobble together a UI.

That's completely separate from the topic under discussion, which is mode of application delivery. FlatPak involves using namespacing and containers to "spin up" a virtual system with only visibility into those slices of the overall host needed to run the program.

This is good because it at least keeps unfamiliar software constrained somewhere predictable, but poor because often it won't use OS host libraries, which are generally kept the most up to date, and worst of all, enforces a complexity and debuggability tax where one has to have an intimate understanding of what is going on "in the box" if something goes wrong.

Other ways mentioned are the distro model, whereby distributions maintain repository ecosystems and make decisions with regard to FHS compliance, system tooling, update pipeline, etc..., and use those to support their user base.

There is the original "compile and stage it yourself" crowd, who basically concentrate on reproducible build capabilities, but tend to lack in automatic dependency resolution.

There's the Mac way, which isn't terrible. It tracks and designates places for libraries, Frameworks, and Applications, and has a lot of automated and well integrated ui-tooling which makes the user experience of software install easy, but it's just another paradigm you have to track when trying to write portable software.

There's static linking, which delivers executables that are entirely self-contained, but tend to be bigger memory footprint-wise, each have an upgrade path separate from every other executable, and benefit not at all from dynamic linked libraries on the host system.

Then finally, there are dynamically linked executables. They're small, commonly reused code is loaded once in memory, and you have the additional complexity of the linker to be aware of, but you can update the entire system's audience of a particular library at the same time... Which can be a double-edged sword given how the maintainer scripts the install or sets up their system/build environment.

Personally, I just sidestep the issue myself by digging into and learning about software I use on a regular basis with disassembly tools, and treat all of my systems the way a good farmer treats their livestock. Distant reverence, but with a careful eye as to whether there is something wrong, and a hard fought for willingness to kill something and start from scratch. It's the only way I've found to be truly safe and resilient in an environment where everyone optimizes for their own particular definition of convenience.

(My definition of convenience is a minimum difficulty in troubleshooting what might be wrong, so I favor fewer abstractions as that entails less Tower of Babel to wade through).

Full disclosure: my approach generally would classify me as a bit of a curmudgeon in the industry as I still approach computers as being analogs of physical machines. There is a healthy corpus that revels in abstraction, but I'm not really one of them. I like to have a ballpark understanding of what the artfully arranged beach sand is doing. Adding more abstractions or having the computer do things itself is generally not something I strive toward as it almost always comes with an unacceptable increase in the overall complexity inherent to navigating the Gulf between how I and everyone else thinks the system works and how it actually does.

More than anything else, in my experience, keeping that Gulf small leads to better overall satisfaction from a lay User. YMMV.


> For instance, gtk and qt are not desktops. They're ui toolkits.

And I never claimed that. Again, as a user the difference between gnome, cinnamon or unity is not all that interesting, so I simply wrote "gtk-whatever" as an umbrella term and "qt-whatever" for kde plasma and lxqt.

My basic question is the following:

- If I find software that is only available as a snap, I will use snap. It works on any distro.

-If I find it only available as a flatpak, I will use that. It works on any distro.

- If I find only an app image, I will use that. It works on any distro.

- I get an installer from some big commercial software. It unpacks itself in /opt or uses a docker image or... . Again works anywhere.

Why should I, as a user, care which distribution mechanism you chose if they all work anyways? Why is "fragmentation" an issue, if whichever you choose works everywhere regardless?


As a user, you will eventually be limited by the decision that developers/maintainers decide on.

Want to run your program, but I only support something for hardware that supports full client virtualization, which your system doesn't have? Oh well.

Oh, you wanted to run that Snap, but the cache is invalidated or cleaned, and no internet connection? Oh well. Sucks to be you.

Oh crap, you need to do that one final fix real quick in that one program, but oops! I just force pushed out a functionality breaking security update! Sorry bout it!

It may not seem so important from the casual user perspective, because you may not sample different configurations or distribution channels as devs/maintainers/admins, but you'll hit it one day, and you'll be as pissy as we are. Why doesn't this shit just work?

And the answer is: because it isn't magic, and it has to get to you somehow. Again, I err on the side of delivering with the simplest set of abstractions possible, and I'm old enough to remember when complex things actually came with manuals, so I prefer that kind of delivery and understanding the ins and outs of the tool. You don't get that with Flatpak or Snap without cracking open the container, which is far beyond most average computer users. I place value on people being able to easily learn how to program, utilize, or reason about computers, something that no container will ever teach you, especially given that in order to understand containers you need to already be pretty deeply immersed in the mechanics of computing.

On the other hand, a zipped workspace with some tooling, READMEs and documentation can make for hours of discovery and familiarization with system fundamentals.

It's an approachability thing, and we've totally lost touch with it as a modern industry in my opinion.


Is that the Mac way?

This is how you run a third party Firefox binary package downloaded from Mozilla:

   tar xjf firefox-x.tar.bz2 ; cd firefox ; ./firefox
Most third-party commercial applications (CAD software etc.) are similar.

Obviously there is room for improvement here, for some basic sandboxing (chroot, dedicated uid and so on) and desktop integration (give the user an icon to click on, links to other applications).

It's just that Snap (and probably Flatpak too) isn't it. Much too heavyweight, and it gave users other things to worry about (disk space, updates, new types of package repositories). Something more fit for Linux culture would probably be closer to a set of best practices (this is what the executable is called, this is where libraries are put, there should be an icon here in this format), and let a thin wrapper handle it all.
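A minimal sketch of such a wrapper, assuming those best practices are followed (executable named after the app at the top level of the tarball, with a .desktop file next to it; all names here are hypothetical):

    #!/bin/sh
    # install-app.sh: unpack a per-user app tarball and wire up desktop integration
    set -eu
    tarball=$1
    name=$(basename "$tarball" .tar.xz)
    dest="$HOME/.local/opt/$name"
    mkdir -p "$dest" "$HOME/.local/bin" "$HOME/.local/share/applications"
    tar xf "$tarball" -C "$dest" --strip-components=1
    # by convention: executable named after the app, .desktop file beside it
    ln -sf "$dest/$name" "$HOME/.local/bin/$name"
    cp "$dest/$name.desktop" "$HOME/.local/share/applications/"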


Not fully. macOS applications are nothing more than an _APP_NAME_.app directory with an internal structure and manifest files that denote it is an application, so when you double-click the directory, Finder knows to run the actual application binary located within the directory. This allows for decoupling the resources from the binary. I had to place multiple 3rd-party binaries there in case the end user didn't have them installed, using them as a fallback.

Most 3rd-party applications are actually an installer script that places the contents into /opt/_APP_NAME_/ or wherever the end user requests they be placed.

The closest I've come to mimicking the Mac style on Windows is to create an installer that does not actually install but extracts the contents to the user's temp directory and executes the application. I needed a simple solution where the end user just had to download and double-click to run the application, since they might not have admin rights to install it.

I actually prefer the MacOS style since installing and backing up applications or moving to a new computer is the same process.


> The closest I've come to mimicking the Mac style on Windows is to create an installer that does not actually install but extracts the contents to the user's temp directory and executes the application. I needed a simple solution where the end user just had to download and double-click to run the application, since they might not have admin rights to install it.

I create these self-extracting packages all the time too. Getting the icons and such right on the package can be a bit of a pain so I made a quick script[0] to automate the process. You may find it useful too.

[0]: https://github.com/pR0Ps/make-sfx-package


No. The Mac way is: download the disk image, click to mount it, drag the app file from there to the Applications directory, and click to unmount the image the way you unmount USB flash drives. Perhaps I would also suggest removing the disk-image part to further simplify the procedure. A user should not be forced to enter any terminal commands or even understand what an archive (or an executable installer) is. We must take "stupid" users into account if we want to conquer the desktop market, as they are the majority we have to conquer.


Big Sur fortunately addresses the unmount step. It prompts you to unmount/trash the .dmg file when the install finishes.


What do you mean with "the install finishes"? If you mean a .pkg install, then previous macOS versions already prompt to unmount and trash the dmg file in that case.


Parent showed the general process. The implementation can change. If you're on any recent Linux desktop, you can click on the archive and it will open in the file manager so you can copy the app anywhere. No command line required.


That simple way is ok, but it misses all the small details of the macOS install: a canonical place to copy the installed program, a nice icon to launch it with, file associations and so on.


Disagree.

flatpaks provide apps that weren't packaged before. They provide an easy way to install apps that are bleeding edge and may use newer versions of system libraries.


> The Mac way of installing apps (by just putting the packages into a dedicated directory) is cool and intuitive

1) They don't need to be put in a special directory to work.

2) AppImage is the most prevalent implementation of this concept in Linux (and it is indeed great).


> They don't need to be put in a special directory to work.

I know, but AFAIK that's the way it's meant to be done.

> AppImage is the most prevalent implementation of this concept in Linux (and it is indeed great).

Why don't we give Flatpak and Snap up and just use AppImage then? Chances are AppImage also has some cons and is inferior to Flatpak in at least some ways.


In theory Flatpak allows for sharing of dependencies and has sandboxing built in. This comes at the cost of needing special tooling, a repo, and non-portable (portable here meaning you can put the files wherever you want) applications. In practice you're probably better off using Firejail or the like on an AppImage if you want the sandboxing anyway, because of things like the issues noted in the article. Snap is like Flatpak but even worse, because it is entirely controlled by Canonical.


Anarchy. Steve Jobs / Tim Cook can declare one package format for the Mac. No one can do that for Linux. Within a single distro, leadership can sometimes do it, but there isn't enough funding to solve these problems within a single distro.


The awesome idea behind is Nix. Please, everyone just go use Nix.


That's easier said than done, since I think Nix has a somewhat steep learning curve.

However, what I'm missing in most comments here is that - if I'm not mistaken - Flatpak and most others are essentially tailored towards distributing binaries, which is at least my personal gripe with them.

The reason is that, especially with Nix, I'm used to ad-hoc `.override`/`.overrideAttrs` and patching software to behave the way I'd like, rather than being degraded to the sole user of software that is not supposed to be modified.

Flatpak, Snap and even Docker go the opposite route, which essentially degrades FOSS to proprietary software, if even upstream would just point you towards "just use the Flatpak"™.

Others here also have suggested that it would be a good way to only distribute proprietary software, but as a Nix user who's packaging and patching (well, and reverse engineering) proprietary software I'd even disagree on that, because patching the environment and the entire dependency graph is something very useful which you'll lose with Flatpak/Snap.
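To make that concrete, a sketch of the kind of ad-hoc patching meant above (nixpkgs' `hello` standing in for a real package; `./fix.patch` is hypothetical):

    # build a one-off variant with an extra patch, without redefining the package
    nix-build -E 'with import <nixpkgs> {};
      hello.overrideAttrs (old: { patches = (old.patches or []) ++ [ ./fix.patch ]; })'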


How has Snap failed? Snap lets you install apps like Spotify easily on Linux and in a security sandbox. Also, Snap doesn't have the main issue listed in this article of every app needing filesystem access: Snap has the personal-files interface, and you need explicit permission from the Snap Store to upload a snap that accesses this interface: https://snapcraft.io/docs/personal-files-interface .

Source: working on an open source Snap application (without the personal-files interface, since it can install packages into the sandboxed $SNAP_USER_DATA directory): https://github.com/argosopentech/argos-translate
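For anyone curious how that looks from the CLI, a sketch (snap and plug names are examples; auto-connecting personal-files needs store approval):

    snap connections spotify                # list a snap's interfaces and their connection status
    sudo snap connect some-snap:some-files  # manually connect a hypothetical personal-files plug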


Flatpak has portals for the same purpose. Unfortunately, many apps don't support portals yet. The Flatpak for Spotify does not allow access to home, only to the music directory (read-only). So, it's running in a proper sandbox too.
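You can verify this from the command line (app ID as published on Flathub, if I recall correctly):

    flatpak info --show-permissions com.spotify.Client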


I don't use Spotify, but have you tried using Flatseal[0] to modify the folder access of the application?

[0]: https://flathub.org/apps/details/com.github.tchx84.Flatseal


I'm thinking that good standard practices and abstractions for SELinux and, optionally, per-application users would go a long way to solve these problems.


Situation: There are 14 competing standards...


Obviously. But this principle should not be followed fanatically. In the real world there are cases when a new standard can be introduced for good and take over. You just have to think twice before inventing a wheel - to make sure it's going to be sooooo much better that it's worth it.

In the actual situation we have, there aren't going to be "14 standards" because there is no standard (for packages of this kind) so far, only a number of candidates. The first one to be good enough will become the first standard.


Why do applications have to be installed from strangers?

If you're unfamiliar with computers then you should only use things sandboxed as heavily as web pages or native apps that have been carefully reviewed by operating system maintainers.

If you know enough to make a useful decision on the safety of an app, then you can probably build it yourself from GitHub.

This practice of flinging binaries around is only useful for "software markets" and almost always seems to end up distributing the most pathological garbage possible.


Holy shit I hate this opinion. The best thing about personal computing is being able to build software and just pass it around easily. Having to rely on a third party to be able to install software is awful.


I'm absolutely not suggesting you rely on a third party. It's very easy and preferable to pass around source code, and it's (by design) very easy to build GNU/Linux apps from the source.

Distributing binaries will absolutely bring about calls to lock computing down to 3rd-party distributors, because the kind of people that can't handle typing `./configure` are the same kind that will pull random malware off of Google.

No good can come from installing closed software anyway.


> I'm absolutely not suggesting you rely on a third party. It's very easy and preferable to pass around source code, and it's (by design) very easy to build GNU/Linux apps from the source.

Yeah, it is so easy that no one ever uses containers to ensure a consistent and working build environment for software!

Oh wait, yes they do.


If your application is that complex then yes you probably should be working with it in a container.

There are decades of very good desktop applications that are simple and easy to build. Flatpak will just encourage things like Slack.


Why are Linux distros working so hard at finding packaging systems worse than the status quo?

Usually installing a large package from a .deb file takes just a few seconds. It took a few minutes to install chromium from a snap. It's faster to install programs on Windows.

What's up people?


Because some developers want an easier way to distribute their software instead of packaging it in multiple formats or waiting for distribution maintainers to update? Also, some users want to get the latest software.


> Why are Linux distros working so hard at finding packaging systems worse than the status quo.

Not everything flatpak does is worse than the status quo. Whether it's worse all things considered is debatable.


Having two installation systems is automatically worse than having one -- unless there are LARGE advantages to having the second one.

For instance, once you have two places a software package has come from you've added the cognitive load in debugging to determine where a given package came from.


You just mentioned another of flatpak's disadvantages.

> [...] unless there are LARGE advantages to having the second one.

That's the thing: What's considered a "LARGE advantage" is highly subjective. Not something you can really quantify.


My issues with the post start with the tone. It's not a respectful tone; rather, it's aggressive, and it always flashes me back to people who write similar blog posts about systemd or something else that is new but not yet accepted by greybeards.

Fonts and theming are an issue, I agree; however, this isn't entirely the fault of the flatpak maintainers but in part the fault of a fragmented and hard-to-compose Linux ecosystem when it comes to their configuration.

The IME issue is a problem, but I don't think a dynamically linked system would benefit much here. The solution is to make portals between flatpak and the system more extensible and versioned, so that an outside daemon can talk to flatpak apps, and potentially flatpak could even try to determine which runtime to use based on what outside portal versions are available.


It's hard to keep a civil tone if you feel like the world around you is descending into madness, and this is exactly what the author of this post feels.

It's hard to have a civil discussion if everyone pulls in a direction you deem wrong and your words never convince anyone.

I have similar resentments when people in my company push for docker on embedded systems, because they are used to docker on the server, and whenever I ask them which problem they are solving on the embedded system with docker, they describe things that are wrong.

I don't share the author's strength of opinion on flatpak, although I see the same issues, and I don't want to advertise or excuse the strong "tone" - but I understand the frustration.


Docker on embedded systems. We really have come full circle.

It reminds me of a talk by Jonathan Blow - "Preventing the Collapse of Civilization" - https://www.youtube.com/watch?v=ZSRHeXYDLko. He makes the point that Docker is a solution to what was a non-problem.

This code-piling approach seems destined for failure.


Interesting talk...one quote, however:

"How much functionality is added to Facebook year after year? It's not that much!"

This is in support of his argument that per programmer productivity is approaching zero, due to the over-use of abstractions.

Facebook is and has been hiring top talent software people for years, and in large numbers, but most of them aren't working on https://facebook.com/ or https://instagram.com/, at least in any visible way.

I've been programming on a nearly daily basis for over 40 years now, and my total productivity has never been higher. Not primarily because I'm smarter and more experienced, but because the tools available to me are just awesome.

I can write a functional app that allows a handful of the billions of people around the world to interact, and I can do that in less than an hour, on the cheap, perhaps even for free.

40 years ago, that would have required a lot of time, up front money and ongoing maintenance costs.

There are countless examples.

I'm not dismissing the hazards of abstractions and the freakish amount of complexity that we face today. And I agree that a lot of it is done wrong.


Docker solves the problem of reliable deployment and dealing with dependencies.

This is a real problem that regular Linux distros are not very good at solving.


> Docker solves the problem of reliable deployment and dealing with dependencies.

Which, as noted, are problems that didn't used to exist and were created.


The thing that gets me is deploying JVM fat jars in Docker containers.


tar solves that problem too; it's a matter of what type of abstraction is a good fit for the problem at hand.


And shell scripts solve the problem of starting and stopping daemons, but not particularly well.

The boundary between a binary tarball and the rest of the system is very poorly defined.


What an interesting video; he makes a lot of good points. However, I would definitely say that the current technology stack is more robust and capable than those of the 70s. It's possible to have a simple system when there are hundreds of computers in the world; it is not possible when you have millions.


> push for docker on embedded systems

Aaargh!

A particularly odd one as most embedded systems are handled as "images" of some sort - firmware blobs, ramdiscs, imaged HDDs for embedded PCs and so on.


What exactly is wrong with docker on embedded systems? Many have the memory and storage to afford containers, and existing established distros like yocto are not very good.

I'd much rather deal with Dockerfiles than bitbake.


I actually think yocto solves the problem better than docker in many ways. Yocto goes to a large amount of effort to make for repeatable builds, something which docker is actually quite bad at, for all that it is touted as a solution for not being able to replicate someone else's setup.

I think nix actually represents the next useful stop here: it avoids just piling another layer of abstraction/virtualisation on the pile and actually reinvents the package manager in a way which actually solves the problems docker exists to solve but in a more complete manner. Bitbake I think actually represents a much messier implementation of the same ideas: just like nix uses a functional language to express package configuration, so too have bitbake files grown the same kind of functionality. The main things which bring bitbake down are the language is ridiculously quirky, hard to debug, poorly documented, and has any number of soundness holes (as well as a good amount of action-at-a-distance akin to INTERCAL's COMEFROM statement which is simultaneously really useful and really frustrating)


> Yocto goes to a large amount of effort to make for repeatable builds, something which docker is actually quite bad at

Is this something that yocto itself helps with or just something accomplished by a thousand people hacking at .bb scripts? Last I checked features like the sstate cache came with big caveats.

Dockerfile layer caching seems much more effective at shrinking build times.


Just to clarify yocto is not a distro but a project that allows you to create your own custom distro. Two yocto builds could be completely different from one another even using different package managers.


Yocto and various derivatives can be used to "build software" in the same way that Dockerfiles can be used to "build software" but using yocto is a much worse experience.


What is the problem docker solves?


> It's hard to keep a civil tone if you feel like the world around you is descending into madness, and this is exactly what the authors of this posts feel.

It really has not, flatpak is not madness. It has issues, they are finite, they can be fixed, they should be fixed, but it is not madness. It is better than AppImage or just running random binaries you find or compiling it yourself.


I'm not saying it is; I'm saying this is what it feels like for him, and that I can understand the feeling.

Also, flatpak doesn't only compete with "AppImage or just running random binaries you find or compiling it yourself". It competes with the package manager, which has worked well for decades. The package manager has issues as well.

Maybe from the author's perspective it looks like everyone wants to throw away the package managers because of some issues they have, instead of tackling those issues. If he then points out the flaws in flatpak, people say "sure, it has some issues we need to fix".


I skimmed the article and I don't find the tone super disrespectful. No ad-hominem, no insults to the developers etc... The backlash against systemd was unacceptable because it ended up targeting people and not the technology. Writing an init system shouldn't net you death threats.

Beyond this, and not unlike systemd, flatpak has been pushed hard down people's throats by powerful entities within the ecosystem, so it's not surprising that in both cases there has been some intense backlash. I also personally think that in both cases it's warranted.


> Beyond this, and not unlike systemd, flatpak has been pushed hard down people's throats by powerful entities within the ecosystem, [...]

Sounds like a tinfoil hat theory to me. Powerful entities? Pushed down people's throat? You are free to ignore Flatpak, no one is forcing it on anyone.


I agree; how thin-skinned are people? This seems like valid criticism that wasn't acted on in two years. What is an appropriate 'tone' in this case?


Tone is narcissistic and self-important - it does not even tip a hat to address prioritisation or resource constraints - it's just "this is important to me THEREFORE IT IS IMPORTANT, GNASH, WAIL". I also think the author does not understand end-user software - if you package everything up so it is very secure by default, almost nobody will ever use it and nobody will care except the people who are capable of securing it themselves.


An appropriate tone would be to press the angle that help is needed and either offer to help patch the applications or mention where contributors can get started.

I get the author's concern but there is nothing useful in this article. It's misleading to suggest that the problem is with flatpak when the problem is that certain applications aren't using its full capabilities, aren't configured correctly and are in need of patching. Or in the worst case, might need a redesign to fit the sandboxing model. This needs to happen with any sandbox that uses this model, not just flatpak. There is nothing else that can be done about this, short of suggesting users switch to a different sandboxing model like Qubes. (Maybe the blog author can try that too and see how it goes, or could develop their own sandboxing model just to see how hard it is on a complex system like Linux)


As well as misleading, the article seems outdated? Which is weird, because archive.org's first snapshot of it is today.

So, GNOME Software has included a "Permissions" field (which lists an application's specific sandbox holes) since - if I recall correctly - GNOME 3.34: https://i.imgur.com/lCGgA1B.png. Not perfect, but definitely better than pictured. It's a bit of a shame that people have to use older versions of GNOME in 2020, but then, it's nice that Flatpak makes it easy to run new applications regardless :)

Also, I was trying to verify the author's claim about a vulnerable libssh in the gitg package, partly because I was curious whether they'd bothered to report any of these issues upstream. Looks like that was fixed in May: https://github.com/flathub/org.gnome.gitg/pull/12. Similar story with ffmpeg: https://gitlab.com/freedesktop-sdk/freedesktop-sdk/-/commit/....

So, woefully behind schedule, but how long was the author sitting on these? It would be more accurate, less irritating, and more persuasive if they had simply talked about it after the fact: this happened, various things are wrong with it, it should have been avoidable, etc.


The first two CVEs linked in the article do not even exist: whether due to rejection or something else is uncertain.


I don't see an issue with the tone.

Actually, why are people so concerned about tone nowadays? It matters, but this is perfectly fine.

I feel like you can't find anything to nitpick about the post, so you pick on the tone instead.


I've brought up issues besides tone because of this.

In case those aren't enough: I think the issues brought up aren't issues with flatpak.

The first issue is with a GUI application on top of flatpak and the second is a packaging issue of the runtimes.

These aren't inherently problems with flatpak so I mentally don't count them.


Don't forget lying about sandboxing.


Yeah, that's a dubious claim as well. Here is how GNOME Software presents a sandboxed application:

https://i.imgur.com/VMsj6mk.png

Is it a bad interface? Sure. Is the list of permissions hard to find? Hell yes. Is the Sandboxed emblem kind of misleading? Yeah. But here (in at least GNOME Software 3.36) you see a list of the relevant permissions available to the application. Nobody is lying here.

For contrast, here is an application with a stricter sandbox:

https://i.imgur.com/lCGgA1B.png

The point here is both of these are sandboxed. Each sandbox just has particular holes in it. For GNU Octave, more than seems appropriate. Users misunderstand that in the same way people misunderstand "autopilot" and that's a good issue to bring up, but then I don't think non-technical users understand what "sandboxed" means anyway and technical users should be able to take one more gorramn step and learn that "sandboxed" obviously comes with particular limitations [at least in the present day] or it isn't going to do anything.


You've skipped the CVEs.

Flatpak makes a lot of claims [1]; the author feels they are a lie, and he provides facts in support. A Linux "next generation technology" would certainly alert users to insecure applications. This one does not. The story of themes and dbus shows how hard it is to contain all dependencies.

Also, there have been a lot of claims here that Linux distributions do no good, that static linking is better, and that libraries would get patched anyway.

Now let's switch perspective. Windows users are perfectly fine running closed source software which may contain outdated dependencies. Most of it is never updated anyway. And it has a gallery, something Winget still lacks. Looks like next gen. What is a cry of alarm for an open source developer may be fine for a consumer.

Personally, I'm fine with Flatpak and AppImage. There should be some way to install proprietary software, and some starting point to make it safe.

[1] https://flatpak.org/


You want the meaning of sandboxed in flatpak changed, and for it not to include access to the home dir? That seems fair. But with the current UI, I think most people expect that a video player can open video files in their home directory.


For effective sandboxing you have to integrate something like capability passing and delegated access. For example, opening a file with drag-and-drop should allow access to that file, and the File > Open menu should do the sandbox equivalent of a hypercall to open a privileged file browser widget.
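That is roughly what xdg-desktop-portal's FileChooser portal does; a sketch of the underlying D-Bus call from a shell (parameters simplified, the reply arrives asynchronously via a request handle):

    # ask the portal, outside the sandbox, to show a trusted file picker
    gdbus call --session \
        --dest org.freedesktop.portal.Desktop \
        --object-path /org/freedesktop/portal/desktop \
        --method org.freedesktop.portal.FileChooser.OpenFile \
        "" "Open video" "{}"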


I agree that this kind of capability is essential for usable sandboxing on desktop systems.

Though the trusted filepicker is tricky to do when one logical document consists of multiple files. For example separate subtitle files next to a video file are common.


This could be reasonably expressed. Flatpak can sort of do this via portals: you have a filepicker that can see the host system, and the picked file is transparently mounted into the app container.

For multiple files per document, the solution would be either multi-file select (bad) or allowing apps to specify very simple rules for additional files, i.e. "removing the extension, also mount all files starting with that name".

The filepicker could integrate that and either show all those files as a single document or make all the files belonging to a document visible as selected, so the user is informed.


I think this is also true for a lot of ebook meta files. Some have separate thumbs and things.

Certainly one could select all the files and drop them in. But that's asking a lot of the opening program, and isn't usually how they're set up.


The idea behind sandboxing is that you explicitly grant access to files and directories to specific apps. Your ~/Videos folder can be made accessible to mpv or whatever video player you're using.
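With Flatpak that is a one-line override; a sketch (mpv's app ID as I believe it appears on Flathub):

    # grant read-only access to the standard Videos directory, nothing else
    flatpak override --user --filesystem=xdg-videos:ro io.mpv.Mpv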


Or, make people aware they are passing files between different sandboxes.

..à la Qubes.. https://www.qubes-os.org/doc/copying-files/


Yeah - I mean certainly a home folder with restrictions would be good. Things like dotfiles in home and ~/.ssh should certainly not be accessible or editable by a sandboxed application without the user's knowledge.

I think the Downloads, Documents and Movies folders people tend to have - those major sub-folders - would be enough for normal use.

Also, perhaps file type limits could be imposed that mean files are only accessible to apps that can open them. Of course, I'd want my text editor to be able to open files across the root file system if I open it as root - so I can see how quickly a person can want sandboxing and then also want none - but the fewer apps with privileged access, the better.

Also, I think some people had hopes it would be closer to iOS-level sandboxing, which is heavily restricted on file access, and frankly can be annoying because of it, but does have the security benefit at least.

Also, the security update gripe is significant. The isolation of packages only matters if it works. At least if I run stuff myself inside a container, I can choose to update the underlying OS and force certain packages etc. at the same time as sandboxing, and mount only certain folders if I actually need them.

I think it's hard to make it both simple for users and powerful enough to provide the levels of granular access I'd expect for context-specific sandboxing. I would want a principle of least required privilege, with some ability to escalate it (like Android asks me all the time if I want to allow software to do stuff at the point the privilege is required).

I had high hopes for Flatpak; elementary OS went all in on it for a better way to deliver their curated apps. https://blog.elementary.io/elementary-appcenter-flatpak/

I guess though, they curate and approve the apps, so perhaps they are able to make certain assurances that are not universal to Flatpak availability.


> Also, perhaps file type limits could be imposed that mean files are only accessible to apps that can open them. Of course, I'd want my text editor to be able to open files across the root file system if I open it as root - so I can see how quickly a person can want sandboxing and then also want none - but the fewer apps with privileged access, the better.

That's SELinux, if configured well. Files are labeled according to their function, and access to these files is whitelisted according to the security label of a process. For example, no security-labelled process can access my private ssh keys; that's guaranteed.

It's just a shame that SELinux has such a documentation and UX issue, so almost no one uses it and almost no one uses it well - except for package maintainers and some application developers.
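A taste of what this looks like on a Fedora/RHEL-style targeted policy (label names as commonly shipped; sesearch comes from setools, and option spellings vary between versions):

    ls -Z ~/.ssh/id_ed25519                    # shows a label like ...:ssh_home_t:s0
    # which confined domains may read files labeled ssh_home_t?
    sesearch --allow -t ssh_home_t -c file -p read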


The text-editor case is somewhat solved by portals and could be expanded upon.

The text editor gets no FS access. Instead it calls the portal to request that the user pick a folder (for projects) or a file with a specific ending (the compatible file formats, and whether folders can be picked, can be defined statically for most apps).

If the app was started with root privileges, it gets no root privs, but the portal transparently translates its IO calls to the actual file.

But only that file. So even if the text editor had a CVE, it couldn't overwrite arbitrary files; it still needs permission from the user, obtained by showing them a file picker dialog.

And you could still ban certain directories, so even if you started as root, the text editor cannot access /etc/sudoers and /etc/passwd, for example.


I never used a system with flatpak, but when I read "sandboxed" I expect the maximum permission to be read-only access to my home directory, or something like Android where it asks for additional permissions.


Read-only access is already giving away the keys to the kingdom if internet connections are not limited. Any sandboxing that doesn't protect against exfiltrating private documents is not sandboxing at all.

It's fine if it's a trade off between usability and security but then they shouldn't call it sandboxing or make it very clear that that's the trade off.


+1... need outgoing connections whitelisting, blocking all by default.


It could at least prevent access to dot files or places where dangerous things could be dropped (e.g. ${HOME}/bin).


Isn't that what flatpak portals were supposed to solve? I think then it's fair to say that an app that doesn't make full use of the advertised sandboxing features of flatpak shouldn't get an unqualified checkmark on that.


I never quite understand what is so broken in package managers that a solution like this or snaps was a good idea.

Can someone tell me again what problems do this actually solve? Note: not being Mac-like is not an actual problem.


It's an easy solution for lazy developers. Maintaining dependencies is a hassle, and with proprietary software users can't even do it themselves if the developers decide that an 8-year-old ssl lib is perfectly fine. The distributions probably won't keep that lib around either, so the easiest solution for proprietary vendors who don't care about dependency updates is to ship it all in a self-contained bundle.

It also makes it easier to clean up when uninstalling, and it makes for better damage control by limiting what can be broken if done right, but like the article said, this is not what it's actually all about, since no one seems to care about these points.


> It also makes it easier to clean up when uninstalling

I think pretty much every package manager tracks what was installed manually and what was installed as a dependency of something else, so that it's easier to just remove everything that is no longer needed.


With a traditional binary package manager for Linux, if some application needs a special version of a dependency, you need to install that special version of the dependency systemwide, overriding whatever would come from your default repos.

When you remove that package, you now also have to change the configuration of your repos to remove the one that it came from, or arrange a vendor change (the best case, with a powerful package manager like zypper), then run an upgrade to get the default one back into place.

What the GP is talking about is how containerized packages eliminate that step during uninstallation.


The solution is for the distro to provide multiple major and minor versions of a package. This is quite common.

If a program depends on a specific version of a library that's not provided by the distro (because it's obsolete, vulnerable, too specific etc), it can still be installed by the package manager, if the vendor provides a suitable version within the package. The nice way of doing this is to avoid colliding with the system-provided package, either by using a different name or by installing it under a different path. Many vendors do precisely this when offering their packages.

In both cases, it's easy to uninstall the no-longer-used dependency: in the first case, it'll be marked as automatically installed and can be removed with a cleanup command. In the second case, uninstalling the package should also remove the library.
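The "different path" approach is usually just a private prefix plus an rpath baked into the binary; a sketch with hypothetical names:

    # link against the vendored libfoo and record its private location in the binary
    gcc -o myapp main.c -L/opt/myapp/lib -lfoo -Wl,-rpath,/opt/myapp/lib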


All of the solutions you describe require special attention from distro maintainers, packagers, or users. In the case of Flatpak, it doesn't really take any extra work in terms of adding extra package definitions or creating new prefixes that coincide— the people packaging the Flatpak (usually the app developers, we expect?) can just modify whatever distro package is used to provide that dependency in-place and ship it out in their Flatpak as if there were no collision problem at all. Similarly, on the user's side, the collision problem you outline is preempted; Flatpak packagers simply can't prepare their dependency chains in a way that will actually conflict with libraries present on end users' systems.

There are some more traditional (in the sense that libraries are always shared by default and no containerization is required for packages to work right) package managers that extend the kind of isolation you describe (‘installing [special or patched versions of dependencies] under a different path’) to all packages, namely Nix and Guix. These package managers also make it easy to do things like take a package definition from somewhere else and rebuild it with just one or two dependencies modified without redefining the whole package, or alternatively take such a package definition (perhaps originally targeted to a different release version of your distro) and rebuild it with all dependencies instead being provided from your current distro. At the same time, since the dependency chains of packages are isolated by default in these systems, it's relatively easy, in case of a special need, to take any package and bundle it and its dependencies into a container for distribution on ‘foreign’ systems. (And for packager convenience, the package collections in these distros may also explicitly include multiple versions of a library in their default package sets in the same way that other distros do, as you describe).

Nix and Guix have their own difficulties and quirks, of course! But one of the things I like about their approach is that their innovation is conservative in a certain sense: it offers more isolation for packages (along with the cross-distro compatibility and potential for bundling that come with that) but preserves the virtues that have been the strength of package management on Linux forever. I would like for whatever really is the future of package management on Linux to share in that.

So I don't think containers are The Way Forward™, although I do think that desktop containers like Flatpak are potentially useful as (1) a step toward more fine-grained permissions for desktop apps in terms of access to files, multimedia devices, and so on; and (2) a way of providing cross-distro packages for applications that (a) have yet to accrue much developer interest from packagers and distro maintainers or (b) are (unfortunately) proprietary.

(Personally, if I really want a piece of software, it's not in the distro I'm currently using, and it doesn't seem especially difficult to package, I will just go ahead and make a package for the distro I'm using. Case (2a) provides a convenient, more managed way for me to evaluate a new program I've heard about without having to package it just to see if I like it.)


> Can someone tell me again what problems do this actually solve?

Problem: You want to use FooApp 2.0, but your distribution only ships FooApp 1.0. Or even worse, your distribution doesn't ship it at all.


Make a package and push it to the AUR; it's a great help for other users.


Or just download the tarball and `./configure && make && make install`

In the worst case scenario, you'll learn why the distro is not offering the newest version.


Yeah ... I'll go with the Flatpak, thanks.


What if I'm not using Arch?


Endless possibilities:

* Try Nix on your distribution [1]; the package may already be there [2]

* Create a package for your distribution and become its maintainer - it won't be the last time you need one

* Switch to a less outdated channel; other packages will be fresher too

* Change distribution

Flatpak has its niche but your use case looks like fixing broken distribution. There is no package, package is outdated, it is hard to make package, it is hard to publish package - these problems already solved.

Hard problems may be:

* I want several versions of the same package

* I want an entire environment like it was 6 years ago, so I can run / compile an application (in parallel with the current one)

* I want to run an application in a sandbox - no network, no file access, no mic, etc.

* I want a proprietary application, and those don't play well with the Linux ecosystem

Hope Flatpak will address these while keeping security in check.

---

Debian Sid is claimed to be no more unstable than Arch [3]. And I've had fewer problems with it in a decade than with a single Ubuntu dist upgrade. A rolling release does not accumulate breaking changes. Run updates once a week and you'll be fine.

[1] https://nixos.org/download.html

[2] https://search.nixos.org/packages

[3] https://linuxconfig.org/how-to-run-debian-sid-relatively-saf...


Let every app haul its own libraries. Be surprised that they aren't patched and contain vulnerabilities.


So like Docker.


At least Docker is standardized and has a growing culture of scanning images (especially base images like ubuntu,debian,alpine that everyone bases their image on).

Snap/Flatpak are just an unnecessary disaster IMO.


Apps simply need a solid CI pipeline so that when dependencies have security patches, the app is automatically rebuilt and pushed out with the patches as soon as possible.
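
As a sketch of what that can look like for a containerized app (image name and registry are hypothetical), a nightly CI job could be as simple as:

  # pull the latest patched base image, rebuild, republish
  docker build --pull -t registry.example.com/fooapp:latest .
  docker push registry.example.com/fooapp:latest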


Right, and since this is so easy software distribution is a solved problem, bugs don't exist and tools like containers, VMs, package managers, app stores etc were never developed because they are totally unnecessary. In particular, this thread does not exist and I never wrote this.


Well, I didn't say it's easy, but if your app supports containers, CI becomes a lot easier.

You only have to test 1 system configuration and you can easily script building your app based on this configuration.

My life is certainly easier since I've packaged several apps into docker containers, even if it was a pain to get started on that.


Which negates the need for containerization...


Well, not quite.

The containerization still has advantages. For one, you have a solid and predictable baseline system regardless of what distro the user installed. Next, you have sandboxing (if properly applied) protecting the user until they install the update (which might take some time).

You can also run the application side by side (SxS) in case you want to (i.e. some feature in the app was removed or changed but you want the new version too).

Layered security is the biggest plus here, though.


No, it does not, and this very article reveals why, on a desktop environment. If the "solid and predictable baseline system" you used contains even a slightly different version of the input method library the user is currently using, then the user can no longer type _at all_ in your fancy application. Same for a myriad of other integration points users expect in a desktop program. Drag and drop? Running the default program for a given service? Window manager decorations? Etc.

The user might as well just be forced to run your "solid and predictable baseline system" as his desktop system.

Containers can be massaged to work on servers because, at the end of the day, most interaction between server programs happens over TCP/IP (sigh). It is an entirely different story with desktop programs.


>If the "solid and predictable baseline system" you used contains even a slightly different version of the input method library the user is currently using, then the user can no longer type _at all_ in your fancy application. Same thing for a myriad other integration points users are expecting in a desktop program. Drag and drop? Running the default program for X service? Window manager decorations? Etc.

Well, that input library is not an issue of the flatpak environment but an issue of the application and/or that library. That error could easily have occurred if the application had linked an old version of that library and the new version of the daemon was incompatible.

Drag and drop works with flatpak to my knowledge, as does running something as default program for opening a file or URL. And window manager decorations work fine for my flatpak apps too.

The user would not be forced to do that if Linux system devs got their shit together. At the moment, Linux is barely an end-user desktop, because everything beyond opening a web browser (and maybe not even that) is complicated. Configuring default fonts requires me to sudo a text editor onto an /etc file and manually insert the right incantations into an XML format.

Many subsystems in the linux desktop haven't been properly updated in quite some time and lag behind modern developments.

This is an issue Linux created because everyone makes their own solution and then distro managers will silently patch over the mistakes, so it all kinda works together.

The IME issue can be solved by making portals versioned, so flatpak can install the app with a runtime that contains a version of the IME lib that works. Similarly for other IO between the host services and client libraries.

Additionally, portals can already provide a number of things (drag and drop for example) without having to compromise security or convenience.

I would also like to point out that not every input method library the user has is going to break with every single revision.


> Well that input library is not an issue of the flatpak environment but an issue of the application and/or that library. That error could have easily occured if the application linked an old version of that library and the new version of the daemon was incompatible.

But that wouldn't happen in a distro-packages environment, because the distribution would ensure that everything was built against the same version of the library.

> This is an issue Linux created because everyone makes their own solution and then distro managers will silently patch over the mistakes, so it all kinda works together.

And flatpak is yet another case of that - doing their own incompatible thing instead of fixing what's there already.


>But that wouldn't happen in a distro-packages environment, because the distribution would ensure that everything was built against the same version of the library.

This does not ensure that the library is compatible with the package, it only ensures that it builds.

>And flatpak is yet another case of that - doing their own incompatible thing instead of fixing what's there already.

They are working on standards that would improve things regardless of whether you're in a container or not. For example, the flatpak portals generally work just as well if you don't use flatpak at all, so apps don't care whether they are inside or outside it.

This is part of the Freedesktop initiative on various things around Linux, which is already followed by a large majority of desktop apps (and CLI apps), so there is a standard that improves things as they are.

Portals give you a standard way to interact with the user and the system that don't rely on distro specific patches.


It's like saying we don't need vaccines any more because no one is getting sick.


If no one gets sick then you don't need vaccines.


I run apps like that inside a docker container, which is a little bit better, I think:

- library security updates arrive with the base OS, which I update religiously, and images are rebuilt (via "make") whenever the base updates, too

- the apps are effectively "somewhat" sandboxed

- I only bind-mount the directories I want the app to have access to (i.e. -v "$HOME/Pictures:/home/user/Pictures", -v "$HOME/.config/appname:/home/user/.config/appname" etc)

While this isn't ideal for graphical apps due to needing to share the X11 socket for things to work well, which comes with its own type of problems...

... at least no app can change _my_ bashrc, simply because it can't even see it, nevermind edit it.

Going one step further, some bind mounts can also be mounted ":ro" to ensure the app cannot change the contents.
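
Putting the above together, a typical invocation looks roughly like this (image and app names are made up):

  docker run --rm \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "$HOME/Pictures:/home/user/Pictures" \
    -v "$HOME/.config/appname:/home/user/.config/appname" \
    -v "$HOME/templates:/home/user/templates:ro" \
    example/someapp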


I also use certain desktop programs via docker, but like you say, it isn't ideal. Not just for the X11 socket (which is a potential security issue on its own), but also for audio (both ALSA and Pulseaudio can be made to work, but you have to bind-mount the correct sockets, use correct user IDs and set correct environment variables) and video (usually hw-accelerated, so you have to install the correct OpenGL libraries for your hardware and keep them on the correct versions to match the drivers your host is running).

It all works, and provides some benefits (it is also fun, if you're into sysadmining!), but it kind of breaks one of the basic promises of "dockerizing" - that the containers are independent of what the host looks like.

e.g. I cannot just take the Dockerfile for a video streaming app container from my desktop with a Nvidia GPU, and use it as-is on my laptop with an Intel GPU.
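
For reference, the audio/video plumbing described above ends up looking something like this (UID 1000 and the image name are assumptions; the Pulseaudio socket path varies per user):

  # /dev/dri passes through Intel/AMD GPU nodes for hw acceleration;
  # an Nvidia host needs different devices and libraries, hence the non-portability
  docker run --rm \
    -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
    -v /run/user/1000/pulse/native:/run/user/1000/pulse/native \
    --device /dev/dri \
    example/video-app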


Who is this anonymous author who made the effort to set up a dedicated website just to rant about Flatpak's shortcomings? Seems strange to me.


Linux packaging certainly is a mess.

I have 50+ "update-foo" scripts for programs that want me to use yet another package manager. Everything that doesn't use apt is in there, and many of them just run "pip install foo" or "npm install foo"; at least this way I don't have to remember which program uses which package manager.
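
(A thin wrapper keeps that manageable; assuming they all live in one directory:)

  # run every per-app updater in one pass
  for f in ~/bin/update-*; do "$f"; done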

I used to be very adamant that if something did not have an apt repository then I wouldn't use it, but that's become hard to do. Even the piece of software I use the most, my browser, is not managed by apt. Firefox (developer edition) lives in ~/.local/lib and updates itself.


All that, and they are still WILDLY better than Snaps. You chose poorly, Canonical.


So, Snaps are bad, Flatpak is bad. What's good?


Packages from your linux distribution. Install using their standard package manager.


Distro packages get full access to your system (install scripts get root), so if your issue is trusting the software vendor, they are strictly worse.


Dedicated accounts, sandboxing, jails, Docker, and VMs exist.

The old thinking was that software was a user agent, serving the user's needs and interests. Some OSes (Debian comes to mind) explicitly acknowledge this.

Increasingly it's simply a naive and unjustified belief. Applications must be considered as untrusted necessary evils. Allocating them a minimum level of access is prudent.

Flatpak promises this, but fails to deliver.

Package managers offer this by convention, but rely on users (and security researchers) discovering and reporting malicious behaviour. Mitigation is retroactive: fixed behaviours or packages removed from the distro's repository, but the damage is done and existing deployments remain.

The underlying issue is one of packaging, updating, and distributing sandboxed apps.

And in recognising that packaging systems serve users, not developers.


Not just the software vendor, but also the third party package maintainer, often an unpaid volunteer.


It is not that simple [1]; one needs to earn trust. Distribution maintainers have a much better record than browser addon authors.

Maybe we can trust big names, just like on Android (Google, Amazon). We may think it is safer with paid software, since they have something to lose. But for free software, I'd bet on the maintainers.

[1] https://wiki.archlinux.org/index.php/Trusted_Users


If a library has a vulnerability, the distro often updates it, and then every package that uses it has been patched. With these sandboxed things that include their own versions of dependencies, each one of them needs to be updated.


How is it any worse than flatpaks having full access to your home directory?


How is it any better than flatpaks having full access to your home directory? Yet, one of the points of criticism against flatpak is that it doesn't protect the users home directory.

Now, it's certainly debatable whether flatpak could or should push more in that direction, like Apple did when they introduced the sandbox for App Store applications, but flatpak certainly does not have leverage to the same extent as Apple. So it's up to the app to opt in to a suitable sandbox set. It's a sad state of affairs, but I don't see a viable path for the flatpak folks to change that.


Package managers do not claim a sandbox. No claim, no false claim, no accusation.

And it is easy to fix: publish each app's capabilities in the gallery. Apple's iOS "request location" prompt shows that people care when they know. A sandbox certainly is a good thing; without it, Flatpak is just another package manager.
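
The raw data is already there for a gallery front-end to surface; for example (app ID used as an illustration):

  # show the static permissions an installed flatpak app requests
  flatpak info --show-permissions org.gimp.GIMP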


Flatpak does not claim a sandbox either. It's GNOME's installer that claims it. But flatpak provides capabilities for sandboxing, AFAIR.


Clearly something went wrong

> It is advertised as offering a sandbox environment in which users can run application software in isolation from the rest of the system.

https://en.wikipedia.org/wiki/Flatpak

> flatpak(1):

> ... isolated from the host system ('sandboxed') to some degree, at runtime.

http://manpages.org/flatpak

> One of Flatpak’s main goals is to increase the security of desktop systems by isolating applications from one another. This is achieved using sandboxing and means that, by default, applications that are run with Flatpak have extremely limited access to the host environment.

https://docs.flatpak.org/en/latest/sandbox-permissions.html

And a lot more

https://www.google.com/search?q=flatpak+sandbox


It’s better by not depending on yet another component or layer of abstraction. If you had a distribution which used flatpaks for everything (i.e. no other packaging system involved), then sure, usual packages probably wouldn’t be any better than flatpaks.


But that’s not the point that is made against flatpaks - the point made in the article is “it allows access to the home directory, so it’s worse.”

Whether flatpaks is better or worse than packages depends on many other factors. Flatpaks allow software providers to package for multiple operating systems in one package, for example. This makes software available that otherwise wouldn’t be. Whether that’s worth it or not is debatable, but raising clearly invalid points to attack flatpaks is disingenuous.


Does it? Which operating system other than Linux uses flatpaks?


Let’s count the various Linux flavors as different operating systems for this purpose, because they use wildly different package formats. And yes, I’ve installed software for fedora via flatpak that did not have packages for fedora at all or depended on versions of dependencies that the host system did not provide. (Specific python versions and dependencies IIRC)


A package format does not make an OS. The codebase does.


Indeed. If you're an ISV packaging an application that uses OpenSSL, $DISTRO version N and $DISTRO version N+1 can easily be different OSes because they ship different incompatible OpenSSL versions, so what you do is provide a .deb/.rpm/.tar.gz that bundles a statically linked OpenSSL.


You don't understand how something running as root with full access to your entire filesystem is worse than something that has access to your home directory?


In most cases it’s not worse in any way - all the important stuff is owned by me, in my home directory.


That's the thing. Not every package is available on every distribution.

Snap/Flatpak were meant to solve that issue.

Edit: There's also the fact that, practically speaking, most packages on Ubuntu or Debian, for example, are outdated.


> Snap/Flatpak were meant to solve that issue

And they do? Just because they have some flaws doesn't mean that they're completely unusable. If you need an app that isn't on your distro but is on Flatpak, by all means use it.


> practically most packages on Ubuntu or Debian, for example, are outdated.

Some reasonable level of quality control does take time. There are distros that are much more up to date with upstream versions (Arch Linux, Gentoo, etc), but by living on the bleeding edge, you will eventually get cut.


Nix and Guix on your foreign distro of choice are a solution.


> That's the thing. Not every package is available on every distribution.

> Snap/Flatpak were meant to solve that issue.

https://xkcd.com/927/


> Not every package is available on every distribution. Snap/Flatpak were meant to solve that issue.

This "issue" was already solved by static executables. Snap and flatpack are the work of computer illiterates.


You still need to distribute the static executables and there's no way to update them besides downloading them again unless the software has some auto-updater built-in. That definitely doesn't solve the issue.


I'm not sure I follow your reasoning here. If you are so concerned about the security of your system that you want to run each program in a sandbox, then you definitely do not want to allow programs to "update" themselves by automatically downloading random binaries from the internet.


Well, I never said I was concerned with security. I am to some extent but not to the point where I inspect every single update.

What I value most is having every piece of software or library available through some form of package manager. Downloading static binaries off the internet without auto-update just doesn't cut it, which is why I like having Flatpak as a fallback.

It also makes it easier for the common user to have every piece of software available through one store. They don't need to know whether it's a .deb or a Flatpak underneath, it just needs to be there and work reasonably well.

There's really no other option: either distro maintainers include every single piece of software in the repositories (unlikely), or we need a common format that works on all distros (already the case with Flatpak).


Snaps and Flatpaks provide package management without the system privileges required by the system package manager. This is a good deal for users.

On the maintainer/author side, yes, I think static binaries largely perform the same.


What about packages that aren't provided by your distribution? Or when the packaged software in your distribution is too old and you have a reason to use a newer version?


compiling from source, i guess


But compiling isn't so simple. For example, if it requires a dependency that is not in your distribution, then it won't be easy to compile.


You can compile that one (sorry couldn't resist, recently went through this exercise).


Same; I remember I compiled a program like that 3 weeks ago, and it took me half an hour to figure out how to do so. But I think expecting all the casual Linux users to be able to do that is unrealistic. So I think it is necessary for things like Flatpak, Snap and AppImage to exist.

BTW, I used to think that we could just provide a prebuilt binary for users to download from a website, but I found that this isn't easy to achieve unless you use a language like Go or Rust, because many C programs are hard to compile with full static linking.


99% of software is available in rpm or deb. So copy/paste/edit - profit.


Absolutely the opposite of true. A tiny fraction of the code on GitHub, say, is available in rpm/deb. Largely these package popular C and C++ code; the majority of other stuff is not packaged. The language package managers are huge as well.


Screw those 1% I guess.


And people wonder why it still isn't The Year of the Linux Desktop...


Native apps are bad, webapps are really bad, firmware is worse, CPU firmware is even worse. That's the state of the world.


We need easier/more integrated app sandboxing. Last time I tried firejail it was a pain in the ass, but looking at the docs it seems like it may have gotten better. Regardless, distributions could actively promote it (or something like it) as a default way of running apps, with some convenience features.
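
For example, the kind of one-liner that could become the default way of running an app:

  # run a document viewer with no network access and a throwaway home directory
  firejail --net=none --private evince document.pdf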


I have been using firejail for quite some time now. It definitely has gotten better.


Spreading the PPA culture, hoping that usage, and consequently tooling, will improve.

What at least some people don't understand is that PPAs make the source available, so one is not downloading a black box.

Plus, they also add reproducibility, which is not to be underestimated.


PPAs are the worst of the worst and absolutely should not be encouraged.

With PPAs, every time someone runs sudo apt upgrade, they are giving root privileges on their machine to some random person on the internet. No, having users scan the source code every time a package is upgraded is awful.

I reiterate, the most popular PPA is an old 3rd party Java PPA which doesn't even offer Java anymore. That PPA has root access to thousands of machines.


Using an old 3rd party Java PPA is as foolish as it gets, as there is an official OpenJDK PPA that supports all the JDK versions for many Ubuntu releases. I'm also skeptical of the statistic you mention, since Canonical doesn't publish rankings, and download stats need to be retrieved via API on an individual repository basis.

Relying on extremely incompetent users to make a general point is a strawman, not to mention defining PPAs as "random people", as many software products have official repositories or affiliations (I take it you don't use PPAs).

If one likes to scream about administrative privileges to get attention, they're forgetting that any Linux user is giving root access to thousands of packages. So the point is really the web of trust.

If we talk about the past and present, there have been no malicious attacks (or so few that it's hard to find reports). So much for the "worst of the worst".

If we talk about the future, there's no reason why a web of trust can't be built. "To reiterate", lots of PPAs are official, including the OpenJDK one, so if the PPA approach happened to get traction, it'd really be a matter of software authors building their own or appointing somebody to do so.

This is really the concept of maintainers and their network, it's applying to the distro you're using, and it's nothing fundamentally different.


>Using an old 3rd party Java PPA is as foolish as it gets, as there is an official OpenJDK PPA that supports all the JDK versions for many Ubuntu releases. I'm also skeptical of the statistic you mention, since Canonical doesn't publish rankings, and download stats need to be retrieved via API on an individual repository basis.

It was from a podcast with Canonical's popey, who described precisely some of the technical decisions behind snap and why they never bothered to open source it after the disaster that was open sourcing Launchpad. He knows the statistics because most of these 3rd party PPAs were hosted on Launchpad, which was run only by Canonical.

>Relying on extremely incompetent users to make a general point is a strawman, not to talk about defining PPAs as "random people", as many software products have official repositories or affiliations (I take you don't use PPAs).

Systems like this should be designed for 99% of users. PPAs were designed for the 0.1% of system admins and developers, not users. They are absolutely awful UX design; they are inherently unsafe and unreliable.

Expecting users to vet that software is safe just because the source code is available is flatly a stupid idea. 99% of users of any piece of software will have no idea what they are looking at and are incapable of vetting it. Have you manually vetted Chromium, VLC, Firefox, VSCode, etc.?

Publishers on PPAs don't test against every distribution, and if they publish a package or dependency that breaks system libs, then users are stuffed, with little recourse. I doubt every PPA owner tests 14.04, 16.04, 18.04, 20.04 and 20.10 builds to check their PPA won't break anything.

>If we talk about the past and present, there has been no malicious attacks (or in number so small, that it's hard to find reports). So much for the "worst of the worst".

Which has inspired both Red Hat and Canonical to try and move towards flatpaks/snaps instead? The reason malicious attacks aren't there is that PPAs aren't that popular, because for good reason people tend to stick to the main repos.

Already on snap there was evidence of people bundling a cryptominer, which was detected by Canonical. You think nobody has ever attempted to build/publish malware through PPAs? Please.

>If we talk about the future, there's no reason why a web of trust can't be built. "To reiterate", lots of PPAs are official, including the OpenJDK one, so if the PPAs approach happened to get traction, it'd be really a matter of software authors to build their own or to appoint somebody to do.

My web of trust is purely Canonical. I chose it when I downloaded their OS. I trust their repos and snap store. I don't need to trust a random Russian PPA for any reason. If a dev wants to publish something newer, put it on the snap or flatpak store or I won't use it.


Do PPAs work on non-Ubuntu distributions? I thought there was one old ticket about supporting Debian, and it was ultimately closed or abandoned.

Also, PPAs do absolutely nothing about sandboxing. It's a different kind of concern.


PPA (personal package archive) is the Ubuntu name, but other distributions make it possible to do the same. The system is a way to distribute packages through the package manager, and repository metadata is hosted centrally.

Fedora has such repositories (RPM Fusion, Copr). Arch does, too (AUR). Other distributions, including Debian, can use them as well, so long as there exists a community.


I would not really call RPM Fusion a PPA; it's basically a repo with packages that are inherently not open source or are patent encumbered and thus can't be in the regular Fedora repos (or Copr, actually). Otherwise the community maintains it to a very similar standard as the regular Fedora repos.

Compared to that, Copr is a real PPA where you can build stuff into a personal repo (as long as it's built from source, the licensing is fine and the thing is not patent encumbered).


It is a completely different approach: instead of sandboxing potentially malicious software, prevent it from getting onto your machine in the first place. This approach works for open source only.

Most popular distros provide something similar to PPAs, for example the AUR in Arch.


This is "perfect but not possible" territory. Even with full access to the source, you can't be sure whether it's malicious, or whether it can be made malicious at runtime.


For rpms the alternative would be Fedora COPR. (https://copr.fedorainfracloud.org/)

It is rpm-specific, but not Fedora-specific.


> Also, PPAs do absolutely nothing about sandboxing. It's a different kind of concern.

Therefore we use open source. Anything you don't trust is hard to run safely even in a sandbox.

BTW +1 for PPA culture.


It's a totally different concern. I don't understand how PPAs and similar distribution approaches (i.e. native packages with hosted build systems) are getting mingled with flatpak and/or snap sandboxing objectives.

You can distribute open source software via flatpak or snap. And you can create a build system that takes open source software as a source and creates a flatpak or snap distribution.

It's totally possible to hide a backdoor in open source software.

A working sandbox will prevent certain attacks to your system, whether it was built from an open source or not. This already works on most (all?) mobile operating systems.


i was responding to the sandboxing part.

not having a sandbox is less of a problem when you run FLOSS (in the context of PPAs that's relevant i think)


> not having a sandbox is less of a problem when you run FLOSS (in the context of PPAs that's relevant i think)

I disagree. Source auditing is irrelevant for day-to-day software use.


When was the last time you fully audited a PPA? Like the source, that it builds from the canonical source, all patches it applies, all install scripts it packages and all changes it actually makes? The trustworthiness of the PPA author? I know that I, at best, do cursory checks based on the reputation of the author or of the source linking to the PPA, but I'm aware that this is an honor-based system, and open source is practically irrelevant to that.


good point. i never checked it.


Open source doesn't prevent CVEs that allow attackers to take over anyway.

What if your install script has a bug that lets an attacker place arbitrary SUID binaries? Or a bug that deletes your entire system? (Steam had this bug, and the script was open and available; they weren't the only ones, either.)


PPAs? You get the source and the build and it's all open...

I mean, Launchpad could be modernized and brought to the 21st century but all the functionality is there. I'd love to hear arguments against it though :)


appimage


AppImage has a feature that neither Snaps nor Flatpak do: You don't need a repository or special manager for them to work. You can literally just take the AppImage file to (almost) any Linux Desktop on a thumb drive and run it.


Provided that the application inside the AppImage is built in a portable way, for which AppImage provided zero support whatsoever, last time I checked.

Flatpak has Runtimes/SDKs for this; Snap has the one true Canonical-controlled base snap.


compiling from source, i guess


Here is a comparison between Flatpak, Snaps and Appimage.

Notice the row in the table regarding sandboxing. Someone is telling porky lies here.

https://www.fosslinux.com/42410/snap-vs-flatpak-vs-appimage-...



In this week's story of bad open source contributions[1], we bring you some dude who keeps paying for a domain and spending wads of time writing long and poorly targeted diatribes against one of the only genuine efforts to actually address the thing he purports to care about. Instead of filing bug reports, or writing code, or advocating for the thing that he wants? Or at least engaging in the bare minimum effort to assign blame correctly.

- The fact that GNOME Software says things are "Sandboxed" without further information is (gasp) a GNOME Software issue. And I don't think anyone is all that happy about it. In fact, there's a redesign pending, but resources are limited. GNOME Software is a rickety old beast and needs love, but it would be nice if people weren't trying to tear down surrounding projects as a result?

- Good job finding a security vulnerability. Please report an issue with the gitg Flathub repo[2], and with freedesktop-sdk[3], respectively. Congratulations on your amazing contribution! Have a free t-shirt before they run out [4].

- Speaking of Flathub, there is a discussion to be had about Flathub's lackadaisical approach around submitting stuff (and updating it). Maybe write about that? Or, better yet, don't write about how it's terrible and kills babies: write about how we can solve this stuff. For example, we could add some better backend tooling for Flathub that checks for unpatched libraries. Flatpak makes this sort of thing pretty doable. The Flatkill guy seems to have some experience with that, so, maybe that can be his next contribution (after he reports those two issues in their respective bug trackers). Heck, "Flatkill" is an okay name for a vulnerability scanner, so we're halfway there.

- To clarify, it's okay giving these things shit in your spare time, but once you're running a domain and spreading fear and uncertainty for dubious reasons (when being constructive is actually less effort), you're just being an asshole.

[1] https://news.ycombinator.com/item?id=24658052

[2] https://github.com/flathub/org.gnome.gitg/issues

[3] https://gitlab.com/freedesktop-sdk/freedesktop-sdk

[4] https://hacktoberfest.digitalocean.com/


I really wonder what the end goal of the site's author is. Do they want flatpak to be eliminated? Do they think they're stopping the next "systemd" from taking over their favorite distro?



Is there a positive case for Flatpak? I have only ever heard people basically cursing its existence.


> Almost all popular apps on Flathub still come with filesystem=host or filesystem=home permissions, in other words, write access to the user home directory

I agree it's absurd to have a sandbox system that doesn't protect the home directory [1]

Browsers and smartphones have an advantage here, because every app uses their file-open and file-save dialogs. If the user wants to open /home/user/pictures/whatever.jpg the sandboxed code gets access to that file and that file only, with the user's explicit consent. And if the app's file formats and suchlike don't match that way of working, tough luck because there's no alternative.

Whereas on Linux, where there are already 6+ other ways of packaging and distributing your app, a new entrant doesn't have the power to dictate terms or force developers to change their programs. And distributing a version of Gimp that couldn't open files in the user's home directory would be absurd.

[1] https://xkcd.com/1200/


OS X managed to solve this: using the standard file browser API grants access to the file. That's a little more challenging on Linux, though perhaps not impossible.


That's exactly how it works in Flatpak, and you get it for free if you use GTK's FileChooserNative dialog: https://developer.gnome.org/gtk3/stable/gtk3-GtkFileChooserN...

If your application is inside a sandbox, it communicates with another process that presents the file chooser and hands over permissions based on the user's selection.

It is possible to choose directories in the same way, and portals are always improving: https://github.com/flatpak/xdg-desktop-portal.

Plenty of Flatpak applications actually do use this, but the author of this website loves to pick out the ones that don't so he can act like the project is flawed to the core and justify his sensationalist domain name. But actually it is very solvable, it is being solved, and in many cases you can tighten an application's sandbox with a two line diff.
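
The same document-portal mechanism the file chooser rides on can also be driven by hand, e.g. (app ID and file are illustrative):

  # grant one sandboxed app access to one specific file
  flatpak document-export --app=org.gnome.gitg ~/notes.txt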


To be fair - if you make a website named flatkill you're signalling you don't want a dialog with the developers, and just expect them to throw in the towel and call it quits.

Later edit - just to be clear, it's the kill part that I have a problem with. For example, name it flatrepair and developers will likely see your statements as something constructive.


If I understand correctly, a lot of desktop apps need to rewrite parts of their code to adapt to flatpak (and probably also snap) to use the sandbox correctly. This post is therefore not criticizing flatpak but GNU Octave, for not having enough dev power to implement it.


> a lot of desktop apps need to rewrite their parts of code (...) to use the sandbox correctly.

This is completely backwards. Security features and encapsulation are the responsibility of the operating system. Never of the individual applications nor the package maintainers. A good, secure, operating system should be able to run hostile apps safely. For all its shittiness and complexity, this is something that browser developers got somewhat right.


I don't agree with your statement.

Apps can use some low-level calls to the OS to do things that cannot be replicated in the same way in a sandbox. So it is not surprising that some parts of apps must be rewritten.

If they are not rewritten, they just don't work in the sandbox. That's exactly the goal of a sandbox: provide a safe API for applications, and block apps that try to access more.


Currently I happily use flatpak for Slack, Teams, Bitwarden, Spotify, Geeqie, Zotero, Inkscape and other things.

The options here are:

1. Run those things directly as OS packages or sometimes AppImages or just precompiled binaries.

2. Run them via flatpak.

3. Don't run them

I don't think 1 is more secure than 2, and 3 is not really something I care for.

So sure, Flatpak has its issues; however, I still appreciate it, and IMO it is better than snap and AppImage. I also use AppImage for some things where the sandboxing of flatpak gets in the way too much, but snap never worked for anything that I tried it with.


This seems to miss the article's point. You wrote:

> I don't think [running apps directly as OS packages] is more secure than [running apps via flatpak]

The article's claim is that running them via flatpak _is_ less secure, because they don't automatically get security updates when the distro's libraries are updated and, as a result, they're less likely to receive prompt security updates. Do you disagree? If so, why?


The Flatpak model is good for "pet apps" where you have a company, team, project whose entire job is to make this one app. Relying on upstream to fix security vulns is less of an issue because these companies have incentive to fix them. Spotify, Teams, Skype, Bitwarden, Slack are all pet apps.

The Flatpak model is terrible for distros who are already strained for people and who benefit from factoring as much of the work as possible out of each application so you only have to update one place and push out a fix everywhere.


For non-open-source software, dependencies are often packaged with the software itself, so it would be the same in flatpak or in the distro. It is true for software that uses shared libraries, though.


But they also use a lot of system libraries.

E.g. they likely will not bundle openssl but use the system's openssl.


What I don't like about installing it from the distro is polluting my OS and filling my /usr/bin with some user app that should live in my /home. But I agree that the security updates are a real issue.


But it is kind of a moot point, because you can update the flatpak stuff. I run dnf update -y to update the packages on my Fedora laptop and flatpak update -y to update my flatpaks. It updates Signal and the Fedora platform libraries (security updates, anyone?).


If I could get the same things from distro I would maybe be more inclined to use distro packages, but in many cases I cannot.

My distro either does not have the things I use from flatpak or has them with some deficiency (like an older version or missing compile-time features).

I can still get rpms for some, like Teams, but those also come with bundled libraries, and the Teams flatpak has very tight restrictions, so if I am concerned for the rest of my system, flatpak is more secure.

For some other things on that list there are no rpms however.

So yes, you are more dependent on the maintainers behaving well, but you are also more insulated than when running things without any sandboxing.


Well, the sandbox is a lie. Did you RTFA?

Plus, you can have rpms that bundle their own dependencies, but that's rare, and they will still be upgraded if they're in the repos.


> Well, the sandbox is a lie.

It's not, though. And for Teams, filesystem access is severely restricted, only allowing access to xdg-download.


Agreed. Things don't have to be perfect. They just have to be a little better than alternatives.


I agree with that; flatpak is very much a work in progress. A lot of packages are still unofficial. Some software is also not meant to be sandboxed and does require access to the whole file system. I do think it is going in the right direction, though.

The author is also confusing GNOME Software (the GUI for managing apps in GNOME, which has a flatpak backend) with flatpak itself. Marking the app as Sandboxed when it's not is a GNOME Software issue, not a flatpak one.


Things don't have to be perfect, but they need to be honest.

To state that an application with full access to the host's filesystem is "sandboxed" is surely very harmful and very misleading.


It is sandboxed; sandboxed in this context just does not say anything about filesystem access. But it still says something about how it is running, and again, most people would expect something like GIMP to have access to the host filesystem when they install it. You have options to whitelist specific directories in Flatseal if you want to restrict it more.
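
The command-line equivalent of what Flatseal does, using GIMP as the example:

  # replace GIMP's blanket host access with a single whitelisted directory
  flatpak override --user --nofilesystem=host org.gimp.GIMP
  flatpak override --user --filesystem=~/Pictures org.gimp.GIMP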


> I don't think 1 is more secure than 2

It depends - do the apps in version 1 dynamically link to system libraries that can be kept up to date? If they do, they may be more secure than the flatpak versions. If they simply come as statically linked blobs, then you are right.
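
A quick way to check which case you're in (binary path hypothetical):

  # a dynamically linked app resolves the system's openssl at runtime
  ldd /usr/bin/someapp | grep -i libssl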


> dynamically link to system libraries

It's very likely for openssl ;=)


I downloaded slack from slack's download page at https://slack.com/intl/en-it/downloads/linux

It's either a .deb, a .rpm or the snap store. Where do you find the flatpak?

I know that I downloaded the .deb (Ubuntu 20.04 here.)

I checked what I got installed now

  ii  slack-desktop  4.9.1        amd64        Slack Desktop
It's the same version listed on their site and it's from September 17. Maybe it autoupdates. I can't remember if I reinstalled it two weeks ago. I don't think so.


You can get it from Flathub. It's packaged by volunteers, who use the .deb they get from the Slack website.

https://flathub.org/apps/details/com.slack.Slack


Which distro are you on that it doesn't even have an official Inkscape package?

edit: since you mentioned Snap I assume Ubuntu. What's wrong with `sudo apt-get install inkscape`?


Inkscape 0.95 is in the standard repos and will remain that way for the life of 20.04.

Inkscape 1.0.1 is available via the snap maintained by the Inkscape devs, which will continue to get upgrades and gives me the option to switch over to edge releases if I want them (currently 1.1).


I use Fedora. The Inkscape that comes with Fedora was missing some functionality last time I tried to use it; I can't recall exactly what now.


Snap does have some wild issues that appear to stem from its sandboxing.

For me: I have discord and spotify installed via snap, with standard (not classic) confinement, and have had zero issues whatsoever with them. I also use VSCode and DataGrip via snap w/ classic confinement, and all of these work flawlessly.

I also have Slack installed, with classic confinement. It had issues with clicking links causing a separate Firefox process to be launched, rather than re-using the one I already had open. This was resolved with a config change inside firefox (weirdly enough), and is a common issue. Otherwise, it works great.

Development tools like node and go are another story. NodeJS is officially distributed via snap and appears to run relatively well in classic confinement, but I've also run into some really strange issues where, for example, a node process starting a child node process either silently fails or takes multiple seconds to start up, which causes issues in our company's testing framework. Go is not officially distributed but is kept up to date by the community; it has major issues interfacing with VSCode, though (even though both VSCode and Go are in classic confinement!).

Overall, I really think Snap is a massive step forward in package management for Linux. About 80% of applications I use are available there (versus maybe 40% on apt, 50% on apt if you include "run these dozen commands to use our custom apt server because we derive pleasure from making life miserable for our users" and 20% on flatpak). But it has very obvious issues even non-technical users would see immediately. Hopefully they can work through them.


I can't comment on its security, but all of the snap apps I've used have such an insanely high start-up time that it's genuinely confusing how it could be so bad.

On my laptop with a latest-gen Ryzen CPU and an NVMe SSD, the Chromium snap takes 30s to 1 min to launch, while the .deb launches in under a second. Same for Spotify, Discord and others that I've used. The start-up time is so bad that I replaced them all with .deb versions, which has been much better.


I have not noticed this in GUI applications, which generally have some overhead anyway, so the snap overhead is negligible. I have noticed it with simple CLI applications, such as jq, where the snap is often 10x+ worse than native (but we're talking 10x of microseconds; it enters the tens of milliseconds, which becomes noticeable, but it's not horrible).

Snap startup overhead is a known issue. It's definitely a thing.


Just curious, what do you find so miserable with 3rd party apt repos? I mean, just add it once and it will integrate with dependencies from your existing repos. I have never had any issues with such setups (as long as the 3rd party repo is maintained, of course).


Well, you're wrong; 1 is more secure than 2. That's the whole point of the article. OS packages and precompiled binaries don't install their own versions (often older and unpatched) of the core libraries they are dependent on.

By all means, use flatpak, but be aware you are seriously degrading your environment's security.



