
> I would prefer less app-ification, more package management.

I would argue for the opposite: One problem with Linux is that you can't run untrusted software without it getting the same access as the user. Solving that is exactly what app-ification is about.

I see package managers as a source of some of Linux's problems: Package maintainers are an unneeded middleman between the writers of the software and you; package managers interact with the whole system, scattering files around the whole filesystem and running scripts as root when installing/uninstalling packages. Applications being files/folders that can be installed/uninstalled with cp/rm seems way more unixy to me.




> I would argue for the opposite: One problem with Linux is that you can't run untrusted software without it getting the same access as the user. Solving that is exactly what app-ification is about.

> I see package managers as a source of some of Linux's problems: Package maintainers are an unneeded middleman between the writers of the software and you; package managers interact with the whole system, scattering files around the whole filesystem and running scripts as root when installing/uninstalling packages.

The high standards imposed by distros (see, for example: https://www.debian.org/doc/debian-policy/) and the tedious work of packagers have been a very important part of the free software ecosystem, and have kept GNU/Linux snoopware-, malware- and virus-free and trustworthy [1]. Packagers are far from an "unneeded middleman".

Here's a very recent example of this system at work: Chromium sneaks a binary blob which secretly listens to your microphone and sends data to Google: https://lwn.net/Articles/648392/

This is just one instance; there have been many, many examples of this.

There are just so many, let me say, shady "apps" out there that I don't install anything from the Google Play Store on my smartphone. On my desktop/laptop, however, I feel safe installing whatever I need from the official repositories. I personally don't want to use a distro to which "app" developers from the Android ecosystem can freely push their programs (and updates to them).

> Applications being files/folders that can be installed/uninstalled with cp/rm seems way more unixy to me.

But that's essentially what a package manager does for you, although it takes away your freedom to screw up in the process. In GNU/Linux, it essentially 1) protects you from accidentally screwing up your system and 2) allows you to trust system-wide binaries (assuming your distro is trustworthy).

App Stores are also package managers, so I'm not sure what you're trying to point out here.

====

[1] I'm excluding Ubuntu here. Ubuntu comes with spyware, spies on you by default and monetizes the privacy and trust of its users [2]. I don't trust it, I don't use it, and I recommend against it.

[2] http://www.gnu.org/philosophy/ubuntu-spyware.en.html


The downside is that you are limited to the software your distro has vetted and provides. With a strong and trusted app sandbox, I can more easily trust less-vetted software, because I can see what it can do before running it.

E.g. if it can only access files outside of its own settings folder after prompting me, I know that it has far less ability to screw up my system.

I don't think that can replace a package manager for "complex" or "infrastructure" software, but for other things it could open the selection up. Many people already run sandboxed applications, in sandboxes called "Firefox" or "Chrome".


> There are just so many, let me say, shady "apps" out there that I don't install anything from the Google Play Store on my smartphone

Lately I've found myself checking if something is available on F-droid before installing from Google Play. Sometimes it is!

If something is on F-droid, I'm less worried about it being scammy spyware disguised as useful software.


In some cases the version on F-Droid has been modified to remove the questionable functionality (tracking, ads, etc.). So it's possible the Google Play version has that stuff even though the app is also available on F-Droid.


> Here's a very recent example of this system at work: Chromium sneaks a binary blob which secretly listens to your microphone and sends data to Google: https://lwn.net/Articles/648392/

Downloading a binary blob in the first place was the real issue. However, the voice recognition module wasn't actually used unless you opted into the "Ok Google" hotword feature.

Getting back on topic, Android handles similar problems in a different way, via the permission system (esp. the improvements in Android 6.0+).


> However, the voice recognition module wasn't actually used unless you opted into the "Ok Google" hotword feature.

So says someone, but you don't know for sure what else the binary blob (which can be "updated" at the will of Google, without asking the user anything) does, do you? If you're content with that explanation, you're essentially taking their word for it with blind trust. For example, will it push a new binary and/or suddenly start listening to you or send certain files from your computer if some three-letter government agency tells Google to do so? Would you bet your life on it?

Trust in software may not be that serious for you, but there are people out there whose lives depend on it.

The issue here is twofold, and in terms of security, it being a binary blob is the lesser one: 1) this is a binary blob which hasn't been vetted by the eyeballs of the FOSS world, and 2) Google circumvents the package manager (along with the package reviewers and the FOSS community) and secretly and freely installs (and updates) a binary blob on your system, which is essentially a closed-source backdoor singularly controlled by a US company.

A program that silently pushes programs on users' systems (and silently executes them!) at the pleasure of a company never had any place in Debian or any distro with similar principles. It wouldn't matter if they pushed the source code and compiled it on your system (in fact, some rootkits work just like that).

> Getting back on topic, Android handles similar problems in a different way, via the permission system (esp. the improvements in Android 6.0+).

You're talking about a totally different problem/class of permissions here, though (accessing the network/video/audio/filesystem, etc. vs. the ability to install packages as a non-root user).


Oh please. If Google wanted to record from Chrome users, they could do it easily. If you actually trust packagers to know what Chrome is doing, you're crazy.


I trust the packagers to build software in a way that I can be sure matches the published source package. That's the important difference. Projects bundle blobs into their binaries that aren't in the source code all the time (cf. Chromium as discussed, but Firefox did (does?) this too).

Of course, you could say that in a large codebase like Chromium, bad things can be hidden. But at least there's a better chance such things are found, and it's riskier to put them there in plain sight.

In a perfect world, all software would have reproducible builds and there would be no issue with trusting that binaries contained no hidden functionality not in the source, as there'd be third parties to rebuild and compare the result against the published binaries. We're a ways away from that.
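As a rough sketch of what that third-party check could look like on a Debian-style system (the package name is just an example, and the checksums will only match where the build actually is reproducible):

    # rebuild the published source package locally
    apt-get source hello
    cd hello-*/ && dpkg-buildpackage -us -uc
    # compare the rebuilt .deb against the one the archive distributes
    sha256sum ../hello_*_amd64.deb
    apt-get download hello && sha256sum hello_*_amd64.deb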


The Android 6 changes are a smokescreen.

You can't use it to deny specific permissions, only groups of them. If an app wants to write something to your contacts, you also have to allow it to read them. This is so even though the core permission system allows that level of granularity.

Also, network access is deemed safe (by Google) and is thus in a group of permissions the user has no control over.


> I see package managers as a source of some of Linux's problems: Package maintainers are an unneeded middleman between the writers of the software and you; package managers interact with the whole system, scattering files around the whole filesystem...

It sounds like you'd enjoy Slackware's approach to package management. Zero dependency resolution, packages are built from the source tarball with little or no patching (unless you patch it yourself), and the files are installed where the author intended them to be (though that can be good or bad depending on your perspective). If the application has dependencies, you're left to install them yourself, ensuring that only the absolutely necessary dependencies are installed, and not, for example, all of Gnome when you just want a simple text editor.

> ...and running scripts as root when installing/uninstalling packages.

Given that this happens even when you build from source and install yourself (./configure, make, sudo make install), I'm not sure what you're asking for. Perhaps the Mac OS X method of "the app is a folder with pre-built binaries and all necessary libraries"? And even some OS X apps require elevation to install, especially those distributed as .pkg files.


The parent is advocating for the Slackware approach—just take whatever's in the upstreams and slam it together—but installed into a container (so all that slamming doesn't conflict with anything), and distributed as a container image. This is effectively the 'modern Linux app philosophy', as supported by systemd et al. It's similar in concept to app bundles, but it also assumes automatic sandboxing.

It's great for games in particular, given that you'll be able to boot up a 10-year-old game container-image as if it were new. (In that way, it's like game emulation, but without the performance drawbacks.) Most other "app" software gets lesser-but-still-significant benefits, in much the same way you see on e.g. the Mac App Store with their self-sandboxing apps.
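As a sketch of what that could look like with today's tooling (the image path and entry point here are made up; systemd-nspawn is just one possible runner):

    # boot a filesystem tree containing the old game plus all its libraries,
    # isolated from the host's own libraries and files
    systemd-nspawn -D /srv/images/old-game-2006 /opt/game/run.sh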

There's little advantage in distributing system software this way, given that system software is usually full of low-level extension points that other system software plugs into to achieve its purpose. And this is what can make a platform abstraction layer like Android make sense. With a platform, the system software is all composed together by the platform maker into an underlayer, and a single, clean API is specified atop it. Above the platform abstraction, there doesn't need to be any exposed runtime model for system software, since it's all been encapsulated into the black box of the underlayer. The platform only needs to specify a model for developing+running+distributing app software.

Sometimes things straddle the boundary, though. There are numerous cases of software that has "failed out" of the Mac App Store after a while, because the software is fundamentally a system-software/app-software hybrid, and sandboxing the system-software aspects just didn't work.


> The parent is advocating for the Slackware approach—just take whatever's in the upstreams and slam it together—but installed into a container (so all that slamming doesn't conflict with anything), and distributed as a container image.

Have you heard of Nix and NixOS? It doesn't use containers for this, but each package's tree is kept isolated from the others'.

https://nixos.org/nixos/about.html


Nix is probably the optimal package manager, but it still is a package manager, and so still encodes particular ontological choices that get made differently by different distros at different times.

In other words: I don't just want my game from ten years ago to run; I want my game from ten years ago that was built for a RHEL-alike to run on the Debian-alike I've got today (or maybe even using FreeBSD's Linux-binary support.) Nix can't give me that, except by just being the particular tool that ends up putting together the packages that go into a container-image.

Nix can be used to exactly re-instantiate the same set of files ten years later, presuming the original Nix repo you found the software on was still alive ten years later. In that way, it is similar to creating a container-image. But for this use-case, it doesn't give you anything that the container-image doesn't, while the container-image does give you an assurance that a stable (sandbox-based) interaction model was defined at the time of the image's creation, which can still be used today.

Something that intersected the Nix package-management model with the Mac App Store "only self-sandboxed apps will be accepted" model would be strictly equivalent to containers, though.


GNU Stow gives shell users a great way to build contained applications that can be patched into the traditional filesystem through symlinks, avoiding the uncontrollable spew of files everywhere.
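For anyone who hasn't seen it, the workflow is roughly this (package name and prefix are illustrative):

    # install the application into its own self-contained tree
    ./configure --prefix=/usr/local/stow/foo-1.2
    make && sudo make install
    # symlink the tree into /usr/local, and remove it again just as cleanly
    cd /usr/local/stow && sudo stow foo-1.2
    sudo stow -D foo-1.2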


Or take a look at GoboLinux, which applies that to the whole distro.


I've never liked the traditional unix way of managing a filesystem either; GoboLinux had the right idea.


Well, for the most part because it uses what Unix offers rather than ignoring it.

The basic problem with most package managers is that they ignore the soname mechanism.

https://en.wikipedia.org/wiki/Soname

This allows multiple versions of a lib to live side by side, while giving the binaries the version(s) they want.

Package managers ignore this, and are instead hung up on getting that one "true" version everywhere. This is because their basic design only allows one version per package name. Thus you can't have ABC 1.0 and 1.1 installed at the same time; instead you have to mangle ABC into something like ABC1 to allow 1.1 to install without removing 1.0 in the process.
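For reference, this is all the soname mechanism asks for; a minimal sketch with a made-up library, where old and new binaries each keep loading the version they were linked against:

    gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo_v1.c
    gcc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.0 foo_v2.c
    ln -s libfoo.so.1.0.0 libfoo.so.1   # old binaries resolve this at runtime
    ln -s libfoo.so.2.0.0 libfoo.so.2   # new binaries resolve this
    ln -s libfoo.so.2.0.0 libfoo.so     # plain '-lfoo' links new builds to v2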


I wouldn't strictly blame the package managers. Last I checked, it's not straightforward to link against any library version other than what the versionless .so symlink points to (-llibname will always use liblibname.so). The lame workaround is to move a portion of the version into the basename, and that's where you get libname2.so, so you can explicitly ask for it with -llibname2. You can specify the path of the library file to link against, but this doesn't get you the same dynamic linking semantics (and you really want to let the linker search the cache for the exact file rather than having to specify the path yourself).

It's actually pretty straightforward to build multiple packages with the same name and different versions that can be installed side by side: it's like a single line in an RPM spec file, as long as the files list doesn't conflict. A lot of the required effort is in building the software that uses the library. The library's header files are often not in versioned paths, so the -devel packages for multiple versions have conflicts. It's easier to specify the option to configure that installs all the files with a version suffix and just be done with it. It is, unfortunately, not in the interest of the person doing the packaging to significantly diverge from how upstream suggests it be installed in their docs. That is, what should happen is entirely doable, but, as part of the larger ecosystem of software and packaging and distributions, it's complicated and has little payoff.


In my experience, as long as -soname is provided to ld, the rest should sort itself out.

At least it seems to work wonderfully for daily usage on Gobolinux.


I'm not sure exactly what you're referring to; the -soname ld option is used when building a library to set the name that library is internally known as, and that name is then used by ld.so at runtime to find the appropriate .so file to dynamically link against.

The issue I described is at build time of a program that uses a shared library. Given this list of available libraries:

    lrwxrwxrwx     21 Apr 20  2011 libevent.so.2 -> libevent.so.2.1.3
    -rwxr-xr-x 106656 Aug 20  2010 libevent.so.2.1.3
    lrwxrwxrwx     21 Jun  3  2012 libevent.so.5 -> libevent.so.5.1.3
    -rwxr-xr-x 277760 Jun  3  2012 libevent.so.5.1.3
    lrwxrwxrwx     21 Jun  3  2012 libevent.so -> libevent.so.5.1.3
there is no way to use the -levent option to gcc or ld to specify that you want to link against libevent.so.2. If you just give it -levent, it will always link against libevent.so, which, as these symlinks show, points to libevent.so.5.1.3 (which is actually libevent-2.0) — this is documented in the ld man page under the -l option as part of how "namespec" is resolved. Unfortunately, the APIs are different between 1.4 and 2.0, which is why this is necessary: not everyone who uses libevent has updated their code to use the new libevent API, yet we still want to use that software.

The "fix" is to move some kind of "logical" version number into the filename (although, which is "logical" and which is "physical" becomes ambiguous), like so:

    $ ll libevent[-.]*
    lrwxrwxrwx     21 Apr 20  2011 libevent-1.4.so.2 -> libevent-1.4.so.2.1.3
    -rwxr-xr-x 106656 Aug 20  2010 libevent-1.4.so.2.1.3
    lrwxrwxrwx     21 Jun  3  2012 libevent-2.0.so.5 -> libevent-2.0.so.5.1.3
    -rwxr-xr-x 277760 Jun  3  2012 libevent-2.0.so.5.1.3
    lrwxrwxrwx     21 Jun  3  2012 libevent.so -> libevent-2.0.so.5.1.3
so you can obtain a linkage against 2.1.3 by specifying -levent-1.4. This "encourages" one to use the 2.0 API by using libevent-2.0 if you specify a bare -levent. The -devel package with the header files will usually be for the latest version anyway, complicating the build process on a single machine with both libraries available (you'd have to build against a local/non-system-include-tree copy of 1.4's headers). I know some Red Hat/CentOS packages do do this (which is why I have both libevent-1.4 and libevent-2.0 on my system), but I don't know how those packages specifically do this.

The other way around this is to explicitly link against the full path /usr/lib64/libevent.so.2.1.3 and not use the -l option at all for this library. This works in part because of the -soname option you pointed out. While this works, and produces the same output as if one could use -l to select the correct library file, most of the build tools we have at our disposal prefer to use -l to select libraries at build time rather than full paths (because if you use -l to select a system-installed library, ld does the work of searching its cache and standard paths for you; however, you can get around this with the :filename version of namespec to the -l option). This means that some programs' build processes would be different on systems that have only the old, only the new, or both libraries on the system, and a bunch of programs' build processes might need to change once the new version comes out.

The problem here is that -l only lets you specify the name, not the version to link against, under the (arguably well-founded) assumption that old versions should not be used for new builds but you need to keep them around for already-built programs to link against (dynamically, at run time).
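For completeness, the ":filename" form mentioned above looks like this with GNU ld (continuing the libevent example; the output names are arbitrary):

    # link explicitly against the old ABI while still letting ld search its paths
    gcc old_api_prog.c -l:libevent.so.2 -o old_api_prog
    # a plain -levent resolves via the libevent.so symlink, i.e. the new ABI
    gcc new_api_prog.c -levent -o new_api_prog

The header-file problem still applies, of course: you'd still have to point the compiler at the matching set of includes.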


> The basic problem with most package managers is that they ignore the soname mechanism. ... This allows multiple versions of a lib to live side by side, while giving the binaries the version(s) they want.

I see you've qualified your statement with "most", so you're not in the wrong, but unless I'm mistaken both Nix[1] (and NixOS) and Guix[2] handle this problem well and do it in a way which scales infinitely.

[1] https://nixos.org/nix/

[2] https://www.gnu.org/software/guix/


> The basic problem with most package managers is that they ignore the soname mechanism.

Well, there are a million hobby-project package managers, so I'm sure many of them fall into that category. But the big ones, the ones used by the vast majority of Linux users, don't have a problem with this (deb- and rpm-based distros commonly version their libraries and install multiple versions side by side). This is a pain in the ass for the package maintainers, as a lot of software developers don't think about versioning their API, but it's great for users.


Which package manager are you thinking of? The dpkg ecosystem (i.e. Debian and Ubuntu etc.) definitely uses sonames. It is true that the soname version ends up in the package name, but this happens automatically, at build time, and it's only the binary package name that carries the soname, the source package remains unchanged.


So do Red Hat & its spinoffs.

You can install[*] packages with libxyz.so.1 and libxyz.so.2; you cannot install two -devel packages, because that package contains a libxyz.so symlink to a specific version.

Often, the system even installs such packages for you (they have "compat" in the package name, e.g. compat-libstdc++).

[*] Provided they don't contain additional files, such as assets, that conflict with each other. But this problem is not solved by Gobo either.


Dependency management and isolation are two different things. With Nix I can get either or both (with isolation in the form of lightweight containers).


xdg-app is solving those problems, although I don't know if it will be adopted, especially by KDE.


I would sorely like both. The system-level stuff should be package-managed to create a coherent base system. For individual apps I would really like to use the latest version of e.g. LibreOffice without having to leave the comparative stability of an LTS distribution or add a PPA. I would also like to run something like Skype or Dropbox in a sandbox.


I think many distributions* are evolving toward even better compartmentalization with containers/VMs; I just wish we'd work more toward that and indeed stop having to package apps for every distribution...

* QubesOS, SubgraphOS, NixOS(?), etc.


Can't we say the Google Play Store for Android is like a package management system?


su "limited account" -c "questionable binary"?


A large part of the value of app sandboxes is easy but controlled ways to allow exceptions. E.g.: an app can ask the sandbox for file access, the sandbox then prompts the user to select a file, and only that file is then exposed to the app.

Or even granting global access to functionality from a manifest file, without having to set up a restricted user/environment manually. (I wouldn't know, without looking it up, how to set up a Linux user account that can't talk to the network, or even better, can only talk to some part of the network.)
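(For what it's worth, the "no network at all" case can be done today with namespaces rather than a separate user account, though it rather proves the point about manual setup; the binary name here is a placeholder:)

    # run a command in its own empty network namespace: no interfaces except lo
    # (needs root, or combine with --user for an unprivileged user namespace)
    unshare --net ./questionable-binary
    # for "only part of the network": give it a dedicated namespace and add
    # only the interfaces/routes you want it to see
    ip netns add jail
    ip netns exec jail ./questionable-binary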


Reminds me of a friend of mine who, when provided with a manual for how to do something, says "that's too complicated, please do it for me".

Meaning that "containers" just become a wrapper for things that can already be done, if one just learns to do it rather than pointing and drooling (as someone once described the GUI).


Who needs restricted user permissions, you can just review the machine code of all your executables before running them ;)


har har har...


Doing it for the humans is exactly the raison d'etre of software.


Yes and no. The problem is that the magic word "container" obscures what is actually being done (cgroups, namespaces, iptables, etc., etc.).

It's one more thing that results in confusion between user and computer about the state of the machine.
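To make that concrete, this is roughly the kind of thing a container runtime is doing behind that word (a sketch assuming cgroup v2 with the memory controller enabled, and a root shell):

    # resource limits via cgroups
    mkdir /sys/fs/cgroup/demo
    echo 200M > /sys/fs/cgroup/demo/memory.max
    echo $$ > /sys/fs/cgroup/demo/cgroup.procs
    # isolation via namespaces (mount, PID, network)
    unshare --mount --pid --net --fork /bin/sh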


It's a bit extreme; if you follow your logic, why even bother with an OS?!


I prefer to encode my bits on the disk with a needle and magnet, thank you very much...



