
As a hardcore Linux user, I'm really torn about Android.

On the one hand, it has created a nice software ecosystem by pushing a consistent set of APIs. Linux desktop applications were never remarkable except for a few; I think fragmentation into a myriad of frameworks led to this. The only two X11 applications I use are Firefox and zathura.

On the other hand, too much stuff has been redone in Android. It's too foreign. I would prefer less app-ification, fewer stores, and more package management. And no mothership, no Google.




> I would prefer less app-ification, more package management.

I would argue for the opposite: one problem with Linux is that you can't run untrusted software without it getting the same access as the user. That's exactly what app-ification addresses.

I see package managers as a source of some of Linux's problems: package maintainers are an unneeded middleman between the writers of the software and you, and package managers interact with the whole system, scattering files across the filesystem and running scripts as root when installing/uninstalling packages. Applications being files/folders that can be installed/uninstalled with cp/rm seems way more unixy to me.


> I would argue for the opposite: One problem with linux is that you can't run untrusted software without it getting the same access as the user. That's exactly what the appification is.

> I see package managers as a source of some problems of linux: Package maintainers are a unneeded middleman between the writers of the software and you, package managers interact with the whole system, scattering files around the whole filesystem and running scripts as root when installing/uninstalling packages.

The high standards imposed by distros (see for example: https://www.debian.org/doc/debian-policy/) and the tedious work of packagers have been a very important part of the free software ecosystem, and have kept GNU/Linux snoopware-, malware- and virus-free and trustworthy [1]. They are far from an "unneeded middleman".

Here's a very recent example of this system at work: Chromium sneaks a binary blob which secretly listens to your microphone and sends data to Google: https://lwn.net/Articles/648392/

This is just one instance; there have been many, many examples of this.

There are just so many, let me say, shady "apps" out there that I don't install anything from the Google Play Store on my smartphone. On my desktop/laptop, however, I feel safe installing whatever I need from the official repositories. I personally don't want to use a distro to which "app" developers from the Android ecosystem can freely push their programs (and updates to them).

> Applications being files/folders that can be installed/uninstalled with cp/rm seems way more unixy to me.

But that's essentially what a package manager does for you, although it takes away your freedom to screw up in the process. On GNU/Linux, it essentially 1) protects you from accidentally screwing up your system and 2) allows you to trust system-wide binaries (assuming your distro is trustworthy).
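The bookkeeping half of that claim is easy to illustrate with a toy "package database" (hypothetical paths, coreutils only — real package managers add dependency tracking, checksums, and signatures on top):

```shell
cd "$(mktemp -d)"
ROOT=root                                # stand-in for /
mkdir -p "$ROOT/usr/bin" db
printf '#!/bin/sh\necho hi\n' > hi
install -m 755 hi "$ROOT/usr/bin/hi"     # "install": copy the file in...
echo 'usr/bin/hi' > db/hi.manifest       # ...and record where it went
# "uninstall": remove exactly the recorded files, nothing else
while read -r f; do rm "$ROOT/$f"; done < db/hi.manifest
ls "$ROOT/usr/bin"                       # empty: no stray files left behind
```

The manifest is the part cp/rm alone doesn't give you: a record of what belongs to what, so removal is exact.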

App Stores are also package managers, so I'm not sure what you're trying to point out here.

====

[1] I'm excluding Ubuntu here. Ubuntu comes with spyware, spies on you by default and monetizes the privacy and trust of its users [2]. I don't trust it, I don't use it, and I recommend against it.

[2] http://www.gnu.org/philosophy/ubuntu-spyware.en.html


Downside is that you are limited to what software your distro has vetted and provides. With a strong and trusted app sandbox, I can more easily trust less-vetted software, because I can see what it can do before running it.

E.g. if it can only access files outside of its own settings folder after prompting me, I know it has far less ability to screw up my system.
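That prompting model can be caricatured in a few lines of shell (a crude sketch with made-up paths, not a real sandbox — real implementations enforce this with kernel namespaces or seccomp):

```shell
cd "$(mktemp -d)"
mkdir -p home/secret box
echo "tax data"     > home/secret/taxes.txt
echo "grocery list" > home/list.txt
# the user picked this one file in the sandbox's file dialog:
chosen=home/list.txt
cp "$chosen" box/     # only the picked file enters the app's private box
ls box                # prints: list.txt  (taxes.txt stays invisible)
```

The app operates on the box; everything else in the home directory simply isn't there from its point of view.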

I don't think that can replace a package manager for "complex" or "infrastructure" software, but for other things it could open the selection up. Many people already run sandboxed applications, in sandboxes called "Firefox" or "Chrome".


> There are just so many, let me say, shady, "apps" out there, so I don't install anything from Google Play Store on my smartphone

Lately I've found myself checking if something is available on F-droid before installing from Google Play. Sometimes it is!

If something is on F-droid, I'm less worried about it being scammy spyware disguised as useful software.


In some cases the version on F-Droid has been modified to remove the questionable functionality (tracking, ads, etc). So it's possible the google play version has that stuff even though the app is also available on F-Droid.


> Here's a very recent example of this system at work: Chromium sneaks a binary blob which secretly listens to your microphone and sends data to Google: https://lwn.net/Articles/648392/

Downloading a binary blob in the first place was the real issue. However, the voice recognition module wasn't actually used unless you opted into the "Ok Google" hotword feature.

Getting back on topic, Android handles similar problems in a different way, via the permission system (esp. the improvements in Android 6.0+).


> However, the voice recognition module wasn't actually used unless you opted into the "Ok Google" hotword feature.

So says someone, but you don't know for sure what else the binary blob (which can be "updated" at Google's will, without asking the user anything) does, do you? If you're content with that explanation, you're essentially taking their word for it on blind trust. For example, will it push a new binary and/or suddenly start listening to you or send certain files from your computer if some three-letter government agency tells Google to do so? Would you bet your life on it?

Trust in software may not be that serious for you, but there are people out there whose lives depend on it.

The issue here is twofold, and in terms of security, its being a binary blob is the lesser one: 1) this is a binary blob which hasn't been vetted by the eyeballs of the FOSS world, and 2) Google circumvents the package manager (along with package reviewers and the FOSS community) and secretly and freely installs (and updates) a binary blob on your system, which is essentially a closed-source backdoor singularly controlled by a US company.

A program that silently pushes programs on users' systems (and silently executes them!) at the pleasure of a company never had any place in Debian or any distro with similar principles. It wouldn't matter if they pushed the source code and compiled it on your system (in fact, some rootkits work just like that).

> Getting back on topic, Android handles similar problems in a different way, via the permission system (esp. the improvements in Android 6.0+).

You're talking about a totally different problem/class of permissions here though (accessing network/video/audio/filesystem etc vs ability to install packages as a non-root user).


Oh please. If google wanted to record from Chrome users, they could do it easily. If you actually trust packagers to know what Chrome is doing, you're crazy.


I trust the packagers to build software in a way that I can be sure matches the published source package. That's the important difference. Projects bundle blobs into their binaries that aren't in the source code all the time (cf. Chromium as discussed, but Firefox did (does?) this too).

Of course you could say in a large codebase like chromium, bad things can be hidden. But at least there's a better chance such things are found, and it's more risky to put it there in plain sight.

In a perfect world, all software would have reproducible builds and there would be no issue with trusting that binaries contained no hidden functionality not in the source, as there'd be third parties to rebuild and compare the result against the published binaries. We're a ways away from that.


The Android 6 changes are a smokescreen.

You can't use it to deny specific permissions, only groups of them. If an app wants to write something to your contacts, you also have to allow it to read them, even though the core permissions system allows that level of granularity.

Also, network access is deemed safe (by Google) and is thus in a group of permissions the user has no control over.


> I see package managers as a source of some problems of linux: Package maintainers are a unneeded middleman between the writers of the software and you, package managers interact with the whole system, scattering files around the whole filesystem...

It sounds like you'd enjoy Slackware's approach to package management. Zero dependency resolution, packages are built from the source tarball with little or no patching (unless you patch it yourself), and the files are installed where the author intended them to be (though that can be good or bad depending on your perspective). If the application has dependencies, you're left to install them yourself, ensuring that only the absolutely necessary dependencies are installed, and not, for example, all of Gnome when you just want a simple text editor.

> ...and running scripts as root when installing/uninstalling packages.

Given that this happens even when you build from source and install yourself (./configure, make, sudo make install), I'm not sure what you're asking for. Perhaps the Mac OS X method of "the app is a folder with pre-built binaries and all necessary libraries"? And even some OS X apps require elevation to install, especially those distributed as .pkg files.
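As an aside, the sudo in that sequence is only about the destination; installing into a prefix you own needs no root at all (autotools projects support this via ./configure --prefix="$HOME/.local"). A minimal sketch with a stand-in prefix:

```shell
cd "$(mktemp -d)"
PREFIX="$PWD/.local"                       # stand-in for ~/.local
mkdir -p "$PREFIX/bin"
printf '#!/bin/sh\necho hi\n' > hello
install -m 755 hello "$PREFIX/bin/hello"   # no sudo required
"$PREFIX/bin/hello"                        # prints: hi
```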


The parent is advocating for the Slackware approach—just take whatever's in the upstreams and slam it together—but installed into a container (so all that slamming doesn't conflict with anything), and distributed as a container image. This is effectively the 'modern Linux app philosophy', as supported by systemd et al. It's similar in concept to app bundles, but it also assumes automatic sandboxing.

It's great for games in particular, given that you'll be able to boot up a 10-year-old game container-image as if it were new. (In that way, it's like game emulation, but without the performance drawbacks.) Most other "app" software gets lesser-but-still-significant benefits, in much the same way you see on e.g. the Mac App Store with their self-sandboxing apps.

There's little advantage in distributing system software this way, given that system software is usually full of low-level extension-points that other system software plugs into to achieve its purpose. And this is what can make a platform abstraction layer like Android make sense. With a platform, the system software is all composed together by the platform maker into an underlayer, and a single, clean API is specified atop it. Above the platform abstraction, there doesn't need to be any exposed runtime model for system software—since it's all been encapsulated into the black box of the underlayer. The platform only needs to specify a model for developing+running+distributing app software.

Sometimes things straddle the boundary, though. There are numerous cases of software that has "failed out" of the Mac App Store after a while, because the software is fundamentally a system-software/app-software hybrid, and sandboxing the system-software aspects just didn't work.


> The parent is advocating for the Slackware approach—just take whatever's in the upstreams and slam it together—but installed into a container (so all that slamming doesn't conflict with anything), and distributed as a container image.

Have you heard of Nix and NixOS? It doesn't use containers for this, but each package's tree is kept isolated from the others'.

https://nixos.org/nixos/about.html


Nix is probably the optimal package manager, but it still is a package manager, and so still encodes particular ontological choices that get made differently by different distros at different times.

In other words: I don't just want my game from ten years ago to run; I want my game from ten years ago that was built for a RHEL-alike to run on the Debian-alike I've got today (or maybe even using FreeBSD's Linux-binary support.) Nix can't give me that, except by just being the particular tool that ends up putting together the packages that go into a container-image.

Nix can be used to exactly re-instantiate the same set of files ten years later, presuming the original Nix repo you found the software on was still alive ten years later. In that way, it is similar to creating a container-image. But for this use-case, it doesn't give you anything that the container-image doesn't, while the container-image does give you an assurance that a stable (sandbox-based) interaction model was defined at the time of the image's creation, which can still be used today.

Something that intersected the Nix package-management model with the Mac App Store "only self-sandboxed apps will be accepted" model would be strictly equivalent to containers, though.


GNU Stow gives shell users a great way to build contained applications that can be patched into the traditional filesystem through symlinks, avoiding the uncontrollable spew of files everywhere.
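What stow automates can be sketched by hand with plain symlinks (hypothetical package name and paths; stow itself computes and maintains these links for you):

```shell
cd "$(mktemp -d)"
# each app keeps its whole tree under stow/<name>-<version>/
mkdir -p stow/hello-1.0/bin prefix/bin
printf '#!/bin/sh\necho hello\n' > stow/hello-1.0/bin/hello
chmod +x stow/hello-1.0/bin/hello

ln -s ../../stow/hello-1.0/bin/hello prefix/bin/hello   # "stow": patch in
prefix/bin/hello                                        # prints: hello
rm prefix/bin/hello          # "unstow": clean removal, no stray files
```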


Or take a look at GoboLinux, which applies that to the whole distro.


I've never liked the traditional unix way of managing a filesystem either; GoboLinux had the right idea.


Well, for the most part because it uses what unix offers rather than ignoring it.

The basic problem with most package managers is that they ignore the soname mechanism.

https://en.wikipedia.org/wiki/Soname

This allows multiple versions of a lib to live side by side, while giving the binaries the version(s) they want.

Package managers ignore this and instead are hung up on getting that one "true" version everywhere, because their basic design allows only one version per package name. Thus you can't have ABC 1.0 and 1.1 installed at the same time; instead you have to mangle ABC into something like ABC1 to allow 1.1 to install without removing 1.0 in the process.
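The mechanism is easy to see with a toy library — a sketch assuming gcc, binutils (readelf), and coreutils are available:

```shell
cd "$(mktemp -d)"
echo 'int answer(void) { return 42; }' > foo.c

# -Wl,-soname embeds the name "libfoo.so.1" in the library itself
gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo.c

# the symlinks ldconfig/packagers would maintain:
ln -s libfoo.so.1.0.0 libfoo.so.1   # runtime: matches the embedded soname
ln -s libfoo.so.1.0.0 libfoo.so     # build time: what -lfoo resolves to

readelf -d libfoo.so.1.0.0 | grep SONAME   # Library soname: [libfoo.so.1]
```

Binaries linked against this record "libfoo.so.1" as their NEEDED entry, so a future libfoo.so.2.x.y can sit in the same directory without breaking them.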


I wouldn't strictly blame the package managers. Last I checked, it's not straightforward to link against any library version other than what the versionless .so symlink points to (-llibname will always use liblibname.so). The lame workaround is to move a portion of the version into the basename, which is where you get liblibname2.so, so you can explicitly ask for it with -llibname2. You can specify the path of the library file to link against, but this doesn't get you the same dynamic-linking semantics (and you really want to let the linker search its cache for the exact file rather than having to specify the path yourself).

It's actually pretty straightforward to build multiple packages with the same name and different versions that can be installed side by side: it's like a single line in an RPM spec file, as long as the file lists don't conflict. A lot of the required effort is in building the software that uses the library. The library's header files are often not in versioned paths, so the -devel packages for multiple versions have conflicts. It's easier to pass the configure option that installs all the files with a version suffix and just be done with it. It is, unfortunately, not in the interest of the person doing the packaging to significantly diverge from how upstream suggests it be installed in their docs. That is, what should happen is entirely doable, but as part of the larger ecosystem of software and packaging and distributions, it's complicated and has little payoff.


From my experience, as long as -soname is provided to ld, the rest should sort itself out.

At least it seems to work wonderfully for daily usage on Gobolinux.


I'm not sure exactly what you're referring to; the -soname ld option is used when building a library to set the name embedded in that library, which ld.so then uses at runtime to find the appropriate .so file to dynamically link against.

The issue I described is during build time of a program that uses a shared library. Given this list of available libraries:

    lrwxrwxrwx     21 Apr 20  2011 libevent.so.2 -> libevent.so.2.1.3
    -rwxr-xr-x 106656 Aug 20  2010 libevent.so.2.1.3
    lrwxrwxrwx     21 Jun  3  2012 libevent.so.5 -> libevent.so.5.1.3
    -rwxr-xr-x 277760 Jun  3  2012 libevent.so.5.1.3
    lrwxrwxrwx     21 Jun  3  2012 libevent.so -> libevent.so.5.1.3
There is no way to use the -levent option to gcc or ld to specify that you want to link against libevent.so.2. If you just give it -levent, it will always link against libevent.so, which as these symlinks show, is pointing to libevent.so.5.1.3 (which is actually libevent-2.0) — this is documented in the ld man page under the -l option, which describes how "namespec" is resolved. Unfortunately, the APIs are different between 1.4 and 2.0, which is why this is necessary: not everyone who uses libevent has updated their code to use the new libevent API, yet we still want to use that software.

The "fix" is to move some kind of "logical" version number into the filename (although, which is "logical" and which is "physical" becomes ambiguous), like so:

    $ ll libevent[-.]*
    lrwxrwxrwx     21 Apr 20  2011 libevent-1.4.so.2 -> libevent-1.4.so.2.1.3
    -rwxr-xr-x 106656 Aug 20  2010 libevent-1.4.so.2.1.3
    lrwxrwxrwx     21 Jun  3  2012 libevent-2.0.so.5 -> libevent-2.0.so.5.1.3
    -rwxr-xr-x 277760 Jun  3  2012 libevent-2.0.so.5.1.3
    lrwxrwxrwx     21 Jun  3  2012 libevent.so -> libevent-2.0.so.5.1.3
so you can obtain a linkage against 2.1.3 by specifying -levent-1.4. This "encourages" one to use the 2.0 API by using libevent-2.0 if you specify a bare -levent. The -devel package with the header files will usually be for the latest version anyway, complicating the build process on a single machine with both libraries available (you'd have to build against a local/non-system-include-tree copy of 1.4's headers). I know some Red Hat/CentOS packages do do this (which is why I have both libevent-1.4 and libevent-2.0 on my system), but I don't know how those packages specifically do it.

The other way around this is to explicitly link against the full path /usr/lib64/libevent.so.2.1.3 and not use the -l option at all for this library. This works in part because of the -soname option you pointed out. While this works, and produces the same output as if one could use -l to select the correct library file, most of the build tools we have at our disposal prefer to use -l to select libraries at build time rather than full paths (because if you use -l to select a system-installed library, ld does the work of searching its cache and standard paths for you; however, you can get around this with the :filename version of namespec to the -l option). This means that some programs' build processes would be different on systems that have only the old, only the new, or both libraries installed, and a bunch of programs' build processes might need to change once the new version comes out.

The problem here is that -l only lets you specify the name, not the version to link against, under the (arguably well-founded) assumption that old versions should not be used for new builds but you need to keep them around for already-built programs to link against (dynamically, at run time).


> The basic problem with most package managers is that they ignore the soname mechanism. ... This allows multiple versions of a lib to live side by side, while giving the binaries the version(s) they want.

I see you've qualified your statement with "most", so you're not in the wrong, but unless I'm mistaken both Nix[1] (and NixOS) and Guix[2] handle this problem well and do it in a way which scales infinitely.

[1] https://nixos.org/nix/

[2] https://www.gnu.org/software/guix/


> The basic problem with most package managers is that they ignore the soname mechanism.

Well, there are a million hobby-project package managers, so I'm sure many of them fall into that category. But the big ones, the ones used by the vast majority of Linux users, don't have a problem with this (deb- and rpm-based distros commonly version their libraries and install multiple versions side by side). This is a pain in the ass for the package maintainers, as a lot of software developers don't think about versioning their API, but it's great for users.


Which package manager are you thinking of? The dpkg ecosystem (i.e. Debian and Ubuntu etc.) definitely uses sonames. It is true that the soname version ends up in the package name, but this happens automatically, at build time, and it's only the binary package name that carries the soname, the source package remains unchanged.


So do Red Hat & spinoffs.

You can install[*] packages with libxyz.so.1 and libxyz.so.2; you cannot install two -devel packages, because that package contains a libxyz.so symlink to a specific version.

Often, the system even installs such packages for you (they have "compat" in the package name, e.g. compat-libstdc++).

[*] Provided they don't contain additional files with assets that conflict with each other. But this problem is not solved by Gobo either.


Dependency management and isolation are two different things. With Nix I can get either or both (with isolation in the form of lightweight containers).


xdg-app is solving those problems, although I don't know if it will be adopted, especially by KDE.


I would sorely like both. The system-level stuff should be package-managed to create a coherent base system. For individual apps I would really like to use the latest version of e.g. LibreOffice without having to leave the comparative stability of an LTS distribution or add a PPA. I would also like to run something like Skype or Dropbox in a sandbox.


I think many distributions* are evolving toward even better compartmentalization with containers / VMs; I just wish we'd work more toward that and indeed stop having to package apps for every distribution...

* QubesOS, SubgraphOS, NixOS(?), etc.


Can't we say the Google Play Store for Android is like a package-management system?


su "limited account" -c "questionable binary"?


A large part of the value of app sandboxes is easy but controlled ways to allow exceptions. E.g.: an app can ask the sandbox for file access, the sandbox then prompts the user to select a file, and only that file is then exposed to the app.

Or even granting global access to functionality from a manifest file, without having to set up a restricted user/environment manually. (I wouldn't know, without looking it up, how to set up a Linux user account that can't talk to the network. Or even better, can only talk to some part of the network.)
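For the record, on modern Linux the no-network case doesn't even need a special account — util-linux's unshare can drop a single command into an empty network namespace, assuming unprivileged user namespaces are enabled on the system:

```shell
# -n gives the command its own empty network namespace; -r maps the
# caller to root inside that namespace so no real privileges are needed
unshare -r -n sh -c 'cat /proc/net/dev'
# only the loopback interface "lo" is listed, and nothing is reachable
```

The "only some of the network" case is where it gets genuinely fiddly (veth pairs, firewall rules), which is rather the parent's point.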


Reminds me of a friend of mine who, when provided with a manual for how to do something, said "that's too complicated, please do it for me".

Meaning that "containers" just become a wrapper for things that can already be done, if one just learns to do it rather than pointing and drooling (as someone once described using a GUI).


Who needs restricted user permissions, you can just review the machine code of all your executables before running them ;)


har har har...


Doing it for the humans is exactly the raison d'etre of software.


Yes and no. The problem is that the magic word "container" obscures what is actually being done (cgroups, namespaces, iptables, etc.).

It's one more thing that results in confusion between user and computer about the state of the machine.


That's a bit extreme; if we follow your logic, why even bother with an OS?!


I prefer to encode my bits on the disk with a needle and magnet, thank you very much...


Android has been friendly toward removing the mothership since day one. Anyone can deliver APKs using either a homegrown app store (see Amazon's App Store app) or by just letting the user download APKs from the browser.

The problem is that most app developers currently -only- distribute their app via Google Play, as it's the primary distribution channel that devices tend to come with. This isn't a complete lock-in, since you can still get those installed APKs from a device where they've been installed and archive them or reinstall them on other devices. (Refer to ApkMirror as an ethically dubious prime example of this)

Still, this is nothing to hold against Android itself.


There's always fdroid!

https://f-droid.org/


Are there any other good sources for open source Android apps?

I love F-Droid, but they plan to drop Firefox, and Chromium will never make it on there.


Do you know why they're planning to drop Firefox? And why would Chromium never make it on there? I don't know much about F-Droid and its limitations.

EDIT: Okay I did the Google search that I should've just done in the first place. Apparently they're not dropping Firefox fully, but rather forking it to "remove the proprietary binaries out of the official builds" (Google Play API, etc). I suppose APIs like that are important enough to your use-case that you need an alternative source for Firefox?


F-Droid wants to drop Firefox because Google Play is now required to build it (though not to run it). Same reason why Chromium isn't being considered: https://f-droid.org/forums/topic/chromium/page/2/#post-16388

F-Droid has (temporarily?) taken down their Fennec fork, seemingly due to difficulties with development: https://f-droid.org/forums/topic/please-do-not-drop-firefox-...

This is presumably why Firefox's removal has been continuously postponed. Reliability is one of the reasons why I would prefer first-party Firefox over a third-party fork.

Though there are good reasons why F-Droid doesn't include Chromium and seeks to remove Firefox, the absence of many open source Android apps from F-Droid's catalog has a direct impact on utility. It's also not uncommon for apps on F-Droid to lag behind the version on Google Play or the latest release direct from the developer. (Again, there are good reasons why they screen apps, but it does impose a delay.)

I appreciate F-Droid's screening process, and I wouldn't want it to go away, but it'd be nice to have an alternative open source Android app catalog that's a little more liberal.


Gotcha, so very roughly speaking the same reason that people sometimes leave Debian for Ubuntu, because they hit the balance between ideological purity and convenience a bit better.


> -only- distribute their app via Google Play

And even if after much begging you get a .apk from them, it often still won't work on a Google-free Android because it uses the damn Google API that requires the Google binaries that you don't want to soil your system with.


I cannot help but wonder if "apps" aren't going to eventually replace package managers. That's certainly what Docker-like containers try to do.

It might sound crazy when you consider some of the services typically installed with a package manager, but ultimately there's no reason why an "app" cannot run as root. Many root packages aren't actually installing drivers; they're only root for historical reasons (e.g. being able to read/write to /etc).

Why, for example, couldn't a web server be an app? Or a DNS client? Or a DNS server? Sure, there are certain system services which cannot (anything installing drivers/filter drivers/core UI/sound subsystem/graphics subsystem) but MOST things can.


I'm certain containers will end up replacing package managers. This idea is being pushed by the systemd guys.

I think it's a step backwards, as you lose tight control over dependencies. E.g., in case of a security issue, one cannot easily patch all containers if they are black boxes.

IMHO, the right tool for this is an improved package manager such as Nix or Guix (which incidentally also support containers in their own way). Perhaps the future should be a Nix-like package manager for dependency management, and container-ization for security and resource limits.


Well, at least xdg-app has the concept of "runtimes" shared among applications. If a lib/bin in a runtime has a security issue, the whole runtime can be updated, transparently to the apps running over it. A runtime might be FreeDesktop-1 or Gnome-3.14, for example. Let's say a 0day is discovered and patched in GTK 3.14: a new version of the Gnome-3.14 runtime is issued and downloaded by the clients, and magically (with the help of overlayfs and co.) all the apps depending on that specific runtime have a secure GTK.


That seems like yet another case of freedesktop/gnome/fedora finding another hammer and applying it to all the "nails".

Containers fix a problem that unix has solved for decades.

https://en.wikipedia.org/wiki/Soname

The problem is the package managers used by most distros getting hung up on there only being a single version number of each package name installed at any one time.

Sort that out, like say how Gobolinux does it, and you do not need containers.
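That layout can be sketched in a few lines (a simplified, hypothetical version; real GoboLinux adds compatibility symlinks on top of the /Programs tree):

```shell
cd "$(mktemp -d)"
# versions are just sibling directories, so they never conflict
mkdir -p Programs/Foo/1.0/bin Programs/Foo/1.1/bin
printf '#!/bin/sh\necho 1.0\n' > Programs/Foo/1.0/bin/foo
printf '#!/bin/sh\necho 1.1\n' > Programs/Foo/1.1/bin/foo
chmod +x Programs/Foo/*/bin/foo

ln -s 1.1 Programs/Foo/Current     # the default is one switchable symlink
Programs/Foo/Current/bin/foo       # prints: 1.1
Programs/Foo/1.0/bin/foo           # old version still runs: prints 1.0
```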

But containers are all the rage on the web these days, so ipso facto they must also be all the rage for desktop Linux.

Back before Gnome and Freedesktop, desktop Linux was a kernel-up project. More and more these days, however, it is web-down.

Meaning that web people get interested in using the L in LAMP on their desktops, start looking into Gnome/KDE, then get involved in Freedesktop plumbing, all the while bringing their web-isms ("move fast and break stuff" being the most annoying) with them further down the stack.


Unix has solved it for decades? Not really, if we look at the state of adoption of this solution.

We supposedly discovered concrete 2 millennia ago but only adopted it widely 150 years ago.

I wouldn't diss people who actually have a shot at fixing the problem, even though you might not like their solution.


Is there a problem with adoption of SONAMEs?

I use debian all the time, and the major libraries are versioned, and some have more than one version available. Debian is a very common linux distro, and through it, a lot of child distros get it too. AFAIK, RedHat and Fedora do this too.

So, do you mean that the state of adoption is bad for niche linux distros?


Those that do not learn from history are destined to repeat it...


How does `soname` solve the problems of privilege separation, like letting the user forbid certain apps from making network connections?


And the goalposts go bye-bye...


If we're in the app world with dependencies running as background services packaged in apps, you still keep all the control.


I mean, I guess, but they tend to all be built out of packages assembled by a traditional package manager. Corralling and testing a bunch of things from a bunch of different sources and making sure they work together is hard. Plus, C library wrangling is becoming a bit of a lost art.


I'd say that's what Bitnami already is: an app store for servers. https://bitnami.com/

I've never actually used it, though, since Debian packages work fine for me.


Why do you think having different APIs (Qt, GTK mainly) had much to do with the fact that a lot of people don't use Linux on the desktop?

If you're talking about binary compatibility, that's an issue for closed-source shops, and even then it can be worked around, with static linking for instance (there are many closed-source programs for Linux, such as thousands of Steam games). Distros will never put binary blobs in their repositories, but companies still can (and do) publish .deb/.rpms (and third-party repos in some cases) for their programs.

Binary compatibility is an issue nevertheless, of course. But it is an issue that FOSS distros consciously don't care about, mostly because binary blobs taint a FOSS environment and stand against the underlying philosophy of FOSS.

It's not because Win32 is a superior API that games and big software titles are Windows-only. For decades, people were force-fed Windows when they bought a computer, and Microsoft became a monopoly, much of it thanks to the deal it cut with IBM. It wasn't as if people chose Windows. And Microsoft kept many big companies from making Mac/Linux ports with behind-closed-doors deals.

And Android didn't end up dominating the desktop either. Smartphone-land is a totally different and distinct user base. And the fact that it dominated over iPhone doesn't have anything to do with the API either.


Yes it does.

For application developers there is a consistent set of APIs that are expected to exist in each version of Windows, Mac OS X, Android, iOS, Windows Phone...

On GNU/Linux one needs to bundle the libraries with the application, preferably statically linked. But no one is going to do this with GNOME or KDE applications, for example.

Then there are the deviations each GNU/Linux distribution does from UNIX daemon management and paths.

So application developers are forced to choose a set of gold distributions and leave everyone else to figure out how the application might compile and run on their own systems.


I thought we were talking about users here, not developers.

Steam, Mathematica, Mendeley Desktop, etc. have long proven that a stable GUI toolkit API doesn't have to be an issue for users.

> For application developers there is a consistent set of APIs that are expected to exist in each version of Windows, Mac OS X, Android, iOS, Windows Phone...

This problem you're referring to isn't something inherent in or specific to Linux.

On Windows, the very same problem exists and is known as DLL hell. Android is another Linux distribution, and you're talking about Java programs running on top of a VM on top of it. Java programs work just fine on Linux too.

I believe you're talking about Cocoa and the Win32 API.

On Unix, there is POSIX and X11 which go way back. And there are many GUI toolkits (including but not limited to Qt and wxWindows) that allow you to statically ship your program, with the added bonus of being cross-platform.

API isn't an issue that can't be solved for developers either.

> On GNU/Linux one needs to bundle the libraries with the application, preferably static linked. But no one is going to do with with GNOME or KDE applications, for example.

Yes, they can and do ship statically linked binaries. And while they don't generally use Qt or GTK, both of those promise binary compatibility within a major version anyway.

They don't need to make it a GNOME or KDE app to run it under X11.

> Then there are the deviations each GNU/Linux distribution does from UNIX daemon management and paths.

Can you be more specific about the problem you're referring to? Are you talking about a particular closed-source daemon program that uses something other than /etc/init.d or systemd (which has sysv init compatibility layer)?
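(For context, the daemon-management deviations have narrowed a lot since systemd spread; on most current distros, shipping a daemon mostly means dropping in a small unit file. A minimal sketch, with hypothetical names and paths:)

```ini
# /etc/systemd/system/exampled.service  (hypothetical daemon)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/exampled --no-fork
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

On sysv-only distros you'd still need an init script, but systemd's sysv compatibility layer covers the reverse case.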


"DLL hell" doesn't have anything to do with the built-in GUI API calls. The part of the API that pertains to the GUI is built-in, standard, and has been since the 90's. I can take an application written today and run it on Windows XP-10 and it will work as long as I take care to not use any API calls that are specific to later versions of Windows (very few for most apps).

If you want to criticize the GUI aspect of Windows, you should target the lack of any standard modern GUI API calls that don't involve using Direct2D or GDI+ (by modern, I mean full support for transparency and anti-aliasing, among other things). The old API, plain GDI, was standard and well-understood, whereas the newer improvements are completely different types of layers built on top of GDI. That's the real mess.


Comparing X11 to Win32 is just laughable. X11 is ANCIENT. It also provides no widgets or anything other than windows and drawing primitives (which are ancient). You can't do anti-aliased fonts, for example; you have to render them with freetype2 into a pixmap buffer and put that in your window. Practically every GUI toolkit does it that way: allocate a window, do the drawing yourself, present the final pixmap.

Win32 is more like Qt and if we could just fucking standardise Qt as THE API to target, everyone would be much happier. But of course then the Gnome devs won't be happy. The Linux desktop would be in a much better place if Gnome was dead.


Win32 is also ancient. It is the model of programming with handles and window message loops that was inspired by classic Mac OS, and that OS X abandoned 15 years ago.

To consider your example with text: on Windows, you also have to use GDI+ or Uniscribe to get at least passable text (without shaping, that is). Or use DirectWrite, which is about as old as Pango.

Even OSX went through changes, AAT/OpenType wasn't available when QuickDraw GX was all the rage.


Or you could just not slap a fucking GUI on everything since most Linux users probably don't care.


Every year I go look where Maemo/Meego/Mer/Tizen (whatever it's been rebranded as) got up to. The N900 remains the greatest phone I ever owned.

I had high hopes for Jolla but I doubt they're really out of their financial troubles - they owe me a very expensive tablet, currently. By the time it's delivered it will be obsolete.


It's a shame they didn't open-source it; IMHO that's a big reason they failed to attract a share of Linux enthusiasts.

They should have worked out a viable business model with an open Sailfish.


It certainly didn't help that their phones aren't authorized to be sold in the US. That's probably a very big segment of the pro-Linux power-user demographic...


I'd love to revive my N900. Do you have any recommendations about which OS to install on it these days?


Swordfish is great on a phone. The picture for Jolla looks different if you are not in the US speaking from an Android/iPhone centric point of view. There is really nobody else taking Linux on mobile this seriously.


...

It's Sailfish.


Most Linux distros have a mothership in the form of the package repositories and the signing keys thereof. Doing without would require a more decentralized system that would be harder to engineer, unless you want to go back to the days when all software updates were piecemeal and manual.

I would actually say that Linux distributions pioneered the app store concept via "yum install" and "apt-get" and such. Linux is certainly the first OS where I can remember seeing a centralized app repository you could pull from with a single command.


Oh, I meant no deep embedding of Google services, like in Cyanogenmod.


But no information is sent out to Debian without my opt-in consent, in sharp contrast to Google, Microsoft, Apple and Ubuntu.


Please kill package management. Android brilliantly bundled reusability within the app concept itself via Intents. It's safer, centrally managed, and way easier for the user.

On a new Android install, your favorite software is set up in minutes, compared to traditional desktop OSes and their awful combination of command line and GUI crap.


It has created a nice software ecosystem by being a good target for developers because of its (massive) user base. It has a massive user base because Google works hard to make it a good, user-friendly consumer product and they work hard to get manufacturers to get (many models of) handsets to market. It helps that the average person needs and can afford a phone, most of which are now smartphones. Linux never made the effort required to be a serious consumer grade product (well, Ubuntu gave it a good go).

This is not so much a technical issue. Sure, the technicals need to be there, but when the average consumer goes looking for a computer, they don't start with the question, "How stable are the apis for this OS?" They do not know what an api is. And developers will not develop just because of stable apis. The Windows phone has not been a resounding success, but if ever there were a product that I, as a developer, would target due to expectations of stable apis, that would be it.

Appification is why my dad uses the iPhone. He can do cool things by hitting squares with his finger, and then get on with the rest of his day, which will likely include a lot of mentally draining tasks. It's the same thing that drew folks to Apple's first graphical operating system. The user friendliness. The approachability. Not the cutting-edge object-oriented software behind it. But the mouse and a picture of a garbage can.

In that context, it's not foreign at all. I mean, the first time I saw the command line, that was foreign. But now it's not. But hitting a picture of an envelope to send mail, not so foreign.


Would be nice to be able to run Android apps in normal Linux distros


you kinda can, via chrome android runtime.


emphasis on the word "kinda"


I did a quick Google but Zathura is a popular name. Do you mean Zathura the document viewer? Just asking because I've been looking for one.


Yes, zathura the document viewer. It's a very minimal GUI with vim-like keybindings, and it supports several backends (PDF, DjVu, PS, ...).


Like Ubuntu Snappy Core?


Isn't RemixOS required to release their source code?

It's based on open-source Android (including GNU GPL parts), yet they don't publish the code. They shrug off such questions in their official forum.


Android is mostly Apache licensed. Meaning OEMs don't have to publish their changes.

http://source.android.com/source/licenses.html


A huge number of Android OEMs do this as well, or release late. Let's not jump the gun on a company doing amazing things; give them some time to do it.


I think it's unlikely that they intend to release the source, as they talk about "Partnering for free!" for "licensing", so they clearly want to retain some pretty tight reins on the code. "Profit sharing" is a term they throw around a bit too.

Count me out.



