Linus Torvalds responds to what killed the Linux desktop (plus.google.com)
183 points by netsmashers on Sept 3, 2012 | 139 comments



What users actually depend on Linux's iron-clad promise to never break binary compatibility? It's curious to me that this issue is considered so sacred and yet I rarely hear the rationale or actual user stories of people who want to run 20-year-old userland on a new kernel.

In particular, note that the promise of kernel binary compatibility does not guarantee that an old binary will run on a modern Linux distro unless the binary is statically linked. Most user-space libraries bump their major version number every so often, so it's unlikely that the required .so's for a very old binary will be present on a new system.
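
You can see this directly with ldd: run on an old dynamically linked binary, it reports which sonames the loader can no longer resolve. (The binary name here is made up; libstdc++.so.5 is a real example of a soname that modern distros stopped shipping.)

  $ ldd ./some-old-binary
          libstdc++.so.5 => not found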


You raise a good point. Why are most open source programs dynamically linked again?

Static linking is one of the nice things about the OSX ecosystem. If only Apple wouldn't unnecessarily break their runtime environment every few releases.


> Why are most open source programs dynamically linked again?

Probably because dynamically-linked binaries are smaller, use less memory (by avoiding duplication), and can get fixes or security updates without rebuilding or re-deploying. When you're a distro it hardly makes sense to ship a copy of libc inside every single binary. Any fix/update to libc would require re-downloading basically the whole system!
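
The trade-off is easy to see with a trivial C program (sizes are rough and toolchain-dependent; this assumes glibc):

  gcc -o hello_dyn hello.c              # dynamically linked: a few KB
  gcc -static -o hello_static hello.c   # carries its own libc: hundreds of KB
  ldd hello_dyn                         # shows libc.so.6 resolved at load time
  ldd hello_static                      # prints "not a dynamic executable"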


Maybe the problem is saying all programs have to behave the same. It would indeed be irritating to have every Unix tool link to its own libraries, but most well-known tools are small programs that don't get constant feature updates. It's a whole different thing for typical desktop applications. Those are updated far more often than the distros, and no two distros out there ship the same library versions an app might need to run. Still, distros don't like it when such applications use separate library versions... and that just doesn't work well except maybe for the handful of high-profile applications that have enough maintainers. I mean, even keeping an up-to-date Iceweasel on Debian caused me trouble with library dependencies just a few months ago - and that is probably the most prominent desktop application in the free software world.

It's getting even stranger in the world of free games. Games often need library fixes (a very typical situation, as games and engines are tightly interwoven) that are not yet in a distro - and won't be for some time, because the library hasn't been officially updated yet, or maybe a certain patch simply won't be included. On a system like Windows it's no problem: modify the library source and drop in the dll. On Linux distros... well, it's just not easily possible. Which means, funnily enough, that it's easier to modify library sources in the proprietary Microsoft world than in the free software world. We got freedom that comes with library + distro gatekeepers, so to say... which pretty much sucks (even for the library authors).


"I tend to think the drawbacks of dynamic linking outweigh the advantages for many (most?) applications." -- John Carmack

"Dynamic linking considered harmful": http://harmful.cat-v.org/software/dynamic-linking/ http://harmful.cat-v.org/software/dynamic-linking/versioned-...

OTOH, arguments of the other side, for proper polemics: "Static linking considered harmful": http://www.akkadia.org/drepper/no_static_linking.html


Bandwidth is cheap. Memory and disk space are cheap.

Personally, I think the days when it mattered are really coming to an end.

The amount of space (memory/disk/bandwidth) taken up by binary code is minuscule compared with data/video/streaming/games/etc etc


Not on every system. Not even on most systems. Embedded, mobile etc still benefit from low distro size.


Cache memory is still not cheap.


OS X apps are not generally statically linked. They are dynamically linked. But Apple almost never breaks binary compatibility of the public APIs.


>But Apple almost never breaks binary compatibility of the public APIs.

Technically no, not binary compatibility. But practical compatibility? Woo boy do they ever.


Strong versioning is key. Only allow links to be redirected to bug fixes, never feature enhancements. If you wrote your code for 1.0, you'll continue to link against 1.0, barring some bug fix (1.0.1). Your code should never be silently upgraded to link against 1.1. Microsoft also gets this right with the global assembly cache.
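
On Linux, the ELF soname convention encodes much the same rule; a minimal sketch with a hypothetical libfoo:

  # bug-fix releases keep the same soname, so existing binaries keep working:
  gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.1 foo.c
  ln -sf libfoo.so.1.0.1 libfoo.so.1    # what the dynamic linker resolves at run time
  ln -sf libfoo.so.1 libfoo.so          # what -lfoo resolves at build time
  # an incompatible release ships as libfoo.so.2 and coexists; nothing is silently upgraded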

As an aside, DLL Hell was coined by Szyperski, who still works for Microsoft.


Well they are dynamically linked but usually only to Apple provided libraries as far as I know. It's not like every app installs a dozen new dlls to your system and that's what makes the big difference. It's what makes Mac apps (at least traditionally) standalone and executable from anywhere.

I fear, however, that Apple is currently destroying this design philosophy with the sandboxing. At least the application data folders have become way more complex now and I wouldn't like to troubleshoot them anymore - something that has always been super easy.


Technically, outside of Apple frameworks, they're typically dynamically linked to frameworks located inside the .app folder. This is by no means mandatory though.


> If only Apple wouldn't unnecessarily break their runtime environment every few releases.

What runtime-breaking changes are you thinking of? Apple's been very good about not breaking binary compatibility -- of the runtime, or of their frameworks -- between OS releases.

Sure, there are some differences in 64-bit systems from 32-bit systems. But, if you take a 32-bit Mac app from five years ago, and run it, it will still run just as well today as it did when it was first compiled.


Shipping security fixes in core libraries is easier this way.


>Why are most open source programs dynamically linked again?

To save on the bloat of shipping multiple redundant libraries with every app, and to guard against the perceived 'DLL hell' in Windows (which was fixed a decade ago). Ironically, that ended up as 'Dependency hell', with RPMs and Apt packages requiring specific versions of libraries.

Read through this excellent discussion thread if you're really interested in the problems with linking and the lack of a Common Object Model in Linux.

http://linux.slashdot.org/comments.pl?sid=3079259&cid=41...


This is incomprehensible. "DLL Hell" is what happens when you use dynamically linked libraries without managing versions and compatibility correctly. DLL Hell can only happen when you are doing dynamic linking, so I'm not sure what you can possibly mean by saying dynamic linking "guards against perceived 'DLL hell'".


I meant the DLL hell that is made fun of in Windows and seen as a bad thing. It was fixed in Windows by 'Side-by-side' assemblies (SxS). http://en.wikipedia.org/wiki/Side-by-side_assembly

The user does not see any problems except hard disk use (or perhaps a little slowness?) or when the filesystem breaks.


Oh boy, SxS. So now you have separate DLLs for every application.

How about just linking it in statically and make the applications standalone?


Sigh, sneering and snark and also downvotes on my GP comment. That's HN I guess. Must.. resist.. temptation.. to be snarky myself.

Anyway back to the point...

>Oh boy, SxS. So now you have separate DLLs for every application.

No, only if the application uses a different version that is explicitly marked as NOT compatible. If the version used is the same, you do not have separate DLLs for every application.

>How about just linking it in statically and make the applications standalone?

That will needlessly bloat up the application.

Assume 10 applications need library X version 2.3 and one application needs 2.2. With SxS, you will have one 2.3 DLL and one 2.2 DLL. If you link it statically, the same code will be duplicated in 10 EXEs. Multiply this by all applications and all DLLs used across applications. Not to mention waiting for 10 apps to update to fix a security bug.

You think that's good?


I can see the benefits in terms of security but not in terms of space. So what if the application folder uses half a gig more, if I never have to use installers for most applications.

And as far as your example goes: from what I understand, most Windows developers just pin a fixed library version number to specify which dll to use. When that happens, security updates won't do any good either. IMO, as long as APIs can change between library versions, developers will always be responsible for upgrading to the newest libraries themselves. It's a nice idea, but it just doesn't reflect reality in the world of business applications, where incompatibilities directly result in monetary losses.


"So what if the application folder uses half a gig more, if I never have to use installers for most applications."

Sure, but a gig more for RAM would make a lot of difference. http://en.wikipedia.org/wiki/Dynamic-link_library#Memory_man...
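
The sharing is visible on any Linux box: every process that maps the same shared library reuses the same read-only code pages, so the memory cost is paid once system-wide.

  grep libc /proc/self/maps    # the r-xp (code) mapping of libc.so.6 is shared across processes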


It looks like you don't know how manifests and SxS work. You can mark the DLL compatible while bumping the version number for security upgrades.

The comments in this thread (from my OP) are a good start if you want to learn.

http://linux.slashdot.org/comments.pl?sid=3079259&cid=41...


I'd take DLL hell over SxS hell any day of the week.


This rant makes little sense to me. For one, linking and COM are mostly orthogonal issues, and ABI compatibility in Windows is not directly tied to COM.

I think the lack of ABI compatibility in Linux has more to do with incentives and economics: innovating while keeping the ABI is very difficult and resource-consuming. I don't understand why Miguel would cite Apple, as they are pretty lousy in that department, whereas MS famously spends tons of resources on that issue.


Commercial vendors do. It is surprisingly hard to maintain proprietary software for Linux, as different versions ship with different shared libraries and have different ways of doing things. Even maintaining systems that need to run on RHEL 4 through 6 takes _a lot_ of effort. And when you're creating commercial software, effort means time and money.


The people who write and deploy software to limited markets appreciate it. Have you seen how many Windows 2000 deployments are still out there? It's the same with BSD and Linux - there are a lot of 10+ year old systems out there doing useful work. Functional binaries save hundreds of hours of work to maintain the software.


I don't think your argument applies: you are talking about running new binaries on an old OS/kernel, but this is not guaranteed to work. New system calls and kernel interfaces are added all the time, so new binaries are unlikely to work on old kernels. The promise is in the other direction; old binaries are guaranteed to work on new kernels.


At my office, we actually still have some Windows 3.1 servers in production alongside some Windows 2000. Replacing the software running on it has proven to be a bit more of a trick than previously thought.


Anyone who relies on (for example) Oracle Database on Linux. Not so much for the compatibility itself (Oracle do maintain their software), but because it builds trust between the Linux kernel team and the software vendors who are shipping stuff on top.

If it wasn't for this promise, I think it's much less likely that people happily developing for Unix would have moved over to Linux.


I suspect Windows end-users don't care so much either, but the enterprise world does. The number of Fortune 500 companies that still use Windows XP because they can't upgrade is most likely significant. People also often underestimate incompetence and things like "we lost the source of this software" or "nobody knows how to upgrade this codebase at an affordable price".

This is the bread and butter of companies like RH (many companies are still on RHEL 5, and most of those have some legacy RHEL3 systems in my experience).


There are definitely occasional stories about people running old binary apps.

But I think the real reason for this policy is its simplicity. I don't know if Linux developers are disciplined enough to follow Solaris-like deprecation schedules. When you allow exceptions, there is a tendency to allow more and more of them.


Has the Linux desktop failed? Really? Correct me if I am wrong, but we are talking about a concept which has spawned thousands of distros still used on millions of computers.

And the main argument is rotten as heck: backward compatibility out of the box? On Mac? Please! You have to install Rosetta for that; without it you have no chance of running Escape Velocity on your shiny MacBook Pro. Windows 7 makes you feel like it can do the backward-compatibility jig, until you meet old software that now runs ultra-hyper-fast. And no, I am not talking about some console application, but the second incarnation of Saints Row.

I mean, I would have understood if he had simply said "I have started using Mac OS insert-cat-name-here and liked it" - nobody would have any problem with that. I really can't understand the need to declare previously used software dead, bad or other derogatory terms. You use Mac, I use Linux, and it's still alive and kicking, considering I got security updates this morning.


>Has the Linux desktop failed? Really? Correct me if I am wrong, but we are talking about a concept which has spawned thousands of distros still used on millions of computers.

Desktop Linux has utterly failed, yes, clearly. OSX grew to over 10%; desktop Linux remains statistical line noise. That doesn't mean that literally no one on the planet uses it, just that a very small number of people do.


I personally felt that the "What killed the Linux Desktop" article was completely irrelevant.

It seems there's always going to be some people who complain that GNU/Linux isn't 'good enough' or whatever, and as a community we should simply ignore these people.


I don't really get the problem.

The desktop never really needed Linux, and Linux never really needed the desktop.

People seemed to like Ubuntu some years ago though.


>The desktop never really needed Linux

That depends on how far Microsoft and Apple go with the tabletization of Windows and OS X in the next few years.

I've been a mac user for years, but after playing with mountain lion, I could see the writing on the wall. It was time for a new laptop, so I went with a thinkpad with Arch Linux.

I don't think that Microsoft will ever manage to completely force the app store on everyone--solely because of enterprise customers--unless maybe they force you to use it on home/basic and make it optional on business/enterprise. However, I do think that's what they and Apple eventually want. I also think that's probably what the average consumer wants.

We need Linux as a third (and 4th, 5th...) option for all of us edge cases that don't fit the standard consumer mold.


As an aside, which ThinkPad did you go with? I bought a MacBook Air thinking I could easily replace OS X with Linux but almost bricked the thing due to EFI issues (rEFIt wasn't any help and actually caused the problem). I'm thinking of going with a ThinkPad next time around but not sure which one, as I've never had one before. The X1 Carbon looked promising but battery life is a bit disappointing.


I went with a T430, mainly because it's built like a tank, and it's cheap enough that I'm not afraid to haul it around everywhere. At first glance it looks like an older computer, so I'm not too worried about someone stealing it either.

I love the trackpoint (don't have to take my hands off the home keys to use the touchpad). I also love the docking bay, with my mac I had to unplug the power, speakers and usb, now I just push a button and it pops off the dock.


I had the same experience with rEFIt. Just unnecessary. What you can do instead is install Linux normally with GRUB, and then access it through the Windows boot function built into the Mac. Hold Command during boot and choose Windows; that takes you to GRUB.


It's funny you switched to Arch Linux, because Arch Linux is exactly the worst example of the broken userland that Linus is talking about in the post. Arch Linux values "bleeding edge" over all things, including basic usability, and they willfully break compatibility with all other distros even when there is no conceivable benefit.

The best example of this is the Arch mishandling of Python 3. Remember when they decided that "python" was now Python 3? If you are a Python dev, then alarm bells should already be ringing in your head. This is a BAD decision with no excuses.

Python is wonderful because it is very portable. I can write Python scripts on my Mac and run them on Linux or even Windows without significant changes. Python 2 and Python 3 are incompatible, and I usually target Python 2 for anything I want to be portable, since Python 2 ships default on OS X and on lots of Linux distros. This is reality.

My Python 2 scripts begin with a shebang, "#!/usr/bin/env python" which launches the Python 2 executable, wherever it is installed, on OS X or Linux. Unless you are running Arch Linux, in which case you need to go in and manually change the script to point to "python2" instead of "python". Except "python2" is essentially unique to Arch, it doesn't exist on Debian or Fedora or OS X or whatever. So no portable script is EVER going to point to "python2".

But there's no real benefit to making "python" into Python 3. Anyone targeting Python 3 will write "python3", which works wherever Python 3 is installed, including Arch Linux. This is the decision the upstream made, and it's a good decision. Anyone targeting Python 2 writes "python", which works everywhere except Arch, and on Arch you have to edit your scripts. Who wants to keep a separate copy of your scripts for Arch? What if they're not your scripts? Do you write a shell script that runs "sed" to fix them? What if you keep your Python scripts in Git or SVN, and don't want to change them every damn time you check out a fresh copy on Arch?

Worse yet, lots of Python devs want to install and test with specific versions of Python -- say, the Python version that their servers run. Arch sabotages this, because as soon as you put "python" in your $PATH, a ton of Arch programs break if they target Python 3. And they wouldn't have broken if they just said "python3" in the shebang to begin with.

The result is that Arch has to maintain a fork of EVERY Python script in their repository. For what? No real reason. The only conceivable benefit of this terrible change is that a user can type "python" in the terminal and get Python 3 instead of Python 2. I've been doing this for years with an alias in my .bashrc, and it doesn't break every Python package in the repository.
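
For reference, that alias is a one-liner, and it only affects interactive shells; shebang lines still resolve whatever /usr/bin/python actually points to:

  # in ~/.bashrc
  alias python=python3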

So in short, the Arch Linux developers promise a new world in which not only is binary compatibility impossible, but source compatibility for SCRIPTING LANGUAGES is impossible as well. They have taken a bad idea and stuck to it, dismissing anyone who disagrees with them by suggesting that they use "sed" to fix the user's "broken" scripts, in such a way that it would break compatibility with every distribution besides Arch.

And I have further rants about Arch with respect to (1) their dismissal of bug reports of serious security vulnerabilities in default configurations (2) the dismal performance and general usability of their package manager (3) the inability of their maintainers to create a correct package for something as simple as a font (4) the terrible average quality of their wiki, which often gives advice that simply doesn't work for well-documented and easy-to-discover reasons.

Gawd, use anything else. Gentoo, even.

Okay, I want to rant more.

(1) Security vulnerabilities are dismissed with the kind of reasoning like "users who run this package know what they are doing, and don't run the application on untrusted networks." Considering any (non-virtual) network secure is almost certainly a sign of incompetence in your sysop.

(3) Install "terminus-font", it won't work. You have to manually "xset fp" the path to it (see the sketch after this list).

(4) See #3, and then look up the Wiki advice for it. It just doesn't work if you run e.g. GDM, which doesn't run ~/.xinitrc.
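
(For #3, the manual fix is roughly the following; the directory is a guess at where the package drops its PCF files, so check the actual install path.)

  xset +fp /usr/share/fonts/misc   # prepend the directory to the X font path
  xset fp rehash                   # make the server re-read the font path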


I don't really understand the fuss about the Python thing. `#!/usr/bin/python2` should work on pretty much any system, not just an Arch Linux system. Whatever happened to "explicit is better than implicit"?

If the Python community eventually wants Python 3 to become the default, they will all have to deal with this issue sooner or later; Arch is just opting for "sooner", as that's in line with the way Arch approaches all changes. The real problem is that there are two incompatible versions of Python, and most Python developers give too little thought to forward compatibility to include an extra '2' in their shebang lines in order to disambiguate their intentions. Just because python == python2 on most systems today does not mean the status quo will forever be the same, nor that it ought to be.

In any case, there is no problem making Python scripts work on Arch, even if they use the ambiguous shebang. All of the official packages work out of the box with no problem, of course, and anything unofficial (i.e. AUR packages) just needs a single, trivial sed command added to the PKGBUILD. Arch is not "maintaining a separate fork of each Python package". This is the opposite of true; in fact Arch packages are, on the whole, far closer to their upstream counterparts than the same packages from other distros, where it is common for huge divergences to be made from the upstream defaults. Hardly worth ranting about.

You come across as if the Arch developers viciously attacked your family or something. Maybe it's just a distro with different values than yours, man! Relax, there are lots of other choices! It's not supposed to appeal to everyone, but for those of us who align well with the Arch "philosophy", it's fantastic.


There is a serious problem when "pretty much any system" excludes both Debian and Mac OS X, and I don't know which others. The "python2" symlink is frighteningly recent in the Python world, and some of us like to support systems other than the bleeding edge. It makes me kind of skeptical that you read my post, because this was the main point of my complaint.

My point is not that it's hard to get Python scripts to run on Arch, my point is that I shouldn't have to do any work at all.

You can see a summary: http://www.python.org/dev/peps/pep-0394/ (note it's not yet been accepted)

"Until the conventions described in this PEP are more widely adopted, having python invoke python2 will remain the recommended option."

"This feature [python2 symlink] will first appear in the default installation process in CPython 2.7.3."

Python 2.7.3 was released on April 9, 2012, only a few months ago. That means that if you are running a Python 2 older than that, you won't have the symlink unless you make it yourself, or your distribution provides one for you.


I've been using Arch Linux for years now and I'm perfectly happy with it. And yes, I'm a Python dev, but I think Arch Linux's naming approach is the right one. The main reason I love Arch is that I like bleeding edge - and I don't really care if it breaks compatibility with other distros (what does that even mean? Except for the Python example, most other software doesn't require any modifications at all).

At work I'm using OS X machines and "administrating" them is way more effort than for Archlinux. With Arch I run a full upgrade every once in a while and if something breaks it's always trivial to fix (if you understand the system). With OS X you have to choose one of multiple broken package management systems (macports, brew, ...). A third of the packages I'd like are missing, the next third is outdated, and the last third doesn't build at all.

It may be perfectly possible that Arch doesn't work for you. But stop assuming that everyone is like you. I don't need someone telling me what to use and what not to use. If you like Gentoo (or OS X or whatever) better, then just use it.


Arch does not maintain a complete copy of all Python scripts; the packages just include a packaging step that uses sed to replace python with python2 in the shebang lines. For example, see the PKGBUILD for Django: https://projects.archlinux.org/svntogit/packages.git/tree/tr...
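
Illustratively, the relevant PKGBUILD fragment looks something like this (the function body and paths are assumptions for a generic Python package, not the exact Django recipe):

  package() {
    cd "$srcdir/$pkgname-$pkgver"
    python2 setup.py install --root="$pkgdir" --optimize=1
    # rewrite ambiguous shebangs so the installed scripts run under Python 2 on Arch
    find "$pkgdir" -name '*.py' -exec \
      sed -i 's|^#!/usr/bin/env python$|#!/usr/bin/env python2|' {} +
  }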

As for 3., adding /usr/share/fonts/local to the X server's font path in /etc/X11/xorg.conf.d/20-fonts.conf does the trick. It's too bad there is no recursive search for font files.
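
A sketch of what that could look like as a shell snippet (run as root; the font directory is taken from the comment above):

  # write a minimal Files section for the X server's font path
  printf 'Section "Files"\n    FontPath "/usr/share/fonts/local"\nEndSection\n' \
    > /etc/X11/xorg.conf.d/20-fonts.conf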

It's kind of explained here: https://wiki.archlinux.org/index.php/Fonts#Fonts_with_Xorg

It's too bad you got bitten by The Arch Way(tm) but some people like it.


This is so lame. Do a ps -e | grep python to find any Python apps you normally run (when idling) - I found out that on my Arch they're using python2, not just python. So why don't you remove python and add a symlink to python2 as python? Won't hurt, would it? I've used Arch for about 6 months; I only got to break the system through my own mistakes, it was never broken through any fault of its own (I actually do read the news on updates when they recommend it). The thing with computers is that they aren't toys that are easy to use, and stop trying to make them so: you'll lose security or performance or space. If something like chmod +x existed on Windows, virus propagation might be a lot less, but who the hell cares to chmod +x an executable downloaded from the net? Or who has the "time"? So they make big and bloaty antiviruses to do that stuff, and they don't even do it right!
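
(Spelling that suggestion out; note that on Arch /usr/bin/python is owned by the python package, so pacman will restore the Python 3 symlink on the next upgrade.)

  ps -e | grep python              # check which interpreter running apps actually use
  ln -sf python2 /usr/bin/python   # as root; gets undone by the next python package upgrade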


>That depends on how far Microsoft and Apple go with the tabletization of Windows and OS X in the next few years.

>We need Linux as a third (and 4th, 5th...) option for all of us edge cases that don't fit the standard consumer mold.

Err, what about Android?

>I don't think that Microsoft will ever manage to completely force the app store on everyone--solely because of enterprise customers--unless maybe they force you to use it on home/basic and make it optional on business/enterprise

That is already the case. To sideload apps on home/pro versions you need a valid developer license. To sideload on Windows Enterprise, the domain IT admins can do it. http://blogs.msdn.com/b/windowsstore/archive/2012/04/25/depl...


>That is already the case. To sideload apps on home/pro versions you need a valid developer license. To sideload on Windows Enterprise, the domain IT admins can do it. http://blogs.msdn.com/b/windowsstore/archive/2012/04/25/depl....

Not really a windows guy, so I didn't know it had gotten that bad. Wow.

>Err, what about Android?

Android is far too rigid in its current iteration to use as a general-purpose OS.

There's just too much you can't do with it. Google has its vision of what it should be, and you have to fight the OS to break away from that (try removing the navigation bar on 4.1).

Android could work as a general purpose OS, but it would require a major fork.


I think the problem is that some Linux detractors do not realize that there are two separate standards of success (as mentioned on the Google+ discussion). There is the business standard, and there is the community standard.

Linux doesn't need to be a threat to Apple/Microsoft in order to be a success. That is an artificial bar that some people seem to have become obsessed with.


I agree, I often get criticised for running a Linux desktop by Windows and Mac people because "nobody uses that apart from geeks lol". Well I'm a geek and I use it because it's the best system for what I do (writing Linux hosted server side stuff) so why do I care what the rest of the market is doing?

I guess there is some merit to this argument in the sense that if Linux was the defacto standard on the desktop I wouldn't have to maintain a dual boot. On the other hand I quite like using a niche OS because it means that I'm not a target for malware/shovelware/adware vendors.


""nobody uses that apart from geeks lol". Well I'm a geek"

Exactly. I cannot count the number of times I have had the discussion: "You should use [MacOSX/Windows].", "Why?", "My grandmother can use [MacOSX/Windows] all by herself."

It is hard to physically restrain myself when people think that is an insightful thing to say.


Same thing here. I get the "why don't you like/use a Mac/iPad?" quite often these days, which is seriously annoying when it comes from non-tech-literate people.


I agree -- it's even stranger when you consider that the commercial success Linux has had on the server side doesn't really impact Microsoft. Linux killed commercial UNIX.


The desktop needs Linux if you care about a free desktop. I spend most of my waking life on the desktop and I just prefer a system that allows me to dig as deep into it as I want. Not because I enjoy the digging, but because I know it's available when I have to (once in a while).


I'm a little confused. Are Linus (and Alan, Ingo and even Ted Ts'o) arguing that the GNOME team experiences breakages because they ignored the kernel team practices and used internal interfaces instead of the public ones?

Also why do they refer to GNOME as a "research" project?


No, they're arguing that the GNOME team defends breaking (GNOME) public APIs by arguing that the kernel team does not hesitate to break internal APIs, instead of recognising that actual public Linux APIs are highly stable.


It's a bit more nuanced than that. Miguel (the founder of GNOME, though uninvolved for the last 5 years) said that the culture of Linux was to break things (see the driver ABI) and then use the fact that the source is available for most drivers to get around it and just recompile them with changes. And that userland adopted the same practice (see autoconf), which led to fragmentation of the software platform with library hell, leading commercial software to stay away for the most part.

As you say, others responded with external vs. internal interfaces, but what is an external and internal interface for things like Gnome or KDE? They have only one API that's used by both their other libraries/applications and application writers.


"They have only one API that's used by both their other libraries/applications and application writers."

Which is a problem they without doubt did not inherit from the kernel team.

There is little doubt that the GNOME guys have been using kernel practices to justify their actions. The point being made here is that they are wrong to do so.


>Which is a problem they without doubt did not inherit from the kernel team.

Interesting point, Windows, OS X, iOS etc. get derided for having internal private APIs that they try to prevent external devs from using, and not doing the same thing is now a 'problem' for GNOME and KDE? Isn't the whole point of Linux for developers, freedom to use it as you see fit?


> Isn't the whole point of Linux for developers, freedom to use it as you see fit?

I've yet to hear of a philosophy that allows you to both 1) use everything as you see fit, even things marked as "internal" or "private" and 2) give you the right to complain about your software getting broken because you relied on things marked as "internal" or "private".

In other words, you can either restrict yourself to the published APIs and demand compatibility or you can use the guts of the system and have no expectations of stability when the guts change.


Allowing the public to use API calls marked as private, with the understanding it may change at short notice, actually seems like a good compromise.


How many things can you mark as private in an application GUI toolkit like Gnome or KDE/Qt and make them available to your own applications? Private/public API doesn't really make sense for GUI or sound libraries.


The Linux internal APIs are things that can only be used to construct kernel modules & components - derivative works that make the kernel function differently or support more hardware. Windows/Apple 'internal' APIs enable Apple & Microsoft to create applications with features or performance not available to competitors.


Regardless of what you think of them not having a stable "external" ABI, it is something they did not get from the kernel team.


Microsoft and Apple do it to illegally stifle, or at least lazily under-support, potential competition with their apps. That is a non-issue in free software. And with Windows and Apple we are talking about company-private userland APIs, not kernel-internal APIs.


Does it really matter?

To my mind this argument is the greatest argument against OSS for anything but hobbyist applications.

To see the gurus of the Linux world squabbling like this makes it abundantly clear why this platform will never take on the established desktop OSes.


I'd hardly call the platform running most of the web useless for anything but hobbyist applications.

And yes, figuring out why Linux soars on servers but stagnates on the desktop is a question worth finding an answer for.

Why does public debate disqualify a technology?


I don't really consider the Linux desktop "dead". Besides Android, it just has a marginal market share because you mostly need to be tech oriented to use it or install it.

I think the risk for the future is that the traditional desktop itself is dying out. In some respects the touch-capable tablet UIs are much more user-friendly. On the other side, the lightweight web-only interfaces (Chrome OS) are becoming increasingly more powerful/useful.

Windows and Mac have already moved to hybrid designs.


Off topic. App.net says it is "a real-time social feed without the ads". This G+ page is quite on topic with some heavy participants, the design looks good, and it's free of ads. Why would I want to pay App.net?


For App.net it's a defining feature. For G+, it's just the current implementation.


For one thing, G+ doesn't have a serious API yet, whereas App.Net already has Android and iPhone apps by third-parties using its API.

But overall, I've found it interesting how little G+ comes up in discussions about Twitter and App.Net. I happen to think it's an absolutely excellent product and find it much more engaging than Twitter. Most people I see complaining about it have made hardly any posts and haven't taken the time to build up a network.


> But overall, I've found it interesting how little G+ comes up in discussions about Twitter and App.Net.

Probably because it's completely Apples-to-Oranges. Twitter/App.net is for short status updates. I deeply value the enforced brevity. G+ posts tend toward blog-style bloviating; the use cases really don't overlap.


Fair point. I've noticed G+ doesn't work at all for live event updates, so I'm expecting them to build a Facebook-like "heartbeat" thing on the side at some stage.

At the same time, I can say I've moved a lot of link sharing to G+, so for that use case, it's a fairly direct substitute. The rich embedding is better and I can say what I want without having to cram it. It's like that Pascal quote about not having time to make it concise...G+ just lets you write a couple of sentences, or more on whim, without having to then manicure it into 140 characters.


Saying "off-topic" doesn't give you a go-ahead to flak on off topic things.

This has nothing to do with App.net. I agree, it's one of the dumbest things I've heard of, but has nothing to do with this thread.


OT - Now would be a really good time for Google Plus to permalink comments (like Slashdot and others have done since ancient history).


At times like this I wonder why am I even trying to read discussions on G+. http://i.imm.io/D7zA.png


I don't enjoy it either. The tampering with the scrollbar and having the text stuffed into some sort of frame ruin the experience. Those two problems make it difficult to read on G+, so I usually just avoid it.


At first I thought this would just be an easy CSS fix, and it almost is -- but the reason this problem exists is because they put that huge sidebar at the top of the page with the guy's face and four or five links under it.

Seeing as long posts and discussions are starting to become pretty common on G+ I hope that the G+ team invests some time into figuring out how to make everything more readable.


Reading this discussion is hard for me without thinking, what would have happened if the whole Linux ecosystem was BSD-licensed?


It would have an ecosystem the same size as BSD's around it.

The GPL license allows companies to contribute technology to the project without fear the technology they contributed will be used in proprietary products from their competitors.

Oracle, for instance, doesn't have to fear HP will use Btrfs in HP-UX and compete with Solaris' ZFS.


If you are referring to the fairly durable schism between the folks who define "open" and "free" one way and the people who define "open" and "free" a different way, then I don't think a whole lot would have happened [1]. I tried to get Sun back into the workstation/desktop game from afar (I had already left by that time) but it wasn't to be. Their original business model is just as durable today as it was when they were founded: spec the hardware, do enough software work that the user experience is capable, and let others build on that. I keep hoping someone will do that with FreeBSD but so far, no such luck.

[1] We can use the existence proof that FreeBSD exists with a bsd license and it hasn't materially changed the Linux outcome.


I think that iXsystems' support of PC-BSD is something like an attempt to do that, but it seems like they're not doing much workstation stuff these days, whereas previously they were at least trying to do slightly more. PC-BSD is getting a reasonable open source ecosystem and user experience around it, though, such that someone who can handle the business development and hardware side of things would have a rather easier time than someone starting from stock FreeBSD.


That's easy: some BigCo would've co-opted it, paid developers to go the last mile and polish it to make it usable, made it incompatible with everything out there, and sold it. Perhaps Sun or IBM or even Oracle. E.g. see OS X and, to a lesser extent, Linspire/Lindows.


Hasn't happened to PostgreSQL and I doubt it would have happened to Linux. The leader and community are pretty strong with Linux, and that is not about the GPL. I would imagine some folks would fork it, but that is always a losing game from a business point of view (paying for a fork vs. patching and getting everyone to pay). Having a BSD-licensed kernel at that point in history, during the AT&T BSD lawsuit, would have been seriously interesting.


Maintaining a fork isn't a big issue if you have enough resources; if Linux were BSD-licensed they would still have had their proprietary fork mirror it "close enough" to pull in at least some of the big changes from the main Linux tree. Android seems to work in this manner to a certain extent (although they open-source patches back when they have to because of the GPL).

You would most likely have ended up with a bleeding edge open source version and several more conservative closed ports.

Bear in mind that a lot of the reason Linux development is strong is that many of its developers are paid to work on it and then contribute back (again because of the GPL). If a company could gain a competitive advantage by paying the same people to work on a closed fork...


I think the problem with maintaining the fork is the speed of development of the community, and let's face it, it is much easier to monetize hardware than software in the open source world. The fear of being left behind and weird vendor specific errors should not be discounted.

I agree though, someone would try. I just think Linux would have done just as well with a BSD license being released at the time it was. The people and the timing deserve more credit than the license.


You haven't seen EnterpriseDB or vFabric Data Director, then.


Neither is more popular than the original.


See also the Unix wars (http://en.wikipedia.org/wiki/Unix_wars), during which the industry mostly lost a decade of work entombed in proprietary forks. Tit-for-tat has better survival characteristics, and the GPL is no exception.


> "last mile"

I strongly disagree with this phrase. I'm not saying making Linux usable is 90% of the work; it isn't, but it isn't "the last mile" either (maybe that + customer support + developer relations and evangelism equals 15-20%).


I meant last mile as in developers scratching their itch. It's much sexier to work on new and interesting things (that mostly break existing things) than to do the grunt work on backwards compatibility, bugs or documentation, not to mention enforcing a central vision and design on developers (ever hear of OSS devs quitting or making forks because of differences in opinion?).


This is essentially a duplicate of https://news.ycombinator.com/item?id=4467653 from one day before.


Except this is the actual thread that's still evolving, while the other was some cut-up, quote-filled "news" post.

The conversation here may parallel the other but I think this link is more valuable.


Here are a few places it falls short in reaching mainstream consumers. I realize many people may not care about that, but I think it could be a viable alternative to Windows or OS X for many people if it were easier to get started with and use.

1. Not clear how easy it is to dual-boot install with Windows for non-tech users, i.e. it's difficult to get started.

2. Make it faster; the latest Ubuntu is sluggish. The UI gets non-tech users excited, but a few minutes of using it and they are going to bounce.

3. Make installing programs easier. I know how to get them running, but I still don't know how to get them properly installed. Average Joe has no chance.

4. Support all common user tasks out of the box, for example viewing videos online.

To me it feels like Ubuntu is 90% of the way there, just a few things stop it from being a good alternative (these things of course are probably not trivial to fix, but I would say they are critical).


Epic thread. I haven't used Linux much since early 2000, but all the giants of Linux I remember from my Linux hacking days are there. Interesting read!


It has been a while since I stopped using a Linux desktop. The reasons I stopped:

a. there were at least two of everything: GNOME, KDE, vi, emacs, gedit, Firefox, Opera, kasablanca, ftpcube ...

b. crap device support: graphics cards, printers, cameras, thumb drives ...

c. highly variable levels of application quality and support


I think this flamewar is just a viral marketing strategy for Google+


I don't think so. It's so hideously designed [1] that the more people don't use it (and have a feeling that it's great, as they've heard others like it), the better for them. Now I'm trained to not open G+ posts unless I'm absolutely sure there's something I will enjoy/learn from. YMMV.

[1]: That stupid "Join G+" black bar, the absolutely unnecessary chrome around the content that takes up more than 83% of the screen real estate on my 15" MBP, no links to comments


IMHO, Gnome2 was great, but since then it's all gone downhill. Still cursing Ubuntu for Unity.


The good thing about Ubuntu is that it has so many forks: Xubuntu (Xfce), Kubuntu (KDE), Lubuntu (LXDE). They all use the same Ubuntu repositories. For example, you can get an Lubuntu system by removing the Unity and GNOME packages from Ubuntu and installing the LXDE ones (see the sketch below). I held off upgrading my Ubuntu machine from 10 because of the Unity thing. I installed Lubuntu 12 on another laptop last week and everything works great; there pretty much aren't any surprises coming from 10.
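
In practice you don't even have to remove anything first: the desktop metapackages coexist and you pick a session at the login screen. Something like this (package name as it appears in the Ubuntu repos, if memory serves):

  sudo apt-get install lubuntu-desktop   # pulls in LXDE and the Lubuntu defaults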

What's annoying isn't Unity but the way Ubuntu changes the way startup scripts are done and small things like .xsession don't run by default (and if you turn them on, suddenly .Xmodmap stops working...). That's true for the Ubuntu derivatives too. But that's a fair trade-off for good drivers (even if they are proprietary) and up to date packages. Otherwise I'd go with Debian testing.


Have you tried the GNOME 3 "classic" desktop in Ubuntu? It's very similar to GNOME 2. So far it's worked out well for me.


Yes - I'm using Gnome3 "classic". It is ok - just doesn't seem as configurable as Gnome2. XFCE might be good - I just don't have the time to figure it out.


Yeah, at first I was annoyed with the lack of configurability in Gnome 3. But when I discovered you could press the super key and just start typing the name of a program to start it, I found it nicer to use than Gnome 2.

Also, installing plugins for Gnome 3 is really convenient. For example, I didn't like Alt+Tab switching between programs instead of windows, so I installed this: https://extensions.gnome.org/extension/15/alternatetab/.

There are still some things that bug me though, like I don't see any way to configure the format of the Date/Time.


I switched to Xubuntu, and use xfce4 on top of Debian too. xfce4 is kind of nice, kind of half-busted, but overall it feels like an older-style desktop, which is what I want.


When I upgraded to Ubuntu 12.04, I installed Cinnamon and have been quite happy with it.


I'll have to try that. How does it compare to the Netbook Remix of old?


I recall trying Netbook Remix on my Aspire One and quickly reverting back to stock Ubuntu, but I can't remember what I didn't like about it. OTOH Cinnamon reminds me pleasantly of Gnome Classic but feels less hobbled, compared to the old Gnome desktop.


FUD, Unity is great and you know it. People bashing Unity have not really tried it yet.


I've been using Unity for about a year and it's not as bad as many proclaim it to be. My biggest complaint is that advanced features are hidden away. If all those crazy Compiz editors and whatnot were easier to accidentally find, it would all be fine. A separate "For Advanced Users Only" options set in "Settings" would fix it and probably eliminate tens of thousands of web searches.


Whilst I agree to an extent - I used to be a Unity basher and now I use it as my main desktop and think it's by far the most polished Linux desktop UI - I can totally understand why it doesn't fit a lot of people's workflows.

It has a very opinionated approach to things and throws away a lot of customisation options (focus follows mouse, you can't move the launcher or go back to a standard taskbar etc).


I love some features, but it just causes too much trouble when working with it.

I need, for example, right-click menus sometimes and those are simply not supported. So the only workaround is installing packages that get me application menus back, which is then a mess where the buttons to close a window switch from left to right all the time depending on whether it's fullscreen or windowed. Not that I liked top menus to start with anyway. And certainly lots of applications just work badly with the top menu, simply because Linux apps were never written for that. So you get stuff like double menus, even with some applications that are in the official distro.

Or the idea to use F10 as a system key - sorry, but it just isn't one and never was. Which is why it's used by lots of applications that are suddenly missing a key now - who came up with this shit?

There is one reason Ubuntu stays on my laptop - it works well enough with the hardware most of the time. And I can live with Unity most of the time. But Unity is the reason Ubuntu doesn't make it onto my desktop.


I've been using it for a few months and becoming more and more upset with it. I have very high inertia and so I've stuck with it to give it a chance, but the deal-breaker is probably going to be that sliding my mouse to a new window ought to result in me immediately being able to type in the new window.


That's not the only mouse-related thing it does poorly. The workstation I use at work came with Unity as the default. I decided to give myself a week to get used to it but gave up after only 3 days. Using Synergy with Unity is a complete pain in the ass with that menu thing that is on the left side of the screen. It was constantly activating when I moved my mouse to the other computer or not activating at all when I wanted it to.

After 3 days I just installed Awesome and went from there. Unity provides me absolutely nothing that I want that Awesome doesn't. The only thing it does for me is get in the way.


I can't get my Gnome2 with Compiz to do focus follows mouse either. How?


I do not remember how, but I am almost 100% sure I at one point ran Gnome2+Compiz with focus follows mouse - something like the sketch below. Maybe you need to install the advanced configuration manager for Compiz.
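
If memory serves it was a gconf key; something like the following (key paths from memory, and they may differ by version):

  # GNOME 2 / Metacity:
  gconftool-2 --type string --set /apps/metacity/general/focus_mode sloppy
  # Compiz with the gconf backend:
  gconftool-2 --type bool --set /apps/compiz/general/allscreens/options/click_to_focus false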


Try KDE, it's pretty easy to enable focus follows mouse.


Is it really inconceivable that other people might not like something that you do?

Give me a break.


Yeah, I've been trying to use Unity for a month or so now, and it's not great. It has potential, but for now, it's quite bad.


I have tried it. Wasn't happy with its support for multiple monitors.


What? Unity is the first shell that actually supports multi-monitor setups properly. I use it with two monitors and it's been just amazing to work with.


The edge resistance. The weird, inconsistent, PITA edge resistance it puts in between screens.

Whoever thought that was a good idea needs to be thrown off the team, forever. It drove me absolutely nuts until I finally managed to get rid of it.

I use FVWM2 at work with two monitors; when I have time I plan on moving to xmonad. I've used a few other lightweight WMs too. They all worked OK with my dual-monitor setup.

I've used Unity at home since the beginning, and it's the single worst multi-screen experience I've ever had. I only stick with it in hopes that I'll get to watch it evolve into something usable.


So, you haven't tested it with KDE. I used a dual-monitor setup for a year with KDE and it's amazing! The only problem I had was thanks to crap ATI drivers.


The Linux desktop will never succeed as long as nobody can profit from it. As long as Microsoft prevents the distribution of Linux through tithes or threats of patent litigation, why would anyone spend a penny making the desktop look better?

edit: sources

http://www.informationweek.com/windows/microsoft-news/micros...

http://www.groklaw.net/article.php?story=20090619161307529

http://www.pcmag.com/article2/0,2817,2331462,00.asp


> The Linux desktop will never succeed as long as nobody can profit from it.

There is a strong case for someone paying developers to create a free alternative to Windows to commoditize desktop operating systems, thereby attacking Microsoft's revenue stream.

http://www.joelonsoftware.com/articles/StrategyLetterV.html

The problem has been that it's too difficult to produce something easy enough to transition to from Windows and Windows apps for this to work.


>There is a strong case for someone paying developers to create a free alternative to Windows to commoditize desktop operating systems, thereby attacking Microsoft's revenue stream.

See Ubuntu bug number 1. https://bugs.launchpad.net/ubuntu/+bug/1 Mark Shuttleworth is that 'someone'.

>The problem has been that it's too difficult to produce something easy enough to transition to from Windows and Windows apps for this to work.

Every time a Linux distro is made one bit "easier" or simpler, the power users find it "dumbified", quit that distro and move on.


I think we have enough developers now using Linux that UI nitpicking should have a negligible impact.

At every conference I go to, I see tons of Macs. Those developers are very unlikely to find Gnome 3 or Unity too constraining for their tastes.


>Those developers are very unlikely to find Gnome 3 or Unity too constraining for their tastes

The problem is that most of those developers have already left Linux and would be unwilling to switch either because of the hardware they like or the Mac app ecosystem or toolchain they use (for example iOS development). I doubt a significant percentage would come back even with a better Linux.


> why would anyone spend a penny making the desktop look better?

Why don't you ask Canonical?


That's an extremely opinionated point that doesn't add to the discussion and comes off as a witch hunt. MS couldn't stop Samsung, HTC, Lenovo etc. from shipping Android devices, nor all of them plus Acer, Dell, and a hundred other OEMs from shipping Android tablets. What about HP and WebOS? Dell still ships Ubuntu laptops. And almost all the big OEMs have a big Linux division beside their Windows server division on their server pages. http://content.dell.com/us/en/enterprise/linux-servers http://h18000.www1.hp.com/products/servers/byos/linuxservers...

Where are the lawsuits against System76? If there really was demand, you think a company wouldn't start up and offer Linux? In fact there have been OEMs shipping Linux, but stopped or hid their offerings after the high rate of returns from people buying and returning them. And the tech savvy people who are the real customers can buy ANY PC out there and throw Linux on it(or just get a Macbook Air). There's not much money in it for the OEMs except high support costs, return rates and a whole toolchain to get Linux images(hard on thin margins).

If there is real mass consumer demand, a company can ship Linux profitably. MS won't care about them till they start making hundreds of millions a quarter, at which point successfully fighting a lawsuit (based on antitrust case precedent on PCs) or paying a small amount per PC won't really hurt the manufacturer. So stop painting MS as the bogeyman in this discussion; the dominance of Windows and Office (due to Open/LibreOffice not being up to par, for various reasons including Office compatibility) is a well-known reason.


1). Yes, companies ship Android devices, but if you read any of the articles you would have seen that those companies pay $8-15 for a Windows Mobile license for each phone.

2). You are right about Microsoft not trying to litigate for Linux servers, but I'd take a guess that is because most companies already have a heterogeneous environment, and if push came to shove I would assume that most companies would consider a full switch to Linux if Microsoft forced their hand.

3). HP and Facebook are the two main companies defending Linux atm, you are right about that: http://www.theregister.co.uk/2011/04/20/facebook_hp_and_open...

4). Dell does not currently distribute any consumer laptop with Linux as an option, at least not when I tried the top 10 popular laptops on their site.

5). They don't profit from putting System76 out of business, and in fact it would probably cause much more financial harm in terms of what it would do to their image, picking on the little guy.

6). You are absolutely right about the low consumer demand. However, don't you think it's weird that NOBODY is pushing it? Not even as an experiment? No smartbook running Android? Why is the Asus Transformer the closest attack we see?


> Dell does not currently distribute any consumer laptop with Linux as an option

> However, don't you think it's weird that NOBODY is pushing it? Not even as an experiment? No smartbook running Android?

Wrong. It's funny that you're commenting on this so vehemently, with so much anger against MS in this and previous threads, and yet you haven't even heard of Dell's Sputnik?

http://www.pcworld.com/businesscenter/article/259229/dells_u...

They do sell in places where there's more demand, and the Windows tax is high in relative cost of living terms.

http://www.omgubuntu.co.uk/2012/06/ubuntu-dell-laptops-go-on...

They did try a while ago in the US and they didn't sell well and now they're making a dev ultrabook.

http://arstechnica.com/information-technology/2012/07/dell-l...

http://www.zdnet.com/dell-re-enters-high-end-linux-laptop-ma...

Microsoft is plenty evil, but there's no need to make up things.

Edit: How many would buy Dell's laptop? Most devs would prefer their own choice, perhaps a MacBook Air, or they'd throw Linux on a ThinkPad. How does Dell get a leg up on Lenovo (with Windows) here without making 20 laptops with Linux?


Can you buy a Sputnik laptop? No, it's not for sale. Until you can, it's just another Linux laptop that gets press and mysteriously never gets released; see every smartbook (aside from the recent Chromebooks, which corps. still have to pay to suppress patent litigation) http://www.techradar.com/us/news/phone-and-communications/mo....


This strategy is very risky. Microsoft can litigate any OEM into the ground, and the fees they extract from Android makers would eat away any profit margin in the x86 PC space. The fact that they convinced Android OEMs to pay can be used to legitimize those patents in the PC space, and many of their "licensees" already are PC makers (and thus will have a harder time litigating those patents).


Any shakedown by MS in the PC space is extremely unlikely to fly, given the terms of the settlement in the antitrust case in both the US and EU, and the restrictions still in place in the EU on the browser choice screen, requiring network and document specs etc. Hell, B&N even tried the antitrust angle in the Android tablet patents case.


> Any shakedown by MS in the PC space is extremely unlikely to fly

Not if they have a good excuse (patents Android OEMs consider valid enough to license).

> Hell, B&N even tried the antitrust angle in the Android tablet patents case.

Unfortunately, Microsoft and B&N became "partners" before the list of secret patents became public.


I find this discussion increasingly irrelevant. It's hard to make the case that companies like Samsung or Dell are not pushing Linux more just because MS may increase their Android requirements.

Even a hint of any such thing would result in a multibillion-dollar fine paid to the OEMs and an injunction against MS in the courts. What about the fact that there is little interest in the general public for Linux (due to many reasons, including the Wintel and Office monopolies), and that the OEMs are hampered more by that and the reasons in my OP above, rather than by your weak reasoning?

Have you seen how few Chromebooks have been sold? Do you really think MS hampered them with their patent licensing deals (there have been a couple), or do you think there are much bigger reasons that ChromeOS/Linux hasn't taken off? What about new companies like System76? Why don't you start a new company that sells Linux PCs? Do you know how Dell or Compaq started? It was just one or two people in a garage assembling PCs and throwing DOS on them. The difference was that people bought a lot of them.

http://www.zdnet.com/chromebook-looks-like-another-googleflo...

This whole blame MS for everything regardless of real issues attitude is getting old.


You argue there is no demand, and I say that, even if there was demand, the PC margins would not support the Microsoft taxes associated with Linux. You say Microsoft would not extort PC makers like they do with Android makers, but there is no evidence for that. If you have any documents suggesting otherwise, it would be nice if you could produce them.

Besides, when all PC makers depend so much on Microsoft, there are many other creative ways to punish OEMs who want to be more independent without raising too many eyebrows.


>Microsoft would not extort PC makers like they do with Android makers but there is no evidence for that. If you have any documents suggesting otherwise, it would be nice if you could produce them.

I shall do exactly that. http://www.justice.gov/atr/cases/f200400/200457.htm

>A. Microsoft shall not retaliate against an OEM by altering Microsoft's commercial relations with that OEM, or by withholding newly introduced forms of non-monetary Consideration (including but not limited to new versions of existing forms of non-monetary Consideration) from that OEM, because it is known to Microsoft that the OEM is or is contemplating:

developing, distributing, promoting, using, selling, or licensing any software that competes with Microsoft Platform Software or any product or service that distributes or promotes any Non-Microsoft Middleware;

shipping a Personal Computer that (a) includes both a Windows Operating System Product and a non-Microsoft Operating System, or (b) will boot with more than one Operating System; or

exercising any of the options or alternatives provided for under this Final Judgment.

>Besides, when all PC makers depend so much on Microsoft, there are many other creative ways to punish OEMs who want to be more independent without raising too many eyebrows

If an OEM can prove it, billions of dollars to them in court.

Maybe System76 can be the big Linux OEM, except that Linux folks buy Thinkpads or Macs.

https://www.system76.com/

Do you have any documentation that MS is suing them?




