Major Linux Problems on the Desktop, 2016 Edition (narod.ru)
329 points by gerbilly on Dec 30, 2015 | 371 comments



He's right.

Many of the driver problems come from the fact that Linux finally worked on the desktop about the time desktop machines were replaced by laptops. Desktops with slots tended to have relatively well-defined hardware, and plugging in third-party hardware was normal. This is much less true for laptops. OS development for a laptop requires having that laptop: it needs a QA organization which has one of everything you support. Linux lacks that.

Microsoft got drivers under control with the Static Driver Verifier, which uses automatic formal proofs of correctness to determine whether a driver can crash the kernel. (The driver may not control the device correctly, but at least it won't blither over kernel memory or make a kernel API call with bogus parameters. So driver bugs just mean a device doesn't work, and you know which device and driver.) All signed Windows drivers since Windows 7 have passed that. This has eliminated most system crashes caused by drivers. Before that, more than half of Windows crashes were driver related. Linux has no comparable technology.
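
A sketch of the class of bug such a verifier targets (illustrative only, written here as Linux-flavored C with hypothetical names; SDV itself checks Windows driver API rules): a rule like "every acquired lock is released on every return path" can be checked by exploring paths statically, without ever running the driver.

    #include <linux/errno.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(dev_lock);

    static int dev_ioctl(int cmd)
    {
        spin_lock(&dev_lock);
        if (cmd < 0)
            return -EINVAL;   /* BUG: returns with dev_lock held;
                                 a verifier flags this path */
        spin_unlock(&dev_lock);
        return 0;
    }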

The monolithic Linux kernel is just too big. What is it now, 20,000,000 lines? There's no hope of debugging that. It shows.


> The monolithic Linux kernel is just too big. What is it now, 20,000,000 lines? There's no hope of debugging that. It shows.

The Linux kernel is about 20MSLOC. The Windows kernel is about 50MSLOC. IIRC, OS X used to be ~80KSLOC.

Problems with debugging are endemic to any monolithic kernel. Neither Windows nor OS X is easier to debug technologically, but Microsoft and Apple both have many employees and lots of money invested compared to Linux.

(Also, Apple solved the problem by making their OS specific to their computers, so they had the whole thing under their control.)


>Microsoft and Apple both have many employees and lots of money invested compared to Linux

There are a lot of people who are paid to work on the kernel full time, from Red Hat, Google, IBM and many others. If I had to guess, I'd say there are probably more than for the other two; it would be interesting to find out. But if you include people for whom it's not 100% of their job, but still an official part of it, I'd say it's almost certainly more for Linux (not even counting unpaid contributions).

Linux is the most popular platform for servers and HPC because most of the time it's the better kernel. It's so dominant that Apple has basically left the server area, and the handful of showcase supercomputers built on Apple gear have long since faded from view. Linux went from having one supercomputer in the top 500, a fraction of a percent, in 1998 to 98.8% of the top 500 currently.[0] The other six seem to be IBM machines running something else. OS X's first release was in 2001, and neither it nor Microsoft's offerings are present in that top section of the HPC space.

Linux is also the kernel on the most popular smartphone platform, so it's not all servers and HPC either: when the hardware is tightly controlled, it works fine. The problem isn't the number of developers on the kernel or how hard it is to debug; it's that laptops and desktops aren't offered by a single company that can tie everything together.

[0] http://www.top500.org/statistics/details/osfam/1


> Red Hat, Google, IBM and many others

And they are all working on desktop/laptop support, right? Linux is great in the data center because it has big guns behind it in the data center. Linux runs well on cell phones because Google put in the effort. As soon as someone is willing and able to put in the effort on desktop Linux, it will be as good as it is in those other areas.


I think everyone can agree that Linux's biggest shortcomings on desktops/laptops are graphics and audio. So even though I agree with "As soon as someone is willing and able to put in the effort on desktop Linux, it will be as good as it is in those other areas," I would replace "someone" with "every vendor".

I feel like most vendors (NVIDIA/ATI/Wacom/whatnot) concentrate much more of their effort on supporting Windows and even OS X, because that's where their audience is.

Also, I remember reading somewhere that NVIDIA/ATI work closely with Microsoft because of DirectX [citation needed, though]. I had the opportunity to work with DirectX (the new API) and I found it much more pleasant than working with OpenGL (even though I ended up using OGL in the end; I used Windows and DX to simplify prototyping, because doing the same thing in OGL required much more dev time, at first at least).

EDIT: Also, let's not forget how the majority of the Linux developer community neglects GUIs and overall end-user friendliness, and how the environment is in most cases quite hostile towards UX/GUI designers in general. There are, of course, exceptions, but those are few.


While it is a vendor issue, vendors don't just randomly support hardware. They go where the big guys are. As you say, NVIDIA/ATI contribute to DirectX, but it happens because MS makes it happen. Google dragged the cell vendors into supporting Android. Desktop Linux just needs a backer that is big enough to make them put forth the effort.


"...making their OS specific to their computers..."

As a non-Linux open source project OS user, I am continually faced with driver deficiencies as a result of hardware specs being under NDA.

A recurring idea I have which I am here sharing for the first time (apologies!) is: why not just pick a single item of hardware and build an open source, free OS project around it?

Why? Hopefully, more control, to the extent possible (notwithstanding Intel ME, etc.). Coreboot, support for as many peripherals as possible, etc. Most importantly, the elimination of the issue of hardware support and the notion of a list of "supported hardware".

Why not? Performance, latest advances, etc.

Hasn't this been done? Maybe. OpenWRT, etc.? But my understanding is that the use of Linux on this router was initially the non-public work of a company, Linksys, and the open sourcing by Cisco was neither anticipated nor intentional.

How is my idea different? The project would be free, open source, but intentionally focused on a _single_ target. Big tradeoff, but maybe some interesting gains.

To be clear, I like the idea of hardware that is more or less "OS agnostic", e.g., RPi and booting from SD card.

But I am tired of watching volunteers struggle to keep up with the latest hardware (many thanks to the OpenBSD and FreeBSD contributors who write drivers for networking, etc.), or having to settle for binary blobs.

Maybe I am just dreaming, but I could foresee such a project potentially growing into a symbiotic relationship with some manufacturer if the OS developed a sufficiently large, growing user base, with all of those users purchasing a very specific item(s) of hardware known to be supported by this OS.

If you comment, please remember I am not a Linux user. And hardware support is not quite the same under BSD. As such, it is something I often have to think about and cannot just take for granted.


Linux mostly works on most laptops. However, my next laptop is definitely going to be at the very least on Ubuntu's supported list, and probably going to be either a Dell or a System76 machine with pre-installed Ubuntu.


Oops, I meant ~80MSLOC for OS X.


Kernel debuggers exist. Of course, we're all aware of Linus' opinion on the matter.


Not all debug tools are single-step debuggers. There are memory verifiers, formal proof techniques and self-testing frameworks that Linux uses that are very, very useful.


What are those tools?


sparse -- adds annotations to kernel code which can be checked by the compiler. It is a little bit like a parallel type system which provides domain-specific knowledge like "this function takes lock A and then lock B" or "this function runs in interrupt context." See https://sparse.wiki.kernel.org/index.php/Main_Page
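
For instance, a minimal, hypothetical sketch (the __must_hold annotation is one of the real sparse annotations from the kernel headers; the functions are made up), checked by running sparse via "make C=1":

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(foo_lock);
    static int foo_count;

    /* annotation: callers must already hold foo_lock */
    static void foo_bump(void) __must_hold(&foo_lock)
    {
        foo_count++;
    }

    static void foo_update(void)
    {
        spin_lock(&foo_lock);
        foo_bump();              /* ok: lock held */
        spin_unlock(&foo_lock);
        /* calling foo_bump() here instead would trip
           sparse's context checking */
    }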

kmemcheck -- sort of like valgrind, but for the kernel.
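
For example (made-up names), the kind of bug it reports at runtime is a read of allocator memory that was never written:

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct cfg {
        int mode;
        int flags;
    };

    static int cfg_mode(void)
    {
        struct cfg *c = kmalloc(sizeof(*c), GFP_KERNEL);
        int mode;

        if (!c)
            return -ENOMEM;
        c->flags = 0;        /* c->mode is never initialized... */
        mode = c->mode;      /* ...so kmemcheck flags this read */
        kfree(c);
        return mode;
    }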

CONFIG_FAULT_INJECTION -- inject random faults at runtime (such as in memory allocation) to test infrequently encountered error paths.
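
A sketch of why that matters (hypothetical names): the second allocation's failure branch below is almost never taken in normal operation, so bugs in it can hide for years; injected allocation failures force it to actually execute under test.

    #include <linux/slab.h>

    struct foo {
        char *buf;
    };

    static struct foo *foo_alloc(size_t len)
    {
        struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);

        if (!f)
            return NULL;
        f->buf = kmalloc(len, GFP_KERNEL);
        if (!f->buf) {
            kfree(f);    /* forgetting this would be a leak that
                            only fault injection is likely to surface */
            return NULL;
        }
        return f;
    }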

CONFIG_DEBUG_MUTEXES, CONFIG_DEBUG_SPINLOCK -- run expensive mutex validation checks at runtime.

coccinelle -- a source code matching and transformation engine. You can use it in some of the same contexts as sed or awk. Unlike those tools, it is aware of the C language so it can do smarter things like add an extra final argument to all occurrences of a call to do_foo_bar_baz(). See http://coccinelle.lip6.fr/

checkpatch.pl -- Checks a patch to see if it conforms to the kernel style guide. Simple things like enforcing 80-column lines, but also more complicated things like variable naming, whitespace, etc.

smatch, flawfinder -- static analysis tools that are similar in principle to Coverity. Like Coverity, they are unsound, but often helpful.


Thanks for the summary.

However, what do we have on the formal proof side?


I'm not, does he not like kernel debuggers or something?


"No" would be an understatement: https://lwn.net/2000/0914/a/lt-debugger.php3


He apparently softened somewhat on the issue and did eventually (2008 / 2.6.26) merge a debugger into mainline. I don't think he ever really had a good answer to Alan Cox pointing out that you can't always reason your way through hardware misbehavior.


His argument also necessarily presupposes that you're never going to use a debugger to root out latent bugs; that is, you're only using a debugger because you've just written some buggy code and you're trying to figure out where the problem lies. Which is silly. There are bugs in the kernel right now, there were bugs when that was written (he even alludes to it, but seemingly fails to understand the implication), there will be bugs introduced in the future, and someone at some point might like to find those bugs.

Some of the best "What is the worst bug you ever encountered?" war stories are cases where people end up narrowing things down to a bug in what they were previously treating as bedrock.


> And quite frankly, I don't care. I don't think kernel development should be "easy". I do not condone single-stepping through code to find the bug. I do not think that extra visibility into the system is necessarily a good thing.

That's insane.


That's a really interesting post, and I'm glad I read it. I find myself agreeing with Linus quite a bit, and until now I did not realize that there is programming being done without step-by-step debugging. And I think Linus's claim that people would be more careful when first designing and writing code if they didn't have a debugger to fall back on makes a lot of sense.

Unfortunately I work with too many people whose approach to programming is "Read spec, code, debug why code isn't working to spec, fix that specific bug". Design, architecture, etc. are simply not part of the process. I can see how a lack of easy debugging may force some forethought into the development process.

That being said, how valid would this be in a development environment (like the kernel) where a lot of the work you are doing is making changes designed and created by someone else? Linus says the solution is to make sure you were careful at the start. But what if you weren't even there at the start, and had to step in later?


The point is his view about kernel debuggers is complete nonsense, and I'm baffled that you and so many others could ever take it seriously.

Yes, people would be more careful without debuggers, the same way they'd be more careful with cooperative multitasking hanging the system when they forget to yield. That doesn't make it good.


Quite often I read posts by Linus and find myself unable to take him seriously. He's arrogant, abrasive, and some of his opinions (security, debuggers apparently) are ridiculous.


Honestly, Linus is a genius at the social side of big projects. He managed to scale Linux development very well, even writing Git along the way. On the technical side, judged in the context of its time, the record is more mixed: before the 2.6 series the kernel had far too many architectural flaws to be considered for serious workloads (even if I enjoyed using it from the 2.0 series on); and today I don't know a single programmer around me who thinks he is right about his approach to security. On the other hand, he was one of the first high-profile programmers to call the modern compiler writers' interpretation of C undefined behavior insane.

At least the early technical shortcomings can be explained: the Linux kernel hackers of that time were inexperienced, and more than once they quietly learned from (some) NT designs and/or the big Unixes of that era (probably mainly Solaris) and adopted those ideas not long after trolling about why their previous, simpler approach was good enough (when it clearly was not). But today's approach to security, coming from people who have worked in the industry that long, is inexplicable. (Maybe it is just too early and they will quietly convert to reasonable opinions in 2016? :p)


No, he's not. He is a little arrogant - it's hard to be such a significant tech lead without being so - but if you actually read what he writes, he's not actually that abrasive.

What happens is that some sensationalist idiot will see Torvalds lose his patience and post an angry retort to someone, and post just that response to places like HN, completely out of context. I've lost count of the number of times it's happened here, when everyone's tutting about Torvalds' behaviour and how he's "so aggressive right out of the gate", when no-one looks at the thread history to see him being patient and explaining things.

In any case, zero of the tech heads from Jobs on down satisfy your demands to be taken seriously. I really don't understand why Torvalds gets held to this higher standard of hippie-level friendliness when people don't expect the same from other major tech project managers. I mean hell, Jobs was adored for having opinions that people considered ridiculous.


And how people would be more careful drivers if you replaced airbags with a metal spike.

You would wind up with more careful drivers, but anyone who thinks that's the only goal is misguided.


I think the better analogy would be driving with a very, very dirty windshield. Or a slippery steering wheel.

It's not that the absence of a debugger would (in Linus' opinion) make you more careful because it's risky, but because it's really inconvenient.


And yet the kernel he created, developed with his methods, runs on the greatest variety of systems, from toasters right up to the top 500 supercomputers, where it is the predominant OS by a wide margin.


Not so sure I really disagree. If I had to guess, I'd think there are a lot more people working on the Linux kernel in aggregate than on either the Windows kernel or the OS X kernel.


Aren't a significant chunk of those people "working on the Linux kernel" one-commit wonders? My guess is that if we rank by man-hours, Windows would top the list, possibly followed by OS X.


The desktop computer market is a pretty small part of the overall kernel developer market. Embedded systems is by far the biggest employer. There are just many more phones and other devices out there than PCs-- a trend that seems set to continue. Linux dominates embedded systems and the server market, so I would expect it to dominate the overall mindshare numbers as well. I would expect OS X to have the least number of kernel developers because there just aren't that many hardware configurations for iDevices, and everyone who is bringing up an OS X board works in Cupertino (which has good and bad aspects, of course).


I wasn't sure OS X would be ahead of Linux, that's why I hedged my statement.

Also, while the embedded Linux ecosystem is rather large, what fraction of it is kernel code that gets upstreamed/mainlined? As an outsider, my guess is that it's a minority (some of the work is on non-kernel code, and not all embedded kernel changes are or will be in the mainline kernel, so I would not count those as actual contributions).

I say this as a Linux desktop user/layman.


There are a large number of people who do kernel-level integration work-- configuring things, downloading drivers, setting things up, and otherwise dealing with the hardware. This kind of work isn't the most glamorous, but it definitely counts as "actual contribution" to the community. Integrators find bugs, make suggestions on mailing lists, and help develop products that make investing in Linux worthwhile for everyone involved. Most integrators that I've known like to submit patches every now and then, just for fun.

I would guess that there are more people whose primary job is integration than people whose primary job is kernel development. I don't really have any concrete numbers, though. On the driver development side, companies are starting to get smarter about getting their code upstream first-- look at NVidia and Intel's recent efforts in that area, for example.


Most committers only review or care about a small area of code, and thus have a lower frequency of commits. That's a good thing, IMO.


> Microsoft got drivers under control with the Static Driver Verifier, which uses automatic formal proofs of correctness to determine whether a driver can crash the kernel. ... Linux has no comparable technology.

That's a pretty cool idea (also similar to the static verification in NaCl). Is there a reason one couldn't implement a static verifier for Linux drivers? Would the problem be harder in any sense for Linux, because of e.g. the number of exposed kernel APIs? Or could a static verifier "read off" the kernel's API with no manual annotation required?


AFAIK MSR sponsored lots of research in this area. Program verification is still not that easy.

[edit] http://research.microsoft.com/en-us/projects/slam/


Not really; people are doing some experiments with the Linux kernel as well. https://www.usenix.org/system/files/conference/osdi14/osdi14...


A good way to cope with this, from a market standpoint, might be for Linux development to focus on specific laptops. If, say, Dell were to pay Ubuntu to test and verify some of their specific laptops, and Ubuntu subsequently was able to list "100% Certified" laptops for purchase, it could create an opening.

The same goes for developing for Apple laptops, with their very specific hardware set, etc.


Isn't this already happening? http://www.ubuntu.com/certification/desktop/


Beware. I went out of my way to get an Ubuntu certified laptop[1]. It took me months to get it to a usable state. Graphics drivers crashed or corrupted the screen[2]. Bluetooth didn't work. Wifi didn't work. While suspended to RAM, it drained 10% of the battery every hour. In short, it was a nightmare.

I've had it for almost two years now, and I've given up on getting Bluetooth to work. After resuming from suspend, the wifi works about half the time, and screen brightness is always set to maximum. It's tolerable, but I only use the laptop when I have to.

I doubt I'll ever buy a Linux laptop again, but if I do, I'll be sure to try it hands-on before buying.

1. The Lenovo X140e: http://www.ubuntu.com/certification/hardware/201309-14195/

2. https://twitter.com/ggreer/status/548923450640056321


> Beware. I went out of my way to get an Ubuntu certified laptop[1]. It took me months to get it to a usable state.

Why didn't you return it and get -say- a Thinkpad? (Were you -perhaps- just curious how shitty the "Ubuntu Certified Laptop" program is?) It clearly failed the "Fitness for advertised purpose" test. AFAIK -if you're in the US- the seller can't refuse to accept your return... unless it was sold as-is.

> I doubt I'll ever buy a Linux laptop again, but if I do, I'll be sure to try it hands-on before buying.

I've found great success with the following method:

* Get a detailed list of the parts inside a given laptop. (lspci info from the target model is a really good sign)

* Run screaming if the video card is made by Nvidia. [0]

* Find if there are in-tree kernel drivers for each of the parts. (If there are, this is a really good sign.)

* Find the out-of-tree drivers for the remaining parts, and see if there are solid plans to get them in-tree. (If there are any such plans, that's a good sign.)

* Discover the known issues for all of those drivers.

* If the drivers seem to do everything that I need them to, and the known issues list doesn't contain any show-stoppers, the laptop will likely work just fine. :)

[0] I know that this is a controversial opinion. I've had awful luck with the nouveau driver and really bad luck with the official Nvidia driver. Other people haven't. I'll stick with Intel-powered laptop video cards if I can. :)


> Why didn't you return it and get -say- a Thinkpad?

The Lenovo X140e is a ThinkPad.[1] I didn't blindly trust Ubuntu's certification. I made sure to get a brand that historically has had good Linux support. I also knew about Nvidia graphics and avoided them. Still, I got burned.

I don't doubt your checklist is good advice for buying a Linux laptop, but it's simply too time consuming to check all of those things. Even if it wasn't, the likelihood of everything working well is low. All it takes is one bad driver for one piece of hardware and the laptop becomes a constant annoyance. Considering the number of hardware devices (Bluetooth, wifi, mic, camera(s), trackpad, GPU, fan, power saving, etc.) it's all but certain something will go wrong. Maybe audio won't automatically switch between headphone and speaker output. Maybe the fan will run at a few discrete speeds instead of gradually ramping up/down. Maybe it will wake from sleep if you open the lid, but not if you hit a key on the keyboard.

I'd rather just pay money and get something that I know will work. That's why my main development machine is a MacBook. I wish there was a competing brand of unix laptops, but so far… no dice. :(

1. http://shop.lenovo.com/us/en/laptops/thinkpad/x-series/x140e... Though after I purchased it, some people told me it wasn't a true ThinkPad, whatever that means.


> The Lenovo X140e is a ThinkPad.

Oh, heh. Derp. Edit: I mean to say: My bad. I overlooked that. :(

I see at [0] that the only Ubuntu Certified configuration is with a rather ancient pre-installed version of Ubuntu. Did you get the system in that configuration, or did you purchase it and put Linux on it? [1]

Regardless. Why didn't you return it and get something that worked? Curiosity? Cussedness?

> [I]t's simply too time consuming to check all of those things.

Odd. I find it reasonable to spend between a couple of days to a week researching the suitability of something that I'll use throughout the day, every single day for next three-to-ten years. Perhaps my opinion is atypical.

> Considering the number of hardware devices (Bluetooth, wifi, mic, camera(s), trackpad, GPU, fan, power saving, etc.) it's all but certain something will go wrong.

I guess I've had fantastic luck with my personal selections and the recommendations that I've given to others. Given that luck is my super power, I'm somewhat willing to believe that my experience is somewhat atypical. :)

Anyway. Good luck with your projects and such, and I hope that Apple keeps producing hardware that meets your needs.

[0] http://www.ubuntu.com/certification/hardware/201309-14195/

[1] Still... one would expect that any Ubuntu Certified Laptop that has a supported hardware configuration would be detected by the Ubuntu installer and configured appropriately (or you'd get a big fat warning when the hardware isn't "supported" by a later Ubuntu release). OTOH, Canonical isn't the best at getting things right, so... :-/


> I find it reasonable to spend between a couple of days to a week researching the suitability of something that I'll use throughout the day, every single day for next three-to-ten years. Perhaps my opinion is atypical.

Aargh, this is precisely the sort of attitude that is causing problems in the first place! Of course some research is necessary prior to any purchase, but the issue here is that clear and correct information doesn't even exist!

The expectation of nerds that people have this sort of time to take off simply to get a working computer in this day and age is mind boggling. People shouldn't HAVE to take a week to do research to get basic things like this to work.

It's not reasonable and your opinion is atypical for people who have full time jobs with long hours and families to look after. I really want to use and support Free/Libre Gnu/Linux, whilst still being productive. But many Linux users have better things to do than trawl through lspci, do literature reviews of ancient threads on email lists and bug reports, etc, etc, and then finally somehow manage to design a hardware/software configuration that even mostly kind of works. The very worst thing of all is that the typical Linux nerd thinks that this is normal.


So just get a laptop with pre-installed Linux then...


Can you give me a recommendation?


I think that Dell do some. There is a company called System76 that sells pre-installed Ubuntu boxes. I can't speak for either company's products, having personally tried neither. I will be using one of them for my next purchase, but I'm not sure which.


> Did you get the system in that configuration, or did you purchase it and put Linux on it?

I tried to get it with Ubuntu preinstalled, but neither Lenovo's website nor their phone support could configure it that way. After about 20 minutes on the phone, I managed to get the exact hardware configuration shown on Ubuntu's certification page: AMD A4-5000, Broadcom BCM43142, etc. In hindsight, I doubt Lenovo ever sold an X140e with Ubuntu preinstalled.

> Regardless. Why didn't you return it and get something that worked? Curiosity? Cussedness?

When I first turned it on, I noticed the X140e had several annoying LEDs. Both "ThinkPad" logos had glowing red dots in their i's. There was also a large green LED near the camera. It glowed whenever wifi was powered-up (pretty much all the time). I found these annoying, so I painted them over. Oops. Next time I'll use nail polish, which can be removed with acetone.

> I find it reasonable to spend between a couple of days to a week researching the suitability of something that I'll use throughout the day, every single day for next three-to-ten years.

Our thoughts on this matter are quite similar. On average, I spend almost 10 hours a day using my primary development machine. I upgrade every 1-2 years, which works out to 5,400 hours of use. That's a lot of time interacting with one piece of hardware. I definitely want to make sure I get the best tool for the job. That 5,400 hours has another implication: Amortized over the life of the machine, even a $3,000 laptop will only cost ≈50 cents per hour. That makes me extremely insensitive to price. I simply want whatever works best.[1]

As peatfreak said, research won't guarantee satisfaction. The only way to really know if a piece of hardware will work for you is to actually use it. That's one huge advantage of Apple (and now Microsoft) products: I can walk into a store and test the hardware/software combo. In just a few minutes, I can tell if it lacks the annoyances in my list[2]. These details are extremely hard to verify without actually using the machine.

I would be much more open to getting a Linux laptop if I could try it out before buying. Unfortunately, I don't think the market is big enough to make brick-and-mortar stores feasible.

1. I've written about this in more detail at http://geoff.greer.fm/2010/10/30/expensive-computers-are-wor...

2. http://geoff.greer.fm/2015/07/25/laptop-annoyances-or-why-i-...


> Run screaming if the video card is made by Nvidia.

In my experience, it's the opposite. 5-6 years ago, ATI was the friendly one and nvidia gave you hell trying to get it to work. Now it's flipped - the ATI cards I've tried just plain don't work, whereas the nvidia ones will work, and with a few choice harsh words, will work well. Just my anecdata, though, and this is with desktop cards, not laptops (I use thinkpads with intel graphics...)


Are the ATI cards you've been using the absolute newest ones, or a couple of generations back?

Also, are you using the closed-source or the open-source ATI drivers?

And, are you using Ubuntu, or are you using some other distro? (My experience with non-LTS Ubuntu has been... substantially less than stellar over the past several years.)


Well then. It's as if the people at Ubuntu are pretty smart and know what they're doing. :-)


Except they certify systems with lots of problems. Dell had to stop selling their XPS 13 with Ubuntu for a while due to the number of issues.

http://bartongeorge.net/2015/07/20/xps-13-developer-edition-...


I happily paid a premium to System76 for their Linux laptop. Then I took a calculated risk and replaced their Ubuntu distribution with CentOS 7. I've had small hardware troubles (the biggest being video initialization during boot - sometimes it just locks up at a black screen; power cycle eventually resolves the issue, which does not recur until the next power cycle) - but my biggest troubles come from my employer's enthusiastic embrace of proprietary Microsoft protocols. For which I use the company issued Windows 7 machine, and the Mac users get a virtual machine image.


Nonconformity, customization, and choice are major reasons people use Linux instead of OSX, just as they are for Androids over iPhones. People who are okay with One Good Product Per Category are using Apple hardware already.


Only a minority of the problems in the list are laptop-specific. The hybrid graphics mess is an unfortunate one, though most people are well served by laptops with integrated-only video. People who want to play 3D games on laptops get the short end of the stick.


> though most people are well served by laptops with integrated-only video

You're right and it's amazing. I use Linux exclusively on my laptops and had stuck with dedicated GPUs running Nvidia binary drivers since 2009. But this summer I bit the bullet and got a new Broadwell laptop, and I honestly do not miss a single thing about the Nvidia graphics. My Intel 5500 can push well over 10 million pixels across three displays, 3D-accelerated, drawing minimal power.


Intel drivers are awesome (and made by Intel, mind you).


> The monolithic Linux kernel is just too big. What is it now, 20,000,000 lines? There's no hope of debugging that. It shows.

That makes no sense. Most of that is drivers and platforms you won't ever use. And some forms of symbolic checkers exist for Linux (maybe not as advanced as for the Windows kernel space, though, I don't know -- although even simple local heuristic checks are quite useful to run automatically, and in their ability to find real bugs they sometimes beat very complex solvers)


"Most of that are drivers and platforms you won't ever use"

That's the point -- developers can't test all possible hardware configurations and resulting driver mixes. The most often used architectures and configurations usually work pretty well out of the box, but if you have something a little more exotic, prepare to get your hands dirty. Honestly, I don't see a solution to this problem, unless some "openness" is sacrificed by signing drivers (Microsoft style).


Drivers and support for platforms should be separate from the kernel and work via a stable and well-designed API & ABI. The lack of this has been Linux's single biggest design flaw from the beginning.


Is Tanenbaum going to have the last laugh?


Tanenbaum was vindicated two decades ago, but most people have yet to get the memo. A lot are under the impression that basic software design principles apply to everything besides operating system kernels, for some reason. And the myth is highly persistent.


It's quite telling though that most microkernel proponents tend not to be kernel developers.


It is also quite telling that microkernel haters fail to acknowledge that microkernels won in the embedded space and in software systems for high-integrity deployments.


I... never said anything about these spaces, which I suspect you would agree require significantly different kernel design from a "desktop system" which is what this topic is supposedly about.


"Desktop system" is vague, but obviously u-kernels can serve as workstations, and hence desktops. CTOS was a major example of its day, if not the purest possible: https://en.wikipedia.org/wiki/Convergent_Technologies_Operat...


FWIW, CTOS was surprisingly unportable, which is hugely contradictory to the claimed advantages in this particular discussion.


Right, because most Linux proponents are kernel hackers. It's quite telling that you have no leg to stand on but to resort to hapless diversions. Or I suppose the flood of researchers who have built viable and innovative microkernel-based architectures throughout the decades are all a bunch of phonies?


"Right, because most Linux proponents are kernel hackers."

They don't have to be. All they have to do is not go around smugly suggesting they know better than kernel developers and they're ok in my books.

Incidentally, precisely how many of those research kernels have become widely used, mainstream kernels capable of high throughput?

And do you really think it has turned out that way because the whole industry is full of blind dumbasses? I think it's a far more likely proposition that they understand something you don't.


I'm not sure what fantasy world you live in where the software industry is always adopting the most technologically superior solutions by default. No industry works like this.

YMMV on mainstream (they are widely adopted, though), but: OKL4, PikeOS, QNX...

It's quite obvious you have no background on the issues and are using this as an opportunity for provocation.


"OKL4, PikeOS, QNX"

High throughput, mister, high throughput.

Realtime != high throughput. It just means deterministic throughput. FSVO deterministic.

Show me people running big farms of servers running these operating systems where even single-percentage computational overheads really matter.

(added:) The reason for this is that it costs one hell of a lot flipping your page tables and flushing your TLBs every time you have to switch to ("pass a message", whatever) to a different subservice of your kernel.

(also added:) Oh and interestingly many (most?) users of OKL4 go on to host Linux inside it because, hey, it turns out that doing all your work in a microkernel ain't always all that great. So 90% of the "kernel" work in these systems is happening in a monolithic kernel.


QNX4 is high throughput, mister.

Other contenders include eMCOS and FFMK, though those are obscure.

That said, I don't even understand the logic. HPC clusters where single-percentage overheads really matter are an extremely specialized use case, so of course COTS u-kernels might not cut it. Where's the shocker here?

Response to added: Not necessarily with message passing properly integrated with the CPU scheduler.

Response to added #2: Hosting a single-server is a valid microkernel use case. What's your problem? Isolation and separation kernels are a major research and usage interest.


"QNX4 is high throughput, mister."

Ok then, show me the server farms...

I'm not even really talking about HPC, just the massive datacentres that run everyone's lives. All for the most part running monolithic kernels. I doubt the thousands of engineers who work on such systems consider the "huge monolithic kernel" "undebuggable". And I don't see examples of microkernel OSs that are able to cut it in these circumstances.

Even in a mobile device, you don't really want to waste battery doing context switches inside the kernel.

Microkernels have their place, but believing that the world that chooses not to use them are just clearly dumbasses is bullshit dogma.


You appear to be assuming (not being a mind-reader, whether you actually are or not is of course unknown to me) that QNX would automatically be used in server farms if it was high throughput; and, since it's not visibly used there, it is not high-throughput.

(As an aside, I'll grant that even a high-throughput microkernel seems likely, to me, to have a lower throughput relative to a more tightly-coupled monolithic kernel. That's just one of the architectural trade-offs involved here.)

As I see it, there are technical (e.g. hardware drivers, precompiled proprietary binaries) and social (e.g. relative lack of QNX expertise = $$, proprietary licensing) reasons for many people to choose one of the more popular OSes, running monolithic kernels.

I can't say what's technically superior, but even if QNX was, nobody's a dumbass for choosing something else -- and I don't think the fellow you're replying to was saying so. There are, of course, reasons and trade-offs.

An OS's adoption is a social thing, and proves nothing technical about it. If it wasn't for licensing (a social problem), BSD might have taken off, and Linux been comparatively marginalized.

Just sharing my perspective here.


Entertain for a moment the idea that someone's rationale in choosing to deploy a given OS lies deeper than the 1-dimensional rubric you're suggesting, and instead may have something to do with questions like "how easy is it going to be to support this?" and other network effects.

You're getting all red in the face using some really dubious arguments to back you up here.


(response to 2:) Er, so bypassing the microkernel for the vast majority of your work is a vindication of the "microkernels are just better" line is it?


It's not bypassing the microkernel. It's using it either as a hypervisor or as a separation kernel [1].

[1] https://en.wikipedia.org/wiki/Separation_kernel


You missed Tron/ETron and the *tron variants, which are likely the most used kernels in the world.


> they are widely adopted, though

For some definition of "widely".

And feel free to stop adding personal attacks to your comments. They do not enhance the credibility of your posts.


"Widely adopted" is not a synonym for "known by the average consumer".


I know it's not. And I know about QNX, at least (the others are new to me).

And I know that you didn't claim they are mainstream, so we may be quibbling about where we draw lines around the word "widely". But...

What's the installed base of systems running QNX, say? (Throw in the others if you wish.) Estimates are acceptable, too, if you don't have hard numbers.


BlackBerry doesn't seem to give out hard numbers for installations, but they have overviews of the market and lists of particular customers per category here: http://www.qnx.com/solutions/industries/automotive/index.htm...

It's worth looking not only at how many, but at what. They're in vehicles, medical devices, industrial automation, military and telecom. Those are all areas where blunders lead to loss of lives, not just annoying downtimes. As far as infotainment and telematics are concerned, they estimated 60% of the market in 2011, so it's likely your car runs QNX.


OK, if it's in cars (even if only one CPU per car, or even only in high-end cars), then yes, that certainly is "widely used". (In terms of numbers shipped, not necessarily in terms of "design wins" - but then, Windows doesn't have that many "design wins" either.)


So now you're moving the goalposts with "design wins". Just what are the design wins of a SysV Unix clone like Linux, pray tell? It's hard not to be on the offensive when you seem to beg for it. Where did the Windows comparison come from?

The design wins, of course, should be obvious to anyone willing to do a modicum of research.


Nope, not moving the goalposts. Re-read my previous post.

To clarify: Windows is, by any definition, both "mainstream" and "widely used". Yet it has very few "design wins". Therefore, the argument that cars are "only a few design wins" cannot be used to say that QNX, say, is not widely used or mainstream, since Windows is obviously mainstream and widely used.

> It's hard not to be on the offensive when you seem to beg for it.

You need to re-calibrate your sensitivity. You seem eager to take offense at nearly everything. Very little of it is worthy of your outrage.


The "design win" comment should be read as a concession to your earlier point about the role of technological superiority in industry decisions.


Every iPhone now shipping, and most Android devices too I believe, runs its main operating system kernel as a layer on top of an L4 kernel. The L4 kernel mostly handles low-level security and the cell modem and stays out of the way except for that. Still, I think that should certainly count as widely adopted.


Correction: in Apple's case, the Secure Enclave, which runs L4, does not run on the application processor but on a separate ARM processor integrated on chip. Competitors tend to use TrustZone and hypervisor mode for this, but Apple currently uses them only for kernel patch protection rather than anything more important.

Not that that changes the core fact that Apple is shipping L4.


Your comment would imply that Javascript may not be the most technologically advanced solution for execution on remote clients. This is obviously wrong, so by implication the Software Industry DOES adopt the most technologically superior solution by default.


There are zillions of academic solutions that if implemented properly would be better than the industrial version. Academics are just notoriously bad at building real-world systems. I think this is mostly because it's a waste of time and money as far as publications are concerned.


This is basically the "everyone's a dumbass" argument.


I didn't say people in industry weren't smart. There's plenty of stuff that gets published at conferences where the industry guys are like, we did that 15 years ago.

My argument is that there's lots of great-in-theory but untested-in-practice stuff in academia, and that you can't discount something altogether just because it's untested. It's hardly fair to compare the output of a few grad students over a few years with all of the effort that goes into a major industrial product.

And anyway, the architecture of Linux originated in academia too.


Some of them are.


I don't disagree.


Marginally off-topic, but Minix 3.3 runs the (almost) complete NetBSD userland now. To use X11 you have to compile the development branch, however.

It's a real, POSIX-compliant, reliable microkernel that can run useful things now.


Which is cool and all, but the isolation is reportedly (USB driver crash) a very leaky abstraction and the entire thing is written in C. I'm cheering on Redox OS, which looks better from my very limited level of understanding.


It's also got almost no one actively developing it and very little project or organizational infrastructure to speak of.


[flagged]


Real time OSes used in high integrity systems, mobile radio stacks, car control systems, ...


Lower throughput. Almost by definition, real-time systems focus on latency at the expense of throughput.


QnX.


QNX is making a comeback, via robotics and automatic driving. All the Boston Dynamics robots ran QNX; all the hydraulic valves were coordinated by one CPU. The valve servoloop ran at 1KHz and the balance loop at 100Hz.

QNX is behind some automotive dashboards, and they're moving into automatic driving. They have some big announcement coming at CES in January.

But nobody runs QNX on the desktop any more. This year, they finally stopped supporting the self-hosted development environment. Ten years ago, you could run Firebird (pre-Firefox) and Thunderbird on a QNX desktop. But when QNX stopped offering a free version, free software development for it stopped.


I have a barebones re-implementation of QnX for 32-bit x86; I don't have enough time to clean it up or port it to the 64-bit model. It blew the doors off the competition back in the day (about two decades ago): 200K context switches per second on a 33 MHz 486. Fast enough for real-time control of all kinds of hardware, and with a seamless path from a self-hosting desktop environment to embedded hardware. I never got around to porting 'X' to it, but I did build a rudimentary window manager and some apps (terminal, calculator, some graphics demos to test the blitting software). Best demo was 250 tasks running independent graphics demos in windows without any noticeable lag or stutter. I really liked that project; some of the best code I ever wrote.


Is there source available for this or QNX?


Not for my stuff, I do have it (obviously) but it is definitely not worth sharing in the state it is in. Essentially it is a kernel, some userland device drivers and a rudimentary (but functional) network stack cobbled together from various bits and pieces. The toolchain was GCC and djgpp to bootstrap the development until it was capable of self hosting. It would need serious work (several man-months) before it could be opened up.

QnX had an open source program going for a while but it was shut down again. See: http://www.qnx.com/news/pr_2471_1.html


Did anyone here manage to snag the code before the open source program was shut down?


Even if someone did, it all depends on the terms of the license whether or not it could have been used to fork the project or to base a free version on it.

" Access to QNX source code is free, but commercial deployments of QNX Neutrino runtime components still require royalties, and commercial developers will continue to pay for QNX Momentics® development seats. However, noncommercial developers, academic faculty members, and qualified partners will be given access to QNX development tools and runtime products at no charge.

Customer and community members will also have the ability to participate in the QNX development process, similar to projects in the open source world. Through a transparent development process, software designers at QNX will publish development plans, post builds and bug fixes, and provide moderated support to the development process. They will also collaborate with customers and the QNX community, using public forums, wikis, and source code repositories."

Suggests that it was open source more in name than in fact.


I'm more curious to read it than to use it, though.


Wasn't there a patent on the message passing part of the QnX kernel?


I don't know. Frankly I don't give a rat's ass if there is or isn't, but if there is such a patent it would now be the property of RIM (or whoever gets it after RIM folds).

Message passing pre-dated QnX by a considerable time, they just did a really nice and clean implementation of it.

I'd absolutely support their copyright claims on their code; at the time, their implementation was unique. But I'd totally object to any patent claims: message passing systems had been widely used by that time, also at the kernel level. QnX may have been the first microkernel on that principle to receive wide adoption, because of the strength of the implementation.


High throughput. As I've said elsewhere, realtime != high throughput. Just deterministic throughput. Users of these systems are willing to use slightly overpowered hardware if it means hitting processing deadlines.


For the desktop, most users care about responsiveness, not throughput. That high throughput Linux kernel makes it utterly craptastic for professional audio usage, with insane amounts of latency. Of course, that has more to do with ALSA being a steaming pile than the kernel in general, but it's one example that shows throughput isn't the only thing that matters /especially/ on the desktop.


Linux users often take the idea of 'self-examination' way too far and it turns into 'self-disparagement'.

The fact is, if you vet your hardware and use a major distro (Ubuntu, OpenSUSE, Fedora) you'll wind up with a perfectly functioning Linux desktop or laptop.

If you think about it, OS X only runs on a few laptops. Linux runs perfectly on more laptops than exist for OS X. Windows runs on many laptops, more often than not quite well, though not always perfectly, despite being bundled with them.

I've been using Ubuntu on a ThinkPad T530 for several years, it just works. Couldn't be happier. Everything works BTW, function keys, fingerprint scanner, everything.

As for the Linux eco-system - major browsers work, Steam works, there's a phenomenal ecosystem around Linux if you do any sort of programming, data science, etc... I really have nothing to complain about these days.


Exactly. A non-negligible part of his rant was about proprietary graphics drivers. Free software developers can't do anything about that, and anyway, what is the point? Today, if you want to game, just use Windows; if you want a fine Linux laptop, get one with something like a Haswell with integrated graphics (or maybe Broadwell, but still wait a little for Skylake)

And anyway, if you look at what the "competition" sells while being this critical, you can probably write something at least as long. Even the latest MS flagship devices running the latest Windows 10 versions are full of bugs now -- and likewise for major PC vendors like Dell -- so GNU/Linux distributions might become attractive just because Windows devices are of terrible quality today :p


> A non negligible part of his rant was about proprietary graphic drivers. Free software developers can't do anything about that, and anyway what is the point?

Whether or not it is fixable by the community is unimportant to an end-user that just wants something that works.


I'm really waiting for the day an open source GPU with decent capabilities and a clean design for simple drivers hits the market. The reverse engineering headaches are getting absurd these days.


I thought a similar thing when I was reading it, but regardless of who is responsible for a problem, it's still a problem that should be acknowledged in such a list. Graphics on linux is a hard problem to solve cleanly. It's not anyone in particular's fault, and it's entirely reasonable when you understand the context, but it's still a problem. In context, linux does very well given the restrictions, but it's still not as buttery as the proprietary offerings.


> It's not anyone in particular's fault

If the company producing and selling the hardware is not giving the specs to their users, then it's their fault. Perhaps that's a bit too RMS for some people, but in this case I basically agree with him. It's mine, I bought it, I want to run whatever I want to on it.


> If the company producing and selling the hardware is not giving the specs to their users, then it's their fault.

AMD started releasing the low-level documentation for their GPUs in 2008, and although the FOSS drivers have benefitted stability-wise they're still lagging in API features and often offer less than half the performance of the proprietary counterparts. As far as I know we don't have a complete FOSS OpenCL 1.0 (ca. 2009) implementation for any ISA, nevermind newer versions or competitive performance.

Unfortunately GPUs are so complex that specs alone don't guarantee good drivers.


I don't think this is about self-disparagement, but about holding ourselves to a high standard. I love Linux, and have been using it without major problems for some time now (mainly Ubuntu-based distros). Especially if you are into programming, I'd say it beats MS hands down.

However, I do acknowledge that there are still many problems that bar Linux from being an operating system "for the masses" (i.e. all those people who are not computer nerds). Many small problems can be fixed with a few commands in terminal - but which grandmother/stressed office worker/gamer kid is willing to learn how to use a UNIX shell or configure fstab just to do their stuff? And there are other problems that aren't solved as easily. I help out at the Ubuntu Forums, and I see plenty of posters with problems that the combined wisdom of a few thousand experienced Linux users can't solve. (Just have a look at the "Unanswered Posts" section.)

So yes, Linux is a fantastic OS with great software available, and by all means let's keep advertising it. But let's not pretend that "it just works, right out of the box!"(TM) every time.


> But let's not pretend that "it just works, right out of the box!"

It does though. Get an Ubuntu laptop from Dell, it works. Get a Chromebook from Google and friends, it works.

Yes, install Linux on a random 5 year old laptop, and you may have problems. Ever built a 'hackintosh'? Same deal. Ever install Windows? It's a pain.

As for shell commands, Windows has that. So does OSX. Linux also has GUIs that can install packages, that can change settings. The shell is quick, but it's not the only way.

You have to compare apples to apples. And the fact is, if you install a popular distro on popular, well supported hardware, it does work. If you buy a laptop/desktop/server that comes with Linux, it works.


>Yes, install Linux on a random 5 year old laptop, and you may have problems

This is in fact the opposite of reality. Old hardware works relatively well. New hardware can take quite a while to get support because it's not a priority for vendors. GPUs especially, which the article harps on, are a very risky gamble - even when a GPU works, "works well" is misleading - it's always behind Windows performance-wise, and very often power management is inferior too.


Please don't misquote me. What I actually said was:

> But let's not pretend that "it just works, right out of the box!"(TM) every time. [Emphasis added]

Of course Linux often works. Perhaps even most times. A couple of weeks ago I reinstalled my laptop, switching from Ubuntu to elementaryOS. Took me about two hours (not counting most of the backups). On Windows, I would have needed two days. I have done an OS install dozens of times, with various versions of Windows and various Linux distros. And I find that Linux is often easier than Windows, because you can install all the software from one official repository instead of hunting through the download pages of a dozen vendors. BUT, and here comes the big but - that still doesn't mean that it always works smoothly. (Not that Windows always works smoothly, but that's not what we're talking about. We're talking about Linux's problems right now.) To claim that there are never problems is simply not true.

About shell commands: of course Windows has those. But when do you ever really have to use them? (If you are a sysadmin, perhaps, but again, that's not what we're talking about. We are just considering "normal" users.) Everything that needs to be done can be done graphically. The various Linux DEs have made a lot of progress in that area in the past few years, but don't kid yourself. There's still a lot you can't do with a GUI.

And on a final (not quite serious) note:

> You have to compare apples to apples.

If I did that, I would never get away from Mac OS X, would I? :D


It's nice to hold oneself to a high standard, but this is not what the article is about. Most issues cited are non-issues or have been fixed over the past year. It seems just like a random rant.


I spent a whole afternoon and evening reading before recommending my brother a laptop that a) was available where he happened to be, and b) would probably just work fine with Linux.

The article discusses several points that would substantially improve Linux. And it addresses your concerns several times, like here:

"There's a great chance that you, as a user, won't ever encounter any of them (if you have the right hardware, never mess with your system and use quite a limited set of software from your distro exclusively)."

Ignoring criticism and possibilities of how to improve Linux ("cause it works for me already") doesn't do any good. IMHO the article is a great write-up that could help to improve these issues in the long run.


Exactly, I've always used whatever hardware I wanted, and Linux mostly just worked.

The occasional hiccups are proprietary Nvidia drivers, and some wifi chips being too recent to have an open source driver readily available.

I use Ubuntu on a dual boot, I hardly ever boot Windows. And I'm a gamer! Steam + wine is enough for me.


I've been using Ubuntu Linux as my primary OS since 2006, and I can't disagree with many of the annoyances. Especially with regards to graphics drivers and the switch to the Unity desktop. I tried Linux Mint with the Cinnamon desktop and it is nicer, but couldn't commit to it I guess.

But every time I use Windows and Macs again, I get even more annoyed. Mainly with how slow things are. Even on a new Windows 10 desktop or laptop, it's not uncommon to have to wait minutes for things to settle down after booting and logging in, or waiting while updates are installed during startup or shutdown. And on OS X, the spinning beachball was one of the main reasons why I stopped using Macs for the most part in 2002 or so. I figured it was just a byproduct of OS X still being early in development and computers not being fast enough or having enough RAM. But no, every time I try a brand new Mac, it still pops up, especially when trying to type in a URL or something like that. The bouncing in the dock and not knowing if an app is running or shut down is annoying, too.

But as long as you keep your stuff backed up or in the cloud (like Google Drive, Google Music, etc.), and stick mostly to stuff that works cross-platform (Chrome browser, LibreOffice, cross-platform IDEs, etc.), it's very painless to switch between OSes or devices or to wipe and reinstall things. Even Microsoft Word and Excel work in the browser nowadays, though I still usually just stick with Google Docs.


> it's not uncommon to have to wait minutes for things to settle down after booting and logging in

This is no longer my experience with Windows on an SSD. I'm always a bit shocked when I reboot and I'm back at my desktop in under 30 seconds.

On a machine without an SSD, I'm annoyed at how slow everything is -- not just booting.


I've got a windows 10 machine with a pretty fast NVMe SSD. It takes 10 seconds to get to the login screen, and another three minutes to load up all of the services that are set to run on login. I'm convinced that the NTFS driver must be a nightmare of blocking I/O.


That's definitely not the norm. On both my Windows 10 machines the desktop becomes usable within seconds after login. Might be worth having a look at what's taking so long on your machine...

As far as I remember there is even a Microsoft tool that highlights startup jobs that are slow to run, isn't there?


Task manager shows that.


I have to agree with others who point out that this is not at all typical. I have two SSD-backed Windows 10 machines (one at home, one at work), both with much less fancy SATA SSDs than yours. Neither takes more than 20 seconds to go from powered off to fully usable (unless I mistype my password a few times).

I'd check my autoruns if I were you.


As others have pointed out, that's not normal. I have a huge number of services, not only Windows development related services but also MySQL and multiple instances of Apache and it literally has no noticeable impact.

It might be worth turning off any applications that launch on login that you don't need. This is now easily accessible from the Task Manager, which even tells you how much startup impact each app has, so you can find out which ones might be causing you issues.


There must be something wrong with your installation, or you have an insane number of services. I have around 16 auto-starting background programs, including MS SQL Server, and it's about 5s to the login screen and another 10-20s until Chrome is running and I can start doing stuff.


That's actually the reason Windows is and will stay my main desktop system. Formerly Windows 7, now an "unfucked" Windows 10 Enterprise. It's just smoother than OS X or the Linux desktop, and I always test some Linux distros when I get new hardware. Last time, in November, I upgraded to an i7-6700K, Titan X, and a Samsung 950 Pro 1TB. Of course it's necessary to always have at least one Linux-based server VM running, which I use PuTTY to connect to. That's the desktop setup I've been using for years.

IMHO X11 needs to be replaced as fast as possible.


Linux's biggest problem is perpetual rewrites. X11 is pretty much the only subsystem that hasn't suffered a backwards-incompatible rewrite in the past 10 years.

X11 could use some improvement, but I doubt a new system will address the core problems, which are basically the result of hardware drivers being written for Windows with Linux as an afterthought.


I agree; thankfully Wayland is well on its way. GDM uses it by default as of 3.16, and the Gnome DE just needs a --session=gnome-wayland parameter.

It has actually gotten to the point where it's stable enough for daily use now.
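
If you want to try it by hand, this is roughly how to start it from a VT (a sketch from memory, assuming GNOME 3.16+; normally you'd just pick "GNOME on Wayland" at the GDM login screen):

    gnome-session --session=gnome-wayland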


> It has actually gotten to the point where it's stable enough for daily use now.

Really? What about X11 compatibility?


Really recommend Fedora 23 - login using Wayland. You will be truly surprised. FYI - you can try it out without installing (as a livecd)


I was surprised by Fedora 23 when it asked me to reboot in order to apply patches. And these were not kernel updates. Huge step backwards.


I would settle for a high resolution tty that works without much fiddling.


> Former Windows 7, now an "unfucked" Windows 10 Enterprise.

Congratulations ;-)

http://www.networkworld.com/article/2956574/microsoft-subnet...

> IMHO X11 needs to be replaced as fast as possible.

Why? If you don't like X11 you can switch to Wayland right now. I have been content with X11 for many years.


"IMHO X11 needs to be replaced as fast as possible."

X11 has very little to do with any of your complaints.


On what machine, btw?

I have a minimal system, but it boots in 4 seconds from BIOS to ion3 (arch). It's funny when the BIOS takes longer than booting. #coreboot
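
For anyone curious where their own boot time goes, systemd ships a profiler for exactly this (assuming a systemd-based distro, which Arch is):

    systemd-analyze          # total time, split into firmware/loader/kernel/userspace
    systemd-analyze blame    # per-unit startup cost, slowest first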


What hardware are you running coreboot on?


I'm not (at least for now, I have a spare compatible thinkpad), I was just implying that the BIOS is now the bottleneck.


Oh, I see. I also ran Arch (on MBA 2012, perfect compatibility). It's wicked fast to boot using the kernel as an EFI payload (efistub).
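
For reference, registering the kernel as an EFI boot entry looks roughly like this (a sketch of the Arch wiki's EFISTUB setup; the disk, partition, and root= values below are placeholders for my machine, adjust to yours):

    efibootmgr --create --disk /dev/sda --part 1 \
        --label "Arch Linux" --loader /vmlinuz-linux \
        --unicode 'root=/dev/sda2 rw initrd=\initramfs-linux.img'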


> I'm always a bit shocked when I reboot and I'm back at my desktop in under 30 seconds.

30 seconds? That's still a lot of time. Debian Linux starts within 7s on a dual core 2GHz with SSD.


It seems to me that Linux is slightly faster on an SSD and a hell of a lot faster on an HDD than Windows 10 when it first boots up. I have the exact same problems the parent describes.


FWIW -- I have my own complaints about OS X (a lot of them), but I can't remember the last time I got beachballed, certainly not something like typing a URL.


The entry-level Mac mini is terrible and beachballs just surfing the web. I am surprised they sell it. (Can't upgrade the memory either.)


What would you consider entry level? I use a Mac mini from 2012 (i7 with 8GB) every day and I'm not seeing any beach balls unless Flash crashes some web page.


http://www.apple.com/shop/buy-mac/mac-mini?product=MGEM2LL/A...

Don't add any extras... it's simply not usable on the net.

Hardware

    1.4GHz Dual-Core Intel Core i5 (Turbo Boost up to 2.7GHz)
    4GB 1600MHz LPDDR3 SDRAM
    500GB Serial ATA Drive @ 5400 rpm
    Intel HD Graphics 5000
    User's Guide (English)
    Accessory Kit


Maybe you have a faulty unit or are driving a very high resolution display. Otherwise at first glance that seems powerful enough to browse the net.


"The net" is a very varied place these days. It would be helpful to mention how many tabs you're using, the sites you're visiting, etc.

My work computer is a 3GHz i7 Mac Mini bought in early 2015, with 16GB ram, and I still have times when news sites freeze and won't scroll because someone has done something stupid when coding the site.

But my system uptime is 89 days and I get to the point where chrome shows a red bar because it hasn't been updated recently. There are a lot of irritating little bugs, but the only things that seem unstable are individual tabs and MyEclipse.


The memory's a little low but yeah that's an otherwise fine machine.


Own one, can confirm that OSX is unusable on a 5400rpm drive. Replacing it with any SSD makes it decent.


Not usable on the net?? Shit, this is close to the specs of my compiling machine from a few years ago.


I think the issue is usually the really poor quality HD supplied by Apple on low-end machines.


It's slow, but not poor quality.


I'm not sure /which/ entry level mac mini you're talking about, but the current entry level ($500) is neck & neck with the $1000 model of 2012[0]. I wouldn't recommend the mac mini to someone trying to squeeze every last penny out of their computing dollar, but it's not a rip-off either.

[0] http://cpuboss.com/cpus/Intel-Core-i7-3615QM-vs-Intel-Core-i...


Sounds like your hard drive is faulty: I recommend you hold down the D key or the Command and D keys while the Mac mini is booting to run the Apple Hardware Test. (One can replace the hard drive on that mini.)


I have my things backed up on the cloud and I used linux for awhile for my development system. Apparently for MacBooks there is something screwed up with ACPI so that it won't suspend correctly. Also, every time I would run out of battery the system would have problems booting up. Eventually I went back to Mac OS X after running the computer out of battery and then having it unable to start.


My MacBook Pro is easily the fastest, most responsive system I have ever used.


When visiting this blog post on my Android phone, the ads (or something) redirect me to a page with popups telling me my phone has a virus and encouraging me to install an app to 'clean' it, using Google branding on some dodgy domain. I seem to get a different one each time.

Reported it at https://www.google.com/safebrowsing/report_phish/ but mentioning here so that others don't fall victim to it.


Easy to see for yourself that this happens by using user agent spoofing in your browser's dev tools: http://i.imgur.com/vlLV8PW.png

Hard to track what exactly is causing this because of the ~200 requests this website makes before redirecting your browser.

The article itself links archive.org's copy of the article which I would recommend using over the original website since it appears to be free of malicious redirects:

http://web.archive.org/web/20151230152933/http://linuxfonts....


The site is now blacklisted for Chrome users. I've emailed the author letting him know about this issue.


http://sitereview.bluecoat.com/sitereview.jsp#/?search=http%...

The page you want reviewed is http://linuxfonts.narod.ru/

This page is currently categorized as Suspicious

Last Time Rated/Reviewed: > 7 days

This page has a risk level of High


AV on Windows flagged this site as being malicious.


Yep same here.


I stopped using Linux as my primary desktop OS around 2012. Until then I was an Arch user with my own desktop environment built on StumpWM and a hodge-podge of hand-selected tools. There was no Gnome or KDE in my setup. I liked it quite a bit.

I used Ubuntu on my laptops since I wanted to spend less time administering drivers and arcane configuration formats.

This is a good list.

I just got tired of the configuration formats, crappy drivers, inconsistencies, dependencies... I was irritated at how easy it was for Apple users to plug in a projector and have it just work. I was irritated by every update to some random library that would cause a sub-system to stop working. I hated having to spend any amount of time administering my desktop environment. To me it should just work and the less time I have to spend trawling forums, logs, and restarting processes to find the correct incantations of dependencies and configuration variables the better.

I've stuck with my MacBook Pro Retina, despite my early trepidation about a GUI-driven proprietary OS, because I've spent probably less than an hour in the last 4 years administering it. It's still snappy and works as well as it did on day one (also the hardware is nothing short of amazing). The only thing that sucks at this point is that the OpenGL drivers Apple ships are woefully out of date, and I'm thinking of jumping to Windows unless something changes (damnit I wants me compute shaders).

I still use Linux every single day... just in a VM, container, or on some server.


My counterpoint: as a full-time Mageia user on my laptop/workstation since 2011, my linux configuration knowledge has atrophied to the point of near-uselessness.

Mageia is the first and only distribution I've ever tried that, on a variety of hardware, Just Works. Every hardware feature works out of the box, and with as little surprise as you'd expect (e.g. plugging in a monitor, using wifi out at a coffee shop, plugging in a printer). I've even done the distribution upgrades (e.g. version 3 to 4), which I was always scared of, and somehow those too just work. I literally cannot remember the last time I hand-edited a config file (not counting Apache configs on a Linode VPS with Ubuntu Server).

I've had a chance to use recent versions of OS X, and I even experienced more crashing there (not a lot, mind, but my Linux laptop never crashes) than on this stock HP with Mageia.

Happy that you've found a solution that works for you, but I'm also extremely happy that I get to keep using Linux without spending one more second thinking about hand-editing configs! :)


We're soon in 2016, 4 years later. You might want to try again (with Arch in particular).

I've zero issues with Plasma 5. I plug my laptop into the projector and it just works ;-) I used to have that problem as well, like everyone else. But honestly it's been a thing of the past for 2 years or so.

Also, using Plasma ensures I don't have to fiddle with configuration. It's not as leet as StumpWM or many others, but it's certainly more customizable, and more easily so, than OS X - while still working out of the box, just like OS X.


Oh... and integration with my devices too. I use hand-off to take calls from my phone on my MBPr all the time. I also use the Keynote app on my phone which controls the application on my laptop. The sync features I don't use as much only because I trust their cloud about as far as I can throw it.

I don't remember Bluetooth or any device integration working well, or at all, on any Linux distro I've tried. Maybe that has changed, but that kind of stuff is nice!


I have a friend who went through much the same process. I converted him to full-time Linux usage, and he stuck with it for a good 5-6 years, but eventually went back to Windows as his host OS so that he could conveniently play games, connect projectors, modify photos and video, and do all of the other basic things that only Windows and OS X users get to do reliably. He still does all of his development and sysadmin work from a Linux VM that runs on his Windows host.

I've considered doing this myself, especially with WINE taking eons to implement adequate DX11 support (most games released in the last 3 years won't work), but it's a plunge I haven't been able to make yet. I think I'm going to just set a Windows box right next to my Linux workstation and move the mouse, kb, and monitors over when I want to do something Windowsy.

The ideal would be getting hardware that supports raw GPU passthrough, getting a spare GPU, and doing all of the Windows stuff in a VM - this lets you keep 90-95% of your full GPU performance and have the card controlled by the Windows system. However, I found that even the full technical manuals rarely mention whether the hardware has an IOMMU, the required capability (both the processor and the motherboard must support it), and by far the vast majority of hardware doesn't.
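
For what it's worth, you can at least check for an IOMMU from a running system instead of digging through manuals (a quick sketch; the exact dmesg strings vary by kernel version):

    # look for VT-d (DMAR) / AMD-Vi messages at boot
    dmesg | grep -i -e DMAR -e IOMMU
    # the kernel also has to be booted with intel_iommu=on (or amd_iommu=on)
    cat /proc/cmdline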

I have a dual-boot partition now, but I use it probably about once every 9 months, and mostly just to poke around for a second and see if it has spontaneously combusted whilst unsupervised before rebooting back into my workstation OS. It's simply too much hassle to have my primary workstation offline for hours at a time while I game, because it hosts many services that my house relies on. I also have a Windows VM that I do most of my photo editing in.


That's about the date I went back to Linux.

I really hated that the old KDE4 was just a clone of WinXP, and that Gnome until 2010 was just a clone of OS X.

Ubuntu's Unity is the first Linux desktop that's proud to be itself, and it shows.


from StumpWM to OSX? ouch


I was not happy at first. :)


Yep. I'm glad that someone compiles a list like this every year so I'm not tempted to lose another week trying Linux again.


As a daily user of Linux on the desktop for over 15 years, I can appreciate many of these complaints, but despite its flaws Linux on the desktop is a pleasure to use and is better than any other option. This is a nice resource for kernel developers and OSS contributors who wish to make a difference and solve tough problems.


Same here: started using Linux in 2000. Back then hardware support was a really big issue, software was missing, graphical environments weren't completely ready for normal people, etc. I think things really started to change after Ubuntu, and I would say that Linux got "ready for desktop" around 2008-9. My parents for instance have been using Ubuntu for many years (they also have a Windows 7 laptop that is completely owned by malware) and they are very happy with it. Things changed a lot, but yeah, Linux isn't perfect yet, and I agree with some points (for instance the DPI thing). The only way that Linux could be massively used on the desktop is by having a company like Microsoft making deals with every single manufacturer in the world for decades.


This has been my experience as well. I started using Linux some time around 1993-1995. Linux was inferior to FreeBSD in many ways back then (and for the following few years). Things got better and better and by 2008 it was a lot better on the desktop, and I think Ubuntu gets a lot of credit for that. It still had some trouble with wireless drivers and that sort of thing, but all these problems started clearing up over time.


Same here. I'm so used to Ubuntu that Windows annoys me tremendously, which goes to show that a huge part of the user experience is familiarity with the environment.

In my experience some things work better on Ubuntu than on the last Windows version I used, Windows 7. Detection of wireless HP printers, for example. Some things are exactly the same. Finally, some things work slightly worse: drivers for 3D gaming are still a bit worse on Ubuntu (unless you luck out and find the one driver that runs wonderfully and doesn't require any magical incantations), and configuring everything to work just right with some games can still be a pain. Less demanding games are just fine, and I use my Linux laptop a lot for this (heavy user of GOG.com, Humble Bundle and Steam here!).

The jury is still out on OS X for me. Using my wife's MacBook drives me nuts, mostly due to lack of familiarity. Some things are just as bizarre as with any Linux/Windows computer; for example, the other day I was advised to hard-reset the MacBook because it wouldn't recognize any USB device... if this nice bit of WTF-ery had happened with Linux, the response would have been "what did you expect? Nothing 'just works' with Linux". But I guess computers will be computers regardless of the OS...


Agreed. I've been using Linux as my only desktop OS for over a decade, and if you pick the right hardware it's just great. Usually it takes a bit to get it working just right, but once you do that, a well configured Linux box with Gnome is a superb desktop environment.


Gotta say I agree with the basic gist (certainly haven't read this very long, excellently-detailed article yet), and I've run Linux as my primary OS on various ultrabooks for the past several years.

Just skimming, I found this bit on-point and amusing:

> It's worth noting that the most vocal participants of the Open Source community are extremely bitchy and overly idealistic people peremptorily requiring everything to be open source and free or it has no right to exist at all in Linux.


I must not be a vocal participant of Linux, then. Adobe! Shut up and take my money, already!


I have one more which this article doesn't mention: Bluetooth. It's an extremely fragile house of cards (as far as I can figure, there are a few kernel modules, dbus, a bluetooth daemon and pulseaudio involved) and every upgrade you roll is extremely risky. Currently my BT works, but after a day or so of uptime it will simply stop working, and nothing short of a reboot helps. (More here: https://bbs.archlinux.org/viewtopic.php?id=206032)

And yes, I have been running Linux since before there was 1.0. And no, I don't like it. But the alternatives are even worse IMO.


I constantly battle Bluetooth issues on every platform. It sucks and needs to die.


This is true even on my Mac with wireless Apple accessories. Pairing issues galore if I have to change the batteries.


The basic thing is that Bluetooth didn't start on a multi-user OS. It started on Ericsson mobile phones.

And there it was at first basically a way to do handsfree without a wire.

From that you have OBEX, taken from IrDA, added to handle various data transfers, and a concept of profiles that are half in hardware, half in software.

In essence the very design jumps back and forth between kernel/hardware and user space.

Still, the place of failure I most often encounter is something getting stuck in dbus. Meaning that I can restart all the daemons, but unless I restart dbus, shit-all changes once a failure happens. But restarting dbus at best kills my desktop, at worst reboots the computer (hello systemd).


Bluetooth is a complete mess. Every new bluez version breaks backwards compatibility (all applications depending on it break!) and removes features.


Perhaps one of the major problems with Bluetooth is that it requires such a deep stack of confusing software, which makes it hard to find logs when things are acting up.

For me, it turned out one piece of my stack had been set up by debian by default (or perhaps KDE's bluedevil?) to try and put uploaded files into a directory I didn't have permissions on. Leading to a near-silent error when trying to push files. Go figure.


I solved it and all of my wireless-networking issues by installing Intel wireless cards in all of my computers. Broadcom and Atheros have always given me problems.


Does Intel make USB bluetooth sticks? If my laptop didn't come with a combo miniPCIe card I doubt the necessary antenna is there. In a desktop, I looked for mPCIe - PCIe x1 adapters with lots of antennas but I can't really find any in low profile. So... how did you do that?


For desktop, perhaps the Gigabyte GC-WB867D-I?

Specs: http://www.gigabyte.com/products/product-page.aspx?pid=5157

Price: https://pcpartpicker.com/part/gigabyte-wireless-network-card...

It's basically an Intel 7260 Wireless-AC/Bluetooth chip plus a PCIe adapter.


Intel has a desktop version and it comes with both brackets! http://www.intel.com/content/www/us/en/wireless-products/dua... http://www.amazon.com/Intel-Wireless-AC-Desktop-Network-7260... Also, I will be adding to a somewhat old HP SFF so I am using the single PCI slot to add an additional USB header (Rosewill RC-100). PCI speed is plenty for USB 2.0.


Thanks! However, http://www.amazon.com/forum/-/Tx18X5FNV15HHU8/ref=ask_dp_dpm... it looks like this does not come with a low-profile bracket. And, are two antennas... enough? My 6300 in my laptop alone has three.


I don't understand - I never see a mention of the biggest annoyance to developers on Linux: consistent copy-paste. It is a cognitive exercise to copy from the terminal or paste to the browser... or (the horror) copy from the terminal and paste into vim.

It does not help that this works beautifully on the Mac.

Is this not an annoyance to anyone else... and more importantly, considering all distros are now using libinput, can I compile my own libinput that will universally copy-paste using win+c?


Is the issue specific to terminal or are you seeing it somewhere else?

The issue is that the default terminal behavior of ctrl+c sends a SIGINT to the running application[1], and so terminal programs override it with shift+ctrl+c, and likely add the shift for the others for consistency.

I use "Terminator" as my terminal app - which allows you to set your keybindings - so I replace the "shift" in copy / paste and it's worked great for me for years. Unfortunately it doesn't allow you to override the SIG* keys, so it can be an issue when using `watch`, which removes your selection when it updates and then if you don't tap ctrl+x in time, it will send SIGINT the watch command, which is the exact opposite if what you want at that moment.

As for VIM, being sure you're in insert mode is essential no matter what OS you're using.

Is there anywhere else besides the terminal where copy / paste are inconsistent? I haven't seen any that I recall in quite a few years of Ubuntu on my desktop, laptop, media PCs, etc.

1: https://en.wikipedia.org/wiki/Control-C


No, please - I'm not talking about ctrl-C. I'm talking about CUA: https://en.wikipedia.org/wiki/IBM_Common_User_Access

Even on OS X, the bindings that are used are cmd-c and cmd-v. It is universal - if you have never used the terminal, vim and Firefox on a Mac... I really urge you to do that.

You'll see what I mean.


I see, thank you for clarifying. Having used all versions of Windows up until 8[1] and 8+ years of ubuntu, I'm surprised I never realized this standard existed. It makes perfect sense that it does - I just didn't know it. I agree that it's unfortunate it's not as well supported on Linux as it should be.

Coming from the other side of things, I had a lot of trouble when I was lent a brand new Macbook Pro for travel during my last job. It seemed some things worked using cmd and others using ctrl. I don't remember the specifics, as I didn't use it often enough, except that I'd find myself mashing keys on occasion trying to figure out the right combination.

It was far worse when I dual booted with ubuntu. The keys made no sense to me there either. Finally, I replaced OSX completely and the keys were "normal" again (ctrl+* for everything).

1: Also lots of DOS, but I don't even remember what versions. I had managed lots of desktops and servers running DOS for a few years in IT a couple lifetimes ago.


You've got the causality reversed. The Apple Lisa introduced the command-Z/X/C/V keyboard equivalents for undo/cut/copy/paste, and they were kept with the Macintosh and Apple IIgs.

IBM CUA and Microsoft Windows adopted them afterwards, adapted to the PC keyboard; I think Windows only adopted them with Windows 95 and NT4.


No, they were already there in Windows 3.x


> Unfortunately it doesn't allow you to override the SIG* keys

You should be able to set these with `stty` (they're not actually a function of the terminal but of the tty, a detail no one should have to know).
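
For example (both of these are standard stty, not terminal-emulator specific):

    stty intr '^G'    # move the interrupt character from ctrl-C to ctrl-G
    stty intr undef   # or disable it entirely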


Highlight to copy, middle-click to paste. Works every time for me.

Obviously, if you want to paste into vim you have to be in insert mode.


Technically vim exposes the x11 clipboards (plural) as registers - I never use registers, but I think

  "+p
[Ed: for those not familiar with vim: " can be thought of as 'with/from/into register named:', + is the register and p is paste. Similarly, "+yw is 'yank/copy current word into register named +'.]

should paste the X11 selection. I don't normally use gvim/vim as an X11 application, but it looks like * (star) is typically the system clipboard:

http://vim.wikia.com/wiki/Accessing_the_system_clipboard
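
If you'd rather not think about registers at all, one vimrc line makes the default yank/paste register track the CLIPBOARD selection (needs a vim built with +clipboard; unnamedplus was added around 7.3.74):

    " in ~/.vimrc
    set clipboard=unnamedplus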


Why does it work on OS X then? Is there something special happening at the OS level there?


Afaik ctrl-v works in gvim? (I think it's just bound to "+p or similar.)


Only with :behave mswin (the default on Windows); otherwise ctrl-v starts blockwise visual mode.


Yes, except when you have to select something in between the time you copy & paste, e.g. change focus to the URL bar of a browser. Then ctrl-C is your friend, and works everywhere except the terminal.

But the terminal is a special case when it comes to copy/paste and always will be a little weird on any platform. Especially when it comes to line-wrapping in full-terminal applications like vi.


Pretty much, never had a problem with it.


My laptop does not have a middle button, so I have to do a weird left-right button click. Very inconvenient for me.

I truly envy the OSX people just on this one aspect.


shift-insert works in X if you can't middle click. In gvim too, out-of-the-box if you're in insert mode.


I was HOPING for that. It's part of IBM CUA. Unfortunately it doesn't work everywhere... especially the terminal (unless you remap keybindings).


Oh, this was an annoyance until I installed a clipboard manager (parcellite in my case) which automatically syncs the different clipboards.
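
For the curious, what these managers automate is essentially this one-liner (assuming xsel is installed):

    # copy the PRIMARY selection (highlight) into the CLIPBOARD (ctrl-V)
    xsel -p -o | xsel -b -i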


But how do you paste? I mean, I have all sorts of keymaps to enable paste in vim using CUA (shift-insert), but try explaining that to a first-time user.

It's the small things that OS X does right.


Why would you need keymaps? As far as I know, the terminal emulator (urxvt in my case) takes care of pasting the text into Vim. I never had to configure anything to use shift-insert or the wheel button to paste.


ahh.. I see.

s/vim/gvim/g


Why would a "first time user" use a program that has a user interface designed in 1976? Why would a "first time user" expect this program to obey interface guidelines that did not exist until 11 years after that user interface was designed?


ctrl-shift-C on the terminal, ctrl-V on everything else. How is this a significant annoyance to you?


Tried double-tapping in the article text to get it to reflow, and got one of those bogus popups claiming my phone is infected with a virus. Thanks for trying to make my new year interesting. :-(


The site is now blocked by Google Safe Browsing as a result of these complaints. The author has been emailed asking that he correct this behavior in order to be removed from the blacklist.


This happens to me as soon as I enter the website on my mobile: I get forwarded to one of those websites. Although I'm not sure why I'd want to read yet another person complaining about Linux anyway, when I find it is excellent for certain purposes.


Me too. I can't read the article at all because of it.


Flagged the link. But surely I'm not the first. How does this malicious post stay up for four hours?


Yeah, is this considered acceptable in today's world? I go to your site and get attacked by your ad service?


Me too: my response was a) f@#$ this article/website, b) f@#$ Chrome for allowing websites access to the phone's vibration and message pop-up window, and c) close Chrome.


Mine was 'You were selected! Click OK to claim blah blah!'


Happened to me using Hacker News 2 on Android, which made the "Malware Warning" popup show over the HN comments. Tried a few different ways to actually read the article, since the conversation was interesting, but in the end, just gave up and figured I'd read it later on my desktop.


The site auto-redirected to malware for me. Didn't have to click anything.


> X.org 2D acceleration technologies and APIs aren't as mature and fast as Direct2D and DirectWrite in Windows.

This complaint is confused. X is the wrong place for that stuff (the failure of XRender to live up to expectations being a testament to this). The windowing server should be multiplexing GPU buffers and that's it.

Direct2D and DirectWrite are user mode, client side libraries on Windows. (Maybe the author has them confused with the legacy GDI, which lives in the kernel?)

What this should say is that Skia-GL/Ganesh and Cairo-GL, which are the open source Direct2D competitors, are not at performance parity with Direct2D. I've heard this in the past, though I don't know how accurate it is anymore. Thanks to the work of Google, which depends on Skia-GL for Chrome and Android, it's made rapid progress lately. In fact, in my experience, Skia-GL is basically at performance parity with CG::OGL (i.e. the Mac/iOS 2D rendering backend)—so if the Mac is your benchmark, Linux has caught up there, if you use the latest Skia and configure it properly. Very few Linux desktop apps use Skia-GL in practice, though, which is a separate, and unfortunate, issue.

(Finally, I should mention that Direct2D and its competitors constitute a really low bar if you compare to the actual state of the art in GPU vector graphics, which is stuff like Scaleform.)


Oh, here's another I noticed:

> Year 2015 welcomed us with 134 vulnerabilities in one package alone: WebKitGTK+ WSA-2015-0002. More eyes, less vulnerabilities you say, right?

Glancing through the list that was linked to, most of these vulnerabilities affected Chrome on Windows/Mac and/or Safari on Mac too. It's not fair to use general vulnerabilities in widely used Web browser engines as an indictment of Linux specifically. Nor is "look at the number of vulnerabilities in Web browser engines!" a particularly good proxy for anything other than how popular, and security-critical, Web browser engines are. (Some of the other security criticisms, for example of X, seem fair though—e.g. DRI2 is really bad.)


The website is hard to read on mobile because it is wider than my screen. A bit unfortunate.

Regarding VA-API: since gstreamer-vaapi 0.7.0, Totem finally works with VA-API. Before, only mpv worked. IMO if you used mplayer you'd better just switch to mpv. It still doesn't work perfectly in VLC, though AFAIK VA-API support should be built in. In my experience, filing bugs was enough to get improvements. Once it works well enough, distributions should install it by default. IMO it's pretty reasonable to achieve "VA-API installed by default" for major distributions during 2016, provided enough people put their time in. I heard that gstreamer-vaapi will be merged into gstreamer itself, so that'll help a lot as well.
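
If you want to check whether VA-API actually works on your box, this is the quickest test I know of (assuming libva-utils and mpv are installed; the file name is just a placeholder):

    vainfo                        # shows the loaded driver and supported profiles
    mpv --hwdec=vaapi video.mkv   # decode through VA-API, watch the terminal output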

Regarding laptop support: Canonical seems to do a lot in this area. Testing Ubuntu and adding (upstreaming) workarounds/quirks to e.g. the kernel.

Some problematic items I wasn't aware of (good to have this list), e.g. font anti-aliasing. Stuff like that is why freedesktop.org was started: to have a way to agree on such things.

At least for some of the items on this list I have the idea that I can help to improve things. Some items seem impossible. Still, just a little bit of effort can help a lot. Often I spend less than 30 min to get improvements (though mostly thanks to actual paid developers putting in their effort).

Edit: The site should remove the swearing. You can be explicit in your disagreement without needing to swear.


Up until fairly recently I had a number of issues with VLC and vaapi/vdpau, but now it all seems to work well. I even have working accelerated video in Chromium by removing the GPU blacklist, and it works fine. The only thing missing now on the video front for Linux is good encoder support, but I can get that right now with ffmpeg - it's just missing in a lot of capture software.

Though we really should all converge on OpenMAX at some point. Like, all encode/decode with OpenMAX in the future. It just makes bloody sense.


To me this reads as "I can't use hardware xyz on linux". Well, that's no surprise. There's "wxy" hardware that you can't use with Windows or OS X as well.

Today Linux supports the same amount of (or more) hardware than Windows does. Hardware support is not a reason to use or not use Linux. The software ecosystem is much more important - not to mention the freedom aspect...


I often see people proposing others install Linux without actually considering hardware, and that easily and quickly loses potential adopters.

No, Linux does not support every piece of computer hardware ever made. You cannot blame Linux for that: if, as in the case of Broadcom, the company does not want to provide any documentation, support, drivers, or anything whatsoever, there is nothing you can practically expect this community to do about it. It is insane to think that if a hardware vendor won't support their own products, it's somehow my job - by virtue of telling you you should use Linux - to reverse engineer drivers for everything, and somehow magically break the signed-firmware bullshit on a lot of modern hardware. In the same way your 1998 Lexmark printer probably doesn't work at all with Windows 8+, <insert arbitrary random thing from a decade ago> won't get support in Linux if it isn't already there.

But I also blame the advocates who sell false promises about supreme compatibility - it's an operating system. It supports a lot of hardware. A lot of companies support it. That does not mean all companies support it, or that all computers can work with it. But there is a difference between not supporting any hardware in a product class and supporting enough to be functional, and Linux is absolutely functional. (For example, I have great experience with Epson XP-XXX printers and always buy them for Linux clients, because they work out of the box with gutenprint - scanning, printing, and ink levels on CUPS - and do rotation and double-sided printing properly, no problem. And I use Atheros AR9462 wifi cards everywhere, because they have great bluetooth + wifi that are stable as hell out of the box without configuration.)


No, there's no consumer-level desktop hardware you can't use on either Windows or OS X or both. That's false.

Hardware support is indeed a reason not to use Linux, but the question is who needs to pick up the slack. With other operating systems, manufacturers put (more or less) effort into writing device drivers that are functional. With Linux, they really do not. It's a large and moving target, and the market value of doing so is not clear to them.


Really? There's plenty of old hardware which no longer works in recent Windows versions. It's not in a manufacturer's interest to update a driver for recent OS versions. In Linux, drivers usually remain for a long time.


Example: Sony VAIO laptops

http://esupport.sony.com/LA/perl/os10upgrade.pl?stage1=24&st...

"IMPORTANT:

Drivers for other hardware components such as Network cards, etc..., may be available on the manufacturer's web site. Sony software applications that originally shipped with the computer may not work after installing Windows 10. Sony cannot guarantee your system capability after installing Windows 10."

Had a friend with this. Sony sold the division to some other corporation. Which leaves Win10-qualified drivers for such optional things as, oh, WIRELESS NETWORKING in limbo after a Win10 upgrade.

Sure, you or I would pull the wifi card model, go to the manufacturer's site, and use the drivers from there. But Joe, Sally, Bob, or Jane public aren't going to get past "Wifi stopped working, and it wasn't on the update screen".


Yeah. GP's point was overstated. But there's a weaker claim that is just as important and, I think, indisputably true: Windows supports virtually all hardware that is popular among desktop users. Or, even better, Windows supports far more hardware than desktop Linux, especially if you weight hardware by popularity.

It's understandable, of course--Windows has a greater consumer installed base resulting in greater manufacturer interest in supporting the platform. For Linux desktop users, unfortunately, the dynamic is often the opposite: the manufacturer has no interest in supporting the platform, and users/volunteer developers have to do all the work...which the manufacturer may then break at will.

But that doesn't mean that hardware support is not still a major problem for Linux desktop adoption.


Whether there's continuing support for a product is a very different issue. At some point, all consumer hardware was functional in a version of Windows or OS X, since it was released precisely with support for one or both of those OSs. Yes, updates break things, but that is a very different problem than the one facing Linux. The problem facing Linux is that manufacturers are not supporting (or not seeking with much vigor to support) any Linux distribution upon initial release of the hardware.


It is a very different issue, but it can work very much in favor of GNU/Linux distributions: when they work on a given piece of hardware, you are far more likely to be able to run version N+1 of said distribution than Windows N+1. Now, Windows 10 might bring the best of both worlds to the Windows world, but for now it does not work all that well.


Sure, there's lots of old hardware that the manufacturer no longer supports. That, to me, seems like a very different problem, as the older hardware /was/ supported by an older version of windows. It's true that this traps you in an un-patched OS, but (as pointed out many other times in this thread) patching Linux components also risks creating hardware incompatibilities.


Probably, but I'd also wager that the majority of those devices use physical connections that machines capable of running recent Windows versions no longer have. E.g. DB-25 ports.

I will also wager that the only real "wall" preventing old hardware from working was the removal of 16-bit support from x64 versions of Windows.


> No, there's no consumer-level desktop hardware you can't use on either Windows or OS X or both. That's false.

Oh, I can show you a multitude of printers, scanners, and TV tuners that do not work with Windows or OS X. Or they work with exactly one Windows version and that's it.


Maybe old printers, old scanners and old tv tuners?

The reality is that when a new laptop is released, Windows will support everything it has from day one. If that's because of Microsoft's or the manufacturer's efforts, that doesn't matter for regular users.

On Linux, you buy the laptop and wait 1-2 years for bugs to be ironed out or even for things to barely work for the first time. Afterwards, Linux will continue to support it for a very long time, while Windows may drop support after 2-3 versions because the manufacturer has no interest in upgrading drivers for the newest Windows releases. Especially for something that is not selling anymore.

Most people I know in the open source world barely use any fancy features like TV tuners, fingerprint scanners, touchscreens, or even printers. From reading some comments here, these are usually the people saying "everything is fine". I do think everything is fine for open source developers, but not so much for regular users. And if the focus can be on the latter, then I guess everybody will benefit at the same time.

EDIT: typing this from Fedora 23 on a laptop with an NVIDIA card that is disabled, because I got tired of bugs in nouveau and the constant hassle of the GPU drivers breaking whenever the kernel updated. I'm happy with the integrated graphics though, so my next laptop won't have a dGPU, probably.


I think just the opposite - most of my software runs on Linux, but hardware is the main obstacle.

I've installed Linux many times on desktops and laptops, and I have yet to simply install a graphics driver and have it work first try. My laptop locks up the cursor after two minutes of use, doesn't support two-finger touchpad scrolling, and exhibits constant graphical artifacts, all under multiple driver and kernel combinations.

Speaking as a big fan of Linux, the hardware support is abysmal, except where the money is: server hardware.


The parent is correct, modern Linux supports a lot more hardware than any single Windows version. The difference is that Linux is much better at supporting old hardware, and Windows is slightly better at supporting new hardware. Try using a 20 year old non-generic printer on Windows 10, for example.


As it so happens, I'm pretty sure the marginal Linux laptop installation is on a new machine. So from the universe of possible hardware installations, Linux might be superior; from the universe of actual installations in 2015, it lags well behind Windows.


That used to be a problem, but I don't think it is any more. Linux "just works" on both my laptop (including sound, multitouch touchpad, wifi, and graphics, also when I'm not using it, I only ever put it to sleep by closing the lid, and I've never had a problem) and my desktop, which has an AMD graphics card (Radeon HD 7950) and an external USB audio interface (Focusrite Scarlett 2i2).


I haven't tested but I suspect there is still a distro issue - I haven't had any hardware compatibility problems on Ubuntu, but there are plenty of groanworthy desktop distro recommendations on the net.


No, functionality is important. It doesn't matter how free the software is if it doesn't work. I don't know where you live, but in both of the countries I live in you cannot buy Linux-compatible wifi adaptors over the counter. I can't close my office or loungeroom doors at the moment, because none of the three wifi adaptors I own work with Ubuntu, so there's a massive network cable snaking through both. The hardware compatibility lists for distros are regularly out of date and rarely comprehensive. If you expect the average citizen of the world to trawl forums for compatibility info, pay hundreds of dollars in shipping, and wait weeks to get compatible hardware that only seems to be sold in the US, you've got Buckley's. To be frank, your attitude is what holds back desktop Linux. We need to admit these flaws and fix them, not blame the world for being insufficiently hardcore.


"I can't close my office or loungeroom doors at the moment, because none of the three wifi adaptors I own work with ubuntu, so theres a massive network cable snaking through both."

Homeplug[1] devices any good?

I used a pair of those for a bit to save putting cat 5 in the wall (I'm lazy about DIY). Worked OK but I didn't need super-fast.

[1] https://en.wikipedia.org/wiki/HomePlug


They're very expensive compared to wifi, especially compared to their performance, and based on anecdotal reports their lifespan isn't very good. How's your mileage been?


I used a couple of slower ones (German make, around £30 for the two from PC World, not the most economical outlet) for about three years until I got rid of the desktop PC. No issues. Just worked. Throughput was fine at 4 Mb/s Internet speeds, which is all I used it for.

If you need AV speeds, and if you have 'unusual' wiring, best ask on an appropriate forum supporting home networking or something.


Driver availability is only a small portion of that list.


I don't know about this article. I bought a 3rd-gen ThinkPad Carbon earlier this year and Ubuntu runs better than Windows 10 on it. Not a single crash. The only thing that doesn't work out of the box is the fingerprint reader. Not that it doesn't have some rough edges, but this is enormous progress compared to just a few years ago, and the fact that it runs better than the OS _the laptop was designed for_ is a testament to that.


A few years ago I wanted to buy an Ubuntu laptop, and while researching, ThinkPads always came up as the best choice. So I guess your perspective may be a little skewed.


Your single laptop datapoint sure makes this article invalid :)


I'll add multiple data points in the form of an X32, an X61, an R61i, an X200 and an X220, all running Ubuntu perfectly. The only real problem is running Windows or Mac software, which is always doable but somewhat clunky under Wine or in VMs. If Adobe and a couple of others made Linux versions of their software and Lenovo stopped messing around with their ThinkPads, the laptop issue would be solved forever.


ThinkPad 750 ('95), a Dell something (until 2000), and then ThinkPad x24, x31, t42, x200 tablet, x230 (not good), x220.

Basically with various forms of linux - Mandrake, Redhat and then Ubuntu for the past decade. Rule of thumb is to pick thinkpads (or business class laptops) and pick Intel hardware where possible.

http://www.thinkwiki.org/wiki/ThinkWiki


The article is just as data-free as my single data point. My experience contradicts it. I use Linux _everywhere_ these days: on the desktop, on the laptop, and on my workstation at work. It works well. It's just not Windows, which seems to be the author's main complaint.


>The article is just as data free as my single data point.

It is not. The lack of a crash does not indicate lack of bugs. But, even a single crash confirms that bugs exist. (modulo the obvious)


>The article is just as data free as my single data point.

No it isn't. The article contains links that confirm most of the problems it mentions. The problems may not affect you, but they definitely exist, and you can't dismiss them just because your laptop works properly.


I don't get articles like these.

First of all this website is worse than any OS environment I've ever used. So right from that standpoint I sorta gulped a bit before reading on.

Graphics driver issues in Linux are nothing new. Nvidia a few years back started officially porting drivers to Linux, but that doesn't solve all the problems. There are also projects like Nouveau; so if you're complaining about the Linux desktop from a drivers standpoint... get in line.

As for the Windows issues:

>Windows boot problems are often fatal and unsolvable unless you reinstall from scratch

Someone has never used Hirens boot disk

>no system wide update mechanism

wat

>no enforced file system and registry hierarchy

wat

>Android is not Linux (besides have you seen anyone running Android on their desktop or laptop?)

Actually yes I have, I have it running on mine now: the Android-x86 project. At first, I'll agree, Android was not Linux - there were some major differences - but these have since been changed.

I'm sorry if this is nitpicking the post, but I don't understand how these make it to the top of HN.


It's a single location summary of things that still have to be done / are lacking in the project. It's thus useful for contributors to see what needs work, since it's easy to get tunnel vision when working on a project.

Don't you have such yearly reviews on your long-running projects?


Contributors to distributions already have their goals and sights set. Everyone knows these issues with Linux; they've been ongoing for years.

This list has been compiled many times; that's why I'm so harsh on it. It's literally as if someone just copied and pasted.

This is a direct copy from wikipedia:

__________________________________________

https://en.wikipedia.org/wiki/Criticism_of_Linux

Critics of Linux on the desktop have frequently argued that a lack of top-selling video games on the platform holds adoption back. As of September 2015, the Steam gaming service has 1,500 games available on Linux, compared to 2,323 games for Mac and 6,500 Windows games.[14][15][16]

As a desktop operating system, Linux has been criticized on a number of fronts, including:

A confusing number of choices of distributions, and desktop environments. Audio handling, particularly before PulseAudio became stable and widely supported. Poor open source support for some hardware, in particular drivers for 3D graphics chips, where manufacturers were unwilling to provide full specifications.[17] As a result, many video drivers have both open and closed source versions. Lack of widely used commercial applications (such as Adobe Photoshop[18] and Microsoft Word).[19][20] Lack of standardization regarding GUI API.[19]

_________________________________________

Notice the things he complained about, and the things that have been edited in there for years are the exact same.


1.) "Everyone" is a very wide net to cast, considering people in THIS VERY comment thread try to prove how their personal GPU works so Linux is fine.

2.) It's a yearly review. The issues haven't gone away. That's why they're still listed; just because "everyone" knows them, they're still outstanding issues. You know, like GitHub issues - they don't go away UNTIL YOU FIX THEM.


facepalm

1) What are you even arguing? Yes, some cards actually do work fine with Linux; I've had similar experiences.

2) We have lists, and it's pretty funny they actually are Github issues!

https://github.com/linuxmint/Cinnamon/issues for example

As an actual Linux developer, this webpage is fucking useless. It has no specifics on projects or targets. That's what a real development review is about: what do we have, what priorities should we set, and what should get done in the next month.

This article says "okay, here are the issues" (yet again), but the author has no insight into the development teams or plans for any of the packages/projects he's talking about.


To add to what you said: some parts are useful on their own. You don't need targets; having a list of Linux issues is helpful enough. If you're involved in a project you can pick up bits and take it from there.

But that's combined with a lot of offputting content. E.g. the need to say everything is correct because Slashdot agrees, and saying/suggesting Slashdot is unbiased and representative. Swearing at the developers who actually put in the work (disagree heavily all you want, but there's no need to swear). Same for saying/suggesting that some developers don't mean well. Initially I only read the first part and thought it much improved from the last time I read it. But no, again the argumentative stop energy. :-(

Further, a few of his issues are just opinions. Why combine those with the others?

Someone else here said: "he's right". Unfortunately not, and again I already regret reading the various drivel parts.


It's mostly trolly though. I mean, read through some of his complaints:

> No high level, stable, sane (truly forward and backward compatible) and standardized API for developing GUI applications

Really? He's calling Win32 sane and stable? The API where they're constantly having to introduce terrifying new hacks to stay bug compatible with themselves and still failing?

Most of it's the same trolling we hear every day. A lot of it is good, well-founded bitching that the Linux community can do absolutely nothing about (we don't have access to hardware specifications we don't have, it's that simple. Until various troublesome manufacturers, nVidia first and foremost, decide to start playing ball, their hardware is always going to suck on Linux).


Yeah. Except he doesn't mention Win32 in this context. Does Win32 have to be sane/stable before Linux is allowed to adopt something similar?


Thank you, that last part said it better than I ever could.

>A lot of it is good, well-founded bitching that the Linux community can absolutely do nothing about (we don't have access to hardware specifications we don't have, that simple


Tell me how one can update a Windows system (incl. all installed applications) in a straightforward manner. Windows comes with no package manager, as far as I know.


OneGet and the app store might be taking steps in the right direction. There's still no unified equivalent of apt-get upgrade and dist-upgrade, but I'm currently doing an in-place upgrade from Win7 to Win10 and I actually expect settings to be retained and programs to continue working.

For (especially) FOSS software, you might want to look at http://scoop.sh/

OneGet appears to be more of a work in progress:

http://blogs.technet.com/b/packagemanagement/archive/2015/05...

https://github.com/OneGet/oneget/wiki/cmdlets

But still useful.


Yes, Windows doesn't come with a walled app repository. That's one of the benefits -- I don't have to ask Ubuntu and Debian and Red Hat and god knows who else "mother may I" before I write a program. I just do it. Though I do know people who now get most of their apps from Microsoft's package manager these days.


> I don't have to ask Ubuntu and Debian and RedHat and god knows who else, "mother may I" before I write a program.

You can just offer your own repository and have the package you distribute install it. I think you are confusing linux package managers with appstores.


You could have a system-wide update mechanism without a common repository. Linux distros have that: you can use as many different repositories from different vendors as you like, but there's still only one update command.
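
E.g. on a Debian-style system, everything from every configured repository - OS and applications alike - updates with:

    sudo apt-get update && sudo apt-get upgrade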


This is probably one of the theory vs. practice issues. While I admit that the article is mostly correct then again, in practice, I haven't had much trouble with Linux in the 00's and 10's while using Debian based distributions exclusively.

Surely I am not immune to the shortcomings of Linux but I don't remember having had to tweak my system as part of routine maintenance for... years. This was commonplace in the 90's. Merely upgrading or changing your X setup required tweaking configuration files, setting modelines and whatnot. I don't even know where the X configuration files live anymore. While the foundation is shallow, things generally work to the extent that I don't have to do "maintenance" on my Linux computers. Even external displays seem to work mostly just fine when presenting.

I haven't missed binary compatibility once: the distros compile everything anyway when libraries update. Compared to clinging to the cruft that makes 30 year old Windows programs work in Windows 10, I feel I'm better off with Linux even if I have to kill pulseaudio a couple of times each year and there's a slight chance that one of my laptops won't resume after suspend if it's unplugged.
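
(For the record, the pulseaudio ritual is at least short:

    pulseaudio -k        # kill the running daemon
    pulseaudio --start   # bring it back if it doesn't respawn on its own

Two commands, a couple of times a year.)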

I'm not saying that these Linux problems aren't problems. They exist, and when they hit you, it hurts like hell. But my point is that Linux is imperfect, as is the real world, and people just tend to work around imperfections - again, like in the real world. So, mostly, things work 5x better than 10 years ago and 50x better than 20 years ago.


As a full-time "linux desktop" (whatever that is) user, all of these problems pain me. But of course the truth is that no other system "just works" either, so it's not as though it's unique. (We have an office full of Macintoshes which cause people endless trouble, so don't go there...)

I do feel that the "linux desktop" has regressed slightly in the last few years, however. My transition from a Debian wheezy to a Debian jessie desktop certainly felt like going backwards. GTK3-ified applications have significant focus and scrollwheel problems. KDE4 appeared to have reached a peak of maturity around 4.7/4.8 - the version in jessie now has notifications that steal focus. Akonadi/kmail2 has made what used to be one of my favourite applications essentially unusable.

NetworkManager continues to baffle as ever.

Systemd/journald, well, it's mostly that I'm not particularly used to them yet, and so tracking down problems requires grappling with a bunch of new concepts every time there is trouble.

It's a chore occasionally, but I still thank god every day I don't have to use macos or windows.


Use Plasma 5 (which is "KDE5" if you will). KDE4 is not even supported anymore on some distros.

It fixes most remaining annoyances of KDE4 and has been really stable for me. Even kmail 5.5 is a huge step forward compared to the buggy kmail 4.x.


I'm a debian stable user. KDE5 will come to me when it does. Anyway, I'm truly sceptical of KDE's trajectory now I've been burned a bit (and I'm saying this as a sort-of former "KDE person"). It seems like a lot of focus & motivation was lost around the time of the mobile/tablet/vivaldi distraction nonsense.

My main problem with kmail2 is that it feels like POP3/local mail support & local filtering is a (very broken) afterthought, and while being able to do everything on IMAP would be lovely, some of us are stuck on POP3 mail for a little longer...


I've no experience with the POP3 support even in kmail5 (I stopped using my last POP account at least a decade ago ;-) but the IMAP support in kmail5 has been pretty good (it used to be terrible).

That said, I still prefer Thunderbird.

By using Debian stable you choose to use one- or two-year-old technology - so you'll have what I have in two years. Courage, that's just 24 months to wait ;-)


So is there anything that a normal developer could do to help fix these issues? After all, isn't one of the points of OSS that we should be able to mess with it and change it to our liking?

Or, am I correct in assuming that Linux has gotten so big and complex it'd be unreasonable to expect your average developer to be able to make a meaningful contribution?

I really enjoy using Linux, but I'll admit I still barely understand anything about it after using it for several years and it isn't from a lack of interest or trying.


The main problem is still apps.

After 15 years with OS X I'm more or less fed up with Apple for various reasons (some bad hardware, a crashing and buggy OS, ...).

But what keeps me are the apps, like Lightroom, Omnigraffle, Sketch, Scrivener, ... - without them I'd use Linux.


Hmm... I use Fedora (and have been for many years) and have experienced none (or _very_ few) of these problems.

In the past (5 or more years ago) there were problems, and I had to tweak things a lot, compile projects myself, etc. With every release I had to do less of that, and for the past year or two everything has just worked.

Seems the article is outdated.

Video: Just works. No tearing at all, automatically uses H/W decoding. External monitors just work. Resolution is automatically adjusted.

3D/GL: just works. Both Intel and NVIDIA (use Intel most of the time and simply start games with "optirun") I do not play the latest games. For that I use a Playstation/XBox.

Sound: just works. Local speaker, bluetooth external speakers, all just works.

GUI: Despite X11 being old it works remarkably well. I use GL compositing for desktop effects. KDE works flawlessly.

Printing: Just works. When I got a new network printer KDE found it, asked me whether I wanted to use it, then downloaded the driver and asked to print a test page. That's it.

And on top of this Linux enables me to always know what's going on when needed. I can drop back to a shell at any time and everything is accessible that way.

I also have a MacBook Pro from work. I have _way_ more problems with that one. Random hangs with the (admittedly pretty) spinning color wheel. It did not recognize my network printer. I had to tell it the IP address, find a driver myself, install that, and then play around for a bit until it finally worked.

Now I do a fair bit of software engineering on my machines, and so I might not notice the non-obvious things I do anymore. I also have the full development chain easily at my fingertips completely for free.

And I might very well be biased, since Linux is what I am used to.

Edit: Spelling


This is my favorite -- "! Applications development is a major PITA. Different distros can use a) different libraries versions b) different compiler flags c) different compilers. This leads to a number of problems raised to the third power. Packaging all dependent libraries is not a solution, because in this case your application may depend on older versions of libraries which contain serious remotely exploitable vulnerabilities."

Sometimes I explain it by saying that Linux is roughly 65,000 different operating systems with the same name :-) It isn't as bad as that of course, but the freedom to choose means that people do choose, and the challenge of selling your non-opensource software into that pool of choices is insurmountable for most companies.


> Applications (GUI toolkits) must implement their own fonts antialiasing - there's no API for setting system wide fonts rendering. What??! Most sane and advanced windowing systems work exactly this way - Windows, Android, Mac OS X. In Wayland all clients (read applications) are totally independent.

That's interesting, I didn't know that. Is that going to be a huge mess after the Wayland transition (which is likely to happen in 2016)? I'm used to fontconfig managing global font settings. What will happen in the Wayland case?


> What will happen in Wayland case?

Nothing at all. Very few modern applications use server-side font rendering anyway; the rest either already obey fontconfig rules on their own or will keep ignoring them.

It's not even different on Windows and Mac OS X: some programs do their own font rendering that differs from the one provided by the system.


In both Windows and OSX there are multiple choices for included font rendering libraries, so I'm not sure what the author is on about. There is no "system-level" font renderer on any of the three major OSes, and frankly such a thing doesn't make sense from an engineering standpoint.


What choices are there?

On Windows, there is Uniscribe, DirectWrite (since Windows 7) and also the legacy GDI font rendering APIs. Microsoft will probably never remove any of these options, to ensure backward compatibility.

On Mac OS X, you use CoreText - the legacy QuickDraw Text is deprecated and not even available for 64 bit applications.

There are also applications that do not use the platform font rendering, e.g. Adobe has their own font rendering code and current versions of Chromium and Gecko use Harfbuzz on all platforms.


It might not be completely global, but fontconfig affects the vast majority of applications as far as I know. So in practice it is something of that sort.
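
If you're curious what fontconfig will actually hand to a client, you can ask it directly - a quick sketch, assuming the standard fontconfig tools are installed:

    fc-match -v sans | grep -E 'antialias|hinting|rgba'   # the rendering settings most toolkits will pick up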


It's very doubtful that Wayland will happen in 2016.

I'm not sure it does anything to solve real GNU/Linux problems for consumer and/or workstation devices. At least not without breaking everything else far more. The idea of letting all applications render all their pixels without any further system integration is just completely insane when you consider the future of GUI computing and interactions. Work on higher or adjacent layers should be the absolute priority, and X should be considered good enough until then (at least when you are not maintaining it, but that is a completely different story).


It can happen in 2016. KDE is almost ready, and even Nvidia is getting close to releasing their KMS/DRM driver with EGL support. So the pieces needed to at least be able to use Wayland will be in place. However, as you say, stuff might still be broken for some use cases, so those will have to be fixed going forward.

Wayland doesn't dictate how compositors approach desktop integration. For instance there can be client side or server side decorations and so on.


Yes, Wayland doesn't dictate anything, and that's the problem!

It is the moral equivalent of a glorified framebuffer. How many of the subjects relevant to today's interface interactions does it address? I'm not sure it addresses any. Meanwhile, graphics/interaction stacks on other systems concentrate on what is really needed by advances in tech and usage - HiDPI / mixed DPI / responsive GUIs - while updating their existing tech (e.g. Remote Desktop) instead of deprecating it.

What does Wayland provide in terms of modern usage with modern tech? Maybe I missed something, but from what I read it is just backward-incompatible (for apps) low-level tech, and for actual users this kind of thing often has negative value...


> What does Wayland provide in terms of modern usage with modern tech?

A more direct and simpler (and therefore more efficient) way to deal with drawing. X is bloated with tons of overhead and obsolete legacy stuff.

Here is a good talk about it: (The Real Story Behind Wayland and X) https://www.youtube.com/watch?v=RIctzAQOe44


Very interesting.

I'm glad they were sufficiently pushed to eventually start thinking about remoting.

I get that the X protocol was not that good -- but I'm still not convinced that their model is suitable even for present needs, never mind future-proof; you have to at least provision for mixed DPI, touch, and changing form factors; you also have to provision for multi-GPU more seriously (his answer: a client problem, and very difficult) and for multimedia (audio sync, ideally including in remoting cases).

I'm also not 100% convinced that the argument that gedit round-trips too much is a good one. You have to rewrite the toolkit to use Wayland anyway, so why not compare that approach to a mere cleanup of the way toolkits use X?

And to finish, you will still need near-perfect backward compatibility with X for lots of users.

So if eventually all of that is handled properly, why not; and given that he said they orient the protocol descriptively more than imperatively, it's possible this can be added later without falling back into the X situation, full of workarounds. But even in this best-case scenario it will still be very late to the party, especially if they are so much in the mood for things like touchpad driver rewrites (why is that not in a common lower layer anyway?...). And even then, this will only go as far as a client that renders every pixel can go anyway... I get that this is more or less what some software already does, but guess what else is also done today? Using X... so not a really good reason when you are already breaking everything anyway.

And also, guess what a really efficient remote protocol focused on bitmaps would intrinsically be kind of equivalent to for a lot of use cases? Describing transformation operations on previous images already known to the server. There are tons of operations that exist today and are reimplemented in web browsers as hacks of e.g. JavaScript and tiled JPEGs; either future "display" stacks are powerful enough to model them efficiently, or we abandon all hope, the native graphics stack should just be a glorified framebuffer, and the real UI should be implemented in a browser -- after all, that's also partly what is already done!


It's not really true. For example, I use Xorg DPI settings and Xorg-provided font rendering and antialiasing.

GUI toolkits can override this (and usually offer to), especially for older versions of Xorg that did not support native DPI scaling, antialiasing, etc.
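
For reference, a minimal sketch of those knobs (assuming xdpyinfo and xrdb are available):

    xdpyinfo | grep resolution           # the DPI the X server currently reports
    echo "Xft.dpi: 120" | xrdb -merge    # override the DPI for Xft-based toolkits (120 is just an example)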


This is a situation that hasn't changed much since 1996 and the story of why it hasn't changed is a lot more interesting than a large enumeration of the faults.

To compare it to Windows, Windows really has evolved a lot since the bad old days. Linux beat the pants off Windows 3.1 completely. Both Win95 and WinNT were unstable -- Win2K was the first decent version of Windows.

Linux 2.6 was the one really important thing that happened to Linux in all of that time because it was the first one that worked properly on multiprocessors.


That simply isn't true.

The kernel is just one aspect of a Linux distribution - the part that is most responsible for supporting H/W. It has evolved a lot since 1996. Whenever I try, I find the range of supported H/W is larger than Windows' and (obviously) Apple's.

On other fronts things have changed radically: video, sound, 3D/GL, GUI (KDE, Gnome), printing, etc, etc.


It has all changed and it is all busted.


OK...

Care to elaborate? Are/were you using a Linux distribution yourself? Or are you just parroting what you hear elsewhere?


> Win95 and WinNT were unstable

Not to worry, in my experience Win 10 is returning to its roots of being unstable. At least on laptops.


It feels like most commenters on this thread have not read past the first page of that list, because the discussion ends at "my hardware works" or "my hardware doesn't work". There's much more on that list and imho everybody who cares about Linux should probably read past the first page.


Hopefully the author is reading this: fix your ads! I can't even read 1 sentence on an android phone before your ads do full page redirects to spam. Completely unusable.


Not to excuse the owner but a friendly tip - use adblocking (Android uses a hosts file too!). Also helps with the rest of the Internet.


I've considered it before, but I've been avoiding using it on my phone out of some sense of social obligation. After my experience with this site, I think you're right and it's time for me to turn on ad blocking on mobile as well. It's really a shame that a few irresponsible websites have to ruin it for everyone.


Same problem here on a Windows Phone.


I've been using Linux on the desktop daily since about 1998. It may look a little prettier today, but I find it crashes and freezes a whole lot more these days.


In my experience the kernel and drivers are better than ever, but the desktops are awful and have grown worse. When KDE 4 came out I went to simple window managers like i3 and fvwm. These are extremely stable, but things as simple as looking at photos in a file manager are painful, and forgetting how to change the clock when I fly to a new timezone is frustrating.


Unity, like it or not, is quite stable.


This is surprising to me. I do most of my web development on Linux Mint and crashes are rare - like Sasquatch rare.

I've also used:

Mandriva, Ubuntu 14, Slackware

And had the same near-perfect performance with those distros as well. I was always told one of the great advantages with Linux is how stable it is and for the last decade or so, I've found that to be true.

Now I'm curious to know what you're doing to make your OS crash so much.


I have run Ubuntu and Red Hat on lots of different hardware - desktop, not just server - dozens of machines over the past 10 years. Freezing was never a problem on any of those hardware combinations. Crashing was absolutely never a problem.


There's a lot right about this, but some factual inaccuracies like this make me wonder:

Too many layers of abstraction lead to the situation when the user cannot determine why his audio input/output doesn't work (ALSA kernel drivers [1] -> ALSA library [2] ( -> dmix [3] ) -> PulseAudio server [4] -> ALSA library [5] + Pulse backend [6] -> Application - in other words, six layers of audio redirection; or seven layers in case of KDE since they have their own audio subsystem called Phonon).

That's just not true. In the case of an application that directly supports PulseAudio, there are merely 3 levels:

ALSA kernel drivers -> ALSA library -> PulseAudio Server/library -> Application

In the case of an application that doesn't directly support PA (most that don't will support ALSA), it's one level worse:

ALSA kernel drivers -> ALSA library -> PulseAudio server/library -> ALSA library (pulse plugin) -> Application

In that case, 4 levels of indirection.

If dmix is set up in the case where PA is being used, that's a bad system misconfiguration and the distro should be slapped.

So that's 3 levels of indirection for a "modern" app, and 4 if you need the compatibility layer. How are OSes like Windows and MacOSX designed? Unless they have a sound API that goes right to the kernel (and the kernel handles mixing if the hardware doesn't support it), I don't see how you can do much better than the current state of affairs, aside from having PulseAudio directly hit the ALSA kernel interfaces, which seems unnecessary.
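
Incidentally, the layers are easy to inspect directly - a sketch, assuming alsa-utils and PulseAudio's pactl are installed:

    aplay -L | grep -A1 '^pulse'   # the ALSA "pulse" plugin, i.e. the compatibility layer described above
    pactl list short sinks         # the sinks PulseAudio exposes on top of the ALSA kernel drivers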

(Note: for the purposes of debugging & figuring out why sound isn't working, I'm considering the PA server and library [or "backend" as the OP calls it] a single layer. It's true that there's an extra "hop" involved there, but in practice that's unrelated to any possible audio issue but latency, which isn't what the OP is talking about.)


oh linux, don't listen to them... there will always be people who say your desktop experience isn't good enough. the reality for most of the computing world is that linux has already won, it's just that they're using it everywhere but the desktop. it's in robots, servers, billions of devices, every android phone, the boxes that power our tvs, inside our tvs now too, and it keeps spreading. if it isn't already, it'll be powering cars soon enough.

so linux... don't sweat the people who still don't have a working nvidia driver. it doesn't matter. not even a little bit. you do you, linux, and just go wherever people writing code take you.


Tesla runs Ubuntu in their cars :-)


Interesting list.

One of the biggest issues for me personally is the split effort. There are >300 actively maintained distributions, 300 projects whose sole purpose is to package the kernel and some userland stuff to make a usable operating system and provide update mechanisms. 300 projects that spend resources on creating documentation, websites, giving support to users, ...

And the simple truth is: It does not matter too much which distribution you use, far too much weight is put on that. How much of what you do is distribution-specific? The type of init-system (mostly not important for a Desktop), the update mechanism, packages available, available/supported Desktop Environments / Window Managers, anything I forgot?

Most of the time you are running a browser, editor, terminal, video player, torrent client, openssh, steam, are using some version of python, ruby, perl, php, ... it quite simply does not matter whether you do these things on Fedora, Ubuntu, openSUSE, ArchLinux, Gentoo or FreeBSD.

Linus Torvalds described it in a similar manner once (too lazy to look for the link, but basically something like "I don't care too much about what distribution I use, I mainly work on the kernel anyways")

We are users, as such we use an Operating System in order to use programs.

The fragmentation is probably mostly rooted in the fact that it is quite easy to create your own distribution, and that many out there are head-strong people who are not patient enough to work their way up a chain of command (in distributions with a democratic structure) or to submit to what one dictator is saying. And some projects are just boring to work on, for example GUI stuff or the code-monster Xorg.

I would not want to bundle all 300 distributions into 1, but (and this of course boils down to my opinion) 20 or so would do. It is important to have alternatives, like non-systemd distributions, rolling releases, source-based distributions, et cetera, but 300...

And if the focus is less on having 300+ distributions and more on making 20 or so distributions great, quality would rise without a doubt, not only ease of use, but also support, documentation, et cetera.

btw: Visited this page from Firefox, was greeted by the following:

Reported Web Forgery!

This web page at linuxfonts.narod.ru has been reported as a web forgery and has been blocked based on your security preferences.

Web forgeries are designed to trick you into revealing personal or financial information by imitating sources you may trust.

Entering any information on this web page may result in identity theft or other fraud.


Same issue here: "Reported Web Forgery!"

It doesn't seem valid, I already took a quick look on my phone and did not see anything phishy.


Linux has gotten much better.

I say this as someone who, this year, booted and ran some software on a desktop Fedora machine from 3 years ago (long story, but that machine has working custom software; not rewriting it this year...).

Despite all the complaints, desktop Linux is still remarkably better than it was just 3 years ago. That machine was almost unusable. Setting up the network on it was a couple-hour ordeal. All the little fiddly parts that kinda work most of the time don't make for a great desktop experience. The server experience is exceptional, however (I spent enough time on Solaris and HP-UX not to miss those two at all).


Linux's philosophy (and generally open source's) is to create a paradise for hackers; it has accomplished that and should be content with it. A GUI desktop is for consumers, who are the opposite of hackers. Trying to meet both ends is a silly ambition. I am not saying Linux cannot meet the consumer end, but it would need to adopt a philosophy opposite to the one it started with. When that happens, some may be happy, but hackers would weep (or would they? Wouldn't they just start another paradise?)

EDIT: on second thought, Google's Android demonstrates my point.


My usage pattern is close to "Chromebook + Android Studio". The rest is occasional use of Eclipse, LibreOffice, and a smattering of command-line utilities, mainly for internet information (e.g. dig). I'm not a gamer. So Windows holds no attraction, and I dislike the effort needed to secure it against Microsoft and evildoers getting at my critical data. I bought a dead MBP on a lark and will get it fixed to see if I like Mac OS X better. But despite Ubuntu developing rather slowly, I'm not so discontented as to leave.


Can't remember the last time I had an issue with suspend under Linux (years ago), but on my current desktop I never managed to get suspend to work under Windows 7. Go figure.


Blah blah blah... a bunch of lines and lists, all of which are easily refutable. Sure, Linux is far from perfect (and far from bad, too) and thus there is a lot of potential valid criticism, none of which this article mentions.

Therefore, the author just spent time typing up a load of BS/FUD, probably intended for people who don't seriously run Linux on the desktop nowadays. Wow:

So many items are misleading. Some of the links even lead to resolved bugs - did the author click them? Or the "no alternative to SMB" - well duh... Linux users... use SMB. Simple as that.

Or another funny point - I'd like to see the author play with a Windows laptop using Optimus (it's a POS), or seriously game on Mac OS X with all the AAA titles (oh, they don't exist, too bad, yet people still love it as a desktop...)

Then there's the list of things that "affect every Linux user"... none of them affects me; I'm such a special snowflake! Not even one. Not even the video acceleration one that seems to be his "biggest point". Firefox uses gstreamer with H/W acceleration support and H.264 on YouTube. In fact, my laptop uses much less power playing YouTube videos on Linux than on Windows (it turns out I do have a watt meter, and I did use it when I got the laptop).

So yeah... maybe the author should inform himself a bit before typing random things.


I am turned off when the criticisms towards Windows, whilst valid, typify the out-of-touch-with-reality-ness of many Linux fanboys. Real-world users do not notice, much less care about, these issues, nor want them fixed, nor would their user experience be significantly improved by addressing them. Along similar lines, they barely notice that Linux exists at all.

There are good reasons why Linux dominates the backend space and Windows dominates user space. Linux caters very well to the power user and is discoverable with a lot of research - the kind power users will do. Windows caters acceptably to the casual user and is so well known they needn't bother advertising (but continue to do so anyway...).

The one problem that is bigger than anything listed here, imo, is that the Linux development community is building something they want rather than what others want, and then not selling it to anyone especially well.

There have been great strides in this area, but it's still not enough. Ubuntu is a joy to install and use compared to Linuxes of old. (I remember well struggling with 'Red Hat for Dummies' in the 90s.)

But... it's not advertised in the mainstream or known about as an alternative - why it's worth using, etc. - and it appears that it might suffer the classic problems that put regular users off alternative platforms - things like "doesn't have MS Word, which I (think I) need for work (because I've never even heard of OpenOffice or LibreOffice etc., let alone tried such things)".

It's great to see that Dell offers it as an option these days, at least. But until more people know about it and understand that it won't prevent them from doing what they want, they just aren't going to use it.


What planet are you on? Of course real world users notice these issues. It's just that real world users eventually realise Linux isn't going to change so they go back to Windows (see Linus's various "I'm doing it right; shut up" comments on security, stable driver ABIs, etc.).


Earth. You might want to come back down to it...

A minuscule proportion will complain about those things.

The vast bulk of real-world users haven't even tried Linux or don't know it exists.


This is a good opportunity for us to be positive and to list up-to-date resources that clearly list laptops that work on Linux, or what to look for, what models are good and not-so-good, what is Free and mostly-Free, etc, etc. I am looking for a "mostly-Free", modern Gnu/Linux laptop that I can purchase either new or recently second hand but am having a terrible time trying to figure it all out.


#25 is wrong:

> There are no antiviruses or similar software for Linux. Say, you want to install new software which is not included by your distro - currently there's no way to check if it's malicious or not.

http://www.eset.com/int/home/products/antivirus-linux/


Considering that most Windows AV products appear to be remotely exploitable and to actively install malware-like things, I think the lack of Linux support for these tools is a good thing.


On Android this link seems to go to a malware site. Anyone else see that?


I had the same problem. I tried three different ways of getting to the link (copying the link into Chrome, googling it, and of course from here), with the same spammy popups. Ended up emailing the article to myself so I could read it from my desktop.


One problem specific to gaming: Adjusting mouse pointer speed and acceleration is a huge pain.

I've tried SteamOS for gaming, but playing FPS-type games especially is really bad with acceleration. Disabling it usually requires X.org config changes and knowledge of some semi-arcane acceleration math. It's doable, but certainly not in any way easy.


If you're just using xset, you're missing out massively.

Use xinput instead, it gives you full access to all of X's acceleration profiles and all the parameters you can tweak with them.

http://www.x.org/wiki/Development/Documentation/PointerAccel...
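
For instance, with the evdev driver something like this turns acceleration off entirely - a sketch; the device name below is a placeholder, so check xinput list for yours:

    xinput list                                                     # find your mouse's name or id
    xinput list-props "USB Optical Mouse"                           # "USB Optical Mouse" is a placeholder
    xinput set-prop "USB Optical Mouse" "Device Accel Profile" -1   # profile -1 = flat, i.e. no acceleration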

I cannot agree more that said xinput interface could do with a UI though. Not because I hate the commandline but because repeatedly entering long commands with numbers just kills it for me - there's no rapid feedback loop.

And so I continue to avoid using my ThinkPads' TrackPoints, because I can never get them to a point where they aren't a strain to use...


The HiDPI point is pretty critical - many higher end laptops these days have HiDPI screens, and HiDPI desktop monitors are getting affordable. I think with GNOME you only get integer scale factors: way too tiny or way too big on most HiDPI screens.


>I think with GNOME you only get integer scale factors: way too tiny or way too big on most HiDPI screens.

OS X has the same property (of giving only integral scale factors).


No such problem with Plasma/KDE. I use it on a 1080p 13" screen and a 4K 13" screen. It's in fact much better than Windows 10 in that area...


I have one of the new Dell XPS 13 with a high-res screen, and it works great with the exception of a few things that I mostly don't use any more, like Tk.


Anyone else having trouble viewing this page? I'm on iOS and every time I go to the link it redirects me to a page claiming I "won" a "reward" with an alert box that I can't dismiss without tapping OK.


Well that would explain why I get

     Deceptive site ahead

     Attackers on linuxfonts.narod.ru may trick you into doing something dangerous like installing software or revealing your personal information (for example, passwords, phone numbers, or credit cards).


As a developer I feel neither Windows nor Mac OS is ready for the desktop. Windows is almost completely useless and you need to immediately resort to Cygwin or a VM to get anything useful done. Mac OS is much more useful out of the box but lacks up-to-date packages and a good package manager. I haven't used Mac OS recently enough to comment on Homebrew, but the last time I tried it, it was nowhere near as robust as many Linux package managers are. Arch Linux has a huge number of user-submitted packages, so that even relatively unknown projects tend to be easy to install.


Thank you so much for putting a stop-snowing link in. We released some Christmas easter eggs without any way to turn the music off or make the snow stop, and it drove everyone insane :P


I think Arc messed up: your comment is in https://news.ycombinator.com/item?id=10812214, a thread about all OSes needing to die horrible deaths.

I'm very curious about whatever context this comment was intended for though :D


This "Android is not Linux" meme is pretty silly.

"Android contains the only Linux component - the kernel" can also describe any distribution/OS using Linux.


So how does classifying Android as being Linux help anyone? Who does that definition help? The most important drivers (video, camera) aren't compatible with desktop, userspace isn't either, most binaries won't work properly due to use of libbionic, etc.

So what kind of useful definition is "Android is Linux" when you can't really reuse any Android components on desktop Linux distributions?


It's just completely arbitrary. Other than "It uses a linux kernel" there's no list anywhere of what makes something "a real Linux". Everything around the kernel is a pile of reusable components that make up an operating system.

As far as I can tell there's no gatekeeper that says "Tool A in combination with the kernel is a real linux, but android components don't count." And no one could be that gatekeeper because Free Software lets people use what they want. So who draws that line?

This is a list of complaints about what happens when certain blocks that make up an OS don't work together as a cohesive whole. One vendor (Android) decided to use an entirely different set of blocks (as is their right), and then people are like "Well, that doesn't count." How does it not count? If you don't like the set of blocks that are out there, you make your own blocks; that seems to be a fundamental point of using OSS in the first place, the right to say "I want to use different blocks". It's not any less a Linux than my Ubuntu machine or my Nest thermostat or whatever flavor is on my router.


Google could replace Linux with any POSIX kernel and only OEMs and NDK users that use unofficial APIs would notice.


Technically correct. But no Linux distro has so radically altered the userland as Android: One runtime and one UI stack for all apps. That might not be the only way to make Linux into a widely used consumer OS, but it is, by several orders of magnitude, the most successful approach so far.


There's another distro like that: ChromeOS :)


I was about to say that ChromeOS is a lot closer to being a conventional desktop Linux, with X11, blah blah blah... Good thing I checked that! I had not been paying much attention to ChromeOS and was surprised to learn of Freon.


Shouldn't it be "Android is not GNU/Linux"? Then it'd be true.


When you think Gnome, you probably think Fedora. Fedora and Gnome are buddy-buddy, and being basically the upstream of RHEL makes it the place to be if you are big on Gnome. KDE has Kubuntu, kind of, since it's sponsored by Blue Systems, but there is a huge issue with how KDE is presented to users in that the Ubuntu release cycle has no correlation to the KDE one, and since Plasma 5 started shipping in Kubuntu that problem has become a catastrophic deal breaker. The KDE community made a great move by breaking up the release cycles of frameworks / plasma / applications, but Ubuntu itself does not care - which led to releases like 15.04, where 5.2 was shipped despite 5.3 coming out the same week and being significantly more stable, yet Kubuntu users would never see that release until 6 months later. Now, 5.5 is out a few months after 15.10, and again a much more stable release won't reach Kubuntu until April of next year.

I'm pretty high on the KDE koolaid, and contribute a lot to the project, and it is the most painful experience to know there is no actual answer to the question of what distro I should be recommending to people for a KDE experience. A sane KDE distro should ship these new updates after a testing period - all at once, preferably. I use Arch mainly, but its release model also conflicts with KDE's, since - for example - 5.5 came out developed against upstream Frameworks that eventually released weeks later as 5.17 (Frameworks are released monthly), but 5.5 with 5.16 was broken as hell. Applications 15.12 came out weeks after that and were a seamless upgrade, because they were also developed against the same Frameworks version, which was already available, and thus everything went swimmingly.

Fundamentally my point is that, despite Fedora and Gnome's friendliness, I imagine the problem happens there too. It doesn't help that software availability on Fedora is awful and it is not presented at all as a consumer desktop, since it's more of a test bed for RHEL. Distros today ship around either fixed releases - which, because of the fragmentation of the ecosystem, nobody is targeting upstream - or around arbitrary goals, which can mean years of waiting (cough, Suse, Fedora). But on the flipside, an average user almost certainly does not want a fully rolling release like Arch, because breakage in Arch often doesn't stop a package from being pushed to release (and often those bugs are missed because Arch doesn't have enough users of testing to catch everything), and the constant churn means your system only works precariously in its current configuration - trying to roll back anything on Arch is insanely hard, because everything is built against the latest and greatest of everything else. Thus, there remains significant value in releasing updates all at once.

What I really wish we had was Kubuntu, sponsored by Blue Systems, just with userspace software updated on a regular basis (feel free to freeze the LTS, or even offer two repos at install - frozen or not). This problem is mentioned in the OP, but it can be soul-crushing for a developer in this purgatory, because your fixes and improvements may not see the light of day for almost a year or more.

Personally I'd love to see an Arch derivative (since it has, IMO, the least age rot and bloat of modern distros - cough, Suse's /etc/init.d after having systemd for years) that just did frozen monthly releases. You release, you wait two weeks, you freeze, you test, you release again all at once. Users get the latest and greatest, but you can also hold back anything broken until it's unbroken. But that kind of commitment requires work, the kind of work you cannot just throw at hobbyists and hope gets done, the kind that needs salaries and business. I still wonder why there isn't any commercial presence of full-stack computer vendors for Gnome / KDE - boutique vendors that ship distros running non-Ubuntu and then use their revenue to develop it.

That was a lot of typing, but just some anecdotal remarks:

Linux grsec is the greatest security project ever, and nobody supports it. SELinux / AppArmor / Tomoyo are all trash by comparison, and Arch's stock kernel doesn't even have MAC at all. It is also probably the greatest notch in Linux's belt over Windows that it can be objectively secure, when you actually care about security - cough, Wayland. There is a distro I see in my dreams that has secure packages, user repositories, and PaX enforcement to prevent malware from accessing anything but its own personal .config file and .share directory. Users can install their own packages rather than having to install them system-wide, but the package manager is smart enough to promote software installed by multiple users to a shared hierarchy rather than duplicating space usage. And the filesystem layout makes sense (/usr...).

xdg-app is supposed to be this panacea that saves the desktop from application fragmentation. It should give us sandboxing, AppStream metadata dependencies so we don't need insane duplication of all the system libraries like in click packages, and a puppy. Let's see if that ever happens.

Audio will never get better. Just use pulseaudio and fix it where you can. I'm working on improving plasma-pa for KDE right now to make it more like the old Veromix widget (which was great). It already does its job of catching ALSA applications that would try to steal the soundcard, and rather than try to throw away all the work that went into it we just need to make it suck less.


I found that using Eclipse on KDE lately was very problematic.

Something about SWT requiring Gnome themes, that had been recently unilaterally converted to use CSS for styling, or something...

Gah!

It took lots of fiddling to be able to open dialogs on Eclipse without crashing the whole application, and even after all that fiddling it never worked right.

Had to switch to Gnome to get it to work right on Linux, but hated that...

I know it's not KDE's fault if Gnome wants to change how its themes work, or that SWT uses Gnome, but between that and systemd making everything utterly confusing, I don't use Linux as a desktop OS anymore, despite using it since Yggdrasil.

tl;dr: I now develop on OS X, so I can open dialogs in Eclipse without crashing.


Add the following to your eclipse.ini (each on its own line, before any -vmargs line):

--launcher.GTK_version
2

Alternatively, set the environment variable SWT_GTK3 to zero when launching Eclipse:

export SWT_GTK3=0; /path_to_your_eclipse/eclipse

As a side benefit, GTK2 with a compact theme will also make much better use of your screen real estate. This is especially important if you want to work on a laptop with a small screen diagonal. GTK3 as it comes out of the box is just ridiculous in this regard. Eclipse is completely unusable with GTK3 on my 1200x800 12" machine because of that.


Nix perhaps?


Relatedly the article mentions "No unified installer/package manager/universal packaging format/dependency tracking across all distros (The GNU Guix project, which is meant to solve this problem, is now under development - but we are yet to see whether it will be incorporated by major distros)."

I'm not sure this accurately characterizes the goal of guix or nix but am hopeful for them anyway.


This comment FTW.

I'm using Slackware right now; a few years ago I tried Arch. I completely agree that it felt too tentative, way too tentative. You basically have to have a nervous system that's in top shape to be able to deal with what Arch throws at you - it might be this update that renders your system broken, or the one after that, or maybe the update next week, or "oh, 3 packages changed, I'll add them before I go to bed in 30 seconds."

Granted, it's not usually that bad, but the anticipation and mental preparation must be there. And, in my case, my nervous system doesn't actually work properly :) (appt. with specialist in a couple months, hopefully!) so I'm the worst-case example of where this approach definitely doesn't work - hand me a system like this and expect me to maintain it, and I'll just go NOPE. I don't have the attention span for it, you might as well have asked me to carry 1000 spiders across a room.

But, the problem with Linux is the widespread policy of fragmentation, otherwise known as "do one thing and do it well". In practice that translates to "know about one thing and know about it well," and if you read Linux solely from that point of view, IMO everyone fulfills it perfectly. Zoom out a little and take in the bigger picture, and it becomes painfully obvious that everything is compartmentalized to an incredibly damaging degree.

And yet, because of this prevailing attitude, all the "system integration" attempts and efforts are plagued by their own NIH and intentional scope limitation, so everyone tiptoes around everyone else for the sake of choice, and nobody says "that's it, we need to do XYZ this way," accepts the responsibility of world domination for XYZ, and, if it's necessary, evangelizes XYZ as The Way To Do This(TM). It can be done: https://lkml.org/lkml/2011/5/29/204 (note how (a) nobody contested it and (b) it was socially appropriate to accept it due to the context - it's often socially inappropriate to do this kind of thing in Linux.)

And so, because of the lack of integration, "looking at the bigger picture according to a standard" and so forth, it's impossible to formally define my Arch configuration which differs from yours in 10 areas such that the definition succinctly represents those 10 datapoints. Rather, you'd crash into the point of irreducible complexity at n-thousand, or at best n-hundred datapoints, because nothing integrates with anything else. (No, I definitely don't have the attention span to even fathom how you'd try to begin integration work. Nuke it from space maybe? :P)

Nix and co. are trying to build superintegrated systems the only way that's currently possible: teaching the package manager about the entire system, making the package manager the source of authority (which makes high-level integration possible), and having the package manager regenerate (or at least check/retouch) everything on every invocation. This necessitates the user learn to configure their system solely by way of the package manager, but at least it means you can pass the representation of an entire system around as a multi-KB configuration file, and handing said file to the package manager on any system then waiting a while will reproduce a predictable configuration...

...for those package versions. That's the point where everything still breaks apart.
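
(On NixOS the workflow really is about that simple - a rough sketch, with made-up host and file names:

    scp my-system.nix root@host:/etc/nixos/configuration.nix   # hand over the one file describing the system
    ssh root@host nixos-rebuild switch                         # regenerate the whole system from it

- but the result is only reproducible against pinned package versions, which is exactly the catch.)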

I didn't use to completely understand why systems just did security releases/updates, why backporting was a thing, etc. Now I do: it's so you can apply the security update yesterday, with minimal eyeballing and integration testing, and be fairly confident that the patch won't introduce subtle bugs or require system changes.

The likelihood is that, in practice, the small ancillary changes a security patch might introduce will occasionally require integration work, or removing said side change(s) from the patch. So effort does still need to be invested in a small niche percentage of situations. (I've always been curious about what kinds of contexts this would come up in; anecdotes welcome.)

But just about every sysadmin or Linux user out there would probably react the same way if they discovered me sitting at their main workstation (possibly SSHed to their production server >:D) adding 'apt-get -y update; apt-get -y upgrade' to a daily cronjob: I would imagine 100% of them would be leaping in my direction, horrified; 90% would be exceeding their average Sound Like A Sailor quota, and about 20%-40% would be using their keyboard as a bat (with the 2% who own IBM Model Ms unfortunately succeeding).

And this is because, for any given upgrade, because nothing's integrated in a truly neural-network-analogous model - where everything can see everything else and react to it - things inadvertently step on the toes of other things because they're blind to anything except their own existence. And the problem is, there are so many different layers, the toes in question might be anything from library version conflicts, package manager confusion (dependency resolvers can reach mathematically unsolvable conclusions, requiring force-removal etc etc to fix), ABI/API breakage (because the maintainer for 'unicorn' never realized the package for 'libwobble' hasn't been pushed yet because they were using a local version), software- or hardware bug regressions, forcing users to use an older software version... the list goes on. It's not a big list, but the consequences are disastrous enough that it might as well be infinitely large.

I remember reading a long time ago how Chrome OS autoupdates on every bootup. "Wow," I thought, "that's really impressive." I'm not sure if it still does that - I expect autoupdating is a background thing now - but it's still impressive. Google has to do this since it's practically part of Chrome, and since Chrome OS's app sandbox is inside the web browser there's no real need for system-level flexibility or additional capability, which makes things significantly easier. The other boon is, of course, the unique hardware situation; Chrome OS isn't something a manufacturer grabs - Google approaches a manufacturer with a contract (basically a pile of money to make Windows temporarily look slightly less interesting, I'm guessing). This means Google is largely in charge of the OS, and the impetus to get it working in the first place comes from Google. This means the system has pretty much perfect driver support.

I ran through a Slackware-current update the other day that fell apart most delightfully when I tried to upgrade aaa_elflibs to i586 by accident. After realizing what I'd done after 15 minutes (I use both 32-bit and 64-bit machines, the former more than the latter at this point), I fixed that, then figured out the original "fell apart": udev no longer existed; eudev had taken its place. Slackware will not upgrade a package that isn't already installed.
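
For anyone hitting the same udev/eudev switch: upgradepkg has a flag for this case, assuming the new package is already downloaded:

    upgradepkg --install-new eudev-*.txz   # installs eudev even though no older version of it is present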


I don't have nearly that much trouble with Arch anymore. You know to watch out when you get 3GB of updates, but the general trend in recent years has been positive - more issues have been fixed than introduced. My most recent one is that Dolphin can't properly save edited desktop files anymore.

There is a testing repo for a reason. If Arch had more adoption, the model can and would work, because you would eventually get enough power users in testing living that life of worrying the next update kills your desktop. More people would mean more packages are held for testing, and more people would mean more of the AUR packages would be pulled into community. The only potentially defective design decision is releasing software with known regressions to release channels just because the bugs don't completely break the system.

ChromeOS can sound impressive until you wonder when the last time OpenRC + the kernel broke was, especially when you have Google engineers auditing and patching the Gentoo kernel to ensure compatibility on their devices. By comparison, the last issue I had with systemd was... never, really. I even made it through the switch with a fully working system; I just had a lot of missing services I had to re-enable.

When Arch breaks, it's always either A. the graphics stack, B. Xorg and its ilk, or C. the bootloader. It's super rare for a kernel to be pushed upstream that regresses wifi (unless your out-of-tree Broadcom driver breaks), or audio, or peripheral connectivity (USB, Bluetooth). It's super common that major upgrades to X or Mesa or libGL can kill either acceleration or your entire desktop, period. And having switched to directly EFI-stub-loading my kernel, I haven't had bootloader problems ever. And systemd has gummiboot built in now.


Have you tried manjaro?


I'm talking about a consumer-grade implementation of KDE I can recommend to people over Windows. Manjaro has no MAC, has an unstable release cycle, doesn't have the manpower to maintain the testing window I describe, and relies on the AUR to a fault. Arch gets away with the AUR because its audience is power users. You cannot push a consumer-grade distribution that gives system-level access to arbitrary software installers that anyone can upload.

Like I said, it would take commercial-level support and infrastructure and work. Making a consumer-grade Arch would take salaried testers and auditors of the software "store" that the AUR would become - except what you would really be doing is maintaining your own repository of AUR packages you trust and maintain yourself, and you would need some means for users to get their software approved. And like I said, you need security profiles for every package, and pacman doesn't support per-user application installs, and has only nascent delta support (it is there, but sadly not mainstream).

Manjaro is a start. Netrunner Rolling is too. Chakra is a third. But none take the task seriously enough, because they are made to be less-rapid Arches, not consumer products. It's simply not something a hobbyist community can accomplish, because inherent to being a community distro is the inability to design for and effectively target those outside it. It's a problem that requires money and commercial support - one that could easily be picked up by any company that wants to take it on, but it isn't something the community can solve by itself.


At home: older ThinkPad (X220) with an SSD. Slackware 14.1/Microlinux desktop. Seems to work OK. Noticed no mention of Xfce in the OA.

At work: dual-core Atom small-form-factor PC. Large and bright 19" monitor (I swapped for a 1280x1024 one as I don't like letterbox-format monitors). Windows 10 for Education works OK and has stable graphics drivers for the PC.

I suspect my use cases are so basic any reasonable system would cover them.


Quite a few of the points only cause problems for closed-source development whilst having a positive effect on open-source development. I don't think that's objectively a bad thing.


>You can thank me by clicking the ad at the top of the page ;-)

Not a good idea.


The biggest problem is that the moment I touch the screen the page redirects to advertising and malware.


I'll add:

NFS home directories don't seem to be tested by KDE developers; it takes 30 seconds or so to fully start everything every time you move machines, because they generate stuff in /tmp.

Most programs don't take quota into account when calculating free space, and fail in weird ways when it gets exceeded (see the illustration below).

CUPS password authentication can't send a prompt to the user. Windows did this in XP. (OS X has this problem too.)
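
To illustrate the quota point - assuming quotas are actually enabled on the filesystem:

    df -h ~     # what most programs look at: free space on the whole filesystem
    quota -s    # what actually limits you, which almost nothing checks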


FYI I just got redirect spammed to a phishing/virus site when loading the link on Android.


This link has ads that are trying to execute malicious code (I'm on mobile).


This sounds like it's finally going to be the year of Desktop Linux.


FYI, Mozilla has this site flagged. Not sure if this is a joke or real.


I agree with all the points here.

Despite all of this, Linux is still my preferred desktop. Why?

The power! I can literally do anything with it. Including writing my own drivers/applications. Linux is my go to for doing things with zero crap. The same can't be said for Windows nor Mac.

For me, this pro weighs very close to all of the cons I have experienced, experience daily, or will experience.

Also, as a desktop Linux user since 2002, I can say it has gotten a lot better. Xrandr, pulseaudio, and apt-get are good examples of this, despite their design criticisms.
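
To illustrate the xrandr point: hooking up a second monitor went from editing xorg.conf modelines to a one-liner (output names here are placeholders; run xrandr alone to see yours):

    xrandr --output HDMI-1 --auto --right-of eDP-1   # enable the external screen at its preferred mode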

Of the things he listed, the things I would like to see most:

* application-level sandboxing

* a stable and universal high-level application API, like Win32/GDI32/DirectX or Cocoa. GTK2+/Xlib/Xwhatever really dropped the ball here. An API that assumes not just a file system, or additionally a graphics system, but an all-encompassing desktop.

* stable and universal application packaging


“Same can't be said for Windows” is subjective. I can do literally anything with my Windows, including writing my own drivers/applications. And I can't say the same about Linux - not because it's impossible, but because I ain't a Linux developer.


> Windows, in some regards, is even worse than Linux and it's definitely not ready for the desktop either.

uh huh.



