Right, because most Linux proponents are kernel hackers. It's quite telling that you have no leg to stand on but to resort to hapless diversions. Or I suppose the flood of researchers who have built viable and innovative microkernel-based architectures throughout the decades are all a bunch of phonies?
"Right, because most Linux proponents are kernel hackers."
They don't have to be. All they have to do is not go around smugly suggesting they know better than kernel developers, and they're OK in my book.
Incidentally, precisely how many of those research kernels have become widely used, mainstream kernels capable of high throughput?
And do you really think it has turned out that way because the whole industry is full of blind dumbasses? I think it's a far more likely proposition that they understand something you don't.
I'm not sure what fantasy world you live in where the software industry is always adopting the most technologically superior solutions by default. No industry works like this.
YMMV on mainstream (they are widely adopted, though), but: OKL4, PikeOS, QNX...
It's quite obvious you have no background on the issues and are using this as an opportunity for provocation.
Realtime != high throughput. It just means deterministic latency: bounded response times, not fast ones. FSVO deterministic.
Show me people running big farms of servers running these operating systems where even single-percentage computational overheads really matter.
(added:) The reason for this is that it costs one hell of a lot flipping your page tables and flushing your TLBs every time you switch ("pass a message", whatever) to a different subservice of your kernel.
(also added:) Oh and interestingly many (most?) users of OKL4 go on to host Linux inside it because, hey, it turns out that doing all your work in a microkernel ain't always all that great. So 90% of the "kernel" work in these systems is happening in a monolithic kernel.
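To put a rough number on the overhead being argued about above: here's a crude user-space sketch (not a kernel benchmark) that contrasts direct in-process function calls, the analogue of a monolithic kernel's internal calls, with pipe round-trips between two processes, a stand-in for microkernel IPC that crosses an address-space boundary on every hop. The absolute numbers are machine-dependent and the pipe path includes syscall costs beyond just TLB effects, so treat the ratio as illustrative only. Requires a Unix-like OS for `os.fork`.

```python
import os
import time

def local_call():
    # Stand-in for a direct intra-kernel procedure call.
    return 42

N = 10_000

# Time N in-process calls (no address-space switch).
t0 = time.perf_counter()
for _ in range(N):
    local_call()
direct = time.perf_counter() - t0

# Time N pipe round-trips between two processes: each request crosses
# an address-space boundary twice, like a microkernel message exchange.
r1, w1 = os.pipe()  # parent -> child
r2, w2 = os.pipe()  # child -> parent
pid = os.fork()
if pid == 0:
    # Child: echo one byte back per request, then exit.
    os.close(w1)
    os.close(r2)
    for _ in range(N):
        os.read(r1, 1)
        os.write(w2, b"x")
    os._exit(0)
os.close(r1)
os.close(w2)
t0 = time.perf_counter()
for _ in range(N):
    os.write(w1, b"x")
    os.read(r2, 1)
ipc = time.perf_counter() - t0
os.waitpid(pid, 0)

print(f"direct: {direct:.4f}s  ipc: {ipc:.4f}s  ratio: {ipc / direct:.0f}x")
```

On typical hardware the pipe path comes out orders of magnitude slower per operation, which is the gap L4-style kernels attack with optimized IPC fastpaths.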
Other contenders include eMCOS and FFMK, though those are obscure.
That said, I don't even understand the logic. HPC clusters where single-percentage overheads really matter are an extremely specialized use case, so of course COTS u-kernels might not cut it. Where's the shocker here?
Response to added: Not necessarily with message passing properly integrated with the CPU scheduler.
Response to added #2: Hosting a single-server is a valid microkernel use case. What's your problem? Isolation and separation kernels are a major research and usage interest.
I'm not even really talking about HPC, just the massive datacentres that run everyone's lives. All for the most part running monolithic kernels. I doubt the thousands of engineers who work on such systems consider the "huge monolithic kernel" "undebuggable". And I don't see examples of microkernel OSs that are able to cut it in these circumstances.
Even in a mobile device, you don't really want to waste battery doing context switches inside the kernel.
Microkernels have their place, but believing that the world that chooses not to use them are just clearly dumbasses is bullshit dogma.
You appear to be assuming (not being a mind-reader, whether you actually are or not is of course unknown to me) that QNX would automatically be used in server farms if it was high throughput; and, since it's not visibly used there, it is not high-throughput.
(As an aside, I'll grant that even a high-throughput microkernel seems likely, to me, to have a lower throughput relative to a more tightly-coupled monolithic kernel. That's just one of the architectural trade-offs involved here.)
As I see it, there are technical (e.g. hardware drivers, precompiled proprietary binaries) and social (e.g. relative lack of QNX expertise = $$, proprietary licensing) reasons for many people to choose one of the more popular OSes, running monolithic kernels.
I can't say what's technically superior, but even if QNX was, nobody's a dumbass for choosing something else -- and I don't think the fellow you're replying to was saying so. There are, of course, reasons and trade-offs.
An OS's adoption is a social thing, and proves nothing technical about it. If it wasn't for licensing (a social problem), BSD might have taken off, and Linux been comparatively marginalized.
Entertain for a moment the idea that someone's rationale in choosing to deploy a given OS lies deeper than the 1-dimensional rubric you're suggesting, and instead may have something to do with questions like "how easy is it going to be to support this?" and other network effects.
You're getting all red in the face using some really dubious arguments to back you up here.
(response to 2:) Er, so bypassing the microkernel for the vast majority of your work is a vindication of the "microkernels are just better" line is it?
I know it's not. And I know about QNX, at least (the others are new to me).
And I know that you didn't claim they are mainstream, so we may be quibbling about where we draw lines around the word "widely". But...
What's the installed base of systems running QNX, say? (Throw in the others if you wish.) Estimates are acceptable, too, if you don't have hard numbers.
It's worth looking not only at how many, but at what. They're in vehicles, medical devices, industrial automation, military and telecom. Those are all areas where blunders lead to loss of lives, not just annoying downtimes. Insofar as infotainment and telematics are concerned, QNX estimated a 60% market share in 2011, so it's likely your car runs QNX.
OK, if it's in cars (even if only one CPU per car, or even only in high-end cars), then yes, that certainly is "widely used". (In terms of numbers shipped, not necessarily in terms of "design wins" - but then, Windows doesn't have that many "design wins" either.)
So now you're moving the goalposts with "design wins". Just what are the design wins of a SysV Unix clone like Linux, pray tell? It's hard not to be on the offensive when you seem to beg for it. Where did the Windows comparison come from?
The design wins, of course, should be obvious to anyone willing to do a modicum of research.
Nope, not moving the goalposts. Re-read my previous post.
To clarify: Windows is, by any definition, both "mainstream" and "widely used". Yet it has very few "design wins". Therefore, the argument that cars are "only a few design wins" cannot be used to say that QNX, say, is not widely used or mainstream, since Windows is obviously mainstream and widely used.
> It's hard not to be on the offensive when you seem to beg for it.
You need to re-calibrate your sensitivity. You seem eager to take offense at nearly everything. Very little of it is worthy of your outrage.
Every iPhone now shipping, and I believe most Android devices too, are running their main operating system kernel as a layer on top of an L4 kernel. The L4 layer mostly handles low-level security and the cell modem and stays out of the way except for that. Still, I think that should certainly count as widely adopted.
Correction: in Apple's case, the Secure Enclave, which runs L4, does not run on the application processor but on a separate ARM processor integrated on chip. Competitors tend to use TrustZone and hypervisor mode for this, but Apple currently uses them only for kernel patch protection rather than anything more important.
Not that that changes the core fact that Apple is shipping L4.
Your comment would imply that Javascript may not be the most technologically advanced solution for execution on remote clients. This is obviously wrong, so by implication the Software Industry DOES adopt the most technologically superior solution by default.
There are zillions of academic solutions that if implemented properly would be better than the industrial version. Academics are just notoriously bad at building real-world systems. I think this is mostly because it's a waste of time and money as far as publications are concerned.
I didn't say people in industry weren't smart. There's plenty of stuff that gets published at conferences where the industry guys are like, we did that 15 years ago.
My argument is that there's lots of great-in-theory but untested-in-practice stuff in academia, and that you can't discount something altogether just because it's untested. It's hardly fair to compare the output of a few grad students over a few years with all of the effort that goes into a major industrial product.
And anyway, the architecture of Linux originated in academia too.