
Not knowing much about OS design, is there any inherent reason why operating systems built on microkernels seem to be so unsuccessful? They always seem really elegant, but are never widely adopted (with the possible exception of Mach -> OS X).



I tend to think that Linus was right and microkernels are in reality a bad design.

Case in point: Minix 1.5 (which I hacked on before Linux came along).

Minix 1.5 has two "daemons", mm and fs, which implement memory management and the filesystem respectively. Now consider process creation and loading (fork and exec). Creating and loading a process intimately involves both mm and fs, so in Minix 1.5 the program sends a message to mm [IIRC], which sends a message to fs, and the two daemons have to coordinate with each other. This makes it a lot more complex than if there were just one daemon (i.e. a monolithic kernel).
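
To make that concrete, here is a rough sketch of what the fork() stub looks like under a message-passing kernel. The struct layout, sendrec() and the process numbers are simplified stand-ins, not actual Minix 1.5 source:

    /* Illustrative sketch only; not actual Minix 1.5 source. */

    #define MM   0            /* process number of the memory manager */
    #define FORK 2            /* message type requesting a fork       */

    typedef struct {
        int m_source;         /* which process sent this message */
        int m_type;           /* request code, reused as reply   */
    } message;

    /* Kernel IPC primitive: send a message, block until the reply. */
    extern int sendrec(int dest, message *msg);

    int fork(void)
    {
        message m;
        m.m_type = FORK;

        /* The caller only talks to mm, but behind the scenes mm must
         * message fs to duplicate the child's file descriptors: two
         * daemons coordinating where a monolithic kernel would simply
         * update both tables in one place. */
        if (sendrec(MM, &m) < 0)
            return -1;
        return m.m_type;      /* reply carries the child's pid */
    }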

Another example: if mm or fs dies, your OS dies. You can't restart one or the other because there's so much process state spread across the two daemons. So the claim that microkernels are more resilient because you can restart daemons seems to be nonsense (though I should say that QNX can apparently restart some(?) components transparently).

Nevertheless, it's not all roses for monolithic kernels either. There's no memory protection between subsystems, and they're usually written in deeply unsafe languages like C. Exokernels might be the answer to this, because they have monolithic qualities (fast calls and shared state) but keep virtually everything running in userspace, so you can use sane, safe programming techniques.


This shows nothing more than a badly implemented API. POSIX is a bad API for anything modern (distributed).


Since the '80s, operating systems have been a commodity, and in a commodity market any elegant or premium product ends up marginal.

People want their operating systems to manage disks, processes, CPUs, networks, and peripheral hardware: unless the operating system totally fails at these basic tasks, nobody will pay any attention to how the kernel was implemented. There are some folks who are interested in performance, and some who are interested in stability, but virtually nobody who is, as a user, interested in elegance.

If you just buy a car to get from point A to point B, do you care whether the engine has a carburetor and a purely mechanical ignition system, or an engine control unit that electronically keeps its high-pressure fuel injection and computer-controlled ignition in sync, holding the engine at its optimal parameters at all times and avoiding knock?

Given the commodity nature of operating systems and markets in general, it's really great that people like Linus Torvalds and his fellow gurus keep making their kernel better and better. To most people it sounds like really gritty, mundane work.


It's because Richard Gabriel was right: worse is better.

Monolithic kernels are good enough, perform well enough, and are available right now with a large enough body of software.

Being Unix-ish is a great thing as it allows you to be creative with the implementation while presenting a familiar API to the applications. Netscape ran on about 30 different platforms (27 of them more or less identical under the hood because they were Unix ports).

I see a bright future for microkernels and other architectures that can present a Unix-like appearance to programs, but only after we get rid of Windows.


The OS X version of Mach has subsumed so much back into the kernel that it can barely be considered a microkernel any more.

The thing is that for most systems, microkernels are a performance liability (it's possible to make them perform well, but it's very hard to do while keeping memory protection), and with lots of hardware they tend not to be a big safety win: if you can wedge the hardware with bad commands, it doesn't matter whether the commands originated in userspace or kernelspace.
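
To get a feel for the cost, here is a toy user-space experiment (assumed names, nothing from any real kernel): a direct function call stands in for a monolithic in-kernel call, and a pipe round trip between two processes stands in for microkernel message passing. Absolute numbers vary wildly by machine; only the ratio matters:

    /* Toy illustration of the IPC tax; compile with: cc -O2 ipc_toy.c */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERS 100000

    static int direct_read(char *buf) { memset(buf, 'x', 64); return 64; }

    static double ns_per_op(struct timespec a, struct timespec b)
    {
        return ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / ITERS;
    }

    int main(void)
    {
        char buf[64];
        int to_srv[2], from_srv[2];
        long total = 0;
        struct timespec t0, t1;

        pipe(to_srv);
        pipe(from_srv);

        if (fork() == 0) {                    /* the "file server" */
            for (int i = 0; i < ITERS; i++) {
                read(to_srv[0], buf, 1);      /* receive request */
                write(from_srv[1], buf, 64);  /* send reply      */
            }
            _exit(0);
        }

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++)
            total += direct_read(buf);        /* "monolithic": plain call */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("direct call:     %8.1f ns/op (checksum %ld)\n",
               ns_per_op(t0, t1), total);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            write(to_srv[1], "r", 1);         /* "microkernel": request... */
            read(from_srv[0], buf, 64);       /* ...block for the reply    */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("pipe round trip: %8.1f ns/op\n", ns_per_op(t0, t1));
        return 0;
    }

On typical hardware the round trip comes out orders of magnitude slower than the call. Real microkernel IPC (the L4 family especially) is far cheaper than a pipe, but the structural cost of crossing address spaces on every operation is exactly the liability described above.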


This happened to the Windows NT microkernel too: they just kept shoving more and more stuff back into kernelspace for performance reasons, starting with graphics drivers in NT4 and going from there. I think they may have yo-yo'd on graphics drivers later for stability reasons, but I've taken my eye off Windows since I stopped using it.


Unsuccessful on the desktop, perhaps; QNX, for instance, is a microkernel and is quite successful. On the desktop there's a lot of inertia: you need apps (a chicken-and-egg problem). And as wmf said, what you often get is a poor Unix on top of the microkernel, so there isn't even a compelling reason to use it.


I think the main reason is poor marketing. Teams built microkernels, but of course a microkernel is not useful on its own, so they had to build a complete OS. Being already tired (or near graduation) after building the kernel itself, they simply ported Unix to run on top of their microkernels, which yielded a slower OS with no new features. There are benefits to microkernels, but they were all hidden or wasted by Unix. By this time, patience (and thus funding) for microkernels had run out, and researchers had to find something else to work on.



