The Linux Edge (1999) (corememory.io)
67 points by chmaynard on April 19, 2019 | 47 comments



I find that remark towards the end of the article interesting: that he'd be happy if someone came along and did the cool things Linux can do, but leaner and meaner, by getting rid of the cruft accumulated from the decades-dead architectures on which Linux was born and must still run. From his remarks earlier in the article, that's more or less what he did with regard to Unix.

I suppose we may see something like that eventually, but this article is now 20 years old. Isn't that about the same amount of time between Unix and Linus beginning to work on Linux? I know there are various new OSes in the works, but nothing has caught on yet. Then again, it's not like Linux caught on immediately either ("just a hobby, won't be big and professional like gnu"). Maybe in another 5 or so years the systems hackers, professionals and amateurs alike, will be flocking around NewOS, which is so much better and more fun to hack on than crufty old Linux.


I'm not convinced we'll have anything that follows Linux. The GPL means people can't rip it off and fork a closed, special version needed for their hardware; as a result, most ports end up in mainline eventually, so you can just kind of expect it to work.

This also means that people doing new things know they'll have the most impact if they do it on Linux, and so that's where they do it (plus, it being everywhere means they're likely to be familiar with it).

Personally, I think if something new were to come along it would have to have something that makes it interesting socially, and I'm not sure there's a lot of room for improvement on that front. It's not impossible, just a little difficult to imagine.


I think Linus echoes the idea of Linux's architecture allowing it to become its own successor in these paragraphs:

"By constructing a general kernel model drawn from elements common across typical architecture, the Linux kernel gets many of the portability benefits that otherwise require an abstraction layer, without paying the performance penalty paid by microkernels.

By allowing for kernel modules, hardware-specific code can often be confined to a module, keeping the core kernel highly portable. Device drivers are a good example of effective use of kernel modules to keep hardware specifics in the modules. This is a good middle ground between putting all the hardware specifics in the core kernel, which makes for a fast but unportable kernel, and putting all the hardware specifics in user space, which results in a system that is either slow, unstable, or both."

So the combination of a kernel model with interfaces reflecting good system design choices, and extensibility through a well-defined kernel module interface, allows Linux to migrate to new hardware designs and take advantage of new components, while continuing to leverage all of the Linux-compatible code written over the years. That is a very difficult combination of advantages for a new system to overcome.
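
To make that module boundary concrete, here's a minimal sketch of a loadable module against the standard module macros (the name and messages are just placeholders). A real driver would additionally register with a subsystem, e.g. pci_register_driver() or platform_driver_register(), inside its init function, which is exactly how the hardware specifics stay confined to the module:

  /* hello.c - minimal sketch of a loadable kernel module.
   * Build out of tree with a one-line Kbuild entry: obj-m += hello.o
   * Load with `insmod hello.ko`, unload with `rmmod hello`.
   */
  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/printk.h>

  static int __init hello_init(void)
  {
      pr_info("hello: module loaded\n");
      return 0;    /* a nonzero return aborts the load */
  }

  static void __exit hello_exit(void)
  {
      pr_info("hello: module unloaded\n");
  }

  module_init(hello_init);
  module_exit(hello_exit);

  MODULE_LICENSE("GPL");
  MODULE_DESCRIPTION("Minimal example module");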


Ironically, Linux is becoming more microkernel-like with support for user space drivers and file systems. :)
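
FUSE is probably the clearest example of that trend: the kernel forwards VFS operations over /dev/fuse to an ordinary userspace process. A rough sketch of a read-only filesystem against libfuse 3 (the file name and contents are arbitrary; compile with `gcc fs.c $(pkg-config fuse3 --cflags --libs)`, then run `./a.out -f /tmp/mnt` and `cat /tmp/mnt/hello`):

  #define FUSE_USE_VERSION 31
  #include <fuse.h>
  #include <errno.h>
  #include <string.h>
  #include <sys/stat.h>

  static const char *greeting = "hello from user space\n";

  /* Report a root directory containing a single read-only file. */
  static int fs_getattr(const char *path, struct stat *st,
                        struct fuse_file_info *fi)
  {
      (void)fi;
      memset(st, 0, sizeof(*st));
      if (strcmp(path, "/") == 0) {
          st->st_mode = S_IFDIR | 0755;
          st->st_nlink = 2;
          return 0;
      }
      if (strcmp(path, "/hello") == 0) {
          st->st_mode = S_IFREG | 0444;
          st->st_nlink = 1;
          st->st_size = strlen(greeting);
          return 0;
      }
      return -ENOENT;
  }

  static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                        off_t off, struct fuse_file_info *fi,
                        enum fuse_readdir_flags flags)
  {
      (void)off; (void)fi; (void)flags;
      if (strcmp(path, "/") != 0)
          return -ENOENT;
      fill(buf, ".", NULL, 0, 0);
      fill(buf, "..", NULL, 0, 0);
      fill(buf, "hello", NULL, 0, 0);
      return 0;
  }

  static int fs_read(const char *path, char *buf, size_t size, off_t off,
                     struct fuse_file_info *fi)
  {
      (void)fi;
      size_t len = strlen(greeting);
      if (strcmp(path, "/hello") != 0)
          return -ENOENT;
      if ((size_t)off >= len)
          return 0;
      if ((size_t)off + size > len)
          size = len - (size_t)off;
      memcpy(buf, greeting + off, size);
      return (int)size;
  }

  static const struct fuse_operations fs_ops = {
      .getattr = fs_getattr,
      .readdir = fs_readdir,
      .read    = fs_read,
  };

  int main(int argc, char *argv[])
  {
      return fuse_main(argc, argv, &fs_ops, NULL);
  }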


Graphics drivers have definitely gone the other way. Those used to be in user space and aren't anymore (although not in a microkernel sense; it was more in the DOS "go leave me alone and let me just poke crap into registers" sense).


On Windows, graphics drivers are now userland because historically they were the top cause of BSODs.


That's a serious problem on Windows because the only people who can fix it are the driver's authors. On Linux, anyone has access to the source and can fix really bad driver bugs.


AFAIK only part of the graphics driver is in userland, with the performance-critical part still in kernel space.


> if something new were to come along it would have to have something that makes it interesting socially and I'm not sure there's a lot of room for improvement

The Innovator's Dilemma model of technology disruption suggests that the "next Linux" will be worse in most ways but better for some narrow use case. Perhaps bare-metal containers or unikernels on servers and something like Google's Fuchsia or some RTOS for mobile phones and IoT devices?


> RTOS for mobile phones

The market has clearly demonstrated that it doesn't care about phones being real-time. They used to be considered critical systems, from what I understand, and many did actually run real-time OSes. Now we've got Android and iOS (Android is about as far from an RTOS as you can get; it's almost impossible to write an app that isn't guaranteed to just be killed by the OS at some point).


The RTOS stuff happens in the handset’s baseband processor. The user-facing GUI environment has no hard real-time constraints.


> better for some narrow use case

The examples you gave boil down to 3 narrow use cases:

- The retention project / intellectually stimulating activity of writing your own operating system.

- Licensing.

- Some really specific hardware, where in some sense you're building an operating system in name only. Might as well just call it an application.

I think what OP means by "interesting socially" is that the new Linux isn't going to succeed on any technical or economic merits.

My guess would be the next Linux innovates in some way concerning national or ethnic identity, environmental conscientiousness, or radical politics.

For example, an operating system written in a non-Latin alphabet programming language. Or an operating system that is only delivered for solar-powered hardware. Or whatever is going to be the radical anarchism or communism of the software world (if you think GPL / free software is that, it ain't).


Linux isn't perfect when it comes to hardware. Google has used it for both Android and Chrome OS, and the issues that seem to keep showing up are related to updating and testing.

In fact, from my personal experience, Linux hardware support, especially on laptops, is very hit or miss.

I'm guessing this is because many hardware companies don't upstream their code due to patents and other legalities.

If a project comes along that can solve those issues, I can see Linux having a GCC/LLVM moment.


The update issues are because Google is circumventing the mainline kernel and relying on hardware manufacturers to provide updates for the entire OS so they can keep the drivers closed source. This benefits Google as well, because it forces everyone to use their special user space, which gives them a lot of control.

If instead drivers were contributed back upstream this wouldn't be a problem.

Personally, it's been a while since I've had a laptop with serious driver problems on Linux. Some discrete cards from Nvidia cause problems, and there's some weirdness with Secure Boot, but other than those I really haven't seen what you're talking about.


You can use libhybris to provide support for AOSP/Android kernels and vendor components within a mainline userspace. With Project Treble, you could even have a generic userspace image be compatible with a sizeable fraction of "Treblized" devices - there are a few variations for 32-bit vs. 64-bit ARM, and for changes in the system image format, but other than that the Treble images are generic.


> mainline userspace.

I don't think you quite understand what I'm saying. Mainline here refers entirely to the kernel. There is no "mainline Linux userspace" (unless you count a handful of Linux-specific userspace tools for stuff like ext4 filesystem creation).


> Personally I think if something new were to come along it would have to have something that makes it interesting socially and I'm not sure there's a lot of room for improvement on that front. It's not impossible it's just a little difficult to imagine.

Google Fuchsia (BSD-licensed) is planned to succeed billions of Linux-based Android installations.

Linux Foundation now favors BSD over GPL.


The Linux Foundation is made up of a number of companies who are told to behave by the GPL.

IMO, if it weren't for the GPL we would see very few of the contributions made to Linux get handed back to the community. That it more or less runs everywhere is part of what makes it interesting (and was a major point of TFA.)


There are some startups making new kinds of computing hardware. The limiting factor in their success has been software to go with it. There is an opportunity here, maybe.


I think history has proven that the microkernel people were right from a conceptual point of view, but maybe they had some wrong ideas about how to implement one and at what level to put the abstractions.

The whole mess of VM hosts, hypervisors, containers, and container images is basically a messy half-baked microkernel architecture. A kernel running under another kernel with thin I/O abstractions (virtio, etc.) is sort of like a partition within a microkernel. VM hypervisor hardware pass-through is like allowing a microkernel service to drive one system component.

The fact that the market created all this shows that one monolithic kernel is insufficient from a security, permission management, and configuration management perspective, at least if one assumes the limitations of Unix-like permissions and isolation.

I'm not necessarily saying Linus was wrong. Linus was right in the 1990s and from a "get it working and ship it" point of view.
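
To make the container half of that comparison concrete: the "partition" is really just a few kernel primitives stacked together. A rough sketch using clone() with namespace flags (needs root or a user namespace; the hostname and shell are arbitrary choices here):

  #define _GNU_SOURCE
  #include <sched.h>
  #include <signal.h>
  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Child runs in its own UTS and PID namespaces: hostname changes are
   * invisible to the host, and it sees itself as PID 1. */
  static int child(void *arg)
  {
      (void)arg;
      sethostname("partition", 9);
      printf("inside: pid=%d\n", (int)getpid());   /* prints pid=1 */
      execlp("/bin/sh", "sh", (char *)NULL);
      perror("execlp");
      return 1;
  }

  int main(void)
  {
      static char stack[1024 * 1024];
      /* clone() with CLONE_NEW* flags is the core primitive container
       * runtimes build on; cgroups, mounts, and seccomp add the rest. */
      pid_t pid = clone(child, stack + sizeof(stack),
                        CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
      if (pid < 0) {
          perror("clone (try as root)");
          return 1;
      }
      waitpid(pid, NULL, 0);
      return 0;
  }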


We might see containers running closer to the metal as market interest and demand drum up the money to do it.


Well, the microkernel delusion is about the same as the microservices delusion. Modularity is a good thing when it applies to code, and either irrelevant or detrimental when one tries to apply it to executables.


I don't think this is right; the point of microkernels is that kernel space vs. executables is a desirable protection boundary. It's not obvious that we are consistently making the right choices as to what to run in kernel vs. user space, and sometimes moving a subsystem across the divide is the sensible choice, e.g. with userspace filesystems or even userspace network stacks.


Some Amiga diehards await the day when we all realize that virtual memory and process isolation are passing fads, and go back to setting interrupt vectors and walking kernel data structures directly in RAM like real programmers did on the Amiga back in the day. What they don't realize is that Amigas crashed a lot more than they remember.

If you value speed above all else, modularity is detrimental. If you value security or robustness, modularity helps you design components in such a way that one component's failure doesn't bring down the whole system, or compromise data being worked on by other components.


> Some Amiga diehards await the day when we all realize that virtual memory and process isolation are passing fads, and go back to setting interrupt vectors and walking kernel data structures directly in RAM like real programmers did on the Amiga back in the day.

You mean, like a unikernel? I don't think anyone has tried to write a modern desktop stack that runs within a unikernel, as the Amiga OS did (parts of the desktop widget interface were even hardwired in ROM), but it can't be that hard if you can get useful language-based guarantees.


Well, there is AROS, but language guarantees don't stop a rogue binary from bringing the whole system down. For that you want ISA guarantees, and then you're talking about a modern OS. Unikernels only make sense when the entire application of the system is defined AOT.


I know I've seen a design for a mostly-single-address-space OS that uses a VM just for running untrusted binaries. I think it was from someone working on one of the EROS successors, but I'm not sure.

I've also seen proof-carrying code proposed as an alternative; binaries must carry proof that they do not exceed certain capabilities and the proof is verified before they are run.


That really doesn't sound so bad if the applications are small and everyone can see the source code.

That's not the way Amiga software was, though, heh.


Keep in mind Linus's thoughts on microkernels made sense in 1991, with first-generation microkernels.

Then Liedtke's L4 happened, spawning the 2nd generation based on the principle of minimality. This changed the situation.

We're currently in the 3rd generation (with capabilities), its main representative being seL4, and the current landscape bears no resemblance to that of 1991.

With Google's financial muscle behind Fuchsia, operating systems will only get more interesting from here on.


> In fact, this made me think that the microkernel approach was essentially a dishonest approach aimed at receiving more dollars for research. I don’t necessarily think these researchers were knowingly dishonest. Perhaps they were simply stupid. Or deluded. I mean this in a very real sense. The dishonesty comes from the intense pressure in the research community at that time to pursue the microkernel topic. In a computer science research lab, you were studying microkernels or you weren’t studying kernels at all. So everyone was pressured into this dishonesty, even the people designing Windows NT.

A pretty accurate description of fads and fashions in academia.


Intel's management processors run Minix, the Android bootloader is LK, and Fuchsia may soon displace Linux on Android. It's not necessarily bad for academia to be very early in a cycle.


Torvalds is so kind and generous, just a great human being:

"I don’t necessarily think these researchers were knowingly dishonest. Perhaps they were simply stupid. Or deluded."


Yes, he merely gave us the greatest operating system of human history for free. Such an incredible lack of generosity.

By the way, I read this quote as a sincere attempt to understand what was going on in the minds of some people. I am afraid the actual explanation may be a bit more prosaic. A monolithic kernel might, oh horror of horrors, be quite mundane. And where is the tenure in that?


Plus -- git!

I'm a confirmed Linux user [switched from macOS], but to me it's clear that git dwarfs Linux in terms of sheer _significance_.

Sure, Linux is what runs most big systems and that is really cool, but if Linux hadn't come around, there was already BSD [which is also good, and which a lot of my buddies were advocating in the mid-to-late 90s]. But git is quite unique and has/had no easy substitute. I don't [personally] know a single technology person who doesn't use or interact with git daily. And the benefits of reliable, free/open, and distributed version control are just monumental for the average working code stiff.


> But git is quite unique and has / had no easy substitute.

There was Monotone, which was git's main inspiration. If it weren't slow as molasses, we might be using it today instead of git.


I'd say the main inspiration was BitKeeper, which is what Linus used until free usage was withdrawn by its author.


Mercurial appears to have come out at about the same time as git and covers almost exactly the same use cases.


Plus: non-specialists have a chance of being able to use it proficiently, without having to know its internals.


> Yes, he merely gave us the greatest operating system of human history

While I like Linux, that may be overstating the case.


Essentially, Linux was a copy of UNIX (yeah, I know MINIX) so...


A copy of UNIX written by Linus, a MINIX user and a reader of Operating Systems: Design and Implementation.


Which would have gone nowhere without those getting paychecks from IBM, Compaq, HP, Intel, Oracle.


He has been quite open about being too abrasive in the service of directness/bluntness in the past.

At the same time, the OS research field really was like this at that time. At MIT, for example, it felt a bit like “which flavor of exokernel would you like to study?”


"The first very basic rule is to avoid interfaces."

...

"But from a design point of view this is also the right thing to do: keep the kernel relatively small, and keep the number of interfaces and other constraints on future development to a minimum."

This is exceptionally poor writing. The second sentence is what he meant to say, which is completely different from the first sentence. An operating system is mostly just a bunch of interfaces for user space programs to use, so they don't have to talk to hardware directly. An OS without interfaces is pretty much an oxymoron.

Limiting the number and types of interfaces, of course, is a very sensible idea. But I was very confused for the several paragraphs between those two sentences about what Linus meant.


It seems more like questionable reading and strange selective quoting than poor writing. The writing as written is pretty standard: the first sentence is the thesis and the rest of the paragraph supports it:

The first very basic rule is to avoid interfaces. If someone wants to add something that involves a new system interface you need to be exceptionally careful. Once you give an interface to users they will start coding to it and once somebody starts coding to it you are stuck with it. Do you want to support the exact same interface for the rest of your system’s life?


He meant to say "Avoid unnecessary proliferation of interfaces", but said "avoid interfaces" without any qualifiers, which makes zero sense.

Sure, I figured out what he meant a few paragraphs later, but his writing made it much more difficult than it needed to be.


I think you might have just missed it on a quick scan when you first read it. The broader notion is in the paragraph before. The paragraph with the sentence itself is specifically about that: it lays out a very standard 'design by contract' sort of argument. It's not true that there are 'no qualifiers'; the immediate surrounding context is plenty of qualification.




