We Need More Operating Systems (acthompson.net)
119 points by Alfred2 on Jan 1, 2013 | 95 comments



It is interesting how folks who come from a commercial exposure to OSes hated UNIX. I admit I was one of them; I was a big TOPS-20 fan and thought RSX-11M had a lot going for it, Unix not so much. Then I went to work for Sun Micro and later joined the kernel group there.

Operating systems are 'easy' in the sense that a small group can create them and pretty much do all the legwork needed to get them into a product. Building hardware is more complex: it involves coordinating manufacturers, part suppliers, and the software to run it.

What happens then is that if you're both a hardware manufacturer and an operating system supplier, you can tell your designers to make the hardware like you want, you can write the libraries you need in the OS, and everyone leverages the work of everyone else. In the Windows world the 'duopoly of Microsoft and Intel' led to an artificial relationship between hardware manufacturer and OS vendor, but the effect was the same.

But if you're not a hardware supplier, you spend an inordinate amount of your operating system resources trying to deal with random bits of hardware you might want to use. So your OS makes as few assumptions about the hardware as possible and keeps things as simple as it can, to avoid incurring a huge overhead in getting new hardware into the system. It was this latter feature of UNIX which I really didn't appreciate until I went to work at Sun.

I agree with Thompson that it would be nice to have more general purpose operating systems. They are out there: Spring, Chorus, eCos, VMX, QNX, etc., but generally they target a more specialized niche, whether it's embedded or research or some level of security. I'd love to see more accessible OSes as well.

One of the projects I started a while back has been an OS for ARM systems that is more like DOS or CP/M or AmigaOS than UNIX or Windows. The thought was something simple that was straightforward to program, not your next cell phone OS or something that you'll need a couple of gigabytes of memory to run, just something you could write and deploy code on for fun or educational purposes. It has been fun to look at "apps" like this. Imagine something that had a shell that was sort of like the Arduino Processing UI except self-contained. But again, it's not general purpose. It's just a tool.

I think the next wave of 'real' OSes is going to be something very different from what we think of today, something that assumes there is a network behind it and that the rest of itself is out there somewhere as APIs. Of course civilization may collapse first :-)


> I think the next wave of 'real' OSes is going to be something very different from what we think of today, something that assumes there is a network behind it and that the rest of itself is out there somewhere as APIs.

When coming up with design goals for my OS (http://daeken.com/renraku-future-os) there were two things that I conceived of that naturally led to a third: 1) Everything is an object (you can see the Plan 9 inspiration there, just brought to a different level), 2) All objects are network-transparent. Those lead to 3) Your computer is not just your computer, but a billion services that just 'exist' on it.

Because it was 100% managed code with these strict APIs, it was theoretically possible for the following scenario to work: at work, you dock your tablet at your desk, and your applications are seamlessly split between the CPU(s) in the tablet and the CPU(s) in the dock; same with memory. You can work on it as you see fit, then bring your tablet home. On the way home, you still have access to everything; it's just running on a more limited platform. When you get home, you could just transfer your running state to your 'house computer', and access everything from whichever interface you happen to be nearest at the time.

As much as I hate the phrase, "the network is the computer" is going to win in the long run. Just a matter of time.


Are you planning to go all out on this at some point?


I spent about 6 months working nearly full time on it, but there are some fundamental flaws with the implementation. Eventually I'll reboot it, but I have no idea when that'll be.


Well, in the Project DOE world, which spawned Tivoli Systems and a bevy of object brokers, one of the challenges was the subroutine. Or more precisely, the semantics of making a 'call' from one context to another, where anything can happen between point A and point B. The sorts of challenges were things like "at most once" semantics, where the programmer could assume that if a function call completed it did so only one time on the destination, or "receiver makes it right" data exchange, where the party receiving the data is responsible for unmarshalling it into something intelligible. Sun's RPC layer was very lightweight; HP/Apollo's (and later Microsoft's, under the same guy) was quite heavyweight.
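To make "at most once" concrete, here is a toy sketch (nothing like Sun's actual RPC code): the server remembers the request IDs it has already executed and replays the cached reply for a duplicate, so a retransmitted call never runs twice.

    /* Toy sketch of "at most once" call semantics (not Sun RPC): the server   */
    /* keeps a table of request ids it has already executed and replays the    */
    /* cached reply for a duplicate, so a retransmitted call never runs twice. */
    #include <stdio.h>

    #define MAX_SEEN 1024

    struct reply { int value; };

    static unsigned long seen_ids[MAX_SEEN];
    static struct reply  seen_replies[MAX_SEEN];
    static int           seen_count;

    static int find_seen(unsigned long id)
    {
        for (int i = 0; i < seen_count; i++)
            if (seen_ids[i] == id)
                return i;
        return -1;
    }

    /* The remote procedure itself; pretend it has side effects. */
    static struct reply do_work(int arg)
    {
        struct reply r = { arg * 2 };
        return r;
    }

    static struct reply handle_call(unsigned long request_id, int arg)
    {
        int i = find_seen(request_id);
        if (i >= 0)
            return seen_replies[i];          /* duplicate: replay, don't re-execute */

        struct reply r = do_work(arg);
        if (seen_count < MAX_SEEN) {
            seen_ids[seen_count]     = request_id;
            seen_replies[seen_count] = r;
            seen_count++;
        }
        return r;
    }

    int main(void)
    {
        struct reply a = handle_call(42, 10);    /* first delivery: executes */
        struct reply b = handle_call(42, 10);    /* retransmission: replayed */
        printf("%d %d\n", a.value, b.value);
        return 0;
    }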

Process migration was another interesting bit, and something I think we'll see more of in the future. At Blekko we don't run a program (called a mapjob) against a petabyte of data; instead we send that program to all of the 'shards' and run it in all those places at once. This 'networking as execution' is one of the things I experimented with early on in Java. We had network management software which needed to manage bits of gear, and I devised a set of classes which presumed a JVM and a basic set of classes on the target; packets were then simply byte codes to be executed at the target. I had a kind of hacked-up JVM at the time which used a capabilities model rather than the security manager model; since you were essentially injecting code into a remote device, you really wanted to be sure it couldn't do anything unexpected!

Basically having objects that can serialize themselves and move from station to station depending on what they are trying to do is a useful abstraction for some problem sets.


To both you and Daeken: fascinating stuff, this. I really hope to see something along these lines hit alpha at some point.

At some level I am deeply dissatisfied with the ways we approach security, clustering, and the other bolt-ons to the models we are using today. Unix gets the security bit partially right and is hopeless at clustering, but of all the OSes with more than trivial levels of adoption it is still the best there is in both these arenas. There is definitely a window of opportunity to do this better; I think the years of exploits and the patchy style of spreading work over multiple machines have prepared the ground somewhat.

At the same time, if giants like Google and Amazon shy away from OS research then you have to wonder what it is that they know that the rest of us don't.


Just a note: you would be mistaken to say that Google has shied away from OS research, but beyond that I cannot say a whole lot.


> I think the next wave of 'real' OSes is going to be something very different from what we think of today, something that assumes there is a network behind it and that the rest of itself is out there somewhere as APIs.

I'd be quite interested to hear more about this, if you have more ideas and care to elaborate.


Understand that I'm a networking guy from way back so I tend to see in that light :-)

So there are a number of forces at work in the market: one is the decreasing cost of compute and storage, another is the increasing availability of wide-bandwidth networks. A seminal change in systems architecture occurred when PCI Express, a serial interconnect, became the dominant way to connect "computers" to their peripherals. Now sprinkle in the 'advantages' of keeping things proprietary, with the 'pressure' of making them widely used, and those bread crumbs lead to a future where systems look more like a collection of proprietary nodes attached to each other by high-speed networking than the systems we use today.

From that speculation emerges the question, "What does an OS look like in that world?" and so far the only thing I can think of that seems to work is a bunch of 'network APIs.' Let's say you build your "computer" with pieces that are connected by 10Gbit serial pipes and a full-bandwidth crossbar switch. Your 'motherboard' is the switch; the pieces are each completely 'proprietary' to the manufacturer that built them, but the API is standard. This is not a big change for storage (SAS, FC, and SCSI before it have always been something of a network architecture), it's becoming less uncommon for video, and why not compute and memory? Memory management is then a negotiation between the compute unit and the memory unit over access rights to a chunk of memory.
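Purely as speculation about what such a 'standard API' might look like on the wire (this is not any real protocol, just an illustration), the compute-to-memory negotiation could be as mundane as fixed-format lease messages:

    /* Hypothetical wire format for a compute node negotiating access rights */
    /* to a chunk of memory on a memory node.  Illustrative only.            */
    #include <stdint.h>

    enum mem_op   { MEM_LEASE_REQUEST, MEM_LEASE_GRANT, MEM_LEASE_REVOKE };
    enum mem_mode { MEM_READ_ONLY, MEM_READ_WRITE };

    struct mem_lease_msg {
        uint32_t op;         /* one of enum mem_op                     */
        uint32_t mode;       /* requested access rights                */
        uint64_t length;     /* bytes requested                        */
        uint64_t lease_ms;   /* how long the grant remains valid       */
        uint64_t region_id;  /* assigned by the memory node on a grant */
    };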

The Infiniband folks touched on a lot of these things 10 years ago but they were a bit early I think and they aimed pretty high. But they have some really cool basic technology and show how such a system could be built. Trivially so if there were a 'free API' movement akin to the FOSS movement.

This removes the constraint of size as well. And if you're cognizant of the network API issues amongst your various pieces, you get to the point where your 'OS' is effectively an emergent property of a bunch of nodes of various levels of specialization co-operating to complete tasks. Computers that can be desktop or warehouse sized, and run the same sorts of 'programs'.


Have you seen Plan9?

A Plan 9 system comprises file servers, CPU servers and terminals. (...) Since CPU servers and terminals use the same kernel, users may choose to run programs locally on their terminals or remotely on CPU servers. The organization of Plan 9 hides the details of system connectivity allowing both users and administrators to configure their environment to be as distributed or centralized as they wish. Simple commands support the construction of a locally represented name space spanning many machines and networks.

http://plan9.bell-labs.com/sys/doc/net/net.html


Yes, I am aware. I was at a Usenix conference where Rob Pike presented a paper on it, back when it was a bright idea out of Bell Labs. It is the curse of brilliant people that they see too far into the future, get treated as crazy when they are most lucid, and get respect when they are most bitter [1]. I was working for Sun Microsystems at the time, and Sun was pursuing a strategy known as "Distributed Objects Everywhere" (DOE), which insiders derisively called "Distributed Objects Practically Everywhere", or DOPE; it was thinking about networks of 100 megabits with hundreds of machines on them. Another acquaintance of mine has a PDP-8/S; this was a serial implementation of the PDP-8 architecture, which Gordon Bell did in the '60s, well before serial interconnects made sense. It was a total failure; the rest of the world had yet to catch up. Both Microsoft and Google have invested in this space; neither has published a whole lot, but every now and then you see something that lets you know that somebody is thinking along the same lines, trying to get to an answer. I suspect Jeff Bezos thinks similarly, if his insistence on making everything an API inside Amazon was portrayed accurately.

The place where the world is catching up is that we have very fast networks and very dense compute. In the case of a cell phone you see a compute node which is a node in a web of nodes which are conspiring to provide a user experience. At some point that box under the table might have X units of compute, Y units of IO, and Z units of storage. It might be a spine which you can load up with different colored blocks to get the combination of points needed to activate a capability at an acceptable latency. If you can imagine a role-playing game where your 'computer' can do certain things based on where you invested its 'skill points', that is a flavor of what I think will happen. The computers that do shipping or store sales will have skill points in transactions; the computers that simulate explosions will have skill points in flops. People will argue whether the brick from Intel or the brick from AMD/ARM really deserves a rating of 8 skill points in CRYPTO.

[1] I didn't get to work with Rob when I was at Google, although I did hear him speak once and he didn't seem particularly bitter, so I don't consider him a good exemplar of the problem. Many brilliant people I've met over the years, however, have been lost to productive work because their bitterness at not being accepted early on has clouded their ability to enjoy the success their vision has seen since they espoused it.


Be sure to also see http://herpolhode.com/rob/utah2000.pdf "Systems Software Research is Irrelevant" by Rob Pike


(upfront confession: I love Linux)

Ah yes, the old "why did UNIX win? OtherOS was so much better!" chestnut. Reading the article, yes, the usual suspects come up: VMS, TOPS-20, etc. What I'd like to know is why these OSes were supposedly better, or at least see some of their advantages listed (e.g., versioning filesystems[1]). I'm not questioning the pedigree of the author (someone who actually worked with almost a dozen really different OSes), but the article is almost vacuous, and entirely predictable.

I don't doubt that maybe back then UNIX was a dog (just look up the UNIX-Haters Handbook sometime), and it would be really neat to see some revolutionary research in OSes gain traction (my interests in this area include too many to list, but Plan 9, Coyotos and Oberon come to mind).

But I think a few important factors have led to Linux leading the pack: first, it's open source. Second, "worse" is better[2]. And perhaps most important, Linux will (probably) be the last OS[3].

[1] - http://en.wikipedia.org/wiki/Versioning_file_system#Files-11...

[2] - http://dreamsongs.com/RiseOfWorseIsBetter.html

[3] - http://linux.slashdot.org/comments.pl?sid=101317&cid=863...


> Linux will (probably) be the last OS[3]

As a long-time Linux user I sincerely hope you're wrong about that.

There is definitely room for something more modern.


The biggest takeaway I took from that Slashdot comment was that Linux changes and adapts: it's already more portable than just about anything out there, supporting more hardware than almost anything else, and we're already seeing the effects the comment talked about (ZFS being ported to Linux). It may not be Linux as we think of it today, but it will take something with significant advantages and a wildly different design to both take over and not be amenable to being "ported" to Linux. I agree that it would be nice to see more variety, different designs, but if the only reason to change is because Linux isn't "more modern" (if that's true), that's not good enough. Change for the better? Certainly! Change for change's sake? No thanks.


Are you talking about a new kernel or a new userspace? It's an important distinction.


I'd be very happy to see something like QNX: Unix on the outside, a robust message-passing kernel with clustering built into its heart.

QNX is much more than just a neat little kernel to do embedded stuff with.


I remember working with QNX back in the mid-90s and found it quite fun (compared to Unix at the time). A friend of mine worked at a company that made commercial X servers, and that company's fastest X server ran on QNX (yes, even with the overhead of message passing, it was faster than their Unix version on the same hardware).


Hummingbird by any chance?


No, it was Metro-X from MetroLink.


That vaguely rings a bell but I don't think I've seen it in action. Pity!

Maybe the future will bring us something like that again; I sure wouldn't mind. The naysayers' argument that 'microkernels are slow' seems to be mostly limited to those who have never actually used a microkernel-based OS for anything. What I remember was, especially for the time, nothing short of astounding performance: 30K slices/second on a lousy 486/33 was not exceptional at all. At the time most other OSes were doing 20 or so...


I guess the point of the article is not "who's the best", whether in usefulness or cool features, but "diversity is good", both to try new ideas (as you mention) and for diversity's sake.

While I too love Linux and the pragmatic approach, it would be really sad if Linux became the last OS.

Regarding the technically superior features of the mentioned OSes, I'm also curious and trying to find more info on them. ;)


That's the thing that gets me about a lot of posts (and this OP in particular). Here you have someone knowledgeable about the subject matter, but they've done little more than say "Harrumph! OSes used to be so much better! You kids don't know what you are missing!" without telling us what we are missing. Given that a large chunk of these OSes are consigned to the dustbin of history (either due to licensing, or lack of hardware to run them), it is harder to find out about them for ourselves. As the rule for writing goes: show, don't tell.


I'm not really saying the old OSes were better. In many ways we have come a long, long way. Yes, the command lines in the old OSes were better: they were easier to learn and understand, and they had features that seem to be missing in many shells today, things like autocomplete for example. But mainly I think we need more new stuff, stuff I don't know is missing. I don't want a return to the old but a look to what would be new.


If you have to use a UNIX shell again, say bash, try looking into the bash-completion package (http://bash-completion.alioth.debian.org/). I hear tell that ksh and zsh also do dynamic completions pretty well. Not sure if that's exactly what you mean, but I've found tab completion to be getting better and better.

As for the great unknown, that sounds like basic research. Would love to see more of that stuff, but not many people seem to be paying for it. Still, much research is happening at universities around the world. If I hadn't given up on my postgraduate degree, that might be where I would be today :)


I'd like to twist the title to be, "We need to solve more hard problems".

A considerable amount of what we do in computing is either (1) re-solving old problems with a new shell/language/OS/widget library or (2) plumbing between said solutions for communication. This is a fairly ridiculous state of affairs. At the risk of being a snob & starting a flamewar, I'd like to point out that node.js doesn't need to exist: it reimplements systems that already existed, and forces a contorted programming style. Why isn't that energy poured into Erlang or better libraries in C++ for event handling? The same argument can be made for a number of different technologies (why do we have F# divergent from OCaml? Why do we have D instead of a better C++? Why does MS have to keep spinning their APIs and forcing rework?).

My point is, when something is solved, it's good to move on until a compelling reason to revisit it is found. Operating systems are largely solved, but most of the research stems from the '60s-'80s, AFAICT. A number of concepts brought up back then got dropped in the rush to PCs and DOS. One that I particularly remember is the idea of multilevel security partitions. OK, maybe that's a bad idea. But it'd be good to look into it again. What about operating systems written in dynamic languages? What would that look like in a multi-CPU/multithreaded/multiuser environment? It'd likely be absurdly slow, but maybe it doesn't have to be. There's got to be some great work that was done in Genera that can be leached into the modern world.

Why, for instance, is the most common text entry solution a QWERTY keyboard? Why even a keyboard? Isn't there a better solution (voice has worked really badly for me, sadly)? Isn't there a better way than what amounts to a revamped typewriter? What about the mouse? My mouse looks suspiciously like Engelbart's mouse from '69 or so. Tablets are a great step in this regard. I don't really like them, but I have great respect for Apple, Google, and MS for trying to move beyond the WIMP paradigm. Even if in a decade we still use it, at least we'll know it's a local maximum. :-)


An OS just needs to get out of the way and let applications work so that people can actually get things done. If the OS is succeeding in allowing the required applications to be made, then we don't need more operating systems. If it is failing to cater to the specific needs, then a new operating system will be created to run those apps - see iOS, Android.

You don't build an OS and then build things for it. You build things you need to run, and then you build an OS if you can't run them without it. The article has this relationship backwards imho. In the same way, many people create frameworks and then try to fit them to problems instead of creating frameworks to solve problems.


I think the time that new operating systems are able to gain traction is right on our doorstep.

The bar for making a desktop OS has been lowered: it used to mean supporting Microsoft Office or implementing a feature-equivalent office suite, having a full-fledged UI system and many system libraries; now it just means getting WebKit ported to it.

When your operating system supports HTML and JavaScript, it can already run 90% of what most people do with their computers.

To add to that, hardware that is easier to boot (ARM SoCs) is becoming cheap enough to produce that almost anyone (Raspberry Pi, OUYA) can launch a new hardware platform.

So the distance is being closed from both sides.

I personally think the next step to really making operating system development a popular field again is a new programming language. Yes, really, I think a new C would be great: a C that has sane syntax and semantics, allows you to structure your code, and supports integrating higher-level constructs.


In what way does C fail to have sane syntax and semantics? You may not like the syntax or semantics, but they are reasonably simple and clean; I would say they are eminently sane. The portion of the C spec that defines the language (chapter 6) is actually quite straightforward compared to most language specifications.

The syntax is especially straightforward. I can find a few complaints about the semantics (mostly that there would ideally be less implementation-defined and undefined behavior), but no worse than my complaints about any other language.


The syntax may be familiar, but I've always thought it's rather arbitrary. There's relatively little over-arching design: it's like they just stuck together features until they had something relatively big. The exact selection of syntactical forms only seems reasonable because it's been around so long and influenced so many popular languages, not because it's particularly neat or simple.

Some syntactic forms are rather annoying. For example, you have different ways to write pointer types: int* foo vs int * foo. (Hmm, I am no good at escaping asterisks in HN comments.) This just causes confusion; conceptually, the * is part of the type, but syntactically it isn't! The array syntax is also rather arbitrary. Why can you do foo[0] or 0[foo] when every other similar operation is an infix operator? Why do you write int foo[] when the type of foo is really an int array (say int[]) rather than an int? Why have both an if-statement and a ternary operator? (Actually, the whole statement/expression divide is annoying especially in how it influenced a whole bunch of other languages.) There are a ton of other little inconsistencies and annoyances (let's not even talk about the C preprocessor). Ultimately, the main thing C syntax has going for it is familiarity, but that seems to be more than enough. I certainly find the syntax strictly worse than a bunch of non-C-like languages, although it's difficult to compare them because they have vastly different semantics.
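For anyone who hasn't been bitten by these, two small but legal examples of what I mean (my own illustrations):

    #include <stdio.h>

    int main(void)
    {
        int* a, b;      /* reads like two pointers, but only a is an int*;  */
        a = &b;         /* b is a plain int: the * binds to the declarator  */
        b = 7;
        printf("%d\n", *a);                  /* prints 7 */

        int foo[3] = { 10, 20, 30 };
        /* foo[0] is defined as *(foo + 0), i.e. *(0 + foo), so 0[foo] works too */
        printf("%d %d\n", foo[0], 0[foo]);   /* prints 10 10 */
        return 0;
    }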

The semantics are also a problem. As you mentioned, the overabundance of undefined behavior is certainly not good; it causes very real difficulties for even experienced programmers. Then there are a whole bunch of classic C errors that cause rampant security problems and hard-to-debug behavior. Now, some of these are inevitable due to C's providing low-level access for memory management, but others could probably be avoided with different design decisions.

Now, C is obviously a practical and widely used language. But, as ever, a language's popularity is far more a social issue than a function of its design. The main reason it seems simple and clear is that it's been around forever, it's influenced the syntax of most popular languages, and everyone has either learned C or a very C-like language.


"Why have both an if-statement and a ternary operator?"

Because one is for control of statement execution and one is an expression that returns a value. Two similar-in-purpose but different-in-meaning constructs.

In my experience, if the ?: operator is written in the same way as the if-else (condition, true, and false parts each on their own line), nested expressions are quite clear and understandable.
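For instance (my own formatting, purely illustrative):

    /* sign() written with ?: laid out like an if-else chain */
    int sign(int x)
    {
        return x > 0 ?  1
             : x < 0 ? -1
             :          0;
    }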


Declaration matches usage.
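That is, a declarator is written the way the name will later be used. A few illustrations of the principle (my examples):

    int foo[4];        /* foo[i] is an int, so the declaration mirrors the use foo[i] */
    int *p;            /* *p is an int                                                */
    int (*fp)(void);   /* (*fp)() is an int, so fp is a pointer to a function         */
    int *ap[8];        /* *ap[i] is an int, so ap is an array of pointers to int      */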


Some of C's syntax is distinctly kooky, particularly declarations. By way of illustration, my two favourite C declarations:

    const struct strange typedef volatile;
    typedef evil;
Or the following, in which the struct tag is a forward declaration, but only within the declaration, and only if the struct has not already been declared:

    void func(struct odd);
There's no good reason for the declaration syntax to have so many meaningless edge cases. It simply was not designed very well.

Then there are the other odd bits of C syntax: & and | have the wrong precedence, variables are bound inside their own initialisation forms, labels can appear in bizarre places but can't appear at the end of a compound statement, case statements can be interleaved with other control structures, etc. And there's the preprocessor.
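The precedence one bites people constantly: == binds tighter than &, which is almost never what you want when testing flag bits (my own example, not from the standard):

    #define FLAG 0x4u

    int has_flag(unsigned x)
    {
        /* Intended: (x & FLAG) == FLAG.  Actually parses as x & (FLAG == FLAG), */
        /* i.e. x & 1, because == binds tighter than &.                          */
        return x & FLAG == FLAG;
    }

    int has_flag_fixed(unsigned x)
    {
        return (x & FLAG) == FLAG;
    }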

None of this stuff has done much damage to C's success, but let's not pretend that the language is free from quirks and corner cases.


Ideally less? We should have adopted zero tolerance years ago. Letting end users' machines be pwned to save 1 ns not checking for a buffer overrun is absurdly bad engineering.


We can afford it in 2012 (in most areas of computing, at least). We couldn't in the '70s. Imagine scanning through an array with thousands of elements and trying to make it happen in real time on weak hardware; suddenly that 1 ns grows to 1 second. No can do. Lots of infrastructure is built atop C, which would need to be rebuilt with something new, supposedly better. Who is up to the task, with what motivation and agenda?
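To make the per-element cost concrete, here is roughly what the checked version of an inner loop looks like (a hand-written toy; a bounds-checked language would have the compiler emit the check):

    /* Unchecked vs. checked access: the extra branch per element is the */
    /* cost being argued about.                                          */
    #include <stdio.h>
    #include <stdlib.h>

    static int checked_get(const int *a, size_t len, size_t i)
    {
        if (i >= len) {                      /* the per-access check */
            fprintf(stderr, "index %zu out of range\n", i);
            abort();
        }
        return a[i];
    }

    static long sum_unchecked(const int *a, size_t len)
    {
        long s = 0;
        for (size_t i = 0; i < len; i++)
            s += a[i];                       /* no check, no safety net */
        return s;
    }

    static long sum_checked(const int *a, size_t len)
    {
        long s = 0;
        for (size_t i = 0; i < len; i++)
            s += checked_get(a, len, i);
        return s;
    }

    int main(void)
    {
        int v[4] = { 1, 2, 3, 4 };
        printf("%ld %ld\n", sum_unchecked(v, 4), sum_checked(v, 4));
        return 0;
    }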

What would be the desired language we have today to replace C with? C++? D? Go? Rust? Quite frankly I would place my bet on Rust, and even it (right now) relies on LLVM.

It is a horribly hard task to provide truly stable and reliable abstractions. C is one of those that has succeeded, despite its problems. It is even harder to try to replace such a well-established layer of abstraction.


Right you are. I am not at all arguing for using any of those languages. I too think automatic buffer overrun checking is the sort of thing a true C replacement should not have.

Nor should it have _even more_ idiosyncrasies like C++, or automatic memory management, or even a runtime (so no Rust).

You bring to light the painful question: who is going to do it? Language designers are far too keen on fancy features. All the C replacements that have been suggested in the past decade (Java, Go, Rust, Vala) rely on runtimes that take away true control in trade for features like type safety, memory management and security.

The person who is going to be the creator of the next C has to be a hero in an epic tale: first he should grow up with clean, comfortable languages like C#, Ruby or Haskell, but then dark times will force uncomfortable languages like C and C++ on him. The rough times will make him stronger, and in the end he will slay the dragon with a tool that unifies his love of comfort with the cold pragmatism of bare-metal code.

Someone should make a movie out of it ;)


I'll watch that movie. :)

You might want to look at the languages ATS, deca, BitC, and Clay, if you haven't already. Yes, I've been keeping a list, but I haven't looked at all of them in depth recently.

Edit: just discovered Tart. Can't tell if it requires a runtime.


Not sure if House requires a runtime or not.

Deca has been... delayed by my continually having to revise its goddamned type system. Maybe if ECOOP says yes to me in February... or at least sends a rejection notice in which the reviews don't point out some unsoundness issue.


Googling "house [programming] language" gives me nothing useful. Can you give me something more specific?

And keep it up on Deca. We need it. ;)


Sorry, turns out House was the kernel and Habit was the language.

http://hasp.cs.pdx.edu/

And keep it up on Deca. We need it. ;)

If you know a type theorist who could check over my paper without any obligations to the ECOOP programming committee so I could finally have someone confirm that this algorithm isn't Doomed To Failure, that would be incredibly helpful.

Also, if you could come up with any neat ideas on the following matter, that would be great.

http://marmoach.blogspot.co.il/2011/10/multi-method-type-cla...


Thanks, a fun new OS to look at too. I don't know any type theorists, and I'm afraid I'm not much of one myself (yet). I'm mostly just looking forward to something DRYer and safer than C for low-level programming. I'll do my best to grok the type class versus multimethod thing. That seems to be within reach.


I'll do my best to grok the type class versus multimethod thing. That seems to be within reach.

Thanks, actually. My brain one-tracks itself too often to handle everything.

In the meantime, I'm taking a look at the paper below, "Integrating Nominal and Structural Subtyping". If its type system works the way I intuit it does, I might be able to tear out both existential types and sum types from Deca and replace them with something more familiarly object-oriented-looking. The bit-level implementation would work a lot like Deca's current existentials and extensible sum types, but the syntax would transform to become more Scala-like (I really like Scala) and I could substitute class extension for existential packaging.

http://dl.acm.org/citation.cfm?id=1428525

EDIT: Confirmed. I can tear out existential types, sum types, and recursive types in favor of a single, intuitive, object-oriented-style type construct.


Hmmm... If you think of all parameter lists as tuples with some anonymous type, [un]packed transparently like Python parameter lists, then a multimethod defines a type class with one method (itself), the set of parameter lists to which it can be applied. So it would seem that multimethods can be a special case of type classes. That's the best I can do right now, assuming I've understood all the terms. Maybe you can do more with it.

Figuring out what type classes have to do with implicit parameters will take more than one afternoon of reading. It seems I can't read the other paper because I'm not an ACM member. But I guess Deca is going to be delayed again? :)


I would say, "Delayed for simplification". Being able to take away features is a Good Thing.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.124....


We seem to have reached the depth limit.

Thanks for the link. I need to remember to look harder when I hit paywalls.


Wow thanks! I actually didn't know any of those languages!


If C had only one source file (i.e. headerless C), then I'd agree that C was great. Otherwise, it's a verbose piece of shit language.


Something Hacker News has taught me is that no matter what, someone's opinion will surprise me.


Seconded, but Linux and OS X seem to be the root causes of this (Net-/Open-/FreeBSD people flocking to these).

What happened to all the extremely interesting efforts like EROS (http://www.eros-os.org/) with its global persistence (i.e. it can basically recover to its previous state after the system is switched off), Amoeba (http://en.wikipedia.org/wiki/Amoeba_distributed_operating_sy... multiple hosts seen as one) and others that actually brought extremely useful (even from today's perspective) new paradigms to the table?

Most new OS-related efforts seem to happen around virtualization nowadays.


A truly distributed OS would be incredibly useful in times like this where people may have a mobile phone, a tablet, a desktop workstation and maybe even their own server.

Having all files, processes and system resources accessible from all my devices would be great.


Another Thompson has worked on it with others: it's the Plan 9 project, and later Inferno. Current efforts are headed right in that direction at both the OS and language level.

What you describe is already happening for devices of the same brand/manufacturer (I'm afraid making it happen between vendors is going to be a much longer story). It seems to me like a strong selling argument, and I don't understand why some big consumer electronics vendor like Sony or LG hasn't made it happen already. Maybe the technology isn't cost-effective yet.

I think the next major step is to make these technologies both user-friendly and programmer-friendly in order to make them mainstream. The innovation the author wishes for already happened 20 years ago. What really needs to happen is for someone to take it out of the cupboard and use it.


Not trying to be an MS fanboy here. But, while not truly distributed, this is the one area where I feel Windows 8 might be interesting. I really want to see how (if?) they'll try to provide that experience, given that it's on phones, tablets, and regular PCs.

Any practical experimentation in this area can provide good insights and help future attempts at other OSs.

At least that's what I hope happens. :)


The Octopus system (http://lsub.org/ls/octopus.html) attempts to do something like that. It builds on lessons learned from Plan 9 and Inferno.


>What happened to all the extremely interesting efforts like

The same thing that happened to more modest operating systems: limited driver support. And since they are a niche already, they had a very small window of opportunity.

The new paradigm seems to have moved to Linux kernel drivers/modifications, not unlike Haiku OS, which is able to take advantage of the existing Linux stack.


     (Net-/Open-/FreeBSD people flocking to these).
 
Perhaps on the client side - I'd say that OS X pulled most desktop Unix users away years ago.

On the server side the BSDs and "Solarish" (OpenSolaris derivatives) still have a considerable amount of mindshare in their respective niches.


There are two primary issues. First is the performance of a monolithic kernel versus a message-passing microkernel. Linux has been tuned pretty extensively; as a monolithic kernel with a lot of eyeballs trained on it, it's hard to beat.

Second is the fact that the vast majority of OS kernel source represents deep OS _design decisions_. There are a lot of decisions to make. If you're going to make a lot of decisions that are the same as the Linux set, there's not much point in writing your own code. User space has a set of expectations; Linux conforms. Traction is difficult with non-conformance.


Rob Pike actually wrote something similar in his short, self-proclaimed polemic, "Systems Software Research is Irrelevant":

http://herpolhode.com/rob/utah2000.pdf

"Only one GUI has ever been seriously tried, and its best ideas date from the 1970s. Surely there are other possibilities. (Linux’s interface isn’t even as good as Windows!) There has been much talk about component architectures but only one true success: Unix pipes. It should be possible to build interactive and distributed applications from piece parts. The future is distributed computation, but the language community has done very little to address that possibility."

"The world has decided how it wants computers to be. The systems software research community influenced that decision somewhat, but very little, and now it is shut out of the discussion..."

And particularly relevant to the HN community at large: "Be courageous. Try different things; experiment. Try to give a cool demo."


Up front: I like Unix and Linux. But I think a big part of why Unix/Linux won is that you could acquire them. 90% of life is showing up. In the bad old days you could not buy machines or OSes; until IBM got investigated for antitrust, you leased them with a huge service contract. You had to sign non-disclosures just to use the things. BSD/Unix was available and then became unavailable during the Unix wars, giving Linux an entry.


> I expect some innovative OS research is being done in universities.

Sadly, this statement is predominantly false. That's not to say that there aren't interesting ideas being tried out -- there are. It's just that the ways in which Unix is flawed aren't significant enough to warrant switching to a new system. Yeah, the security model of 'sudo' for any permissions at all sucks, but it gets the job done on production systems, and there are (ugly) workarounds to get better security. Yeah, dependency management sucks, but we have VMs to work around versioning issues. There are tons of people proposing solutions to these types of problems in academia, but Unix is entrenched, there's tons of software written for it, and industry is not going to adopt an alternate platform unless it is 10x better than what's out there. Academia is proposing systems that are 10% better, not 10x better, and add complications to the relatively simple process / file model of Unix.


I don't think we need more operating systems. We just need more ideas, and to make sure that our existing operating systems are modular enough to allow them to fit into the picture.

Trying to recreate an entire operating system from scratch is almost universally doomed, even when the end result is something "better" -- Plan9, BeOS, take your pick. Doomed. Because if it's too great a departure from existing platforms all at once then after you're done reinventing the wheel somebody has to go out and build all new roads before you can actually deploy it at scale. All the apps people expect have to be ported (and who is going to port them to an OS nobody uses because it has no apps?), all the drivers for all the different hardware have to be written (and who is going to write them for an OS nobody uses because it has no drivers?), on and on.

Which doesn't mean they aren't useful as research projects rather than operating systems: You build something that works on one hardware platform and has no apps, but does something cool, and now you've got a proof of concept and the operating system(s) people actually use can adopt it. And then after they do, the research project OS fully dies because it no longer offers anything novel and it still has no apps and no drivers and no developer community hunting bugs and so forth.

The thing we need is an operating system modular enough to allow the implementation of new ideas. Which we pretty much have; and if you can identify an exception, kindly do your part and submit a patch.

Most people don't need a versioning file system, but if you do, they're available. And if you don't like the existing ones then you can write your own and plug it in without having to write your own SATA controller drivers. The list of features in Plan9 is basically a TODO list for the Linux community with a good chunk of the boxes already checked.

Evolution beats revolution better than 99 times out of 100. All competition is simultaneously healthy/necessary and a wasteful duplication of effort. Having the competition occur at a lower level of granularity reduces the reduplication. Once you get one piece right, that piece can stay as long as it fits while you keep iterating on everything else, without having to reinvent and reimplement all the things that already work.

Saying "we need many more operating systems" is like saying "we need many more mutually incompatible version of Internet Protocol" -- even if you can point out a hundred reasons why IPv4 and IPv6 are both terrible, you're still wrong. There are areas where it's better to work collectively toward getting it right the first time (or at least, getting it right the second or third time instead of the thousandth time), because incompatibility and wasted reduplication are not without cost.


After all you would have to compete with an OS that is firmly entrenched on 85-90% of desktop systems on one hand and a free operating system on the other

Linux proves this wrong. People built that when the market was completely dominated by Windows.

I think the author misses the real reason we don't see a bunch of new OS's: Unix variants and Windows solve their problems very well, and at this point most of the improvement is at the GUI/application layer (a la OS X).


>Linux proves this wrong. People built that when the market was completely dominated by Windows.

Not really. I remember first seeing Linux when I was at university in '93/94-ish. The computer labs there were mostly SunOS (later Solaris) with a few Windows (3.11, I think) machines for business students. For personal machines, many people had Macs or Amigas (or Atari STs), or indeed they had PCs but running DOS/DRDOS etc. Windows wasn't completely dominant in business at that point either (although it was clearly winning, thanks to running on IBM-compatible machines); DOS was still prevalent, but I remember entire trading floors where all the desktops were Sun Ultras. When I started work a few years later the software we built worked on OS/2, NT, Mac (OS 8, I think) and various Unixes.

I'd say Windows dominance wasn't long after, but the original Linux wasn't created in a market of complete Windows dominance.


But the server OS market was much more fragmented, and that's where Linux had its original success.


Also, the phone OS market was much more fragmented, and that's where Linux had its other success.

The desktop, on the other hand… let's say there is disagreement as to whether Linux is a success there at the moment.


Unix variants and Windows solve their problems very well

Maybe, maybe not; I think the bigger issue is that the development cost to get to Linux/Windows standards would be huge. This is why we have one Intel and AMD has to fight like hell to stay alive.


Have we really seen that much improvement in UI over the years? I wonder how much more we could see if the underlying OS was more supportive.


I'm writing open source networking software that has potential to become an operating system like Cisco IOS. I see a niche opening up here because networking firmwares running on hardware have been doing a great job for years but are now being squeezed out by virtualization. There's room to replace them with new software (but we have to act fast before everything but the kitchen sink is shoehorned into the Linux kernel).

This is open source so if you want to do some relevant OS-style hacking then you're welcome to get involved. It's called Snabb Switch and it's at https://github.com/SnabbCo/snabbswitch/wiki


I'd like to suggest we actually need more diversity on every level of the stack. Just about everything being sold as a "computer" is a modernized version of the IBM 5150 that runs (and sometimes is built to run) Windows. Those that don't, run something either inspired or directly derived from AT&T's Unix. Every computer you buy today has one or more CPUs connected to a single memory address space. When they have non-volatile memory attached, it's treated like a disk drive.

Sometimes, when I'm working with a file, I have the disturbing feeling I'm reading a stack of cards... Palm and Newton didn't even have files. I spent far too much time trying to hammer trees into serial stores and vice versa because those were the tools of the time. There have to be better ideas needing exploration.

And a new OS idea, for the first time ever, does not require you to give up a usable collection of software: most of the software that makes up a modern Unix desktop requires little more than X and POSIX. Regardless of how weird your underlying OS is, as long as you have those, programmers will be able to use your OS and work on it, which is the only real way to get them to write software for it.


I wish there were a reasonably specced-out microkernel OS that, even if it never gained traction, was rooted not in conspiracy theory or academia but in the practical benefits of having it.

I see a very beautiful world where a small kernel could handle preemptive scheduling and virtual memory, and provide a virtual file system and virtual socket abstraction. Drivers could use the socket layer and file system to take control of DMI devices provided by the firmware, but still run in userspace. If you used the Plan 9 "everything is a file" model, system devices could easily exist in their own directory, etc.

Then you use a newer privilege model, where an application derives its rights from its parent and can either down-cast those rights or keep them at the same level as the parent's, and the privileges influence the application-specific view of the file system (like in Plan 9).

For IPC, I'd really want to put in an effort to use the lower-range IPv6 addresses in a highly optimized virtual socket dispatch, so you could have one protocol to rule them all. Every IPC and every remote communication, to any device, to any service on any machine, could be done over an abstracted IPv6 layer. Screw the maligned utility of D-Bus, Unix sockets, RPCs, etc.; use what you already have and optimize the hell out of it.

Also, I'd like to see a reasonable and sane file system layout. I like how /usr in Unix has nothing to do with a user anymore. How some systems expect /cdrom to be a thing in an era where a tremendously small fraction of machines have baked-in CD-ROM drives, and those would be SATA devices if not USB externals. The Linux and OS X top-level file systems always drive me nuts.

I'd use that visibility privilege model to have multiple users: an all user, a public user, a root user. System binaries would be under root; users would have visibility on themselves and any other user they are given access rights to (passworded or not), including the all and public users. You can install and compartmentalize binaries and libraries appropriately. If you want a jail, create a user without any access privileges on other users, and it can only run in a personal vacuum. If you do a good security model where only applications within +/-1 of the privilege hierarchy can see another process in the socket layer, a jailed user would only have visibility on servers like what we have in PulseAudio and X instead of hardware like the GPU or ALSA, and those services could be steeled against aggressive input.

Such a system I feel would be a breath of fresh air for writing applications. If the runlevels were something like:

    0: kernel
    1: hardware devices and init
    2: hardware service servers, daemons, and login manager
    3: max privilege user session, possibly more login managers and daemons to support lower privileged users
    4..n: restricted user sessions with limited views of other contexts

Any layer could explicitly pass a higher-level service or daemon through to give it visibility on the lower tiers, but it would need to be explicit; you have a very limited scope by default.

I'd love to write software in a context like this. Security sounds better to me, communication is unified, and what you see (in the filesystem) is what you get.
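If the 'everything over a socket' part sounds abstract, here is a minimal sketch of local IPC over the IPv6 loopback with today's plain sockets API; I'm not claiming this is how the kernel I describe would implement it, just that the programming model already exists:

    /* Minimal local IPC over the IPv6 loopback: one process sends itself a  */
    /* datagram on ::1.  Just a sketch of "IPC is a socket like any other",  */
    /* not a claim about how the kernel described above would implement it.  */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET6, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in6 addr;
        memset(&addr, 0, sizeof addr);
        addr.sin6_family = AF_INET6;
        addr.sin6_port   = htons(40000);
        inet_pton(AF_INET6, "::1", &addr.sin6_addr);

        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind"); return 1;
        }

        const char msg[] = "hello from the 'service'";
        sendto(s, msg, sizeof msg, 0, (struct sockaddr *)&addr, sizeof addr);

        char buf[128];
        ssize_t n = recvfrom(s, buf, sizeof buf, 0, NULL, NULL);
        if (n > 0)
            printf("received: %s\n", buf);

        close(s);
        return 0;
    }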


Replacing pipes with IPv6 sounds like a recipe for disaster. It may be distasteful, but the ease of forking processes and communicating between them in C on UNIX has been very useful to me. I haven't even bothered looking into DBus, etc. because they're overkill for my requirements.

Providing only one, complicated way to do something is not great, especially when simple alternatives exist.

P.S. I'm not sure if your proposal rewrites 'pipes' in the shell as well - the overhead in a one-liner would be awful.
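For contrast, here is the 'distasteful but easy' version being defended, more or less in its entirety (a minimal sketch):

    /* The plain UNIX way: fork a child and talk to it over a pipe. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {                        /* child: write one message */
            close(fd[0]);
            const char msg[] = "result from child";
            write(fd[1], msg, sizeof msg);
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                          /* parent: read it back */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf);
        if (n > 0)
            printf("parent got: %s\n", buf);
        close(fd[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }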



And don't forget http://www.minix3.org/ which, although Unix-like, certainly tries some new things (e.g. the reincarnation service).


From my understanding of your paragraphs about the "privilege model" and "file systems", Genode is a candidate you may want to have a look at: http://genode.org/documentation/general-overview/


Many of your ideas sound like they come from inexperience with writing operating systems. That's not your fault, and rather than attack all of them and sound like an extreme poopoohead I'll point out the one that made me wince the most:

IPv6 for IPC gains you nothing except preference, and carries with it extreme overhead for forming Internet packets to do a simple syscall. That choice alone would probably cut performance by an order of magnitude at the least. You still have to make an IPC protocol on top of IPv6, since IPv6 is just transport; you would end up with the same situation you decry (some things speak DBus over IPv6, others speak ..., etc.)

The 'I' in IPv6 stands for 'Internet'. Let's leave it there.


What is the point of having more OSes if even Linux isn't well supported by driver suppliers or software developers? Just to have more fragmentation?

Sure, I love the idea of Plan 9, I love small OSes like MINIX and revolutionary UI ideas... but let's face the hard truth: most of these projects are destined to live in the research field.


I'm wondering if it would make sense to build an OS based on principles important in functional programming, such as immutability. I can't think of any benefits right off the bat, but I'm also no OS designer...


This may seem silly, but I'd love to see an extremely simple, single-user, single-tasking OS like CP/M, but written to exploit modern hardware.

The incredibly tall software stack we all live on makes our hardware seem much slower than it actually is. I've long wondered what we could actually get out of modern hardware if we treated it like we used to treat hardware back in the 80s and early 90s: close to the metal and highly optimized.


> The incredibly tall software stack we all live on makes our hardware seem much slower than it actually is

Actually, it doesn't really. Modern OSes take up far more RAM (because more services are kept available), but when you're performing operations solely in user space on a modern system, the OS doesn't really interfere. The only impact on any multitasked system -- process scheduling -- is carefully written to have an undetectably small overhead when there's only one busy process.

The reasons why computers can still occasionally feel slow are:

1) They are processing millions (sometimes billions) of times more data than a 1980s-era computer.

2) You can always fill up the resources on your computer or overwork your hard disk. This can slow your computer down a thousandfold.


Sorry, I should have been clearer. I'm thinking about modern webapps written in JavaScript, living in a browser, on top of a modern OS pushing all that around (while managing security and virus scanning everything crossing through I/O), with dozens of services running, listening and/or polling various bits of hardware controlled by a scheduler -- I once had a printer driver that ate 100MB of RAM and 5% of CPU time checking in the background to make sure I had paper and enough ink. Sure, the convenience of all that is wonderful, but I can't help wondering if things should be much faster than they are.

It really does seem insane to me that I can, at times, type faster than my computer can keep up.


It's interesting to me how different the opinions are of people who are interested in research and advancing technology vs. business people and consumers who, ultimately, are the ones using that technology.

I love seeing new progress but I have to admit that my job would be a lot easier if there was one operating system and one browser!


It's particularly interesting to people like me (and I suspect many others on HN), because we straddle the line ("wear many hats"). I personally love crazy ideas (everything is a file! No, really, look at Plan9!) and wild research directions. But for my day to day work, where I have to ship? Linux just works. But I also like having options, both to play with (personal) and to solve problems (work). In many respects, Linux already has enough flexibility and options to satisfy me.

As for fragmentation, get used to it; it's part of the job. Things are more unified now than they've ever been, it might not stay that way, and one of the major thrusts of the OP (and many comments here) is that there may be too much unification, and not enough variety and options.


Whenever these discussions come up people tend to blur kernel and userspace. It's happening here, and it's not unforgivable since most operating systems package both together. That's actually half the reason Richard Stallman wants you to say GNU/Linux, the technical correctness of the name (the other half is what makes that annoying).

What adventures like Debian on FreeBSD (kFreeBSD) and even Android teach us, though, is that you can take a great kernel with its sweeping hardware support and build something better on top of it. The hardware work has been done for you. Apple could have easily built OS X and Darwin on top of Linux-the-kernel instead of Mach (what an interesting world that would be), but they had a limited range of hardware to target and the resources at their disposal to tailor Mach to their known hardware targets.

The kernel for the Intel architecture is, I think, at this point mostly figured out. Microkernel had its shot and on Intel we have seen that it's too slow (even though it makes more sense on paper). Linux is pretty darned good, and I think any attempt to be better will, after a few years, end up looking like Linux from several years ago. The Intel architecture has been stable for a long time, and a lot of clever minds have come up with ways to squeeze every drop out of the machine in Linux.

When people say "I want a new operating system!" I'm not hearing them desire a complete kernel rewrite; that's doomed to fail for a plethora of reasons including hardware drivers. I'm hearing them desire a new userspace and GUI. Make something like OS X based on Linux and I'd spend $500+ to buy it and fund its development.


"The kernel for the Intel architecture is, I think, at this point mostly figured out. Microkernel had its shot and on Intel we have seen that it's too slow"

Does that judgment include QNX? I'm under the impression that it has quite a good performance for a microkernel.


It has great performance. And on top of that, it is as stable as a rock. Disclaimer: I love microkernels.


It is not only QNX; research of the last two decades also shows that microkernels are not doomed to be slow. Examples are L4 and Nova (http://os.inf.tu-dresden.de/papers_ps/steinberg_eurosys2010....).


Having recently studied operating systems in school, I am not so sure the Linux kernel can be used for such a great variety of systems. It is built to support a set of system calls that are tightly coupled with Unix heritage, including its filesystems. You may create a user-level system on top of this, and I do find Inferno interesting for that approach, but you (the parent poster) also dismiss microkernels for their lack of performance.


Don't forget that Android is built upon Linux. Just because at the core you're speaking largely Unix doesn't mean that you have to show a Unix-like system to the user.

Really, you can do whatever you want in userspace.


>Make something like OS X based on Linux and I'd spend $500+ to buy it and fund its development.

This is such a common wish that I wonder why no one has taken up the challenge. Sure, we have Canonical, but Ubuntu is still not at a place where I would say the experience is better than OS X (disclaimer: I use Debian on my primary machine, not an Apple fanboy by any measure).

<rant>Dell had the resources to do just this. They could have put together a team to build a Dell-specific Linux-based OS and ship it when they were the top dog. Now they're finding themselves increasingly irrelevant</rant>


You need world-class UI/UX and graphics designers to have a shot, and the FOSS community is criminally short on those. With all the programmers on Hacker News it could probably be written in hours but it would look like shit because code can only go so far.


"Make something like OS X based on Linux and I'd spend $500+ to buy it"

OK, but please keep the good parts of the X Window System.

How do you display an app launched by one user in another user's session on OS X?

You don't...

How do you remotely log in to one user account on an OS X system while another user is logged in at the physical machine?

You don't...

These (and many others) are trivial (and, very, very convenient) things you can do on a stock (free) Linux (or any other X-based Un*x UI).

I'm not disagreeing with you but simply pointing out that it's not as if Apple got everything right with their OS X UI...


  How do you remotely log in to one user account on
  an OS X system while another user is logged in at the 
  physical machine?

  You don't...
This ceased to be true in the past year or two, FYI.


Since probably more than 99% of Macs are single-user workstations in practice, I'd wager that they got it right, just not for your needs.

I like that aspect of X too but I find myself getting by without it. I can also SSH in to my Mac as any user regardless of who's logged on physically, which leaves only GUI programs, which I never need remotely.


In order to transcend the modern operating system we need to transcend the modern programming language paradigms.


Why, may I ask? Isn't that a bit like saying to truly understand the human brain, we'll need to transcend the use of English?


Good point!



