
It is interesting how folks who came from commercial exposure to OSes hated UNIX. I admit I was one of them: I was a big TOPS-20 fan and thought RSX-11M had a lot going for it, Unix not so much. Then I went to work for Sun Microsystems and later joined the kernel group there.

Operating systems are 'easy' in the sense that a small group can create them and pretty much do all the legwork needed to get them into a product. Building hardware is more complex: it involves coordinating manufacturers, part suppliers, and the software to run it.

What happens then is that if you're both a hardware manufacturer and an operating system supplier, you can tell your designers to make the hardware the way you want, you can write the libraries you need in the OS, and everyone leverages the work of everyone else. In the Windows world the 'duopoly' of Microsoft and Intel led to an artificial relationship between hardware manufacturer and OS vendor, but the effect was the same.

But if you're not a hardware supplier, you spend an inordinate amount of your operating system resources trying to deal with whatever random bits of hardware you might want to use. So your OS makes as few assumptions about the hardware as possible and becomes as simple as it can, to avoid incurring a huge overhead in getting new hardware into the system. It was this latter feature of UNIX that I really didn't appreciate until I went to work at Sun.

I agree with Thompson that it would be nice to have more general-purpose operating systems. They are out there (Spring, Chorus, eCos, VMX, QNX, etc.) but generally they target a more specialized niche, whether it's embedded or research or some level of security. I'd love to see more accessible OSes as well.

One of the projects I started a while back is an OS for ARM systems that is more like DOS or CP/M or AmigaOS than UNIX or Windows. The thought was something simple that was straightforward to program, not your next cell phone OS or something that needs a couple of gigabytes of memory to run, just something you could write and deploy code on for fun or educational purposes. It has been fun to look at "apps" like this. Imagine something with a shell that was sort of like the Arduino Processing UI, except self-contained. But again, it's not general purpose. It's just a tool.

I think the next wave of 'real' OSes is going to be something very much different from what we think of today, something that assumes there is a network behind it and that the rest of itself is out there somewhere as APIs. Of course civilization may collapse first :-)




> I think the next wave of 'real' OSes is going to be something very much different from what we think of today, something that assumes there is a network behind it and that the rest of itself is out there somewhere as APIs.

When coming up with design goals for my OS (http://daeken.com/renraku-future-os) there were two things I conceived of that naturally led to a third: 1) everything is an object (you can see the Plan 9 inspiration there, just brought to a different level), and 2) all objects are network-transparent. Those led to 3) your computer is not just your computer, but a billion services that just 'exist' on it.
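To make that concrete, here is a minimal sketch (in Go, with entirely hypothetical names; this is my illustration, not Renraku's actual API) of what goals 1 and 2 buy you: code written against an object interface neither knows nor cares whether the implementation lives in-process or behind a network proxy.

    package main

    import (
        "fmt"
        "net/rpc"
    )

    // Clock is an object interface; callers never learn whether the
    // implementation is local or on the other side of the network.
    type Clock interface {
        Now() (string, error)
    }

    // localClock lives in this process.
    type localClock struct{}

    func (localClock) Now() (string, error) { return "12:00 local", nil }

    // remoteClock satisfies the same interface by proxying calls over
    // an RPC connection ("ClockService.Now" is a made-up method name).
    type remoteClock struct{ client *rpc.Client }

    func (r remoteClock) Now() (string, error) {
        var reply string
        err := r.client.Call("ClockService.Now", struct{}{}, &reply)
        return reply, err
    }

    // useClock is written once and works against either implementation.
    func useClock(c Clock) {
        if t, err := c.Now(); err == nil {
            fmt.Println("time:", t)
        }
    }

    func main() {
        useClock(localClock{}) // a remoteClock would be used identically
    }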

Because it was 100% managed code with these strict APIs, it was theoretically possible for the following scenario to work: at work, you dock your tablet at your desk, and your applications are seamlessly split between the CPU(s) in the tablet and the CPU(s) in the dock; same with memory. You work on it as you see fit, then bring your tablet home. On the way home you still have access to everything; it's just running on a more limited platform. When you get home, you could transfer your running state to your 'house computer' and access everything from whichever interface you happen to be nearest at the time.

As much as I hate the phrase, "the network is the computer" is going to win in the long run. Just a matter of time.


Are you planning to go all out on this at some point?


I spent about six months working nearly full time on it, but there are some fundamental flaws in the implementation. Eventually I'll reboot it, but I have no idea when that'll be.


Well, in the Project DOE world, which spawned Tivoli Systems and a bevy of object brokers, one of the challenges was the subroutine, or more precisely the semantics of making a 'call' from one context to another, where anything can happen between point A and point B. The challenges were things like "at most once" semantics, where the programmer could assume that if a function call completed it did so only one time on the destination, or "receiver makes it right" data exchange, where the party receiving the data is responsible for unmarshalling it into something intelligible. Sun's RPC layer was very lightweight; HP/Apollo's (and later Microsoft's, under the same guy) was quite heavyweight.
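For the curious, here is a minimal sketch of what enforcing "at most once" can look like on the receiving side, assuming every call carries a caller-generated ID; the Go framing and all the names are mine, not DOE's.

    package main

    import (
        "fmt"
        "sync"
    )

    // Request carries a unique ID so the receiver can detect retries.
    type Request struct {
        ID   string // caller-generated, unique per logical call
        Body string
    }

    // Server remembers completed call IDs; a retried request is answered
    // from the cache rather than executed again, which is what gives the
    // caller "at most once" semantics.
    type Server struct {
        mu   sync.Mutex
        done map[string]string // call ID -> cached result
    }

    func NewServer() *Server {
        return &Server{done: make(map[string]string)}
    }

    func (s *Server) Handle(req Request) string {
        s.mu.Lock()
        defer s.mu.Unlock()
        if result, seen := s.done[req.ID]; seen {
            return result // duplicate delivery: do not run the call twice
        }
        result := "processed: " + req.Body // the actual work happens once
        s.done[req.ID] = result
        return result
    }

    func main() {
        s := NewServer()
        fmt.Println(s.Handle(Request{ID: "call-1", Body: "debit $5"}))
        fmt.Println(s.Handle(Request{ID: "call-1", Body: "debit $5"})) // retry; served from cache
    }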

Process migration was another interesting bit, and something I think we'll see more of in the future. At Blekko we don't run a program (called a mapjob) against a petabyte of data; instead we send that program to all of the 'shards' and run it in all those places at once. This 'networking as execution' is one of the things I experimented with early on in Java. We had network management software which needed to manage bits of gear, and I devised a set of classes which presumed a JVM and a basic set of classes on the target; packets were then themselves simply byte codes to be executed at the target. I had a kind of hacked-up JVM at the time which used a capabilities model rather than the security manager model, because when you are essentially injecting code into a remote device you really want to be sure it can't do anything unexpected!

Basically having objects that can serialize themselves and move from station to station depending on what they are trying to do is a useful abstraction for some problem sets.
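As a toy illustration of that abstraction, here is a Go sketch of a 'mobile' object that serializes its state and resumes at the next station; the names are hypothetical, and gob stands in for whatever wire format a real system would use, with the code assumed present on both ends.

    package main

    import (
        "bytes"
        "encoding/gob"
        "fmt"
    )

    // Agent is a hypothetical mobile object: its state serializes,
    // travels to another station, and picks up where it left off.
    type Agent struct {
        Task    string
        Visited []string
    }

    // Depart serializes the agent so it can be sent over the wire.
    func (a *Agent) Depart() ([]byte, error) {
        var buf bytes.Buffer
        err := gob.NewEncoder(&buf).Encode(a)
        return buf.Bytes(), err
    }

    // Arrive reconstructs the agent at the receiving station and
    // records the hop.
    func Arrive(wire []byte, station string) (*Agent, error) {
        var a Agent
        if err := gob.NewDecoder(bytes.NewReader(wire)).Decode(&a); err != nil {
            return nil, err
        }
        a.Visited = append(a.Visited, station)
        return &a, nil
    }

    func main() {
        a := &Agent{Task: "collect stats"}
        wire, _ := a.Depart()             // leave station A
        b, _ := Arrive(wire, "station-B") // resume at station B
        fmt.Println(b.Task, b.Visited)
    }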


To both you and Daeken: fascinating stuff, this. I really hope to see something along these lines hit alpha at some point.

At some level I am deeply dissatisfied with the ways we approach security, clustering, and the other bolt-ons to the models we use today. Unix gets the security bit partially right and is hopeless at clustering, but of all the OSes with more than trivial levels of adoption it is still the best there is in both these arenas. There is definitely a window of opportunity to do this better; I think the years of exploits and the patchy style of spreading work over multiple machines have prepared the ground somewhat.

At the same time, if giants like Google and Amazon shy away from OS research, then you have to wonder what it is that they know that the rest of us don't.


Just a note: you would be mistaken to say that Google has shied away from OS research, but beyond that I cannot say a whole lot.


> I think the next wave of 'real' OSes is going to be something very much different from what we think of today, something that assumes there is a network behind it and that the rest of itself is out there somewhere as APIs.

I'd be quite interested to hear more about this, if you have more ideas and care to elaborate.


Understand that I'm a networking guy from way back, so I tend to see things in that light :-)

So there are a number of forces at work in the market: one is the decreasing cost of compute and storage, another the increasing availability of wide-bandwidth networks. A seminal change in systems architecture occurred when PCI Express, a serial interconnect, became the dominant way to connect "computers" to their peripherals. Now sprinkle in the 'advantages' of keeping things proprietary, with the 'pressure' of making them widely used, and those bread crumbs lead to a future where systems look more like a collection of proprietary nodes attached to each other by high-speed networking than like the systems we use today.

From that speculation emerges the question, "What does an OS look like in that world?" and so far the only thing I can think of that seems to work is a bunch of 'network APIs.' Let's say you build your "computer" with pieces that are connected by 10-Gbit serial pipes and a full-bandwidth crossbar switch. Your 'motherboard' is the switch; the pieces are each completely 'proprietary' to the manufacturer that built them, but the API is standard. This is not a big change for storage (SAS, FC, and SCSI before them have always been something of a network architecture), it's becoming less uncommon for video, so why not compute and memory? Memory management is then a negotiation between the compute unit and the memory unit over access rights to a chunk of memory.
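As a sketch of what that negotiation might look like, the message types below are entirely made up (no real fabric protocol): a compute unit asks a memory unit for a lease on a region, and the access rights come back as part of the grant.

    package main

    import "fmt"

    // LeaseRequest is a hypothetical message from a compute unit asking
    // a memory unit for a chunk of its capacity.
    type LeaseRequest struct {
        Requester string
        Bytes     uint64
        Access    string // "ro" or "rw"
    }

    // LeaseGrant is the memory unit's answer; the lease expires and must
    // be renewed, since nodes can come and go on the fabric.
    type LeaseGrant struct {
        Region  string // opaque handle to the granted region
        Bytes   uint64
        Access  string
        Expires int64 // seconds until renewal is required
    }

    // MemoryUnit grants leases against its local capacity.
    type MemoryUnit struct {
        free uint64
    }

    func (m *MemoryUnit) Negotiate(req LeaseRequest) (*LeaseGrant, error) {
        if req.Bytes > m.free {
            return nil, fmt.Errorf("only %d bytes free", m.free)
        }
        m.free -= req.Bytes
        return &LeaseGrant{
            Region:  "region-0",
            Bytes:   req.Bytes,
            Access:  req.Access,
            Expires: 30,
        }, nil
    }

    func main() {
        mem := &MemoryUnit{free: 1 << 30} // a 1 GiB memory node
        grant, err := mem.Negotiate(LeaseRequest{"cpu-node-7", 64 << 20, "rw"})
        if err == nil {
            fmt.Printf("granted %d bytes %s for %ds\n", grant.Bytes, grant.Access, grant.Expires)
        }
    }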

The InfiniBand folks touched on a lot of these things ten years ago, but they were a bit early I think, and they aimed pretty high. Still, they have some really cool basic technology and show how such a system could be built. Trivially so, if there were a 'free API' movement akin to the FOSS movement.

This removes the constraint of size as well. And if you're cognizant of the network API issues amongst your various pieces, you get to the point where your 'OS' is effectively an emergent property of a bunch of nodes at various levels of specialization co-operating to complete tasks. Computers that can be desktop- or warehouse-sized and run the same sorts of 'programs'.


Have you seen Plan 9?

> A Plan 9 system comprises file servers, CPU servers and terminals. (...) Since CPU servers and terminals use the same kernel, users may choose to run programs locally on their terminals or remotely on CPU servers. The organization of Plan 9 hides the details of system connectivity allowing both users and administrators to configure their environment to be as distributed or centralized as they wish. Simple commands support the construction of a locally represented name space spanning many machines and networks.

http://plan9.bell-labs.com/sys/doc/net/net.html


Yes, I am aware. I was at a Usenix conference where Rob Pike presented a paper on it, back when it was a bright idea out of Bell Labs. It is the curse of brilliant people that they see too far into the future, get treated as crazy when they are most lucid, and get respect when they are most bitter [1]. I was working for Sun Microsystems at the time, and Sun was pursuing a strategy known as "Distributed Objects Everywhere" (DOE), but insiders derisively called it "Distributed Objects Practically Everywhere," or DOPE; it was thinking about networks of 100 megabits with hundreds of machines on them. Another acquaintance of mine has a PDP-8/S, a serial implementation of the PDP-8 architecture; Gordon Bell did that in the 70's, well before serial interconnects made sense. It was a total failure; the rest of the world had yet to catch up. Both Microsoft and Google have invested in this space; neither has published a whole lot, but every now and then you see something that lets you know somebody is thinking along the same lines, trying to get to an answer. I suspect Jeff Bezos thinks similarly, if his insistence on making everything an API inside Amazon was portrayed accurately.

The place where the world is catching up is that we now have very fast networks and very dense compute. In the case of a cell phone you see a compute node which is one node in a web of nodes conspiring to provide a user experience. At some point that box under the table might have X units of compute, Y units of I/O, and Z units of storage. It might be a spine which you can load up with different colored blocks to get the combination of points needed to activate a capability at an acceptable latency. If you can imagine a role-playing game where your 'computer' can do certain things based on where you invested its 'skill points', that is a flavor of what I think will happen. The computers that do shipping or store sales will have skill points in transactions; the computers that simulate explosions will have skill points in flops. People will argue whether the brick from Intel or the brick from AMD/ARM really deserves a rating of 8 skill points in CRYPTO.

[1] I didn't get to work with Rob when I was at Google, although I did hear him speak once, and he didn't seem particularly bitter, so I don't consider him a good exemplar of the problem. Many brilliant people I've met over the years, however, have been lost to productive work because their bitterness at not being accepted early on has clouded their ability to enjoy the success their vision has seen since they espoused it.


Be sure to also see http://herpolhode.com/rob/utah2000.pdf "Systems Software Research is Irrelevant" by Rob Pike



