Renraku: Future OS (daeken.com)
46 points by daeken on July 4, 2009 | 31 comments



I am convinced that this is the right path for OS evolution. Perhaps not necessarily .NET based, but the future OSes simply have to borrow heavily from the best of the current VM implementations: garbage collection, JIT compilation and rich standard library should all be provided by the OS (as opposed to in-process VM) and shared between all processes written in higher level languages.

The kernel/userland separation must also go away; process isolation should be done via code verification, another trick a modern (yet nonexistent) OS must borrow from sandboxing VMs.


*but the future OSes simply have to borrow heavily from the best of the current VM implementations: garbage collection, JIT compilation and rich standard library should all be provided by the OS (as opposed to in-process VM) and shared between all processes written in higher level languages.*

Hey, that sounds familiar... OH FUCK! It was done a long time ago! The Lisp Machines and the Smalltalk machines!

It shouldn't take much convincing to see that this is a damn good path for an OS to go down...


Heard of Inferno? http://code.google.com/p/inferno-os/

Bell Labs, as usual, well over a decade ahead of their time...


Inferno is very interesting. However, I tend to believe that having the entire kernel written in managed code is the way things need to go. By doing so, you reduce the attack surface to a tiny amount of code and make it easier to develop.

That said, Inferno was way ahead of its time, and quite a few of the ideas in my system are based on ideas from it and Plan 9 itself; for example, the 'everything is an object' paradigm is really a natural extension of Plan 9's 'everything is a file' paradigm.


> for example, the 'everything is an object' paradigm is really a natural extension of Plan 9's 'everything is a file' paradigm.

I used to think that way long ago; over time I realized that it is the inverse: 'everything is a file' is an extension and very powerful refinement of the 'everything is an object' paradigm.

What makes file-based interfaces so powerful is that they provide a uniform and almost universal framework for representing any kind of resource. The constraints this imposes are very useful both technically (e.g. for uniform caching, remote access, proxying, filtering, namespaces, ...) and as a way to narrowly focus the mind when designing interfaces.


I wasn't speaking in terms of actual relations (since files are a subclass of objects in my mind), but in terms of what inspired me.

That said, I disagree that files are a more powerful refinement. So much of our code is spent converting to and from files, it's a curse more than a blessing. The only really powerful thing about files is that there are tools on our current systems to manipulate them, but I don't think that has to stay that way. Why can't we grep over a collection object from the command line like we do with a file now? Why can't we have good network-transparent objects (a key feature in Renraku)?

The file paradigm in general is tired, in my opinion. Streams just aren't a good mapping for the way we handle data.


Strongly disagree re streams; streams / data-flow oriented programming is extremely easy to parallelize, and isn't going away in the future for at least that reason.

It's trivial today to write a bash script utilizing fork/join parallelism (& and wait, in bash) with streams and fifos for data transfer, and only a little care with ordering to guarantee no concurrency bugs. I do it all the time.
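The fork/join pattern described above can be sketched in a few lines of portable shell (the worker bodies and file names here are made up for illustration):

```shell
#!/bin/sh
# Fork/join with & and wait: two workers run concurrently, the parent
# joins on both, and only then combines their output. Ordering is
# guaranteed because the join happens before any read.
tmpdir=$(mktemp -d)

# Fork: each worker streams its result to its own file.
(sleep 1; echo "result from worker 1") > "$tmpdir/a" &
echo "result from worker 2" > "$tmpdir/b" &

wait  # Join: block until every background job has finished.

# With the join done, both streams are complete and safe to read.
combined=$(cat "$tmpdir/a" "$tmpdir/b")
echo "$combined"
rm -rf "$tmpdir"
```

With a little care (one writer per file or fifo, join before read), there's no shared mutable state and hence no concurrency bug to worry about.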

Objects overemphasize a method-full interface. Files force one to use a single pair of methods, read and write, which are the fundamental units of message sending. Objects as implemented in languages like C# are a bastardization of message sending, where the optimization of the message call reduces its composability in the large. When your messages are small packets of data, abstracting over communicating processes gets a whole lot easier.


Perhaps the design should be "everything is a Resource" (à la REST) rather than "everything is an Object".

The beneficial aspect of "everything is a File" is the uniform interface: you can read a file, write to a file, seek to a particular position in a file (sometimes), and that's about it. It seems limiting, but that's what allows the huge number of interoperable tools to be built.
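That uniformity is exactly what enables the tool ecosystem: in the sketch below, sort, uniq, and wc know nothing about each other and were never designed to interoperate, yet they compose freely because each one only reads and writes a byte stream.

```shell
#!/bin/sh
# Three unrelated tools composed purely through the uniform
# read/write interface of a byte stream.
distinct=$(printf 'b\na\nb\n' | sort | uniq | wc -l)
echo "$distinct distinct lines"
```

No object-specific adapter is needed anywhere in the pipeline; that's the interoperability the uniform interface buys.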

By going with "everything is an Object", there are no constraints on the interface. Every class of objects has its own set of methods, and tools need to be designed for specific classes/interfaces rather than for "everything". Interoperability will be lost.

Resources are like objects, but constrained to a uniform interface: their methods are GET, PUT, POST, DELETE, OPTIONS, and HEAD. Those are all the methods you need to manipulate individual objects and collections of objects. Of course, you'll need uniform identifiers (URLs) for the objects, and a uniform representation (or a set of standard representations).

This will give you network-transparent resources, assuming you use globally unique URIs. It also turns the OS into a generic Web Service. I'm not sure what the implications are of that, but it seems like it might be interesting to explore.


I've always loved the Perl idea of "If you want to treat everything as objects in Perl 6, Perl will help you do that. If you don't want to treat everything as objects, Perl will help you with that viewpoint as well." (from http://svn.pugscode.org/pugs/docs/Perl6/Spec/S01-overview.po...).

From a traditional OS level this isn't very useful, because it's your job to implement the lowest-level capability of storing data. But what if you're already running inside an OS that provides all of that stuff for you? Then you are free to implement all sorts of crazy abstractions that can directly apply to even greater problems, like network transparency and distributed objects.

Now think of this as PG has with Arc (http://paulgraham.com/core.html): create basic axioms for an object interface given the pre-existing facilities of working inside a traditional OS. Make something that could scale from running a desktop to running in the cloud, and you have some intriguing problems to work on that cannot be addressed by current OSes.

I think there's a lesson to be learned from current database trends. It seems that no one is expecting a database to automatically achieve scalability anymore, so now we have all of those projects that provide a distributed interface to a run-of-the-mill RDBMS (I'm thinking of Amazon's Dynamo work).

So, yeah, throw out the file concept, and everything else that has a chapter title in your OS text book. Then you'll be able to find ways of working in new OS/language concepts and quickly arrive at great problems like the current concurrency terror and cloud computing.


I think the thing people miss about objects in this scenario is that they don't have to be used directly by everything. That is, if I have an object that contains a list of names, `grep' doesn't have to support grepping over my object, it just has to support grepping over a list. You can just call `grep myObject.names Phil'.

I don't think making objects less generic is a good idea, but rather we need to interact with them differently.


>By going with "everything is an Object", there are no constraints on the interface. Every class of objects has its own set of methods, and tools need to be designed for specific classes/interfaces rather than for "everything". Interoperability will be lost.

Not if every object inherits from a base interface that defines basic operations.


While I think REST is great (and the only hope for some sanity in the future of the web), it doesn't seem fundamentally different from the "everything is a file" model, except for some historical limitations (like making navigation of the resource/file hierarchy extremely painful due to no standardized way to list resources/files).


>>The beneficial aspect of "everything is a File" is the uniform interface: you can read a file, write to a file, seek

My C days weren't in this millennium, but have you ever tried this? :-)

  man ioctl
(A quick check shows that you get the real info in man ioctl_list. Even fcntl() has some extras, like locking parts of files.)


ioctl was a mistake by people who didn't understand the "everything is a file" principle (a huge mistake, I might add).

The original Unix from Bell Labs had no ioctl, Plan 9 has no ioctl, and the Linux people have been claiming to want to eventually get rid of all ioctls due to all the problems they cause; but inertia, and all the people who seem incapable of writing interfaces without ioctl, mean it will be ages before they get there.


So locking parts of files will be a file-based API?

Does sound weird.


Plan 9 definitely has some interesting ideas but I think there are some hidden pitfalls in the "everything is a file" paradigm.

The first is that it's not strictly accurate: some things, like network devices, are actually collections of artificial files which need to be manipulated in strict ways. In my opinion it would be better to say that in Plan 9, "everything has a file-like protocol". Unfortunately most programmers aren't taught to think this way and are unable to extend the philosophy: they don't know how to apply the "everything has a file-like protocol" idea to their own projects, and aren't given sufficient time to learn, so they just go back to writing the same kinds of stuff they already know. Pretty quickly they tire of making simple protocol mistakes and start wrapping the protocols in frameworks of objects, then build their applications on those without realizing the complexity they've reintroduced, since the paradigm effectively becomes "lower-level things have file-like protocols, but other things are objects".

The second problem is that the classic open/read/write/seek/tell/close interface isn't always the most obvious way to manipulate data. Editors would be much simpler with primitives like insert/delete, job processing is simpler with things like enqueue/dequeue, and so on. The Plan 9 "systems" viewpoint is that developers should construct the appropriate protocols: editors should send insert/delete primitives to a document's "control" file, batch systems should send enqueue/dequeue requests to queue control files, etc. But application developers generally don't want to think this way; they want paradigms which directly fit their model.
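To make that concrete, here is a toy sketch of such an enqueue/dequeue control-file protocol, driven entirely by plain file writes; every path and verb below is invented for illustration:

```shell
#!/bin/sh
# A queue exposed as a file: clients append plain-text commands to a
# "ctl" file, and a worker interprets the command stream as a protocol.
qdir=$(mktemp -d)
: > "$qdir/ctl"

echo "enqueue job-1" >> "$qdir/ctl"   # enqueue via an ordinary write
echo "enqueue job-2" >> "$qdir/ctl"
echo "dequeue"       >> "$qdir/ctl"

# A worker replays the command stream to compute the queue's state:
# each enqueue adds one pending job, each dequeue removes one.
pending=$(awk '/^enqueue/ {n++} /^dequeue/ {if (n > 0) n--} END {print n+0}' "$qdir/ctl")
echo "$pending pending"
rm -rf "$qdir"
```

The point is that ordinary file tools (echo, cat, grep) can drive and inspect the queue with no special API, which is exactly the leverage the Plan 9 approach is after.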

This can become particularly challenging when the application wants to deal with more complex structures such as hierarchies or graphs.

However I think Plan 9 did get a few things right here. The idea of per-process namespaces is very powerful, even when "everything is a file" doesn't hold true. More recent things like FUSE are still playing catch up.

Protocols, APIs, and objects all have their problems. If I were ever to do this kind of research, I would try to come up with new ways to express and implement the essential state models of a system that would make retargeting software easier. Over the years, it seems I've had to write and rewrite several of the same basic state management schemes to suit the requirements of whatever platform or service is in fashion.


Yes, so we can all have the sluggishness of Java on mobile devices, as opposed to the sprightly iPhone application style.


My suggestion: plan on adding some kind of real file system. I've worked with OSes that try to de-emphasize (or eliminate) the traditional file system, and it just doesn't work. Programmers eventually hack their own "file system" to work with the files that everyone has. When every computer is networked and connected to the Internet, communicating by files, file names, and file extensions is universal.


I agree, this is essential. The reason I haven't mentioned it is that I simply have no idea how to do it in Renraku yet. That said, I have a lot to figure out about the object store in general, so I'm hoping it comes to me during that.


Ignore the above suggestion. You don't need a traditional/hierarchical file system. What you will need is an emulation layer to interface with the outside world.

An object store could work. I'm not sure what OO language you're using, but I think the single-inheritance model may mess that up...


I have looked at the other comments, and what I found lacking was any discussion of the versioning problems with objects, which can be a problem for files, too. There was a very good article on evolving APIs in the Eclipse technical articles (2003-ish), but I guess that is not the way to go for an operating system...

(Then again, .NET has versioning capabilities, but I guess the "interface" becomes complicated very fast.)


Indeed, versioning is very important. The trouble will be the structure. Sure, everything's an object, but how will you keep track of differences in images or text documents? You might need something more fine-grained.


I saw a news article today from about a month ago, saying that Microsoft had decided to release the .NET Micro Framework SDK and porting kit for free (it's available for download now, I checked). I've been wondering for a while why people don't use it (or something similar) to write device drivers; it seems to me like that would go a long way towards making a system rock-solid.

Also, one good thing about a properly written, all-managed system is security. Assuming that the underlying OS is bug-free (a huge assumption, I know), would there be any way to exploit such a system remotely?

One other cool feature I thought of would be to implement some base classes for things like images, sounds, and movies, then implement various codecs and file formats using formatters and such in the System.Runtime.Serialization namespace. Thus, it'd be pretty easy to add support for a new codec or file format, since the codec class' assembly could just be copied to a special directory, then loaded via reflection.

A final note...think about how awesome it would be to have a full-featured, well-written managed OS. Since everything on top of it is also managed, it'd be very easy to port it to another platform (all you need to do is port the very fundamentals of the OS, and the CLR takes care of the rest!)


Managed code ensures things like buffer overflows are a thing of the past, so long as your compiler is secure. It doesn't protect against the design flaws that often lead to security breaches, but it's a start. I'll take securing a compiler and design over securing hundreds of millions of lines of code, though, any day of the week.

Edit: As for codecs and such, this is why I like my object store idea. Your codec class would encode to a bitstream like now, but the entire class would be there on disk. You could send the whole object across the wire and so long as they are using the same ICodec (or whatever) interface, it Just Works (TM).


Something I've been thinking about - "Everything is a filesystem" seems to be a more powerful focus than "Everything is a file". It encourages you to think about wrapping file-system driven APIs around applications. This is a focus in my current project.


This also reminded me of the Singularity research project from Microsoft: http://research.microsoft.com/en-us/projects/singularity/

I've actually used some of the concepts in this research for writing a secure distributed testing platform. So it has many applications outside of the operating system.

Really cool stuff is going on in this area between OS and languages.


Had to double-check my article to make sure I didn't cull out my Singularity reference, as it was a big influence on Renraku's development. If Singularity were released under a non-tainting license, Renraku would likely be a distribution of it rather than an OS unto itself. MSR is doing some amazing things; I can't wait to see what else they do with it.


I noticed Singularity hadn't been mentioned directly on HN before so that's why I threw a link in here :)

I also wish the Singularity project had a friendly license. I also had to do a lot of work up front, so congrats on the work you've done! Truly cool.


It's good to see some folks on HN who have an interest in Singularity-like concepts. It makes me want to take this opportunity to ask if anybody has noticed the similarities between those projects and things going on over in the LLVM world.

To my eyes, LLVM brings many of the same things to the table as the Bartok compiler, which also uses an SSA IR to provide the safety needed to run everything in ring 0.

Furthermore, if one reads the pubs directory over at llvm.org, one sees research papers where a few instructions were added (LLVA) that give LLVM the ability to host a modified version of Linux where everything is managed within LLVM, save a very tiny shim between LLVM and the hardware.

There's also some papers on LLVM-SVA (Secure Virtual Architecture) where the same concept is extended to "enforce fine-grained (object level) memory safety, control-flow integrity, type safety..."

So to my amateur eyes, it looks like these research projects are very similar, with one being less overt about the direction it's headed.

Am I high? Has anybody else noticed this?


Yes! I've also seen the similarities between all of these things and LLVM.

Part of LLVM is an interest in correctness. I've seen more of an interest in these areas in research as well. For example, there was even a recent research highlight in an ACM magazine about "Formal Verification of a Realistic Compiler": http://pauillac.inria.fr/~xleroy/publi/compcert-CACM.pdf

Plus, newer companies like Coverity (http://www.coverity.com/html/research-library.html) bring a sense of credibility to a practice that hasn't had much traction in the industry.

I think all of these ideas can come together to make something quite useful. But I suppose bringing it all together is the hard part :)

Update: I also saw lots of cool associations to the Self Programming Language (http://research.sun.com/self/language.html), which includes some great research, especially in their paper "Self: The Power of Simplicity": http://research.sun.com/self/papers/self-power.html


More current Self link is the official homepage at http://selflanguage.org



