Inferno is very interesting. However, I tend to believe that having the entire kernel written in managed code is the way things need to go. By doing so, you reduce the attack surface to a tiny amount of code and make the system easier to develop.
That said, Inferno was way ahead of its time, and quite a few of the ideas in my system are based on ideas from it and Plan 9 itself; for example, the 'everything is an object' paradigm is really a natural extension of Plan 9's 'everything is a file' paradigm.
> for example, the 'everything is an object' paradigm is really a natural extension of Plan 9's 'everything is a file' paradigm.
I used to think that way long ago; over time I realized that it is the inverse: 'everything is a file' is an extension and a very powerful refinement of the 'everything is an object' paradigm.
What makes file-based interfaces so powerful is that they provide a uniform and almost universal framework for representing any kind of resource. The constraints that this imposes are very useful both technically (e.g. for uniform caching, remote access, proxying, filtering, namespaces, ...) and as a way to narrowly focus the mind when designing interfaces.
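To make that concrete, here is a minimal sketch (Python, with invented names) of the kind of generic machinery a uniform interface buys you: because every resource is just "something you can read", one wrapper can add caching (or logging, filtering, remote proxying) without knowing anything about what it wraps.

    class CachingReader:
        """Read-through cache for anything that exposes read()."""
        def __init__(self, source):
            self.source = source      # a file, a socket wrapper, a synthetic /proc-style file...
            self._cache = None

        def read(self):
            if self._cache is None:   # first read hits the real resource
                self._cache = self.source.read()
            return self._cache        # later reads are served locally

    cached = CachingReader(open("/etc/hostname"))   # any readable path works here
    print(cached.read())
    print(cached.read())              # second call never touches the source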
I wasn't speaking in terms of actual relations (since files are a subclass of objects in my mind), but in terms of what inspired me.
That said, I disagree that files are a more powerful refinement. So much of our code is spent converting to and from files that it's more a curse than a blessing. The only really powerful thing about files is that there are tools on our current systems to manipulate them, but I don't think that has to stay that way. Why can't we grep over a collection object from the command line like we do with a file now? Why can't we have good network-transparent objects (a key feature in Renraku)?
The file paradigm in general is tired, in my opinion. Streams just aren't a good mapping for the way we handle data.
Strongly disagree re streams; streams / data-flow oriented programming is extremely easy to parallelize, and isn't going away in the future for at least that reason.
It's trivial today to write a bash script utilizing fork/join parallelism (& and wait, in bash) with streams and fifos for data transfer, and only a little care with ordering to guarantee no concurrency bugs. I do it all the time.
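For what it's worth, here is the same fork/join-over-streams shape sketched in Python (the filenames are placeholders): each worker streams its own input, and the join step collects results in a fixed order, which is the "little care with ordering" part.

    from concurrent.futures import ProcessPoolExecutor

    def count_errors(path):
        with open(path) as f:                      # stream the file, don't slurp it
            return sum(1 for line in f if "ERROR" in line)

    if __name__ == "__main__":
        parts = ["app.log.1", "app.log.2", "app.log.3"]   # hypothetical inputs
        with ProcessPoolExecutor() as pool:               # "fork": one worker per file
            print(sum(pool.map(count_errors, parts)))     # "join": results return in input order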
Objects overemphasize a method-full interface. Files force one to use a single pair of methods, read and write, which are the fundamental units of message sending. Objects as implemented in languages like C# are a bastardization of message sending, where the optimization of the message call reduces its composability in the large. When your messages are small packets of data, abstracting over communicating processes gets a whole lot easier.
Perhaps the design should be "everything is a Resource" (à la REST) rather than "everything is an Object".
The beneficial aspect of "everything is a File" is the uniform interface: you can read a file, write to a file, seek to a particular position in a file (sometimes), and that's about it. It seems limiting, but that's what allows the huge number of interoperable tools to be built.
By going with "everything is an Object", there are no constraints on the interface. Every class of objects has its own set of methods, and tools need to be designed for specific classes/interfaces rather than for "everything". Interoperability will be lost.
Resources are like objects, but constrained to a uniform interface: their methods are GET, PUT, POST, DELETE, OPTIONS, HEAD. That's all the methods you need to manipulate individual objects and collections of objects. Of course, you'll need uniform identifiers (URLs) for the objects, and a uniform representation (or a set of standard representations).
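A rough sketch of what that might look like inside such a system (Python; every name here is invented for illustration, and OPTIONS/HEAD are omitted for brevity). The payoff is that generic tools can be written once against the fixed verb set:

    from abc import ABC, abstractmethod

    class Resource(ABC):
        """Every object in the system speaks exactly these verbs and nothing else."""
        @abstractmethod
        def get(self): ...                  # fetch a representation
        @abstractmethod
        def put(self, representation): ...  # create or replace at a known identifier
        @abstractmethod
        def post(self, representation): ... # append to a collection
        @abstractmethod
        def delete(self): ...

    def mirror(src: Resource, dst: Resource):
        # A generic tool: it works on *any* pair of resources (files, processes,
        # devices) precisely because the interface is uniform.
        dst.put(src.get())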
This will give you network-transparent resources, assuming you use globally unique URIs. It also turns the OS into a generic Web Service. I'm not sure what the implications are of that, but it seems like it might be interesting to explore.
I've always loved the Perl idea of "If you want to treat everything as objects in Perl 6, Perl will help you do that. If you don't want to treat everything as objects, Perl will help you with that viewpoint as well." (from http://svn.pugscode.org/pugs/docs/Perl6/Spec/S01-overview.po...).
From a traditional OS level this isn't very useful, because it's your job to implement the lowest-level capability of storing data. But what if you're already running inside an OS that provides all of that stuff for you? Then you are free to implement all sorts of crazy abstractions that can directly apply to even greater problems, like network transparency and distributed objects.
Now think of this the way PG did with Arc (http://paulgraham.com/core.html): create basic axioms for an object interface, given the pre-existing facilities of working inside a traditional OS. Build something that could scale from running a desktop to running in the cloud, and you have some intriguing problems to work on that cannot be addressed by current OSes.
I think there's a lesson to be learned from current database trends. It seems that no one is expecting a database to automatically achieve scalability anymore, so now we have all of those projects that provide a distributed interface to a run-of-the-mill RDBMS (I'm thinking of Amazon's Dynamo work).
So, yeah, throw out the file concept, and everything else that has a chapter title in your OS text book. Then you'll be able to find ways of working in new OS/language concepts and quickly arrive at great problems like the current concurrency terror and cloud computing.
I think the thing people miss about objects in this scenario is that they don't have to be used directly by everything. That is, if I have an object that contains a list of names, `grep' doesn't have to support grepping over my object; it just has to support grepping over a list. You can just call `grep myObject.names Phil'.
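In other words (a small Python sketch, names invented), the tool only has to target the generic collection protocol, not my class:

    import re

    def grep(pattern, lines):
        # Assumes only "something that yields strings": a list, a file, a generator...
        rx = re.compile(pattern)
        return [line for line in lines if rx.search(line)]

    class AddressBook:
        def __init__(self, names):
            self.names = names

    book = AddressBook(["Phil Winterbottom", "Dennis Ritchie", "Rob Pike"])
    print(grep("Phil", book.names))          # grep over an object's collection
    # grep("Phil", open("names.txt")) would be the exact same call over a file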
I don't think making objects less generic is a good idea, but rather we need to interact with them differently.
> By going with "everything is an Object", there are no constraints on the interface. Every class of objects has its own set of methods, and tools need to be designed for specific classes/interfaces rather than for "everything". Interoperability will be lost.
Not if every object inherits from a base interface that defines basic operations.
While I think REST is great (and the only hope for some sanity in the future of the web), it doesn't seem fundamentally different from the "everything is a file" model, except for some historical limitations (like the lack of a standardized way to list resources/files, which makes navigating the resource/file hierarchy extremely painful).
ioctl was a mistake by people who didn't understand the "everything is a file" principle (a huge mistake, I might add).
The original Unix from Bell Labs had no ioctl, Plan 9 has no ioctl, and the Linux people have been claiming to want to eventually get rid of all ioctls due to all the problems they cause, but the inertia, and all the people who seem incapable of writing interfaces without ioctl, mean it will be ages before they get there.
Plan 9 definitely has some interesting ideas but I think there are some hidden pitfalls in the "everything is a file" paradigm.
The first is that it's not strictly accurate - some things like network devices are actually collections of artificial files which need to be manipulated in strict ways. In my opinion it would be better to say that in Plan 9, "everything has a file-like protocol". Unfortunately most programmers aren't taught to think this way and are unable to extend this philosophy. They don't know how to apply the "everything has a file-like protocol" idea to their own projects and aren't given sufficient time to learn, so they just go back to writing the same kinds of stuff they already know. Pretty quickly they tire of making simple protocol mistakes and start doing things like wrapping the protocols in frameworks of objects and using those to build their applications, without realizing the additional complexity they've reintroduced, since the paradigm effectively becomes "lower-level things have file-like protocols, but other things are objects".
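To illustrate the "strict ways" point about network devices: dialing out by hand through Plan 9's network device goes roughly like this (a Python sketch against the /net file tree; it only means anything where /net is actually served, and the address is a placeholder):

    import os

    ctl = os.open("/net/tcp/clone", os.O_RDWR)         # opening clone allocates a new connection
    n = os.read(ctl, 32).strip().decode()              # reading it yields the connection number
    os.write(ctl, b"connect 192.0.2.1!80")             # a strict little command language
    data = os.open("/net/tcp/%s/data" % n, os.O_RDWR)  # after that it's plain read/write
    os.write(data, b"GET / HTTP/1.0\r\n\r\n")
    print(os.read(data, 8192))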
The second problem is that the classic open/read/write/seek/tell/close interface isn't always the most obvious way to manipulate data. Editors would be much simpler with primitives like insert/delete, job processing is simpler with things like enqueue/dequeue, and so on. The Plan 9 "systems" viewpoint is that developers should construct the appropriate protocols: editors should send insert/delete primitives to a document's "control" file, batch systems send enqueue/dequeue requests to queue control files, etc. But application developers generally don't want to think this way - they want paradigms which directly fit their model.
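A sketch of what that "systems" answer looks like for the editor case (everything here, paths and command syntax alike, is hypothetical): the editor serves the document as a little file tree, and clients express edits as small textual requests to its ctl file.

    def insert(offset, text):
        with open("/mnt/edit/doc1/ctl", "w") as ctl:    # hypothetical mount point
            ctl.write("insert %d %s\n" % (offset, text))

    def delete(offset, length):
        with open("/mnt/edit/doc1/ctl", "w") as ctl:
            ctl.write("delete %d %d\n" % (offset, length))

    insert(120, "hello, world")
    delete(0, 7)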
This can become particularly challenging when the application wants to deal with more complex structures such as hierarchies or graphs.
However I think Plan 9 did get a few things right here. The idea of per-process namespaces is very powerful, even when "everything is a file" doesn't hold true. More recent things like FUSE are still playing catch up.
Protocols, APIs, and objects all have their problems. If I were ever to do this kind of research, I would try to come up with new ways to express and implement the essential state models of a system that would make retargeting software easier. Over the years it seems like I've had to write and rewrite several of the same basic state management schemes to suit the requirements of whatever platform or service is in fashion.
Bell Labs, as usual, well over a decade ahead of their time...