Perhaps the design should be "everything is a Resource" (ala REST) rather than "everything is an Object".
The beneficial aspect of "everything is a File" is the uniform interface: you can read a file, write to a file, seek to a particular position in a file (sometimes), and that's about it. It seems limiting, but that's what allows the huge number of interoperable tools to be built.
By going with "everything is an Object", there are no constraints on the interface. Every class of objects has its own set of methods, and tools have to be designed for specific classes/interfaces rather than for "everything". Interoperability is lost.
Resources are like objects, but constrained to a uniform interface: their methods are GET, PUT, POST, DELETE, OPTIONS, HEAD. That's all the methods you need to manipulate individual objects and collections of objects. Of course, you'll need uniform identifiers (URLs) for the objects, and a uniform representation (or a set of standard representations).
This will give you network-transparent resources, assuming you use globally unique URIs. It also turns the OS into a generic Web Service. I'm not sure what the implications are of that, but it seems like it might be interesting to explore.
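The uniform-interface argument above can be sketched in a few lines. This is a toy illustration, not a real API: the names `Resource`, `InMemoryResource`, and `copy` are all made up for the example. The point is that a generic tool only ever needs the shared verbs, just as `cp` only needs read and write.

```python
class Resource:
    """Hypothetical uniform interface: every resource, whatever it
    represents, speaks the same small set of verbs."""
    def get(self):
        raise NotImplementedError
    def put(self, representation):
        raise NotImplementedError
    def delete(self):
        raise NotImplementedError

class InMemoryResource(Resource):
    """One concrete resource; could just as well be disk- or network-backed."""
    def __init__(self, representation=None):
        self.representation = representation
    def get(self):
        return self.representation
    def put(self, representation):
        self.representation = representation
    def delete(self):
        self.representation = None

def copy(src, dst):
    """A generic tool, written once: it works on *any* pair of resources
    because it only uses the uniform verbs -- the analogue of cp for files."""
    dst.put(src.get())

a = InMemoryResource("hello")
b = InMemoryResource()
copy(a, b)
print(b.get())  # -> hello
```

Note that `copy` never mentions any concrete class; that's the interoperability the file model gets from read/write, recovered for objects.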
I've always loved the Perl idea of "If you want to treat everything as objects in Perl 6, Perl will help you do that. If you don't want to treat everything as objects, Perl will help you with that viewpoint as well." (from http://svn.pugscode.org/pugs/docs/Perl6/Spec/S01-overview.po...).
From a traditional OS level this isn't very useful, because it's your job to implement the lowest-level capability of storing data. But what if you're already running inside an OS that provides all of that for you? Then you're free to implement all sorts of crazy abstractions that can apply directly to even bigger problems, like network transparency and distributed objects.
Now think of this the way PG has with Arc (http://paulgraham.com/core.html): create basic axioms for an object interface, given the pre-existing facilities of a traditional OS. Make something that could scale from running a desktop to running on the cloud, and you have some intriguing problems to work on that current OSes can't address.
I think there's a lesson to be learned from current database trends. It seems that no one expects a single database to achieve scalability automatically anymore, so now we have all of those projects that provide a distributed interface on top of a run-of-the-mill RDBMS (I'm thinking of Amazon's Dynamo work).
So, yeah, throw out the file concept, and everything else that has a chapter title in your OS text book. Then you'll be able to find ways of working in new OS/language concepts and quickly arrive at great problems like the current concurrency terror and cloud computing.
I think the thing people miss about objects in this scenario is that they don't have to be used directly by everything. That is, if I have an object that contains a list of names, `grep' doesn't have to support grepping over my object, it just has to support grepping over a list. You can just call `grep myObject.names Phil'.
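The grep example above can be made concrete with a few lines of Python. Everything here is hypothetical (`my_grep`, `Contacts`): the point is that the tool operates on a plain list of strings, and the object just hands one over.

```python
def my_grep(lines, pattern):
    """A plain substring grep over any iterable of strings.
    It knows nothing about the object the strings came from."""
    return [line for line in lines if pattern in line]

class Contacts:
    """Some arbitrary object; only its .names list is ever seen by grep."""
    def __init__(self, names):
        self.names = names

book = Contacts(["Phil Jones", "Alice Smith", "Phil Katz"])
print(my_grep(book.names, "Phil"))  # -> ['Phil Jones', 'Phil Katz']
```

The tool stays generic; the object decides which of its parts to expose in a tool-friendly shape.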
I don't think making objects less generic is a good idea, but rather we need to interact with them differently.
>By going with "everything is an Object", there are no constraints on the interface. Every class of objects has its own set of methods, and tools need to be designed for specific classes/interfaces rather than for "everything". Interoperability will be lost.
Not if every object inherits from a base interface that defines basic operations.
While I think REST is great (and the only hope for some sanity in the future of the web), it doesn't seem fundamentally different from the "everything is a file" model, apart from some historical limitations (like making navigation of the resource/file hierarchy extremely painful due to the lack of a standardized way to list resources/files).
ioctl was a mistake (a huge mistake, I might add) by people who didn't understand the "everything is a file" principle.
The original Unix from Bell Labs had no ioctl, Plan 9 has no ioctl, and the Linux people have long claimed to want to eventually get rid of all ioctls because of the problems they cause. But given the inertia, and all the people who seem incapable of writing interfaces without ioctl, it will be ages before they get there.
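For context, the Plan 9 alternative to ioctl is to expose a textual control file next to the data file, so device configuration goes through the same read/write interface as everything else (on Plan 9, e.g., writing `b9600` to a serial line's ctl file sets the baud rate). The sketch below mimics that pattern with an ordinary temp file so it runs anywhere; the command string is illustrative.

```shell
#!/bin/sh
# Plan 9 style: configuration is a textual command written to a "ctl"
# file, so ordinary tools like echo and cat can drive it -- no special
# syscall, no binary structs. We stand in a temp file for the device.
ctl=$(mktemp)

echo 'b9600' > "$ctl"   # "set baud to 9600", expressed file-style
cat "$ctl"              # any tool can inspect the device state

rm "$ctl"
```

Compare that with ioctl, where every driver invents its own request numbers and argument structs, and no generic tool can touch any of it.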