Nice to see Plan 9 making a bit of a comeback these days (at least on HN). I would certainly be glad if some of the ideas behind it became more popular today.
Its features are steadily getting added to the Linux kernel, at least in some form:
- You have FUSE, which has some overlap with 9P. You lose the network transparency, but still can build specialized filesystems entirely in userspace with no special permissions, rather than requiring a kernel driver.
- The clone(2) system call, which has many options equivalent to those of Plan 9's rfork(2), allowing explicit control of what gets shared between the parent and child processes.
- In particular, the filesystem namespace (read: mount table) can be one of the unshared items, though my understanding is that this still requires root permissions.
> though my understanding is that this still requires root permissions.
Not if you also unshare the user namespace: an unprivileged user can then act as root in the context of that namespace and create mount points.
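For illustration, with util-linux's unshare(1) something like this gives you a mount namespace without being root (a rough sketch; exact behaviour depends on kernel version and distro policy, and some distros disable unprivileged user namespaces):
unshare -U -r -m               # new user + mount namespace, uid 0 mapped to your uid
mount -t tmpfs none /mnt       # inside it, mounts work without real root
cat /proc/self/uid_map         # shows how uids map back to the parent namespace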
I've been following it intermittently for years, and I, too, am glad to see it slowly being discovered and examined by techie folks. It really does have some great ideas, and I would welcome some of them being incorporated into systems in the future. But the argument always comes back to this: how can one even hope to bring anything like this up to par with current systems? And the answer is simple, yet not warm-and-fuzzy: you can't, not right away. This is something that will take time.
I think highly of Windows, OS X, and Linux, but I don't think we should rest on our laurels. If we have the time and energy to do better, we should.
Plan 9 makes extensive use of pure-text IPC, more commonly known as file writes and reads. This is facilitated by the fact that each process sees its own filesystem hierarchy.
For instance, GUI programs are built by reading from files like /dev/cons and /dev/mouse and writing to files under /dev/draw. It might seem that this would turn into a mess once you run multiple GUI programs, but it doesn't: the window manager provides virtualized versions of these files, so each program behaves as if it had the entire screen to itself and doesn't need to know about the WM at all. My description is likely not very accurate because I haven't really tried it, but you can find the whole story in the related man pages: http://man.cat-v.org/plan_9/3/cons http://man.cat-v.org/plan_9/3/mouse http://man.cat-v.org/plan_9/3/draw
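A taste of that virtualization from inside a rio window, going by my memory of the rio(4) window files (so treat the exact control messages as approximate):
echo scratch > /dev/label                  # retitle this window
echo resize -dx 800 -dy 300 > /dev/wctl    # ask rio to resize it
cat /dev/snarf                             # read the snarf (paste) buffer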
Another example is the authentication facility of Plan 9. Instead of being a dynamically linked library, it is a separate process that can be reached via IPC. Russ Cox et al. have a talk on this (https://swtch.com/~rsc/talks/nauth.pdf).
This is also why Plan 9 has no dynamic linking, since those things are done with IPC instead.
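For instance, the agent that holds your keys (factotum) is driven entirely through the files it serves under /mnt/factotum; roughly like this (the key attributes here are illustrative, not a real recipe):
echo 'key proto=pass service=example user=glenda !password=secret' > /mnt/factotum/ctl
cat /mnt/factotum/ctl          # list keys; secret fields come back suppressed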
I am not sure I understand why that's desirable. Why is reading plain text out of /dev/mouse and writing plain text to /dev/draw an improvement over how Linux or OS X or Windows does it?
EDIT: I get that Plan 9 is "more UNIX than UNIX" and that this is an extension of how UNIX behaved. I view that as a philosophical, rather than practical, point though.
> Why is reading plain text out of /dev/mouse and writing plain text to /dev/draw an improvement over how Linux or OS X or Windows does it?
Because it has nothing to do with the way other OSes implement files. The kernel is effectively an I/O and network multiplexer for synthetic file systems that abstracts common resources to enable location/network transparency, single-system imaging and interface uniformity. This is done through the 9P protocol, and process boundaries and file system hierarchy are enforced through namespaces.
9P is cacheable, stateful, encryption/auth-agnostic and can run over any transport layer.
This actually has far more consequences than "everything is a file":
a) There are no symlinks. You don't need them.
b) There is no root or superuser. You don't need it.
c) Your applications don't necessarily need to be network-aware.
d) Remotely debugging, migrating or copying a system resource is as simple as a mount, and it will just work (see the sketch after this list).
e) There are no sockets. You don't need them. You have ndb.
f) There are no ioctls.
g) Networking information being a file server obsoletes the need for NAT and VPN, among other things. [1]
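To make d) concrete, attaching any 9P file server to your namespace takes a couple of commands; a rough sketch with made-up names:
srv tcp!fileserver!564 fs      # dial a 9P server and post the connection as /srv/fs
mount /srv/fs /n/fs            # attach it to this process's namespace
ls /n/fs                       # whatever it serves is now just files here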
Then, of course, the Plan 9 userland is great. Plumbing, a compiler chain that makes cross-compilation stupidly easy (so much so that OpenBSD considered using it but backed away because of the Lucent license at the time), the acid debugger, mk, Venti, etc.
Because basically you just need to know one set of APIs, in this case the open/read/write/close part of your C library. The rest is "just" protocols. No arcane calls to open sockets or query mice; you treat everything like a file handle.
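For example, the mouse is a file that serves fixed-size plain-text events ('m' followed by x, y, buttons and a timestamp, if I remember the format right), so even the shell can watch it, no library needed:
cat /dev/mouse                      # streams pointer events as text
dd -if /dev/mouse -bs 49 -count 1   # or grab a single event (49-byte records, I believe)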
There are plenty of attempts at this outside of P9, of course. Linux and other Unices do part of it with the /proc filesystem. Plenty of scripting languages let you access URLs like files in their IO system. File managers treat archives like directories. FUSE lets you mount a foreign system's file system over ssh…
In Plan 9 this was the core concept and didn't require special treatment. Ideally you'd build your whole user-accessible system around this.
Other attempts at unified APIs were single-language, image-based systems like Smalltalk, Common Lisp or Oberon (the latter being a major influence on P9, to be polite). There, function/method calls took the place of the file-system API.
Or object-based systems, where you had an interoperable object protocol taking care of interconnectivity. Doing everything via CORBA or COM calls. Actually, Windows' PowerShell is a pretty decent combination of the old Unix UI and this.
Personally, I see two problems that really ruin things for the Plan 9 approach. First of all, "just protocols" isn't as simple as it seems. So you access your hardware device or network server as a file or set of files. Great, but what do you read and write to that? Conceptually you moved from calling a "do_widgety_thing()" function to a "DO WIDGETY THING" protocol message. API design is still hard. Just like "do everything via REST" isn't the solution to all of your programming problems.
Second of all: this aims at a high level of genericity, and in the world of the web, apps and services that's not popular, due to its lack of commercial appeal.
Under Windows, to run a program on one host and display and interact with it on another requires a special "remote desktop" client and special hooks into the graphics driver and kernel.
Under Plan 9, the tool you use to redirect output from one machine to another is just "mount": mount /dev/mouse and /dev/draw from the other machine onto this one, run the command you want, and its windows appear on the remote machine - no changes required to the kernel or graphics drivers or any other part of the system.
Of course, remote display is just one application - since Plan 9 exposes just about everything as a filesystem, you can mount a remote machine's TCP stack or editor session or authentication keys or even actual ordinary file storage, transparently and easily.
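In practice that "mount" is usually the import command; a rough sketch of the idea, with made-up machine names (the cpu(1) command automates a similar exchange in the other direction, exporting your terminal's devices to the machine the program runs on):
import othermachine /dev/draw /dev/draw      # the other machine's display
import othermachine /dev/mouse /dev/mouse    # and its mouse
clock &                                      # or any graphical program: runs here, draws there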
Okay, but what does this buy you other than some leaky abstractions? A local file is not the same thing as a remote file, a local mouse is not the same thing as a remote mouse, and a local file is definitely not the same thing as a remote mouse. All that code in Windows that Plan 9 'doesn't need' contains a lot of logic for dealing with remote displays and input devices that is probably going to have to reside in your application if the OS isn't providing it. Simplicity is only a virtue if it's real; false simplicity is just kicking the can down the road for the next poor bastard to deal with.
I'll give you a practical example: since the Raspberry Pi doesn't have a real-time clock, I bought a GPS module from Adafruit, hooked it up to the Pi's serial line, and ran this two-line script to start using it:
aux/gpsfs -b 9600
aux/timesync -G
gpsfs reads the data stream from an NMEA-compatible GPS device and provides four synthetic files: position, time, satellites and raw. timesync then uses the time file to synchronize the local clock to GPS. I can read the satellites file to see which GPS satellites were seen where, their SNR, etc. The entire gpsfs program is under 1200 lines of C. The point being, it is fairly easy to provide such an interface for new devices. Since it is "just" a filesystem, standard system tools can be used with it -- you don't have to teach them new tricks. Not everything fits this paradigm, but a surprisingly large number of things do.
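Reading them is just cat; something like this, assuming gpsfs serves its files under /mnt/gps (I don't recall the exact mount point, so treat the paths as illustrative):
cat /mnt/gps/time          # current GPS time
cat /mnt/gps/position      # latitude/longitude fix
cat /mnt/gps/satellites    # which satellites are visible, where, and their SNR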
Simplicity is a virtue only if you take it seriously.
Someone else mentioned that Plan 9 features are steadily being added to Linux, but that misses the point entirely. Adding a set of simple features to a complex system makes the system even more complex, as now you have even more things that can go wrong.
Please actually try researching the concepts and papers about Plan 9 and Inferno before making such sophomoric statements as "A local file is not the same thing as a remote file." As if no one has thought about this before.
As much as I love the way p9 modeled the world, I agree a tiny little bit with cwyers, in that the notion of a file is stretched a little too thin. I do love the consistency, but these aren't files anymore but namespaced objects - unless a file is a pipe `or` a register `or` a document `or` a container of files.
Yes, it is. A file is just a representation of resources, nothing more. You can have files as a user convenience without even having a file system, but rather a persistent object store.
Are you objecting to the _notion_ that is called a file in Plan 9, or to the fact that it is _called_ a file? I would regard the latter as just a synecdoche, using the name of the most well-known realization of that general concept to stand for all of the realizations of that concept.
> A local file is not the same thing as a remote file
That's actually the easy case. Remote files are not local files, but local files are remote files: or more precisely, local files are a special case of remote files from the point of view of file access. If your filesystem API starts with the assumption that files are remote and must be handled as if they are remote, then you get pretty good local file access for free, as HTTP has been demonstrating for a couple of decades now. Of course if you instead start with the assumption that files are local and thus it's safe to do blocking I/O on them then when you add remote files you'll end up with an NFS-like mess instead. That doesn't prove that abstraction and generalisation can't work cleanly, it just shows that you were looking through the wrong end of the telescope. (Really, the distinction here isn't truly between remote and local but between unreliable and reliable. All remote file access may be unreliable, but it's certainly not the case that all local file access is reliable, or should be expected to be reliable. Especially not when you bring in user-space filesystems, as Plan 9 does.) Apparently Plan 9 actually started with a blocking-I/O-plus-thread-spawning model which was only so-so before it was fixed up: http://pdos.csail.mit.edu/papers/plan9:jmhickey-meng.pdf .
On Plan 9, network sockets are available via the /net filesystem (for instance, /net/tcp and /net/udp). If you use the standard remote filesystem mechanism (9P) to mount a remote system's /net filesystem, you can access files there to create sockets from that system, rather than your local system: VPN with just the filesystem.
If you want to forward a few specific ports from that system to yours, mount the relevant bits of your /net over the remote system's /net: port forwarding with just the filesystem.
If you want to prevent an application from accessing the network, or firewall it from accessing certain ports, you can limit its access to /net or a subset of files in /net: firewalls with just the filesystem.
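Concretely, that usually means importing another machine's /net; a rough sketch with an illustrative host name:
import gateway /net             # replace the local stack: new connections now originate at gateway
import gateway /net /net.alt    # or mount it alongside and name it explicitly in dial
                                # strings, e.g. /net.alt/tcp!example.com!80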
> Why is reading plain text out of /dev/mouse and writing plain text to /dev/draw an improvement over how Linux or OS X or Windows does it?
It's very useful because you don't need a C FFI to access OS functionality, graphics libraries and the like. Any language whose implementation lets you read and write files can already do a lot of things (including GUIs, IPC and so on). There are very few wheels to reinvent. Furthermore, any program that wants to integrate with yours needs to do nothing other than open, read, write and close files. You can expose any function or configuration variable of your programs through files -- and all you need for that is to read and write byte or UTF-8 streams.
Things go a bit further than just "IPC is done by reading and writing files". It's only one of the principles behind Plan 9's approach to files and filesystems.
You might get a better idea by watching a bit of Russ Cox's tour of the Acme editor, in particular the couple of minutes starting around 12:40 [1]. In it he shows how the contents of the various windows in the editor are available as files in the filesystem, which means that editor macros can be written in any language that can access the filesystem, without any special bindings.
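The gist of what he shows: with acme running, its windows are served as a small filesystem (mounted at /mnt/acme on Plan 9), so any script can drive the editor; a rough sketch, with a made-up window id:
cat /mnt/acme/index            # one line per open window: id, sizes, name
cat /mnt/acme/4/body           # the full text of window 4
echo clean > /mnt/acme/4/ctl   # control messages, e.g. mark window 4 as unmodified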