Hacker News
Plan 9: The Way the Future Was (2003) (uri.edu)
78 points by ktamura on Dec 18, 2015 | 41 comments



> it looks like Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor.

I really don't think so. Unix caught on because it was (initially) available for free with source code. It was freely available at the time because the Bell System, as a regulated monopoly, was prohibited from getting into the software business. Universities loved it as an object of study, and it spawned many derivatives, also freely available. After the breakup of Bell, they started marketing Unix.

Plan 9, in contrast, was not released for free and had restrictive licensing, because Bell Labs was by then under AT&T. I think it is hard to overstate the impact of this difference.

Linux became popular first because it was free, and because it invited hacking, and then because the combination of these caused a snowball effect.

I would like to add that Plan 9 is in fact very compelling, but the advantages are maybe hard to appreciate. A big issue was also that it initially supported a limited set of hardware, because it didn't use the hardware BIOS.


And I'm going to go further and claim: Plan 9 is contentious. It continues where Unix left off and pushes the idea of radical simplicity of design.

The "Worse Is Better" essay[1] illustrates a bit of the controversy over radically simple design, which Gabriel contrasts with "doing the right thing."

The philosophy that led to Plan 9 is articulated in "cat -v considered harmful": "One thing that UNIX does not need is more features. It is successful in part because it has a small number of good ideas that work well together."[2]

I am firmly convinced that this is an unpopular position. People really do want more features, and are not really concerned about how well they work together ... until they don't, in which case they have a reason to post an incredulous condemnation.

[1] https://www.dreamsongs.com/WorseIsBetter.html

[2] http://harmful.cat-v.org/cat-v/unix_prog_design.pdf


Eh, "the right thing" is in large part about simplicity and uniformity of interface, not just punting on corner cases to save 100 lines of implementation. That Plan 9 managed to keep its LoC down is a credit to Pike et al., but (in part) because Plan 9 actually pushed "everything is a file" far enough to be useful, Plan 9 is -at least compared to Unix- way, way closer to "the right thing" than to "worse is better".


People also wanted a faster horse.


Those people got motorbikes.


Go has connections back to Plan 9. Pike and Thompson are credited as designers on Plan 9, and Russ Cox did a ton of work on it. (Pike's wife Renée French drew Plan 9's bunny mascot Glenda, and the Go gopher.) Go's toolchain was written in Plan 9-style C until it became self-hosting. I think it even inherited that quirky asm syntax that poor illumos dude didn't like. Plan 9 introduced UTF-8, and of course Go uses it, though most new projects today would use UTF-8 anyway.

I wonder if the team members' experience designing an OS made them a bit bolder about doing some things differently from most of the ecosystem around them, like starting with their own ABI (everything passed on the (variable-sized) stack) and static linking.

There's certainly a focus on networked uses in both Plan 9 and Go, and the lightweight threading (for apps that juggle a lot of clients but spend a lot of time waiting on other machines) and the servers in the stdlib (including HTTP/2 by default in 1.6!) are part of that.

"Everything is a file" makes me think of interfaces like io.Reader/Writer in Go. I remember as a newbie being impressed by how elementary it was to string together a pipeline. I suppose you can string together pipelines fine in other languages too, but I still think Go does a pretty good job of keeping it simple (a couple of method definitions get you started) while staying clear on the essentials (when things block, what errors look like).

Anyhow, I'd really love to hear more about the connections back to Plan 9 from someone who knows about them.


You're pretty spot on with the io.Reader/Writer idea. In Plan9 everything is a file, which means that - potentially - everything implements a read/write capability. If everything implements read/write and is designed to be interacted with as such over a unified protocol, such as 9P in Plan9, you end up with a system that can be driven through files alone, leading to a high level of abstraction where the only concern is reading and writing, and communication becomes an afterthought. Go and Plan9 both operate with the network in mind. Plan9 can leverage namespacing and its file servers to export more or less all of the system, or import however much is desired from another. Go adopts the same simplicity, providing ways to operate logically so that communication between things (network, channels, etc.) is a lesser concern.

As I mentioned in another comment, a lot of Go's nomenclature and idiom is derived from Plan9.

http://man.cat-v.org/plan_9/2/dial

http://man.cat-v.org/plan_9/2/thread

http://man.cat-v.org/plan_9/2/print


You know, I've always loved the "everything is a file" religion. That is, until I tried to actually implement it in an environment[0].

Files, network connections, program interfaces, local IPC, and hardware are completely different, and that difference matters in almost any interaction you make. They do have some common behavior, but one can't just make them look alike without losing a lot.

[0] http://hackage.haskell.org/package/uniform-io


> Anyhow, I'd really love to hear more about the connections back to Plan 9 from someone who knows about them.

Learn about Limbo and see how much of Go resembles it.

http://doc.cat-v.org/inferno/4th_edition/


Furthermore, Go resembles Plan9's C libraries in many ways. Channels as chan, the concurrency support, and the focus on read/write are very much Plan9-isms.

http://man.cat-v.org/plan_9/2/


  > Some Plan 9 ideas have been absorbed into modern Unixes, particularly the more
  > innovative open-source versions. FreeBSD has a /proc file system modeled
  > exactly on that of Plan 9 that can be used to query or control running
  > processes. FreeBSD's rfork(2) and Linux's clone(2) system calls are modeled on
  > Plan 9's rfork(2). Linux's /proc file system, in addition to presenting process
  > information, holds a variety of synthesized Plan 9-like device files used to
  > query and control kernel internals using predominantly textual interfaces.
  > Experimental 2003 versions of Linux are implementing per-process mount points,
  > a long step toward Plan 9's private namespaces. The various open-source Unixes
  > are all moving toward systemwide support for UTF-8, an encoding actually
  > invented for Plan 9.

This is interesting. Anyone know of other ways Plan9 has influenced Linux etc. since 2003?


That's probably mostly it. Plan 9 doesn't really have that much in terms of features. Its greatest strength is in how they go together. Having everything as a file is about the only thing I can think of that unixes don't have (it's the most important feature, even if it's not initially clear why). Oh, and on the programming end, maybe the simplified API and the CSP-based threads/concurrency library.


I would say union file systems. Recently overlayfs got merged. It doesn't work like Plan9's, but Plan9's version is probably impossible within POSIX constraints. You probably shouldn't use overlayfs to merge /bin directories so you could get rid of PATH, as is the case in Plan9.

An interesting omission in Plan9 is mmap. Among other things, this makes most programs work with remote files quite easily. There is no separate path for local and remote files, because either way you have to read/write to them. That also makes local access slower than it could be, but that is the price of network transparency. You could hammer mmap into working with remote files, but it would probably have too many corner cases. That's at least how I understand it. Please correct me if I'm wrong.


If memory serves, Linux is just getting around to proper private/dynamic namespacing as of some recent version, or so I was told by a friend contributing to the kernel. Linux has been slow but steady at adopting features out of Plan9, as listed in the paste you provided. Most of the features are incomplete once ported, however. Not all features, nor the depth of things like /proc, are fully transferable to *nix, due to architecture, kernel design, or other oddities. Plan9's kernel, while small, deviates from most common kernels in that things such as networking are built in and expected from the moment booting takes place.


Plan 9's spiritual successor is Inferno, which is open source. https://en.wikipedia.org/wiki/Inferno_%28operating_system%29

Inferno seems like the obvious choice for the IoT.


Not sure I'd call it a successor as much as a cousin. Inferno is certainly based upon Plan9, but it is not Plan9. If you want a more up to date or maintained Plan9 check out 9front[1] or 9atom[2].

[1]: http://9front.org/ [2]: http://www.9atom.org/


I've only ever toyed with it, but Inferno has some interesting properties. They've got a system that will run the Dis virtual machine on a 32-bit CPU without an MMU.

I remember a couple of Lucent telecom products from the mid nineties that ran on it but I'm not sure what else was out there.


Does anyone use Inferno in a commercial product right now?


Not me, but I tried very hard to put it in a device that's deployed in > 2 million field locations. I did manage to get Russ Cox's libtask into the product, though.


I know Vita Nuova uses it in their consulting. I've heard of other fringe cases for various projects or systems.

http://www.vitanuova.com/


Yeah, Vita Nuova's stuff is interesting - running each node on a supercomputer as an Inferno instance sounded cool - but I guess the fringe cases are what I'm curious about.


One of the core problems of Unix is this focus on textual streams.

Every configuration file, every proc-style file, every interchange format ends up with its own unique take on what format is easiest for it to present/consume. Every program has its own text parser and generator, generally the bare minimum needed to deal with the text it expects.

And at some point, when taking a bird's eye view at all of this, it just turns into an unpredictable and insecure mess.


> One of the core problems of Unix is this focus on textual streams.

> ...every proc-style file

You're conflating Linux with UNIX, wrt what /proc emits[0], or if it's even the interface to do what you think you want[1].

[0] https://illumos.org/man/4/proc

[1] /proc/mounts vs getmntinfo(3)[2]

[2] https://www.freebsd.org/cgi/man.cgi?query=getmntinfo


Text-based virtual files also feature predominantly in the Plan9 model, if I'm reading it correctly, which is why I bring it up.


I think this is a very valid point and I've only heard it said once before in an HN discussion about the merits of Powershell. The idea of "do one thing and do it well" requires that things can communicate relatively easily, which is certainly not always the case, especially with more complex data. I think Powershell deals with this by allowing for (IIRC XML-encoded) intermediate representations of data to be passed through pipelines.


While I agree with the article's reasoning, I've never seen the argument about performance. Could it be that the overhead of Plan9's "everything is a file" abstraction was too much to handle when compared with the more pragmatic UNIX sockets?


I don't have an immediate factual retort, but I personally have never experienced any performance issues. Do you have some form of benchmark or evidence you'd want to see on Plan9's performance?


Every year at the Plan9 International Workshop, we tussle with how to improve networking performance, particularly for streaming. Every 9p request needs to be answered before the next; there's no read-ahead. This makes it particularly unsuitable for streaming, for instance.

We have tried queued 9p responses, multiple 9p packets in the same tcp packet, extensions to the protocol. Nothing seems satisfactory enough to make it into the core.


That is very interesting to hear.

Given the need to retrofit X11 with various mechanisms for fast local access to framebuffer or graphics accelerator, I have never been convinced that the 8 1/2 and Rio window systems, in which all drawing happened through streams of bytes, were complete and future-ready graphics systems.

I wonder if the "everything is a file" principle is actually Plan 9's weakness, an oversimplification of the nature of OS resources. What if we kept Plan 9's structure/topology (private namespaces, network transparency where possible) but redesigned the fundamental file abstraction? Maybe this needs to happen at the level of a new instruction set which provides primitive data-block transfer operations that can be implemented efficiently, e.g. by memory mapping or DMA on local hardware, and which automatically fall back to streaming in the network case. Not new silicon, but a fast virtual machine which provides network transparency, and high performance where possible.


One thing to remember is that Plan9 was born as a research project. Another is that we don't consider it a jack-of-all-trades OS. It was built by programmers to facilitate their daily work. Importing namespaces is an amazing abstraction. You can do some truly wondrous things.

We can play Quake though so it's not all bad :)


I was mainly curious about the performance hit of drawing operations. But your subjective assessment is enough to understand that the penalty is lower than I expected.


A lot of drawing on linux already goes over a "file", a PF_UNIX socket from the application to the X server - with the added penalty of an extra round trip through the kernel, whilst on Plan 9 you go directly to the kernel (as long as the application and display are running locally).


How does that work? Does Plan 9 run the display in the kernel?

Also communication with the X server can use shared memory - AIUI the modern GUI toolkits render everything to buffers themselves rather than using X drawing protocol calls, so I suspect drawing on modern linux is mostly a shared-memory thing.


Yes, it runs in the kernel on Plan 9. And sure, these days there are various optimizations using shared memory for a lot of the gfx, or going via OpenGL when using X11.

I'm just saying unixes have done it a similar way to Plan 9 since almost forever - and it's not that big of a deal.


I can try to run some form of rudimentary benchmark tomorrow and I'll try to get back to this thread if I can.

As for drawing: Plan9 is intended to be used graphically, although the interface is mostly text (which is very handy). Images and window usage in general are as run-of-the-mill as you would expect in a graphical environment.


So perhaps Plan9 can at least serve as a clear definition of where Linux ought to move over time. I often work with old software this way: I first try to figure out what I actually want, regardless of the limitations of existing software. Then I look at what we have and try to figure out if there is a gradual path that can take us to the ultimate goal. I prefer this over incremental improvements without any clear end goal.


I was sure this article was going to mention FUSE, but I guess it was still in the future in 2003.


Fuse is kind of a cheap hack in comparison.


It is a cheap hack. Plan 9 is designed around 9P, where FUSE is a kinda shitty but useful bolt-on. Why they didn't just steal 9P instead of making FUSE is beyond me.


9P is only a protocol, and not a very good one for many purposes.

9P in Plan9 is not great because it is some super-advanced technology (it is actually quite simple), but because of its ubiquity. You can use 9P from Linux, with or without FUSE, but this won't give you a completely consistent system with a single interface the way Plan9 is.

Another comment made some analogies between Plan9 and Go. gofmt is not the most awesome program ever written, but a consistent style for all the code written in the language is one of the most appreciated features of Go. You could write a cfmt, a cppfmt or a jsfmt program, but that wouldn't give you the advantages of gofmt in those languages.


For what it's worth, Plan9's C was written to a standardized format that was strongly enforced. All, or at minimum most, of Plan9's C code was written in a single style, similar to how Go (contributed to by Plan9 developers) requires a specific format, but without the absolute rote software-side enforcement.

See:

http://man.cat-v.org/plan_9/6/style

http://doc.cat-v.org/plan_9/programming/c_programming_in_pla...

http://www.literateprogramming.com/pikestyle.pdf



