Inferno-OS: distributed operating system where everything is a file (github.com/inferno-os)
209 points by nateb2022 on April 24, 2023 | 123 comments



Some context off the top of my head.

In the 90s our sector started to look (again) at VMs as viable alternatives for software platforms. The most prominent example of that is Java. But Inferno was born at the same time on the other side of the Atlantic Ocean[0], sharing the same goals: a universal platform intended for set-top boxes, appliances and the like, with the same write once, run everywhere philosophy.

Inferno programs ran on top of a VM called Dis and were written in Limbo (designed by Rob Pike). If you use Go nowadays, you're using a descendant of Limbo, as both share a lot of syntax and concepts (and Limbo descended from Alef, another Bell Labs language, but that's another story).

Going back to the 90s: this space was in a very "exploratory" state then, and any technology had to prove its superiority to gain adoption. For example, Inferno's creators implemented Java on Dis[1] simply to demonstrate how fast Dis was vs the JVM.

I don't know exactly why things went the way they did, but I suppose Sun had deeper pockets to push its technology, and Inferno ended up as the rarity we have today.

From a pure tech standpoint, I've studied the Dis VM to write my own (incomplete) interpreter[2] and, IMHO, its design is far less abstract than the JVM's. It seems to be too low level and tied to that era's processor design. That makes it less future-proof (but that's only my own point of view, of course).

[0] https://www.vitanuova.com/inferno/ [1] http://doc.cat-v.org/inferno/java_on_dis/ [2] https://github.com/luismedel/sixthcircle


There were many:

1. Java,

2. Inferno,

3. Oberon System (Niklaus Wirth, ETH Zurich),

4. Squeak (Apple, Walt Disney Imagineering)

5. Erlang (Ericsson, Joe Armstrong) still going strong.

IMHO, technologically Java was the worst and least innovative. Platform adoption is "Worse is Better", so Java won.

Oberon, Erlang and Squeak were all amazing in their own way.


Inferno was a Plan9 derivative, kind of a research OS, not necessarily what people wanted.

Squeak/Smalltalk had the issue of programs and runtime images being conflated, making version control awkward (at the time at least).

Erlang is a functional language.

All the above had relatively small and poorly documented libraries.

Java combined an imperative, C-like and relatively simple language with an evolvable bytecode-oriented VM, a garbage collector, large and well documented libraries + lots of tutorials, and really intense VM engineering. Easy to forget, but Cliff Click and colleagues were the first to show that a JIT compiler could produce code competitive with gcc; Sun also bought/implemented stuff like deoptimization, which is still relatively advanced, and then gave that tech away for free. From the end user's POV all these things mattered a lot.


You express the Worse is Better philosophy beautifully: worse is preferable to better in terms of practicality and usability.

Innovation loses to taking small steps to improve the worse alternative. Community is built around tinkering and fixing errors, getting around design failures.


To some extent yeah. I definitely became way less radical and more incremental in my own design approach over time. Still, "better" does contain usability, performance, docs and other aspects of practicality. Like, those aspects do genuinely make things better. Innovation doesn't always improve things, it can be a dead end of bad ideas too.

Java was a pretty major clean break when it came out. It was probably right at the limit of what could be done innovation-wise whilst still being targeted at the mainstream.


With the possible exception of Erlang/BEAM, none of the candidates for "better" the OP cited ever (ever) actually got tested in the field -at scale- to see if they actually deliver the goods. So it is merely an elitist opinion of a niche subset that apparently thinks its take on these matters is beyond question. Gabriel wrote his essay about two actually deployed alternatives, with the "worse" option (UNIX) coming out on top. LISP was in fact the incumbent and UNIX the scrappy upstart. None of the OP's cites were incumbents. They are just exceptionally safe bets to get on a high horse about.

Also agreed: Java and the JVM were not the "worse is better" options at the time of adoption. Hype alone did not create the massive shift to Java that occurred in the late 90s. It was a genuinely positive development for the practice and people were getting results. It was Java's 2nd act of moving into "enterprise" (and Sun's mismanagement of that effort) that created the language's current sense of 'heaviness'.


People who designed Java are on the record expressing elitist reasons for Java's success and design goals. It boils down to: Java classes allow code monkeys to write spaghetti code that is forced to stay inside classes designed by others.

The reason I always hated Java was that it was designed by good programmers for bad programmers, in a looking-down-on-them way, not by good programmers for themselves to use.

BTW, the Java design team licensed the Oberon compiler sources years before, to study them.


Citation needed regarding Oberon.

The code had been available for years before Oak was born; it is even in the book.


People who profess Java's heaviness only reveal a lack of historical background.

Not only were CORBA and DCOM much worse, Java EE was the reboot of Sun's Objective-C framework for distributed computing. And those lengthy Objective-C methods are quite the pleasure to type without code completion.

While they certainly could have done better, it was already a huge improvement.


For most IT programmers (who were now the Java coders) CORBA was esoterica. So the JEE specs were never appreciated for their clarity, and the necessary abstractions were deemed "ceremony". Sun did a great job on the specs. They dropped the ball on (a) the pedagogical front (your precise point actually), and (b) crippling the JEE specs to induce the likes of IBM to invest in J2EE app server development.


Sun bought the whole team behind the Self [1] language to make its HotSpot JIT.

[1] https://en.wikipedia.org/wiki/Self_(programming_language)


Java is really not simple, and not at all C-like, except in some of the syntax.


It is, in the sense that Objective-C was the inspiration.

https://cs.gmu.edu/~sean/stuff/java-objc.html


Quick note for folks not familiar with language history: Java is a C-like language.


Syntax is similar, semantics not at all.


It’s ALGOL all the way down.


> IMHO, Technologically Java was the worst and least innovative. Platform adaption is "Worse is Better" so Java won.

In the words of Guy Steele: "And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?"

* https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/...


Java was perceived (quite rightfully) as a C++ with the rough edges filed off. It gained a ton of mindshare by making it trivial for C++ developers to jump on the bandwagon. This was critical because in terms of desktop development (that's how all apps were built in the 90s) and even on the server side C++ was the undisputed king of app development.


> Java was perceived (quite rightfully) as a C++ with the rough edges filed off.

Java only has a familiar surface syntax.

The semantics have almost nothing in common with C++: different object model, memory model, execution model, ...

That's the genius of both Java and Javascript. Make it look familiar without being one at all.


Indeed, it is based on Objective-C semantics.


It was a great time! There were some interesting developments also in the open source space:

https://argante.sourceforge.net/concept.html


> In the 90s our sector started to look (again) at VMs as viable alternatives for software platforms.

Now we are back at it with WebAssembly, as if it was something new.


WASM started making its way into browsers just as we removed the last remnants of Java applets. Now we have to re-secure a whole new stack.


At least WASM is first-party code written by the browser developers. I'll take that any day over a pile of third-party plugins.


the WASM people knew that going in and it's part of the design.

it's also why WASM is so slow compared to native executables; security guarantees have a performance cost.



Hum... You are aware that those are one case where the attacker gains execution capabilities inside the sandbox and one hardware vulnerability that affects every single language, right?


Gaining execution capabilities inside the sandbox is already good enough to compromise its behaviour, e.g. everyone gets true back when is_admin() gets called.

More devs should get security training.


Yes, but it's a complete mischaracterization to claim it's a failure of the sandbox.

In this specific case, it is quite a big deal to add write and execute controls to WASM memory, so it requires more justification than "I can do stack underflow attacks on my C code". Even though "I can do stack underflow attacks on my C code" is relevant information.


well it's not perfect, no.

my point stands though. WASM is not ActiveX where the whole applet has admin permission on the computer.


In the abstract; in practice it depends on which runtime is being used.


Nobody is pretending that it's new.

In fact, everybody is trying very hard to use all the experience we got with the 90s VMs, and getting annoyed here and there when it's different enough that they can't.


God damn it, I missed the holy grail era of the computing industry. It's all so drab compared to the 90s through the early 00s. There was so much more experimentation and willingness to try very new things.


I started my career in the early nineties. It's different now. Not sure if worse or better. Different.

What I enjoyed about the 90's is that there was still plenty of meaningful work implementing published algorithms. I remember, for example, coding Delaunay triangulation, our own quaternions, etc. in C++. These days you just download and learn an API to do it.

What this means is now you can build comparable stuff much quicker. It's all been implemented and you're mostly gluing libraries.

Gluing stuff is a lot less fun than doing it from scratch and you learn much less but it's the only way to stay competitive.

If you want to build something rapidly to show off though, today it's much easier.

Web development seems to suck more with every passing year. I wasn't all that into it in the 90's, as it was very nascent and primarily for text and images due to bandwidth limitations. But VRML was mighty impressive if you used it on a LAN.

Web devs seem to reinvent wheels more than other types of devs. Everything is hailed as a breakthrough in productivity that subsequently fails to materialize. They impress each other with how concise the framework du-jour makes the code, forgetting that troubleshooting ease is much more important.

Troubleshooting stuff was much easier in the 90's, even with the simple tools. Not many systems were distributed. It was considered craziness to build distributed software unless you really needed it and had a massive budget.

Everyone builds distributed systems now, whether they need it or not.

AI/ML is amazing today. None of this was possible in the 90's; we had nothing approaching the crunching power required to produce decent results. So a lot of ML work was deemed a failure even though it's being vindicated now.

I think with the resurgence of ML the software engineering field is exciting again. Things were rather boring for the last two decades when hipsters kept reinventing mundane stuff only to go back to tried tested and true (vide resurgence of SSR or RDBMS).

Overall, I think the 90's were fun. More fun than what followed. ML/AI might make things fun again.


> Web devs seem to reinvent wheels more than other types of devs. Everything is hailed as a breakthrough in productivity that fails to materialize. They impress each other with how concise the framework du-jour makes the code, forgetting that troubleshooting ease is much more important.

I really wish more people could see this as you (and I) do.

Web development is really its own horribly sheltered community. They are driven to deliver at faster and faster paces, and they couldn't do anything properly even if they wanted to.


My graduation project, a particle engine in OpenGL, is nowadays a basic feature in any engine worth using.


I was around then, and I was/am interested in things like this, but I paid zero attention to them because I didn't know about them; things like these just weren't as well known as they are now.

there weren't entire categories of Wikipedia articles about things like this. there weren't widespread communities welcoming new users or lots of YouTube videos explaining everything and getting viewers excited.

yes, that was the time to be into Inferno and projects like it, but the entire internet was a different place.

Maybe I just wasn't looking in the right places for this stuff or maybe I was too invested in other things, but I recall these things being terribly hard to penetrate even if you did find out about them.


Nice piece of history.

From what I remember reading, the JVM is a stack machine, while Dis is a register machine, right? Is there any other significant difference that would make Dis less "future proof"?


Why is a stack machine more future proof?


It’s less work to port it to a future machine that you don’t know the number and type of registers of yet.

If a future CPU has more registers than your VM, you have to either cripple your VM by not using all registers of the new hardware or write code to detect and remove register spilling that was necessary on the smaller architecture.

If, on the other hand, it has fewer or a different mix, you have to change your VM to add register spilling.

Either way, your byte code compiler has done register assignment work that it almost certainly has to discard once it starts running on the target architecture.

If you start at the extreme end of "no registers", you only ever have to handle the first case, and do that once and for all (sort of. Things will be 'a bit' more complex in reality, certainly now that CPUs have vector registers, may have float8 or float16 hardware, etc.)

You can also start at the extreme end of an infinite number of registers. That's what LLVM does. I think one reason that's less popular with VMs is that it makes it harder to get a proof-of-concept VM running on a system.
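To make the tradeoff concrete, here is a toy sketch in Go (not actual JVM or Dis bytecode; the opcode names and the three-operand register form are invented for illustration) of computing a + b in both styles. The stack form says nothing about registers, so all register assignment is left to the target; the register form bakes in virtual register numbers that a JIT must later remap or spill.

    package main

    import "fmt"

    // Stack style: operands are implicit, so the bytecode says nothing about
    // registers and a JIT is free to use whatever the real CPU provides.
    func runStack(a, b int) int {
        var stack []int
        push := func(v int) { stack = append(stack, v) }
        pop := func() int { v := stack[len(stack)-1]; stack = stack[:len(stack)-1]; return v }
        push(a) // PUSH a
        push(b) // PUSH b
        y, x := pop(), pop()
        push(x + y) // ADD
        return pop()
    }

    // Register style: the bytecode already names virtual registers r0..r3,
    // which a JIT must map (and possibly spill) onto the target's real ones.
    func runRegister(a, b int) int {
        regs := make([]int, 4)
        regs[0], regs[1] = a, b     // LOAD r0, a ; LOAD r1, b
        regs[2] = regs[0] + regs[1] // ADD r2, r0, r1
        return regs[2]
    }

    func main() {
        fmt.Println(runStack(2, 3), runRegister(2, 3)) // 5 5
    }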


It isn't, unless you consider SPARC the future. SPARC had an interesting register architecture, where making a function call would renumber registers 9-16 to 1-8, and give the function a fresh set of 9-16; IIRC most SPARC CPUs had 128 or so registers total, with this "sliding window" that went up and down as you called and returned, which essentially gave you something like a stack.

The rest of the world's CPUs have normal registers, which is one aspect of what makes register-based VM bytecode easier to JIT to the target architecture, which was one of Dis' original design goals (with an interpreter being a fallback). It also happens that we know a lot more about actually optimising register-based instructions (rather than stack-based), so even if you had to fall back on an interpreter, the bytecode could've gone thru an actual proper optimisation pass.


Not an expert but I'd argue a stack is somewhat more future proof in the sense that it's less tied to a particular number of physical registers, whereas a register machine must be. Which is exactly what makes it a bit harder to optimise for, but that's what abstractions tend to do :-)

The sparc sliding register window turned out to be a very bad idea, but I guess you already know that.


I don't think the concept of register windows is necessarily a bad idea. IMHO, SPARC was flawed in that every activation frame needed to also have a save area for a register window, just in case the processor ran out of internal registers.

I think the Itanium did register windows right: allocate only as many registers as the function needs, and overflow into a separate "safe stack". Also, the return address register was among them, never on the regular stack, so a buffer overrun couldn't overwrite it.

There is a third option besides stack and registers: the upcoming Mill CPU has a "Belt": like a stack, but one you only push onto. An instruction or function call takes belt indices as parameters. Except for the result pushed onto the end, the belt is restored after a function call – like a register window sliding back. It also uses a separate safe stack for storing overruns and return addresses. Long ago, I invented a very similar scheme for a virtual machine ... except for the important detail of the separate stack, so it got too complicated for me and I abandoned it.


> The sparc sliding register window turned out to be a very bad idea, but I guess you already know that.

Yikes, that triggered flashbacks of 8086 segmented memory.


It wasn't really like that AIUI. Using a stack introduced hardware complexity while also serialising instruction processing (because you only work from the top of the stack, unlike a register set where you can access any part of it at any time) which caused the chip not to be the raging speed demon the designers thought it was going to be.

I'd very much like to understand what was going through the SPARC designers' minds when they did that. Looking back on it with my own current understanding of CPU designs and all that, they seem to have made some incredibly basic mistakes, including designing the hardware without talking to the compiler writers (a cockup the Alpha designers very definitely didn't make). It's all very odd.

Another mistake they made was apparently deciding to leave out instructions based on counting them in the code – if an instruction didn't appear very often, they omitted it. Sounds reasonable, but that meant they initially left out the multiply instruction, which might not have been common in the code but was actually executed quite often (e.g. in array lookups), and there were complaints about the new SPARCstations, with their new superior chip, being slower than the 68000-based machines that preceded them. Hardware multiply was later added.


Among all the other reasons stated, like independence from platform registers, stack-based VMs are really easy to implement -- you don't need to worry about register allocation in your VM code generator; you can leave that to the stage of the VM that generates native code, which would need register allocation even on a register-based VM.


If you want to explore more OSes that look like this, [Fuchsia](https://fuchsia.dev/) is a good one to look at. Rather than having a file be the core primitive, it has an "actor" as the core primitive, and processes send messages to these actors to do anything.

Interestingly, Plan9 started moving in this direction in their later papers. They'd walk through all the many different file operations you'd need to get something accomplished, and then say "but we made a library which does all these things, so you don't need to do it yourself," which kinda defeats the purpose of having everything be a file--and brings you toward the Fuchsia approach.


What can you do with Fuchsia today? I'd imagine that it's not usable as a mobile OS or a desktop OS.


Google released some Fuchsia-powered smart home devices. I believe that's roughly the extent of its deployment thus far.


That's roughly the extent of interest in Fuchsia within Google itself.


It was supposed to be used in place of one of the Android versions (to simplify future updates and disconnect them from manufacturers and cell operators), but it didn't happen…


Good for getting your system pwned, more CVEs than I can count!

Big shame though, really love the idea behind it :/


You can type a command and if the application is missing it will install it on the fly and the command will just work.

I wish Linux distros had this feature as well.


Something like command-not-found is as good as it gets. Because you don't want to run an arbitrary program based on user input. You want to be able to know beforehand which program is going to run. Which you specifically cannot be sure of when you make a typo (and you don't know beforehand that you make a typo).

It's why rm invokes -i by default and requires -f to avoid interaction on some OSes. Users can and will make mistakes with it, especially new users. And everyone makes typos.


I am using tea (the multi-platform successor to brew, from brew's creator) and you can set it up so that if a command is missing it will install it.

As you say, it's a bit scary that any typo can install random commands lol

I already installed several 1-character languages by mistake

$ v == https://vlang.io/


The final form of typosquatting attack vectors.


They do. I always disable it because it is annoying if you typo a command.


I imagine sl would be even more annoying if you had to download it.


If the download and installation is fast enough, it feels smooth, and not annoying.

If a distro were to appear with this feature enabled by default, I hope they would have the foresight to not include "sl", or at least put it in the default blacklist.


The big problem with Fuchsia is that I work with files.


> in their later papers

Can you point out which ones? I thought I was pretty familiar with the Plan 9 papers but don't remember that part. Or maybe I just missed it?


How "alive" is the Fuchsia project today?


Deep 6


Some additional recommended reading...

Lynxline's Inferno Labs on porting Inferno to the Raspberry Pi:

https://github.com/yshurik/inferno-rpi (the related web site http://lynxline.com/ is currently not reachable)

David Boddie's port of Inferno to the Ben NanoNote, a tiny MIPS-based handheld from 2010:

https://www.boddie.org.uk/david/www-repo/Personal/Updates/20...

https://en.wikipedia.org/wiki/Ben_NanoNote


I'm disappointed to see that the binaries for the Raspberry Pi aren't available anymore at the bitbucket linked on https://github.com/yshurik/inferno-rpi.

I've run Plan9 in the past on the Raspberry Pi and found it to be a neat experience. Inferno I've only run under Windows, which seemed kind of pointless.


The zip file available in the github releases of the project contains the final binary release (0.6):

https://github.com/yshurik/inferno-rpi/releases/tag/v0.6

Note that this Inferno port only works on the original Raspberry Pi 1 (probably also the 1B and the Pi Zero).



It wasn't fully FOSS till 2021:

> The Inferno Business Unit closed after three years, and was sold to Vita Nuova Holdings. Vita Nuova continued development and offered commercial licenses to the complete system, and free downloads and licenses (not GPL compatible) for all of the system except the kernel and VM. They ported the software to new hardware and focused on distributed applications. Eventually, Vita Nuova released the 4th edition under more common free software licenses, and in 2021 they relicensed all editions under mainly the MIT License. [1]

[1] https://en.wikipedia.org/wiki/Inferno_(operating_system)


I was looking for the framebuffer file: if everything is a file then the framebuffer must be a file (as Plan 9 [1] intended), and changing pixel (123, 324) to some color should be a simple file update. I found this funny line [2]:

    gscreendata.data = (ulong *)(va+0x800000); /* Framebuffer Magic */
[1] https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

[2] https://github.com/inferno-os/inferno-os/blob/48f27553574bf5...
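For what it's worth, if the framebuffer really were exposed as a plain file of packed 32-bit pixels, the update would be just a seek plus a 4-byte write. A minimal Go sketch, assuming a hypothetical /dev/fb path, a 1024-pixel-wide linear layout and little-endian XRGB format (none of which are taken from the Inferno sources):

    package main

    import (
        "encoding/binary"
        "os"
    )

    func main() {
        const (
            width = 1024 // assumed scanline width in pixels
            bpp   = 4    // assumed 32 bits per pixel
            x, y  = 123, 324
        )
        fb, err := os.OpenFile("/dev/fb", os.O_WRONLY, 0) // hypothetical framebuffer file
        if err != nil {
            panic(err)
        }
        defer fb.Close()

        pixel := make([]byte, bpp)
        binary.LittleEndian.PutUint32(pixel, 0x00FF0000) // "some color" (red, assuming XRGB)

        off := int64((y*width + x) * bpp) // linear layout: row-major, no padding
        if _, err := fb.WriteAt(pixel, off); err != nil {
            panic(err)
        }
    }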


I wonder why this pattern of managing distributed systems at the OS level is not widely adopted, and why we end up with various customized systems handling distributed-system issues at the application level.


Hard to say, here is an interesting take on it from a few years back: https://cs.brown.edu/people/malte/pub/papers/2013-apsys-dios...


… legacy applications and dominance of the win32 and posix APIs.

It’s proven easier to implement many of the features of Plan9 lineage and Erlang via containers and orchestrators, rather than porting software and reteaching developers.


I was curious to see what the language used in this project, Limbo, looks like. And here it is: https://www.vitanuova.com/inferno/limbo.html

It's interesting to me how channels in Limbo look and work very similar to Go's channels.
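A tiny Go snippet with the (approximate) Limbo equivalents in comments shows how close they are; the Limbo lines follow the reference linked above and should be read as illustrative rather than exact:

    package main

    import "fmt"

    func main() {
        c := make(chan int) // Limbo: c := chan of int;

        go func() { // Limbo: spawn producer(c);
            c <- 42 // Limbo: c <-= 42;
        }()

        v := <-c // Limbo: v := <-c;
        fmt.Println(v)
    }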


Makes sense, given Rob Pike was one of Limbo's creators.


The plan9 ecosystem is amazing. It's a pity we work with much older abstractions - although they still didn't consider security to be an issue, and I can't comprehend how we'd jerry-rig it to work without trusting anything.


I helped build a large message switch using something called QNX, which supported some very basic security mechanisms, but it definitely wasn't bulletproof by any definition. The way we dealt with that was by treating the whole cluster much the same way that you would treat a single instance: if you had user access to any node, it was assumed you had super-user access to the whole cluster, and all other security was handled at the physical and application layers. Given that all of this was written in classic 'C' I don't doubt that there were many ways to exploit that system. But the niche application and the very limited way in which it was connected to the internet (it was mostly a replacement for a very large number of telexes) helped us to get away with it.


> Inferno represents services and resources in a file-like name hierarchy. Programs access them using only the file operations open, read/write, and close. `Files' are not just stored data, but represent devices, network and protocol interfaces, dynamic data sources, and services.

Great. But `open()` needs to be hyper-extensible. Think of URI q-params and HTTP request headers as `open()` options. Think of all the `ioctl()` APIs that have arisen because "files" are a relatively poor way to represent devices. But most importantly `open()` needs to be async.

I'd be very interested in an OS where all system calls that can block are async, and where something like io_uring is the only way to make system calls.

The "everything is a file" thing is fine, but we need more innovation around that if that metaphor is going to stick.
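As a purely hypothetical sketch (none of these names exist in any real OS or library), the userspace shape of such an interface in Go might look something like this, with options standing in for query params/headers and a completion channel standing in for an io_uring-style submission:

    package main

    import (
        "context"
        "fmt"
        "io"
        "os"
    )

    // OpenOption plays the role of URI query params / HTTP request headers
    // attached to the open itself, e.g. {"cache", "no"} (purely hypothetical).
    type OpenOption struct{ Key, Value string }

    // OpenResult is delivered on the completion channel instead of a blocking return.
    type OpenResult struct {
        File *os.File
        Err  error
    }

    // OpenAsync submits the request and returns immediately, io_uring-style.
    // Here it is only emulated with a goroutine over the ordinary blocking call;
    // a kernel taking this seriously would accept the options and queue a real
    // submission entry.
    func OpenAsync(ctx context.Context, name string, opts ...OpenOption) <-chan OpenResult {
        out := make(chan OpenResult, 1)
        go func() {
            f, err := os.Open(name)
            select {
            case out <- OpenResult{File: f, Err: err}:
            case <-ctx.Done():
                if f != nil {
                    f.Close()
                }
            }
        }()
        return out
    }

    func main() {
        res := <-OpenAsync(context.Background(), "/etc/hostname", OpenOption{"cache", "no"})
        if res.Err != nil {
            panic(res.Err)
        }
        defer res.File.Close()
        data, _ := io.ReadAll(res.File)
        fmt.Print(string(data))
    }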


> I'd be very interested in an OS where all system calls that can block are async,

Welcome to the WinRT application model as it was introduced in Windows 8.

Sadly it was yet another reason why the Windows development community rebelled against it.


The Readme says that things such as network interfaces are abstracted as files, but isn't Linux the same way?


It is - until you want to do something interesting, then it's all ioctl(), recv(), and the various gather calls!

I'd like to see an OS that drops the concept of files. Files are very low level (generally a stream of bytes) - the app has to interpret it as config, or data, etc "manually". An OS should be providing higher-level data management, and insisting that is what is used.

(And please, not SQL. It's only a little higher level than raw data, and has serious interface problems that permit injection attacks ... the db equivalent of buffer overflows.)


The IBM System/38 and its descendants (the AS/400, which was renamed to iSeries, then i5, now just the horribly unsearchable IBM i) sort of did/does this, except 'newer things' ported over, like modern web stuff, generally use the Unix portability layer, so it's not enforced. Also, it has an SQL database built in at the OS level.

The original/legacy OS is quite interesting and high level. It's object based. You can only use an object's built-in methods, e.g. you can't WRITE arbitrary bytes to an Application object or a User object. They are accessed through a giant single address space, and whether they are on disk or in RAM is transparent to the higher layers, i.e. if an object isn't in RAM it will be paged in.

The designer of the System/38 etc., Frank Soltis, wrote a couple of interesting books on it - Inside the AS/400 and its 2nd edition, Fortress Rochester: Inside the iSeries. They're out of print and expensive unfortunately; I wish I hadn't given my copy away.


IBM Redbooks are an alternative source of information for that.


> It is - until you want to do something interesting, then it's all ioctl(), recv(), and the various gather calls!

Indeed, Plan9 and Inferno don't have ioctls. For special operations ("control"), there is often a file named "ctl" to which you can write commands. So you open the file, write some text, see if the write succeeds, and close the file. E.g. to make a TCP connection, to flush a write buffer to disk, etc. The commands are typically just plain ASCII. That's easy for scripting, and there is no need to import C struct types into that new programming language you are using/developing. Of course, the commands have to be parsed (for kernel devices, this happens in the kernel), typically with some simple tokenization functions. When the commands become complicated, you could still choose to write binary data, or even straight C structs...
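As a concrete illustration (in Go for readability; the paths and messages follow the classic /net/tcp convention described in the Plan 9 manuals, so treat the details as a sketch rather than a drop-in client), dialing TCP through the file interface looks roughly like this:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Opening the clone file creates a new connection; reading it back
        // returns the connection's directory number under /net/tcp.
        clone, err := os.OpenFile("/net/tcp/clone", os.O_RDWR, 0)
        if err != nil {
            panic(err)
        }
        defer clone.Close()

        buf := make([]byte, 32)
        n, err := clone.Read(buf)
        if err != nil {
            panic(err)
        }
        dir := strings.TrimSpace(string(buf[:n])) // e.g. "4" -> /net/tcp/4

        // The same open fd acts as the ctl file: write a plain-text command.
        if _, err := fmt.Fprintf(clone, "connect 192.0.2.1!80\n"); err != nil {
            panic(err)
        }

        // Data then flows through ordinary reads and writes on the data file.
        data, err := os.OpenFile("/net/tcp/"+dir+"/data", os.O_RDWR, 0)
        if err != nil {
            panic(err)
        }
        defer data.Close()
        fmt.Fprintf(data, "GET / HTTP/1.0\r\n\r\n")
    }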


The purpose of a good primitive is that it can be cleanly composed. Both files and streams of bytes have these properties, because the primitive allows code written by different people / teams / orgs to interact due to the common primitives. The higher level your primitives, the larger your api surface, and the less likely they are to be composable. Do you really want that? I certainly don’t.


> An OS should be providing higher-level data management, and insisting that is what is used.

This assertion seems normative. Could you please expand upon how higher-level data management improves the overall performance and efficiency of the system? Or could you point me in the direction of some good sources? Also, what are the benefits of an OS providing higher-level data management instead of relying on lower-level solutions? Doesn't abstraction lead to less fine control? That is to say, how does insisting on higher-level data management provided by the OS affect the development and maintenance of applications? I've seen some object-centric systems adopt this approach and find it very interesting.


I think the OS should begin with a lower level abstraction - object capabilities, and build out from there.

Rather than a loss of control, capabilities enable fine-grained control over authorization, because a capability both designates a resource and provides the authority to use it.

But pathnames are not useful as capabilities, because they can be easily forged. So at best a pathname could serve to discover resources for which you could then request a capability to access.


Isn't Windows built this way?

Everything in Windows is an object, managed by a centralized resource broker, Ob (the Object Manager).

Windows uses capability-based access to enable fine-grained control. It is EAL4 - Methodically Designed, Tested and Reviewed.

This by itself doesn't prevent Windows from having security issues.


Windows does not have the kind of capabilities I'm referring to.

With proper capabilities, the capability itself provides the authority. There's no need to have separate access control lists or some kind of central resource broker. Each process manages its own capabilities, can create new capabilities and can delegate them to others. And importantly, capabilities can always be revoked, at any time.

See: http://www.erights.org/elib/capability/overview.html, https://en.wikipedia.org/wiki/Capability-based_security

Also see seL4 for an example of this done right.


You seem to be answering your own questions :)

Of course, "abstraction lead[s] to less fine control" - at the lowest (assembly) level, you can do almost anything - and make all the mistakes imaginable. Sometimes you want to genuinely maximise performance, or do things otherwise difficult - fine, use assembly and hit the hardware. But most of the time, software is made to be read, to be trouble free, to build on others' work, and to be written easily - that's when abstraction is valuable.


no, in inferno or plan9, if a fileserver exports the part of its filesystem that contains its network interface, you can mount that on your filesystem and use its network interface. instant vpn! (except 9p isn't encrypted, oops)

the plan9 window system also worked this way, so you could access a window on someone else's display if you mounted it locally; this was how you would run a graphical program on a remote server, by mounting your desktop display in its container on the remote server

linux is not like this at all; you have to use separate protocols for graphics, vpning, and filesharing


> except 9p isn't encrypted, oops

could you somehow mount or pipe the openssl library on top of the network interface via a helper program and then access it that way to encrypt it?


In a word: yes, that's the essence of this system.


Do you have any plans on making a Rio clone for yeso?

Also, unrelated, do you have any advice for a very short-sighted and poorly thought out move to Argentina, assuming the person you're talking to is completely set on it?


(cf. https://news.ycombinator.com/item?id=35693474 for the response; unfortunately the parent comment was flagged at the time i wrote it)


in response to the unfortunately flagged sibling comment, the planned vaguely-rio-like windowing system for yeso is called wercam, but there is only a prototype implementation of it in the repo so far

i decided that probably write() and read() is not a good way to send a large volume of pixels to the display hardware because, say, 2048×1080 32bpp at 60Hz is 530 megabytes a second, and that's a significant fraction of typical bandwidths to main memory, so it is important to strictly minimize the number of memory copies. indeed, even today, i think this is a primary driving concern in the design of usably efficient graphics systems

if you write() some pixel data to a socket, the semantics of write() imply that you can overwrite it (for example with the next frame) as soon as write() returns; and the semantics of read() mean that the pixel data read from the socket by the display server needs to get put in the display server's memory space at a buffer aligned where the display server allocated the buffer. moreover, if the pixel data is intermixed in the same byte stream with control and framing data (such as the w×h dimensions of the following chunk of pixel data, for example) that will also tend to misalign it

so wercam allocates the pixel data in shared memory, on unix in the form of a memory-mapped file, and then transfers the ownership of the shared memory space from the drawing application to the display server — ideally this would be done in such a way that the display server automatically knows that the drawing application cannot continue overwriting it, but on unix that is impossible, so they share access. when the display server is done with the pixel data buffer, it returns it to the drawing application for reuse

writing this, though, i am struck by the realization that you can probably avoid the extra memory copy with write() and read() by the simple expedient of allocating the write() and read() buffers at page boundaries and in multiples of the page size, so that a sufficiently smart kernel can handle write() by marking the written pages copy-on-write rather than actually copying the data, and handle read() by adding a (copy-on-write) mapping for the same pages to the receiving process's address space. then the misalignment induced by the control and framing data is inconsequential, and possibly even helpful, if the framing data is a multiple of the cache line size and potential simd register size, so that the pixel data is cache-line aligned and less likely to create cache contention with whatever framebuffer the display server eventually copies it into
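a minimal sketch of that shared-memory approach in go (assuming linux memfds and the golang.org/x/sys/unix package; this shows the general shape only, not wercam's actual protocol or message format):

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    const (
        width, height = 2048, 1080
        frameSize     = width * height * 4 // 32bpp
    )

    func main() {
        // create an anonymous shared-memory file and size it to one frame
        fd, err := unix.MemfdCreate("frame", 0)
        if err != nil {
            panic(err)
        }
        if err := unix.Ftruncate(fd, frameSize); err != nil {
            panic(err)
        }

        // map it; the display server maps the same fd after receiving it over a
        // unix socket with SCM_RIGHTS, so no pixel bytes ever cross the socket
        buf, err := unix.Mmap(fd, 0, frameSize, unix.PROT_READ|unix.PROT_WRITE, unix.MAP_SHARED)
        if err != nil {
            panic(err)
        }
        defer unix.Munmap(buf)

        // draw directly into the mapping, then hand the fd (plus width, height,
        // stride) to the server and stop touching the buffer until it comes back
        for i := 0; i < frameSize; i += 4 {
            buf[i], buf[i+1], buf[i+2], buf[i+3] = 0, 0, 255, 255
        }
        fmt.Println("frame ready; would pass fd", fd, "to the display server")
    }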

— ⁂ —

as for moving to argentina, it's a good idea to have officially certified copies, made within the last year, of all your immigration-relevant documents; apostilles for those documents; a lot of money, ideally in bitcoin or 100-us-dollar bills (smaller bills and the old bills with a smaller benjamin franklin face trade at a discount, and bitcoin trades at a wide spread); fluency in spanish; a small number of easily salable electronic gadgets that you nevertheless actually use, such as macbooks, iphones, and recent phones or tablets from xiaomi, samsung, or motorola (compatible with the frequency bands we use here!), along with powerbanks and bluetooth earbuds; and enough of your savings to live on the form of 18-karat gold jewelry worn under your clothes

you can't legally bring pepper spray on the plane, but you should probably buy some as soon as you arrive. expect to get robbed about once a week during the first part of your stay, including from your checked luggage before you arrive, so don't be too attached to your possessions

withdrawing money from a dollar bank account at argentine atms does work but you only get about 45% of the money you withdraw thanks to the fake exchange rate; some expats have reported success in sending themselves money with western union, which doesn't use the fake exchange rate, but only scales up to about 200 dollars at a time, and may trigger investigations by the tax authorities

because voip on cell data is illegal, voip accounts from companies with a nexus in argentina will not work, at least on cellphone data (which is quite cheap, but i think signing up for a cellphone line will require an argentine citizen or permanent resident to vouch for you). this means google fi doesn't work at all, but for example voip.ms works fine with sipphone, and so does jitsi. also most cafes, restaurants, hotels, etc., have free wifi for customers, secured with a wpa psk key that changes every year or two

once you have a place to live that isn't a hotel or hostel, leave your passport there so that if you get robbed at least you won't lose your passport. unless armed robbers escort you at knifepoint to your house and demand entry in order to loot your house, which is a thing that happened to a couple of friends of mine. still you can probably hide your passport somewhere that they won't find it, and tell them that you lost it and are waiting for a replacement if they ask. a usa driver's license is generally enough to satisfy cops who demand to see your papers, and in 17 years that has happened to me only once, while i've been robbed on the street several times

don't expect to get a job! we're in the middle of the worst economic crisis we've had since 02001, so you'll probably have to live off your savings (or earnings from working for overseas clients) indefinitely; also keep in mind that living in the middle of an economic crisis can be depressing and anxiety-inducing, which can exacerbate any mental health problems you may have

stay out of the villas and la boca


with respect to the memory bandwidth thing, typically with ddr4 ram you get 30 or 40 gigabytes a second, jason cook tells me. a single main memory copy at 530 megabytes a second eats up one of those 30 or 40. if it's 4k (3840x2160), four times that, 10% of the computer, or 20% at 120 hertz. if you go from one copy per frame to two copies per frame, instead of using 20% of the computer's memory bandwidth just to update the screen, you're using 40%. or 50% if you are also writing that memory before it goes through the two copies

if you're trying to do something else on the computer, even on a different core, that's bottlenecked on memory bandwidth (as opposed to cpu or i/o or something) that's like using 50% of the computer, which means the whole computer is effectively half as fast

i think it's worth a significant amount of complexity to get your display system to suck up 30% of your computer instead of 50%, even when you're doing something that updates the full screen every frame, such as smooth scrolling, a 3d fps, or watching a movie, and that's why i think it's important to try to minimize copies in the pixel path

also keep in mind that many people dual-wield monitors these days, and 3840x2160, though common, is not as big as they get


I vouched a functionof's comment.


yay thanks!


Linux isn't quite the same way, no. Though over the years, it is true that Linux has incorporated various Plan9 concepts (Plan9 is the progenitor of Inferno, roughly).

In Linux, you often use sockets to communicate over the network, perhaps with sendmsg[1] to send data.

On Inferno (as I understand it), everything is done via the standard filesystem operations of open/read/write/close. And that includes everything on the system - printers, mouse+keyboard, even the entire windowing system.

[1] https://linux.die.net/man/2/sendmsg


It was. Now we have systemd, and there are magical kernel subsystems which you don't always see anywhere represented as a file.


It isn't. A network interface on linux is a special creature, traditionally named something like eth0, and it doesn't appear in a filesystem. You have to use tools like ip to interact with it.


> a special creature, traditionally named something like eth0, and it doesn't appear in a filesystem.

/dev/eth0 ?


If you have one that corresponds to each of the network devices that show up in ip, that is unusual in my ~15y of Linux experience. The other half of the argument above is that even when such devices exist there is a lot of configuration for the device that uses ioctl to put it in the correct mode. Very little of the code treats that device as something to read from or write to.


   $ ip addr show eth0
   2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
      link/ether 00:15:5d:23:d8:89 brd ff:ff:ff:ff:ff:ff
      inet 172.27.50.201/20 brd 172.27.63.255 scope global eth0
         valid_lft forever preferred_lft forever
      inet6 fe80::215:5dff:fe23:d889/64 scope link
         valid_lft forever preferred_lft forever
   $ ls /dev/eth0
   ls: cannot access '/dev/eth0': No such file or directory
And besides, what would it even mean for eth0 to "be" a file? What happens when you try to read it, what happens when you write to it?


In Inferno (and Plan 9) it appears that network devices are filesystems -- e.g. there's a /net/ether0 directory which contains files with access semantics corresponding to various configuration and throughput tasks. I've read that it's even possible to 'mount' a remote network device on top of a local one, effectively tunneling traffic through the remote system. It's an interesting concept but I'm not sure of its utility, having never really used it.


Are there, or have there been, devices where one might have seen Inferno running 'in the wild' e.g. digital signs, PoS terminals etc.?


There was a land-line telephone with a built-in screen that ran Inferno. Unfortunately it's very difficult to find any screenshots of the phone running Inferno; I'm not even sure they really sold them to the public, but a guy got a couple thousand for cheap and resold them as Linux devices:

https://web.archive.org/web/20070311234222/http://tuxscreen....


> where everything is a file.

That didn't really age too well for Unix. I wonder if it is different for a distributed OS.


It feels simple and elegant at first. It is too simple though. GPUs, real time audio, Networking protocols, etc. It requires more than a simple file system API to handle those effectively.


I can't agree with you. I've done a lot of real time stuff and networking stuff as well as graphical user interfaces on a system that was based on the file metaphor and if anything it made things far easier. If only because you don't have to think about it, there is only the one way to do it.

Similar to how HTTP is just a transport protocol, it's what you transport and how that is interpreted that matters. By restricting the number and kind of endpoints that you allow a driver or virtual device to have you ensure that all tooling is instantly composable, which is a much bigger advantage than the ones that you get from 'more effective APIs', which always turn into a giant salad of RPC. Think protocols, not functions and you're well under way to seeing why there is a lot of power to be found there that we've thrown out in the name of a couple of percent of efficiency.


plan9/9front does that just fine.


Can someone explain how this could benefit system design or from the viewpoint of a programmer?


It's interesting as a historical study. Plan 9 is a direct successor to Unix, made by the same people who would go on to invent the Go programming language. The same philosophy behind Go and Unix is woven into Inferno/Plan 9.


You can see echoes of Plan9/Inferno in e.g. the go-lang Dial() call.

Plan9 / Inferno were also design descendants of System V Streams.

https://en.wikipedia.org/wiki/STREAMS


Check the "introduction" paper about this family of OSes [1]; it provides a good introduction to the how and why.

My two cents is that composing applications at the FS level is more expressive than Files/Sockets/HTTP requests.

Gluing up different pieces of software becomes easier, as most of the time it becomes just a matter of mounting a filesystem, and you don't need to worry about whether it is provided by a local or remote process.

[1] - http://doc.cat-v.org/plan_9/4th_edition/papers/9


In general, a simple interface is easy for the programmer to understand, and provides less opportunity for system bugs and strange interactions.


The Dante references are all over this. Inferno. Dis. Vita Nuova. Is there a story here?


Good catch

> “People often ask where the names Plan 9, Inferno, and Vita Nuova originated. Allegedly, Rob Pike was reading Dante’s Divine Comedy when the Computing Science Research Group at Bell Labs was working on Inferno. Inferno is named after the first book of the Divine Comedy, as are many of its components, including Dis, Styx and Limbo. The company name Vita Nuova continues the association with Dante: his first work, a book of poetry about his childhood sweetheart Beatrice, was called La Vita Nuova. The literal translation of Vita Nuova is ‘New Life,’ which in the circumstances is surprisingly prophetic. Plan 9 is named after the famous Ed Wood movie Plan 9 from Outer Space. There are no other connections except that the striking artwork for the products is a retro, 60s SciFi image modeled on the Plan 9 movie poster.’

https://dantetoday.krieger.jhu.edu/2009/03/08/vita-nuova-and...


The Vita Nuova website looks the same as it does 20 years ago. Does anybody know who is paying for them/using Inferno in production?


What's the use case for this, exactly?


I used to have Inferno running on a workstation at work, and another one at home. Each was hosted on Linux (though Inferno can run natively on hardware as well).

One can build a tiny grid with encrypted communications to share resources using Inferno operations to glue it all together.

Nowadays 9P (known as Styx on Inferno) is used in the Linux kernel, as part of WSL, in QEMU, and in various other places to share resources and files on a network.

It's a really simple protocol. I started working on an implementation in Swift, and should probably finish it someday.


Soft real time, clusters. A bit like the same space that you would aim Erlang/Elixir at.


I had a project with this back in the 2001-2003 era or so and at that time the browser could still do the "modern web" such as it was. It should never have failed like Lisp Machines, Smalltalk, Forth environments, Pascal, and the rest. The nice thing is the web is so borked that you could easily replace it and some entity will. My guess is China.



