Just curious how hard it would be to forgo POSIX entirely if you were building an OS. I know TempleOS is entirely from scratch. I'd like to implement a small LISP like SectorLISP [1] (see yesterday's HN posts too). I don't know much about building my own OS, so I'd like to start with something like MenuetOS (my first PL was asm), SerenityOS, TempleOS, or this one. I'd like it to be completely an 'island', i.e. POSIX not a requirement. I want to use it to hack on in isolation, without any easy copy/paste shortcuts. I know Mezzano exists, and it has booted on bare metal, but I would like to start with the OSes above, implement my own LISP, and go from there.
Any other OS recommendations based on my ignorant, but wishful, reqs above? I realize there are some others in Rust too. Thanks!
Someone making a new OS should define a completely new system call interface: it is likely that a better interface can be conceived now than 50 years ago, and if it were not different there would be no reason to make a new OS instead of modifying an existing one.
Nevertheless, the first thing after defining the new OS interface should be writing a POSIX API translation layer, so that the huge number of already existing programs can be used without modification.
Writing a new OS is enough work on its own; nobody would have time to also write file systems, compilers, a shell, a text editor, an Internet browser and so on.
After having a usable environment, one can write whatever new programs are desired using the new native OS interface, but it is not possible to replace everything at the same time.
Besides the POSIX translation layer, which can be written by taking one of the standard C libraries as a starting point and replacing its system calls with calls into the translation layer (a rough sketch of that approach follows below), some method must also be found for reusing device drivers written for other operating systems, e.g. for Linux or for one of the *BSDs.
Nobody would have time to also write all the needed device drivers. So some translation layer must exist for device drivers too, perhaps by running them in a virtual machine.
As with user applications, a driver of special interest should be rewritten for the new OS, but rewriting every driver that could be needed would take years, so it is important to implement a way to reuse the existing drivers.
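To make the libc approach above concrete, here is a minimal sketch of what replacing the system-call layer under a standard C library might look like. The native_handle_open / native_stream_read calls are invented names standing in for whatever the hypothetical new OS would actually expose; the real work is in mapping POSIX semantics (flags, errno, fd numbering) onto them.

    /* Hypothetical sketch: POSIX wrappers in the ported C library forward to
       an invented native interface. Not a real API -- the names below are
       placeholders for whatever the new OS provides. */
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/types.h>

    extern long native_handle_open(const char *path, unsigned long caps);    /* invented */
    extern long native_stream_read(long handle, void *buf, unsigned long n); /* invented */

    int open(const char *path, int flags, ...)
    {
        /* translate POSIX open flags into native capability bits */
        unsigned long caps = (flags & (O_WRONLY | O_RDWR)) ? 2UL : 1UL;
        long h = native_handle_open(path, caps);
        if (h < 0) {
            errno = EIO;      /* map native error codes onto errno values */
            return -1;
        }
        return (int)h;        /* reuse the native handle as the POSIX fd */
    }

    ssize_t read(int fd, void *buf, size_t count)
    {
        long n = native_stream_read(fd, buf, count);
        if (n < 0) {
            errno = EIO;
            return -1;
        }
        return (ssize_t)n;
    }

The same pattern repeats for write, close, mmap and so on; programs linked against this libc never see the native interface directly.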
> Writing a new OS is enough work on its own; nobody would have time to also write file systems, compilers, a shell, a text editor, an Internet browser and so on.
> So some translation layer must exist for device drivers too, perhaps by running them in a virtual machine.
> ... but rewriting every driver that could be needed would take years, so it is important to implement a way to reuse the existing drivers.
I'd think most people making a hobby OS specifically want to do these things.
I also think most don't care about wide hardware compatibility.
Even if you do not want the new OS to run on anything but your own laptop, that still needs a huge number of drivers: PCIe, USB, Ethernet, WiFi, Bluetooth, TCP/IP, NVMe, keyboard / mouse / trackpad, sound, GPU, sensors, power management, ACPI and so on.
The volume of work needed to rewrite all of these is many times larger than writing the entire core of a new OS from scratch.
Rewriting them requires studying a huge amount of documentation and experimenting with the cases that are not clearly documented. Most of this work is unlikely to hold much interest for someone who wants to create an original OS, so avoiding most of it is the more likely path to a usable OS.
If you do not want those features, that means the OS is not intended to be used on a personal computer, but only on an embedded computer.
For dedicated embedded computers, the purpose of an OS becomes completely different and compatibility with anything no longer matters.
Not only can personal computers not be used without a huge number of device drivers; even for a very simple server, e.g. an Internet gateway/router/firewall or a NAS, the work of writing the device drivers, the file systems and the networking stack would be much more than the work of writing the core of a new OS.
Only for embedded computers can the work needed for device drivers be smaller than the work needed for the base operating system.
You hit it on the head: the point is purely for fun and learning. I want to learn as much as I can by rebuilding apps from scratch, etc. I had my first computer in 1977/78, a Commodore PET 2001, followed by a VIC-20, so if I can duplicate the bare system I had then, plus a PL to create apps, I am back where I started - having fun with computers!
> Nevertheless, the first thing after defining the new OS interface should be writing a POSIX API translation layer, so that the huge number of already existing programs can be used without modification.
I disagree. POSIX sucks. Build a hypervisor so people can run their applications in a VM and insist that native programs use the non-garbage API. It's the only way you'll ever unshackle yourself.
The point of writing a new OS is to use it; otherwise you do not get any of its supposed benefits.
If you do all your normal work in a virtual machine, what will you use your new OS for?
Writing any useful application program in a complete void, without standard libraries and utilities, would take a very long time, and unless it is something extremely self-contained it would not be as useful as when it can exchange data with other programs.
It is much easier to first write a new foundation, i.e. the new OS, with whatever novel ideas you might have for managing memory, threads, security and time; then start to use the foundation with the existing programs, hopefully already getting some benefit from whatever you thought you could improve in an OS (e.g. your new OS might be impossible to crash, which is not true of any of the popular OSes); and then replace one by one the programs that happen to be important for you and that can benefit the most from whatever is different in the new OS.
For the vast majority of programs that you might need only from time to time, it is likely never worthwhile to rewrite them for the native interfaces of the new OS, but with a POSIX layer you will still be able to use them directly, without complicated ways to share the file systems, the clipboard, the displays and whatever else is needed with programs running in a virtual machine.
Implementing good methods for seamless data sharing between two virtual machines, so that programs for the new OS can be used together with programs running in e.g. a Linux VM, is significantly more difficult than implementing a POSIX translation layer that lets the C standard library and similar libraries work on the new OS the same way they do on a POSIX system.
> If you do all your normal work in a virtual machine, what will you use your new OS for?
You write replacements for, or properly port, your everyday workflow to the new OS. You already wrote a whole new OS for some reason even though there are hundreds to choose from; presumably there is value in replacing your tools to take advantage of whatever you put all that effort into, or else why bother? The VM is for things you haven't ported yet, or for less important workflows.
Besides, people run Windows and do all their work in WSL all the time.
> Writing any useful application program in a complete void, without standard libraries and utilities, would take a very long time [...]
So does writing an OS, and you've already decided that was worth the effort, yet you balk at rewriting some command-line utilities[0] and a standard library? Please.
> It is much easier to first write a new foundation [...] then start to use the foundation with the existing programs [...] and then replace one by one the programs that happen to be important for you and that can benefit the most from whatever is different in the new OS.
Any reason not to just do that with a VM? Forcing POSIX compatibility into your OS is going to constrain your choices (not to mention your thinking) to the point that you'd probably be better off just modifying an existing OS anyway.
> For the vast majority of programs that you might need only from time to time, it is likely never worthwhile to rewrite them for the native interfaces of the new OS, but with a POSIX layer you will still be able to use them directly, without complicated ways to share the file systems, the clipboard, the displays and whatever else is needed with programs running in a virtual machine.
A: it isn't that complicated. B: if you can use them so directly without having to deal with the separation provided by a VM, it's likely you didn't improve their security situation anyway. Again, why not just modify an existing POSIX OS in this case?
> Implementing good methods for seamless data sharing between two virtual machines, so that programs for the new OS can be used together with programs running in e.g. a Linux VM, is significantly more difficult than implementing a POSIX translation layer that lets the C standard library and similar libraries work on the new OS the same way they do on a POSIX system.
I doubt it is as hard as, say, writing a brand new OS that's actually in some way useful. Why go through the effort of the latter only to throw away a bunch of potential by shackling yourself with a set of barely-followed standards from the 1970s?
[0] POSIX does nothing to help you with anything GUI.
> Someone making a new OS should define a completely new system call interface: it is likely that a better interface can be conceived now than 50 years ago, and if it were not different there would be no reason to make a new OS instead of modifying an existing one.
For an example of how things like this can be done incrementally, you can look at io_uring on Linux.
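io_uring is a good example here because it adds a genuinely different submission/completion-queue model alongside the classic read()/write() syscalls: existing programs keep working unchanged, while new ones can opt in. A minimal sketch using the liburing helper library (file name chosen arbitrarily, error handling omitted):

    /* Minimal io_uring sketch using liburing: queue one read, submit it,
       wait for its completion. Error handling omitted for brevity. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <liburing.h>

    int main(void)
    {
        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);                 /* 8-entry rings */

        int fd = open("/etc/hostname", O_RDONLY);         /* arbitrary example file */
        char buf[256];

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0); /* describe the read */
        io_uring_submit(&ring);                           /* one syscall submits it */

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);                   /* wait for the completion */
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);                    /* mark the CQE as consumed */

        io_uring_queue_exit(&ring);
        return 0;
    }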
Redox is one I've been following from afar: Rust, not POSIX, microkernel, s/everything is a file/everything is a url/.
It looks pretty cool, although the URL thing has yet to prove its utility. They seem to be playing around a bit with the protocol component (net, disk, etc.), but it's unclear what this adds over just using paths. Maybe if they used the protocol to describe the encoding of the data, it would add something?
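For what it's worth, the contrast being discussed is roughly the one below. This is only an illustration of the idea in familiar C terms; the scheme strings are made up and are not the exact Redox syntax, and Redox exposes this through its own Rust interfaces rather than libc open().

    /* Illustration only: path-based vs scheme-based resource naming.
       The scheme-style strings are invented for this example and are not
       the exact Redox syntax. */
    #include <fcntl.h>

    int main(void)
    {
        /* "everything is a file": the path only says where in one global tree */
        int log = open("/var/log/messages", O_RDONLY);

        /* "everything is a URL": the scheme names which driver/protocol
           answers the open, e.g. a file scheme vs a network scheme */
        int cfg  = open("file:/etc/hostname", O_RDONLY);
        int sock = open("tcp:203.0.113.1/80", O_RDWR);

        (void)log; (void)cfg; (void)sock;
        return 0;
    }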
Are you? I'm pretty sure I can run a single Linux and point a single gdb at it[0] and debug it in a single memory space; I don't think you can do that with a microkernel.
I'm very confused by this comment. There are a ton of other things you need to implement if you want to have desktop applications. POSIX does not specify any APIs for graphical applications. You might be thinking of something else.
If you want to support the lion's share of desktop applications, it would actually be better to implement the Win32 API...
Sorry, I meant software in general but wrote "desktop applications" instead. Anyway, the sentence is still valid: even if you have to implement other things such as the graphical interface, the POSIX-compliant code won't need modification.
If you're taking an app built for Linux or GNU or BSD, then it probably will need modification, as those systems have various extensions on top of POSIX.
As a real-time OS it is known for deterministic response times. If it were exceptionally fast (and licenses cheap enough), you'd see hosts in the TOP500 using it.
I agree 100%. QNX is lurking in products you may use. It was the OS for a show control system used throughout the entertainment industry, where real-time behavior is necessary for the safety of the devices it controls (which also have hardware safeties at a lower level). I would drop into the QNX terminal for certain tasks. Unfortunately, you used to be able to download the show control software and play with it, but it has since been bought by a company that sells it with the equipment they rent, so you need to buy training and it is behind their wall now. Not QNX, but the show control software that runs on QNX.
I'd like to see more information about that. I remember that Penguin Computing offered something like that some 15 years ago, but I don't know where it was deployed or whether it still is. Cray and IBM also had such a concept for their superclusters in the past, but are they still using it? The one HPC environment I worked on (a major car manufacturer in Europe) used plain RH Linux on all nodes as recently as three years ago.
The current #1 (Fugaku) uses IHK/McKernel as the kernel for the actual payload. The previous #1 (IBM Summit) seems to use RH Linux, though. Perhaps, since the most performance-critical part is run by and within the GPGPU(s), the actual OS doesn't matter all that much (for performance -- it matters of course for the programmer's comfort/efficiency).
There used to be a lot of "special microkernel on compute RPCing to Linux on I/O" on Crays and the like. Hard to say how prevalent it is now, and most annoyingly I can't recall the names. (Charon?)
Because existing desktop applications can be ported to ToaruOS.
> why not a safer microkernel, keeping everything in userspace?
This is a design choice; microkernels aren't necessarily better than hybrid kernels: they're slower, harder to debug, and process management can be complicated.