
Then why in the heck is he going for POSIX compatibility, when he can afford the luxury of not having to deal with blocking syscalls and all this crap? Much easier and safer multithreading. Also faster.

And while we're at it, why not a safer microkernel, keeping everything in userspace? Questions upon questions.




When I started the project a decade ago, my aim was specifically to understand POSIX/Unix and to "learn by doing", so of course there are some POSIX-like elements underpinning the design. Back before I decided that literally anything can be in scope and was using third-party components, this aided in porting software, as other commenters have pointed out. These days, it gives more purpose to things I build for the OS if they can also reasonably be built for Linux or macOS - things like my editor (which I wrote for the OS and now use as my daily driver in Linux), or my Python knock-off.

As for the microkernel bit, this might sound like circular reasoning, but I didn't go for a microkernel because no one really uses microkernels. It's not that I think microkernels are a bad idea; ToaruOS does push plenty of stuff into userspace. Rather, my main goal at the moment is to provide an educational resource that more accurately models the way "real" OSes work than the typical academic OS projects do.


>why in the heck is he going for POSIX compatibility

Because existing desktop applications can be ported to ToaruOS

>why not a safer microkernel, keeping everything in userspace?

This is a design choice. Microkernels aren't necessarily better than hybrid kernels: they're slower, harder to debug, and process management can be complicated


Just curious how hard it would be to forego POSIX entirely if you were building an OS. I know TempleOS is entirely from scratch. I'd like to implement a small LISP like SectorLISP [1] (see yesterday's HN posts too). I don't know much about building my own OS, so I'd like to start with something like MenuetOS (my first PL was asm), SerenityOS, TempleOS, or this one. I'd like it to be completely an 'island', i.e. POSIX not a requirement. I want to use it to hack on in isolation without any easy copy/paste shortcuts. I know Mezzano exists, and it has booted on bare metal, but I would like to start with the OSes above, implement my own LISP, and go from there.

Any other OS recommendations based on my ignorant, but wishful, reqs above? I realize there are some others in Rust too. Thanks!

[1] https://github.com/jart/sectorlisp


Someone who would make a new OS should define a completely new system call interface, as it is likely that a better interface can be conceived now than 50 years ago; and if it were not different, there would be no reason to make a new OS instead of modifying an existing one.

Nevertheless, the first thing after defining a new OS interface must be writing a POSIX API translation layer, to be able to use the huge number of already existing programs without modification.

Writing a new OS is enough work; nobody would have time to also write file systems, compilers, a shell, a text editor, an Internet browser and so on.

After having a usable environment, one can write whatever new program is desired, which would use the new native OS interface, but it would not be possible to replace everything at the same time.

Besides the POSIX translation layer, which can be written by taking one of the standard C libraries as a starting point and replacing its system calls with calls into the translation layer, some method must be found for reusing device drivers written for other operating systems, e.g. for Linux or one of the *BSD systems.
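
To make this concrete, here is a minimal sketch of what the entry points of such a layer could look like in C. Everything named native_* (and the nhandle_t type) is a hypothetical stand-in for whatever new interface the OS defines, not any real API:

    /* posix_shim.c - minimal sketch of a POSIX translation layer.
       The native_* calls and nhandle_t are invented for illustration;
       a real layer would live inside a ported C library, with its
       syscall stubs redirected like this. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/types.h>

    typedef long nhandle_t;                   /* hypothetical native handle */
    extern nhandle_t native_open(const char *path, unsigned caps);
    extern long      native_read(nhandle_t h, void *buf, size_t len);
    extern int       native_errno(long err);  /* map native errors to errno */

    #define MAX_FDS 256
    static nhandle_t fd_table[MAX_FDS];       /* POSIX fd -> native handle */

    /* POSIX open(2): translate flags, allocate the lowest free fd. */
    int open(const char *path, int flags, ...) {
        unsigned caps = (flags & O_RDWR) ? 3u : (flags & O_WRONLY) ? 2u : 1u;
        nhandle_t h = native_open(path, caps);
        if (h <= 0) { errno = native_errno(h); return -1; }  /* 0 marks a free slot */
        for (int fd = 0; fd < MAX_FDS; fd++)
            if (fd_table[fd] == 0) { fd_table[fd] = h; return fd; }
        errno = EMFILE;
        return -1;
    }

    /* POSIX read(2): forward to whatever native handle backs the fd. */
    ssize_t read(int fd, void *buf, size_t len) {
        if (fd < 0 || fd >= MAX_FDS || fd_table[fd] == 0) {
            errno = EBADF;
            return -1;
        }
        long n = native_read(fd_table[fd], buf, len);
        if (n < 0) { errno = native_errno(n); return -1; }
        return n;
    }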

Nobody would have time to also write all the needed device drivers. So there must also exist some translation layer for device drivers, maybe by running them in a virtual machine.

As with user applications, if there is special interest in a certain device driver, it should be rewritten for the new OS, but rewriting all the device drivers that could be needed would take years, so it is important to implement a way to reuse the existing device drivers.


> Writing a new OS is enough work; nobody would have time to also write file systems, compilers, a shell, a text editor, an Internet browser and so on.

> So there must also exist some translation layer for device drivers, maybe by running them in a virtual machine.

> ... but rewriting all the device drivers that could be needed would take years, so it is important to implement a way to reuse the existing device drivers.

I'd think most people making a hobby OS specifically want to do these things.

I also think most don't care about wide hardware compatibility.


Even if you do not want the new OS to run on anything but your own laptop, that still needs a huge number of drivers: for PCIe, USB, Ethernet, WiFi, Bluetooth, TCP/IP, NVMe, keyboard / mouse / trackpad, sound, GPU, sensors, power management, ACPI and so on.

The volume of work for rewriting all of these is many times larger than writing the core of a new OS from scratch.

Rewriting them requires studying a huge amount of documentation and making experiments for the cases that are not clear. Most of this work is unlikely to hold much interest for someone who wants to create an original OS, so avoiding most of it is the more likely path to a usable OS.


I think you're still missing the point here.

Not every hobby OS needs or even wants networking, gpu support, even storage I/O, etc. See TempleOS.

The goal typically isn't to make a fully featured OS.


If you do not want those features, the OS is not intended for use on a personal computer, but only on an embedded computer.

For dedicated embedded computers, the purpose of an OS becomes completely different and compatibility with anything does not matter any more.

Not only can personal computers not be used without a huge number of device drivers; even for a very simple server, e.g. an Internet gateway/router/firewall or a NAS server, the amount of work for writing the device drivers, the file systems and the networking stack would be much greater than for writing the core of a new OS.

Only for embedded computers can the work needed for device drivers be smaller than the work for the base operating system.


You hit it on the head: The point is purely for fun and learning. I want to learn as much as I can by rebuilding apps from scratch, etc. I had my first computer in 1977/78, a Commodore PET 2001, followed by a Vic-20, so if I can duplicate the bare system I had then and a PL to create apps, I am back where I started: having fun with computers!


> Nevertheless, the first thing after defining a new OS interface must be writing a POSIX API translation layer, to be able to use the huge number of already existing programs without modification.

I disagree. POSIX sucks. Build a hypervisor so people can run their applications in a VM and insist that native programs use the non-garbage API. It's the only way you'll ever unshackle yourself.


The point of writing a new OS is to use it; otherwise you do not get any of its supposed benefits.

If you do all your normal work in a virtual machine, what will you use your new OS for?

Writing any useful application program in a complete void, without standard libraries and utilities, would take a very long time, and unless it is something extremely self-contained it would not be as useful as when it can exchange data with other programs.

It is much easier to first write a new foundation, i.e. the new OS, with whatever novel ideas you might have for managing memory, threads, security and time. Then start to use the foundation with the existing programs, hopefully already getting some benefit from whatever you thought you could improve in an OS (e.g. your new OS might be impossible to crash, which is not the case with any of the popular OSes), and then replace one by one the programs that happen to be important for you and that can benefit the most from whatever is different in the new OS.

For the vast majority of programs that you might need from time to time it is likely that it would never be worthwhile to rewrite them to use the native interfaces of the new OS, but nonetheless you will be able to use them directly, without having to use complicated ways to share the file systems, the clipboard, the displays and whatever else is needed with the programs run in a virtual machine.

Implementing some good methods for seamless sharing of data between 2 virtual machines, to be able to use together some programs for the new OS with some programs run e.g. in a Linux VM, is significantly more difficult than implementing a POSIX translation layer enabling the C standard library and other similar libraries to work on the new OS in the same way as on a POSIX system.


> If you do all your normal work in a virtual machine, what will you use your new OS for?

You write replacements for, or properly port, your every-day workflow to the new OS. You already wrote a whole new OS for some reason, even though there are hundreds to choose from; presumably there is value in replacing your tools to take advantage of whatever you put all that effort into, or else why bother? The VM is for things you haven't ported yet or for less important workflows.

Besides, people run Windows and do all their work in WSL all the time.

> Writing any useful application program in a complete void, without standard libraries and utilities, would take a very long time [...]

So does writing an OS and you've already decided that was worth the effort, yet you balk at rewriting some commandline utilities[0] and a standard library? Please.

> It is much easier to first write a new foundation [...] then start to use the foundation with the existing programs [...] and then replace one by one the programs that happen to be important for you and that can benefit the most from whatever is different in the new OS.

Any reason not to just do that with a VM? Forcing POSIX compatibility into your OS is going to constrain your choices (not to mention your thinking) to the point that you'd probably be better off just modifying an existing OS anyway.

> For the vast majority of programs that you might need from time to time it is likely that it would never be worthwhile to rewrite them to use the native interfaces of the new OS, but nonetheless you will be able to use them directly, without having to use complicated ways to share the file systems, the clipboard, the displays and whatever else is needed with the programs run in a virtual machine.

A: it isn't that complicated. B: if you can use them so directly without having to deal with the separation provided by a VM, it's likely you didn't improve their security situation anyway. Again, why not just modify an existing POSIX OS in this case?

> Implementing some good methods for seamless sharing of data between 2 virtual machines, to be able to use together some programs for the new OS with some programs run e.g. in a Linux VM, is significantly more difficult than implementing a POSIX translation layer enabling the C standard library and other similar libraries to work on the new OS in the same way as on a POSIX system.

I doubt it is as hard as, say, writing a brand new OS that's actually in some way useful. Why go through the effort of the latter only to throw away a bunch of potential by shackling yourself with a set of barely-followed standards from the 1970s?

[0] POSIX does nothing to help you with anything GUI.


> Someone who would make a new OS should define a completely new system call interface, as it is likely that a better interface can be conceived now than 50 years ago; and if it were not different, there would be no reason to make a new OS instead of modifying an existing one.

For an example of how things like this can be done incrementally, you can look at io_uring on Linux.
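
A rough sketch of the flavor, using liburing (the path and buffer size are arbitrary, error handling abbreviated): the file descriptor is still the classic Unix one, but the I/O itself is submitted and completed through the new ring interface instead of a read(2) syscall.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <liburing.h>

    int main(void) {
        struct io_uring ring;
        char buf[256];

        /* One ring replaces many per-call syscalls: requests go into a
           submission queue, results come back on a completion queue. */
        if (io_uring_queue_init(8, &ring, 0) < 0)
            return 1;

        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0)
            return 1;

        /* Queue an asynchronous read on the ordinary file descriptor. */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf) - 1, 0);
        io_uring_submit(&ring);

        /* Reap the completion; res is the byte count or a negative errno. */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        if (cqe->res >= 0) {
            buf[cqe->res] = '\0';
            printf("%s", buf);
        }
        io_uring_cqe_seen(&ring, cqe);

        close(fd);
        io_uring_queue_exit(&ring);
        return 0;
    }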


At that point, why not just contribute to Linux?


Redox is one I've been following from afar: Rust, not POSIX, microkernel, s/everything is a file/everything is a URL/

It looks pretty cool, although the URL thing seems yet to prove its utility. They seem to be playing around a bit with using the protocol component (net, disk, etc), but it's unclear what this adds over just using paths. Although maybe if they used the protocol to describe the encoding of the data, it would add something?
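
For what it's worth, my understanding of the pitch (purely a conceptual sketch below, not Redox's actual API) is that the scheme prefix selects which userspace server handles the open, so the same file-style calls can reach very different backends:

    /* Conceptual sketch of "everything is a URL" - NOT Redox's actual API.
       The scheme prefix names the daemon serving the resource, so open()
       can hand back handles to very different kinds of objects. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* A plain file, served by a filesystem daemon. */
        int f = open("file:/etc/hostname", O_RDONLY);

        /* A TCP connection, served by a network daemon (made-up address). */
        int s = open("tcp:192.0.2.1:80", O_RDWR);

        char buf[512];
        if (f >= 0) { read(f, buf, sizeof buf); close(f); }
        if (s >= 0) {
            write(s, "GET / HTTP/1.0\r\n\r\n", 18);
            read(s, buf, sizeof buf);
            close(s);
        }
        return 0;
    }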


> Microkernels aren't necessarily better than hybrid kernels: they're slower, harder to debug, and process management can be complicated

I was basically on board, but how are they harder to debug? I'd think being able to run components in userspace would make debugging way easier.


You are now debugging a distributed system.


Oh, good point; I was thinking at the component level


Fun fact: you already are in Linux. Being a monolith doesn't change the nature of the problem.


Are you? I'm pretty sure I can run a single Linux and point a single gdb at it[0] and debug it in a single memory space; I don't think you can do that with a microkernel.

[0] possibly resorting to UML, but still


I'm very confused by this comment. There are a ton of other things you need to implement if you want to have desktop applications. POSIX does not specify any APIs for graphical applications. You might be thinking of something else.

If you want to support the lion's share of desktop applications, it would actually be better to implement the Win32 API...


Sorry, I meant software in general but wrote "desktop applications" instead. Anyway, the sentence is still valid: even if you have to implement other things such as the graphical interface, the POSIX-compliant code won't need modification


If you're taking an app built for Linux or GNU or BSD, then it probably will need modification, as those systems have various extensions on top of POSIX.
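
A classic example is event polling. Linux's epoll is an extension that POSIX does not specify, so the first helper below needs porting work on a strictly POSIX system, while the poll(2) version is portable (a minimal sketch of the same wait written both ways):

    /* Linux-only: epoll is a Linux extension, not part of POSIX. */
    #include <sys/epoll.h>
    #include <unistd.h>

    int wait_readable_epoll(int fd) {
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
        struct epoll_event out;
        int n = epoll_wait(ep, &out, 1, -1);  /* block until fd is readable */
        close(ep);
        return n;
    }

    /* Portable: poll(2) is specified by POSIX. */
    #include <poll.h>

    int wait_readable_poll(int fd) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        return poll(&pfd, 1, -1);             /* block until fd is readable */
    }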


Isn't QNX a microkernel? I remember it being known for being quite fast?


No, it's more of a nanokernel. It's very fast.

Full disclosure: I maintain the QNX toolchain.


As a real-time OS it is known for deterministic response times. If it were exceptionally fast (and licenses cheap enough), you'd see hosts in the TOP500 using it.


I agree 100%. QNX is lurking in products you may use. It was the OS for the show control system that is used throughout entertainment where real-time is necessary for the safety of the devices it controls, which also have hardware safety at the lower level. I would drop into the QNX terminal for certain tasks. Unfortunately, you used to be able to download the show control software and play with it, but it has since been bought by a company that sells it with the equipment they rent, so you need to buy training and it is behind their wall now. Not QNX, but the show control software that runs on QNX.


TOP500 is chock full of microkernels though, even if the "I/O nodes" run Linux


I'd like to see more information about that. I remember that Penguin Computing offered something like that some 15 years ago, but I don't know where it was deployed or whether it still is. Cray and IBM also had such a concept for their superclusters in the past, but are they still using it? The one HPC environment I worked on (a major car manufacturer in Europe) used plain RH Linux on all nodes as recently as three years ago.

The current #1 (Fugaku) uses IHK/McKernel as the kernel for the actual payload. The previous #1 (IBM Summit) seems to use RH Linux, though. Perhaps, since the most performance-critical part is run by and within the GPGPU(s), the actual OS doesn't matter all that much (for performance; it matters of course for programmer comfort/efficiency).


There used to be a lot of "special microkernel on the compute nodes, RPCing to Linux on the I/O nodes" on Crays and the like. Hard to say how prevalent it is now, and most annoyingly I can't recall the names. (Charon?)


You can do all of this in your own hobby project.


Shallow dismissal of other people's work.


[flagged]


Fine, I will provide the link again: https://news.ycombinator.com/newsguidelines.html


You need POSIX to compile nearly any program


True, and as another commenter hinted at, you're free to do your own hobby project however you please; there's no wrong way to do it if you find it fulfilling.

But at the same time, it does make me sad that most hobby OSes end up seeking POSIX compatibility, because that means being destined to essentially either be another unix-variant or develop a unix-variant inside some subsystem of your OS.

Yes, being unix-like means you gain access to a trove of software and libraries that makes porting applications much easier, but it also limits the potential to be truly different and experiment, as your end result will look like "yet another unix" with misc. improvements.

Since I think the enjoyment of building something like this comes from the satisfaction of building an OS from the ground up, I don't think it matters, but it would be cool to see more hobby OSes try more exotic ideas and runtimes.


Fully agree. We need more TempleOSes!


For *nix maybe, but there are many other non-POSIX operating systems in use today.


So, a preliminary problem for many OS-from-scratch projects intending to create "New Different Great OS" will be to have a language implementation which does not expect POSIX.

It depends on the goal: whether you want to compile pre-existing software for it, or really "start anew".


That's his advantage. He doesn't want to; he'd rather write everything anew from scratch.

Also signals: with a microkernel you won't need signals, since kernel events can be delivered as ordinary messages.


And why in the heck didn't he write it in Rust!

...


At least he didn't write it in node.js /s

Other interesting alternatives to C or Rust might be V (vlang) or Zig.


For the V language there is Vinix [1]; for Zig there's only a kernel [2].

[1] http://vinix-os.org/

[2] https://github.com/jzck/kernel-zig


Well, that's already been done. https://node-os.com



