Reforming Unix (github.com/ericson2314)
158 points by Ericson2314 on Dec 25, 2023 | 94 comments



Wow, this is awesome.

I'm working on a design for a next-gen OS, and I already have some things that are in the article, like starting a blank process to be mutated before start [1], file descriptors as handles/capabilities, and such.

But there are some other things I don't have that I like.

For the bootstrap problem, I came to the same realization the author did: the better interface is generally the more powerful one:

> When requests and responses are paired 1-1, the asynchronous APIs offer perhaps performance improvements but no expressive power. When they are not 1-1, however, things get more interesting.

Then I realized that if an OS is designed right, it will generally have more expressive interfaces, and those expressive interfaces can then be used to implement the APIs from other operating systems.

So I intend to make a microkernel that implements the APIs for the popular operating systems. Then I will make a userspace driver shim that implements the Linux driver API.

Boom! Instant OS and driver availability. Done right, Linux drivers will work, and software from the major operating systems will work.

And then I will encourage people to use the better APIs when on my OS until everybody is on my OS.

Embrace, extend, extinguish, but in a good way! :)

Will it work? Oh, heavens no. But as some comments have said, experimentation is a good thing.

[1]: https://news.ycombinator.com/item?id=35681941


Microkernels + userspace drivers end up with too many context switches. This has been the main issue with microkernels. Then you add performance-sensitive services to the "microkernel" and it stops being micro! Note that KeyKOS has a microkernel architecture, but they called it a "nanokernel" to distinguish it from all the other fat "microkernels". They even prototyped a Unix service called "KeyNIX". See http://cap-lore.com/CapTheory/upenn/NanoKernel/NanoKernel.ht...


A 400 cycle IPC doesn't seem too bad. [1]

Also, I came up with an idea to get around that, something I call "hardware pipes." [2] [3]

The idea is that the OS can map the same page(s) into both processes, and one process can write into the memory to send a message while the other reads. Add another page for communication in the other direction. You would use something like futexes for synchronization.

Besides needing to wake up a process that went to sleep waiting, the OS should not have to be involved, so context switches are minimized at the cost of atomic instructions.

Of course, it would be even better if there was hardware support for such shared memory pipes, but I think it could be done with what we have now.
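
Here is a minimal sketch of the idea, assuming Linux futexes and a single producer and single consumer; names like hw_pipe and RING_SIZE are illustrative, not from my posts:

    // Sketch only: one page shared by two processes, used as a
    // single-producer/single-consumer byte ring. The fast path is pure
    // atomics; the futex syscalls only matter around sleeping.
    #include <stdatomic.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>
    
    #define RING_SIZE 4096
    
    struct hw_pipe {
        _Atomic uint32_t head;        /* next byte the reader consumes */
        _Atomic uint32_t tail;        /* next byte the writer fills    */
        uint8_t data[RING_SIZE];
    };
    
    static void futex_wait(void *addr, uint32_t expected) {
        syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
    }
    
    static void futex_wake(void *addr) {
        syscall(SYS_futex, addr, FUTEX_WAKE, 1, NULL, NULL, 0);
    }
    
    /* Writer side; the reader mirrors this with head and tail swapped. */
    void hw_pipe_put(struct hw_pipe *p, uint8_t byte) {
        uint32_t tail = atomic_load_explicit(&p->tail, memory_order_relaxed);
        uint32_t head;
        /* Ring full: sleep until the reader advances head and wakes us. */
        while (tail - (head = atomic_load_explicit(&p->head,
                                                   memory_order_acquire))
                == RING_SIZE)
            futex_wait(&p->head, head);
        p->data[tail % RING_SIZE] = byte;
        atomic_store_explicit(&p->tail, tail + 1, memory_order_release);
        futex_wake(&p->tail);         /* wakes the reader if it is asleep */
    }

A real version would batch bytes and track whether the other side is actually waiting, so the wake syscall disappears from the fast path entirely.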

[1]: https://sel4.systems/About/Performance/home.pml

[2]: https://gavinhoward.com/2020/07/testing-the-feasibility-of-h...

[3]: https://gavinhoward.com/2020/12/testing-the-feasibility-of-h...


You may still have N crossings: user process -> uK -> fileserver -> uK [-> generic Disk driver -> uK] -> specific device driver -> ... and concurrently some thread/process scheduling has to happen as there may be many such things in flight at the same time.

All this has been thought about for decades. I encourage you to read the KeyKOS-related papers on Norm Hardy's site (link in my earlier response). seL4 is of course a very good uK, but it still has not taken over the world (its niche is probably high-security environments).


> You may still have N crossings: user process -> uK -> fileserver -> uK [-> generic Disk driver -> uK] -> specific device driver -> ... and concurrently some thread/process scheduling has to happen as there may be many such things in flight at the same time.

Done right, a design such as mine should cut those crossings in half: user process -> fileserver [-> generic Disk driver] -> specific device driver -> ...

IMO, seL4 hasn't taken over for a few reasons, but the biggest is that it has a poor API.


As much as I want something to replace Linux, I think the big selling point is the bottom end: all the driver and community support. That said, even Microsoft gave up trying to emulate the Linux userspace.


bro is reinventing Windows NT/OS2


If at first you don’t succeed with a big company’s resources, try again as a hobbyist!


Definitely more NT, but he’ll hopefully make better use of the “personalities” than Microsoft ever did.

Also, NT is mostly just VMS. So much so that DEC threatened litigation, which resulted in Microsoft supporting Alpha for longer than they'd otherwise have cared to.


I have begun some work on Nix on Windows in part because I want to do some embrace/extend/minimize in the other direction.


I've wondered if process creation on Unix could be overhauled by extending some of the existing syscalls to operate on a separate process rather than the current one:

    // dup2() a file descriptor to the destination process
    int rdup2(int procfd, int oldfd, int newfd);
    
    // mmap() at the destination process address space
    void *rmmap(int procfd, void *addr, size_t length,
            int prot, int flags, int fd, off_t offset);
    
    // create a thread remotely
    int rpthread_create(int procfd, pthread_t *restrict thread,
            const pthread_attr_t *restrict attr,
            void *(*start_routine)(void *), void *restrict arg);
    ...
With this model, creating a new process would start by creating an empty, suspended one, then populating the various pieces with the returned process file descriptor:

* file descriptor table,

* address space,

* stack,

* threads...

Once everything's in place, the process would be unsuspended with one final syscall and start execution.
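
Sketched end to end, it might look like this; procfd_create, procfd_resume, and PROC_SUSPENDED are invented names for the missing pieces, so none of this compiles against a real kernel today:

    #include <pthread.h>
    #include <stddef.h>
    #include <sys/mman.h>
    
    /* Hypothetical flow: build a process piece by piece via its procfd. */
    int spawn(int exe_fd, size_t text_len) {
        int procfd = procfd_create(PROC_SUSPENDED);   /* empty + suspended */
    
        /* Populate the fd table: hand the child our stdin/stdout. */
        rdup2(procfd, 0, 0);
        rdup2(procfd, 1, 1);
    
        /* Build the address space: executable image plus a stack. */
        void *entry = rmmap(procfd, NULL, text_len, PROT_READ | PROT_EXEC,
                            MAP_PRIVATE, exe_fd, 0);
        rmmap(procfd, NULL, 1 << 20, PROT_READ | PROT_WRITE,
              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    
        /* Create the initial thread, then the one final "go" syscall. */
        pthread_t tid;
        rpthread_create(procfd, &tid, NULL,
                        (void *(*)(void *))entry, NULL);
        return procfd_resume(procfd);
    }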


You mean, like DuplicateHandle, MapViewOfFile2, and CreateThreadRemote from Win32 API? Well, to be fair, it has some merit: UNIX processes are possibly the only resources/kernel objects across any existing OS that can be created only by copying an existing one and then fiddling with the copy to make it look right. That's quirky.

And if we have "procfd"s, then the need for process 1 (and the rules about a parent process having to call wait() for its children) goes away, just like on Windows: if the parent P of a would-be zombie process Z closed the procfd of Z, then when Z terminates, the refcount of its process-table entry drops to zero and it gets cleaned up, freeing Z's globally visible PID along with Z's other resources like disk files and sockets, without P having to do anything.
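
For what it's worth, Linux already has a partial version of this: pidfd_open(2) gives you a process fd you can poll, and waitid() accepts it directly. A sketch, assuming a kernel >= 5.4 and a libc that exposes P_PIDFD:

    #define _GNU_SOURCE
    #include <poll.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>
    #include <unistd.h>
    
    /* Wait on a process via an fd instead of the wait()/PID dance. */
    int wait_via_pidfd(pid_t pid) {
        int pidfd = syscall(SYS_pidfd_open, pid, 0);
        if (pidfd < 0)
            return -1;
        struct pollfd pfd = { .fd = pidfd, .events = POLLIN };
        poll(&pfd, 1, -1);               /* readable once the process exits */
        siginfo_t si;
        waitid(P_PIDFD, pidfd, &si, WEXITED);  /* reap with no PID-reuse race */
        close(pidfd);
        return si.si_status;
    }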


I was actually inspired by the Zircon system calls from Fuchsia, which is a capability-based system. Operating on the current process or another one is just a matter of passing the correct handle to the relevant system calls. I'm not familiar with the Win32 API, but I'm not surprised to learn that it has similar facilities (although MapViewOfFile2 appears to be a fairly recent addition).

I'm of the opinion that POSIX is a fossilized design kept on life support, no longer fit for purpose in the modern era. Modern Unix-likes are trying to plug the holes with mostly proprietary extensions, and this proposal would be just another one added to the pile... Whatever the future is made of, I do hope we won't have to carry 50-year-old design mistakes for another 50 years.


Are there any good public resources on the design of Fuchsia?


Fuchsia's getting-started documentation can be found here: https://fuchsia.dev/fuchsia-src/get-started/learn

If you want to drill down into a specific topic, you can look at the concepts section here: https://fuchsia.dev/fuchsia-src/concepts


existing: process_vm_readv, pidfd_getfd; proposed: io_uring_spawn

https://lwn.net/Articles/908268/ https://lpc.events/event/16/contributions/1213/attachments/1...


Welcome to Mach!


Yes, that sounds like exactly what I was thinking! It's just incredibly better.


While these Unix reforms sound great in theory, they somewhat disregard Unix’s practical, evolved nature. It’s not just about clean ideas; it’s about what works in the real world, shaped by decades of use. Implementing such sweeping changes could take way longer than one would expect.


Err, I meant the entire point of this to be that we can do nice things incrementally. E.g. I am linking LWN articles and whatnot to demonstrate that kernel devs actually thought some of this stuff was feasible.

If anything, the heavy lift here is not even changing kernels, but reforming programming languages' standard/popular libraries to make this stuff accessible and ergonomic --- most software today isn't written in C directly against kernel-developer-maintained headers. If we can make the OS changes relatively quickly, but it takes a bunch more slow steps for the new interfaces to percolate out to "regular" developers, that adds uncertainty. "Latency" is a bigger enemy than "feasibility".

You can find me jabbering away in many corners of the internet about trying to make standard libraries easier to evolve for precisely this reason.

I want to keep evolving things. I am a big believer in https://www.joelonsoftware.com/2000/04/06/things-you-should-..., I just want things to evolve with a bit more foresight than the genetic algorithm can manage on its own.


The author explicitly mentions Fuchsia as going too far i.e. advocates for "let's try improving things little by little".

> While these Unix reforms sound great in theory, they somewhat disregard Unix’s practical, evolved nature.

To me that's polite language for: it's a wild west of "oh crap, we forgot X, guess it's time for ungodly hacks in Y and Z".

I do somewhat agree with you though. Let's see what our area's emergent behavior and hacks have achieved and try to distill them into a cleaner future API.


The things in this list are clearly mistakes. They aren't optimised, evolved solutions. They've just stuck around due to network effects.

Nobody would design a process spawning API like fork/exec today.


>> The things in this list are clearly mistakes. They aren't optimised, evolved solutions. They've just stuck around due to network effects.

Hello, JavaScript:

https://www.destroyallsoftware.com/talks/wat

https://youtu.be/D5xh0ZIEUOE?si=HTWxo0rM7y26mlpC

Why are we waiting so long for WASM to replace JavaScript?

https://www.destroyallsoftware.com/talks/the-birth-and-death...

It's a similar story. Bad designs (perhaps they were reasonable given the circumstances?) that were set in stone due to network effects.


Mistakes?

The thing about evolving design is that the environment changes.

Fork/exec is fairly ergonomic compared to CreateProcess. Very few parameters. From a time when programs were small and security was simple.

No design is immortal, but just because its time is over doesn't mean it was a mistake.


Plan9 gets a lot of this right.

The author should flesh out why "Unix is bad".


Plan 9 made a number of improvements, but while "everything is a file" is an improvement, "everything is a file descriptor" is, I think, better. Namespaces aside, files act as global variables, while file descriptors act as local variables. It is much easier to make an interface that "naturally" abides by https://en.wikipedia.org/wiki/Principle_of_least_privilege with local variables.
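
To make the local-variable point concrete with calls that exist today, here is a sketch using plain openat; the paths and names are made up:

    #include <fcntl.h>
    #include <unistd.h>
    
    /* The callee can only reach files relative to the fd it was handed,
       the way a function can only touch its arguments. */
    int read_config(int config_dir_fd) {
        return openat(config_dir_fd, "app.conf", O_RDONLY);
    }
    
    int main(void) {
        int dirfd = open("/etc/myapp", O_DIRECTORY | O_RDONLY);
        int conf = read_config(dirfd);   /* pass the capability, not a path */
        if (conf >= 0)
            close(conf);
        close(dirfd);
        return 0;
    }

(openat alone doesn't stop absolute or ".." paths inside the callee; Capsicum's cap_enter or openat2's RESOLVE_BENEATH close that hole.)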

Unix < Plan 9 < Capsicum-like designs, for me.

(Of course, if Capsicum is "capability-based security for Unix", maybe a "capability-based security for Plan 9" is better than that. :))


Plan 9 in fact uses file descriptors all over the place, including for accessing synthetic or real filesystems (which are really namespaces). The only "globals" are the #<dev> devices.

IMHO you're better off starting with Plan 9 concepts + pure capabilities (KeyKOS as opposed to Capsicum). It would not be enough to have a clean architecture unless a) it matches or improves upon current performance and b) there is a way to run existing Unix software via some adapter layer, library, or monitor.

I suspect if you try to "reform" unix, there will be a lot of recidivism :-)


> recidivism

Yes, it is not enough to just add yet more interfaces. Using personalities to ban the use of deprecated system calls is an essential follow-up step. Fostering experimental kernels which don't bother to support the old interfaces at all is another.


CapROS. KeyKOS is so 1980.

I miss when OSNews was still decent.


Funnily enough, Plan 9 implements a kind of proto-capability security environment given how namespaces work. You have to squint a bit at it, but the semantics are mostly there.

Of particular note, the maxim "you can't attack what you can't see" is reasonably well represented. Whenever you rfork() a new process, you can shed whatever parts of the new namespace are irrelevant before ultimately exec()ing, and barring certain exceptions, the new image running in that child process can't get them back on its own. There are ways to augment other namespaces with mounts that are reminiscent of delegating a capability to another process. There are also ways to lock down a process to prevent it from changing its namespace, which is roughly congruent to blocking a process from receiving delegated capabilities.
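
A sketch of that shed-then-lock-down pattern in Plan 9 C; the /srv unmount is just an example of an irrelevant namespace part, and error handling is trimmed:

    #include <u.h>
    #include <libc.h>
    
    /* Run cmd in a copy of our namespace, minus what it shouldn't see. */
    void
    spawnconfined(char *cmd, char **argv)
    {
        switch(rfork(RFPROC|RFNAMEG|RFFDG)){   /* new proc, copied ns and fds */
        case -1:
            sysfatal("rfork: %r");
        case 0:
            unmount(nil, "/srv");   /* shed: no posting or grabbing services */
            rfork(RFNOMNT);         /* lock down: no further mounts allowed  */
            exec(cmd, argv);
            sysfatal("exec: %r");
        }
    }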


I think Capsicum is a good start, but it does not have revocation built in, which I think is essential for a system in which capabilities can be held by long-lived apps or services. There are approaches where revocation is expressed as a network of capabilities, but that just makes using it more complex.

Also, revocation should cascade, revoking derived capabilities too. The system would need to store a delegation tree. Then there's the issue of processor time for cascading revocation, which could depend on the size of the tree.
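
A toy model of that delegation-tree bookkeeping, just to make the cascading cost concrete; all names are invented:

    #include <stdlib.h>
    
    struct cap {
        void *resource;              /* whatever this capability grants */
        struct cap *first_child;     /* capabilities derived from this one */
        struct cap *next_sibling;
        int revoked;
    };
    
    /* Delegation records the new capability in the tree. */
    struct cap *cap_delegate(struct cap *parent) {
        struct cap *c = calloc(1, sizeof *c);
        c->resource = parent->resource;
        c->next_sibling = parent->first_child;
        parent->first_child = c;
        return c;
    }
    
    /* Cascading revoke: cost is O(size of the subtree), as noted above. */
    void cap_revoke(struct cap *c) {
        c->revoked = 1;
        for (struct cap *k = c->first_child; k; k = k->next_sibling)
            cap_revoke(k);
    }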


Yes that would be good, but I'm OK delaying it. People are used to capabilities without revocation from functional programming, Rust, WASI, etc., and don't mind so much in that context.

(Well, weak references allow revocation, but they are relatively rare.)

Yes, interprocess vs. intraprocess is different, but the above makes me think there is some juice in first exposing the lower-hanging fruit that people are already familiar with and seem to like, and then using the success of that to motivate/fund revocation.


Not the author but I recommend: "What UNIX cost us" by Benno Rice: https://youtu.be/9-IWMbJXoLM?si=3c0YkPtfAJHWB_Q-

In fact, this video argues that file descriptors are not as good as the author claims in their opening paragraph. They are another runaway idiosyncrasy born of the "everything is a file (but not really)" philosophy that underpins Linux.


Keep reading. The entire article is a list of reasons why Unix is bad (and suggestions for how to fix it).


Did you accidentally a word here?

> Remember, the file system is a flavour of database. We shoudn’t just it by other standards.


Possibly just -> judge?


Yup you are right. Will fix.


FWiW, my best guess: s/just/judge/


I dunno. I think I can get behind the ideas here, but a lot of it seems a little too informed by app requirements and not the bits (hardware abstraction) that are the actual job of an operating system.

Discussing better ways to spawn a process is great, and "everything is a file descriptor" is a fine metaphor. But if there's no thought given to mapping memory, handling virtualized machines, driver interaction (ioctls, sysfs, et al.), async signaling and IPC, etc... I don't know that this really gets very far.

You could write these APIs on top of the Linux kernel today, but you'd never be able to do anything but toy code without then wrapping a vast quantity of system interfaces in this "fd's everywhere" model.


Go already implements many of these ideas. There's os/exec (and syscall.ForkExec), and io/fs (albeit it's a family of interfaces, rather than a descriptor you could pass between processes).

It had to ban forking, because of the guarantees made by the runtime (and all the "weird" OSs it runs on, like Windows). The API is clean and easy to work with, but don't look at the implementation ;)

I'm curious what "UNIX v3" would look like today, if made by the same teams that did UNIX, and then P9/Inferno/Go.


Yes, being able to seamlessly swap out the implementation of library functions is a crucial way to justify this sort of work on a shorter timescale.

I'm very glad that many such libraries already have process-creation interfaces that are possible, and in fact easier, to implement with this approach.


This reminds me of the Unix-Haters Handbook, a favourite topic back in the 90s amongst certain groups.

https://web.mit.edu/~simsong/www/ugh.pdf

I'd encourage the author to have a look at Plan 9 and GNU Mach as well, which in some shape or form do implement many of these.

But none of them ended up being a viable reality.


The best part is still:

I have succumbed to the temptation you offered in your preface: I do write you off as envious malcontents and romantic keepers of memories. The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it from below.

Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and addled by puffiness of the genome. Yet your prison without coherent design continues to imprison you. How can this be, if it has no strong places? The rational prisoner exploits the weak places, creates order from chaos: instead, collectives like the FSF vindicate their jailers by building cells almost compatible with the existing ones, albeit with more features. The journalist with three undergraduate degrees from MIT, the researcher at Microsoft, and the senior scientist at Apple might volunteer a few words about the regulations of the prisons to which they have been transferred.

Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining.

Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy. Bon appetit!


Mach was created at CMU, and later development moved to the University of Utah. This work became the basis of the NeXT operating system, which then turned into Mac OS X.

So I'd call this a "viable reality".

PS -- I think you meant "GNU/Hurd", not "GNU Mach".


Except that in the process, Mach absorbed BSD and became a weird hybrid of the two. I worked on Mach as an undergrad at CMU when the BSD server was intended as a temporary expedient to port the Unix userland tools quickly. Instead, it made its way into the kernel.


https://news.ycombinator.com/item?id=38761195

The commenter there called something similar "recidivism", which I think is a very apt word!

We have to think critically about reform and how to have smooth/cheaper migrations without sloshing back and forth while making no progress.


Me 25 years ago, spending my days implementing mostly server applications in C and trying to design things in mechanical sympathy with the OS, would probably have had a lot of opinions on this. Mostly "don't make my life harder by changing things around too much".

Me today, still writing server applications, but now in Go, lives somewhat more remotely from the low level details of the system. There are more abstractions between me and the system. It is rare that I even have to deal directly with, for instance, low level networking system calls (except the odd ioctl, which, as someone pointed out years ago, is the carpet under which we sweep all the things that we couldn't figure out how to do properly).

During those 25 years the OS went from something I had to care about to something that I only cared about when something didn't work. I'd love to say that things got better, but somehow, unix managed to attract a lot of people who never understood what made unix good in the first place: simplicity.

Which leads to three thoughts.

The first thought is that I think it is both possible and desirable to evolve towards something that simplifies writing libraries and system software a bit. The vast majority of development work is not really affected, since we don't typically spend our days obsessing over system calls. Heck, most people who develop on unix today are not going to be able to distinguish a system call from a library call anyway.

We don't really spend a lot of time doing low level stuff. And those who do probably wouldn't mind as much as we think. Provided things don't get worse.

Which leads me to my second thought.

The second is that we should be careful about putting too much stuff in the OS that really belongs in libraries or applications. Not that I see this here, but talking about filesystems as databases and offering transaction semantics is right on the edge. It can easily become complex and hard to implement correctly.

(Yes, I'd love to have better defined filesystem IO with certain transactional capabilities. But I really would rather we didn't even try if it means there is a risk people will overdo it and I have to spend the next decade tolerating that my filesystem craps out on me more often than is the case now)

Operating systems shouldn't do anything too complicated. Complicated isn't robust. Do the complicated stuff in userspace where it doesn't matter as much when it blows up.

Third, learn from the last couple of decades of poor system software. Managing unix used to be easy. Every part of the system used to have a single purpose and you could figure stuff out. These days I'm lucky if I can figure out how to configure which DNS servers to use and make that config stick. My system is full of overly complex system software that is poorly implemented and has horrific configuration regimens.

I haven't been a sysadmin for 20 years. If I have to give a shit about these things, it means they are poorly designed, implemented or both.

In the last 20 years a lot of people got it in their heads that they were going to improve linux by designing and implementing ambitious system software to try to configure networking and daemons. And while I understand the temptation and I'd probably have committed similar atrocities if I'd been sufficiently bored, it is okay to admit that for the most part, we didn't get much net benefit.

So my third point kind of expands to: "don't try to be too clever" and "if possible, see if we can make more system software obsolete while we're at it".

EDIT: fixed typo


Same.

The only thing I'd say in contrast is that while getting worse to manage, a lot of stuff works more often than it used to.

Now, I'll expand and add a fourth point. Do not have so much hubris as to think that you can achieve in one year what took 40 years to build, and certainly do not think that just because you have better tooling you will be able to reimplement something better than what has already been done. The reality is that the new implementation will likely suck in comparison to the old one for many years, and it will require ceaseless and thankless effort. Prime example: Wayland (which is now really awesome, btw), which took a decade or so. If, otoh, something sucks and has done so for a long time, go for it. Prime example: Linux audio (OSS, ALSA, Pulse, JACK), where PipeWire is a godsend.


> Me today, still writing server applications, but now in Go, lives somewhat more remotely from the low level details of the system. There are more abstractions between me and the system. It is rare that I even have to deal directly with, for instance, low level networking system calls (except the odd ioctl, which, as someone pointed out years ago, is the carpet under which we sweep all the things that we couldn't figure out how to do properly).

This is both a disadvantage and a major advantage.

The disadvantage is that yes, changing syscalls / libc is not enough. No one is going to hand-write the FFI for your fancy new syscall; it needs to percolate down to the library they are actually going to use. That percolation takes time, and the extra latency makes this sort of reform project less enticing.

The advantage is when the library interface already works with the new system call. For example, someone pointed out that Go already doesn't expose fork/exec because it wouldn't work on Windows. Well, the proposed process-spawning interface might not only support Go's interface, but make it easier to implement.

If we can swap out a standard library implementation once, and then a huge number of applications instantly benefit, that really boosts the cost/benefit ratio!

I myself do not write applications in C either. I mainly notice these bad syscalls when the higher-level library interfaces all of a sudden get really awkward for no good reason.

> The second is that we should be careful about putting too much stuff in the OS that really belongs in libraries or applications.

Yes, absolutely agree. I meant to talk about interfaces while being agnostic about what implements them. I would be perfectly happy if the OS didn't implement a file system at all.

The reform approach probably does mean adding yet more system calls to existing heritage kernels. But that must not be the entire story. Ideally things look like this:

1. Implement more system calls in heritage kernels with better semantics.

2. Userland software can switch to new system calls and become simpler. Ideally it uses fewer different system calls than before.

3. New greenfield kernels can support the improved/simplified userland software with less work / fewer constraints.

In this manner I hope we can "pivot" to better things. (The pivoting reminds me a bit of an n-point turn when parallel parking, in that a bunch of small steps with a lot of "wash" somehow combine into something more impressive.)


Wouldn’t creating new processes from zero rather than inheriting from parent make it hard to do things like ‘sudo’?


I think it would be possible. There's the version where you start with a blank file and mmap as you please, and the version where you still start by specifying an executable and then set the other state. The latter version should work just fine with setuid.

Agreed with the others that setuid is bad and ultimately should be gotten rid of, but we do need to support it for a transitional period. So thanks, good question.


root and suid are big hammers that should be eliminated. In a capability world maybe you can send a message to some more powerful process to ask it to do things for you.


I also had in mind features like CPU affinity, and maybe the next thing that an app compiled a decade ago didn't know about.

I don't like the current forking, for sure; it's just that I'm not against inheritance…


> Remember, the file system is a flavour of database. We shoudn't just it by other standards.

A word appears to be missing here. A letter, too, but I don't care about that.


“Just” is probably supposed to be “judge”.


Yup, and I received a PR for this :) thanks whoever sent it. Should be fixed now.


So plan9?


Well, yeah. You could already rebuild most of Plan 9 using a standard Linux kernel and a mutated userspace that takes advantage of advanced features usually ignored by glibc and friends.


I stopped reading after the line "Unix is bad."

Bold sweeping statements like this are a red flag - they tell the reader that the remaining content is likely subjective and offers little value.

From reading the other comments here, it appears that there are some valid Unix criticisms and improvements (all of which have been discussed previously on HN), but Unix is certainly anything but bad. And like anything, it can always be improved. Unfortunately the author didn't take this angle when writing the first line of this post.


This seems like a subjective comment of little value.


How is this comment subjective and of little value?


Never mind. The parent comment was subjective and of little value.


> Unix is bad.

No. Doing one thing well and combining things is good.

The freedesktop parody is bad. It is a solution looking for a problem. Why do I need D-Bus and gnome-keyring to authenticate an already-authenticated user?


[flagged]


>How do you take someone seriously that opens with that?

By reading the whole article and responding to it on its merits, and not on a single three-word sentence that isn't that relevant to the core point of the article.


Thank you.

The computing world loves to be relentlessly positive about things. Ergo all the over-promising and hype cycles and whatnot. I don't want to descend into nihilism, but I think we need more criticism to balance it out. If people don't know what's bad, how can they appreciate what is good?

A lot of people revere Bell Labs. I dunno, I think the original version in Manhattan with a train running through the building might have been cooler! There was a huge diversity of computers and operating systems in the late 1960s and 1970s. It can easily be argued that technical merits had less to do with Unix's winning than the monopoly law that prevented AT&T from commercializing it.

Starting with an elementary schooler's negative sentence is as good a way as any to break from the relentless positivity and widespread Unix worship :)


I think the Unix model is a local maximum in the design space we've been living in for decades. It's incredible how well we've managed to scale and tweak the concepts into the systems we have today, but I'd love to see more experimental OS architectures again.


Definitely more experimentation would be good.

On the light experimentation front, a longstanding goal of mine is to see NixOS support multiple kernels, because the status quo of having to cobble together a userland from parts for each kernel is a huge productivity drag. Having a "userland assembly toolkit" that lets you plan out what software you want to support with what patches could dramatically lower the barrier to entry.

Trying to carve out a driver layer would also be fantastic. This should be analogized to LLVM making a reusable compiler backend, and the dramatic effect that had on programming language diversity allowing things like Rust to come into existence.

At the same time, kinda what I am saying here is that maybe it isn't quite a local maximum.

Today most kernel work is motivated by performance, maybe security. The mistakes that are already worked around with more ugliness in other layers are usually not prioritized --- we have the workarounds, what's the problem? If more OS work were thought of in productivity terms --- we want to make writing correct, secure, performant, etc. software easier and more natural --- I think we would find the gradient might be bad in many directions, but still good in others.

The stuff I include here is all things that I think could be refactored into existing "heritage" kernels without much difficulty. I actually looked a bit at FreeBSD and Linux for the process-spawning parts, for example. You just need to slice up the fork/exec code and then call it in a different order.


> Ergo all the over-promising and hype cycles and whatnot. I don't want to descend into nihilism, but I think we need more criticism to balance it out.

And then you continue with “ But file descriptors are good.”

Which arguably could be one of the points against UNIX, IMHO. As opposed to object-oriented interfaces like for example Windows NT, which allows for elegant shells like PowerShell.

Now let the downvotes flow in… ;)


When I say "but file descriptors are good" I mean https://en.wikipedia.org/wiki/Object-capability_model is the goal.

I don't believe "object-oriented" has any concrete meaning, so I have a hard time interpreting what you mean by that. Windows's HANDLE is good though, and I recall that in some cases NT made some things handle-based ahead of Linux/BSDs doing so. (e.g. is `NtCreateFile` older than `openat`? Maybe?)

PowerShell is nicer than bash, but doesn't PowerShell run on Linux? I would think the integration with a proper standard library and language runtime (C#'s, the CLR) is more important than what the system calls look like.

No downvote from me :)


The difference between Windows and Unix is the design of the OS. Unix famously handles "everything as a file". In Windows, everything is an "object" of some sort. This means that to be efficient in Unix, you need to be focused on files and text. But in Windows, you need to understand the OS objects and how to work with them. PowerShell excels at that. Someone ported Bash to Windows, but I'm still not sure why. It's a better shell than CMD, but neither is well matched to the Windows abstractions. I've never tried PowerShell on Unix, but I suspect it'd suffer from the same "impedance mismatch" as Bash on Windows; it's just not designed to work on the right abstractions.


I’ve tried PowerShell on macOS, and yes, it feels a bit like Bash on Windows


What I mean with “object-oriented” vs “file-oriented” is more the philosophical approach of the systems. I think the “everything is a file” approach of UNIX has its strengths but it also breaks down at some point.

So with the example of PowerShell (on Windows), I like that I can actually pass around objects which I then can query for details if I want to, but which can also hide their details if they’re not needed.

In contrast to files (or rather text), where details are always exposed. This is often seen as an advantage as it’s directly “human readable”, but in my experience it often makes for more complex command lines.

As said, both philosophies have their pros and cons, I just don’t see the UNIX one as the one and only truth.

But I generally try to be pragmatic and will happily use Windows or some UNIX derived system (or something else) depending on the use case.


I think Unix fails at "everything is a file" but that's actually good.

When you say "pass around", what do you mean? Is this some userland struct that future syscalls will inspect and manipulate? Or are you passing HANDLE * references to an opaque resource?

I very much agree text is bad. :)


I mean (for example) that in the shell I can pipe the output of one command to another, like on UNIX. But what gets passed is not text, but an object.


> As opposed to object-oriented interfaces like for example Windows NT, which allows for elegant shells like PowerShell.

Now let the downvotes flow in…

No downvote from me!

I always found UNIX's everything-is-a-bag-of-bytes to be a huge pain, and file descriptors even worse.

This is a hard pill to swallow for many *nix die-hards, but I would say that objectively speaking, Windows NT has a better core OS model than Unix-like OSs, including the driver model, the filesystem API, the process management API[1], and the overall plug-and-play-ness of NT.

[1]: The OP article mentions this paper by Microsoft Research, but claims it doesn't offer alternatives—it does. It explicitly mentions posix_spawn and CreateProcess. https://dx.doi.org/10.1145/3317550.3321435


I've found that delving into the structure of Windows NT (and Windows in general) is one endless stream of fascination after another. Almost 30 years of being a Windows wizard and I'm still like a kid in a toy store any time I go through it.

On the surface it seems like madness, but taking a proper look quickly shows there were and are very smart and wise people behind Windows's core.


> elegant shells like PowerShell

Microsoft really should push that harder and create some sort of standardized /usr/bin-alike so people can install CLI tools (both per-user, equivalent to ~/bin, and globally, requiring admin) in a controlled manner.

PowerShell is an awesome shell.


> /usr/bin-alike

This is called C:\Program Files.

> ~/bin

This is called %LOCALAPPDATA%, which lives in C:\Users\<Username>\AppData\Local.

There are exact equivalents to Unixisms on Windows; it's just that Unix developers don't want to use these equivalents.


So, putting executables in C:\Program Files will work to make them available to you on the shell?


No; to make an executable available on the shell, you add its directory to PATH, same as on *nix.


Shame about the language


Can you explain?


If it isn't relevant then don't put it there. Especially if it's inflammatory and highly debatable.


I read the article. That is the point. It has no relevance at all. The article was horrible; it makes an opening statement that can't be supported. You people are just sheep.


Like this comment says, anyone that is serious about UNIX knows it is a pile of hacks: everything is a file except when it is not, and being free beer with source code is what largely contributed to its adoption by universities and workstation vendors, saving costs on OS licenses and R&D.

The UNIX creators did not come up with Plan 9 and Inferno just for fun.

And to quote Rob Pike, someone who knows more about UNIX than most folks here, so maybe you will take him more seriously:

"We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy."

-- https://interviews.slashdot.org/story/04/10/18/1153211/rob-p...


It's simple - you don't. Just close the tab and move on.


You think "true, but what does this author have to say about that?" and continue on to read the rest, that's how.


Yeah, exactly... Opposed to?


The next best thing we should try to move to, if we accept that having more experience and knowledge makes existing tech seem "bad" in retrospect.


... the alternative solutions proposed in the article.


Which don't exist.


Because they are proposals. Are you claiming that a design can't be bad if better alternatives haven't been implemented yet? That is clear nonsense.


> Unix is bad. But file descriptors are good. Modern Unix-derived operating systems (Linux, some BSDs) are already moving towards file descriptors for everything.

Perhaps we should just call them "object handles" instead.


Fine with me!



