I actually find Unix and Windows to be very fundamentally different. Particularly where he talks about security. Unix security concepts are built into the system architecture, whereas in Windows they are implemented as features on top of the OS. A perfect example that he calls out is ACLs: some support does exist in the architecture of Windows, but it is only a half-hearted implementation.
Modularity is another example of something that is fundamental both in the architecture AND the philosophy of Unix, but very far behind in Windows, where many applications, such as a browser, can tie into kernel space, so a browser exploit can reach the kernel.
The communities and philosophies are also something I breezed over, but I think they are a non-trivial part of an operating system.
> I actually find Unix and Windows to be very fundamentally different.
There is a small child growing up in El Paso who thinks English and Spanish are "very fundamentally different", too, because in English there's no word for "quererte" or "sacártelo". But that's just because they've never heard Chinese, let alone Lojban or Python, and they don't know how to read yet, so they have no idea along what lines languages might vary.
Unix and Windows are both single-node monolithic multi-user non-real-time operating systems built on a hierarchical filesystem (with names that are sequences of strings) with ACLs for discretionary access control, no mandatory access control, a global user namespace controlled by a single system administration authority, and in which executables (which share code using dynamically-linked libraries) run with the full permissions of the user who invoked them. In both systems you communicate with I/O devices as if they were files. The interface they provide to user processes uses system calls to present a programming interface that is much simpler than that of the underlying machine; those processes can have multiple threads sharing memory that block in system calls independently and can pre-empt one another, and by default their memory is not shared with other processes. They are both written mostly in C with some C++, and other programming languages are more or less obliged to use C calling conventions to interoperate. Both of them use sockets for network I/O. Users typically use them via a windowing system, which provides a huge variety of complicated ways of setting pixels inside a rectangular region on a virtual screen, and accommodates up to one mouse pointer.
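The "communicate with I/O devices as if they were files" point is visible from ordinary user code; a minimal sketch, assuming a Unix-like system where `/dev/urandom` exists:

```python
# A device node is opened and read with exactly the same file API
# used for ordinary files; the kernel hides the device behind it.
# Assumes a Unix-like system with /dev/urandom.
with open("/dev/urandom", "rb") as dev:
    data = dev.read(16)  # 16 bytes of kernel-supplied randomness

print(len(data))  # 16 bytes requested
```

The same program shape works on a regular file, a pipe, or a terminal, which is the uniformity both systems inherited.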
Compared to any of PolyForth, Pick, Genera, Oberon, VMS, KeyKOS, VM/360, SqueakNOS, MacOS up to 9, MS-DOS, Spring, Sprite, QNX, and Amoeba, Unix and Windows are as alike as two peas in a pod.
This is a good point, though also a bit curious, proposing as it does that MS-DOS is perhaps a superior alternative. Protected memory is indeed an argument that Windows and Unix are the same. I'm less inclined to agree it is a reason they should be bashed in unison.
MS-DOS is a superior alternative only for hard real-time systems and, perhaps, for systems where security is more important than almost any functionality. And probably running Linux under a real-time operating system like RTLinux is a better alternative in the first case.
My point, though, is not that MS-DOS is better in any way; rather, it's that "a flat space of multiple processes with independent address spaces of mutable memory, separated using memory protection, each containing multiple threads, which access I/O devices through system calls" (and, although I didn't say this, with disjoint kernel and user spaces) is only one possibility among many.
You could have only one process with one thread.
You could have multiple processes, but all in the same memory space, with any of them able to overwrite the others' data. (You could call this "one process, multiple threads.")
You could have multiple processes that share an address space but have access to different parts of it, which sounds stupid but means you can pass raw memory pointers in IPC and was the basis for a whole research program called "SASOSes" a few years back.
You could reuse the same addresses for kernel space and user space, which not only gives you a full 4GiB of virtual address space on a 32-bit machine, but also ensures that kernel code which dereferences a pointer passed in from user space without using the appropriate user-space-access function won't pass even the crudest testing. (It also imposes the cost of two virtual memory context switches on every system call; I think i386 can do this cheaply with segment registers, but I'm not sure, and basically nothing else can.)
You could give user processes direct access to I/O devices, instead of mediating that access through system calls, which might be sensible in an environment where you wrote all the processes.
You could virtualize the I/O devices, just as we virtualize memory, so that, for example, your gigabit network card copies packets directly into your memory space — but only if they’re your packets, not packets addressed to a different process. (This was originally called "VIA" on Linux; I think it has a different name now.)
You could separate your processes through a trusted compiler, like Erlang does, instead of with hardware; an intermediate approach would use a trusted machine-code verifier, analogous to Java's bytecode verifier, or a trusted machine-code rewriter that compiled unsafe machine code to safe machine code.
You could allow only a single thread per process, like Erlang does and like Unix did for many years, either with or without explicit shared-memory facilities like shmseg and mmap.
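The "single thread per process, with explicit shared-memory facilities" model can be sketched with an anonymous shared mapping inherited across fork(). This assumes a Unix-like system (fork and anonymous shared mmap are not available on Windows):

```python
import mmap
import os
import struct

# Explicit shared memory between otherwise separate single-threaded
# processes: an anonymous MAP_SHARED mapping created before fork()
# is visible to both parent and child.
buf = mmap.mmap(-1, 8)  # anonymous, shared by default on Unix

pid = os.fork()
if pid == 0:
    # Child: write into the shared page, then exit.
    buf[:8] = struct.pack("q", 42)
    os._exit(0)

os.waitpid(pid, 0)  # parent waits, then observes the child's write
value = struct.unpack("q", buf[:8])[0]
print(value)  # 42
```

Everything outside `buf` stays private to each process, which is the point: sharing is opt-in rather than the default, as with shmseg and mmap on old single-threaded Unix.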
You could entirely decouple threads of control from memory spaces, as KeyKOS did (if you look at it funny; KeyKOS domains are an awful lot like single-threaded processes, but you could instead consider them to be locks).
You could make all memory write-once, eliminating many of the difficulties that attend sharing it (Umut Acar's Self-Adjusting Computation paper is based on an abstract machine using this model) but probably requiring a global garbage collector.
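To make the write-once rule concrete, here is a toy model (a hypothetical `WriteOnceStore`, not taken from any real system): each cell can be bound exactly once, so no reader can ever observe a cell changing, which is what removes the usual sharing hazards.

```python
# Toy model of write-once memory: a cell may be bound once and is
# immutable thereafter, so concurrent readers are race-free by
# construction. A real system would also need a garbage collector,
# since cells can never be recycled in place.
class WriteOnceStore:
    def __init__(self):
        self._cells = {}

    def write(self, addr, value):
        if addr in self._cells:
            raise RuntimeError(f"cell {addr!r} already written")
        self._cells[addr] = value

    def read(self, addr):
        return self._cells[addr]

store = WriteOnceStore()
store.write("x", 7)
print(store.read("x"))  # 7
# store.write("x", 8) would raise: cells cannot be mutated
```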
You could replace the memory abstraction with a transactional store and execute transactions rather than continuous-time processes; the transactions could either be time-limited, as in CICS, or preemptively scheduled like processes, but in either case incapable of I/O or IPC.
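A toy sketch of the transactions-instead-of-processes model: each transaction is a pure function run against a snapshot of the store, with its writes applied atomically on commit or discarded entirely on abort. The names and structure here are illustrative, not CICS's actual interface:

```python
import copy

# Each transaction is a function from a snapshot of the store to a set
# of updates. It performs no I/O or IPC; its effects appear atomically
# on commit, or not at all if it raises.
def run_transaction(store, txn):
    snapshot = copy.deepcopy(store)  # consistent view for the transaction
    try:
        updates = txn(snapshot)      # pure computation over the snapshot
    except Exception:
        return False                 # abort: store is left unchanged
    store.update(updates)            # commit (atomic in this serial model)
    return True

store = {"balance": 100}
ok = run_transaction(store, lambda s: {"balance": s["balance"] - 30})
print(ok, store["balance"])      # True 70
failed = run_transaction(store, lambda s: 1 / 0 and {})
print(failed, store["balance"])  # False 70 -- the aborted txn left no trace
```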
So, considering the enormous design space of possibilities on even this single matter, Unix and Windows are huddled together in one tiny corner of the design space, as on many other design choices. It's clearly a better corner than many other possibilities that we've explored, especially on currently-popular hardware and with compatibility with the existing applications that bcantrill was deifying upthread. But the design space is so big and multidimensional that it seems terribly unlikely that we've found an optimum. We know that it has failed to ever produce a secure system against many plausible threat models, and that producing a hard-real-time system in it is possible but more difficult than with some alternative models. We know shared-mutable-memory threading is terribly bug-prone. We know that indirecting all I/O through the kernel imposes heavy performance costs, which adds complexity to user processes and raises the market barrier for high-performance I/O hardware like InfiniBand. We know that processes separated by the use of virtual memory facilities are very heavyweight, so you can't switch between them at more than a few hundred kilohertz on a single core, you can't create them at more than a few kilohertz per core, and you can't practically make them smaller than a few tens of kilobytes, all of which limit the power of the process as an abstraction facility. (Linux has actually reduced the cost of processes, both in physical memory and in context-switch time, by more than an order of magnitude; I imagine OpenBSD has too. But improving the situation much further probably requires different abstractions.)
That's a more involved response than I think I deserved. :)
No argument that winux is only a local maximum, but this thread is in response to an article that basically claimed it was a global minimum. Fwiw, your comments are, imo, far more informative and constructive than the linked post.
> Unix security concepts are built into the system architecture, whereas in Windows they are implemented as features on top of the OS. A perfect example that he calls out is ACLs: some support does exist in the architecture of Windows, but it is only a half-hearted implementation.
I think you have the layering completely the opposite way. NT has security descriptors on everything that has a name. Then above that there is Win32, originally bolted on top of NT as one compatibility mode among others, which is historically an API for systems that were not very security-conscious. And most Windows programs out there don't care about the security features.
So it's more like the higher layers suck in this regard.
I absolutely may have the layers the wrong way. My working knowledge of Windows is very limited compared to Unix and I may not have fully understood how Windows is put together.
> Unix security concepts are built into the system architecture, whereas in Windows they are implemented as features on top of the OS. A perfect example that he calls out is ACLs: some support does exist in the architecture of Windows, but it is only a half-hearted implementation.
Huh?
The fundamentals of Windows NT (the object manager, the registry and NTFS) all have ACLs.