Unix's technical history is mostly old now (utcc.utoronto.ca)
116 points by ingve on Nov 27, 2022 | 126 comments



This is why I'm interested in old, now mostly obsolete/dead operating systems. No one has really felt free or able to break out of the UNIX mould since. And we've been using the mould for a long time. It wasn't always so uniform. There have been some craaaaazy things out there that we simply never see today because they don't fit. Off the top of my head:

* ITS (of PDP-10 hacker fame) - processes could debug and introspect their child processes. The debugger was always available, basically. The operating system provided support for breakpoints, single-stepping, examining process memory, etc. Whatever part of the system manages processes is the natural place for the debugger, really. Some of this has been brought to Unix systems recently, but you still can't trivially freeze a process, tweak a few values in its memory, and resume it as an integrated part of the operating system. Why not? It seems very basic to an operating system, now that I think about it.

* KeyKOS (developed by Tymshare for their commercial computing services in the 1970s) - A capability operating system. If everything in UNIX was a file, then everything in KeyKOS was a memory page, and capabilities (keys) to access those pages. The kernel has no state that isn't calculated from values in the virtual memory storage. The system snapshots the virtual memory state regularly. There are subtle consequences from this. Executing processes are effectively memory-mapped files that constantly rewrite themselves, with only the snapshots being written out. Snapshotting the virtual memory state of the system snapshots everything -- including the state of running processes. There's no need for a file system, just a means to map names to sets of pages, which is done by an ordinary process. After a crash, processes and their state are internally consistent, and continue running from their last snapshot. For those who are intrigued, there's a good introduction, written in 1979, by the system's designers available here: http://cap-lore.com/CapTheory/upenn/Gnosis/Gnosis.html (It was GNOSIS before being renamed KeyKOS.) And a later document written in the 90s aimed at UNIX users making the case: http://cap-lore.com/CapTheory/upenn/NanoKernel/NanoKernel.ht... Some work on capability systems continues, but it seems the lessons learned have largely been forgotten.


Ironically, Microsoft's Windows is one of the only big non-Unix operating systems out there. I say ironically, because people seldom bring up the Microsoft ecosystem as a beacon of adding to diversity.


I respect Windows for saying no to the hack that is fork. Fork is just bad design and no software should use it, and thanks to the existence of Windows, lots of software avoids using fork, which is a big plus for the software ecosystem as a whole. Thanks to Windows, new operating systems like Fuchsia can also avoid fork.


I don't know enough to understand why fork is a bad design. Can you say something about what problems there are?


Microsoft people explained it better than I could, so I link to their explanation: https://www.microsoft.com/en-us/research/publication/a-fork-...


Thanks, very informative.


The paper that sanxiyn linked is a great read. My summary is that forking and multithreading are a terrible mix. You have two bad options:

- Duplicate all threads. No one does this because it would be total chaos. If another thread was writing to a file or a socket, parts of those writes might happen twice, etc.

- Duplicate only the calling thread. This isn't total chaos, so everyone does this, but this also sucks. Any locks that were held by any other threads will never be released in the new process, so you need to make sure not to touch any locks at all post-fork. But that's a huge restriction, because e.g. malloc() touches locks sometimes. Code that runs post-fork ends up being as restricted as code that runs in signal handlers.
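
To make that concrete, here is a minimal C sketch of the post-fork lock problem, with an explicit mutex standing in for a library-internal lock such as malloc()'s (compile with -pthread; the program deliberately hangs, which is the point):

  #include <pthread.h>
  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Stand-in for an internal library lock (e.g. the allocator's). */
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *holder(void *arg) {
      pthread_mutex_lock(&lock);    /* another thread takes the lock...  */
      sleep(2);                     /* ...and is holding it at fork time */
      pthread_mutex_unlock(&lock);
      return NULL;
  }

  int main(void) {
      pthread_t t;
      pthread_create(&t, NULL, holder, NULL);
      sleep(1);                     /* ensure the lock is currently held */

      pid_t pid = fork();           /* only the calling thread is duplicated */
      if (pid == 0) {
          /* The holder thread doesn't exist in the child, so the lock it
             held is never released; this blocks forever. The same thing
             can happen implicitly inside malloc() or printf(). */
          pthread_mutex_lock(&lock);
          _exit(0);
      }
      waitpid(pid, NULL, 0);        /* never returns: the child is stuck */
      pthread_join(t, NULL);
      return 0;
  }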


What if the fork syscall failed if there was more than one thread? Would it be bad design then?

I guess the advantage of fork relative to other hypothetical methods of creating child processes is essentially just getting a copy of all the parent's state before the fork. So maybe the question is how often getting a copy of all the parent's state is actually useful?


You'd then have an explicit early-fail footgun instead of a rake in the grass, which has to be an advantage.

However, fork() is still not necessarily a good design, because the vast majority of the time the only reason you call fork() is to immediately call exec() afterwards, since you're trying to launch a separate process. So fork() does all the hard work of duplicating the entire address space of the process, and then exec() throws it all away again.
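
For that launch-a-separate-process case, POSIX also has posix_spawn(3), which creates the child and starts the new program in one step, so (at least conceptually) the parent's address space never gets duplicated just to be thrown away. A minimal sketch:

  #include <spawn.h>
  #include <stdio.h>
  #include <sys/wait.h>

  extern char **environ;

  int main(void) {
      pid_t pid;
      char *argv[] = { "ls", "-l", NULL };

      /* No fork(): the child is set up and exec'd in one call. */
      int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
      if (err != 0) {
          fprintf(stderr, "posix_spawnp failed: %d\n", err);
          return 1;
      }
      waitpid(pid, NULL, 0);
      return 0;
  }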


Makes sense.

One thought is, if the OS/libc was designed so that exec() always created a child process, then you could get the same functionality as fork() by creating a child process using exec() and passing along just the info that the child process needed, possibly using IPC. This seems like better encapsulation, for the same reason you wouldn't write a function which required a ton of arguments it didn't use. It also makes the issue of forking a multithreaded process irrelevant.

But I could imagine this approach being slow if there's a lot of state you want to pass, if copying entire pages of memory (the way fork does?) is significantly faster than sending the same data via IPC.

I wonder what the most common non-exec() use of fork() is.


fork() doesn't actually copy entire pages of memory. But it does have to copy the page tables (note - someone else might be able to chip in and tell me that Linux has an optimisation for that too). The actual pages are held with a copy-on-write status, so they won't be copied until they are actually modified by one of the processes.

There are quite a lot of things that survive exec(). Not least of which is the set of open files. This is how a shell will pass on the definitions of stdin/stdout/stderr to a command that you want it to run, for example. Also, environment variables survive exec().
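
A minimal sketch of that descriptor-inheritance trick, roughly what a shell does for `ls -l > listing.txt`:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      pid_t pid = fork();
      if (pid == 0) {
          int fd = open("listing.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (fd < 0) { perror("open"); _exit(1); }
          dup2(fd, STDOUT_FILENO);   /* stdout now refers to the file */
          close(fd);
          /* Open descriptors and environment variables survive exec: */
          execlp("ls", "ls", "-l", (char *)NULL);
          perror("execlp");          /* only reached if exec fails */
          _exit(127);
      }
      waitpid(pid, NULL, 0);
      return 0;
  }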


>Also, environment variables survive exec().

Does this change depending on whether the environment variable definition was prepended with 'export'?


Thanks for the info!


For the Linux kernel's viewpoint, where they are trying something new (io_uring_spawn) due to fork's problems, see https://lwn.net/Articles/908268/


Didn't the Windows network stack come from NetBSD? There were many BSD contributions to Windows... BSD386 was a direct Unix derivative, I would argue.


Of course, Windows (and especially Windows NT) learned and borrowed from the Unix world. But it retains a distinct personality and lineage.


Microsoft's Unix derivative, Xenix, was a big influence on DOS 2.0, in particular the concept of directories, path name syntax, and replacing the CP/M-derived FCB API for file I/O with Unix-style file handles. That influence endures all the way from DOS 2.0 up to Windows 11.

People talk about Windows NT being influenced by OpenVMS, via Dave Cutler – the influence is very obvious in some areas (e.g. asynchronous IO), but path name syntax is one area in which Windows is much more influenced by Unix than by OpenVMS (even considering warts such as drive letters, support for backslash instead of forward slash, and reserved file names such as NUL and CON.)


Another influence from OpenVMS is what many of those API surfaces look like; when I looked into OpenVMS documentation for the first time during my digital archaeology, it somehow felt like familiar territory.


Correct me if I am wrong, but isn't the NT design heavily based off of (some would say a rip-off of) VMS, Microsoft having poached much of DEC's OS dev team, including chief architect David Cutler?


There is the old joke that, when you increment each letter in VMS, you get WNT.


Similarly, HAL from 2001: A Space Odyssey is IBM with each letter decremented by one.


And HAL is an important component of WNT.


Considering David Cutler was instrumental in both VMS on the VAX and then eventually Windows NT, this isn't so much a rip-off as a continuation.


Yes, and DOS inherited things from older DEC operating systems like TOPS-10 (which IIRC was the first occurrence of using forward-slash (/) options on the command line).


Many are under the mistaken impression that DOS got forward slash as an option character from CP/M or from IBM. CP/M denoted options using square brackets, and IBM mainframe operating systems generally indicate options using parentheses. (The usage of square brackets appears to be a modification of IBM's approach; I'm not sure why Gary Kildall made that change – before creating CP/M, he'd used IBM CP/CMS at the Naval Postgraduate School, which likely influenced both this option style, and also drive letters.)

My own reconstruction of what I think happened (this was all before I was born, I'm just making some inferences based on what I know about the topic) – Microsoft was heavily influenced by DEC operating systems (Bill Gates and Paul Allen wrote the first version of Microsoft Basic on a PDP-10). Microsoft didn't like CP/M's native option syntax, so for their development tools for CP/M, they adopted a DEC-style option syntax instead. Since many ISVs used Microsoft's development tools, a lot of third party software on CP/M ended up using the same option syntax too. Tim Paterson at Seattle Computer Products was influenced a lot by Microsoft (even before he worked for them), so QDOS/86-DOS/PC-DOS/MS-DOS ended up with the DEC/Microsoft-style forward slash as an option character. When DOS 2.0 came around, Microsoft wanted to switch it to dash/minus, for better Unix/Xenix compatibility. In support of this, they added a CONFIG.SYS setting SWITCHCHAR= to let you choose between forward slash and dash/minus, and an API (INT 21,37) to get/set that config setting at runtime. However, it was decided that the backward compatibility cost of that change was too great, so they pulled the feature from the documentation (although it still existed in the code; in later DOS versions, they disabled the ability to change it to anything other than forward slash, but the undocumented API to get the setting was never removed.)


The Windows XP source tree contained various Unix utilities that were ported to Windows. Eg, Perl, Vim, Vi, wc, cat. Imagine if they made it into the public release.


Yeah, I'd say that the network stack on Windows is not an example of a non-Unix design. And there are other bits and pieces here and there that show some Unix influence (like the hierarchical filesystem). But on the whole, one can clearly see that Windows comes from a different heritage.


The hosts file in Windows is still called etc/hosts which I’ve always found amusing ;P


Yes, although it was rewritten in Windows Vista; the NetBSD code is no longer part of Windows.


Do you have a reference for that? I remember that many TCP implementations were derived from the Reno/Tahoe tape source, but I hadn't heard until now that this included NT. Iirc that work was done by or managed by J Allard.


fun fact: early versions of windows nt included some libraries/runtimes that made it posix compliant.

the sockets api was modeled after berkeley and the libc itself is posix compliant to this day, i believe.


As I remember it (and I had no inside knowledge, I was just an application developer who read the docs), NT was supposed to have 3 distinct APIs for interfacing with the kernel: the POSIX subsystem, Win32 for compatibility with Windows and the next-generation OS/2 API.

Then IBM and MS fell out and Microsoft put all their resources into Win32.


Windows NT was originally supposed to be "OS/2 3.0". Then IBM and Microsoft broke up.

However, Windows NT did actually have those three APIs – the Win32 subsystem (which also incorporated Win16/DOS support), the POSIX subsystem, and the OS/2 subsystem. But the POSIX and OS/2 subsystems were always rather half-baked, and never saw that much use. The OS/2 subsystem was removed in Windows 2000; it only ever supported OS/2 1.x apps, and only character mode unless you purchased a separate graphics mode add-on from Microsoft. The POSIX subsystem was not removed until Windows 8.1, although by then it had changed its name to "Subsystem for Unix-based Applications", and had grown a lot compared to the original NT 3.x POSIX subsystem. WSL is effectively the replacement for the POSIX subsystem, however its implementation is very different (which is true of both WSL1 and WSL2, despite the fact that they are also very different in implementation from each other.)


In Windows XP and Server 2003, the original POSIX subsystem[0] was replaced with Interix[1] as part of Services for Unix.

[0] https://en.wikipedia.org/wiki/Microsoft_POSIX_subsystem

[1] https://en.wikipedia.org/wiki/Interix


Interix wasn’t really a “replacement” - the fundamental architecture hadn’t changed, PSXSS was still there. From what I understand, Interix was just an enhanced version of the original POSIX subsystem code, not some from-scratch rewrite.


I thought that Softway replaced the Microsoft PSXSS with their own but I don't remember for sure. This would be an interesting software archaeology rabbit hole to go down.

Interix 2.2[0] shipped with its own PSXSS.EXE. I don't have handy a contemporaneous version of the PSXSS.EXE that shipped with Windows at the time to compare against.

Finding a copy of OpenNT prior to the Microsoft acquisition is proving to be difficult.

[0] https://archive.org/details/INTERIX2.2.7z


Wasn’t their updated version a modified version of Microsoft’s original code, which they’d licensed from Microsoft? If that’s true, their PSXSS was more an enhancement than a “replacement”.


That would be interesting to know. I’d love to see a copy of OpenNT. Sadly, my experience started with it when it was already Interix. I know Microsoft negotiated source access/licensing with various third-party software “manufacturers” (like w/ Citrix for WinFrame). I’d love to know more about the arrangement with Softway.

So much history is slipping away.


Apparently "BetaArchive" has a copy: https://www.betaarchive.com/database/view_release.php?uuid=d...

However (putting aside the legalities of downloading "abandonware"), they won't let you download anything unless you first upload something they don't have. Heck, I probably do have something they don't have in my box of old floppy disks (which I keep on telling myself I will image one of these days), but my curiosity about this topic isn't strong enough to motivate me to do that.


notably the actual subsystem wasn't really necessary, if i recall it was some poorly implemented stubs for unix style system services. (think a backend for functions like getpwnam)

the important part was the posix api functions that made porting unix daemons that had been written in c/c++ to nt pretty easy. there were some quirks around shared memory/mmap, getpwnam, the sockets api and threading support, but beyond that doing a port was pretty easy.

the work there was plugging into the service control and event logging apis. and doing an installshield, that was no fun.


The subsystem absolutely was necessary to properly support fork(). The NT kernel has always supported fork, but Win32 can’t cope with it, forking confuses CSRSS.EXE. POSIX/Interix/SFU/SUA used PSXSS.EXE instead which wasn’t upset by forked processes.


that's right, i remember now fork being an issue. wasn't that much of a deal if i recall as most programs only really used it to spawn new processes by calling exec immediately after. (so, just swap it for CreateProcess)

if a program did use fork as part of its concurrency model (like say, preforking apache) then making use of actually functional nt threading support and gratuitous ifdefs was usually the answer.


exec() was another problem - NT doesn’t support reusing a process for a new executable; exec() on Windows actually spawns a child process and then exits the parent - which means exec() changes your PID.

Both Cygwin and POSIX/Interix/SFU/SUA use the same workaround - maintain a separate Unix PID for each process. exec() changes the NT PID but leaves the Unix PID unchanged. So exec() creates a new process, but to Unix apps it looks like the same process.

You are right that for many apps, rather than rely on emulated fork/exec, it was better to switch to native Windows facilities. Still, there was some value in being able to recompile Unix software with minimal changes (as Cygwin demonstrates). Even WSL now confirms that, albeit in a rather different way. Also, it was necessary to pass the POSIX test suite, which Microsoft briefly was concerned about (until they successfully lobbied the US government to drop POSIX certification as a procurement requirement).


Posix compliance was/is often a checkbox requirement to bid for government contracts. So M$ tossed in the Posix subsystem so they didn't get excluded based on that requirement. Lots of otherwise odd features in software fall into this category (See also: Apple A/UX).


Enjoy the demo of Mesa/Cedar then,

"Eric Bier Demonstrates Cedar"

https://www.youtube.com/watch?v=z_dt7NG38V4

One for Oberon,

https://www.youtube.com/watch?v=OJGnpmnXR5w

IBM Redbooks are a source of information for IBM i and z

https://www.redbooks.ibm.com/

And Burroughs is still sold as ClearPath MCP.

https://public.support.unisys.com/search2/DocumentationSearc...


> The debugger was always available, basically

Because ITS had the debugger (DDT) as its shell. ITS wasn't the only OS for which this was true – the same was effectively true of CTSS, since in CTSS debugging was just a bunch of shell commands, not launching a separate "debugger". The same is true of IBM VM/CMS, for assembler-level debugging. (Source-level debugging is launching a separate program though.)

One interesting feature of ITS – it had an API (called "valret" or "valretting") which an application could call to run a command in the OS shell/debugger. So basically, an application could actually control the debugger debugging it. I've never seen any other system with that feature, although it is possible to implement with GDB – allocate a buffer, put a "run this debugger command" request in it, pass it to a dummy function which just returns. Have a "processed" flag in the buffer – without a debugger, calling the dummy function will not set that flag, so the app knows no debugger is listening for commands. Now write a GDB Python script which sets a breakpoint on that function, when the breakpoint is hit, it reads the command out of the buffer, runs it, optionally writes any reply back in to it, sets the processed flag, then continues execution. The app can see the processed flag is set, so it knows the command was run, and can read any reply from the buffer too.
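
A rough sketch of what the application side of that GDB trick could look like (all the names here are made up for illustration; the GDB Python side would set a breakpoint on debugger_hook, read the command, write the reply, and set the flag):

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical request buffer shared with an attached debugger. */
  struct dbg_request {
      char command[256];
      char reply[1024];
      int  processed;     /* set by the debugger's breakpoint script */
  };

  /* The GDB script breaks here; with no debugger attached this is a
     no-op and 'processed' stays 0. Keep it out-of-line so the
     breakpoint has somewhere to land. */
  void debugger_hook(struct dbg_request *req) { (void)req; }

  int main(void) {
      struct dbg_request req = {0};
      strcpy(req.command, "info registers");
      debugger_hook(&req);
      if (req.processed)
          printf("debugger replied: %s\n", req.reply);
      else
          printf("no debugger listening\n");
      return 0;
  }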

MS-DOS COMMAND.COM did have a somewhat related feature – INT 0x2E. You could use that to tell COMMAND.COM to run a command. The difference from e.g. the C system() function, is it didn't spawn a child process COMMAND.COM to run the command, it told the parent COMMAND.COM to run it for you.

(What many people today don't understand about MS-DOS is that it was actually a pseudo-multitasking operating system – it supported having multiple processes in memory at once, unlike many earlier operating systems such as CP/M, which only supported having a single process in memory at a time – but unlike an actual multitasking OS, only the single foreground process could actually be running; all other processes would be suspended – so you could have a child process, but you'd be suspended while it ran. An ancestor process could export an API to its descendant processes by installing an interrupt handler, which is exactly what COMMAND.COM did by installing a handler for INT 0x2E.)


I think that a lot of interesting things were lost from the IBM and DEC operating systems. I spent a lot of time in VM/CMS and VAX/VMS (plus a little MVS/TSO) in the 80s and 90s and my Unix time was relatively limited for quite a while. I really liked Rexx (IBM’s scripting language which they brought into OS/2, but is now pretty much dead) and the way that command line options for VMS programs could be declared independently of the executable (plus its automatic allowance for abbreviations where you could abbreviate a command to its minimal unique prefix).


> I really liked Rexx (IBM’s scripting language which they brought into OS/2, but is now pretty much dead)

I’m not sure if it is the same, but back in my Amiga-days we had Arexx which I thought was an amazing tool to do tool-automation.

To this day I haven’t seen anything quite like it. The closest is probably Windows and COM, but nobody likes that ;)


Yes, the Amiga Rexx was based on IBM’s Rexx. I don’t know if they coded it to be compatible or licensed code from IBM.


> you still can't trivially freeze a process, tweak a few values in its memory, and resume it as an integrated part of the operating system

Can't you do this with gdb? Running `gdb -p <some pid>` pauses that process, you can inspect it, set breakpoints, change values in memory, etc, and then resume execution.


If you have the slightest interest in the history of operating systems, I strongly recommend reading Joe Armstrong's thesis. While it is mostly about Erlang, he also goes into detail about many of the experimental systems of the 1960s, 1970s and 1980s, which gave him his ideas:

https://erlang.org/download/armstrong_thesis_2003.pdf

Chapter 5 is especially relevant:

excerpt:

What is a fault-tolerant system and how can we program it? This question is central to this thesis and to our understanding of how to build fault-tolerant systems. In this chapter we define what we mean by “fault-tolerance” and present a specific method for programming fault-tolerant systems. We start with a couple of quotations:

We say a system is fault-tolerant if its programs can be properly executed despite the occurrence of logic faults. — [16]

...

To design and build a fault-tolerant system, you must understand how the system should work, how it might fail, and what kinds of errors can occur. Error detection is an essential component of fault tolerance. That is, if you know an error has occurred, you might be able to tolerate it by replacing the offending component, using an alternative means of computation, or raising an exception. However, you want to avoid adding unnecessary complexity to enable fault tolerance because that complexity could result in a less reliable system. — Dugan quoted in Voas [67].

The presentation here follows Dugan’s advice, I explain exactly what happens when an abnormal condition is detected and how we can make a software structure which detects and corrects errors.

The remainder of this chapter describes:

• A strategy for programming fault-tolerance — the strategy is to fail immediately if you cannot correct an error and then try to do something that is simpler to achieve.

• Supervision hierarchies — these are hierarchical organisations of tasks.

• Well-behaved functions — are functions which are supposed to work correctly. The generation of an exception in a well-behaved function is interpreted as a failure.


Just an FYI in case you don't already know of them, I found the following two papers by Armstrong useful too:

* A History of Erlang

* The Development of Erlang


Some of my most enjoyable days were spent programming Erlang. I often wondered what an entire operating system would be like were it designed along the same principles.


Rob Pike’s talk “Systems Software Research is Irrelevant” (given 22 years ago!) provides some nice color here - we kinda just settled on “good enough” in a lot of ways: http://doc.cat-v.org/bell_labs/utah2000/utah2000.html


His point about how software on a high-end workstation barely changed over the course of 10 years is interesting. I feel like if you asked someone many decades ago whether hardware vs software would change more, they would likely say "software of course! it's easy to rewrite."

It's interesting to read a manpage on a slick, modern-seeming system like macOS and notice a date at the bottom which is literally last century.

I guess the key thing is the birth of self-sustaining "software ecosystems": even if the ecosystem is flawed, the benefit of being part of the ecosystem is really high, and choosing to be part of the ecosystem means adding to it, which makes the benefit of being part of it even greater. Unix is self-sustaining despite any flaws in the same way qwerty and the English language are self-sustaining despite any flaws. Unix is like a protocol for software projects to coordinate with each other and with human beings.

Ironically, if it wasn't for rapid development of hardware and broad penetration of computers throughout society, we might still be in a phase of rapid systems evolution, since the "ecosystems" wouldn't be as well-developed. Popularity increase -> lock-in.


Saying we settled on something social shows a misunderstanding of the humanities. For example... we didn't pick Linux to win out of all the OSes available at the time or in the near future. It just became the winner through an extremely complex series of events, to which diminutive references don't do justice.


And in that time, the state of the art in systems software has advanced so dramatically as to be almost unrecognizable to someone from that short a time ago. In programming interfaces, bringing the hypervisor into the OS, concurrency and scalability and performance techniques, etc.

I think the subject is right: systems software "research" is irrelevant. Because academia could no longer keep up.


"All these things academics invented that are now commonplace..."


Academics didn't invent them.

IBM and Bell Labs, maybe.


You may be surprised that academics exist in places that don't have 'university' or 'college' in their name. From experience, folks at places like IBM TJ Watson Research Center, AT&T Bell Labs, Xerox Palo Alto Research Center or DEC Western Research Lab considered themselves academics, had academic job descriptions and did the work of academics.


I'm not even slightly surprised that people who were worth their salt and could actually create fled universities and went to private industry and did great things there, while traditional academia languished and stagnated. It's not the people who are necessarily unsuited, it's the institutions.


None of those things is novel in academia though...


Sometime in the 90s, I was at Oracle and got invited to what was probably the most boring meeting I've ever attended in my life:

A bunch of systems admins who were puzzling over how to create a standard way to administer Unix systems. The premise was "in the old IBM mainframe days, every shop ran the exact same way, so it was easy to train a new hire. Now, with Unix, there are a million different customs, so how do we standardize it?"

You probably never even thought this was a problem. My big problem was staying awake. I guess now that it's all Linux, it's easier. Back then, there were HP-UX, IBM AIX, Sequent's whatever-it-was-called, Pyramid, SCO & all the other PC variants...

The group was called Moses, which makes it almost unsearchable since there are so many other uses of that word.


Except good luck keeping up with all the differences among Linux distros, their package managers, location for configuration files outside the classical UNIX ones, with or without systemd, sandboxing model, which desktop is configured as default, sound stack,...


I mean... it's not ALL LEENUCKS. You still have to grok the difference between Darwin and Leenucks. And Minix if you work with PCHes.


It wasn't me who downvoted this. I didn't understand some of the words, but no big deal; I could look them up (but didn't bother).


"I think that if you took a Unix user from the early 1990s and dropped them into a 2022 Unix system via SSH, they wouldn't find much that was majorly different in the experience."

I agree. In fact, since there are fewer actively used Unix variants, the same job would seem simpler in many ways. I think the fragmentation of tech above the OS would be much more intimidating. So many more languages, build+deploy systems, frameworks, things like API gateways, caching platforms, load balancers, and so on. In the 1990s, an admin could maintain a decent amount of personal knowledge about what a developer might be doing on the platform.


In my early 20s I was a Linux sysadmin contractor in the Midwest US and got dropped into a job site where the core team was being offshored. I had to quickly learn how to maintain HPUX, AIX and some Solaris boxes using docs in the form of word docs scattered across network drives. These old admins hated Linux… and I hated the unfamiliarity of the non-Linux systems. How could anyone live with a shell where tab completion wasn’t a feature? Gag!


You couldn't count on emacs being available on some random Unix system. Bad times, those were.


You still can't.. many (most?) mainstream distros don't ship vim or emacs by default.


Emacs, sure, but I've yet to find a distro without at least vim


the bare minimum at one point was just ed(1), the line editor. vi (without the m) wasn't a guarantee. I had an all-purpose cshrc that added /usr/ucb because vi was installed there on some systems, along with other BSD programs


Unfortunately many distros aren't even shipping ed anymore.


What does /usr even mean?


As I understand it, historically, it was the tree for user home directories. The two main designers, `dmr` and `ken`, built tools in their home directories under there and shared stuff from there.

Then they ran out of space and needed to spill over onto another drive, so home directories got moved to `/home`.


Unix Shared Resources, if I recall correctly.



Oh, you, IBM.

I'm more of an

  $ apt remove --purge nano > /dev/null
man myself.


Same here. It was called "vi" and not "vim" as I recall, but it was always there.


You'll find smaller editors like nano or nvi installed by default more often than vim.


Have you used Ubuntu or Fedora? Both default to nano.

IIRC Ubuntu at least comes with vi, but vi != vim.


NixOS


I feel the same way today when I can't tab complete parameter or variable names in typical unix shells.


As someone who used to administer solaris, sunos, hpux, and aix boxes, i can almost guarantee that the first thing they'd say is "What the fuck is up with this systemd shit?" Because i say that all the time.


systemd was necessary. In fact, Solaris did it first in the form of SMF.


> systemd was necessary. In fact, Solaris did it first in the form of SMF.

Apple's Unix variant seems fine without it though ... Of course macOS uses launchd [0] which I guess is somewhat similar. And if I am not mistaken it's released under a permissive license.

Perhaps Linux could benefit from using launchd instead of systemd?

---

[0]: https://en.wikipedia.org/wiki/Launchd


> Of course macOS uses launchd [0] which I guess is somewhat similar.

It isn't. Systemd has dependencies and many directives to express them. Launchd doesn't, and instead demands that every service either waits for its dependencies by itself or crashes deliberately if they are not met - so autorestarting, by sheer brute force, eventually brings the system fully up.


Systemd started as a launchd clone actually.


macOS includes launchd, because it is in fact necessary.


This doesn't make sense to me, twice:

1) the presence of launchd doesn't make systemd necessary

2) the previous absence of anything like systemd on Linux distros means I don't know what you mean by "necessary"


UNIX is fundamentally dedicated to the proposition that there is nothing that you can or could wish to do on a multicore 64 bit ARM system with a massively parallel GPU subsystem with its own memory, solid state disks and a fibernet backbone, that fundamentally differs from a PDP11 with a teletype and some magnetic tape drives attached to it.

It's not entirely clear whether that proposition's actually true.

I think in particular when you look at things like the ability of that operating system abstraction to handle stuff like GPU accelerated computation (CUDA and the like) it's possible that the insistence that the only abstractions we need are processes, pipes, sockets and inodes starts feeling a little limiting.


Unless that UNIX happens to be Irix or NeXTSTEP/macOS, with their own view of the world beyond their UNIX roots.

UNIX/POSIX never went beyond the basics of CLI applications and server daemons.


Only to be deeply confused when ifconfig is missing...


It sounds almost like the OP is bemoaning the stability of Unix-like systems. I like that ls, cd, mkdir, emacs and cp all work more or less the same way. And as much as I sometimes curse at autotools, I appreciate how nice it is I can generally just type `./configure` and something reasonable happens.


> I think that if you took a Unix user from the early 1990s and dropped them into a 2022 Unix system via SSH, they wouldn't find much that was majorly different in the experience. Admittedly, a system administrator would have a different experience; practices and tools have shifted drastically (for the better).

Indeed, I would say that it's true. I have a reference book about Unix & GNU/Linux written in the mid/late 90s, and roughly 80% of it still applies to modern Linux. ps is still ps. Same thing with bg, chmod, etc. What really got outdated in that book was the X11 configuration (thankfully!), references to an editor called "joe", and a few bits about how to install Red Hat Linux 5.1.


The other big one is that More is now Less ;)


Yeah, but less is more, more or less.


Linux's refusal to adopt RichACLs/NFSv4 ACLs forever perplexes me, and maybe some day I'll be pleasantly surprised when they do show up.

They are a complete superset of NTFS's ACLs, thus providing good cross-platform compatibility, and they are already implemented in illumos, Mac OS X, and FreeBSD; Linux is the only holdout on them.


> Linux's refusal to adopt RichACLs/NFSv4 ACLs forever perplexes me, and maybe some day I'll be pleasantly surprised when they do show up.

As someone who has had to deal with NFSv4-style ACLs (on an Isilon server handling Linux HPC clients): they can get really messy, really quickly (especially the inheritance aspect of them).


I've never found any limitations to Linux's ACLs.

I'm not saying your use case is invalid; I'm actually expressing sincere curiosity: what do NFSv4 ACLs bring to the table, and what problems do they solve?


It's a different enough system that I think it's worth restating what Linux's present ACLs are, which were based on a POSIX draft: they are additive permissions that model the Unix mode bits. Per-user and per-group, you can add read, write, and execute permissions that they otherwise would not have.

NFSv4 ACLs are far more powerful, modeled after NTFS ACLs (a proper superset, with an added two permission types). In network environments (where you'd use them anyway), they can assign access control entries (ACE) with Security IDs (SIDs), generally more stable and consistent across a network than ad-hoc Unix user IDs/group IDs. Managing with UIDs/GIDs is still possible, too. They add both allow and deny permissions to ACLs; if you want to deny "fred" from reading a file, you can make a deny entry for "fred" specifically.

The fine-grained permissions you can get from NFSv4 ACLs: READ_DATA, LIST_DIRECTORY, WRITE_DATA, ADD_FILE, APPEND_DATA, ADD_SUBDIRECTORY, READ_NAMED_ATTRS, WRITE_NAMED_ATTRS, EXECUTE, DELETE_CHILD, READ_ATTRIBUTES, WRITE_ATTRIBUTES, DELETE, READ_ACL, WRITE_ACL, WRITE_OWNER, SYNCHRONIZE.

READ_DATA, WRITE_DATA, EXECUTE are equivalent to the read/write/execute Unix mode bits. The two expanded features I find I use most frequently are blocking the deletion of files and making them append-only.


How many people use ACLs?


In the real world only a small minority even use NFSv4 and that mostly for the figleaf of encryption via (yuck!) Kerberos. As a (too) long time storage industry person, I have hard numbers on this.


AWS's EFS is mounted with NFSv4 on Linux instances. I had to fight against race conditions between clients recently (per-client directory caches), but that's a feature of NFS, not specifically v4.


But quite a lot of people in the real world use NTFS.


It's primarily a corporate-level thing, but just as Windows has a strong set of ACLs that people at home ignore, it could be the same scenario on Linux.


> How many people use ACLs?

Everyone who uses systemd. Try it yourself: do a getfacl on the files inside /var/log/journal on a system with persistent logging enabled (if it's disabled, these files will be at /run/log/journal instead).


I have used them to set up a group-readable/writable directory to ensure that all subdirectories remain so.


This is a Yogi Berra type comment. "ACLs aren't supported. Nobody uses them."


Not really. Linux supports "old" posix ACLs and chungy is complaining that it doesn't support "new" rich ACLs. My point is why bother upgrading a feature that no one uses.


I think "no one" is a fairly tall claim: They are well-used on all the operating systems that do support them.

Mind also, "old POSIX ACLs" came from a POSIX draft: they never made it into POSIX. While being an extremely simple expansion of the Unix modes, they are only ever additive and do not support fine-grained permissions that NFSv4 allows for. They're sometimes better than the standard mode bits, but they very often come up short of being useful in the real world.


> (there has been no large move to adopt ACLs or file attributes, for example, although file capabilities have snuck into common use on Linux systems)

This is largely true for any server you’re likely to ssh into. But the most common Linux distribution is Android, which makes extensive use of SELinux features.


Once upon a time, it was common to have time-shared multiuser Unix systems – a single server and everyone in the department could login and use it simultaneously. Filesystem permissions – first the classic mode bits, then ACLs – were originally designed for that use case.

While such systems still exist, they are a much smaller percentage of all installed systems than they used to be. Most contemporary Unix(-like) systems fall into one of two categories:

(a) single-user machines (such as a laptop)

(b) application servers, database servers, etc, which serve many users, but those users are defined at the application layer, the OS and filesystem don't know anything about them

Maybe part of the reason why filesystem ACL adoption is weaker than people expected, is that (a) and (b) don't have the same need for filesystem ACLs as the classic multi-user timesharing environments do.

> But the most common Linux distributions are Android which make extensive use of SELinux features

SELinux contexts and filesystem ACLs are two separate things. I see many machines with heaps of the former and little of the latter (such as most Red Hat boxes).

Even with single-user machines, there is still some value in inter-process security – application sandboxing, corporate-managed devices, etc – but there are many technologies available now to meet those requirements, and filesystem ACLs are often not the most useful among them.


We still do something like time-sharing these days, but often it's on the level of VMs instead of OS processes.


It has been renamed to the "cloud".


Android isn't really a good example of a Linux system, as its use of the Linux kernel is more of an implementation detail, but yeah, in what concerns the Linux kernel, Android is one of the most locked down by default.


Hardware has changed dramatically; software is stagnant. - Rob Pike (2000)

... added to https://github.com/globalcitizen/taoup


Many important things happened to Unix in 1990s and 2000s. dpkg and apt come to mind for 1990s, and udev and systemd for 2000s.


I was about to name some innovations from the Linux and *BSD worlds, but those are not technically Unix. (*BSD is Unix-derived but cannot use the trademark.) So I think your examples are not technically Unix technical history.

But if we could include these things, I'd say also:

* Epoll and kqueue. The recognition that select(2) and later poll(2) do not scale. (A minimal epoll sketch follows this list.)

* Interactivity. One thing I remember about 90s Unix is how terrible the usability was on those old terminal emulators or termcaps. You'd type a backspace and see ^H. Arrow keys that didn't do the right thing. You'd log into a Linux box and by contrast you'd get good tab completion, colors, etc.

* Secure defaults. First noticed this in OpenBSD. But the internet forced everybody to reconsider the set of daemons you get on a default install.

* Deprecation of unsafe libc functions
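
As promised above, a minimal Linux epoll sketch watching a single descriptor; kqueue on the BSDs is the same idea with a different API:

  #include <stdio.h>
  #include <sys/epoll.h>
  #include <unistd.h>

  int main(void) {
      int ep = epoll_create1(0);
      if (ep < 0) { perror("epoll_create1"); return 1; }

      struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
      if (epoll_ctl(ep, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
          perror("epoll_ctl"); return 1;
      }

      /* Unlike select(2), the kernel keeps the interest list, so the
         cost per wakeup doesn't grow with the number of watched fds. */
      struct epoll_event ready[8];
      int n = epoll_wait(ep, ready, 8, 5000);   /* wait up to 5 seconds */
      if (n > 0)
          printf("fd %d is readable\n", ready[0].data.fd);
      else if (n == 0)
          printf("timed out\n");

      close(ep);
      return 0;
  }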


epoll / kqueue is a good thought. That also makes me think of scatter/gather i/o (readv/writev), though I can't remember if we ever really saw big gains from it like we do from epoll.

One other improvement worth calling out is that /dev/random (or substitute your RNG method of choice) is a thousandfold better now than it was in the 90s, though Linux has a big wart with entropy pool exhaustion. BSD led the way there, from what I understand, having an RNG that Just Works. This isn't just an invisible implementation detail; it means application developers can request and use random data without a bunch of "just-in-case" bit shuffling like discarding lower bits.
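
A small illustration of that last point, assuming a reasonably recent Linux with getrandom(2) (the BSDs have arc4random for the same job): ask the kernel for bytes and use them directly, no post-processing needed.

  #include <stdio.h>
  #include <sys/random.h>

  int main(void) {
      unsigned char buf[16];
      /* Blocks only until the kernel RNG is seeded at boot; after that
         it doesn't "run out" of entropy with the default flags. */
      ssize_t n = getrandom(buf, sizeof buf, 0);
      if (n != (ssize_t)sizeof buf) { perror("getrandom"); return 1; }

      for (int i = 0; i < (int)n; i++)
          printf("%02x", buf[i]);
      printf("\n");
      return 0;
  }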


"Linux is not Unix" is useless pedantry, but sure, Solaris SMF did it earlier than systemd, both dependency based boot and parallel boot.


I think shell interactivity improved because everyone just decided xterm was the way to go, which itself was based on VT-102 with some later VT 200/300/400 bits and new inventions added on. Today it's so common (the default on Linux and macOS terminal emulators) you'd be a bit bananas to use anything else.

Sometimes something is so painful everyone unconsciously decides choice is bad and we're all going to use $X.


Commercial UNIXes already had package management before GNU/Linux came up with dpkg, rpm and apt.


> I think that if you took a Unix user from the early 1990s and dropped them into a 2022 Unix system via SSH, they wouldn't find much that was majorly different in the experience.

That is probably true, as many of those old tools kept compatibility, but at the same time we got a whole bunch of new tools, more powerful than `ps`, for gaining more insight. For example, look at all the changes in service management, eBPF and such. Not to mention that the shell prompt itself (see zsh etc.) is more powerful than back then, which could give a very different experience.

So yeah, old tools keep mostly working, but improvement goes on.


“I think that if you took a Unix user from the early 1990s and dropped them into a 2022 Unix system via SSH, they wouldn't find much that was majorly different in the experience.”

Well they might freak out about what you did with telnet and how you logged in without a password.


https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

levenez.com has been posted multiple times before in this context. There is a /unix and a /lang page under it.




