The Unix-Haters Handbook (1994) [pdf] (simson.net)
81 points by udev4096 7 months ago | 87 comments



When I was much younger I used to find this funny and entertaining, but nowadays I just find it boring. It always seems fashionable to hate on things, and I'm just tired of the depressing, defeatist attitudes that support that fashion.

I do wonder, though, if this were (re)written today, how much of it would be the same, how much of it would be outdated, and how much new stuff the haters would come up with.

The chapter on file systems is mostly no longer relevant (some file systems do still suck, but the defaults are pretty solid now). NFS still sucks (IMO it sucks more than it used to), but far fewer people need to use it nowadays. C and C++ are still unfortunately prevalent, but there are at least quite a few systems programming alternatives, and they're gaining ground. Sendmail isn't the only game in town anymore, and I expect most outfits use something else these days, and USENET is a distant memory for most people, so there go another two chapters.

But then the terminal/TTY situation has barely changed, and how all that works is just as (if not more) divorced from the reality of daily usage. Security has improved, but most people still have a god-mode root account, and most of the security improvements have come out of necessity; the world of networked computing is much more "dangerous" today than it was in the early 90s. Documentation is still often poor, and many systems still seem designed more for programmers than less-technical users.

I wonder what they'd think of systemd and Wayland!


I think some of its criticisms (see e.g. pages numbered 20-21, PDF pages 60-61) remain valid:

(1) Unlike OpenVMS/TOPS-20/LispMachines/etc, mainstream POSIX file systems lack built-in versioning (there are a small number of POSIX systems which implemented it, e.g. SCO OpenServer's HTFS, but for whatever reason more mainstream systems like Linux or *BSD never did)

(2) Directly related to (1), the fact that the unlink() system call is normally irreversible, and the rm command is (normally) a direct interface to unlink()

(3) The simplicity of the interface between the shell and the commands it runs (just a list of strings, only slightly better than the DOS/Windows approach of a single string), and the potential issues induced by the fact that wildcards are implemented by the shell not the command itself.
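
A concrete illustration of the wildcard half of (3), as a hypothetical session (any file whose name looks like an option will do):

    $ touch -- -rf precious.txt
    $ rm *              # the shell expands * to: -rf precious.txt
    $ ls
    -rf                 # rm parsed "-rf" as options; precious.txt is gone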

Regarding (3), OpenVMS DCL, IBM OS/400 and Microsoft PowerShell are all examples of systems where option parsing/etc happens in the shell, and the command is passed a parsed command line structure instead. However, although they are better in this regard, they have other disadvantages (the first two are super-proprietary; PowerShell is open source, but weighed down by the heaviness of .Net)

I think a lot of historical issues with Unix are due to the fact that shared libraries didn't exist until much later, so seemingly obvious things like "put wildcard parsing in a shared library like getopt() is" weren't possible at the start. Also, Unix has never had any kind of standard structured data format (JSON would work, but it didn't exist for the first 30 years of Unix's existence), which is a problem for ideas like passing command arguments as structured data.
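
Option parsing, at least, eventually did get the shared-facility treatment: getopt(3) in C, and the getopts builtin in the shell. A minimal sketch of the latter (hypothetical script, names made up):

    #!/bin/sh
    # getopts gives every script the same option grammar -- the shell
    # analogue of linking everything against getopt(3)
    verbose=0
    while getopts vo: opt; do
        case $opt in
            v) verbose=1 ;;
            o) out=$OPTARG ;;
            *) echo "usage: $0 [-v] [-o file]" >&2; exit 2 ;;
        esac
    done
    shift $((OPTIND - 1))
    [ "$verbose" -eq 1 ] && echo "output: ${out:-stdout}"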

> But then the terminal/TTY situation has barely changed,

POSIX TTYs are a horrific pile of kludges, especially when one considers stuff like ECMA-48 (which isn't technically part of the POSIX TTY stack, but de facto is). Someone should just redo it as something more sensible, like exchanging null-terminated JSON packets. But getting everyone to agree on that is probably too hard.
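
To make that concrete, compare today's in-band ECMA-48 escapes with what such a framing might look like (the JSON "ops" here are entirely made up):

    # today: cursor movement and colour via in-band escape sequences
    printf '\033[2;5H\033[31mhello\033[0m'
    # hypothetical: the same thing as null-terminated JSON packets
    printf '{"op":"move","row":2,"col":5}\0{"op":"sgr","fg":"red"}\0{"op":"text","s":"hello"}\0'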


From the point of view of an end user: I remember versioning on the VAX, using EDT, when I spent hours on end entering data from our local emergency department into quite large text files. Over the years, every now and then, I have looked for this functionality on Windows.

Dropbox and OneDrive do this nowadays, in that I can just right-click a file, see the different versions, and revert to a previous one. I write long documents, and this functionality has saved the day a few times.


> Unlike OpenVMS/TOPS-20/LispMachines/etc, mainstream POSIX file systems lack built-in versioning

We have git, which is strictly better.

> Directly related to (1), the fact that the unlink() system call is normally irreversible, and the rm command is (normally) a direct interface to unlink()

You'd be complaining about how a really_really_unlink_now_i_mean_it() syscall was irreversible, too.

> and the potential issues induced by the fact that wildcards are implemented by the shell not the command itself.

I like not having to rely on arbitrary commands implementing (or not) wildcards.

(Plus, even in the best of worlds, all commands would implement wildcards by linking in the same library, which brings us back to square one.)


Firstly, largely agree.

> You'd be complaining about how a really_really_unlink_now_i_mean_it() syscall was irreversible, too.

Or the converse: what file (or worse, hundreds/thousands of files) is using up all my disk, what's the sane way to delete all the old versions, and which files are important enough to keep all the old versions of?

> I like not having to rely on arbitrary commands implementing (or not) wildcards.

Indeed, and being able to use meta-tools like xargs. "The unreasonable effectiveness of plain text".
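
E.g. the same expansion and plumbing drives any command, with no cooperation from rm required:

    # find/xargs do the matching and batching; rm just gets file arguments
    find . -name '*.core' -print0 | xargs -0 rm --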


> We have git

No, and you know it. ITS's versioning (or VMS's/TOPS-20's too, I think) is tons better. Even 9front has it.


OpenDCL for Unix:

https://github.com/johnsonjh/PC-DCL

patch: https://0x0.st/XoDG.patch

         git clone https://github.com/johnsonjh/PC-DCL
         cd PC-DCL
         wget -O corr.patch https://0x0.st/XoDG.patch   # the patch linked above
         git apply corr.patch
         make NDEBUG=1                                  # release build
Enjoy.


> Also, Unix has never had any kind of standard structured data format (JSON would work, but it didn't exist for the first 30 years of Unix's existence), which is a problem for ideas like passing command arguments as structured data.

JSON is a serialisation of structured data, not structured data itself.

Its data model is not great, either: maps have no inherent canonical serialisation (one has to assert things such as ‘keys are sorted in Unicode lexicographical order’), and there is no way to shadow a map value.

A list-based s-expression format would be preferable, as it immediately lends itself to canonicalisation and associative lists support shadowing (e.g. ((a 123) (b 456) (a 789))).


I think OpenVMS's DCL interface has a libre implementation. I've seen things like mpsh and the ITS debugger/shell (DDT) ported to Unix...


> I think OpenVMS's DCL interface has a libre implementation.

I know about https://github.com/rroart/freevms/tree/master/dcl/src but my impression was that FreeVMS is too incomplete to use in anger.

> I've seen things like mpsh

You mean https://cca.org/mpsh/ ? I am wondering how close it is to DCL, in terms of the specific aspect of it I’m talking about here

> and the ITS debugger/shell (DDT) ported to Unix...

Do you have a link?

One aspect of ITS I find fascinating is that a debugged program can execute debugger commands. I actually implemented that a while back using a GDB Python script: have a do-nothing function that is passed a memory buffer containing a GDB command to run; the Python script sets a breakpoint on that function, and when the breakpoint is hit, it runs the GDB command (it can even write the results back into the buffer) and then continues execution.


Just reading about DDT reminds me of the "WizBiz"[0] series of fantasy novels; it's funny how just the mention of something from so long ago can evoke memories.

[0] https://tvtropes.org/pmwiki/pmwiki.php/Literature/WizBiz


DDT in the '60s was a bug-killing spray, which was eventually banned because it was toxic.



For (1) there's stuff like BTRFS snapshots, or you could use Git on top of the FS


> For (1) there's stuff like BTRFS snapshots,

From what I understand, BTRFS snapshots are time-based, whereas in classic versioned file systems a "snapshot" is triggered by open/close operations – each time you open a file for writing, that creates a new snapshot.
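
E.g. (a sketch; paths are made up):

    # a btrfs snapshot is an explicit, whole-subvolume operation,
    # usually driven by a timer rather than by file opens/closes
    btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)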

> or you could use Git on top of the FS

Git requires your application to have explicit support for working with Git repos, it isn't seamless. And I know some people have built FUSE-on-Git implementations (there are a few around), but (I believe) they've all got various limitations and I don't believe (I could be wrong) that any of them is quite the same experience as a true versioned filesystem


Since mmap I/O is popular, together with long-lived processes like databases, snapshotting on open/close might not help you much.

With these caveats, it should be easy to do using FUSE; I wonder why no one has actually implemented it.


> Since mmap I/O is popular, together with long lived processes like databases, snapshot on open/close might not help you much.

Classic versioned file systems were generally not designed for the use case of databases, they were designed for the use case of lots of small text files (config files, source code, shell scripts, etc). For something like a database file you would turn versioning off


> (1) Unlike OpenVMS/TOPS-20/LispMachines/etc, mainstream POSIX file systems lack built-in versioning (there are a small number of POSIX systems which implemented it, e.g. SCO OpenServer's HTFS, but for whatever reason more mainstream systems like Linux or *BSD never did)

Linux does have that in NILFS2, it's just that almost nobody cares to use it: https://www.kernel.org/doc/html/latest/filesystems/nilfs2.ht... / https://man.archlinux.org/man/nilfs.8.en
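
For reference, the basic workflow looks roughly like this (device name made up; commands from nilfs-utils):

    mkfs -t nilfs2 /dev/sdb1
    mount -t nilfs2 /dev/sdb1 /mnt
    lscp                  # list the checkpoints created as the FS changes
    chcp ss 2             # promote checkpoint 2 to a persistent snapshot
    mount -t nilfs2 -o ro,cp=2 /dev/sdb1 /snap   # browse the snapshot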


> Linux does have that in NILFS2, it's just that almost nobody cares to use it

I don’t believe that NILFS2 is quite the same thing.

In HTFS, you turn versioning on or off on a per-directory basis, and then each individual file in that directory is versioned independently.

NILFS2 looks more like whole filesystem snapshotting, which is further away from the classic versioned filesystem approach


On Unix and the rest of the comment: plan9/9front superseded it well:

- 9p+encryption on top instead of NFS, much better.

- C under plan9 is far better and easier than POSIX C. Also, we have Golang under Unix, which almost brought that better-C philosophy to Unix as a better systems language.

- Usenet/IRC still work, and you'll find far more trolls on the web.

- The terminal makes things better in most cases than freezing UIs or Emacs; see my other post. But 9front doesn't use terminals; it's graphical from the start, and composable.

- On security, plan9/9front uses namespaces and factotum, plus decoupled servers/devices for hardware: a much better design.

- On documentation, the rest of the OSes have it far worse. But it was almost the same with ITS and Macsyma/Maclisp, where you had a reference book but no starter guides to ease learning the language. GNU Texinfo gave us at least an Elisp intro, and Maxima is far better documented, with on-line guides and examples.

- systemd is a disaster, and Wayland destroys any scriptability/hacks with wmctrl/xdotool/custom WMs/DEs, or something as simple as remapping keys on broken keyboards (I use the "<>" key as "\ |", as my physical one is broken and I already have < and > near 'm'), and it works.


> 9p+encryption on top instead of NFS, much better

NFS with IPsec for authentication and privacy seems similar in principle, with the added benefit that it's widely available.


> NFS with IPsec for authentication and privacy seems similar in principle,

But it's a total hack, duct-taped together.

> with the added benefit that it's widely available.

On plan9/9front, 9p over TLS works out of the box; IPsec isn't built in.



Went through the same progression with Dilbert comics. Colleagues with a positive general attitude are priceless.


> The chapter on file systems is mostly no longer relevant (some file systems do still suck, but the defaults are pretty solid now)

Really? AFAIK filesystems still do `rm` immediately and with no recourse. GUIs on top, like Finder and Explorer, do something saner, but that doesn't save us terminal users.

POSIX shell expansion is just as crazy as it has ever been too.

Those are the two gigantic footguns I can recall from having read this 20 years ago.


I know this is a discussion of Unix in general, but on your own Mac, you can get `trash` packages for the terminal [0] [1].

I use the former, I haven’t tried the latter. But afaict, they should be pretty much identical – they both supply `trash <path>` which could, for most intents and purposes, probably be aliased as `rm`.

One thing to note is that none of these tools seem to support the “Put Back” feature of Finder. Trashed files don’t remember their original locations. But I’ll personally still choose that over being nervous before every `rm`.

[0] – https://formulae.brew.sh/formula/trash [1] – https://formulae.brew.sh/formula/macos-trash
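
i.e., roughly (untested; assumes the Homebrew formula in [0]):

    brew install trash
    alias rm='trash'      # rm now moves files to the Trash instead of unlinking
    trash ~/old-notes.txt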


Yeah, sure... but again, a big recurring theme of The Unix-Haters Handbook is "broken by default". That things can be fixed isn't the point, if the fix is local, ephemeral, easily forgotten, and the fix not being implemented is blamed on the user because "you can fix it".

Broken by default is itself exactly the problem.


I can't get over the idea that this is a bunch of people who were angry their favorite proprietary systems got killed by open standards and, ultimately, open source. Compatibility and not being dependent on a single company are good things!


When the book was written, most of the UNIX world was proprietary too; GNU and BSD existed but were marginal.


X was a partial example. It did kill NeWS, thanks to X being open and Sun not being willing to open NeWS up.


> NFS still sucks (IMO it sucks more than it used to)

Any chance you could elaborate?


Several of the criticisms the book lists are still true today. File locking is unreliable, deletions are weird, security is either garbage (in that you set it up in a way where there's very little security) or trash (in that you have to set up Kerberos infrastructure to make it work, and no one wants to have to do that).

Perhaps I was a bit hyperbolic about it sucking more nowadays. At least you can use TCP with it and not UDP, and you can configure it so you can actually interrupt file operations when the server unexpectedly goes away and doesn't come back, instead of having to reboot your machine to clear things out. But most of what the book says is still the NFS status quo today, 30 years later.
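
E.g. something along these lines (options per nfs(5); server/export names made up):

    # TCP transport; "soft" lets stuck operations time out and fail
    # instead of hanging forever ("hard", the default, retries forever)
    mount -t nfs -o proto=tcp,soft,timeo=100,retrans=3 server:/export /mnt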


Everything you listed was fixed in NFSv4. Don't use the ancient versions of NFS.


We're not there with authentication yet (although I've no problem with Kerberos myself).


How are we not there? The only real issue I know is allegedly requiring host keys for gssd (e.g. "joining the domain"), but rpc.gssd(8) documents "anyname" principals.


The only per-user authentication option is Kerberos. Username/password based authentication is not possible.


That seems like a feature; mounting SMB on a local system is done on the basis of a password, and it's horrible. (I assume you could, in principle, use some other GSSAPI mechanism.)


There has been recent work on RPC-with-TLS (RFC 9289), xprtsec=mtls.
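
On a new enough kernel and nfs-utils, that's roughly (untested; both ends also need TLS key material configured, e.g. via tlshd):

    mount -t nfs -o vers=4.2,xprtsec=mtls server:/export /mnt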


AIUI this is still not user level authentication. It rather secures the communication between hosts, but you still have to choose between sec=sys ("trust me bro") or sec=krb5* at the upper layer.


Easiest way nowadays to get secure NFS is to just set up a WireGuard tunnel.


No because you still have to trust the client.

With Kerberos a hacked client where user 1 has authenticated can't impersonate user 2 unless that user has also authenticated on the client.

With sec=sys the client is simply trusted without any per-user authentication.


In most cases you can just use more fine-grained exports, e.g. export /home/user1 to 10.0.0.1 and /home/user2 to 10.0.0.2 instead of /home to 10.0.0.0/24, etc.
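
i.e., something like this in /etc/exports (addresses made up):

    # per-host exports instead of one subnet-wide trust domain
    /home/user1 10.0.0.1(rw,sync,no_subtree_check)
    /home/user2 10.0.0.2(rw,sync,no_subtree_check)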


> C and C++ are still unfortunately prevalent

Come now, they look nothing like they did 30 years ago.


Don't confuse C and C++ because C99 covers a lot and it's 25 years old.


That's how long I've been using C++. I would appreciate more charitable interpretations.


C and C++ are different enough.


Yes I know, I'm glad you're noticing the distinction. Just take them as a whole.


(1994) First submitted 11 years ago[0]; discussions in 2014[1] (128 points, 50 comments), 2017[2] (382 points, 308 comments), 2019[3] (284 points, 158 comments), 2022[4] (189 points, 86 comments), 4 months ago (141 points, 139 comments) and 28 days ago (52 points, 45 comments).

[0]: https://news.ycombinator.com/item?id=5125613

[1]: https://news.ycombinator.com/item?id=7726115

[2]: https://news.ycombinator.com/item?id=13781815

[3]: https://news.ycombinator.com/item?id=19416485

[4]: https://news.ycombinator.com/item?id=31417690


Nobody on HN submitted the Unix Haters Handbook until 2013? I find that hard to believe.



I don't mean to send others on a scavenger hunt, but I'd expect it was posted within the first few months of HN (though I don't have time to find it).


Quite often I have had to help someone who prefers Windows to Linux to use their Windows computer properly. I have never seen someone who prefers Windows to Linux help someone use their Linux computer properly. Anecdotal, sure, but things be how they be.


It's to be expected as Linux is more niche on the desktop and so Linux people (like me) tend to be either enthusiasts or have some expertise with different systems. A similar example would be that you can see more pilots helping people fix their cars than you see a driver helping to fix a plane (despite there being more planes stuck on the ground than there are cars stuck in the sky).


To be clear, I'm talking about software engineers, which is where the analogy breaks down, because it would be closer to Toyota engineers helping Ford engineers. I don't expect an accountant to know as much as a software engineer, but if a software engineer tells me they prefer Windows to Linux, yet they can't use Windows, then I suspect their problem is not caused by any aspect of their operating system.


I prefer to use Windows as my daily driver. I also help people (friends, family, co-workers, etc.) with Linux (desktop or server) all the time. I have almost 30 years of experience with Linux (using it since 1995).


I saved a requirement from an old MIT AI Lab job posting on Usenet, perhaps from the early '90s: "Applicants must also have extensive knowledge of C and UNIX, although they should also have sufficiently good programming taste to not consider this an achievement."


Dupe:

The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=38464715 - Nov 2023 (139 comments)


Is there a similar article that explains the origins of Linux and its design choices?


I can't get over the idea that this is a bunch of people who were angry their favorite proprietary systems got killed by open systems and, ultimately, open source. Compatibility and not being dependent on a single company are good things!


It was printed in 1994. Linux and BSD (either FreeBSD or 386BSD, I can't recall which) where around, but Open Source (TM) as we now know it wasn't. Other, proprietary Unix systems were still viable and in use (AIX, IRIX, SunOS/Solaris, HP-UX and SCO, just to name a few). Running Unix was not cheap [1]. Most X servers were proprietary and cost money (I had some friends who sold X servers for a variety of OSes and video cards at the time). The criticisms at the time were, in my opinion, decent enough, but not enough to stop the duopoly we have now with POSIX and Windows.

[1] In college, early 90s---I got to use IRIX on a machine that cost $30,000 in 1991. PCs caught up and passed it pretty much by the mid-to-late 90s. Also, I did some consulting work at a bank which used SCO. They paid God knows how much for every conceivable package available for SCO. The base system? Pretty much just a shell and some commands in `/bin`. Compiler? Pay. Network? Pay. TCP/IP for said network? Pay.


> The base system? Pretty much just a shell and some commands in `/bin`. Compiler? Pay. Network? Pay. TCP/IP for said network? Pay.

Vladimir Barmin wrote a hilarious account of getting SCO to work at all (http://lib.ru/UNIXOID/scomastdie.txt, in Russian).


No English translation? I refuse to run proprietary JS...

Edit: links -dump works great, piped to translate-shell (trans -b -no-auto) and less -SR ...

     links -dump http://lib.ru/UNIXOID/scomastdie.txt | trans -b -no-auto -t en 2> err > crap2.txt


>> proprietary systems got killed by open systems and, ultimately, open source.

> It was printed in 1994. Linux and BSD (either FreeBSD or 386BSD, I can't recall which) where around, but Open Source (TM) as we now know it wasn't.

I think the "open systems" phrase is more important than the "open source" phrase in the GP's comment.

In 1994, Unix systems, even though proprietary, were still more open than the competing systems. Sure, it would be another 5 years or so before the writing was on the wall for all non-FLOSS systems, but in 1994 it was both a) easier and cheaper to get your hands on a Unix system, and b) easier and cheaper to program it.

Sure, they weren't open-source, but they were a hell of a lot more open to hacking than the (for example) Lisp systems, or VMS, etc.


Ah, that's why Windows and OSX don't exist anymore.


> Ah, that's why Windows and OSX don't exist anymore.

Aren't they even more open than the previous Unix systems used to be?


You could use GNU alternatives for that; most people installed GNU coreutils/*utils, Emacs and GCC, and I think IRIX supported networking and TCP/IP in the base system without add-ons. But, yes, the rest of the Unixen were like that, if not worse.


That’s why we have GNU, pretty much. There was a big lawsuit over twenty years ago about this: SCO were suing anyone who made significant contributions to OSS Unix (e.g. IBM), and I seem to remember it just kind of petering out. But yeah, Santa Cruz Operation, one of the bigger names back in the day. Not sure why nobody really talks about this any more; it seems like a hugely decisive period in OSS history.


But GNU is not Unix.


"were around". Not "where around".

Sorry to nitpick but I've seen this grammatical error in comments here three times just this morning and it begins to grate.


> I can't get over the idea that this is a bunch of people who were angry their favorite proprietary systems got killed by open systems and, ultimately, open source.

Could the reason that idea is so hard to get over be because it's just wrong?

I don't think these people "hated" UNIX because it was "open," I think they hated it because they thought it was bad.


They didn't hate Unix because it was open, but their hatred of a more open system does make them look a bit ridiculous: Any system tied to a single company is doomed anyway, either to the company dying or the company discontinuing it and/or turning it into something you can't stomach. (Microsoft lives, but how happy are MS-DOS partisans these days?) It kinda taints their technical points with a whiff of fanboyism, a naïve partisanship and attachment to entities that didn't give a shit about them.


LOL. Man, most Unixen in the 90's where as corporate as Microsoft, Linux was still a Usenet hobbyist project and 386BSD was sunk into lawsuits.

Proper hacker projects had died a few years earlier with the end of the PDP-10/ITS/Emacs era, and the GNU project was still working on a replacement for that platform, but ended up cloning Unix (with Emacs/Info and so on on top) for convenience in 'modern' times.


Open as in open standards, not open source.


"were as"


> I can't get over the idea that this is a bunch of people who were angry their favorite proprietary systems got killed by open systems and, ultimately, open source.

I think that they were angry that the inferior product was winning/had won. The "worse is better" essay didn't come out until a decade later, IIRC.

To be clear, the Unix systems were worse in all the ways they point out, but better in the way that mattered - multiple companies provided Unix, and the skills from one Unix to another were easily transferable.

The "winning" system won not because it was superior, but because it was easier to hire for. Need an operator for your SCO Unix? Someone experienced with Sun boxes could onboard faster than someone experienced with VMS.


It's not that simple, really. X being worked on so much, and being partially open, was because DEC and co. were in a panic that Sun would turn NeWS into a second NFS.

NeWS also could and did run on some other Unixes.

So really, a lot of the history has to do with complex pissing contests between different vendors that eventually escalated into the full-blown Unix wars.


They weren't angry their systems got killed by open systems. They were angry that the replacements sucked.


They were better in some ways and worse in others. ITS had PCLSRing and no concept of subdirectories, not to mention SIXLTR NAMING; Lisp machines were great as long as they didn't have to be fast or run continuously without GC; VMS was freaking VMS, are you serious? The snide derision doesn't help, especially coming from people who were being snide on behalf of for-profit companies that never gave a shit about them.


And what, pray tell, was wrong with VMS? In my experience it works quite well.


In my mental image, most Unices used in industry and research in 1994 were not open.

What am I missing?


Your recollection matches mine. I think I might have had Yggdrasil installed on my home PC for doing after hours support on Solaris machines, but back then if I could have afforded a sparc workstation I would have bought one in a heartbeat.

Company owned Unices still dominated the landscape in 1994


Nothing, the book is mostly about proprietary Unices


Open standards, not open source.


It's not about objectivity, it's about teams/tribes: if another team is winning, then you're losing. Simple as that; you can see it in a lot of places: MMOs, console wars, phones, sports, politics, ...


Well:

- ksh is far better than sh, and I think Perl met the need for a "medium" system scripting language.

- Usenet is still fun and a far better source than the web/Stack Overflow for some programming languages. slrnpull is a godsend.

- Current X is far better, but XPra/x2Go should have been part of X.org, with far better features over the network.

- GNU Emacs is the MIT/ITS/PDP-10 alternative to Unix's "worse is better"/KISS, but it's slower and error-prone, and M-x customize mangled my .emacs file by itself - it deleted my (use-package) forms. If Emacs' customize set the variables inside the (use-package) forms, it would be a great start.

We need an "Emacs Haters Handbook" (I like Emacs itself as a concept, but it needs polishing):

(defun rant-start ()

" - GNUS it's dog slow on -current day- sized mail boxes (> 100MB) and it will last an hour on big mail lists/Usenet spools. Mbsync/slrnpull helps but as I said anywhere else Maildir support in GNU's it's broken and it will only show some directories, if any. Even if 'new', 'cur' and 'tmp' are already there.

- Rmail should have supported Maildir long ago. No, movemail is not an option, and current mailboxes will choke on the Unix mbox format.

- Unfocusing the minibuffer prompt shouldn't cancel it. For Mastodon password prompts (or any other), switching to a pane/window in EXWM (almost mandatory) will force you to repeat the login process in mastodon.el from the start. That's atrocious.

- Displaying (relatively) big images is slow, dog slow. IDK how pdf-tools does it (it works really fast and well), but doc-view is a disaster, and reading big CBZ files makes Emacs crawl. Inb4 "Emacs is an editor, focus on the text": Emacs ships Calc, which has plotting support via Gnuplot, and OFC it needs a proper image-displaying method.

- Eww should support minimal CSS rules, to parse at least simple pages such as HN.

- Stop blocking on I/O, period.

- The UI, even the Lucid/Athena build, is not smooth at all, even with some tweaks, on "legacy" top-tier 32-bit machines such as N270 netbooks. I'm missing something, for sure.

- Emacs notifications shouldn't be bound to D-Bus; the notification system should allow using messages and a beep/sound file to alert the user, or a custom script or Elisp code.

" )


Apart from your minibuffer issue, everything else is the fault of third-party packages. Even I/O: as I'm told, you can do async I/O in Elisp, but practically no package does it.

So none of those issues are limitations of core.


> Eww should support minimal CSS rules to parse at least simple pages as HN.

HN does not need CSS to display more or less correctly, just tables. I hear there is a w3m Emacs package; maybe try that?



