I didn’t learn Unix by reading all the manpages (2022) (owlfolio.org)
149 points by goranmoomin on May 24, 2023 | hide | past | favorite | 163 comments



Manpages aren't all the same quality. Look at the manpage for find(1) https://linux.die.net/man/1/find

An enormous page full of switches and options, yet if you're a beginner it's basically useless. You're just looking for a simple example of how to search for a file. find . -i -name <regex> is NOT mentioned anywhere, you have to figure it out yourself. And if you mess it up, your terminal gets flooded.

Contrast with grep(1) https://linux.die.net/man/1/grep which is short, simple, and easily searchable with keywords

Manpages are very useful, having offline documentation is great. But it is more of a specification reference than a manual.


"An enormous page full of switches and options, yet if you're a beginner it's basically useless."

Maybe 30 years ago you could learn through man pages, but today that's how the man pages of all the useful commands look. If you think grep is nice, it's the exception; for me it is indeed short, simple, and easily searchable, but too terse for a beginner to learn from easily. (Plus even the regex documentation only documents grep mode, not egrep or pgrep or anything else.) Very useful in its own way, but most man pages are useless for learning.

I am disappointed at how few man pages take the opportunity to offer a useful selection of "the 10 most popular things to do with this command", but web search has filled that in reasonably decently. You do still have to understand the basics of how the shell works, though, because most of the answers aren't quite what you want today, and honestly a fair number of them aren't even what the poster thought they were. I am also in the habit of making sure that I understand every command-line switch before I copy & paste a command off the internet, again, because many of them aren't what you need today. I don't always mean malicious, either: things like whether or not to follow HTTP redirects in curl may or may not matter to you today, but the poster may be in the habit of setting it one way or the other for no reason relevant to you.


the openbsd manuals usually have examples. you can see how compact https://man.openbsd.org/grep.1 is compared to the gnu grep manual.

i've been using https://git.causal.agency/exman/about/ to compare linux and bsd manpages, and it has saved me an incredible amount of time looking things up.


perhaps gnu's manpages suffer because they focused so much effort on their info pages.


Are info pages still a thing? I remember being very confused by them back when I started out with Linux, and eventually just stopped trying to understand how they worked.


They are, but my impression is that only Emacs users use them to some extent. (Emacs has a built-in info reader that is better than the standalone client, and ships mainly info pages as its documentation.) Outside of Emacs, I’d say very few people know info is even a thing, and I remember some Linux distributions also stripped info pages from packages since everyone used man pages.


More than that: people don't know how to use GNU Info correctly.


In fairness, if one were to come to more, pg, or less with no prior knowledge of the conventions, the madness of some of the character choices would be just as grating.

Ironically, this sort of stuff -- the basic conventions like q/h/j/k/l/0/^/$/G and CR/FF/BS/SPC keyboard navigation, and the :/? prompt modes -- is exactly what one initially learns from guides, tutorials, other people on IRC, and whatnot, long before coming to reference doco. The _Introduction to Linux_ book at TLDP covers the bare essentials of these conventions, q, b, and SPC, in the "quickstart" chapter 2.

* https://tldp.org/LDP/intro-linux/html/sect_02_03.html


Every time I could not find an answer to a question in a man page and consulted info, it was a waste of time. I like info, really, but the remark at the end of most GNU man pages to look at the info pages is misleading at best.

For emacs stuff, yes, it's very convenient. I remember once upon a time, Python docs were available in info as well. When you have good documentation available in info, it's very nice.


I remember offering to make a PR w/ examples for something when I was a junior dev and was politely told that they didn't want examples. I assumed - and still assume - that this was a generational difference in what people expected competence to look like.

While this attitude made sense pre-internet, reading about every switch for common use cases is just not realistic today. Maybe if there were fewer command-line programs, and the complexity of each command were significantly lower, I could see how it would be done, but not today.


Yeah man! Scenarios! Those documentations rarely bother with scenarios, which is all newbies need!


within a manpage that section is called EXAMPLES

there are not enough of them now, they used to be more prevalent


Yeah, some have examples but many don't. I think maybe that's not the purpose of manpages though; that's more suitable for a cookbook.


tealdeer and tldr seem to do "the 10 most useful things you can do with this command"


Man pages are pre-Internet Age help

StackOverflow is Internet Age help

ChatGPT is post-Internet Age help

In ascending order of helpfulness (modulo LLM confabulation)


There is a pre-internet and internet-but-pre-Stack-Overflow bit, called O'Reilly books.

Learning to program, for example Perl: I had the learning book, the reference book, and the cookbook. The learning book I read once; the reference had Post-its on it and lived on my work desk. The cookbook was more the casual-browsing type: you read it on the toilet.

They had books on everything, and they were the industry standard: DNS, Apache, JavaScript, ... They revised them regularly; for JS there were 1st, 2nd, 3rd editions. I know this sounds all ancient, like real paper books. But even work had an O'Reilly subscription, so you could order what you needed.

For me it was O'Reilly books and man pages. But let's not forget that when you installed a Linux like Red Hat (not Fedora yet), you could install the Linux HOWTOs. Oh boy, did I learn a lot from them too: https://tldp.org/HOWTO/HOWTO-INDEX/howtos.html

Finally there was IRC if you were stuck, but this was of course the internet stage. I just went online to ask a question; dialup was expensive ;)


> (modulo LLM confabulation)

As of now, there is no LLM without confabulation, so your third item is effectively "hypothetical fantasy ChatGPT that we hope will be available some time in the future", not the actual tools we have right now.


And in descending order of correctness.

ChatGPT inherited "confidently wrong" and "misunderstanding the question and then answering the question it misunderstood" from Stack Overflow.


I wonder if chatgpt will start "closing" questions as "not a good fit for its Q&A format," or "too opinionated," or "a duplicate" of something that is actually unrelated.

Actually, this seems to be a case where chatgpt is smarter than the humans.


ChatGPT will not be able to hallucinate options to find and grep and ls because they all use every single letter of the alphabet.


A recent example: I used ChatGPT 4 to draft an mkvmerge command - take 2 video files, and merge them by only copying certain audio and subtitle tracks from the second file into the first file.

The resulting command looked good at first sight, something like `mkvmerge -o output.mkv first.mkv --no-video -s 1 -a 2 -a 3 -a 4`. The problem here is that there can only be one -a flag, so it should have been `-a 2,3,4` instead. But mkvmerge didn't really care and just discarded every -a flag except the last one. So I ended up with only one of the audio tracks copied over. I only noticed when I actually checked the resulting file that it had fewer audio tracks than it was supposed to.

This would not have happened to a human after studying the man page - the documentation is very clear about the -a flag and I have no idea what led ChatGPT to come to the conclusion it did.


The lesson here is not to anthropomorphize ChatGPT. It didn't "conclude" anything. Based upon a corpus that includes tonnes of humans writing rubbish on the WWW, it came up with plausibly human-appearing rubbish that can fool humans. GIGO would apply, except that one can remix non-garbage into garbage with suitable statistical processes, we have now (re-)discovered. (-:


I think in this case it wasn’t GIGO but rather a too weak signal/noise ratio for mkvmerge. IMO there’s a subtle distinction.


> find . -i -name <regex> is NOT mentioned anywhere

you've got some typos here, it's not mentioned because... that's not how find works.

find has no -i like grep ; to ignorecase you use -iname ; and for name regular expression matching you need to use

    find . -iregex <regex>   # (or -regex)

    find . -iname <pattern>  # (uses wildcards ? *)
cheers :)
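A quick sanity check of the above (throwaway directory and file name are just for illustration):

```shell
# Illustrative only: a throwaway directory with one mixed-case file
d=$(mktemp -d)
touch "$d/README.TXT"

find "$d" -name 'readme.txt'    # no output: -name is case-sensitive
find "$d" -iname 'readme.txt'   # matches README.TXT (glob pattern, ignoring case)
find "$d" -iregex '.*readme.*'  # matches too: -iregex tests the *whole path*

rm -r "$d"
```

The whole-path behaviour of -regex/-iregex is a common trip-up: `-iregex 'readme.*'` would match nothing here, because the pattern also has to cover the leading directory part.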


See stuff like this is what frustrates me. Why can't the manpage tell me that? Thanks though.


Or:

Manpages aren't all the same quality. Look at the manpage for GNU findutils find(1):

* https://manpages.debian.org/bullseye/findutils/find.1.en.htm...

Contrast with the manpage for NetBSD find(1), where something similar to what you mention is in fact the very first example in the EXAMPLES section:

* https://man.netbsd.org/find.1

Compare with the manpage for Illumos find(1), which, like GNU findutils, leaps into -exec and bracketed expressions by the second example:

* https://illumos.org/man/1/find

Then remember that some operating systems come with handbooks and guides, and if one wants a tutorial for beginners, reference doco is not the place to start at all. So next contrast with the SCO UnixWare users guide, which also starts off with find file by name and print:

* http://uw714doc.xinuos.com/en/FD_files/Moving_and_managing_f...

Then remember that GNU has a long-standing antipathy to man pages, and that actually one should have magically known the name "findutils":

* https://www.gnu.org/software/findutils/manual/html_node/find...

And there's also this thing that someone decided to call The Linux Documentation Project which has a book titled _Introduction to Linux_:

* https://tldp.org/LDP/intro-linux/html/sect_03_03.html#sect_0...


NetBSD's doesn't look all that much more helpful; you just get to the examples sooner because it has far fewer options.

The NetBSD examples also only document what commands do, while the GNU one sometimes goes into the details of why, like when describing how exactly the time tests work and why `-mtime 0` shows files from the last 24 hours.
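That `-mtime 0` behaviour is easy to see for yourself (throwaway directory, illustrative only):

```shell
# find counts file age in whole 24-hour units, with the fraction rounded down,
# so -mtime 0 means "modified less than 24 hours ago".
d=$(mktemp -d)
touch "$d/fresh"              # mtime = right now
find "$d" -type f -mtime 0    # lists fresh
find "$d" -type f -mtime +0   # lists nothing: +0 = "more than 0 whole days old"
rm -r "$d"
```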


> https://man.netbsd.org/find.1

"find -- walk a file hierarchy"

what an idiotic summary


That's a little inflammatory. Could you provide constructive input on your reasoning behind that statement? I personally found it a simple, concise explanation of what find is for.


imagine you don't know how find works or what it does. Does that sentence make any sense?


Yes. A few seconds of thinking does the trick.

"file hierarchy" - Something to do with my file organization?

"walk" - Most programmers are familiar with trees.

Oh, we are walking through the file tree!


That's because find's principal documentation is in info.

https://www.gnu.org/software/findutils/manual/html_mono/find...


And that points to another problem.


Stallman's pathetic devotion to the ITS online help system?


man some-tool

info some-tool

some-tool --help

just help me dammit some-tool


find(1) is a classic example. The first two or three pages are of essentially zero practical use. I open the manual, skim the first page, no obvious clues about how to actually use `find`, so my next step is always to go to web search.

I've got a thousand things to do that are better uses of my time than reading 20 pages of arcane documentation that may or may not answer my question.


I disagree - the find manpage is significantly better than the grep manpage because of one aspect:

EXAMPLES

Examples are critical to comprehension, are immensely practical, yet are often viewed as fluff.


After about 30y of using *nix of one kind or another, I only use the EXAMPLES section of man pages as I know that most writers write to sound out their ideas, and the first bits are worse than useless. If there isn't one, I go to stackoverflow.

The same style extends to library documentation, where you have functions and arguments with types and no other context. It's basically for posterity.


Agreed, lack of examples was my biggest gripe with the manly pages when I was a Linux newbie. The most frustrating part is, you know they do it on purpose, so that you have to read the entire thing! News flash, I'm here because I'm stuck and want to get unstuck ASAP, not because I'm bored and want to learn the intricate details of find/grep/whatever.


current macOS man pages are kind of off-putting, even for someone who has been using c and unix for decades. I mean, really, what is this?

    InFILE In*
     fopen(Inconst Inchar In* Inrestrict Inpath, Inconst Inchar In* Inrestrict Inmode);
Inconst InFILE In* Inrestrict Inpath?!

     FILE *fopen(const char *, const char *mode);
it's like looking at a ton of gibberish and trying to figure out what it says. it doesn't match the languages, it doesn't appear to match any standards.

I learned a lot from man pages in the 80's and 90's, and if I had to try to learn now from something like this, I'd likely just throw up my hands and choose another career.


My educated guess is that something is trying to italicize various keywords, and whatever text processing is going on isn't working, but producing bogus "In" literal text instead.

.I is the "an" macro for italics, after all.


Compare what one sees when the text processing works, and what parts are italicized.

* https://tty0.social/@JdeBP/110427220635184435

It even prettifies those ``'' quotes, notice. This is vanilla older FreeBSD man, using groff, but actually allowing groff to do the things that it is capable of.


This matches what I see using less as my man pager in every terminal I have installed on macOS (Apple Terminal, kitty, alacritty+tmux, iTerm).


I don't think that's a macOS bug. Your terminal emulation might be borked.

What terminal are you using? I just tested on Terminal.app and iTerm2.app -- `man fopen` looks correct on both.

The following should be equivalent, but might perform differently for you depending on what part of your config is bad:

  cat /usr/share/man/man3/fopen.3 | nroff -man | more


iterm2.app - but I also use fish as my shell

TERM=xterm-256color and LESS_TERMCAP_us=In


Your LESS_TERMCAP_us is the issue here; it's literally putting "In" wherever an underline is supposed to start.
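For anyone setting these variables to colour man pages: they need to hold real escape sequences. One common recipe (green instead of underline; the colour choice is just an example) looks like:

```shell
# The LESS_TERMCAP_* overrides must hold real escape sequences; a literal
# string like "In" is printed verbatim wherever less would start underlining.
LESS_TERMCAP_us=$(printf '\033[4;32m')  # enter "underline" mode: green underline
LESS_TERMCAP_ue=$(printf '\033[0m')     # leave "underline" mode: reset
export LESS_TERMCAP_us LESS_TERMCAP_ue
# man fopen   # underlined words now render in green instead of as "In..."
```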


As others have noted, the 'In' looks like a rendering bug with whatever terminal emulator you're using. Otherwise it looks pretty normal: `fopen (const char *restrict filename, const char *restrict mode)` looks like a perfectly good C function, and (if you already know C) it's easy to see the parts that don't look like they belong.



amazing how web pages look different than terminal output :)


Looked fine in two terminal apps too, but you can't copy paste that!


I learned Unix by reading the FreeBSD handbook.


My man.

The FreeBSD Handbook was an amazing resource.


I still reference it.


I was pleased with how expect's man page [0] oriented me (instead of "it's all there, go figure"):

> Commands are listed alphabetically so that they can be quickly located. However, new users may find it easier to start by reading the descriptions of spawn, send, expect, and interact, in that order.

[0] https://linux.die.net/man/1/expect


find is the worst example, given that it breaks the "do one thing and do it well" motto of Unix (and you can do one thing and do it well). After discovering dmenu's stest and Google's walk, most times I just need `walk $PWD | stest -f | rg` and I'm done. Maybe you need more, in which case, instead of find, you should run stat with the output formatted to JSON/CSV and then filter using your preferred CSV method or jq.


I had never heard of "walk". Is there some link to read more about it?



One thing I love about the Sway Window Manager is how awesome the man pages are. They are clearly written and contain great examples.


Do you mean the other way around? The manpage of find contains examples, including searching for files by name. This alone makes the manpage somewhat usable for beginners. The grep manpage does not have any examples.

The grep manpage spends a lot of space explaining regexes and environment variables; find spends more on options.

Both manpages are searchable, what makes the grep manpage more searchable?


The POSIX man pages tend to be much more approachable: https://manp.gs/posix/1/find


Software documentation is an art.

The right balance of exact operation of the software and examples, put in an order that's useful in normal usage, with explanation for the more complex parts.


I've found tldr (https://tldr.sh/) very useful for quick lookups - basically gives examples with short descriptions, which is very often mostly what's needed.


My favorite is cheat.sh, which is a similar thing to tldr, but the best part about cheat is that you can just `curl cheat.sh/ln` (or whatever command) and get the cheat sheet on any system. I use this all the time.


grep's prerequisite is to understand regexes.


This goes deeper, since understanding grep regexes involves understanding quoting.

This means not only how regexes distinguish between literals and control characters, but how the shell distinguishes and passes on characters to the regex that grep receives as an argument.

I think regex quoting is one of the major stumbling blocks of comprehension.

Even after decades using regexes, I still stumble (especially when my regexes are used in the shell, in emacs, in python re.match, etc...)
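A tiny illustration of the two layers (file path and pattern are made up):

```shell
# Two layers of interpretation: the shell gets the string first, then grep's
# regex engine sees whatever survives.
printf 'foo.bar\nfooXbar\n' > /tmp/quoting_demo
grep 'foo.bar' /tmp/quoting_demo    # . is a regex metachar: matches both lines
grep 'foo\.bar' /tmp/quoting_demo   # backslash reaches grep: only foo.bar
grep foo\.bar /tmp/quoting_demo     # unquoted: the shell eats the backslash,
                                    # so grep sees foo.bar -> both lines again
rm /tmp/quoting_demo
```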


Friedl's "Mastering Regular Expressions" is one of the best books I've ever read and started my Unix journey via Perl and Emacs.


Which rather reinforces the point of the headlined article: People actually learn a lot of stuff when they start out from tutorials, guides, handbooks, and whatnot; not from purely reading reference doco. I started out with books, too. Later people would have started with the likes of TLDP.

I suspect that the author is right that most people started with guides/handbooks/tutorials, even the ones who later on convince themselves that they only read reference doco.


O'Reilly's bestiary had an exceptional line-up of Unix classics in the early 2000s.


Pre-ChatGPT, for most things tldr got me what I needed. These days, yeah, ChatGPT goes a long way.


I see a lot of people talking about examples, and while having more of them could be good, I think the actual problem with most modern man pages is a lack of clear description of program behaviour.

I am nowhere near old enough to have used an old Unix implementation, but the man pages back in those days just hit different. Just compare how the ls man page starts in the GNU and Unix V7 versions.

GNU: "List information about the FILEs"

Unix V7: "For each directory argument, ls lists the contents of the directory; for each file argument, ls repeats its name and any other information requested."

The GNU man page's description continues with one sentence on default ordering before getting to the list of arguments. The only mention that ls lists directory contents is in the description of the -d flag (the flag that makes it not do that). The GNU man page also goes into detail about the time formats supported by the command, but nowhere does it have anything to say about the format of the -l output.

Trying to read modern man pages is like trying to read library documentation where only the parameters and return values of each function are documented, but no description of the function itself is given.
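For what it's worth, the -l format that the GNU page never spells out (annotation mine; your values will obviously differ):

```shell
ls -ld /tmp
# Typical output:
#   drwxrwxrwt 15 root root 4096 May 24 09:12 /tmp
# Fields, left to right: file type + permission bits, hard-link count,
# owner, group, size in bytes, last-modification time, and name.
```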


> I see a lot of people talking about examples, and while having more of them could be good, I think the actual problem with most modern man pages is a lack of clear description of program behaviour.

Have you had the opportunity to look into FreeBSD man pages? I have found them to be quite helpful. For example:

https://man.freebsd.org/cgi/man.cgi?query=ls&apropos=0&sekti...


I think the man pages in FreeBSD cross a critical threshold where they actually are enough to learn the bulk of most things, and they seem to be much more useful than what my personal experience with Linux has been. I'll admit to a bias towards thinking FreeBSD is of much higher quality than Linux in general, so take this with a grain of salt.


Strangely enough, I did learn Unix by reading all the man pages.

My first introduction to a Unix was via 'Coherent', a v7 clone back in the early 1990s. That came with a huge book. That huge book was pretty much a chapter or two on how to install Coherent, followed by hundreds and hundreds of pages of nothing but man pages in alphabetical order.

It was good night time or spare time reading. You could delve anywhere into the book and read away. And I did. Or you could look up some particular man page if you felt like it.

Most value-for-money book I've ever read.

https://archive.org/details/CoherentMan/page/n3/mode/2up


I too learned most of what I knew about Unix at the time from man pages although I did have a support group in that I worked for a retailer who sold Unix machines to small businesses.

We had Altos Xenix machines and AT&T's PC 7300, with Kaypro, Compaq, Columbia, and of course "real" PCs from IBM, all sitting on the sales floor. In fact, weirdly, IBM's short-lived attempt to keep their PC line from gutting midrange sales by selling a small midrange server introduced me to a truly bizarre Unixism.

A bunch of us from the retail org were sent to IBM training on these new old systems. Unfortunately for the trainers, a lot of our feedback was "this is so much easier in Unix or Windows". Lol.

But during the training, I actually had some dudes tell me it was "too dangerous" for the "general public" to have root and they were members of the California Root User's association and would be calling my boss shortly to take over management of our computers. Lol. In retrospect I hope they were trolling 18-year-old me.

But I've bumped into the attitude since, where people are terrified of running at a naked root prompt and insist every command must be buffered through sudo, as though that somehow insulates you from typos. And I once sat in a presentation where a cybersecurity consulting firm suggested we fire the hundreds of sysadmins we had managing our environment and instead employ two 5-10 person admin teams. Those teams would spend the majority of their time red-teaming each other, but could do admin duties in their spare time to fill in the gaps.

Anyway. The most bizarre thing about Unix is having learned its use in the 80s I'm still using a derivative to this day. That's not a shot at Unix. I just always assumed there would be some next big thing and we'd all move on.


> Strangely enough, I did learn Unix by reading all the man pages.

I remember SunOS 4.x: going through the contents of /bin, /usr/bin, and /sbin (iirc), and reading the man page for each thing I found there. There was a man page for everything in there. About 20 years ago, I tried the same exercise on some Red Hat variant, and there was tons of stuff with no man page. Nowadays, spot-checking my Linux Mint system (and presumably Ubuntu and Debian), the situation seems much better, at least in that man pages seem to exist for all executables in those directories. Maybe Debian-based systems have always been better about that than Red Hat.

In general, I feel that if you're creating an executable that you expect others might use, failing to create a man page for it is to commit a crime against humanity.


On that subject: Three guesses what the preconv command in GNU troff does, before clicking on the hyperlink. (-:

* https://www.gnu.org/software/groff/manual/html_node/preconv....


> Strangely enough, I did learn Unix by reading all the man pages.

Ditto.

> My first introduction to a Unix was via 'Coherent', a v7 clone back in the early 1990s.

At or near the same time as your introduction was mine with Consensys SVR4 for i386[0]. IIRC, Consensys sold stat-mux cards for PC's at the time, hence their Unix SVR4 product. It was reasonably priced along with being a pretty vanilla AT&T distribution.

Good times :-)

0 - https://github.com/Mellvik/Consensys-SVR4


> Ditto

Double ditto.

When I started (Sun 1 era), it was all we had. No internet. No USENET. No BBSes. No real magazines. Dr Dobbs and Byte.

Also, no one to talk to. The little group I was in was the “cutting edge”. So we had to make it all up. We were normally a VAX shop, rest of the company was running off the CDC mainframe. We were also just getting early PCs.

“man -k” was your friend.

We pretty much also never had any training. Biggest regret was I think they sent the wrong guy to learn the Lisp machines (TI Explorers).

Very hard to pick up, and the engineer they sent simply didn’t have the CS background or inner geek drive to really exploit their potential. They basically taught him some high level, entry level expert system (likely CLIPS or OPS5 based) and that was it.

So they pretty much sat idle while we hammered on the Suns. Too bad.


No real magazines. Dr Dobbs and Byte.

Those were the real magazines. <grin>


Consensys! Thank you.

Yep. That was it. I went nuts the other day trying to remember the name of the first 'Real UNIX' I was using about 12-18 months after I started with Coherent.

After Consensys, I used Novell's Unixware, Sun's SunOS (Solaris 1), then Solaris 2, and finally Linux.


From my experience, some manpages glaringly lack examples or explanations. Hint: any manpage that you don't need to scroll down to read in full probably falls into this category.

I completely agree that they are only useful if you already know what you are looking for, and nothing else.

Although I'm still a newbie, I'll never tell a complete newbie to RTFM, considering there are not many high-quality manuals out there. Actually, most of the hot open source tools we happen to introduce into our workflows have half-assed docs at most. I'll just teach him to Google stuff efficiently and let him figure out the rest. Sometimes it's surprisingly difficult to know what exactly to put into that search box.

I'm not a hacker. I'll never be even close to Ritchie, Carmack, or whoever writes state-of-the-art malware. I know the pain.


Too many people think that GNU manual pages are what all manual pages look like. Not everyone has the GNU tradition of telling you to switch to a different doco format for better man pages. (-:

The thing to remember is that for many systems there are both guides and handbooks, tutorial-style doco, in addition to the reference doco, the manual pages. Linux-based operating systems come up a bit short in comparison to the old commercial Unices, which had specific user guides, and in comparison to the BSDs, all of which have handbooks. Even Debian only has an administrators' guide, not a users' guide.

But The Linux Documentation Project is oft forgotten. It's stale. It's unfashionable. It's oddly chock full of how-tos for things that one wouldn't dream of wanting to do this century. But it also has longer format stuff, and it's still there.

Consider the _Introduction To Linux_ for example:

* https://tldp.org/LDP/intro-linux/html/sect_05_01.html

Or the _Linux Command-Line Tools Summary_:

* https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/working-fi...

Or the _Bash Guide for Beginners_:

* https://tldp.org/LDP/Bash-Beginners-Guide/html/sect_04_03.ht...


Thanks! I did find tldp extremely helpful and entertaining to read sometimes.


> This is how we get monstrosities like the Git documentation, that are only of any use to someone who already knows how it works and just needs a bit of a reminder.

Preach.


Git is seriously weird in that regard. It reads like someone just really wanted to build a content-addressable file system and a generic append-only graph data structure on top, and then, to their mild annoyance, people were starting to use it as a version control system...


To be fair, manpages were intended for tools that followed the Unix philosophy ("small programs that do one thing well, and can be composed together easily"). git does not follow this philosophy. If it did, we'd have tools named checkout, clone, push, etc. rather than one git binary that does all of them (with highly irregular and non-orthogonal syntax!)
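To be fair to that reading, git's plumbing makes the content-addressable store plainly visible. A minimal sketch (throwaway repo; assumes git with its default SHA-1 object format):

```shell
# git's object store is content-addressed: a blob's ID is the SHA-1 of its
# header + content, so identical content gets the same ID on any machine.
cd "$(mktemp -d)"
git init -q .
echo 'hello' | git hash-object -w --stdin
# -> ce013625030ba8dba906f756967f9e9ca394464a
git cat-file -p ce013625030ba8dba906f756967f9e9ca394464a
# -> hello
```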


I remember when you could print out all of the manpages and put them in a 3-ring notebook. Yes, in those days I did read them all.


But how could you have done that before you read the man page for lpr?


Call the nearest grey-beard.



Admin the Grey?


Man-dolf the Grey


Sounds like "you can't learn true Judo without traveling to some mountain temple and eating nothing but rice and punching a brick wall all day for 3 years"

Furthermore, it's another case of some old Usenet hyperbole being taken as Gospel (or taken down as Gospel, which is more the style now). One of the most important lessons the Jargon File has for "how to become a hacker" is "don't take anything too seriously."


> One of the most important lessons the Jargon file has for "how to become a hacker", is "don't take anything too seriously."

Especially not the Jargon File. It largely comes from a particular period of time at MIT and Stanford, which is interesting in a historical sense but nobody should try and "become a hacker" by reading the Jargon File. Also it's had all sorts of injections and alterations by self-appointed Jargon File steward Eric Raymond, who should be ignored completely.

I only wish I could have told my 16-year-old self this.


>> One of the most important lessons the Jargon file has for "how to become a hacker", is "don't take anything too seriously."

Change that to "how to live", and it is still true. :)


>Sounds like "you can't learn true Judo without traveling to some mountain temple and eating nothing but rice and punching a brick wall all day for 3 years"

Nothing but rice.

Typical comment :)

Let the other points go, but it's Kung Fu, and not just a brick wall.

I hope. :)

Shaolin.


If you mix a metaphor hard enough, it begins to taste like metaphysics.

If not, add more tequila.


Ha!

>If you mix a metaphor hard enough, it begins to taste like metaphysics

Make a new cocktail with that name:

MetaPhorMetaPhysics

>If not, add more tequila.

Not my poison. Mine is H2O, like in your username.


>Make a new cocktail with that name:

>MetaPhorMetaPhysics

Did that blow your mind?

It doesn't meta.


The correct order of learning Unix commands is: get a copy of Kernighan/Pike[0], read it, and do the examples.

Then the man pages make more sense.

[0]https://en.wikipedia.org/wiki/The_Unix_Programming_Environme...


>Then the man pages make more sense.

[0]https://en.wikipedia.org/wiki/The_Unix_Programming_Environme...

UPE for short, like K&R.

Yes, totally.

I'm sure there are some other good books about the subject of Unix usage and scripting in general, and I've read some of them, but IMO, UPE is one of the best.

FWIW, here are some of my HN comments about The Unix Programming Environment book by Kernighan and Pike, over the years:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Honestly manpages with and without an `EXAMPLES` section are completely different types of documents.


I did. I remember working the night shift in operations feeding tapes, and in the downtime I installed OpenBSD on an old PC. I would read the man pages following the "see also" sections. I remember using a script to show me random man pages so I could learn about things that would not catch my eye.

I have this really cool wiki sort of software I made during that time; it was written in shell, awk, sed, m4, and rcs. It was mainly written as an exercise to learn the Unix environment. Probably full of every sort of injection vulnerability known to man, but it worked well enough in a benign environment. The shell bits are funny because a lot of it was built before I figured out shell functions, so every function was a separate shell script.

The thing is, the text of the article contradicts its headline. Yes, there is a lot of auxiliary knowledge that seeps in from many sources; this is why a person good at operating one OS will be good at operating another OS. They will not be happy about it, but they should know the fundamentals and can pick up the incidentals quickly. However, these incidentals are important, and you learn the Unix incidentals by reading the Unix manual.
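A minimal sketch of that random-man-page idea, assuming apropos and shuf are available (whatis database lines look like "name (1) - description"):

```shell
# Pick a random entry from the whatis database and open its man page
entry=$(apropos . 2>/dev/null | shuf -n 1)
name=${entry%% *}                                          # "name" before the first space
section=$(printf '%s' "$entry" | sed 's/^[^(]*(\([^)]*\)).*/\1/')  # text inside the parens
man "$section" "$name"
```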


Man pages are often not that great. I recommend people instead digest the GNU `info' manuals, as you'll come away with a much deeper understanding of all the userspace binaries you're likely to interact with on many common Linux systems today.

I'll make this ironclad promise: if you sit down and read the coreutils manual end-to-end (it'll take a couple of hours) you'll come away brimming with new ways to optimise your CLI workflow.

If you're allergic to the `info' program itself, you can of course read the manuals on GNU's website or, better still: use Emacs's info viewer (`C-h i' or M-x info), which is far superior.


Info has some really crap defaults. I invested some time to make a nice custom config and now I wish more things had info pages, because they're a joy to read.


I should give that a shot. Are there any specific defaults you are referring to? Lack of vi bindings? That's what always stopped me before.


The only key bindings I added were the ability to scroll up/down single lines, because I find page-scrolling annoying. But other than that, by crap defaults I mostly meant presentational issues. The navigation shortcuts take only a couple of minutes to learn, and once you have them down they make using info quite pleasurable.

Here is my own .infokey file, in case it is of interest.

  #info
  j  up-line
  k  down-line
  
  #var
  link-style=blue,underline
  active-link-style=yellow,bold,underline
  cursor-movement-scrolls=Off
  scroll-behaviour=Page Only
  highlight-searches=On
  nodeline=print


I learned it from 2 sources:

* IN/ix (16 bit UNIX) Documents. The Docs were the old UNIX documents, most were man pages

* Later I got Coherent for my 286, the book that came with it was awesome.

FWIW, OpenBSD's man pages are very good, much better that what Linux has.


What I don't understand is why manpages don't have clear examples of common usage. I imagine there are many, many people who would want to contribute those to the manpages; are they not there because leadership doesn't allow them for some reason?

Take grep for example https://linuxcommand.org/lc3_man_pages/grep1.html - it has everything and the kitchen sink, but just tell me how to find something: that's the 99% use case for most people.

For cases like this ChatGPT has been helpful.
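For what it's worth, that 99% use case boils down to a handful of invocations (flags below are standard in POSIX/GNU grep; file.txt and somedir/ are made-up names):

```shell
grep pattern file.txt        # lines in file.txt matching pattern
grep -i pattern file.txt     # case-insensitive
grep -n pattern file.txt     # prefix each match with its line number
grep -v pattern file.txt     # invert: lines that do NOT match
grep -r pattern somedir/     # search recursively through a directory
grep -F 'a.b' file.txt       # fixed string, no regex interpretation
```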


They do have. You're just looking at the manual from one particular source of utilities, GNU, and overgeneralizing. Aside from the well-known antipathy to man pages, with every GNU manual page telling you at its foot to go and use a different tool to read a different doco format, it is not representative of manual pages in general.

The Illumos manual page for grep(1) has an examples section:

* https://illumos.org/man/1/grep#examples

So has the FreeBSD manual page for grep(1):

* https://man.freebsd.org/cgi/man.cgi?query=grep&sektion=1#EXA...

Not everything is written like GNU man(1) pages.

Even the Linux Documentation Project has a novice's approach to grep with several basic examples in its _Introduction To Linux_:

* https://tldp.org/LDP/intro-linux/html/sect_03_03.html#sect_0...


Interesting, I thought grep was grep, period. I didn't realize every Linux had its own version of grep with its own man pages.

Is this like a youtube-dl / yt-dlp situation?


Different Unices have different grep implementations. Most Linux distros use a particular implementation of grep: the GNU one.

And no, it's not like youtube-dl and yt-dlp. Some of the Unix implementations might have shared ancestry, but GNU grep is a from-scratch clone.


I never knew that, thanks for sharing!


From my experience most manuals actually don't have scenarios. Is git or docker better? I'd need to check, but from memory their manual pages mostly list all the switches, with a couple of examples at best. I could be wrong and need to double-check.


Here's how people will learn Unix from here on out - they'll ask the chatbot to teach it to them:

Q "Please provide an outline and course syllabus for a ten-week course on Unix, aimed at undergraduate students in a college-level computer science program, as taught by an expert in Unix operating systems who has comprehensive historical knowledge of their development."

E.g. > "Week 4: Process Control; Process creation and termination; Process states and signals; Job control and background processes"

Need more? Q: "What are the most popular and useful process management tools for a Unix operating system?"

As far as how to read the man pages for any command, for beginners:

Q: "Provide a synopsis of the ps command's man page in a format suitable for a student who is just beginning to learn about process management."

Yes, you still should double-check those commands against the man reference - but think of all the benefits - students can get answers to their questions without having to pay private tutors or run the gauntlet of online forums where their ignorance will result in ridicule and similar unhelpful behavior, e.g. 'just read the man pages and think about it!', and they can ask as many dumb questions as they need to to figure things out, with no fear of embarrassment or humiliation (as long as the ChatGPT logs don't get leaked, I guess...)
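As a concrete baseline, the synopsis a beginner actually needs from ps(1) fits in a few lines (BSD-style aux and POSIX-style -ef are the two idioms worth knowing; 1234 is a hypothetical PID):

```shell
ps                # processes in your current terminal session
ps aux            # every process, BSD style (user, cpu, mem, command)
ps -ef            # every process, POSIX style (includes parent PID)
ps -p 1234        # just the process with PID 1234
ps -o pid,comm    # choose your own output columns
```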


I would have a hard time believing someone who learnt Unix at some point in the previous 20 years did so by reading some manpages.


I learned Unix through its manpages. I learned about sh, awk, and grep; I learned about sockets and TCP; I learned about semaphores and shared memory. When I started working on Unix, in the early '90s, it was on a true-to-form physical VT220. I quickly learned to run make in the background so I could read the manpages while compiling.


> I got so much more out of Starting FORTH and TCP/IP Network Administration and the Lisp 1.5 Primer and some 1975-vintage algorithms textbook whose name I can’t remember (all the examples were in Pascal) than I ever got out of the manpages.)

Well, page 13 of Lisp 1.5 has Maxwell's Laws of Software, after all. (Might have an error or two though.) ;-)

Pascal algorithms book is likely Wirth's Algorithms + Data Structures = Programs (subsequently revised as Algorithms and Data Structures, with an Oberon version as well.)

The Forth, Lisp, and Wirth books are readily available online.

One thing that the language books have in common is that they describe programming language (Forth, Lisp, Pascal) implementation. (Note Wirth subsequently removed the compiler section in deference to his longer compiler book, which covers more material and compiles an Oberon subset onto his own RISC processor design.)


This post resonates with me as someone who tried to get into Unix as a very young person. The culture at the time felt like people really wanted you to figure it out by yourself. "Read the man pages, they have the answers." If someone thought that's all I needed, I could expect no further guidance as if I didn't deserve to be a user.

That was very discouraging when I was twelve.

I don't think this article would have existed without that terrible culture. Man pages are still here. I still use them. But, who in their right mind these days would suggest to a brand new user, RTFM or GTFO and leave it at that.


I've always found reference books a lot easier to get my head around than instructional textbooks. One of my favorite finds as a kid was a book about the Apple II that covered topics like the memory map and hardware registers, and my least favorite format was the 300-page book where you start with writing "Hello World" and twenty chapters in you're following instructions on how to schedule asynchronous messages or something. I don't think I've ever gotten past chapter three in a book like that, and I've had a reasonably successful, decades-long career in development.


I agree with the author's broader point about being more open to alternative paths into UNIX/hackery/etc. However, I'm not sure that I agree entirely with this statement at the end:

> I’m pretty confident that someone who practices programming strictly as a hobby, less than ten hours a week, will eventually get just as good at it as one of these "fledgling hackers" who doesn’t do anything else with their spare time.

I'm not sure. Maybe, if the fledgling hacker isn't able to use their time effectively and the hobbyist is. But that assumes that very thing the author discussed in the beginning: the hobbyist has some prior foundational experience that allows them to be more effective with their time than the fledgling.

In my experience, if you want to excel in any discipline, you have to spend some time at some point deeply focused on it. If we assume the path to mastery is linear, 10 hrs/week for 10 years (~5000 hours) accumulates fewer hours toward mastery than 60 hrs/week for 2 years (~6000 hours). In my experience mastery is not linear, which makes this calculus even worse for the casual learner. Learnings compound, and not all hours of practice are equal in efficacy. The more experienced practitioner is likely to be getting more out of the time that they spend.

I do believe, emphatically, in the value of consistent effort over time. I've spent a few years picking up piano, but its 4 hrs/week. My goal with piano is to "be good in 10 years." But still, this to me highlights the gap between casual progress and dedicated effort. One of the main reasons I feel like I'm able to actually make progress with only 4 hr/week is because I have the prior experience of reaching a fairly high level of proficiency playing bass after 20+ years. There were times earlier in my musical study where I spent 30+hrs/week with the bass and built a foundation of musical knowledge and discipline that allows me to be more effective with a lesser time investment on piano now. Even with that prior foundation I don't expect that with 10 years of 4 hrs/week effort that I will be comparable in skill or knowledge to a professional pianist.

I think, based on my experience, that immersing yourself in a subject and making it a major pursuit in your life, taking up a large portion of your waking hours, is qualitatively different than the same amount of time invested over a longer horizon. You make connections faster when the study is top-of-mind. You notice patterns and build up intuitions that you miss when it's more spread out. The path of learning is extremely non-linear, in my experience. Slow and steady is effective, but let's not kid ourselves. If it is sustained, fast and steady is more effective.


I picked up Ubuntu 14 years ago because it ran twice as fast on my parents' Dell laptop as Windows XP, and because I didn't have to worry nearly as much about getting a virus on it.

I didn't learn Linux using it that way, but I was immersed in it in a way that made the whole thing much easier when I finally took the plunge last year and started seeking work for real as a software engineer, instead of as an electrical engineer, which is what I studied in college.

I rarely read the manpages. I know a handful of switches and I prefer to ask the live Internet for easier material.


I also didn't learn Unix this way, but I sure learned a lot more once I did.


In addition to being informative, they're also sometimes funny. I remember that the HP/UX tunefs man page said, "You can tune a filesystem, but you can't tune a fish."


The inability to tune a fish was first documented in the original 4.2BSD man page for tunefs(8),

https://www.tuhs.org/cgi-bin/utree.pl?file=4.2BSD/usr/man/ma...

and is still present in most BSD-derived tunefs implementations, e.g.,

https://man.freebsd.org/cgi/man.cgi?query=tunefs&apropos=0&s...

AFAICT, this restriction does not apply to versions of tunefs provided by System V and its derivatives, e.g.,

https://docs.oracle.com/cd/E86824_01/html/E54764/tunefs-1m.h...

As with many things in UNIX, the ability to tune a fish is implementation-dependent, and testing is recommended before using it in production on any given platform.


One of my favorite man pages is openbsd's scan_ffs. mainly because it contains this pearl of wisdom

The basic operation of this program is as follows:

1. Panic. You usually do so anyways, so you might as well get it over with. Just don't do anything stupid. Panic away from your machine. Then relax, and see if the steps below won't help you out.

http://man.openbsd.org/scan_ffs


I didn't either, but if I needed help with a command- I'd consult the manpages first. But then again, I learned unices back in the days before resources were easily searchable online, and multiple flavors as well. Solaris, BSD, Linux, etc. They all had slightly different flags and it was always wise to check.

Sitting in a datacenter, with no wifi and working on a downed server... you learn to appreciate the manpages at the console.


All the manpages are bad IMO. A set of examples with a textual description of what each is doing would be far more useful in 99% of cases than a list of flags.


You have to learn both ways and work your way to the middle, or at least I do.

You need the tutorial (or examples) to get context about what you are doing.

You need the reference manual to learn the syntax, the minutiae, of what you are doing.

With just the reference manual you have little to no clue how things tie together.

With just the tutorial/examples you have no clue about the scope and bounds of the system.


I remember that I read a book with the main UNIX commands plus a vi tutorial and then a lot of manpages. That was in the second half of the 80s.


Only one? (-:

I read Foxley's Unix for Superusers, the Unix Text Processing book, and a couple of other books.


Books remained current for a longer time back then but it was already clear to me that they are not the way to go. Manpages were current. Experimenting is the way to learn stuff. Books as references have been basically dead since 1994 and the first websites with Javadocs IMHO.


The numerous similar books, now about Linux, that have been published in the first quarter of the 21st century indicates that your humble opinion is out of step with reality. (-:


I almost did, but via a book called "Guia Foca Linux". It's by a Brazilian author who wrote it for those beginning their journey on Unix/Linux. It was a little bit hard for a 12-year-old kid, but it got the job done:

https://www.guiafoca.org/


Follow examples. Hack other people's scripts. Troubleshoot some half-working tool. Don't hesitate to flip back to manpages for reference, but other people's scripts will show you how it's done (or serve as a warning.)

Oh yeah, Linux has a tradition of the HOWTO tutorials. grep those in /usr/share/doc.
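Something along these lines works for that (paths vary by distro, so treat /usr/share/doc as an assumption):

```shell
# List installed docs mentioning a topic, case-insensitive, filenames only
grep -ril 'firewall' /usr/share/doc 2>/dev/null | head

# Many docs ship gzipped; zgrep reads those transparently
zgrep -il 'firewall' /usr/share/doc/*/*.gz 2>/dev/null | head
```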


I remember how many distros came with a package of Linux HOWTOs about various topics. I learned a lot from those.


The Linux Documentation Project https://tldp.org/

is where those came from; it started circa 1992.


Tangential, but can someone explain the dichotomy of man pages vs info pages? Are they competitors, complements? Seems like many tools, like find, have sections for both.

Seems like man pages get all the love. Google searches often result in man page results, but not info pages.

Can someone educate me? Thank you :)


At some point, seems like in the late 90s, but who knows, the GNU project decided man pages weren't good enough, and tried to switch to info pages. As far as I can tell, nobody ever figured out how to use the info pages, and eventually either the GNU project relented and put the information back into the man pages, or the distributions did it for them. Info pages are supposed to be better because the different sections are hyperlinked or something, but as I recall, man info was useless and so was info info, so I never learned how to use it. :P


GNU tried to push everything to info pages (a simple hyperlink browser) and it backfired spectacularly. Because man was simple, info was confusing and complicated and annoying.


The irony is that Daniel J. Bernstein said in the late 1990s that switching to HTML is the answer. And here we are, almost a quarter of a century later: Most of the people claiming that they are reading manual pages are actually reading HTML documents with WWW browsers. (All of the people following the hyperlinks to man pages found in this very discussion are doing that, for example.)

* https://cr.yp.to/slashdoc.html

Strictly speaking it's not advocating HTML, but "browser-compatible" documents, which in fact includes DocBook XML.


And there are tools such as Debian's dwww (<https://packages.debian.org/bullseye/dwww>) which present virtually all local system information through a Web interface, also queryable through shell tools.[1] Both man and info pages (as well as numerous other document formats) are presented as Web pages, making info's distinction between man and hypertext formats largely irrelevant.

Note that this is a case of Debian's packaging conventions, standards, and requirements, as well as the wealth of available packages (including many documentation packages) being leveraged with a small additional bit of glue package to provide a fantastic utility.

As concerns man and info: I prefer manpages for their simplicity, use of ubiquitous tools and pagers (info by default relies on its own idiosyncratic viewer programme which I can never remember how to use or navigate, though you can bypass this for other tools). Info pages can offer richer documentation in more detail, in instances, though I find that generally HTML would be preferable for this. Note that both info and the WWW were invented within about five minutes of each other. Well, a few months, at any rate. Stallman really should acknowledge the clear winner here, as info's idiosyncrasies very much hurt rather than help it.

________________________________

Notes:

1. That is, in addition to man and info pages, you can point your browser at <https://localhost/dwww/> to see the content. You'll also get the /usr/share/doc/<packagename> trees, any installed manuals, auxiliary package information, and access to installed documentation such as RFCs, Linux Gazette, and other sources, if you've chosen to install those as well. Search is via swish++. Presentation can be limited to the localhost interface only (that is, only users on that system can view the docs) or opened up to specified IP ranges.


I feel the real key is having it in a separate window/terminal. I remember a TSR for DOS that would drop down a half window with command help, and you could scroll it AND still work on your command line with special modifier keys.

We do that now with a browser window open as we enter commands.


tmux and screen users would say that one can use the roff-based man system and still do that. The key is, rather, in what M. Bernstein wrote: "Much more effort has gone into web browsers than into roff readers."

And this is true. I just recently fixed a bug in a 2018-ish man command that prevented it from displaying Japanese manual pages, because it defaulted to 7-bit ASCII instead of UTF-8. Any WWW browser that couldn't have handled UTF-8 in 2018 would have been junked by most people. That sort of stuff was fixed years before for WWW browsers.

Indeed, there's been the reverse process going on. The effort that has gone into roff readers has sometimes even been actively suppressed. Did you know that your manual pages actually contain italics for emphasis and denoting certain syntactical elements, and have done pretty much all along? groff got the ability to display them, like troff originally could, long ago, as well as colour; but these abilities have been explicitly disabled in most operating systems.

We'd laugh at WWW browsers that couldn't do all of <i> and <em> and <u> and suchlike, or (of course) their CSS equivalents. But we suppress the ability of free software roff readers to reinstate what Unix text processing could likewise actually do in the 1970s.


Part of the reason the Gentoo wiki/docs are so good is that they use color and links - both totally doable in text mode consoles, but both neglected.


man comes from Unix, info comes from Emacs and maybe the OSes it was developed on. The GNU project also comes from that ecosystem, so even their drop-in replacements for the original Unix tools have info pages.


Specifically, info appears to come from ITS. Stallman hacked on ITS back in the day, and the first time I logged into an ITS system (http://up.dfupdate.se/, if you're curious) and used the online help system, I suddenly realized why GNU was always trying to foist this weird manpage alternative on us.

(I remember back when I first got into Linux, like early 2000s, it seemed like a lot of the core GNU man pages were just stubs that said "For documentation, use the info system". Luckily they seem to have backtracked on that?)


I and many others from my generation learned unix from the 'devil' book (Design and Implementation of the 4.3BSD Operating System). As I recall the Sun system documentation wasn't too shabby either.


Something I don't understand. Everyone talks about how they wish man pages had examples. What's stopping people from making/copying some examples and making a pull request to the maintainers of the man pages?


The people asking for examples aren't exactly in the position to be suggesting good examples.


Ever since I got CoPilot CLI access, it's all I use. I just tell it what I want to do and it spits out a nice CLI command that works 99% of the time with only filename changes.


I start with tldr then move into help or manpages as necessary. Then when I understand, I still come back to tldr to remember.

https://tldr.sh/

in browser: https://tldr.inbrowser.app/


tldr pages are such a thing of beauty. I personally use the tealdeer [0] client though, since it's so much faster than the official client.

[0]: https://github.com/dbrgn/tealdeer


I learned by trying to get my window manager to have flame effects on my windows... I also started on Linux when I was 12 probably at least two decades after the author started on Unix. I still use man pages and/or -h today though. More as swap memory for my brain than as something I would read top to bottom:

*What was the flag to flash firmware over the network? <types -h> Crap, too much text. <types -h | less> Ah, I remember now even though I used that flag five minutes ago. <types -d 0.0.0.0>*


I have no proof for what I am about to say, but I did grow up in the 1970s... tl;dr: man and info pages were never the way to learn Unix

There was a strong culture in nearly every professional competency 40+ years ago for social learning. It simply wasn't "done" back then for loners (and I am one) to sulk away with manuals for how to do... anything complicated and expect to gain mastery. One great example of this was trying to learn how to set non-electronic ignition points (for a gasoline engine). You _could_ try to figure it out with a manual and decipher the crude diagrams, but 25 minutes with someone who'd been taught by someone else transmitted much more knowledge than you could ever expect to get out of a handbook. Afterward, those crude diagrams would spring to life with fresh and helpful visual context.

Man pages weren't for beginners; they were for people who knew the basic forms and wanted to learn all of the other options. Yet, once you understood the basics of the top dozen Unix commands, man pages could be used in isolation to learn most other commands. We didn't learn C back then by combing through the standard C library references, etc. Later on, Bjarne Stroustrup summed up how to improve (in general) in the first section of his seminal "The C++ Programming Language" when he wrote (paraphrasing from memory): 1) Know what you want to say. 2) Read others' works and copy from the best.

The development of the initial Unix userland tools at Bell Labs (and the drafting of most of the initial RFCs) was a social process and the use of those tools was passed down socially for decades. Even in school, older students were a major source of understanding for how to use these tools. As this era began to fade away in the mid-1990s, it wasn't Stack Overflow that filled the void (yet) but rather "recipe" books such as Oreilly's "Unix Power Tools". These were often written in a conversational style, as if the social learning context was continuing, only in written form. It is somewhat ironic that AI tools like ChatGPT are now bringing the process full-circle, where we must once again seek the wisdom of a competent agent by engaging in conversation, carefully crafting our questions so as not to elicit an incorrect answer or go off on an irrelevant tangent.


It's like no-one's heard of the 'info' command...


I've used tldr (https://github.com/tldr-pages/tldr) if I'm a beginner to a specific tool, then read up the manpages and/or SO if I need more flags.


I use frequently and recommend tldr pages: https://tldr.sh/


I’m a big fan of command line tools, and probably the one I use the most is tldr. It’s what all man pages should have in their first paragraph but don’t: just snippets of how to actually use the thing in the most common ways, with a brief explanation.


As a beginner, how can I practice Linux commands?


Try to self-host something you find interesting on your home network. Don't expose it past your router, because you'll probably do it wrong and you don't want someone breaking into your system. It typically takes explicit effort to expose something past your home network, so just don't forward any ports.
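To complement that, a safe warm-up is just manipulating files in a scratch directory. Everything below is standard and harmless (the file names are made up):

```shell
mkdir -p ~/practice && cd ~/practice   # a scratch area to play in
echo 'hello unix' > note.txt           # create a file
cat note.txt                           # show its contents
cp note.txt copy.txt                   # copy it
grep hello *.txt                       # search both files for a word
ls -l                                  # list what you made
man cp                                 # read the manual for cp; press q to quit
```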


I haven't seen this mentioned anywhere here, but the program `tldr` is a solid complement to manpages. You run it like `tldr ls`, and it spits out a couple of examples of how ls can be used.


AI chatbots are perfect for this.



