An enormous page full of switches and options, yet if you're a beginner it's basically useless. You're just looking for a simple example of how to search for a file. `find . -iname '<pattern>'` is NOT mentioned anywhere; you have to figure it out yourself. And if you mess it up, your terminal gets flooded.
"An enormous page full of switches and options, yet if you're a beginner it's basically useless."
Maybe 30 years ago you could learn through manpages, but today that's how the man pages of all the useful commands look. If you think grep's is nice, it would be the exception; for me it is indeed short, simple and easily searchable, but too terse for a beginner to learn with easily. (Plus even the regex documentation only documents grep mode, not egrep or pgrep or anything else.) Very useful in its own way, but most man pages are useless for learning.
I am disappointed at how few man pages take the opportunity to have a useful selection of "the 10 most popular things to do with this command", but web search has filled in reasonably decently. You do still have to understand the basics of how the shell works, though, because most of the examples you find aren't quite what you want today, and honestly a fair number of them aren't even what the poster thought they were. I am also in the habit of making sure that I understand every command line switch before I copy & paste a command off the internet, again because many of them aren't what you need today. I don't mean that they're always malicious; something like whether or not to follow HTTP redirects in curl may or may not matter to you today, but the poster may be in the habit of setting it one way or the other for reasons irrelevant to you.
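To make that concrete (my own illustration, not the poster's; the URL is a placeholder): curl prints the redirect response itself unless you explicitly tell it to follow, and a copied command may have made that choice for you either way.

# -L / --location makes curl follow redirects; without it you get the 3xx response
curl https://example.com/moved       # prints the redirect response's (often empty) body
curl -L https://example.com/moved    # follows the Location header to the final page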
the openbsd manuals usually have examples. you can see how compact https://man.openbsd.org/grep.1 is compared to the gnu grep manual.
i've been using https://git.causal.agency/exman/about/ to compare linux and bsd manpages, and it has saved me an incredible amount of time looking things up.
Are info pages still a thing? I remember being very confused by them back when I started out with Linux, and eventually just stopped trying to understand how they worked.
They are, but my impression is that only Emacs users use them to some extent. (Emacs has a built-in info reader that is better than the standalone client, and ships mainly info pages as its documentation.) Outside of Emacs, I’d say very few people know info is even a thing, and I remember some Linux distributions also stripped info pages from packages since everyone used man pages.
In fairness, if one were to come to more, pg, or less with no prior knowledge of the conventions, the madness of some of the character choices would be just as grating.
Ironically, this sort of stuff -- the basic conventions like q/h/j/k/l/0/^/$/G plus CR/FF/BS/SPC keyboard navigation and the :/? prompt modes -- is exactly what one initially learns from guides, tutorials, other people in IRC, and whatnot, long before coming to reference doco. The _Introduction to Linux_ book at TLDP covers the bare essentials of these conventions, q, b, and SPC, in the "quickstart" chapter 2.
Every time I could not find an answer to a question in a man page and consulted info, it was a waste of time. I like info, really, but the remark at the end of most GNU man pages to look at the info pages is misleading at best.
For emacs stuff, yes, it's very convenient. I remember once upon a time, Python docs were available in info as well. When you have good documentation available in info, it's very nice.
I remember offering to make a PR w/ examples for something when I was a junior dev and was politely told that they didn't want examples. I assumed - and still assume - that this was a generational difference in what people expected competence to look like.
While this attitude made sense pre-internet, reading about every switch for common use cases is just not realistic today. Maybe if the number of command-line programs were less, and the complexity of each command was significantly less, I could see how it would be done, but not today.
There is a pre-internet, and internet-but-pre-Stack Overflow, answer to this: O'Reilly books.
Learning to program Perl, for example: I had the Learning book, the Reference book and the Cookbook. The Learning one I read once; the Reference had Post-its on it and lived on my work desk; the Cookbook was more the kind you read on the toilet.
They had books on everything, and they were the industry standard: DNS, Apache, JavaScript, ... They revised them regularly; for JS I had the 1st, 2nd and 3rd editions. I know this all sounds ancient, like, real paper books, but even work had an O'Reilly subscription so you could order what you needed.
For me it was O'Reilly books and man pages. But let's not forget that when you installed a Linux like Red Hat (not Fedora yet) you could also install the Linux HOWTOs. Oh boy, did I learn a lot from them too: https://tldp.org/HOWTO/HOWTO-INDEX/howtos.html
Finally there was IRC if you were stuck, but that was of course already the internet stage. I went online just to ask the question; dialup was expensive ;)
As of now, there is no LLM without confabulation, so your third item is effectively "hypothetical fantasy ChatGPT that we hope will be available some time in the future", not the actual tools we have right now.
I wonder if chatgpt will start “closing” questions as “not a good fit for its Q&A format,” or “too opinionated,” or “a duplicate” of something that is actually unrelated.
Actually, this seems to be a case where chatgpt is smarter than the humans.
A recent example: I used ChatGPT 4 to draft an mkvmerge command - take 2 video files, and merge them by only copying certain audio and subtitle tracks from the second file into the first file.
The resulting command looked good at first sight, something like `mkvmerge -o output.mkv first.mkv --no-video -s 1 -a 2 -a 3 -a 4`. The problem here is that there can only be one -a flag, so it should have been `-a 2,3,4` instead. But mkvmerge didn't really care and just discarded every -a flag except the last one. So I ended up with only one of the audio tracks copied over. I only noticed when I actually checked the resulting file that it had fewer audio tracks than it was supposed to.
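For reference, a sketch of what the corrected invocation would presumably have looked like. The second source file was elided in the quote above, so second.mkv here is a stand-in; in mkvmerge, track-selection options apply to the source file that follows them.

# hedged reconstruction, not the exact command; second.mkv is a placeholder name
mkvmerge -o output.mkv first.mkv --no-video -s 1 -a 2,3,4 second.mkv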
This would not have happened to a human after studying the man page - the documentation is very clear about the -a flag and I have no idea what led ChatGPT to come to the conclusion it did.
The lesson here is not to anthropomorphize ChatGPT. It didn't "conclude" anything. Based upon a corpus that includes tonnes of humans writing rubbish on the WWW, it came up with plausibly human-appearing rubbish that can fool humans. GIGO would apply, except that one can remix non-garbage into garbage with suitable statistical processes, we have now (re-)discovered. (-:
Then remember that some operating systems come with handbooks and guides, and if one wants a tutorial for beginners, reference doco is not the place to start at all. So next contrast with the SCO UnixWare users guide, which also starts off with find file by name and print:
NetBSD's doesn't look all that much more helpful; you just get to the examples sooner because it has far fewer options.
The NetBSD examples also only document what the command does, while the util-linux ones sometimes go into the details of why, like when describing how exactly the time directive works and why `-mtime 0` shows files from the last 24 hours.
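A quick illustration of that directive (GNU find semantics; age is measured in whole 24-hour units, with the fractional part discarded):

find . -mtime 0     # modified within the last 24 hours (age truncates to 0)
find . -mtime -2    # modified within the last 48 hours
find . -mtime +7    # modified more than 7 full days ago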
That's a little inflammatory. Could you provide constructive input on your reasoning behind that statement? I personally found it a simple, concise explanation of what find is for.
find(1) is a classic example. The first two or three pages are of essentially zero practical use. I open the manual, skim the first page, no obvious clues about how to actually use `find`, so my next step is always to go to web search.
I've got a thousand things to do that are better uses of my time than reading 20 pages of arcane documentation that may or may not answer my question.
After about 30y of using *nix of one kind or another, I only use the EXAMPLES section of man pages as I know that most writers write to sound out their ideas, and the first bits are worse than useless. If there isn't one, I go to stackoverflow.
The same style extends to library documentation, where you have functions and arguments with types and no other context. It's basically for posterity.
Agreed, lack of examples was my biggest gripe with the manly pages when I was a Linux newbie. The most frustrating part is, you know they do it on purpose, so that you have to read the entire thing! News flash, I'm here because I'm stuck and want to get unstuck ASAP, not because I'm bored and want to learn the intricate details of find/grep/whatever.
it's like looking at a ton of gibberish and trying to figure out what it says. it doesn't match the languages, it doesn't appear to match any standards.
I learned a lot from man pages in the 80's and 90's, and if I had to try to learn now from something like this, I'd likely just throw up my hands and choose another career.
My educated guess is that something is trying to italicize various keywords, and whatever text processing is going on isn't working, but producing bogus "In" literal text instead.
It even prettifies those ``'' quotes, notice. This is vanilla older FreeBSD man, using groff, but actually allowing groff to do the things that it is capable of.
As others have noted, the 'In' looks like a rendering bug with whatever terminal emulator you're using. Otherwise it looks pretty normal: `fopen (const char *restrict filename, const char *restrict mode)` looks like a perfectly good C function, and (if you already know C) it's easy to see the parts that don't look like they belong.
I was pleased with how expect's man page [0] oriented me (instead of "it's all there, go figure"):
> Commands are listed alphabetically so that they can be quickly located. However, new users may find it easier to start by reading the descriptions of spawn, send, expect, and interact, in that order.
Find is the worst example, given that it breaks the do-one-thing-and-do-it-well motto of Unix (and you can do one thing and do it well). After discovering dmenu's stest and Google's walk, most times I just need `walk "$PWD" | stest -f | rg <pattern>` and I'm done. Maybe you need more, in which case, instead of find, you should run stat with the output formatted to JSON/CSV and then filter using your preferred CSV method or jq.
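A rough sketch of that last idea; I'm using GNU find's -printf to emit the fields directly rather than looping over stat, but the emit-structured-output-then-filter principle is the same (the size threshold is just an example, and this naive JSON breaks on filenames containing quotes or backslashes):

# emit one JSON object per file, then filter with jq (here: files over 1 MiB)
find . -type f -printf '{"name":"%p","size":%s,"mtime":%T@}\n' \
    | jq -r 'select(.size > 1048576) | .name'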
Do you mean the other way around? The manpage of find contains examples, including searching for files by name. This alone makes the manpage somewhat usable for beginners. The grep manpage does not have any examples.
The grep manpage spends a lot of space explaining regexes and environment variables; find's spends more on options.
Both manpages are searchable; what makes the grep manpage more searchable?
I've found tldr (https://tldr.sh/) very useful for quick lookups - basically gives examples with short descriptions, which is very often mostly what's needed.
My favorite is cheat.sh, which is a similar thing to tldr, but the best part about cheat is that you can just `curl cheat.sh/ln` (or whatever command) and get the cheat sheet on any system. I use this all the time.
This goes deeper, since understanding grep regexes involves understanding quoting.
This means not only how regexes distinguish between literals and control characters, but how the shell distinguishes and passes on characters to the regex that grep receives as an argument.
I think regex quoting is one of the major stumbling blocks of comprehension.
Even after decades using regexes, I still stumble (especially when my regexes are used in the shell, in emacs, in python re.match, etc...)
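A small illustration of those layers, with a pattern meant to match the literal text "a.b":

# each layer consumes one level of escaping; single quotes stop the shell's layer
grep 'a\.b' file    # grep receives a\.b and matches the literal text "a.b"
grep a\\.b  file    # the shell eats one backslash; grep still receives a\.b
grep a\.b   file    # the shell eats the backslash; grep receives a.b, where . matches any character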
Which rather reinforces the point of the headlined article: People actually learn a lot of stuff when they start out from tutorials, guides, handbooks, and whatnot; not from purely reading reference doco. I started out with books, too. Later people would have started with the likes of TLDP.
I suspect that the author is right that most people started with guides/handbooks/tutorials, even the ones who later on convince themselves that they only read reference doco.
I see a lot of people talking about examples, and while having more of them could be good, I think the actual problem with most modern man pages is a lack of clear description of program behaviour.
I am nowhere near old enough to have used an old Unix implementation, but the man pages back in those days just hit different. Just compare how the ls man page starts for the GNU and the Unix V7 versions.
GNU: "List information about the FILEs"
Unix V7: "For each directory argument, ls lists the contents of the directory; for each file argument, ls repeats its name and any other information requested."
The GNU man page's description continues with one sentence on default ordering before getting to the list of arguments. The only mention that ls lists directory contents is in the description of the -d flag (the flag to make it not do that). The GNU man page also goes on in detail about the time formats supported by the command, but nowhere does it have anything to say about the format of the -l output.
Trying to read modern man pages is like trying to read library documentation where only the parameters and return values of each function are documented, but no description of the function itself is given.
> I see a lot of people talking about examples, and while having more of them could be good, I think the actual problem with most modern man pages is a lack of clear description of program behaviour.
Have you had the opportunity to look into FreeBSD man pages? I have found them to be quite helpful. For example:
I think the man pages in FreeBSD cross a critical threshold where they actually are enough to learn the bulk of most things, and they seem to be much more useful than what my personal experience with Linux has been. I will admit to a bias towards thinking FreeBSD is of much higher quality than Linux in general, so take this with a grain of salt.
Strangely enough, I did learn Unix by reading all the man pages.
My first introduction to a Unix was via 'Coherent', a v7 clone back in the early 1990s. That came with a huge book. That huge book was pretty much a chapter or two on how to install Coherent, followed by hundreds and hundreds of pages of nothing but man pages in alphabetical order.
It was good night time or spare time reading. You could delve anywhere into the book and read away. And I did. Or you could look up some particular man page if you felt like it.
I too learned most of what I knew about Unix at the time from man pages, although I did have a support group in that I worked for a retailer who sold Unix machines to small businesses.
We had Altos Xenix machines and AT&T's PC 7300, with Kaypro, Compaq, Columbia and of course "real" PCs from IBM all sitting on the sales floor. In fact, weirdly, IBM's short-lived attempt to keep their PC line from gutting midrange sales by selling a small midrange server introduced me to a truly bizarre Unixism.
A bunch of us from the retail org were sent to IBM training on these new old systems. Unfortunately for the trainers, a lot of our feedback was "this is so much easier in Unix or Windows". Lol.
But during the training, I actually had some dudes tell me it was "too dangerous" for the "general public" to have root and they were members of the California Root User's association and would be calling my boss shortly to take over management of our computers. Lol. In retrospect I hope they were trolling 18-year-old me.
But I've bumped into the attitude since, where people are terrified of running at a naked root prompt and insist every command must be buffered through sudo, as though that somehow insulates you from typos. And I once sat in a presentation where a cybersecurity consulting firm suggested we fire the hundreds of sysadmins we had managing our environment and instead pay the firm to supply two 5-10 person admin teams. Those teams would spend the majority of their time red-teaming each other but could do admin duties to fill the gaps in their spare time.
Anyway. The most bizarre thing about Unix is having learned its use in the 80s I'm still using a derivative to this day. That's not a shot at Unix. I just always assumed there would be some next big thing and we'd all move on.
> Strangely enough, I did learn Unix by reading all the man pages.
I remember SunOS 4.x, going through the contents of /bin, /usr/bin, and /sbin (iirc), and reading the man page for each thing I found there. There was a man page for everything in there. About 20 years ago, I tried the same exercise on some Redhat variant, and there was tons of stuff with no man page. Nowadays, spot checking my linux Mint system, (and presumably Ubuntu and Debian) the situation seems much better, at least in that man pages seem to exist for all executables in those directories. Maybe Debian based systems have always been better about that than Redhat.
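For what it's worth, a rough sketch of how one might repeat that spot check today (man -w prints a page's path and exits nonzero when there is none):

# list executables in the usual directories that have no man page
for f in /bin/* /usr/bin/*; do
    man -w "$(basename "$f")" >/dev/null 2>&1 || echo "$f"
done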
In general, I feel that if you're creating an executable that you expect others might use, failing to create a man page for it is to commit a crime against humanity.
> Strangely enough, I did learn Unix by reading all the man pages.
Ditto.
> My first introduction to a Unix was via 'Coherent', a v7 clone back in the early 1990s.
At or near the same time as your introduction was mine with Consensys SVR4 for i386[0]. IIRC, Consensys sold stat-mux cards for PC's at the time, hence their Unix SVR4 product. It was reasonably priced along with being a pretty vanilla AT&T distribution.
When I started (Sun 1 era), it was all we had. No internet. No USENET. No BBSes. No real magazines beyond Dr. Dobb's and Byte.
Also, no one to talk to. The little group I was in was the “cutting edge”. So we had to make it all up. We were normally a VAX shop; the rest of the company was running off the CDC mainframe. We were also just getting early PCs.
“man -k” was your friend.
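(For the uninitiated: man -k, aka apropos, does a keyword search over the one-line NAME descriptions, e.g.:)

man -k copy    # list every man page whose short description mentions "copy"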
We pretty much also never had any training. Biggest regret was I think they sent the wrong guy to learn the Lisp machines (TI Explorers).
Very hard to pick up, and the engineer they sent simply didn’t have the CS background or inner geek drive to really exploit their potential. They basically taught him some high level, entry level expert system (likely CLIPS or OPS5 based) and that was it.
So they pretty much sat idle while we hammered on the Suns. Too bad.
Yep. That was it. I went nuts the other day trying to remember the name of the first 'Real UNIX' I was using about 12-18 months after I started with Coherent.
After Consensys, I used Novell's Unixware, Sun's SunOS (Solaris 1), then Solaris 2, and finally Linux.
From my experience, some manpages glaringly lack examples or explanations. Hint: any manpage short enough that you don't need to scroll down for more content probably falls into this category.
I completely agree that they are only useful if you know what you are looking for, and nothing else.
I'm still a newbie myself, but I'll never tell a complete newbie to RTFM, considering there are not many high-quality manuals out there. Actually, most of the trendy open source tools we happen to introduce into our workflow have half-assed docs at most. I'll just teach them to Google stuff efficiently and let them figure out the rest. Sometimes it's surprisingly difficult to know what exactly to put into that search box.
I'm not a hacker. I'll never be even close to Ritchie, Carmack, or whoever writes state-of-the-art malware. I know the pain.
Too many people think that GNU manual pages are what all manual pages look like. Not everyone has the GNU tradition of telling you to switch to a different doco format for better man pages. (-:
The thing to remember is that for many systems there are both guides and handbooks, tutorial-style doco, in addition to the reference doco, the manual pages. Linux-based operating systems come up a bit short in comparison to the old commercial Unices, which had specific user guides, and in comparison to the BSDs, all of which have handbooks. Even Debian only has an administrators' guide, not a users' guide.
But The Linux Documentation Project is oft forgotten. It's stale. It's unfashionable. It's oddly chock full of how-tos for things that one wouldn't dream of wanting to do this century. But it also has longer format stuff, and it's still there.
> This is how we get monstrosities like the Git documentation, that are only of any use to someone who already knows how it works and just needs a bit of a reminder.
Git is seriously weird in that regard. It reads like someone just really wanted to build a content-addressable file system and a generic append-only graph data structure on top, and then, to their mild annoyance, people were starting to use it as a version control system...
To be fair, manpages were intended for tools that followed the Unix philosophy ("small programs that do one thing well, and can be composed together easily"). git does not follow this philosophy. If it did, we'd have tools named checkout, clone, push, etc. rather than one git binary that does all of them (with highly irregular and non-orthogonal syntax!)
Sounds like "you can't learn true Judo without traveling to some mountain temple and eating nothing but rice and punching a brick wall all day for 3 years"
Furthermore, it's another case of some old Usenet hyperbole being taken as Gospel (or taken down as Gospel, which is more the style now). One of the most important lessons the Jargon file has for "how to become a hacker" is "don't take anything too seriously."
> One of the most important lessons the Jargon file has for "how to become a hacker" is "don't take anything too seriously."
Especially not the Jargon File. It largely comes from a particular period of time at MIT and Stanford, which is interesting in a historical sense but nobody should try and "become a hacker" by reading the Jargon File. Also it's had all sorts of injections and alterations by self-appointed Jargon File steward Eric Raymond, who should be ignored completely.
I only wish I could have told my 16-year-old self this.
>Sounds like "you can't learn true Judo without traveling to some mountain temple and eating nothing but rice and punching a brick wall all day for 3 years"
Nothing but rice.
Typical comment :)
Let the other points go, but it's Kung Fu, and not just a brick wall.
I'm sure there are some other good books about the subject of Unix usage and scripting in general, and I've read some of them, but IMO, UPE is one of the best.
FWIW, here are some of my HN comments about The Unix Programming Environment book by Kernighan and Pike, over the years:
I did. I remember working night shift in operations feeding tapes, and in the downtime I installed OpenBSD on an old PC. I would read the man pages, following the "see also" sections. I remember using a script to show me random man pages so I could learn about things that would not otherwise catch my eye.
I have this really cool wiki sort of software I made during that time; it was written in shell, awk, sed, m4 and RCS. It was mainly written as an exercise to learn the Unix environment. It's probably full of every sort of injection vulnerability known to man, but it worked well enough in a benign environment. The shell bits are funny because a lot of it was built before I figured out shell functions, so every function was a separate shell script.
The thing is, the text of the article contradicts its headline. Yes, there is a lot of auxiliary knowledge that seeps in from many sources; this is why a person good at operating one OS will be good at operating another OS. They will not be happy about it, but they should know the fundamentals and can pick up the incidentals quickly. However, those incidentals are important, and you learn the Unix incidentals by reading the Unix manual.
Man pages are generally not that great. I recommend people instead digest the GNU `info' manuals, as you'll come away with a much deeper understanding of all the userspace binaries you're likely to interact with on many common Linux systems today.
I'll make this ironclad promise: if you sit down and read the coreutils manual end-to-end (it'll take a couple of hours) you'll come away brimming with new ways to optimise your CLI workflow.
If you're allergic to the `info' program itself, you can of course read the manuals on GNU's website or, better still: use Emacs's info viewer (`C-h i' or M-x info), which is far superior.
Info has some really crap defaults. I invested some time to make a nice custom config and now I wish more things had info pages, because they're a joy to read.
The only key bindings I added were for scrolling up/down by single lines, because I find page-scrolling annoying. Other than that, by crap defaults I mostly meant presentational issues. Once you spend a couple of minutes on the navigation shortcuts, they're not that hard to learn, and they make using info quite pleasurable.
Here is my own .infokey file, in case it is of interest.
#info
j up-line
k down-line
#var
link-style=blue,underline
active-link-style=yellow,bold,underline
cursor-movement-scrolls=Off
scroll-behaviour=Page Only
highlight-searches=On
nodeline=print
What I don't understand is why manpages don't have clear examples of common usages. I imagine there are many, many people who would want to contribute those to the manpages; are they not there because leadership doesn't allow them for some reason?
Take grep for example, https://linuxcommand.org/lc3_man_pages/grep1.html - it has everything and the kitchen sink, but hell bro, just tell me how to find something, which is the 99% use case most people will have.
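For the record, the 99% case looks something like this (standard GNU grep flags):

grep -n 'pattern' file.txt    # show matching lines, with line numbers
grep -ri 'pattern' src/       # recurse into a directory, ignoring case
grep -l 'pattern' *.c         # just list the files that contain a match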
They do. You're just looking at the manual from one particular source of utilities, GNU, and overgeneralizing. Aside from the well-known antipathy to man pages, with every GNU manual page telling you at its foot to go and use a different tool to read a different doco format, it is not representative of manual pages in general.
The Illumos manual page for grep(1) has an examples section:
From my experience, most manuals actually don't have scenarios. Are git's or docker's any better? I'd need to check, but from memory their manual pages mostly list all the switches, with a couple of examples at best. I could be wrong and need to double-check.
Here's how people will learn Unix from here on out - they'll ask the chatbot to teach it to them:
Q "Please provide an outline and course syllabus for a ten-week course on Unix, aimed at undergraduate students in a college-level computer science program, as taught by an expert in Unix operating systems who has comprehensive historical knowledge of their development."
E.g. > "Week 4: Process Control; Process creation and termination; Process states and signals; Job control and background processes"
Need more? Q: "What are the most popular and useful process management tools for a Unix operating system?"
As far as how to read the man pages for any command, for beginners:
Q: "Provide a synopsis of the ps command's man page in a format suitable for a student who is just beginning to learn about process management."
Yes, you still should double-check those commands against the man reference - but think of all the benefits - students can get answers to their questions without having to pay private tutors or run the gauntlet of online forums where their ignorance will result in ridicule and similarly unhelpful behavior, e.g. 'just read the man pages and think about it!', and they can ask as many dumb questions as they need to figure things out, with no fear of embarrassment or humiliation (as long as the ChatGPT logs don't get leaked, I guess...)
I learned Unix through its manpages. I learned about sh, awk, grep; I learned about sockets and TCP; I learned about semaphores and shared memory. When I started working on Unix, in the early '90s, it was on a true-to-form physical VT220. I quickly learned to run make in the background so I could read the manpage while compiling.
> I got so much more out of Starting FORTH and TCP/IP Network Administration and the Lisp 1.5 Primer and some 1975-vintage algorithms textbook whose name I can’t remember (all the examples were in Pascal) than I ever got out of the manpages.)
Well, page 13 of Lisp 1.5 has Maxwell's Laws of Software, after all. (Might have an error or two though.) ;-)
Pascal algorithms book is likely Wirth's Algorithms + Data Structures = Programs (subsequently revised as Algorithms and Data Structures, with an Oberon version as well.)
The Forth, Lisp, and Wirth books are readily available online.
One thing that the language books have in common is that they describe programming language (Forth, Lisp, Pascal) implementation. (Note Wirth subsequently removed the compiler section in deference to his longer compiler book, which covers more material and compiles an Oberon subset onto his own RISC processor design.)
This post resonates with me as someone who tried to get into Unix as a very young person. The culture at the time felt like people really wanted you to figure it out by yourself. "Read the man pages, they have the answers." If someone thought that's all I needed, I could expect no further guidance as if I didn't deserve to be a user.
That was very discouraging when I was twelve.
I don't think this article would have existed without that terrible culture. Man pages are still here. I still use them. But who in their right mind these days would suggest to a brand new user "RTFM or GTFO" and leave it at that?
I've always found reference books a lot easier to get my head around than instructional textbooks. One of my favorite finds as a kid was a book about the Apple II that covered topics like the memory map and hardware registers, and my least favorite format was the 300-page book where you start with writing "Hello World" and twenty chapters in you're following instructions on how to schedule asynchronous messages or something. I don't think I've ever gotten past chapter three in a book like that, and I've had a reasonably successful, decades-long career in development.
I agree with the author's broader point about being more open to alternative paths into UNIX/hackery/etc. However, I'm not sure that I agree entirely with this statement at the end:
> I’m pretty confident that someone who practices programming strictly as a hobby, less than ten hours a week, will eventually get just as good at it as one of these "fledgling hackers" who doesn’t do anything else with their spare time.
I'm not sure. Maybe, if the fledgling hacker isn't able to use their time effectively and the hobbyist is. But that assumes the very thing the author discussed in the beginning: the hobbyist has some prior foundational experience that allows them to be more effective with their time than the fledgling.
In my experience, if you want to excel in any discipline, you have to spend some time at some point deeply focused on it. If we assume the path to mastery is linear, 10 hrs/week for 10 years (~5000 hours) leaves less time for mastery than 60 hrs/week for 2 years (~6000 hours). In my experience mastery is not linear, which makes this calculus even worse for the casual learner. Learnings compound, and not all hours of practice are equal in efficacy. The more experienced practitioner is likely to be getting more out of the time that they spend.
I do believe, emphatically, in the value of consistent effort over time. I've spent a few years picking up piano, but it's 4 hrs/week. My goal with piano is to "be good in 10 years." But still, this to me highlights the gap between casual progress and dedicated effort. One of the main reasons I feel like I'm able to actually make progress with only 4 hrs/week is that I have the prior experience of reaching a fairly high level of proficiency playing bass over 20+ years. There were times earlier in my musical study where I spent 30+ hrs/week with the bass and built a foundation of musical knowledge and discipline that allows me to be more effective with a lesser time investment on piano now. Even with that prior foundation, I don't expect that with 10 years of 4 hrs/week effort I will be comparable in skill or knowledge to a professional pianist.
I think, based on my experience, that immersing yourself in a subject and making it a major pursuit in your life, taking up a large portion of your waking hours, is qualitatively different than the same amount of time invested over a longer horizon. You make connections faster when the study is top-of-mind. You notice patterns and build up intuitions that you miss when it's more spread out. The path of learning is extremely non-linear, in my experience. Slow and steady is effective, but let's not kid ourselves. If it is sustained, fast and steady is more effective.
I picked up Ubuntu 14 years ago because it ran twice as fast on my parents' Dell laptop as Windows XP, and because I didn't have to worry nearly as much about getting a virus on it.
I didn't learn Linux by using it that way, but I was immersed in it in a way that made the whole thing much easier when I finally took the plunge last year and started seeking work for real as a software engineer, instead of as an electrical engineer, which is what I went to college for.
I rarely read the manpages. I know a handful of switches and I prefer to ask the live Internet for easier material.
In addition to being informative, they're also sometimes funny. I remember that the HP-UX tunefs man page said, "You can tune a filesystem, but you can't tune a fish."
As with many things in UNIX, the ability to tune a fish is implementation-dependent, and testing is recommended before using it in production on any given platform.
One of my favorite man pages is openbsd's scan_ffs. mainly because it contains this pearl of wisdom
The basic operation of this program is as follows:
1. Panic. You usually do so anyways, so you might as well get it over with. Just don't do anything stupid. Panic away from your machine. Then relax, and see if the steps below won't help you out.
I didn't either, but if I needed help with a command, I'd consult the manpages first. But then again, I learned unices back in the days before resources were easily searchable online, and multiple flavors as well: Solaris, BSD, Linux, etc. They all had slightly different flags and it was always wise to check.
Sitting in a datacenter, with no wifi and working on a downed server... you learn to appreciate the manpages at the console.
All the manpages are bad IMO. A set of examples with a textual description of what each is doing would be far more useful in 99% of cases than a list of flags.
Books remained current for longer back then, but it was already clear to me that they were not the way to go. Manpages were current. Experimenting is the way to learn stuff. Books as references have been basically dead since 1994 and the first websites with Javadocs, IMHO.
The numerous similar books, now about Linux, that have been published in the first quarter of the 21st century indicate that your humble opinion is out of step with reality. (-:
I almost did, but via a book called "Guia Foca Linux", by a Brazilian author who wrote it for those beginning their journey on Unix/Linux. It was a little bit hard for a 12-year-old kid, but it got the job done:
Follow examples. Hack other people's scripts. Troubleshoot some half-working tool. Don't hesitate to flip back to manpages for reference, but other people's scripts will show you how it's done (or serve as a warning.)
Oh yeah, Linux has a tradition of the HOWTO tutorials. grep those in /usr/share/doc.
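(Something like this, though the exact paths vary by distribution, and file-name matching is really find's job rather than grep's:)

# locate any installed HOWTOs; package docs usually land under /usr/share/doc
find /usr/share/doc -iname '*howto*' 2>/dev/null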
Tangential, but can someone explain the dichotomy of man pages vs info pages? Are they competitors, complements? Seems like many tools, like find, have sections for both.
Seems like man pages get all the love. Google searches often result in man page results, but not info pages.
At some point, seems like in the late 90s, but who knows, the GNU project decided man pages weren't good enough, and tried to switch to info pages. As far as I can tell, nobody ever figured out how to use the info pages, and eventually either the GNU project relented and put the information back into the man pages, or the distributions did it for them. Info pages are supposed to be better because the different sections are hyperlinked or something, but as I recall, man info was useless and so was info info, so I never learned how to use it. :P
GNU tried to push everything to info pages (a simple hyperlink browser) and it backfired spectacularly. Because man was simple, info was confusing and complicated and annoying.
The irony is that Daniel J. Bernstein said in the late 1990s that switching to HTML is the answer. And here we are, almost a quarter of a century later: Most of the people claiming that they are reading manual pages are actually reading HTML documents with WWW browsers. (All of the people following the hyperlinks to man pages found in this very discussion are doing that, for example.)
And there are tools such as Debian's dwww (<https://packages.debian.org/bullseye/dwww>) which present virtually all local system information through a Web interface, also queryable through shell tools.[1] Both man and info pages (as well as numerous other document formats) are presented as Web pages, making info's distinction between man and hypertext formats largely irrelevant.
Note that this is a case of Debian's packaging conventions, standards, and requirements, as well as the wealth of available packages (including many documentation packages) being leveraged with a small additional bit of glue package to provide a fantastic utility.
As concerns man and info: I prefer manpages for their simplicity, use of ubiquitous tools and pagers (info by default relies on its own idiosyncratic viewer programme which I can never remember how to use or navigate, though you can bypass this for other tools). Info pages can offer richer documentation in more detail, in instances, though I find that generally HTML would be preferable for this. Note that both info and the WWW were invented within about five minutes of each other. Well, a few months, at any rate. Stallman really should acknowledge the clear winner here, as info's idiosyncrasies very much hurt rather than help it.
________________________________
Notes:
1. That is, in addition to man and info pages, you can point your browser at <https://localhost/dwww/> to see the content. You'll also get the /usr/share/doc/<packagename> trees, any installed manuals, auxiliary package information, and access to installed documentation such as RFCs, Linux Gazette, and other sources, if you've chosen to install those as well. Search is via swish++. Presentation can be limited to the localhost interface only (that is, only users on that system can view the docs) or opened up to specified IP ranges.
I feel the real key is having it in a separate window/terminal. I remember a TSR for DOS that would drop down a half window with command help, and you could scroll it AND still work on your command line with special modifier keys.
We do that now with a browser window open as we enter commands.
tmux and screen users would say that one can use the roff-based man system and still do that. The key is, rather, in what M. Bernstein wrote: "Much more effort has gone into web browsers than into roff readers."
And this is true. I just recently fixed a bug in a 2018-ish man command that prevented it from displaying Japanese manual pages, because it defaulted to 7-bit ASCII instead of UTF-8. Any WWW browser that couldn't have handled UTF-8 in 2018 would have been junked by most people. That sort of stuff was fixed years before for WWW browsers.
Indeed, there's been the reverse process going on. The effort that has gone into roff readers has sometimes even been actively suppressed. Did you know that your manual pages actually contain italics for emphasis and denoting certain syntactical elements, and have done pretty much all along? groff got the ability to display them, like troff originally could, long ago, as well as colour; but these abilities have been explicitly disabled in most operating systems.
We'd laugh at WWW browsers that couldn't do all of <i> and <em> and <u> and suchlike, or (of course) their CSS equivalents. But we suppress the ability of free software roff readers to reinstate what Unix text processing could likewise actually do in the 1970s.
man comes from Unix, info comes from Emacs and maybe the OSes it was developed on. The GNU project also comes from that ecosystem, so even their drop-in replacements for the original Unix tools have info pages.
Specifically, info appears to come from ITS. Stallman hacked on ITS back in the day, and the first time I logged into an ITS system (http://up.dfupdate.se/, if you're curious) and used the online help system, I suddenly realized why GNU was always trying to foist this weird manpage alternative on us.
(I remember back when I first got into Linux, like early 2000s, it seemed like a lot of the core GNU man pages were just stubs that said "For documentation, use the info system". Luckily they seem to have backtracked on that?)
I and many others from my generation learned unix from the 'devil' book (Design and Implementation of the 4.3BSD Operating System). As I recall the Sun system documentation wasn't too shabby either.
Something I don't understand. Everyone talks about how they wish man pages had examples. What's stopping people from making/copying some examples and making a pull request to the maintainers of the man pages?
Ever since I got CoPilot CLI access, it's all I use. I just tell it what I want to do and it spits out a nice CLI command that works 99% of the time with only filename changes.
I learned by trying to get my window manager to have flame effects on my windows... I also started on Linux when I was 12, probably at least two decades after the author started on Unix. I still use man pages and/or -h today, though. More as swap memory for my brain than as something I would read top to bottom:
*What was the flag to flash firmware over the network? <types -h> Crap, too much text. <types -h | less> Ah, I remember now even though I used that flag five minutes ago. <types -d 0.0.0.0>*
I have no proof for what I am about to say, but I did grow up in the 1970s...
tl;dr: man and info pages were never the way to learn Unix
There was a strong culture in nearly every professional competency 40+ years ago for social learning. It simply wasn't "done" back then for loners (and I am one) to sulk away with manuals for how to do... anything complicated and expect to gain mastery. One great example of this was trying to learn how to set non-electronic ignition points (for a gasoline engine). You _could_ try to figure it out with a manual and decipher the crude diagrams, but 25 minutes with someone who'd been taught by someone else transmitted much more knowledge than you could ever expect to get out of a handbook. Afterward, those crude diagrams would spring to life with fresh and helpful visual context. man pages weren't for beginners; they were for people who knew the basic forms and wanted to learn all of the other options. Yet, once you understood the basics of the top dozen Unix commands, man pages could be used in isolation to learn most other commands. We didn't learn C back then by combing through the standard C library references, etc. Later on, Bjarne Stroustrup summed up how to improve (in general) in the first section of his seminal "The C++ Programming Language" when he wrote (paraphrasing from memory): 1) Know what you want to say. 2) Read others' works and copy from the best.
The development of the initial Unix userland tools at Bell Labs (and the drafting of most of the initial RFCs) was a social process, and the use of those tools was passed down socially for decades. Even in school, older students were a major source of understanding for how to use these tools. As this era began to fade away in the mid-1990s, it wasn't Stack Overflow that filled the void (yet) but rather "recipe" books such as O'Reilly's "Unix Power Tools". These were often written in a conversational style, as if the social learning context was continuing, only in written form. It is somewhat ironic that AI tools like ChatGPT are now bringing the process full circle, where we must once again seek the wisdom of a competent agent by engaging in conversation, carefully crafting our questions so as not to elicit an incorrect answer or go off on an irrelevant tangent.
I’m a big fan of command line tools and probably the one I use the most is tldr. It’s what all man pages should have in their first paragraph but don’t, it’s just snippets of how to actually use the thing in the most common ways with a brief explanation.
Try to self host something you find interesting on your home network. Don't expose it past your router because you'll probably do it wrong and don't want someone breaking into your system. It typically takes explicit effort to expose something past your home network, so just don't forward any ports.
I haven't seen this mentioned anywhere here, but the program `tldr` is a solid complement to manpages. You run it like `tldr ls`, and it spits out a couple of examples of how ls can be used.
Contrast find(1)'s man page with grep(1) https://linux.die.net/man/1/grep which is short, simple, and easily searchable with keywords.
Manpages are very useful, having offline documentation is great. But it is more of a specification reference than a manual.