An Illustrated Guide to Useful Command Line Tools (wezm.net)
636 points by signa11 on Oct 26, 2019 | 102 comments



I quite like this list, there are a number of utilities here that I already use on a daily basis. There are also a few utilities that I like that weren't on this list, or some alternatives to what was shown. Some off the top of my head:

- nnn[0] (C) - A terminal file manager, similar to ranger. It allows you to navigate directories, manipulate files, analyze disk usage, and fuzzy open files.

- ncdu[1] (C) - ncurses disk analyzer. Similar to du -sh, but allows for directory navigation as well.

- z.lua[2] (Lua) - It's an alternative to the z utility mentioned in the article. I haven't done the benchmarks, but they claim to be faster than z. I'm mostly just including it because it's what I use (quick-start snippet below).

[0] https://github.com/jarun/nnn [1] https://dev.yorhel.nl/ncdu [2] https://github.com/skywind3000/z.lua
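
For z.lua, setup is roughly the following — going from memory of its README, so double-check the exact init line there; ~/bin/z.lua is just an illustrative path:

  # in .bashrc (use --init zsh for zsh)
  eval "$(lua ~/bin/z.lua --init bash)"
  # then jump to the highest-ranked directory matching "proj"
  z proj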


Seconding ncdu! The interface is great, and I like that it’s modular so you can run the disk usage component as a cronjob and then have a nightly snapshot readily accessible
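
For example, something along these lines (paths are illustrative):

  # nightly cron job: scan / (one filesystem only) and save a snapshot
  ncdu -x -o /var/lib/ncdu/snapshot /
  # later: browse the saved snapshot instead of rescanning
  ncdu -f /var/lib/ncdu/snapshot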


I'd like to further expand on the topic of ncdu.

rclone is a great utility for syncing data between local and remote (including remote to remote) locations and supports a ton of online services.

Well, rclone also supports an ncdu command modeled after the ncdu utility. It lets you quickly calculate directory sizes on remotes that don't show dir sizes, like Google Drive, for example.

https://rclone.org/

https://rclone.org/commands/rclone_ncdu/
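
e.g., assuming a remote configured under the name "gdrive":

  rclone ncdu gdrive: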


And regarding ncdu, there are static binaries for Linux. Just put them in a folder and use them.


Nice. I'd missed the static binaries on the homepage and should have tried static compilation myself. Now my little python2 and find-based scripts exporting ncdu-compatible output[1] on systems without ncdu seem a bit redundant. On the other hand, there are some examples of filtering the output with jq.

[1]: https://github.com/wodny/ncdu-export


+1 for nnn!


An observation: I have always taken the meaning of the word "illustrated" to specifically refer to non-lexical graphics. Searching the dictionary definition of the word, I find a looser definition that applies to "examples" intended to aid an explanation.

This is a similar cognitive dissonance to when I first learned that "Visual Basic" and "Visual Studio" meant that the syntax of the displayed code was highlighted, not graphically represented in a non-lexical way.


I’ve always assumed that the ”Visual” in VB and others refers to the WYSIWYG drag’n’drop interface for creating GUIs.


Same here—though for the same reason I always thought "VSCode" was a strange name since it doesn't have any of those interfaces.


But... they do refer to that.


I believe Visual Basic/Studio referred to the addition of WYSIWYG GUI creation tools and event binding to the languages.


I’m gonna sound like an old person here. As much as these tools are gorgeous and ergonomic, remember that the others are standard, which means they’re available (almost) everywhere.

Still though, these alternatives seem great for productivity locally, even if they’re not usable in a script.


I’ve actually found myself using these to get me much more proficient in the Shell overall. fd in particular is so much easier to use I find myself doing things like:

  fd .log$ -x mv {} {.}.bak

  (Rename *.log to *.bak)
Can I do that with xargs, awk, and find? Yes, but every time I have to look up the man page for at least one of them, and it's enough friction that I might just open it in Finder if it's a handful of files.

Having some of these utilities around is a crutch that lets me leverage the entire ecosystem much more when I don't have it. And when I do log into a shared environment and don't have my crutch, then it's easy to fill in the missing puzzle piece, because it's one part that's missing and I've got the rest of the environment down.

Of course if all you do is shared environments I wouldn’t suggest these, but I would encourage people to use these to get more familiar with the CLI ecosystem.


Doing this with find is only marginally more difficult.

  find . -name '*.log' -exec rename .log .bak {} \;
If your rename is prename by default you can use sed style replacement. Not to disagree with your overall point.
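
A sketch of that form (assuming `rename` resolves to the Perl rename):

  find . -name '*.log' -exec rename 's/\.log$/.bak/' {} +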


bash/zsh: for f in *.log; do mv "$f" "${f/%log/bak}"; done

a bit more generic, `for` loops and parameter expansion are good to know for proficiency, and also I dislike typing curly braces.


This doesn't move files in subdirectories? Or am I missing something?


You are right, it does not. But the find command is not too complicated, or different from the one above.
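
For completeness, a recursive sketch of both (the shell version assumes bash 4+ with globstar; zsh has ** enabled by default):

  shopt -s globstar
  for f in **/*.log; do mv "$f" "${f%log}bak"; done

  # or with find, using a tiny inline shell for the suffix swap
  find . -name '*.log' -exec sh -c 'mv "$1" "${1%log}bak"' _ {} \;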


find has awful ergonomics that are completely unlike any other common unix tool. I can never remember the syntax, how the flags work, or what order things need to be in.

Let's use an example I just dug up of using find:

To list and remove all regular files named core starting in the directory /prog that are larger than 500KB, enter:

  find /prog -type f -size +1000 -print -name core -exec rm {} \;
OK first, how in the hell is 1000 == 500kb? Is that a bug in my example[0]? What does `-print` do exactly? And that backslash at the end? I have no clue what that signifies. I'm never going to remember this madness. I'd probably have resorted to writing a bash script in a file by now. But with fd it becomes manageable and memorable for future tasks:

To list and remove all regular files named core starting in the directory /prog that are larger than 500KB, enter:

  fd --type=file --size=+500k ^core$ /prog -x rm {}
Now that's something that makes immediate sense even if you've never touched the tool before. Not to mention, it doesn't end up in my .git and other ignored directories.

[0] https://kb.iu.edu/d/admm


While I agree that find has more quirks than I'd like, your example is not a great one. I would write the find as follows:

`find /prog -type f -size +500k -name core -delete`

Alternative:

`find /prog -type f -size +500k -name core -exec rm -v {} +`

For extra context:

`-size` with no unit suffix counts 512-byte blocks, so `-size +1000` means "larger than 1000 blocks", i.e. about 500KB; writing `-size +500k` avoids the mental math. `-print` will just show you the results before `rm`'ing.

When the command ends with `\;`, the command will be repeated for every match. If the command ends with `+`, the results are appended until max args is reached (and then repeated). This is not always possible, but when it is, it's way easier to use. Fewer calls to the command, but certainly useful when appending the command after an ssh command, which would mean any number of extra `\` to escape the original `\`...
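
Side by side, roughly:

  find /prog -name core -exec rm {} \;   # one rm invocation per matching file
  find /prog -name core -exec rm {} +    # rm invoked with batches of files, like xargs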


TIL, but that’s my point. If after many years of doing this and leading the development on one of the top utilities on homebrew I don’t immediately know the answer to these things, how could anyone?

UX is FAR more important in the CLI than even on the web. Users don’t just have to be able to learn what they want to do, for these common tools they have to memorize it or they won’t use it. Minor things like the order of the flags and bits like having to escape the semicolon don’t just make it challenging. I would argue for 99% of users they make it impossible.

In other words, I think 99% of users don’t know how to use this level of find and will never learn. That’s a problem and no amount of education is going to fix it. The tool itself is broken.


I can't entirely agree with you. Some points:

* The find syntax for this use case is almost identical to the syntax for fd (you may nitpick about "file" vs "f")

* Education would certainly help! How do you think anyone (me, as a data point) learned?

* Tab completion in the shell goes a long way. You will find yourself using the same flags often.

That being said, `find` brings with it a long legacy, which we don't all care for. Many of the options are practically unused, certainly by regular developers.

I find myself using ripgrep instead of grep, but still use find instead of fd.

And I still have a hell of a time as soon as I want to `prune`. I'd much rather `grep -v` at that point, but then I probably need to invoke `xargs` in the next step...
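
For reference, the `-prune` incantation I always end up looking up goes something like this (node_modules is just an example of a directory to skip):

  find . -path ./node_modules -prune -o -name '*.js' -print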


Your fd command:

Why type=file and a size? What else has size in 500k range on a filesystem, except files?

=+500 isn’t how math works, that’s implying it could be -500 or using equals for an inequality because the commonly used greater than symbol is off limits, it’s a learned bodge.

500kwhats? Bits? Bytes? Base2? Base10? Can I put a suffix on, is it kB and kb case sensitive?

Why is your filename on the left of the folder you want to look in? That’s so backwards to the left to right order /folder/files are normally written.

Why do you have some arguments with double dash, some with single dash, some with no arg name at all, what’s the pattern for any of that?

What’s -x and why is it short for a word beginning with E?

Why is the find command executing anything at all?

It’s no more immediately sensible than find, it’s inconsistent scribble and workarounds you’ve learned instead of the same that you haven’t learned.


>Why type=file and a size? What else has size in 500k range on a filesystem, except files?

Directories. They have a size depending on the metadata list of included files they contain. And can retain their size even if their inner files are deleted (until some cleanup style process is run).

>It’s no more immediately sensible than find

Oh yes, it is.

The same nitpicking for find would take 10 days, and won't be as contrived...


Hah, thankfully I refreshed. Was about to post something similar.

No need for xargs, awk or find


You might like zsh if you haven't tried it. Has a builtin `zmv` that does this. Check out oh-my-zsh
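
Roughly (from memory, so check `man zshcontrib` for the exact pattern syntax):

  autoload -Uz zmv
  zmv '(**/)(*).log' '$1$2.bak'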


This suggestion is exactly why fd is so helpful. With zmv I can rename files in a directory. Great.

But what if I want to count the number of total files within each directory? Create tarballs out of each directory? Rename files that contain the word "FOOBAR" in them?

I can do this with fd and similar tools with slight modifications. With zmv I can rename files.
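
Rough sketches of what I mean (flags from memory):

  # one tarball per top-level directory
  fd --type d --max-depth 1 -x tar czf {}.tar.gz {}

  # rename files whose name contains FOOBAR
  fd FOOBAR --type f -x mv {} {}.bak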


Cool! zsh also does a few things other than renaming files.


How do you rename files based on contents with fd?


I would use rg. My original point is not that fd is a do-anything tool; it's that fd and others are much better replacements for find, grep, etc., so when I want to do something complex they're much better starting places.
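
e.g., a rough sketch for the "files that contain FOOBAR" case (the .matched suffix is arbitrary):

  rg -l 'FOOBAR' | while read -r f; do mv "$f" "$f.matched"; done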


>I’m gonna sound like an old person here. As much as these tools are gorgeous and ergonomic, remember that the others are standard, which means they’re available (almost) everywhere.

Well, I, for one, don't work "almost anywhere", I work with specific servers. I ain't gonna get a new server out of the blue. And if a team works with the same N servers, they can mandate that the tools are present on all of them.

Plus even on some unknown system, one can quickly copy or download a set of static binaries as our toolset.

So unless someone is a sysadmin for heterogenous networks, or is called to go to random clients and fix unseen before systems, or some corporate mandate prevents them from having their tools installed, there's no reason not to expand beyond standard POSIX userland.


> Plus even on some unknown system, one can quickly copy or download a set of static binaries as our toolset.

Aha, even on a machine with networking problems running a non-glibc set of libraries? (musl has some problems running glibc (even static) binaries OOTB, and most docker systems use alpine as a base, which means... dealing with musl).

> So unless someone is a sysadmin for heterogenous networks, or is called to go to random clients and fix unseen before systems, or some corporate mandate prevents them from having their tools installed, there's no reason not to expand beyond standard POSIX userland.

I don't believe that the person in question was arguing against expanding beyond the POSIX userland -- rather that they were arguing for maintaining familiarity with POSIX tools in case you need to use them.


Well, it's not always so simple.

If you are in a small team of 5 or 6 people, with less than 100 servers to manage, sure, it's doable.

But if you are part of a very large team of 50 or more sysadmins, with a large infra in the thousands of nodes with various OSes and vintage of OSes (even if Unix only), things can get tricky quite quickly.

First, a lot of people will want their favorite tools installed, which can result in a huge mess of special toolboxes that are not consistently installed (different path locations, tool sets varying from server to server).

Second, these tools, especially the shiny newer ones, need to be built, packaged and maintained properly, which represents a significant load, especially across several OSes and vintages of OSes.

Third, as a general rule, having an install base with as few packages as possible is a good thing: on the one hand it reduces a server's attack surface, and on the other hand it helps with auditing for security vulnerabilities, as "dead weight" dependencies (e.g. libX11 for an editor that has both a terminal and a graphical mode, but is only used in its terminal form on a server) will not trigger false positives in terms of CVEs.


You’re right, and I’m not at all against having custom tools in any controlled environment. In fact, I think enhancing a controlled environment with custom tools is extremely productive.

My point was more about education. I think becoming an expert in the standard tools should come before learning any custom ones, because it is a transferable skill.


I don't think it's old fashioned at all, the arguments are solid. It's just like `uname -a` works under FreeBSD, Linux, Darwin et al. The standard tools rule supreme IMHO.


ripgrep is included as standard in some Linux distros now.


In this case, I mean the POSIX standard. But yes, shipping by default in operating systems is how things (eventually) make it into the standard!


A number of the utilities mentioned here (bat, fd, hexyl) are made by the same author, who makes a number of additional command line utilities in Rust. They're all rather easy to use and aesthetically pleasing, I'm a fan of their work: https://github.com/sharkdp


This man is also creating awesome tools: https://github.com/BurntSushi


Indeed! I'm a fan of hyperfine and bat, and I've been meaning to check out pastel when I have more time...


I always find exotic command-line tools cool, but they never stick because I'm constantly sshing/using/targeting a variety of systems I don't own that don't have them readily installed.


If you have access to the build tools, you can compile and install stuff in your home directory.

A bit tedious, but you would still get to use your favourite tools.
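
For example (paths and package names just illustrative):

  # classic autotools project
  ./configure --prefix="$HOME/.local" && make && make install

  # Rust-based tools, if cargo is available
  cargo install ripgrep fd-find bat

  export PATH="$HOME/.local/bin:$HOME/.cargo/bin:$PATH"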


Encryptable, portable home directories are one of the goals of systemd. I can’t wait until one day when I just ssh in and it’s all there.



As much as I do use a number of these tools, there's something to be said for being able to use any box you ssh or log into that has the default tools available on any *nix system: ls, cat, find, etc. I'm a bit hypocritical in that I do use rg and fd, but I just can't bring myself to deviate too far from the 'default'.


I often use the term “standard” in place of “default” there.


Really bad name for "dot"; it has always been the Graphviz interface.


Indeed, and graphviz's dot is really useful, too. It makes it really easy to make all sorts of graphs. For example, if I wanted to make a dependency graph of all the packages I have installed in Archlinux, I can use it like this:

  pacman -Qq | xargs -r pacman -Qi | awk '
    BEGIN { print "digraph deps {" }
    /^Name/ { n = $3 }
    /^Depends On/ {
      for (i = 4; i <= NF; i++)
        print "  \"" n "\" -> \"" $i "\";"
    }
    END { print "}" }
  ' | dot -Tsvg > package-deps.svg


The guide mentions a dotfile manager, but here's something I found[0] on HN a while ago that's really cool: just make your home folder a git repository.

[0] https://news.ycombinator.com/item?id=11071754
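
One popular variant of the trick uses a bare repo so the rest of $HOME doesn't show up as untracked — roughly like this (the `config` alias name is arbitrary):

  git init --bare "$HOME/.dotfiles"
  alias config='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
  config config status.showUntrackedFiles no
  config add ~/.vimrc && config commit -m "track vimrc"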


Amazing how many of those are written in Rust and Go.


I don't think it says much about the tools themselves though, it just makes sense when you look at the author's GitHub and see a bunch of Rust projects.


I think there are a couple of relevant factors

- Can be distributed as a single binary, not requiring an interpreter or virtual environment.

- More fun to make a hobby tool in, due to minimal footguns compared to C/C++.

- Really good dependency management and build tooling, making it easy to compose these CLIs out of powerful building blocks (at least for Rust). For example, ripgrep is broken up into a lot of packages that you can compose together to make your own custom tool.


CLI tools have pretty strong needs to be self contained exes that start up fast. That rules out a lot of the more popular languages.


unless I'm missing something, jq isn't[1]

[1] https://github.com/stedolan/jq/tree/master/src


Yeah that's an error


The reason for that is more the user than the tools themselves in my opinion.


I found this too. I also found the only one I use is written in C - tig.


Can someone explain the misuse of cat, which bat solves?


Simply using cat to dump a single file to STDOUT is technically a misuse because cat is theoretically intended to concatenate files. This 'misuse' is 'solved' by bat because bat seems to be primarily intended for its syntax highlighting and git-related features; according to bat's docs, cat-style concatenation of files is intended for drop-in compatibility with traditional cat.

That said, the idea that using cat to show a single file is "misusing" cat is prescriptivist rules-lawyering. There is no technical reason against using cat for non-concatenation purposes. The descriptivist interpretation of cat says using the tool for non-concatenation purposes is fine, since that's how people are actually using it.

However, note that cat is often misused as a substitute for STDIN redirection:

    # this creates an extra process that wastes
    # time copying STDIN to STDOUT
    cat "${inputfile}" | do_stuff

    # just connect inputfile directly to STDIN
    do_stuff <"${inputfile}"

    # or if you want to keep the input
    # at the start of the pipeline
    <"${inputfile}" do_stuff


I guess the author was thinking of how cat's original purpose is to concatenate multiple files, not show just one file. (But I certainly don't think using cat in the latter way is a misuse.)

There's also a commonly noted "unnecessary use of cat" where people do this:

  cat file.txt | grep foo
instead of this:

  <file.txt grep foo
but that's not relevant to bat (which can be used unnecessarily in the same way).


Or ‘grep foo file.txt’


What's the name of the bash feature with the '<' before the filename? I want to read the docs on it but I don't even know what to search for.



Catting a file will make your terminal interpret escape codes in the file.

For instance:

    $ echo -e "\033]0;${USER} is an unfriendly person\007" > test-file.txt
Then, many days later:

    $ cat test-file.txt
will change your terminal title.

With less, you _can_ interpret escape codes, but usually you don't, and I consider this the correct default.


in your example it is not cat who "interprets" the escapes, but echo


Neither one interprets the sequence, but cat will print the file unfiltered to stdout, and stdout is processed by your terminal, and then the terminal will interpret it. less filters before printing to the terminal.


I meant that "echo -e" transforms the four characters "\033" into a single byte, for example. Thus, the sequence is "interpreted" by echo. Then, the cat program just copies the bytes without looking at them. If you want "cat" to escape these bytes so that they are not seen by the terminal you can use the "-v" option.


The sequence that translates to an escape code is interpreted by echo. The escape code is passed unfiltered by cat for the terminal to interpret.


Tried it -- didn't work. Single-quoting didn't work either. Terminal is urxvt.

What is clear is that

  $ cat test-file.txt
does not print the initially-echoed text, but it is escaped and up to nefarious tasks.


The escape sequence is terminal specific. urxvt is likely different from xterm in this regard.


That's a surprising and interesting bit of knowledge, thanks


'less -R' is useful because it will honour colour control codes while escaping other ones (like title changes, terminal bell, etc)


They are referring to using cat to print out a file to the terminal. I'm not sure that's a "misuse", but personally I've never done it, using "more" or "less" instead. "bat" really is a replacement for more/less, only with additional features such as syntax highlighting in source code.


I use:

    cless() {
      # -R lets less pass through the ANSI colour codes pygmentize emits
      pygmentize -O style=<style> "$1" 2>/dev/null | less -R
    }
I have programs that depend on pygmentize so I am not sure I will use bat anytime soon when I already have pygmentize and less, or at least not for it having syntax highlighting.


That use is the exact reason why LESSPIPE exists, check out man less & https://www-zeuthen.desy.de/~friebel/unix/lesspipe.html


Maybe, but I already had pygmentize, I already had less, and this small function took me less than 10 seconds to type in and start using immediately (and I use a particular style that pygmentize provides so that is a plus). :P In any case, thanks for sharing.


Oh, I forgot to mention that LESSPIPE is a mechanism to automatically pipe the contents of a file through a filter when opening it in less, and that the linked LESSPIPE script has an option to use pygmentize for syntax highlighting.

It’s a tool for enhancing less & pygmentize, not replacing them.
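
For reference, wiring it up is usually just an environment variable (the script path varies by distro/install):

    export LESSOPEN="|/path/to/lesspipe.sh %s"
    less some-file.py    # now run through the filter (and highlighted) automatically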

Well, just providing info for everyone. I’m not trying to force you to use that script :-)


> Well, just providing info for everyone. I’m not trying to force you using that script :-)

Yeah of course, sorry, I admit my comment did seem defensive! Thank you for sharing, really. :)


I just use nano -v.


Using cat to catenate a file to standard output isn't misuse, but it looks like bat performs syntax highlighting.


Google "UUOC" - it's famous.

Edit: Sorry downvoter, thought I was being helpful - no-one mentioned the acronym yet. There's a lot online about it, more than I'm qualified to explain. Entertaining reading too. I remember reading about the UUOC Awards years ago..


I guess they just meant it as in "reading code," when in fact bat is a better tool just for reading code.


No mention of things that have gained a lot of traction, like `ag` and `fzf`

ag - the silver searcher: a fast, parallelized, recursive grep that can abide by things like `.gitignore`

fzf - fuzzy finder

powerline-shell - $PS1 on steroids


It does mention rg which is in the same category as ag and, maybe this is my own bias, seems to have more mindshare now.


Agreed - I used to use Ag but switched to Ripgrep. Better in every single way, in my opinion. It's obscenely fast and has good defaults


I find rg to be faster than ag personally


On a simple case, `time rg ^:debug ~/projects`, `ag` appears to be about 10% faster for me, which is negligible for most cases anyway.

Maybe rg is faster on more complex cases?


Can you reproduce it in a public corpus and file a bug report? ripgrep should never be slower than ag.


ripgrep >> ag

and skim is a rust clone of fzf


What about yq, for parsing the increasing amount of yaml? Though there seems to be multiple competing tools by the same name...

https://github.com/mikefarah/yq

https://github.com/kislyuk/yq
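
Usage differs between the two; the kislyuk one takes jq filters directly, roughly like this (file and path are illustrative), while mikefarah's has its own path syntax:

  yq -r '.spec.containers[0].image' deployment.yaml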


in order to avoid having to unlearn commands like cat and ls, I use aliases to invoke bat and exa.

Here's a screenshot of my fish config: https://twitter.com/mxschumacher/status/1168993005744918528


For clearing the screen, try 'Ctrl-l' (lowercase L)

A quick lookup suggests fish handles some of these differently than bash (apparently fish clears the buffer with this shortcut, so your scroll back is gone?).

There are quite a few of these for anyone interested: https://kapeli.com/cheat_sheets/Bash_Shortcuts.docset/Conten...


I would do

  alias ls "exa"
  alias ll "exa -ll"
That way you get the nice shorter output, which comes in handy for piping ls output into other things, like:

  ls -d | xargs ls
Lists the files in subdirectories.


good idea - thank you!


Nice post! Surprised to not see httpie mentioned.


Lots of cool stuff in here. I’m going to have a long look at Restic and Syncthing.

I’d like to know why Skim instead of FZF. They are pretty similar but I’ve been using the latter for years and would like to know of any possible advantages to it.


There's a lot of these pickers and they are indeed similar. I've been using https://github.com/jhawthorn/fzy since it's not fullscreen and has a really good matching algorithm.


fzy is cool and probably underrated. It's the preferred fuzzy utility for file manager nnn too.


I loooove syncthing. Something I set up recently is a folder called .home_sync to sync my dotfiles between computers. I have a bash script that sets up the symlinks, and version control turned on for the syncthing folder so I can have a record of any changes if I need to revert something.
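
The symlink part is tiny; a minimal sketch, assuming the dotfiles live directly under ~/.home_sync:

  for f in "$HOME"/.home_sync/.[!.]* "$HOME"/.home_sync/*; do
    [ -e "$f" ] && ln -sfn "$f" "$HOME/$(basename "$f")"
  done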


I've tried both Skim and fzf, and in my experience fzf has a much better fuzzy matching algorithm so that's what I'd recommend using. Plus it supports true color configuration fwiw.


My code editor does a fair job of finding things for me, I know I won't use ripgrep/fzf that often :shrug: On the other hand I would recommend fish shell (replacement for bash) with z plugin and http (replacement for curl).


An alternative to z, which I find a little bit "smarter" is autojump.


What’s the terminal prompt shown in the graphics?



