Awesome but commonly unknown Linux Commands (anchor.com.au)
167 points by kezzah on Aug 10, 2011 | 87 comments



If you're running a modern Linux desktop, you should have a "notify-send" command which will cause a message to be displayed by your desktop-environment's pop-up notification system (on Ubuntu, this command is in the libnotify-bin package). I have two scripts I keep around which I call "notify-success" and "notify-failure":

notify-success:

    notify-send --icon=gtk-dialog-info Success! "$*"
notify-failure:

    notify-send --icon=gtk-dialog-warning Failure! "$*"
When I want to run a long-running command and don't want to have to keep checking on it, I'll do something like this:

    make -j3 && notify-success "Build complete" || notify-failure "Build failed"


With tmux, you can monitor the window for activity (^A M) and do:

  make -j3 >make.log 2>&1 ; echo $?
When make finishes, your prompt is printed again (activity!) and the window's caption is highlighted. I use the same mechanism to watch for new mail and chats in other windows. It's like Growl for your terminal.
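For reference, the underlying tmux option is monitor-activity (the ^A M binding is the commenter's own setup; stock tmux uses the ^B prefix):

    # enable activity monitoring for the current window
    tmux set-window-option monitor-activity on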


I just have a script called 'bell': echo -ne "\a" (the -e is needed for bash's echo to interpret the escape; plain sh's echo often does it by default)

My terminal (urxvt) will detect the bell and set the 'urgent' flag on the window, which is detected by Awesome, which sets the appropriate tag* icon to red.

* Tag in Awesome is more or less akin to a virtual desktop.


I do just that, but I also add audible notifications using "play" for when I am not looking at the screen.

Use "--expire-time" argument to "notify-send" to control how long the message is displayed.


I just use a few aliased beep sequences so I can have an audio notification. I never have much need for more than a few at a time, so I've only written three such aliased beep tunes. Well, four really; I always have a special keyboard shortcut set to immediately run the one named "little_melody". By pressing down... well, you get the idea.


On a mac I use the `say` command.

    make && say "I'm done."


On a mac, you could also use growlnotify.

    growlnotify -m "Hey I am done!" -t "Done!"


Also useful is the `-s` option to make the notification sticky

    growlnotify -s -t 'Done!' -m "And I'm not going away until you click me"


All these article links should just end up here:

http://www.commandlinefu.com/commands/browse

You can sort by date or by number of votes, and also search.

The top all-time popular one is:

* run last command as root: $ sudo !!


To be fair, that's not even a command; that's a shell expansion that automatically substitutes your previous command in place of the !! before executing it.


Thank you for another '...and this is why I love HN' moment.


Just be aware it may not execute the command you think. man bash for HISTIGNORE. Personally I never use it with sudo; it's too dangerous.


This is also why I disabled HISTIGNORE in both bash and zsh. zsh also allows me to press TAB to expand the command in place before I press enter, so I can still use the shortcut without the danger.


Check this out -- CommandLineFu One-Liners Explained:

http://www.catonmat.net/blog/top-ten-one-liners-from-command...


One I use all the time is this:

    $ pwd
    /Users/me/Sites/foo
    $ pushd .
    $ cd /
    $ pwd
    /
    $ popd
    $ pwd
    /Users/me/Sites/foo

Or in English: If you're in a directory that you need to leave but you know you'll go back there in a minute, use "pushd ." to save that directory. Then go off and do whatever you need to, and when you need to return to the saved directory, use "popd" to take you straight back there.


You can also use `cd -` to return to the previous directory.


git "borrowed" this idea too. You can use `git checkout -` to checkout the last thing you checked out; very handy when merging.


Python sort of did too: in the interactive interpreter, _ holds the result of the last expression.

    >>> [testfunc(x) for x in testlist]
    <successful result>
    >>> success = _
    >>> print _
    <successful result>

It's useful when using the interpreter as a calculator, or when you're bashing out some calculation you can never remember how to do properly.


The same thing also works in irb.

    >> "foo"
    => "foo"
    >> bar = _
    => "foo"
    >> bar
    => "foo"


in bash you can use $_ to access the last argument of the previous command. It lets you do things like this:

    $ pwd
    /home/foo
    $ mkdir -p bar/baz/qux && cd $_
    $ pwd
    /home/foo/bar/baz/qux


Yeah, but it remembers only one directory. With pushd/popd you can navigate as much as you want, and then popd back to the directory you intended to bookmark.


yeah, I switched to cd - for a while but got bitten by this too many times:

    $ pwd
    /some/long/compli/cated/v1.0.7/path/to/_stuff
    $ cd /simple/
    $ do -stuff
    $ cd ./foo
    $ do -stuff
    $ rem now let's get back to that complicated directory again
    $ cd -

oh, wait, I cd'ed twice, shit


If you find yourself in this spot again, a Ctrl-R history search for `cd /som` helps.


  $ pushd .
  $ cd /
Why not just

  $ pushd /
?

Also, it's not Linux-specific, but bash-specific.


It's not Linux or Bash specific, pushd and popd work on Windows too, and work on network shares, e.g.

  c:\> pushd \\server\share
  z:\>


It's still provided by your shell, not the OS. Windows' cmd.exe just happens to provide pushd/popd, same as bash, zsh, etc.


     $ pushd /
oh, neat. Thanks.


You can also use $CDPATH to jump easily between directories. For example, imagine you have the folder ~/dev/python/my_awesome_project.

If you set CDPATH to '.:~/dev/python', you can jump straight to your project just by typing cd my_awesome_project; it doesn't matter where you actually are in your filesystem!

I use it heavily together with cd -; you should give it a try!
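A minimal sketch of the idea (the paths are the commenter's example):

    export CDPATH=".:$HOME/dev/python"
    cd /tmp                  # somewhere unrelated
    cd my_awesome_project    # found via CDPATH; cd prints the resolved path
    pwd                      # e.g. /home/you/dev/python/my_awesome_project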


I really like $CDPATH, but there are tons of sloppy scripts out there that assume "cd $FOO" has no output and break when $CDPATH is set. Drives me nuts.


It's really handy for scripts too. pushd/popd definitely live up to the word awesome.

For interactive use zsh has an option called autopushd that automatically pushes directories you `cd` to. I never remember to use pushd so it's a nice convenience.
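In ~/.zshrc that's just (both are standard zsh options):

    setopt autopushd     # every plain `cd` also pushes the old directory
    setopt pushdsilent   # optional: don't print the stack on each cd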


It's not just handy for scripts; I think it's imperative in scripts where you want to go in and out of a lot of directories.

  for i in foo bar baz; do
     cd $i;
     #... do something
     cd ..;
  done;
if "bar" doesn't exist, your cd .. will throw you off your original directory, whereas

  for i in foo bar baz; do
     pushd $i;
     #... do something
     popd;
  done;
you'll be guaranteed that after "popd" you're back in the right place to start a new iteration.


You really, really should use 'set -e' which will exit if there are any errors. Otherwise if 'bar' doesn't exist, you'll 'do something' in a wrong directory and frobnicate something you didn't want frobnicated.
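A sketch of the loop with that fix applied (directory names from the example above):

    #!/bin/bash
    set -e                      # abort as soon as any command fails
    for i in foo bar baz; do
        pushd "$i"              # if bar doesn't exist, the script stops here
        # ... do something
        popd
    done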


I never was comfortable using pushd/popd.

But a few months ago found this.

https://github.com/joelthelion/autojump/wiki


On Linux I very regularly use 'strace', to see what a program is currently doing (in terms of syscalls), what files were opened/read/written/closed, etc.


strace is the Daddy of debugging, especially when you can attach it to an already-running program using its PID. I have solved so many permissions problems with strace where the application itself just failed without logging anything.
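For example, attaching to an already-running process (the PID here is hypothetical):

    # -p attaches to PID 1234, -f follows forks,
    # -e trace=file limits output to file-related syscalls
    strace -f -p 1234 -e trace=file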


Similar experience here, except that I've solved similar issues (and far more complicated issues) with DTrace instead of strace. The range of information that DTrace can retrieve about a running application surpasses anything that strace can retrieve. DTrace can retrieve information on a systemic scale, whereas each strace instance operates on a single process. And the overhead is significantly lower with DTrace.

For many of the systems applications I design for Illumos, I've used DTrace probes as a way of logging very frequent events on demand (to avoid frequent IO). All of the events that _must_ be logged for the application to function properly are logged and fsync'd.

I think that in most systems dynamic tracing will eventually replace a significant portion of the logging functionality that people code into their applications.

Either way, if you think strace is sweet, give DTrace a spin.
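If you want a quick taste, a classic one-liner (needs root) that counts system calls per process until you hit Ctrl-C:

    dtrace -n 'syscall:::entry { @[execname] = count(); }'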

Some helpful links:

[0] A video demonstration of DTrace, by the creator of DTrace.

[0] http://www.youtube.com/watch?v=6chLw2aodYQ

[1] A post I wrote, demonstrating DTrace. Similar posts can be found on the blog's dtrace-addict page.

[1] http://nickziv.wordpress.com/2011/04/08/adventures-of-a-dtra...

[2] A wiki that contains DTrace examples for various languages and system facilities. Some examples may be Illumos-centric.

[2] http://www.solarisinternals.com/wiki/index.php/DTrace_Topics

[3] DTrace's home. Contains blogs by the engineers behind DTrace.

[3] http://www.dtrace.org

UPDATE: Meant to reply to parent's parent.


systemtap is another utility in this vein that's worth checking out

http://sourceware.org/systemtap/


I already checked it out and the timer probes don't work at all.

On one system, I couldn't even invoke kernel functions.

On another system, the kernel panicked as soon as I executed `stap -e ...`.

Basically unusable, at this point.

I hope it improves because developers on linux could really benefit from dynamic tracing.


If the application fails, wouldn't it just shut down and the process end? Meaning that I can't attach strace anymore? Do you have suggestions for that use case? How would I know what the process ID is going to be before I start the program?


    $ strace program args


And on OS X you have DTrace (from Solaris), which can be used via dtruss, a shell-script wrapper around DTrace.

Also, in Developer > Applications there is an app called 'Instruments', which is an excellent GUI front-end to DTrace.


try autrace. also gdb, of course.


strace, combined with lsof and netstat, can help you with so many things it's not even funny.

I've debugged stalled processes across 3 separate servers with these three tools alone.


watch is rather handy:

    # watch packet counters
    watch ifconfig

    # watch tcp connections
    watch netstat -plan

    # watch the load average
    watch uptime

    # watch directory contents
    watch ls -l
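Two flags worth knowing: -n sets the refresh interval, and -d highlights whatever changed since the last update, e.g.

    # refresh every 2 seconds, highlighting differences
    watch -d -n 2 'netstat -plan'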


Meh, I was hoping for at least semi-obscure things like the 'moreutils' utils.

Speaking of which, many people don't know about the 'moreutils' utils. Check them out ;)



I see sponge as somewhat useless; how is `| sponge /etc/passwd` different from `> /etc/passwd`?


The redirect happens before the command is executed, so if you're trying to read from /etc/passwd, it's already been overwritten. Sponge will buffer the output and write the file after the first command has executed.
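For example (the grep pattern is just illustrative):

    grep -v foo file > file           # shell truncates 'file' first: empty result
    grep -v foo file | sponge file    # sponge soaks up everything, then writes 'file'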


Ah, excuse my ignorance.


He explained it on the site - if you are trying to use grep to trim contents from a file and output them to the same file, you will end up with an empty file.

Seriously though - everyone knows this. Just use a temp file.

He says sponge "keeps the results in memory" - that's a problem if the file is huge.

Plus there are other tools to do in-place replacement.


pstree prints out a tree of all the running processes, arranged by who spawned whom.
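A couple of standard flags make it even more useful:

    # -p adds PIDs, -u shows where the uid changes
    pstree -pu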


Anchor's blog is worth subscribing to if you're a sysadmin. The technical content is good, and they can be merciless yet amusing when writing about software that fails to meet their standards of good taste (see http://www.anchor.com.au/blog/tag/fail/).


A ridiculously unknown tool is "expect". Automates anything. Saved me hours of typing.

Wrote about it (use case + example) here: http://www.chocobrain.com/Advanced-terminal-automation-with-...
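A minimal sketch of what an expect script looks like (host, user, and password are placeholders, and hard-coding a password is for illustration only):

    expect <<'EOF'
    # "assword:" matches both "Password:" and "password:"
    spawn ssh user@host.example.com uptime
    expect "assword:"
    send "secret\r"
    expect eof
    EOF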


xargs. Too few people know xargs. Every time I see a "find" with "-exec" in it, my soul cries a little bit. (Yes, sometimes it might be necessary, but in the vast majority of cases not.)


And too few people know that GNU Parallel[1] is most of the time a better xargs than xargs itself.

[1]: http://www.gnu.org/s/parallel/


How is it different from xargs -P?



Thanks, I clicked the documentation link at the top of the page and gave up when it didn't go anywhere useful.


I thought find -exec was the preferred way of doing it (better with spaces, not limited in command line length, etc). Why would it be better to use xargs?


With GNU tools, at least, xargs has a "-0" option to go hand in hand with GNU find's "-print0" option, which obviates the spaces problem.

xargs is "better" in that it spawns one new process for a whole batch of the things found; -exec (with \;) spawns one new process for EVERY thing found.

So:

    find . -name '*.log' -exec rm {} \;
spawns an rm for every log file.

    find . -name '*.log' -print0 | xargs -0 rm
spawns 1 rm for MANY log files. (Yes, I know zsh can do stuff like this too.)

xargs also has options to limit command-line length or the number of items per invocation, if you want that. find -exec (with \;) is then about the same as find ... | xargs -n1


find -exec is almost always faster and more secure than xargs.


It is my understanding that -exec is almost always slower, since it generally needs to invoke whatever program you are using many many times more.


Whereas I almost always use for i in `find . -name foo`; do mv "$i" "$i-new"; done

That's very straightforward, and gets the job done without much weirdness.


I'd like to hear some justification, cites, or examples for either of these assertions.


I was mis-remembering the "faster" part. It is slower unless you have a version that supports the "-exec {} +" option. However, for security and robustness, see this:

http://stackoverflow.com/questions/896808/find-exec-cmd-vs-x... http://www.gnu.org/software/findutils/manual/html_node/find_...
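For reference, the batching form mentioned above looks like this:

    # like xargs: one rm invocation receives many filenames at once
    find . -name '*.log' -exec rm {} +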


Does find have an equivalent of xargs' "-P max" flag?


You can emulate it with sem:

    find . -exec sem --id my_find -j3 sleep 4\; echo {} \;
See more about sem: http://www.gnu.org/software/parallel/sem.html



In the 'top' family, iotop is extremely useful for identifying bottlenecks. It shows you the I/O rate per process, which wasn't even possible until a recent kernel.


iptraf sounds interesting. I was looking for just such a program the other day.

I was sitting there and noticed with my iStat monitor that I was uploading something at 250 KB/sec. I closed Chrome and eventually all running programs in the Dock, yet it still continued.

I tried to find out WHAT was uploading that, but to no avail. Any suggestions for tools? I ended up trying iftop, lsof -i, and netstat to get a glimpse, but it stopped before I could get to the bottom of it.



Nethogs is great when you need to determine which processes are transferring data, or how much they're transferring. I got curious how it works, so I ran it through strace and looked through the source.

/proc/net/tcp lists all established TCP connections, including local and remote addresses and ports, and the inode for each socket. Nethogs sniffs traffic and associates it with its entry in /proc/net/tcp. It takes the inode from there and scans through /proc/<pid>/fd/ looking for the file descriptor with that inode, to determine which process has the socket open. Once it finds the process, it adds it to a table of inode-to-process-id mappings so it doesn't have to scan through /proc again the next time a packet for that connection comes through.
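You can do the same inode-to-process lookup by hand; a sketch with GNU find (the inode number here is hypothetical, taken from a /proc/net/tcp entry):

    # which process owns the socket with inode 123456?
    find /proc -maxdepth 3 -path '/proc/[0-9]*/fd/*' \
        -lname 'socket:\[123456\]' 2>/dev/null
    # prints e.g. /proc/4242/fd/7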


Wireshark or some other packet sniffer can be used to sniff the packets. From the packets you'll get a pretty big clue what's going on. If it's a connection-oriented protocol, you'll be able to trace back to the source ports with lsof, but for this use case often just a glance at what's coming out will be enough to give it away.


lsof is also great for monitoring network activity: you can have it tell you about all the tcp/udp connections, as well as who's attached to which port.


    lsof -i
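You can narrow it down further with lsof's filtering flags:

    # established TCP connections only, with numeric hosts and ports
    lsof -nP -iTCP -sTCP:ESTABLISHED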


Here are the tools I find indispensable as a sys admin:

- screen for accessing servers

- lshw for seeing Linux server hardware config

- cfengine for automating my system administration

- atop as an advanced top

http://www.atoptool.nl/downloadatop.php

atop uses color to show when a subsystem goes over a warning/critical threshold. It can be run live, or used to go back in time and "play back the tape".


"at" is very handy for scheduling one-time scripts/commands

  echo my_script.py | at 4am tomorrow
then list queued jobs with

  atq

note: it's not enabled by default on Mac OS X, see

  man atrun


While we're mentioning tools that break the "prog | xargs .." pattern, I've found ack to be pretty useful. ( http://betterthangrep.com/ )


Also good - bashmarks - directory bookmarks for the shell.

http://www.huyng.com/bashmarks-directory-bookmarks-for-the-s...


I think there's a critical one missing in this list: https://github.com/busyloop/lolcat



qmv from renameutils will bring up $EDITOR with all files in the current directory in two columns. Edit some filenames and save. The files will be renamed accordingly.
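A minimal usage sketch:

    # open all the .jpg names in $EDITOR; save and quit to apply the renames
    qmv *.jpg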


'rename' uses arbitrary Perl expressions, including regexes. From the manpage:

  rename 's/\.bak$//' *.bak


No autojump? ;-)


I'm surprised when I see all this, as if any of these super-basic commands or programs were rare ;-)


> Want to kill all processes being run by a given user? Issue a pkill -U USERNAME; sure beats the hell out of...

similarly: killall -u USERNAME

OFWGKTA


strace

nuff said.



