Unix tricks (ub.es)
623 points by shawndumas on March 7, 2013 | 224 comments



    '!!:n' selects the nth argument of the last command, and '!$' the last arg
A lot of people know about "!$" (which is shorthand for !!:$), but that's just the tip of Bash's history expansion. I use these things all the time. One of my favorite keystroke savers is adding :h, the head modifier, to !$. For example:

    $ cp file.txt /some/annoyingly/deep/target/directory/other.txt
    $ cd !$:h
    $ pwd # => /some/annoyingly/deep/target/directory
Once you understand how each component works it's easier to put them together into new (to you) combinations. For example, once you know that !$ is shorthand for !!:$, it's not a huge leap to reason out that you can use !-2:$ to get the last argument to the 2nd-to-last command. Or !ls:$ for the last arg to the most recent `ls` command.
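
A quick illustrative transcript (the paths are made up):

    $ touch /tmp/a/one.txt
    $ touch /tmp/b/two.txt
    $ echo !-2:$ !touch:$ !$:h
    echo /tmp/a/one.txt /tmp/b/two.txt /tmp/b
    /tmp/a/one.txt /tmp/b/two.txt /tmp/b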

I also prefer to do substitution with the :s modifier rather than ^ as suggested at the link, for consistency's sake:

    $ echo "foo bar"
    foo bar
    $ echo !!:s/bar/baz
    foo baz
    $ echo !?bar?:s/foo/qux
    qux bar
Relevant Bash manual pages:

http://www.gnu.org/software/bash/manual/html_node/Event-Desi...

http://www.gnu.org/software/bash/manual/html_node/Word-Desig...

http://www.gnu.org/software/bash/manual/html_node/Modifiers....


Another nice ending is `:p` to print the command instead of executing it. I use this if I'm doing something complicated and I want to make sure it's right. Or if I'm saying `!-n:foo` with n>2. Then just up-arrow and enter to run it for real.


Here is a detail about using '!!:n' with ':p' when iteratively constructing a complex command. I want to emphasize the use of the up-arrow (which will show the interpolated arguments), as opposed to retyping exactly what you wrote in the previous command, because the line printed by ':p' becomes a history entry in its own right:

    $ echo a b c d
    a b c d
    $ echo !!:2:p
    echo b
    $ echo !!:2
    -bash: :2: bad word specifier # there was only 1 argument in last command
However:

    $ echo a b c d
    a b c d
    $ echo !!:2:p
    echo b
    $
    <up-arrow pressed once will give the following prompt> 
    $ echo b


I use magic-space for that in my inputrc:

  $if Bash
    Space: magic-space
  $endif
Basically does the same thing as :p, but after a space instead of enter.

Edit: fixed formatting.


  shopt -s histverify
to show the expanded command before executing it. Then just enter. I never get these right the first time.
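
For example (the command here is just an arbitrary stand-in):

    $ shopt -s histverify
    $ echo something long and easy to mistype
    something long and easy to mistype
    $ sudo !!
    $ sudo echo something long and easy to mistype

The expanded line is loaded back onto the prompt for review instead of running immediately; hitting enter again executes it.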


In zsh, you can hit <tab> to expand it in place before hitting <enter>.


Alt + . – use the last word of the previous command.

    $ cp file.txt /some/annoyingly/deep/target/directory/other.txt
    $ cd then press Alt + .
    $ pwd # => /some/annoyingly/deep/target/directory


No, that gives the last word, as you say; we want the dirname of the last word.

    $ echo foo/bar
    foo/bar
    $ echo !$ !$:h
    foo/bar foo


So you say cd M-. M-backspace M-backspace instead of cd M-.. Four keystrokes is still an improvement over shift-1 shift-4 shift-; h, which is five, plus you get to see where you're going to go before you get there.


You're incorrect. The number of M-backspaces you need depends on how many `little' words make up the part that needs deleting, e.g. foo/2013-03-13 needed three M-backspaces to rid me of the date and a further backspace to remove the trailing slash, a slash which isn't insignificant to all commands, e.g. ls -ld bar/ when bar is a symlink.

In comparison, !$:h understands its task at a higher level. And thanks to key rollover, typing different characters, like !$:h, is quicker than tapping away at . until visual feedback, which may lag, tells me I've done enough.


It's true that M-backspace is occasionally less convenient, but it's usually more convenient.


Good point. I usually don't supply the file name of the second argument. Then it would be equivalent.


A sometimes handy addition to the s modifier is g, which replaces all instances of the pattern instead of just the first.

    $ echo "foo foo"
    foo foo
    $ echo !!:s/foo/bar
    echo echo "bar foo"
    echo bar foo

    $ echo "foo foo"
    foo foo
    $ echo !!:gs/foo/bar
    echo echo "bar bar"
    echo bar bar


  $ !!
runs the previous command.

This is especially useful:

  $ !!
  $ sudo !!


That first example is great, thanks. I run into that all the time. Up until now I've been doing this:

$ cp file.txt /some/annoyingly/deep/target/directory/other.txt

$ cd !$ (No such file or directory)

(up arrow and then backspace to directory)


1) `pgrep` is a standard utility that does what his `psgrep` does and much much more.

2) htop is a cpu and memory hog -- every time I've used it I noticed it takes 6+% CPU time

3) there's an awk trick to do the `sort | uniq` recommendation that works on 10+GB files (single pass; a commented version follows this list):

    awk '!x[$0]++'
4) Passwordless keys are dangerous -- use ssh-agent to save the password of the keys
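
For reference, a commented version of the awk one-liner from point 3 (functionally identical to the original):

    # print each input line only the first time it is seen:
    # x[$0] looks the current line up in the associative array x; it is 0
    # (false) the first time, so !x[$0] is true and the line is printed
    # (awk's default action). x[$0]++ then bumps the counter, so later
    # copies of the same line are skipped.
    awk '!x[$0]++' file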


Not trying to contradict you, just some explanations

1. I prefer 'psgrep' because it covers 99% of my use cases for pgrep (ps axuf | grep $NAME)

2. htop is very nice, come on! I wouldn't leave it running in the background for hours, but it's nicer than top

3. Note taken, thanks!

4. Is ssh-agent really safer than using passwordless keys? Just asking, I'm curious


1. `man pgrep` is your friend (you save two greps, and TBH the `grep -v grep` should be a hint that there's a better way)

2. In my experience on Debian (granted this was in 2010), there is a noticeable performance difference between `htop` and `top`.

4. ssh-agent stores the password in memory and it is erased on reboot. OTOH, if you use a passwordless key file, anyone who has the key can use it.


Using ssh-agent also means you can practically put a very large pass phrase on your SSH key, because you'll only type it infrequently. Good luck brute forcing my passphrase.


I'm pretty sure ssh-agent doesn't store the password, but the private key. Also, the fact that it supports timed expire (and can be setup to drop keys upon events such as screen lock) make it a wiser choice than passwordless keys.


That's correct. And ssh-agent doesn't give access to the private key either, only to perform operations like signing. The only way to extract the key is to search for it in the process memory, which I believe would require root-level access.


1. I always went with

  % ps auwx | grep '[f]oo' | awk '{ print $2; }'


re: 4: not only is an ssh agent by far safer, but most agents now allow you to set a timeout on a key, so it's not indefinitely saved in memory.

A passwordless key gives anyone with access to that file access to the login associated with it. If that file is inadvertently exposed (oops, checked it into github...), any machine you have a login on must be considered compromised.
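
With OpenSSH's agent, for example, you can also give each key a lifetime when you add it (the key path here is just an example):

    $ ssh-add -t 1h ~/.ssh/id_rsa    # the agent forgets the key after one hour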


ssh-agent and 'ssh -A' is also useful if you have to login to one machine to access another, without having to copy your private key to the first machine.

For example if you login remotely to a machine, and want to access a git repository on another:

  $ eval `ssh-agent -s`
  $ ssh-add ~/.ssh/id_<yourkey>
  $ ssh -A <firstserver>
  you@firstserver$ git clone git+ssh://<secondserver>/path/to/repository


I use agent forwarding often, but you still need to be careful, especially if you forward your agent to a machine not under your control. From the ssh man page:

Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent.

Consider using a dedicated key for each of those circumstances, set sane defaults in your ~/.ssh/config on all machines, and be very careful about what ends up in any of your ~/.ssh/known_hosts files, as they provide a road map to other destinations.
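
A sketch of what such defaults might look like in ~/.ssh/config (hostnames and key names are made up):

    Host *
        ForwardAgent no
        HashKnownHosts yes

    Host trusted-box
        HostName trusted.example.com
        ForwardAgent yes
        IdentityFile ~/.ssh/id_trusted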


The ssh_config HashKnownHosts option hashes the contents of the known_hosts file, making it intractable to get a list of hosts. But of course your shell history will still provide it.


3) if all the lines of the 10+GB file are actually unique, wouldn't awk keep the whole file in RAM? For files larger than my RAM could this leave my system unresponsive because it's thrashing on swap?


The sort | uniq method literally needs to sort the file and pipe it to uniq, a far more memory-intensive operation than the single-pass awk check. You can write your own hash function in AWK if you think you may overstep memory, but of course you risk hash collisions. It's a tradeoff.

I tried it on a 1U server with 24GB RAM a few years ago and found that the sort was thrashing at the 10GB file size while awk handled it easily.


Or you can use `sort -u` and not have to pipe to `uniq`.


Sort will actually externally sort blocks into temp files and merge them. Adjusting this block size can help with thrashing.

Awk may still be better for uniquification.
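
For example, GNU sort lets you choose the buffer size and temp directory explicitly (the sizes and paths here are arbitrary):

    sort -S 4G -T /scratch/tmp -u big.txt > big.uniq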


What about 'sort -u' ?


I'm not sure. I haven't closely studied the difference between each algorithm. My guess would be that sort -u would perform better as the data set gets larger with a good block size setting because it does do an external sort. Cardinality would also affect the performance. If the unique set handily fits in memory, an external sort on a large data set wouldn't be very efficient.


Yes, awk does some real black magic. It's awesome how it can parse really, really big files.


Yes it will, and a bit (read: a lot) more than 10GB as it needs to store the contents of variable x in a hash table (with the corresponding hash key and value of the counter). There's no other magic way it can 'know' whether a particular line has been seen before. You can't rely on hash keys alone as the hashes aren't guaranteed to be unique.

For files with relatively few duplicates it's going to be a lot slower than sort | uniq.

Trying it on a 128MB file (nowhere near enough time to test a 10GB file) filled with lines of 7 random upper case characters[1] (so hardly any duplicates):-

    $ wc -l x.out
    16777216 x.out
    
    $ time ( sort x.out | uniq ) | wc -l
    16759719

    real    0m17.982s
    user    0m42.575s
    sys     0m0.876s
    
    $ time ( sort -u x.out ) | wc -l
    16759719

    real    0m20.582s
    user    0m43.775s
    sys     0m0.688s
Not much difference between "sort | uniq" and "sort -u".

As for the awk method:-

    $ time awk '!x[$0]++' x.out | wc -l
    
has been running for more than 20 minutes and still hasn't returned. For that 128MB file the awk process is also using 650MB of memory (according to ps). Will check up on it later (have to go out now).

This Linux machine has ~16GB of memory so the file was going to be completely cached in memory before the first test. All things being equal, the awk method will be roughly O(n) (i.e. linear against file size) and sort/uniq will be O(n log n). So, theoretically, the awk method will eventually surpass the sort method because it's having to do less work (it's only checking for a previously seen key rather than sorting the entire file), but I'm not sure the crossover will be anywhere useful if the file doesn't contain many duplicates.

Repeating it for a file containing lots of duplicates (same 128MB file size but contents are only the 7 letter words consisting of A or B, so only 128 possible entries):-

    $ time awk '!x[$0]++' y.out | wc -l
    128
    
    real    0m1.207s
    user    0m1.192s
    sys     0m0.016s
    
    $ time ( sort y.out | uniq ) | wc -l
    128

    real    0m14.320s
    user    0m31.414s
    sys     0m0.428s
    
    $ time ( sort -u y.out | uniq ) | wc -l
    128
    
    real    0m12.638s
    user    0m30.366s
    sys     0m0.188s
Notice that "sort -u" doesn't do anything clever for files with lots of duplicates.

So awk is much faster for files with lots of duplicates. No great surprises. When I get a chance I'll repeat it for a 1GB file and a 10GB file (with lots of duplicates otherwise the awk version will take far too long).

1. Example contents:-

EPQKHPH DLJCROB WICVGQY MHWTPSR HMPNECN


The awk run against a file containing almost no duplicates finished after over an hour (compared to 43sec for the sort method).

    $ time awk '!x[$0]++' x.out | wc -l
    16759719

    real    64m41.089s
    user    64m31.970s
    sys     0m3.136s
Peak memory usage (given that it was a 128MB input file) was (pid, rss, vsz, comm):

     8972 1239744 1246488  \_ awk
So > 1GB for a 128MB input file.


> Yes it will ... [need] to store the contents of variable x in a hash table [...] There's no other magic way it can 'know' whether a particular line has been seen before. You can't rely on hash keys alone as the hashes aren't guaranteed to be unique.

Technically that's true, but the result of a cryptographic hash like SHA-256 is (practically) guaranteed to be unique. Depending on the average length of an input line and how many of the lines are unique, storing only the SHA-256 hash value could take far less memory than storing the input lines along with a non-cryptographic 32-bit hash value.


You are describing a bad implementation of a bloom filter [1]. Anyway, people expect "uniq" to be correct in all cases (i.e., to never filter out a unique line). A default implementation where it would be possible (even with a minuscule chance) that this doesn't happen would be a recipe for disaster. It may be a cool option though ;)

http://en.wikipedia.org/wiki/Bloom_filter


No, this is not a bad implementation of a bloom filter, and the size of the minuscule chance matters; unless SHA-256 has a flaw in it that we don't know about, SHA-256 collisions are far less likely than undetected hardware errors in your computer. The universe contains roughly 2²⁶⁵ protons, 500 protons per distinct SHA-256 value, and has existed for roughly 2⁵⁸ seconds, which means there are roughly 2¹⁹⁸ SHA-256 values per second of the age of the universe.

Typical undetected bit error rates on hard disks are one error per 10¹⁴ bits, which is about 2⁴⁷. If your lines are about 64 characters long, you'll have an undetected bit error roughly every 2^(47-6-3) = 2³⁸ lines. SHA-256 will give you an undetected hash collision roughly every 2²⁵⁵ lines. That is, for every 2²¹⁷ disk errors, SHA-256 will introduce an additional error. If you're hashing a billion lines a second (2³⁰) then that will be 2^(217-30) = 2¹⁸⁷ seconds, while the disk is giving you an undetected bit error every minute or so. A year is about 2²⁵ seconds, so that's about 2¹⁶² years, about 10⁴⁹. By comparison, stars will cease to form in about 10¹⁴ years, all planets will be flung from their orbits around the burned-out remnants of stars by random gravitational perturbations in about 10¹⁵ years, the stellar remnants will cease to form cold, dark galaxies in about 10²⁰ years, and all protons will have decayed in about 10⁴⁰ years.

And if you somehow manage to keep running uniq on your very large file at a billion lines a second, in a mere 500 times the amount of time from the universe's birth to the time when nothing is left of matter but black holes, SHA-256 will have produced your first random false collision.


Another possibly relevant note: there are about 2¹⁴⁹ Planck times per second. None of the above takes into account the Landauer and related quantum-mechanical limits to computation, which may be more stringent.


On my computer htop consumes about the same amount of resources as top, just tested.


Feels very outdated.

1. Use zsh, not bash. AUTO_PUSHD, CORRECT_ALL and tons of other options make some tricks redundant. Also, the zle M-n and M-p are more useful than C-r imo.

2. Use tmux, not screen.

3. Use z (https://github.com/rupa/z), not j.py.

4. Use cron, not at. Or even systemd timer units, if you're so inclined.

5. Use public-key authentication and keychain, not password-based SSH.

6. Don't send emails from command-line naively; you have no control over the headers. Use git-send-email or similar.

7. Consider using something slightly more sophisticated than Python's SimpleHTTPServer to share files/folders. One example: woof (http://www.home.unix-ag.org/simon/woof.html)


I've given tmux two separate tries and both times it had screen corruption issues with curses apps. screen is tried (and tried, and tried) and true.

I've also invested enough time in learning bash over the years that zsh is not a net win for me. I've switched to using it on some systems but I seldom use more than what's available in bash. I would switch back to bash on these systems for consistency, but I feel like I've already sunk too much time into this experiment of using zsh.

don't be a zsh/tmux hipster; there's nothing wrong with "good enough". this lesson has played out several times as businesses / software with arguably better execution / implementation loses out to existing players that have been around a while.

Edit: though, if you're relatively new and haven't been using bash/screen/whatever for decades, I don't think anybody would call you a hipster for using zsh/tmux instead of bash/screen. The marginal utility of learning tmux or zsh is much higher for somebody who hasn't already used other stuff forever.


I take it you don't work on many disparate unix systems on a daily basis :) I find most of the time I'm lucky if there's even bash installed on the remote system, it's usually ksh. tmux? way too new. Python? Nope. Perl is the only scripting language I'd wager my balls on beyond awk if I hope to reuse the script again.

Sad, but I find this is generally the case in extremely large enterprises where there is a mix of AIX, HPUX, Linux and Solaris being used due to years of weird procurement decisions. Sigh.


Yes, that's exactly why those tricks "feel outdated". It's because they're made to run on systems from the early 2000s


I would recommend fasd (https://github.com/clvv/fasd) as a more feature complete alternative to z.


Isn't at just a cron helper for one-off jobs?


Sadly this isn't usually the case


So happy to see this on HN! Actually I took most of the tips from unix threads here and r/commandline, I highly recommend that subreddit if you like this kind of trick!

Edit: also, @climagic: https://twitter.com/climagic

Second edit: I'm enhancing the file with these comments, so if there are any inconsistencies between the txt and a commenter, it's my fault.


Holy crap! Topopardo! I used to follow your podcasts long ago :D. Anyway, nice list of tips.


The special bash command I'm most often asked about by shoulder surfers is !$.

It substitutes the last argument in the previous command into the current one.

For example:

  $ ls /some/long/path/somewhere/looking/around/
  <output>
  $ cd !$
  cd /some/long/path/somewhere/looking/around/


An arguably better (and slightly more portable) way to accomplish this is the special variable $_, which expands to the last argument of the previous command. Since $_ is a variable rather than a history substitution, it still works when there is no command history (e.g. in scripts) and allows all the usual variable expansion forms, for example ${_##*/} to extract the last path component.
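
A common illustration (the directory name is arbitrary):

    $ mkdir -p /tmp/some/deep/dir
    $ cd "$_"
    $ pwd
    /tmp/some/deep/dir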


That's good to know, although !$ is easier to type since you only need to depress the right shift key, whereas yours requires a quick shift on the opposite side. Makes a big difference when you're trying to quickly type "rm -rf !$" as root. :)


I don't think I'd type `rm -rf [anything]` quickly as root


Hence the smiley. Was a joke.


As a righty, I rarely ever use the right shift key.

Also, check it:

$ cd /happily/tabbing/out/some/really/deep/path/ooh/dear/maybe/its/java/WAIT_A.file

bash:> 'WAIT_A.file' is not a directory

$ nano $_ (opens file)


Better than that for me is "Alt+.", much easier to type. It can also be combined with a number: "Alt+2 Alt+." will insert the second argument. Similarly, "Alt+0 Alt+." will insert the first word of the previous command (the command name). http://linuxcommando.blogspot.in/2009/05/more-on-inserting-a...


If you are on OS X and use Terminal.app “Alt-.” won’t work because Alt is used for alternate characters. You have two options: enable “use option as meta” in the app settings (but you lose the extra characters) or use “Esc-.” instead.

Yes, I know about iTerm. I don’t want it.


iTerm2 by chance? Carries a great deal of improvements over the original iTerm and is under active development.


One way or another, that's true of any terminal. The shell sees characters, not keystrokes.


In my bash/readline/whatever, Alt+<N>+. gives the Nth-from-the-end argument to the previous command. so,

    $ echo a b c d
    a b c d
    $ echo # pressing <Alt+<2>+.> here inserts 'c', not 'b'.


I find ESC, . (esc, then press period) to be more intuitive. (alt+period works as well)


I like that one, and `!!` for the inevitable moment when I've forgotten to put sudo before a command.


I find myself typing `sudo !!` on a daily basis. You'd think I'd eventually learn to remember to type sudo the first time, but nope.


Is there one for all but the first argument?


!!:2-$


That is really a good one, thanks.


His last trick - compressed file transfer without intermediate state:

  'tar cz folder/ | ssh server "tar xz"'
Can be pulled off with two flags to scp - and you get to see progress as a benefit!

  scp -Cr folder server:dest/


Tar can transfer more file types and attributes than scp can (even using the -p option). `scp -p` only transfers mode, mtime and atime; you lose ownership, extended attributes, symlinks, and hardlinks.

You will also get better compression with tar (or rsync), as it is compressing the files directly, and not just the ssh stream (-C is just passed on to ssh).

I did the tests years ago, but a quick google found someone who tried to test the various combinations: http://www.spikelab.org/transfer-largedata-scp-tarssh-tarnc-...


In particular, scp is mindblowingly slow on lots of small files. I independently rediscovered the tar-pipe trick while sitting there watching scp laboriously copy thousands of 100-byte files so slowly I could count them as they went by. That should not be possible, even at modem speeds. Fine for moving one file, OK for directories of very large files, not suitable for general usage where you might encounter a significant number of smaller files.


Absolutely. Connection latency hits you the hardest, since each file is sent serially and requires 2 (or 3 with -p) round trips in the protocol, and this is on top of an ssh tunnel with its own overhead. I can't remember what my tests showed, but I have a feeling that tar over ssh was far faster than rsync for an initial load, since there are no round trips required, but you lose some of the possible rsync benefits, like resumability and checksums.


If my first tar attempt fails for some reason, but it made a lot of progress, I switch to rsync. Best of both worlds. This hasn't come up often enough for me to script it.


If security isn't a big consideration (read: you control both machines and the network), you can go even faster with netcat.

On the receiving machine, in its destination directory:

    nc -l 6789 | tar xvf -
And on the sending machine, from its source directory:

    tar cvf - . | nc receiving-machine 6789
netcat varies a bit from distro to distro, so you may need to adjust these command lines a bit to get it to work.


Add pv (available in many standard repos these days, from http://www.ivarch.com/programs/pv.shtml if not) into the mix and you get a handy progress bar too.


Unless pv has gotten way more magical since the last time I used it, you also need to tell it how many bytes to expect if you want a progress bar.

If it doesn't know how many bytes there will be, it just gives you a "throbber" (which is better than nothing, though).


It depends how you call it.

    cat file | pv | nc ...
and

    gzip < file | pv | nc ...
and so forth will result in a throbber as it can't query the pipe for a length.

If you demoggify the first example to:

    pv file | nc ...
you get a progress bar on the sending end without manually specifying a size.

Even without a proper % progress bar, the display can be useful: you can at least see the total sent so far (so if you know the approximate final size you can judge completeness in your head) and the current rate (so you can see it is progressing as expected, and get an earlier indication of a problem such as an unexpectedly slow network connection than simply noticing it is taking too long).


rsync -av folder $USER@$SERVER:/destination/path.

scp won't copy certain file types correctly. Rsync really is better here, and not just because of that.


You need a -z to activate compression. But, nonetheless, please always use rsync when copying host to host.


rsync is slow if the data is not already on the destination. tar over ssh is fast, and tar over socketpipe is even faster but not encrypted.

I'm not aware of any attributes that tar doesn't preserve.


How so?

Also, I find that if I'm going to copy the data once, I'm often going to copy it twice, or wish to get a more up-to-date version of it at a later time. Rsync clearly wins in these cases.

Finally, from the compress flag on rsync: Note that this option typically achieves better compression ratios than can be achieved by using a compressing remote shell or a compressing transport because it takes advantage of the implicit information in the matching data blocks that are not explicitly sent over the connection.


Rsync is brilliant and useful but gets very slow when you apply it outside of its sweet-spot.

Remember: Rsync trades CPU and disk i/o (lots of disk i/o) for network bandwidth.

In the pathological case "thousands of tiny files over a fast network" it can easily be orders of magnitude slower than a straight tar.


Seems like -W disables the delta transfer.


Exactly right. In my use cases, it's best to tar over ssh initially. Then, if I ever want to update the copy, rsync.


There's cryptcat if you need encryption. "bar" is also a nice little program if you like to see an ETA (c.f. http://clpbar.sourceforge.net/ )


I don't think your method preserves ownership, file perms, etc.


[deleted]


-p only preserves modification times, access times, and modes.


Instead of

    ProxyCommand ssh -T host1 'nc %h %p'
you can use

    ProxyCommand ssh -W %h:%p host1
which uses ssh itself and therefore also works on machines where netcat isn't installed.
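
In context, that line lives in a Host block in ~/.ssh/config, something like (hostnames are hypothetical):

    Host inner
        HostName inner.example.com
        ProxyCommand ssh -W %h:%p jumphost.example.com

After that, a plain `ssh inner` transparently hops through jumphost.example.com.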


Hi, I tried that but my ssh version doesn't have the -W flag. What would you suggest?


Unfortunately, OS X ships a badly outdated OpenSSH version, just like many other unix tools, and it doesn't support the -W option. You could try upgrading OpenSSH using Homebrew if you want.


I'm using Ubuntu 10.04 and it doesn't have the -W flag either.


You need openssh 5.4+. I am running 12.10 and I've got 6.0p1


- 'cd -' change to the previous directory you were working on

To my surprise, this also works with git:

  git checkout -  # to checkout the previous branch


If I may add a trick:

ctrl-z - stops a program

bg - sends the stopped program to the background

fg - gets the program back to the foreground (interactive mode)

very useful in editor sessions or when you want to get rid of the endless download/scp that is blocking your terminal


Note that each job gets an identifier (which you can see by running `jobs`). Other commands like `kill` can work with the id number by using %[id]. For example:

    $ some_command
    ^Z (hit control z)
    $ some_command_2
    ^Z (hit control z)
    $ jobs
    [1]-  Stopped                 some_command
    [2]+  Stopped                 some_command_2
    $ kill %1
    $ jobs
    [1]-  Terminated: 15          some_command
    [2]+  Stopped                 some_command_2


Way back in my college days, one of the student admins took down the CS department's server by forgetting the % and killing process 1 by mistake.


I have my terminal emulator configured to set the URGENT ICCCM hint when it sees a bell character. When I realize a command I just entered is going to take a while, and I want to go look at something else on another workspace, I do this:

    $ alias b='echo -e "\a"'

    $ long_running_thing
    ^Z
    $ fg ; b
    [long_running_thing resumes]
Then when the job completes, the terminal bell will ring and my window manager will get my attention.


You can also use "jobs" to see background and stopped tasks. If there are multiple jobs, you can use "fg 2" to foreground the second job on the list, etc.


Another nice job management tool is disown, which lets you log out of your session without killing the job (similar to starting the command with nohup)


And also, you can pass the -h flag to disown to leave the job in the job control table (i.e. you can still bring it to the foreground/suspend it/etc.), but still skip sending the SIGHUP on logout.
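
A quick sketch of both variants (the command name is arbitrary):

    $ long_running_job &
    $ disown -h %1    # stays in the job table (fg/bg still work) but gets no SIGHUP at logout

or plain `disown %1` to remove it from the job table entirely.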


I normally have this:

  fork() { (setsid "$@" &); }

in my zshrc, and then start stuff with:

  fork firefox


Wanted: a lint for your history that analyzes your commandline usage, suggesting these types of tips based on your historical use.

  history | commandlint


Where can I find commandlint?


That's a hypothetical program (note "Wanted").


Related earlier discussions:

https://news.ycombinator.com/item?id=5022457

https://news.ycombinator.com/item?id=4481234

A pretty good thread on Reddit:

http://www.reddit.com/r/linux/comments/mi80x/give_me_that_on...

Edit: see https://news.ycombinator.com/item?id=3257393 for a discussion of the above thread.

While we're at it, consider using weborf [1] as an alternative to Python's SimpleHTTPServer for simple file sharing. I found it able to saturate a gigabit ethernet connection when hosted on a Core 2 Duo ULV laptop with an SSD.

[1] See http://galileo.dmi.unict.it/wiki/weborf/doku.php?id=start. It's available from the official repos in Debian and Ubuntu with

  sudo apt-get install weborf
Invoking it is dead simple:

  weborf -b ~/dir-to-share


RE: weborf

I will have to look at weborf. I have always wished debian packaged publicfile, similar to djbdns or even dbndns. As it is I am still looking for a "djbdns-like http server" that is apt-get installable and actively maintained.

I will never understand why gnome-user-share depends on apache...


Consider webfs too. It is even simpler than weborf.

http://packages.debian.org/search?keywords=webfs


Whut? ctrl-r? How have I missed that? No more 'history|grep foo' for me!


Try this in your .inputrc:

    # Bind the up arrow to history search, instead of history step
    "\e[A": history-search-backward
    "\e[B": history-search-forward
No more "^rls" to search for ls in your bash history; just type "ls" and start hitting the up arrow.


Thanks for the suggestion. I used to have a very customized .bashrc with nice little things like that, but have decided to stick with standard stuff for things that work off of muscle memory.

I got tired of sshing into a new box and half the things I'd type wouldn't work properly until I remembered to copy my settings files over, which seemed more trouble than it was worth for a short-lived EC2 instance.

That's why I was excited to discover ctrl-r. It's a built-in method of searching history that I can remember and it'll work everywhere.


> I got tired of sshing into a new box ...

I've got a public repo of my dotfiles, so the first thing I typically do is "git clone git@github.com:pavellishin/dotfiles.git && cd dotfiles && ./install.sh"

After that, I launch tmux, and it's all hunky dory.


Heh.. right after I posted that comment, my thought was: 'you know - the correct answer here would have been to make an Uber command that would suck all the configs in and install them'.

Thanks for giving me the push to do so. I think I'll take your suggestion but put the command on a site somewhere so I can simply 'curl https://foo.bar/configs | bash'


This is the best tip ever.


Also ag is faster and better than grep: https://github.com/ggreer/the_silver_searcher


If you are using 'set -o vi', then you can just use 'ESC' -> '/<somePartOfCommand>' then use 'n' to iterate in reverse order through the commands that match the vi regex. really powerful and useful.


Of similar note, http://www.commandlinefu.com/commands/browse/sort-by-votes indexes a good number of these tricks.


"Add "set -o vi" in your ~/.bashrc to make use the vi keybindings instead of the Emacs ones." Better to do this kind of thing in .inputrc, as:

set editing-mode vi

(or set editing-mode emacs) because any application that uses readline gets to use those settings. So for example you get command line editing in various command line apps. bash uses readline, so you'll get that. The python repl will give command line editing with .inputrc set as above.

  $ apt-rdepends -r libreadline6 |egrep -ve "^ " |wc -l
  8995
psql (postgresql) and mysql (mysql) are really handy with command line editing.

"'ctrl-x ctrl-e' opens an editor to work with long or complex command lines"

If you've set -o vi, or set editing-mode vi in .inputrc, then on a command line type:

  esc-v
(Escape key to get out of insert mode, then the 'v' key) That will open a full vim session for editing your complex command line.

  :wq
exits vim and gives your command to bash to execute.


'set -o vi' is more likely to work in something that isn't bound up tight with readline, like /bin/sh


Oh, wow, you just made my day. I love vim-style line-editing.


s/wq/x/g


Duly noted. :)


Lots of good tricks, though I replaced a lot of them with "Use fish instead of bash" http://ridiculousfish.com/shell/


Here's one it took me a while to figure out...

Hung SSH session (such as wifi out of range)?

Type Return-Tilde-Period


Also, if you've done this:

    you@somehost:~$ ssh otherhost
    you@otherhost:~$ some_command
If you hit ^Z right now, it will tell the shell on otherhost to stop some_command and give you the shell prompt on otherhost back.

If instead you wanted to stop the ssh process and get the shell prompt on somehost, hit Tilde ^Z (you don't have to hit a new Return, but ssh only notices these escape sequences after a Return).

Also if you use ControlMaster and have a few xterms open all with sshs to otherhost, and then you exit the first ssh you happened to have open, it will seem to hang and not give you your prompt back. What's happening is that that ssh process is the "control master" and it's still open because you've got other sshs to the same host open. Hit Tilde & to background the ssh process and get your terminal back.

Yes, you could also hit Tilde ^Z and then 'bg' the ssh process.
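
For reference, the escape sequences mentioned in this subthread (all only recognized right after a Return; ~? lists them all, per the ssh man page):

    ~.     terminate the connection
    ~^Z    suspend ssh itself
    ~&     background ssh while it waits for connections/sessions to end
    ~#     list forwarded connections
    ~?     display a summary of escape characters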


> Compile your own version of 'screen' from the git sources. Most versions have a slow scrolling on a vertical split or even no vertical split at all

or just use tmux


Just to note, splits in tmux and screen behave differently. IIRC, in screen you have a set of splits and fill them in with different windows (kind of how vim thinks of viewports), so technically you can have the same window open twice on your monitor and the other will mirror the workings of the one that you are working in right now. In tmux, each 'tab'/window is a set of splits, which act more like sub-windows, and don't really share across multiple spaces.


This is bad juju.

  'find . -type d -exec chmod g+x {} \;'
If you happen to have a malicious directory named (without double quotes):

  ".;sudo rm -rf /"
You'd be stuffed.

It's better practice to use the '-print0' flag of find(1) and pipe the result into xargs(1) with the '-0' flag set. For safety, it's best to also use '-n 1' to limit the number of arguments per invocation, '-r' to stop xargs from running without any arguments, and "-J %" so you can use quoting.

  find . -type d -print0 | xargs -0 -r -n 1 -J % chmod g+x "%"
I believe on some implementations '-r' is unnecessary because not running on empty input is the default, but on other implementations it isn't.

EDIT: thanks for the vim tip for finding spelling mistakes!


This is simply not true:

    $ mkdir '; echo woops'
    $ find . -type d -exec echo {} ';'
    .
    ./; echo woops
As you can see 'woops' is never echoed.

EDIT: The reason being the shell is never involved in this process, and the shell is what is responsible for splitting commands on semicolons/newlines.


You are assuming the exact versions of the shell and find programs that you use are the only ones that exist. It may not be a problem on your exact system, but it can be a problem elsewhere.


I am assuming a POSIX-compliant implementation of `find`. The shell is not involved.

FWIW, your `find -print0`/`xargs -0` is not POSIX.


Sorry, I didn't see your 'EDIT' caveat when I responded --now that will teach me to reply too soon. ;-)

Also, it seems I failed to be clear; I'm probably too tired I suppose.

My point was there is plenty of ancient and buggy code out there. It could be "most", or even "many" modern unix variants have fixed a lot of the old bugs in find(1), but if you don't have the luxury of working on a current system, and you're not allowed to upgrade it, then plenty of bad things can happen due to invoking a shell, handling space, quote, and delimiter characters, and so forth.

reference:

  $ uname -a
  OpenBSD alien.foo.test 5.1 GENERIC.MP#207 amd64
setup:

  $ mkdir test
  $ cd test
  $ touch file1
  $ touch file2
  $ touch file3
  $ mkdir ';ls'
bad:

  $ find . -type d -exec sh -c {} \; 
  sh: ./: cannot execute - Is a directory
  ;ls     file1   file2   file3   test.sh
better:

  $ find . -type d -exec sh -ec {} \;
  sh: ./: cannot execute - Is a directory
also bad:

  $ find . -type d -print0 | xargs -0 -r -n 1 -J % sh -c "%"
  sh: ./: cannot execute - Is a directory
  ;ls     file1   file2   file3   test.sh
better:

  $ find . -type d -print0 | xargs -0 -r -n 1 -x -J % sh -ec "%"
  sh: ./: cannot execute - Is a directory
better;

  $ find . -type d -print0 | xargs -0 -r -J % sh -c "%"
best:

  $ find . -type d -print0 | xargs -0 -r -J % sh -ec "%"
POSIX is all great and wonderful in theory, but in practice it's no different than the bogus Java "write once, run anywhere" claim. If a system or utility claims to be POSIX compliant, then you're probably close, but you'll still need to do testing and debugging.

At least some of the issues with find/xargs are mentioned in the following wikipedia article. It's probably more clear than I am right now.

http://en.wikipedia.org/wiki/Xargs


The contrived examples you've shown aren't examples of POSIX-incompatibility, or bugs in `find` at all. You've explicitly involved the shell. Of course trying to run every directory name as a shell command string is going to result in executed code!

Your original argument was that given:

    find . -type d -exec chmod g+x {} ';'
It is possible to force code execution of arbitrary commands given a carefully crafted directory name. The key difference in this case is that the shell is not involved _at all_. I challenge you to find an implementation of `find` that is broken in this way.

As a side note, it is even possible to involve the shell in the picture in a safe way with `find`, without the use of `xargs` (and thus avoid the overhead of setting up a pipeline):

    find . -type d -exec sh -c 'chmod g+x "$1"' _ {} ';'
(my contrived example is quite poor, though, since it does nothing but introduce unnecessary shell overhead)

Modern (POSIX 2001 and later?) `find` implementations support `-exec {} +`, which further reduces the number of reasons to invoke `xargs`:

    find . -type d -exec sh -c '
        for x; do
            do_foo "$x"
            do_bar "$x"
            do_baz "$x"
        done
    ' _ {} +
(example above shows how to make proper use of this feature with an explicit shell invocation)


It's not a problem with GNU or OpenBSD `find`, and I'm pretty sure that's the case for FreeBSD too. What version of find uses system(3) instead of execv(3)?


Instead of

  find . -type d -exec chmod g+x {} \;'
you can usually use

  chmod -R g+X .
which gives additionally the group execute permission to files which have already user/everyone execute permission.


His example changes permissions for directories only, which I believe your example does not.


Yes, but the find command will only affect directories (-type d).


> SMB is better than NFS.

More details would be nice.


NFS basically breaks the client computer until the host responds, while SMB detects the I/O error earlier and, at least, doesn't hang the terminal.

Other than that, SMB allows for permissions and a bunch of other features. It is, basically, a more modern and robust protocol. Does NFS shine for some use cases? Indeed. Would it be the first choice for most of them? Nope.

Disclaimer: my sysadmin skills are not so great, I'm talking as a user


NFS has 2 mount types, soft and hard.

http://tldp.org/HOWTO/NFS-HOWTO/client.html

Soft mounts report errors immediately, hard mounts hang.

And for permissions, NFS provides everything under the sun that you could possibly need via ACLs. NFSv4 is a very modern protocol, much as SMBv2 is (not SMB though, it's awful).

Linux has had NFSv4 for what, 6 years now, at least, and even v2 and v3 had some limited ACL support?

http://wiki.linux-nfs.org/wiki/index.php/ACLs
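
For completeness, soft vs hard is just a client-side mount option, e.g. a hypothetical /etc/fstab line (server and paths made up):

    fileserver:/export  /mnt/data  nfs  soft,timeo=100,retrans=3  0  0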


That's nice to know, it seems that we have the "hard" configuration at the lab and it's a real pain. Backwards compatibility, you know. For my mini-cluster we use SMB and couldn't be happier.


Not sure what your lab's trying to be backwards compatible with (NFS has had those mount options since at least 1989), but whatever works for you :)

It's a mount option on the client, not the server, so maybe you can change it yourself.

http://www.ietf.org/rfc/rfc1094.txt


In a lot of cases, hard is what you want. Some systems can't deal with fs errors very well, and it's better to wait than to risk data inconsistencies.


i cannot possibly disagree more. Are you using NFS from 1999?


1) The caveats of NFS can be crippling if you are a rubbish sysadmin. NFS requires a more thorough understanding than what you can get from a tip sheet.

2) Samba is single-threaded; performance will suffer when serving SMB from a Linux machine. For this reason it would be better to serve SMB from a Windows machine.

3) Using a foreign protocol between homogeneous computers when the native protocol will do is non-ideal. NFS is the right thing when sharing filesystems from Linux to Linux.


I've found NFS to use much less cpu than SMB (on raspberry pis running XBMC at least)


I second this..

If you use any NAS device with a weak CPU and you have a choice of CIFS or NFS, the NFS transfer rates are normally 25% faster and the load is much less on the server.


I believe NFS is a less chatty protocol with fewer roundtrips. SMB is not a particularly well-designed protocol overall.

My only evidence is anecdotal. Back in ~1998 we had Samba running on our small Linux file server using Windows NT desktops via 10mbit Ethernet. It was dog slow, not just for browsing but on sequential things like file transfer. We installed NFS mounts on the same box, and it turned out to be lightning fast. I don't remember just how fast, but it made everyone go "wow".

Nowadays, with NFS over fast 100mbps or gigabit Ethernet the latency difference probably is not significant enough to make a difference. I prefer NFS just because it's more Unixy.


I third this. I was attempting to set up my Raspberry Pi for sharing a big HDD that was plugged into it on my home network, and the NFS mounts appear to weigh lighter on my vanilla Raspbian setup.

And fwiw, if there's some sort of error with the mount, the terminal lets me know right away.


I freaked out when I found out about the "screen" command several years back. "screen" starts a virtual screen that you can detach from with "ctrl-a d" and you can log out, login from a different machine/session and reattach with "screen -r". it has history so you can run long running commands and reattach 3 days later to continue from where you left off as if you had been logged in the whole time.
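
A minimal session sketch (the session name "work" is arbitrary):

    $ screen -S work     # start a session named "work"
    # ... do work, then press ctrl-a d to detach ...
    $ screen -ls         # list sessions, e.g. from a later login
    $ screen -r work     # reattach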


Tmux is the new screen. tmux allows emacs or vim key bindings for moving around the buffer and searching, horizontal and vertical splits, and easier configuration: http://tmux.sourceforge.net/


Yup! You might want to try tmux, though. I used screen for many years and switched over to tmux about 2 years ago, and have been really happy with it. It (tmux) is also maintained more regularly now; screen development seems to have stagnated.

Either way, enjoy!


I will checkout tmux. Thank you!


and you can do things like

  "screen -X -S minecraftserver -p 0 -X stuff "say test $(printf '\r')"


Tmux is similar, here is a cheat sheet that helped me move from screen to tmux: http://www.dayid.org/os/notes/tm.html


This was a nice, very timely page, sitting here with my private repo in Mercurial and the other one at GitHub…

I discovered help <builtin> some months ago, and that was a great boon really. Like `help test`, so I didn't have to go to the rather large bash man page.

Here is a little shell script for displaying a man page on Mac OS X (gman). (If you then click on one of the links on the man page, it may pop up in your default browser).

  #!/bin/bash
  if [ $# -lt 1 ] ; then
  	echo "gman takes a man page, if found and formats it into html."
  	echo "Usage: gman [manfile]"
  	exit 2
  fi
  a=`man -aw $* |head -1`
  if test  x$a = x ; then
  	echo "Can't find $1"
  	exit 1
  fi
  # Figures out if it is a normal man page or something else (gz).
  b=`man -aw $* |head -1 |grep "gz"`
  echo $b
  if test  x$b = x ; then
  	groff -man $a -Thtml >|/tmp/tmp.html
  else
  	gzcat $b |groff -man -Thtml >|/tmp/tmp.html
  fi
  qlmanage -p /tmp/tmp.html >/dev/null 2>&1


Only one Vim tip? I'm disappointed.

Check out this resource!

http://www.rayninfo.co.uk/vimtips.html


Great resource! Thanks for the link.

Most of my vim tips are on my .vimrc (my dotfiles here https://github.com/carlesfe/dotfiles/blob/master/.vimrc) and on the links on my homepage: http://mmb.pcb.ub.es/~carlesfe/#programming_h3. The .txt that op linked feels a bit out of context :)


Good lord, thats a big list! Thanks!


When creating excessively long oneliners (you know, the kind that should actually be a script, because you know you're going to find a use for it in a few weeks), the following key combo is golden:

    ^x ^e
It opens up the exported EDITOR with a tmp file containing whatever is on the command line.

Using SSH, especially on campus/in a train/other places where a wifi connection doesn't last long, mosh is really a godsend. In places where mosh isn't practical or available, the following combos are really good to know:

    <RETURN> ~ .    # end ssh connection
    <RETURN> ~ ?    # show available commands
Paired with autossh which will reconnect by itself, it really takes the pain out of traveling while doing remote work.

Oh, and when you need to know what the decimal value of 0x65433 is, it's good to know that bash can do stuff like that:

    $ echo $((16#65433))
    414771
Reading the bash man page is not a bad idea in itself...
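
The other direction, decimal to hex, is just printf:

    $ printf '%x\n' 414771
    65433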


GNU parallel: easy substitution of file extensions + parallel execution, e.g.

    ls *.png | parallel convert {} {.}.jpg


'ssh -R 12345:localhost:22 server.com "sleep 1000; exit"' forwards server.com's port 12345 to your local ssh port, even if your machine is not externally visible on the net.

This one blew my mind


Better to use, e.g.:

  ssh -fN -o ServerAliveInterval="240" -R 2222:localhost:22 example.com
(And on example.com ssh -p 2222 localhost)

This lets you easily keep the tunnel open long-term. Why?

  -f    Background the ssh process (don't need nohup)
  -N    Don't run any command
  -o... Make ssh do the work of keeping the session alive forever


The ServerAliveInterval option might also help with another issue mentioned in the post:

'sshfs_mount' is not really stable, any network failure will be troublesome

To avoid trouble with remote backups and other long running processes, I always add the following to the end of ~/.ssh/config or /etc/ssh_config:

    Host *
        ServerAliveInterval 300


But the remote host can limit your ServerAliveInterval; however, most hosts don't close your session if there is something running. On our clusters this is the only working solution, and I tried both, trust me.


If that's an issue (I've never met a server like that, not to mention one like that and I couldn't change the setting) then wouldn't you rather use this?

  while true; do sleep 1000; done
Just using:

  sleep 1000; exit
means you can't make new connections after 17 minutes.


Yes, in fact I use the first one (while true; sleep; ls) but I thought the second was more succinct. Anyone interested can make a loop. Please notice that most of the snippets aren't meant to be copied & pasted, but rather analyzed and understood by the user.


Ah, sometimes that's a hard line to draw since often the people that know enough to understand not to take it literally don't really need the pointer in the first place. :)

I enjoyed your list.


This is excellent, thank you!


Thanks for calling that one out - had never tried that before and works great. Putting into my bag of tricks ...


More:

Bash and zsh support a surprising amount of emacs editing functionality. Cursor navigation (M-b, M-f, C-a, C-e), text selection (C-space), copy/paste (C-w, M-w, C-y, M-y), and undo (C-_).


Learn to use your shell's globbing features instead of overusing find. In zsh, you can do 'print -l /*.(c|cc|h|hh)' for example (I'm sure bash has an equivalent).
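
For the record, bash can get close with extglob (and globstar, bash 4+, for recursive matching), e.g.:

    shopt -s extglob globstar
    ls /*.@(c|cc|h|hh)          # roughly the zsh example above
    ls src/**/*.@(c|cc|h|hh)    # recursive variant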


does that work for say 2143789 files?


I actually hit a limit quite a few times already - I don't remember if it was the shell that complained or the command (mv, cp, ...) itself, but I know I couldn't execute the command. find with xargs or -exec worked, however.


It's what happens if one uses * or similar globbing. The shell will try to pass all the files as arguments to the command and if it is too many, it will fail. That is why using find is often needed.


For me, the glob gets a lot of use.


    ** glob


Personally I put the following in my .bashrc:

    function pushcd {
        if [ $# -eq 0 ]; then
            pushd ~ > /dev/null
        else
            pushd "$@" > /dev/null
        fi
    }

    alias cd='pushcd'
    alias b='popd > /dev/null'
Then cd saves the history of visited directories and b navigates backwards, e.g.:

    ~$ cd /tmp
    /tmp$ cd /usr
    /usr$ b
    /tmp$ b
    ~$


Here's a really great presentation I found (probably on HN) some years ago. http://www.ukuug.org/events/linux2003/papers/bash_tips

All extremely useful, my favourite being the .inputrc rebindings of up and down to search history. Takes a little getting used to but is great once you are (good luck to anyone else trying to use your terminal though ;))


As an emacs user I map "C-p" "C-n" for history search in bash

In .inputrc

  "\C-p": history-search-backward
  "\C-n": history-search-forward
Also I found the following settings to be very useful:

  # List the possible completions when Tab is pressed
  set show-all-if-ambiguous on
  #tab complete without needing to get the case right
  set completion-ignore-case on


One of my new favorites-- use ctrl-r autocompletion to grab a command close to what I want from history, tack on a character that breaks it if necessary and run that.

Then use fc to get that command back in a vi editing context, change what I want and :wq

The resulting command is immediately executed. This process can be really fast compared to manually constructing long piped commands.
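
fc itself is worth a look too (the history number below is hypothetical):

    $ fc -l -5     # list the last few commands with their history numbers
    $ fc 1042      # edit command 1042 in $FCEDIT/$EDITOR; saving and quitting runs it
    $ fc           # same, but for the most recent command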


    * Read on 'ssh-keygen' to avoid typing passwords every time you ssh
He meant ssh-agent?


No, he did mean ssh-keygen, which generates a public and private key pair for you. Run ssh-keygen on your local machine and copy the public key to ~/.ssh/authorized_keys on the server, and you'll be able to log in to the server without a password using ssh -i /path/to/private/key user@host
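
Where it's available, ssh-copy-id (shipped with most OpenSSH packages) automates the copying step; a typical session looks something like this (user/host are placeholders):

    $ ssh-keygen -t rsa        # optionally protect the key with a passphrase
    $ ssh-copy-id user@host    # appends your public key to the server's ~/.ssh/authorized_keys
    $ ssh user@host            # no account password prompt any more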


That approach is insecure, however, because anyone with the private key now has access. When running ssh-keygen, you should add a passphrase to the key, then add the key to ssh-agent so you don't need a password for the account, nor do you need to type the key's passphrase constantly.


Anyone with private key access has control over my user account and has better access than what my private key + passphrase would provide.

I understand layers (probably moreso than most), but this is something that always bothers me a lot from a practicality perspective. My passwords are encrypted at rest via encrypted filesystems. If you are running things on my personal machine as my user account, I'm already being keylogged and/or am executing arbitrary code for you. If I'm logged into somewhere via ssh (hint: I am whenever I have a network connection), you can just scan my ssh config and use my ssh key anyway. From there, you can probably do a lot of other nasty stuff. ssh-agent won't really prevent this. It will prevent the malware from working again when I reboot until I log into another remote host (which I've established I do a lot) where the keylogger now gets me.

It's possible, but extremely unlikely, that I might have completely read-only media. I could be using my TPM device to protect from booting and executing modified system states. Some of this might prevent you from easily persisting the keylogger threat across reboots. I might also have a module or something that calculates checksums of critical things on startup, have ridiculous anti-exfil outgoing connection policies, etc. that prevent all but the most targeted attacks.

I don't have all of that in place. (In particular, to anyone generating a profile for me, I don't build detailed outgoing packet filter rules (you are welcome).) But what I do have in place will probably prevent me from getting my initial passphrase keylogged if I used ssh-agent since it's likely (although this isn't strictly necessary) that I'm going to get attacked again after I start logging into remote hosts. So they can't steal my password, but they do have unrestricted access to my user account and the remote users I can log into. That complicates things, but is still a major security failure, to the point where them having the passphrase to my key isn't super important. I mean, in this scenario, they already have the absolute best input vector (a history of me logging in so they can execute attacks at the times I'm supposed to be logging in, as well as direct access to the systems from my ip addresses) to the point where using my ssh key from elsewhere is probably a worse idea.


If you work with something like this

  /this/is/some/very/nested/directory/
and you need to move to almost the same structure

  /this/be/some/very/nested/directory/
then you might benefit from

  function bcd {
    cd ${PWD/$1/$2}
  }
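
Usage with the example paths above (note the surrounding slashes, so the pattern doesn't also match the 'is' inside 'this'):

    /this/is/some/very/nested/directory$ bcd /is/ /be/
    /this/be/some/very/nested/directory$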


Is there a command/short-cut to storing the path(s) returned from a find command? Currently I do something like:

find . -name somefile.txt -print

Then copy and paste the results manually into a cd command for example. I feel like there's a much better way somewhere.


xargs?

Here is an example from the man page:

> For example, the following command will copy the list of files and directories which start with an uppercase letter in the current directory to destdir:

                   /bin/ls -1d [A-Z]* | xargs -J % cp -rp % destdir
You can use this with find really nicely.

find . -name somefile.txt -print0 | xargs -0 -J % cp -rp % destdir

Note the -print0 and -0 flags (zero). This will use a null byte as a separator instead of spaces to avoid failing on files with spaces in their names.


cd `find -name foo`


:set spell is pretty neat. I'll use it not when coding, but when writing blog posts.


If you put that in, for example, `~/.vim/ftplugin/markdown.vim`, it will be used on all markdown files. (`../ftdetect/markdown.vim` controls how Vim determines that a file is markdown.)


I turn that on when editing Git commits.

  autocmd FileType gitcommit setlocal spell


- Use 'apt-file' to see which package provides that file you're missing

Ummm, "apt-" isn't a "Unix trick..." It's specific to linux distros which use the "Aptitude" package manager.

Linux != Unix


Aptitude is just a front-end to APT. And APT (the Advanced Package Tool) is just an interface to dpkg or RPM (package managers).


Did a quick and dirty format for easy reading: https://gist.github.com/dmackerman/5117156


Weird, I've been using vi/elvis/vim/MacVim as my primary editor since 1984 and I hate vi key bindings for shell; I always use the emacs bindings.


I'm the opposite. Every time I log into a server that doesn't have my bashrc, I immediately have to `set -o vi` or else I'm useless.


That's why I love *nix ... options!


I can't believe this post has sort | uniq when GNU sort (important distinction: the BSD version, and hence OS X, can't) has a -u flag, so sort -u == sort | uniq


OS X does have sort -u. -u is also in POSIX (http://pubs.opengroup.org/onlinepubs/9699919799/utilities/so...).


Markdown version for readability https://gist.github.com/hemanth/5109020


Regarding set -o vi, which I love, is there a way to have it load my vimrc as well? I have custom bindings I would love to have.


tar czf - . | ssh destination "tar xz"

To pipe all the contents of your current directory (including dotfiles) to the destination machine.

Greetings from LSI-UPC!


I use that everyday, but didn't think about adding it to the list. It's on now, thanks!

PS: hey, we're neighbors! :)


I used to use this, because it's awesome that it got all the file types correct (sockets, for example).

Then I learned to love rsync -av. The benefits are many-fold.


I would change this to:

tar czf - . | ssh destination "cd /remote/dir; tar xz"


You will one day punch yourself for not being safe.

    tar czf - . | ssh destination 'cd /remote/dir && tar xzf -'
Test that `cd` or one day you'll end up extracting in the wrong place.


There is also tar -C /local/dir -czf - . | ssh dest "tar -C /remote/dir -xz"


Why not rsync?


They're both useful. tar over ssh is very fast for large amounts of data. rsync is only fast if most of the data already exists.


sudo !!

Perform the previous command as root. Great if you continually forget which of your scripts need to be run as root and which don't.


I see people often have aliases for ls -ltr | tail but ls -lt | head might be a lot faster with a lot of files.


SMB is better than NFS in Unix Tricks? Shame Shame Shame. Possibly just wrong. Like the rest though. Thanks!


I know, that's controversial... but I've had zero problems with SMB and many ones with NFS. Just my two cents.


I've been working with Unix for years, but only on my server, and for a little more than a year now on OS X two days a week.

And for the last month I've had my first own MacBook. Totally helpful to get in touch with some magic in the console.

Thanks to everybody who makes my working life easier =)


find . -name "file-wildcard" -exec "string" {} ";" -print

is something I use a lot - plus xargs sometimes


I use variations on "find" so often that I've created several little commands "fij" (find in any java source), "fit" (find in any text / org file), "fix" (find in any XML file) etc. which I use all the time


You might want try `ack` then. (http://betterthangrep.com/).

On Ubuntu/Debian systems, it is packaged as `ack-grep`.


I really could have used that "chmod g+X * -R" the other day! Thanks for the tips.


find . -name "*" | xargs grep "hello" 2>/dev/null

Searches all the files in current directory and all subdirectories for files that contain "hello". Add -l option to grep to only display the filenames instead of filename and match.


zsh: setopt extendedglob allows you to use numeric ranges on globbing with < >:

mv p1080<100-300>.jpg folderx/

to move p1080100.jpg through p1080300.jpg to a new folder.

Still not available on bash or more common shells?


Bash:

    mv p1080{100..300}.jpg folderx/


Though that's not strictly a glob, but brace expansion (it will expand to all the numbers in that range regardless of whether or not files with those names exist). bash does have 'shopt -s extglob', which enables a number of useful globbing extensions, though I don't believe there are any numeric ones among them.


TIL, thank you. Here it is from the manpage:

A sequence expression takes the form {x..y[..incr]}, where x and y are either integers or single characters, and incr, an optional increment, is an integer. When integers are supplied, the expression expands to each number between x and y, inclusive. Supplied integers may be prefixed with 0 to force each term to have the same width. When either x or y begins with a zero, the shell attempts to force all generated terms to contain the same number of digits, zero-padding where necessary. When characters are supplied, the expression expands to each character lexicographically between x and y, inclusive. Note that both x and y must be of the same type. When the increment is supplied, it is used as the difference between each term. The default increment is 1 or -1 as appropriate.


ack-grep is great for searching source code


Yesterday, I discovered the_silver_searcher:

https://github.com/ggreer/the_silver_searcher

I think you might like it.


% :(){ :|:& };:



