Hacker News
Stupid Unix Tricks (sneak.berlin)
427 points by signa11 on Oct 17, 2019 | 157 comments



Regarding the point of "SSH: Faster Crypto", you should not enforce only one single specific cipher for ssh. The reason also seems wrong, modern hardware should be capable of achieving better performance with an AEAD cipher (such as AES-GCM or ChaCha20-Poly1305) instead of AES-CTR, as the latter also requires an additional HMAC.

If there really is a slow (or insecure) cipher you do not want to use, remove it by prepending a minus sign, for example `Ciphers -3des-cbc`, which keeps all other default ciphers. Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.
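A minimal sketch of that approach (assuming OpenSSH 7.5 or later, which added the `-` prefix syntax for algorithm lists):

```
# ~/.ssh/config -- remove one weak cipher, keep the rest of the defaults
Host *
    Ciphers -3des-cbc
```

New ciphers added to the defaults in future OpenSSH releases remain available automatically.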


>Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.

Is that actually a real concern at this point vs the risk that comes from not whitelisting a few known reliable ones? It seems like old-style security, favoring modularity and "what if we need to change this someday soon?", but in practice that's turned out to be a lot less valuable than expected and raises significant risks of accidentally using something bad. We seem to be past the point where new ciphers being "better" is an event to be expected with any real frequency; prime/elliptic-curve crypto seems pretty mature now. Post-quantum could be a new frontier at some point, but may well require significant other changes as well.

WireGuard for example decided to just flat out say that any cipher changes will be tightly coupled with a full on version change. If you're using it you know exactly what you're getting and that's it.

I don't disagree that whitelisting means care with what is chosen, and picking AEAD-based seems like a better idea anyway (WG is Curve25519/ChaCha20-Poly1305/SipHash/BLAKE2s). Plus there is server compatibility to consider in some cases. But I'm not sure the logic of "you might miss out on better ciphers as they are added" is convincing either, vs setting yourself an alarm to recheck your SSH setup every year or two. Shouldn't any cipher additions/deletions arguably be something you actively consider, rather than have automagically added?


> what if we need to change this someday soon?

Git was created in 2005 and its hash algorithm is already outdated.

Additionally, software and hardware support continue to develop for better performance.


SHA-1 was already broken (Feb 2005) when git was first published (April 2005). But Linus decided that git doesn't need a collision resistant hash function. https://marc.info/?l=git&m=115678778717621


SHA-1 was not broken until 2017.

http://shattered.io/

If you know an earlier instance, go ahead and take the crown from the shattered folks.

---

The choice to use SHA-1 was a trade-off of security, size, performance. If Linus invented git today, I imagine the choice would have been different, because those parameters are now different.


In cryptography, broken means "known attack significantly faster than brute force", which was published in 2005. And cryptographers were advocating for deprecating it several years before that, because the security margin was clearly insufficient. https://www.schneier.com/blog/archives/2005/02/sha1_broken.h...

The time between a theoretical attack and practical demonstration of an attack should be considered a grace period we can use to migrate to a secure primitive. Choosing SHA-1 for an application which relies on collision resistance after the 2005 papers is plain incompetence.

Git chose SHA-1 because Linus did not consider collisions a problem. The downsides of SHA-256 were pretty small even then (32 instead of 20 bytes, and somewhat slower performance which is still faster than most IO).
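For scale, the size difference mentioned here is easy to check with the coreutils digest tools (a quick illustration, not from the thread):

```shell
# SHA-1 digests are 20 bytes (40 hex chars); SHA-256 digests are 32 bytes (64 hex chars)
printf 'abc' | sha1sum    # 40 hex characters
printf 'abc' | sha256sum  # 64 hex characters
```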


You never know when a particular cipher will be cracked, or its implementation found to have a serious flaw.

At moments like that, you want an easy and fast way to disable such a cipher, but stay interoperable otherwise.


> Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.

That sounds like good advice; thank you. Out of curiosity, as someone who is fairly noobish on SSH, are "better ciphers" typically automatically preferred by SSH clients and servers as they are introduced? In other words, do the SSH implementations maintain a rank ordering that prefers "better" ciphers? That would be my expectation, but it seems I am often surprised by the bad defaults when dealing with security.


Generally yes (there are a lot of SSH implementations out there), but that isn't the only thing you want to protect against:

1. If there is a critically broken cipher, an attacker who can perform a MITM attack between both ends can claim to each side that only the broken cipher is supported, forcing an association using it and thus breaking your crypto transparently.

This type of attack would be high effort and targeted. Most threat models don't really need to address this issue, but disabling ciphers is so easy you might as well spend a couple of keystrokes doing it.

2. If the cipher implementation is broken (think OpenSSL's heartbleed) then leaving the cipher available opens you up to being directly attacked by botnets.

This type of attack has a high initial cost for the attacker (developing the exploit) but can be sprayed across the entire internet. This is the type of attack that would affect most people and should be protected against by patching and disabling known bad ciphers.


I wonder if this issue is prone to cases of ‘a sysadmin writing a verbose config,’ like with TLS servers where ciphers are often put in a whitelist in my experience.


> Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.

You’ve identified issues with whitelisting but blacklisting isn’t perfect either. For example, many don’t trust NIST and may want to prevent the use of any of their future curves. Blacklisting fails here.

When updating to a newer version of SSH I think it’s good practice to ‘man ssh_config’ and at least look at KexAlgorithms, HostKeyAlgorithms and Ciphers.


> Don’t try to install things with brew if brew is not installed:

   if which brew 2>&1 /dev/null ; then
        brew install jq
   fi
This just hides a useful error message (brew not installed). I would rather just see that error message (either interactively or in a log) and have the script fail. Hiding the error message just leads to an eventual failure down the road when jq is invoked.


In bash, there is the built-in command named "command". I have used it for the same purpose, as in:

    command -v brew >/dev/null && brew install jq


If you don't care for maximum POSIX compatibility, i.e. your script is bash-specific, it's better to use "hash", which is going to ignore aliases (but not functions) and also has the benefit of caching the command for further use.

  hash foo &>/dev/null || { echo "foo command not found blabla..." >&2; exit 1; } 

more info: https://stackoverflow.com/questions/592620/how-to-check-if-a...


Wow. It's impossible to google, and on MacOS, `man command` is useless.


$ help command

command: command [-pVv] command [arg ...]

    Execute a simple command or display information about commands.

    Runs COMMAND with ARGS suppressing  shell function lookup, or display
    information about the specified COMMANDs.  Can be used to invoke commands
    on disk when a function with the same name exists.
    
    Options:
      -p    use a default value for PATH that is guaranteed to find all of
            the standard utilities
      -v    print a description of COMMAND similar to the `type' builtin
      -V    print a more verbose description of each COMMAND
    
    Exit Status:
    Returns exit status of COMMAND, or failure if COMMAND is not found.


Thank you.

I never knew about help to describe a builtin shell command. Normally, searching for something like "command" in the bash man page would be very tedious.

also: help echo


How little we expect from our computers. "help <command>" should be the first thing you expect to work.


i do think 'man command' should bring up the specific man page for this, so there is no need to search the bash man page. this works with most builtin shell commands. e.g. 'man ls' is a thing


On (at least) Debian systems, and probably many more, `ls` is not a built-in; the external binary (`/usr/bin/ls`) is used.

`which ls` and `help ls` will show you, if it is similar on your system.


It doesn't need to be a built-in to have a man page (many first party and third party libraries come with their own man pages).


"man command" on linux says "no manual entry for command"

"man echo" on linux describes /usr/bin/echo; "help echo" describes the builtin.

on the other hand, "man command" on mac os x gives a huge manpage of builtins (still hard to search for the common word "command")


`command` is a bash builtin, not an external program. The information is in the bash man page (which is hard to find in there because the word "command" is in the man page 7 bajillion times. Look in the major section "SHELL BUILTIN COMMANDS" where you find `cd` and others like it).


You can determine if something is a bash built-in by using 'which command'. If it's built-in, man won't work.


In config.fish I use type for this purpose, together with the exit code

  test (type brew 2>/dev/null); and brew # blahblah


type -p also works in bash/zsh


Fish as well, but the redirection of both stdout and stderr is different:

  type -p brew >/dev/null ^&1; and brew
in Bash it would be:

  type -p brew &>/dev/null && brew
Using test avoids caring about stdout/stderr.


You don't need to check for `brew` in each task if they're run automatically. Just have one task that checks for `brew`.

With a proper dependency graph, the tasks installing things would depend on the task `brew-installed`.

This, of course, is leaving alone the point that installing stuff on startup is weird, doubly so with brew which is pretty slow.


Correction, to clarify: you need to check for `brew` in each task but not warn. One master warning is enough.


It seems you miswrote, or the author corrected himself. Right now it's:

  if which brew >/dev/null 2>&1 ; then
      brew install jq
  fi
Which would hide the error message. The code you posted outputs:

  brew not found
  /dev/null not found


It's impractical to determine ahead of time whether Hacker News' undocumented formatting language is going to eat any given angle bracket. I suspect OP wrote something correct and the site has mangled it.


It got messed up when I was trying to format it, unfortunately. But my point still stands.


I think this snippet needs some context. In all contexts that I can imagine, the first line should actually be

  if which jq &>/dev/null; then
    brew install jq
  fi


That seems more reasonable, but it's not what the author wrote. He precedes the snippet with "Don’t try to install things with brew if brew is not installed", so his intention does seem to be to swallow errors silently, which is definitely weird.


He might want his .bashrc to work on both macos and linux.


IME .bashrc already doesn't work the same on OS X, though, so that's still a weird reason.


Likely due to using the older version of bash preinstalled on Macs. Install a new one and it should be virtually identical.


> Before we begin, first note that bashrc refers to something that runs in each and every new shell, and profile refers to something that runs only in interactive shells (used by a user at a keyboard, not just a shell script, for example). They aren’t the same and you don’t want them to be the same.

I'm not an expert on bash exactly, though I am a heavy shell user (POSIX shell for scripting), but this part doesn't sound right. When is .bashrc ever executed when bash isn't interactive? And as far as I can tell, when I open new shells .profile isn't read. I am using Linux and tmux, and the reason I mention tmux is that it opens bash as a login shell and therefore .bash_profile is also loaded. Is this a macOS thing, or the version of bash macOS comes with, which I believe is really old due to some license issues?


Which startup files are read by bash and other shells in which state is very inconsistent, even across distributions of Linux. I've collapsed all of mine into .bashrc and simply source that file from the other possibilities. And on the rare occasion that I care about interactive vs not, I can make that distinction explicitly in the code.


I ran an eBPF program called opensnoop [1] to capture what files were opened during login to a system and then re-launching bash. Looks like both are read during initial login but only .bashrc for non-login shells. Output is below.

  24435  bash                3   0 /etc/profile
  24435  bash                3   0 /etc/profile.d/
  24435  bash                3   0 /etc/profile.d/256term.sh
  24435  bash                3   0 /etc/profile.d/colorgrep.sh
  24435  bash                3   0 /etc/profile.d/colorls.sh
  24435  bash                3   0 /etc/profile.d/lang.sh
  24435  bash                3   0 /etc/profile.d/less.sh
  24435  bash                3   0 /etc/profile.d/which2.sh
  24435  bash                3   0 /etc/profile.d/sh.local
  24435  bash                3   0 /home/centos/.bash_profile
  24435  bash                3   0 /home/centos/.bashrc
  24435  bash                3   0 /etc/bashrc

  24736  bash                3   0 /home/centos/.bashrc
  24736  bash                3   0 /etc/bashrc
  24736  bash                3   0 /etc/profile.d/
  24736  bash                3   0 /etc/profile.d/256term.sh
  24736  bash                3   0 /etc/profile.d/colorgrep.sh
  24736  bash                3   0 /etc/profile.d/colorls.sh
  24736  bash                3   0 /etc/profile.d/lang.sh
  24736  bash                3   0 /etc/profile.d/less.sh
  24736  bash                3   0 /etc/profile.d/which2.sh

[1] http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-li...


Be careful that some of those files explicitly include others. For example my (default) ~/.profile includes ~/.bashrc, my ~/.bash_profile includes ~/.profile, /etc/profile includes /etc/bash.bashrc...

So your capture here doesn't show only the files that bash itself decided to load. You also won't see the fallback files (e.g. bash will open .profile if .bash_profile doesn't exist).


> as far as I can tell when I open new shells .profile isn't read

True, iff you have a .bash_profile. Bash only reads the first of ~/.bash_profile, ~/.bash_login, ~/.profile. It will ignore the rest of the list once it's found an existing file.
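That fallback order can be sketched in shell itself (an approximation of the documented behavior, not bash's actual source):

```shell
# Source only the FIRST of the three candidate files that exists,
# mirroring bash's documented login-shell behavior
for f in ~/.bash_profile ~/.bash_login ~/.profile; do
  if [ -r "$f" ]; then
    . "$f"
    break  # the remaining candidates are ignored
  fi
done
```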


Still, they're not read with every new interactive shell. They're read only with login shells. From the manual:

> When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login,

> When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc

So, this is wrong:

> bashrc refers to something that runs in each and every new shell

Because .bashrc doesn't run when you execute a shell script, and doesn't run when you use `bash -c`.

And this is wrong:

> profile refers to something that runs only in interactive shells

Because it does run in non-interactive shells when they're login shells, and because it implies that it runs in every interactive shell, which it doesn't. It only runs in login shells.


Insanely enough, this is set up wrong in Debian and derivatives, where bash_profile sources bashrc as a default from /etc/skel, leading to damning "not a tty" messages because at some point something calls a tty-requiring command unconditionally.
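A common defensive idiom for this: have .bashrc check whether the shell is actually interactive before running anything tty-dependent (a generic sketch, not Debian's actual skel file):

```shell
# $- contains "i" only in interactive shells
case $- in
  *i*) INTERACTIVE=1 ;;
  *)   INTERACTIVE=0 ;;
esac

if [ "$INTERACTIVE" = 1 ]; then
  # safe to run prompt setup, completions, and tty-requiring commands here
  :
fi
```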


This explains so many of my problems. Thank you.


https://hackernoon.com/bash-profile-vs-bashrc-vs-bin-bash-vs...

Note that in macOS every new Terminal (tab) opens a login shell, while most GUI Linux environments don't. (In Tmux/screen it depends on the configuration).


There's a graph (generated by graphviz from text description) that shows the flow and files involved for bash, sh and zsh. Yes, it's insane.

https://blog.flowblok.id.au/2013-02/shell-startup-scripts.ht...


I thought macOS always loaded the shell as a login shell by default while most Linux and UNIX load a non-login shell by default.

On a Mac you usually end up adding

    if [ -f ~/.bashrc ]; then
      . ~/.bashrc;
    fi 
To your .profile or .bash_profile.

Just read that part and now I can’t help but question the rest of it before even reading it


I just moved to Catalina, and the fact that zsh has a clear and reasonable standard for startup files made up for the fact that I had to move my shell init to its files.


The way the files are loaded on Bash has a standard.

Apple breaks it, but there is.

Here’s info, https://unix.stackexchange.com/questions/170493/login-non-lo...

ZSH follows the same for login/non-login but I don’t see it for interactive.


If you do, it will even load it for non-interactive shells.

  if [ -n "$PS1" ] ; then
    [ -r ~/.bashrc     ] && . ~/.bashrc
    [ -r ~/.bash_login ] && . ~/.bash_login
  fi


Yep. It’s a mess frankly.


It sounds right to me, when running thru cron for example, I need to load env vars as part of my script startup otherwise even path is missing.


> I'm not an expert on bash exactly though I am a heavy shell user (POSIX shell for scripting) but this part doesn't sound right. When is .bashrc ever executed when bash isn't interactive?

Never.

TFA has got it wrong.


.bashrc would get executed when you run something in cron. Cron runs non-interactive shell sessions.


I tested this and this is not the case. My .bashrc does a lot of things such as custom completions that are expensive that I would not want run by anything else other than an interactive shell. It also has return at the end so if anything I don't know about adds stuff to it, it won't get executed. This would break other scripts loading .bashrc as well. So as far as I can see this is false.


From the manual:

When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist. This may be inhibited by using the --norc option.

So interactive, non-login shells only.

But also:

When invoked as an interactive shell with the name sh , bash looks for the variable ENV, expands its value if it is defined, and uses the expanded value as the name of a file to read and execute. [A] shell invoked as sh does not attempt to read and execute commands from any other startup files

So only when invoked as bash, not as sh.


Thank you, very helpful. That explains why I have to have my .bash_profile source .bashrc for use with tmux, as it executes bash as a login shell. I knew I had to have this but wasn't 100% on why until now. Note to self, RTFM!


scp executes .bashrc if I recall correctly. Any echo in .bashrc broke scp for me in the past.


This inspired me to (quickly) write down my own Stupid Bash Tricks: https://gist.io/@tsutsu/2c11fc0a36000a46566e9fd62c60dea4

Most of it might be somewhat obvious; but scroll down to "Bash Hooks" for a cool trick I've never seen anywhere else.


I found the OSX specific XDG section helpful, because I've been setting it to ~/.cache for years! Good to know there is a better way that the OS understands (a bit a least).


This got me to raise an eyebrow:

    if [[ -d "$HOME/dev/go" ]]; then
        export GOPATH="$HOME/dev/go"
    fi
Using a directory named "dev" not to store device files, but development tools. I am so stuck with conventions.


True! Also the mixing of upper and lower-case directories on ~ is a red flag.

Offtopic, but this kind of one-line conditions are typically best written as

    test -d ~/dev/go && export GOPATH=~/dev/go
then it is easier to change the sign of the condition (by using either || or &&)


File systems on macOS are case insensitive. The system-provided directories are capitalized by convention, but you may also refer to them in lowercase if you like. It’s no more a “red flag” than `c:\WINDOWS` being capitalized.

I opt for the if-brackets rather than the one line syntax because it is clearer to read. “if this, then that” is less mental effort than remembering just how boolean operators short circuit, for me at least. It's a few extra lines but I prefer the layout, indent, and ease of scanning.


Windows now has optional case sensitivity on a per-directory level. Drove me crazy trying to work out errors in an old C# codebase after a git checkout in WSL; multiple rounds of screaming THE FILE'S RIGHT THERE!


> File systems on macOS are case insensitive.

Not all of them. You can choose to have a case-sensitive filesystem if you’d like.


As of about five years ago, enabling that feature at mkfs time broke tons of apps that assume case insensitivity. Not sure if the situation has improved.


Speaking of filename weirdness.. I changed my Mac's language to Spanish a few years ago, as I was learning the language, which has many bizarre effects, online and off. One is that the Library folder in my home folder appears in Finder as Librería, yet if I try to cd to Librería or Libreria in bash it doesn't work, as it's actually Library. Same for Documentos, Imágenes, Aplicaciones, Escritorio (Desktop), Usuarios, Sistema etc. I haven't looked into how that actually works. But there seems something very wrong about files appearing as not what they're called, which I assume the whole non-English-speaking world puts up with.


> File systems on macOS are case insensitive.

Holy smacks, all these years and I never knew that! Too bad auto-complete doesn't pick up on it though.


It can. Try: set completion-ignore-case on (for bash).
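That's a readline setting, so it can be made permanent in ~/.inputrc (where it applies to bash and other readline programs), or set per-session with bash's `bind` builtin:

```
# ~/.inputrc -- case-insensitive tab completion for all readline programs
set completion-ignore-case on
```

For the current session only: `bind 'set completion-ignore-case on'`.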


A "red flag" for what? He's using macOS, in which the typical user-facing directories are propercase (/Users, ~/Documents, etc.)


I don't know, it's just very ugly and it lacks a sense of aesthetics (just like everything else on macos, so at least it is consistent).

It's like reading a book that uses three different fonts on each page. You just cannot take it seriously.


Aren't you being a little over-dramatic? I use Uppercase directories (Documents, Downloads, Development) in my ~ as well, in a consistent manner. Why is that ugly?


I dislike CaMelcAse as well, but to each his own, I have stuff backed up from my windows machine on my BSD machine that has spaces and uppercase i leave it in case i need to restore. It does not mean you cannot take it seriously.

MacOS is one of the few registered UNIX systems out there (https://en.wikipedia.org/wiki/Single_UNIX_Specification#macO...).

I actually do not own a Mac, but it is a decent OS and performs well and tries to be windows friendly (hence camelcase and spaces) and tries to be stable and secure.


I guess the downvotes tell you how other hackers feel about your opinions.


letter cases were a mistake in language itself.

cool way to bloat your glyph set to double the size and introduce aesthetic and string matching problems.


Related fact: the term "case" dates back to the old printing presses, where characters would be individual blocks of wood or metal stored in drawers. Capital letters were stored in the drawer above non-capital letters -- hence why they are referred to as "upper case" and "lower case".

In terms of your general point, upper and lower case letters were originally different stylistic type-faces rather than "modes" of a letter in the same type-face. It wasn't until relatively recently in our writing history that the rules of capital letters became defined rather than a style choice and I'm certain they weren't thinking much about the problems that might cause with string matching on digital systems invented several hundred years later.


I justify it to myself as “$HOME is different.” My fingers are also fast at typing it, and my brain is very well adjusted to “I am unprivileged” xor “I am root” - dev means vastly different things in those contexts, just as rm -r does.


Indeed, if it works for you, then it is good.

In 1990's I used to have ~/dev in school's Sparcs running SunOS, as I needed some device files that weren't available in /dev. But I was able to mknod them.

I've been also toying with chroots quite a lot and there dev is also quite essential for its original purpose. Therefore "dev" is like a reserved keyword for me in *nix systems.

Personally I use ~/w (like "work") or ~/code mostly. I've had ~/dev once for storing projects, but I had to rename it as it was distracting (to me) :)


Do you chroot to ~? Do you still use 90s SunOS?

Neither do I.


No. Above I explained to you why my experiences has made me to think "dev" is special. I don't get your reaction.


This is a point of conflict for me. I use "~/dev" for all of my development work, but I don't want to because of the convention set by "/dev".

My problem is finding an alternative to "dev" that is just as convenient. Do any of you have any suggested alternatives?


I've always found myself using `~/projects/` for my development. It's also general enough that I can put not-quite-software projects like this in there: https://github.com/lelandbatey/custom_cpu--ALU


I use ~/ops

...but not in the "operator" sense. Rather, the "operations" sense...and that is because i used to use ~/projects...But i got lazy to type that out.


sometimes I use ~/opt


I Use ~/src for ad-hocs and ~/git to home all of the git projects


Just use ~/dev. It's OK to have two folders with the same name, if they're in different places. This is after all why we have folders in the first place!


I just use ~/Developer - with tab completion it's not really any harder to type. (And on macOS ~/Developer automatically gets a unique icon in Finder)


I usually use "~/repos". That makes "re<tab>" a one hand movement in one direction.

Most everything I do development-wise is in Git, which is why I came to that name years ago.

I also have a "~/tmp" directory for one-off dev stuff that I clean up periodically.


I use ~/code


How about `~/workspace`? With tab and autocomplete it‘s not hard to type.

But I must admit that `~/code` seems a good alternative to me.


This is what I use with the specific nit that I actually only use it for "work stuff" and usually have a `~/personal/` for my own pet projects and a `~/scripts/` for anything that I want on my path.


I use ~/hg with a big monorepo


I use ~/devel


I use ~/lab


i use ~/prog


I don't like it either, but that's an artifact of using macOS - most users aren't fully aware of Linux naming conventions.


It's not a Linux convention, it's a Unix convention.


macOS also uses /dev for device nodes.


"Docker desktop for mac is closed source software, which is dumb for something that asks for administrator permissions on your local machine. This lameness aside, it runs the docker daemon..." ...Proceeds to run MacOS...


I never understood the appeal of Docker for mac. It just runs a Linux VM under the covers. If I need to use docker, I just run it in a Linux VM.


I use docker, docker-machine, and virtualbox, the former two built and packaged from ArchMac. Too bad docker-machine-driver-xhyve is not seeing more love.

Docker for Mac is a mess, it regularly pegs CPUs for no obvious reason at all on idling containers among other things. Plus now it packages k8s which made it balloon in size.


Sure, but there are probably more eyes on MacOS releases than on Docker releases.


Like every time someone points this out: there are an unreasonable number of companies that don't let you pick anything else. It's either a macbook, or a macbook, and if you want a different machine go buy it yourself and use it outside of work hours "because we can't afford writing three different copies of the same internal documentation for more than one OS when we're the ones paying for the machine you work on".

And in an unfortunate twist, if you _have_ to run Docker, macos is actually the best choice for that. Installing it on linux or windows is an exercise in "you know what, maybe I should buy a mac for this instead".


> Installing it on linux or windows is an exercise

...? Am I missing something obvious? Installing docker on Ubuntu is as simple as

    sudo apt install docker.io
and you're done. Installing from the official repos is simple too if you want the latest version [1]

[1] https://www.digitalocean.com/community/tutorials/how-to-inst...


> And in an unfortunate twist, if you _have_ to run Docker, macos is actually the best choice for that.

You are aware that Docker is a Linux technology right?

As others have pointed out it is trivial to install on Linux compared to Windows or Mac.

I get it that Mac folks like their Macs, and I will argue your case at work, but please stop the misinformation campaign.

Mac is different than Windows and Linux. Not generally better.

It is better than Linux for running Adobe software.

It is horrible for running gimp.

It cannot run Docker natively but it might have a number of other advantages.

Source: I once used a Mac. Started with great enthusiasm, gave up three years later, massively disappointed. Later realized Mac actually is the best computer that exists for Mac users, but not for me and many others.


> Installing [Docker] on linux [...] is an exercise in "you know what, maybe I should buy a mac for this instead".

This quip is outdated. Installing Docker on Linux is only a hassle if you're using ancient distributions like Debian or RHEL from 5-10 years ago. Anything released after about 2016 should have no problems at all running Docker, and probably comes with Docker in the default repos.


The author configures lots of syncing by setting the location of config files, I do it by setting up a whitelist based gitignore in my homedir:

  *
  !.gitignore
  !.bashrc
  !.ssh/authorized_keys
It's fast (comparative to a blacklist based 'git status' scan) and less work :)

On a sidenote, I'm curious about the security implications of the git repository: if the host serving the git repository is breached, as far as I know there's really nothing stopping the actor leveraging access to achieve code execution on my host, right?

I'm aware of commit signing but in the context of a raw git directory synced over ssh an attacker could create and use any valid signature key to commit to the repo. Hosting on Gitlab/Github would require a breach or significant abuse of security controls, but is still possible, too.


So you have a gitignore that says "ignore everything except these three files" -- what does that do? Is it supposed to replace the line where he curls those files to github? Isn't it awkward having all your other git repos in your home dir be under that git repo?


> what does that do? Is it supposed to replace the line where he curls those files to github

No, it is for synchronization. See the article sections titled 'SSH: Move Your SSH Config File Into a Synced Folder' and 'Extra Credit'.

> So you have a gitignore that says "ignore everything except these three files" -- what does that do?

My gitignore is upwards of 100 files, but allows me to track changes and synchronize configurations across hosts, which I do often, as I work in short-lived graphical VMs and across multiple hosts. Using a whitelist of tracked files means 'git status' won't take seconds/minutes to scan the entire directory tree under my homedir, which seemed to be the case when using a blacklist when I initially configured it.

> Isn't it awkward having all your other git repos in your home dir be under that git repo?

It breaks stuff like `git add -A` which I haven't fully solved, but don't really feel the need to - most of my commits are 2-3 files at most and I'd prefer to be aware of exactly what's being committed for the additional minor overhead.

There are other alternatives, like rsync, which solve whole-tree synchronization, but that's not what I'd normally want to do as my ultraportable has a 128GB SSD and my daily driver is a 2TB laptop. I'd be open to hearing other suggestions, but at this point git is a convenient and flexible solution that works well in my environment :)


Edit: I inadvertently stripped out the subdir whitelist; without it, subdirectory files are completely ignored irrespective of whitelist flagging. I don't understand why it works, but it works. The gitignore should be:

  *
  !*/
  !.gitignore
  !.bashrc
  !.ssh/authorized_keys


With docker >= 18.09, you can connect directly over SSH by setting DOCKER_HOST=ssh://<user>@<host>
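As a sketch (the user and host names here are made up for illustration):

```shell
# point the docker CLI at a remote engine over SSH (needs docker >= 18.09)
export DOCKER_HOST=ssh://deploy@build-host.example.com
# every docker command in this shell now talks to the remote daemon, e.g.:
#   docker ps
```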


Is there any advantage/difference to using ProxyCommand with netcat vs just saying "ProxyJump bastion.example.com"?


ProxyJump was added in OpenSSH 7.3, so systems from a few years ago might not support it. It does the same thing as ProxyCommand with `ssh -W %h:%p`, but you can’t set custom options inline for the jump connection with ProxyJump.
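For reference, a rough ~/.ssh/config sketch of the two equivalent approaches (host names are placeholders):

```
# newer OpenSSH (>= 7.3):
Host internal.example.com
    ProxyJump bastion.example.com

# older OpenSSH, same effect without netcat:
Host internal-old.example.com
    ProxyCommand ssh -W %h:%p bastion.example.com
```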


OpenSSH 7.3 was released on 2016-08-01, so more than 3 years ago. Nobody should be running SSH versions this old (hopefully).


CentOS 6 lives on in enterprise, unfortunately. (I don't know the exact OpenSSH version, but it didn't support -J when I checked earlier.)


I've had my shell startup scripts modularized for many years now, but I do a much more complicated system where the scripts actually have a function `require` exposed to them that can be used to express dependencies. If you `require` something that's already been loaded it does nothing, so each script is still only just loaded once.
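A minimal sketch of that pattern (the module directory and module contents below are made up so the example is self-contained):

```shell
# toy module directory, just for demonstration
MODDIR=$(mktemp -d)
printf 'count=$((count+1))\n' > "$MODDIR/colors.sh"

# require: source a module once; repeated calls are no-ops
_LOADED=""
require() {
  case " $_LOADED " in *" $1 "*) return 0 ;; esac
  _LOADED="$_LOADED $1"
  . "$MODDIR/$1.sh"
}

count=0
require colors
require colors   # already loaded: does nothing
echo "$count"    # prints 1
```

A real version would also want an error if the module file doesn't exist.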


> I use Mac OS X (pron: “ten”).

To be extra pedantic: unless you’re running an eight-year-old OS, you are likely running either OS X or macOS.


Thank you!


author should probably be using: command -v

instead of: which
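i.e. something along these lines:

```shell
# 'command -v' is specified by POSIX and respects shell builtins and
# functions, unlike which(1), whose behavior varies between systems
if command -v sh >/dev/null 2>&1; then
  echo "sh is available"
fi
```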



His tip about running "make clean" on ~/Desktop is okay, assuming people actually use Desktop the way he does. Which I observe that many, possibly most, users don't. Apple says that "The desktop is where you do most of your work", but I see a lot of desktops with no files whatsoever.

https://support.apple.com/en-gb/guide/mac-help/mh40612/mac


Very true, because I don't want any file icons on my wallpaper. Btw, I use this https://irvue.tumblr.com/


You can also hide all the desktop icons with a defaults setting:

    defaults write com.apple.finder CreateDesktop false


Aha, thanks for sharing!


Everybody in development that I work with keeps all their local work in ~/username


> "always ssh as root" example

urggh... I've seen devs do this too many times and it breaks a bunch of stuff when configured incorrectly. You almost never want "Host *" to have a user entry.

also completely ignores the "don't ssh as root" opsec best practice. I would strongly argue that even for dev environments, it's worthwhile building the opsec muscle memory and spending the effort at the start of a project.
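One way to keep the convenience without a global root login is to scope the User directive narrowly; a rough sketch with placeholder host names:

```
# scope User to the hosts that actually need it, never to Host *
Host dev-*.example.com
    User root        # only if you really must

Host *
    # shared options only; no User line here
    ServerAliveInterval 60
```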


The best way to store SSH private keys is in a hardware security module, or HSM.

I have three Yubikeys. Would I need to add three ssh keys to every one of my ssh accounts?


Yes.

If one Yubikey were lost, stolen, or damaged, you would then revoke its access by removing the corresponding entry in ~/.ssh/authorized_keys.


You can load keys into yubikeys if you like (and thus load the same key into multiple devices), but I choose to generate unique keys on-device, so each one has its own (which would make the answer to your question “yes”).

One upside though is that all your keys can go as individual lines in authorized_keys, so there is still only one file to install on remote machines.


There are sites that show how to move your key to multiple Yubikeys. (Basically, backup your keyring before moving to Yubikey, then restore and repeat move to a new Yubikey).

This is the guide I followed: https://github.com/drduh/YubiKey-Guide


Unless you put the same OpenPGP key on all of them, or you want them to have different levels of access.


Thanks for the cached ssh connections thing, didn't know that was possible. Not that useful when I'm on my environment with TMUX (typically have each connection in its own tmux pane) but massively useful on all the jump boxes where I don't have a sensible shell environment :)
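For anyone else looking for it: the caching being referred to is OpenSSH's ControlMaster connection multiplexing. A typical ~/.ssh/config sketch (the socket path is just a common convention, and the directory must exist):

```
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 10m
```

Run `mkdir -p ~/.ssh/sockets` first, otherwise new connections will fail to create the control socket.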


Using a tor hidden service for emergency ssh access is pretty sweet and I'm going to have to go set that up for myself now. Maybe with an extra bit to auto-publish the hostname so I don't have to write it down every time something changes.
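The server side is only a few lines of torrc (the directory path is illustrative; the port mapping assumes sshd on the default port 22):

```
# /etc/tor/torrc
HiddenServiceDir /var/lib/tor/ssh_hidden_service/
HiddenServicePort 22 127.0.0.1:22
```

Tor writes the generated .onion name to the `hostname` file inside HiddenServiceDir, which is the bit you'd want to auto-publish.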


I find it weird that an article about "Unix tricks" requires the GNU bash shell.


Instead of which it's better to use command -v.


SSH ProxyCommand: make sure to use the ssh -W option rather than depending on netcat.
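i.e. something like this in ~/.ssh/config (the bastion name is a placeholder):

```
ProxyCommand ssh -W %h:%p bastion.example.com
```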


I was expecting something like

    cat file | less
Stop abusing cats!


another interesting site is:

https://dotfiles.github.io/


Bash is not Unix.


But it does run on my Unix boxes.


Although amusingly, not exclusively; it also runs on Haiku and NT. (This isn't at all to say that it doesn't fit in an article about Unix, since it's very popular there (just like talking about Firefox on Unix wouldn't be out of place), just a fun side note.)


Haiku is a UNIX, though. :)


No, because Bash is GNU is Not Unix.


Embrace, extend, extinguish. GNU has stage three well under way.


Well, I had to stop reading when I scrolled a bit and some pop-up blocked the content, with (I presume) the button to close it off-screen on mobile.


There should have been an X at the top right corner.

Also, do you have reader mode on your device? That would fix it too.


Or you know, people could not put popovers on blogs.


If you're not browsing with an adblocker installed, then this complaint is basically moot: it's 2019, the ad-free internet died years ago; use a decent adblocker.

If they show up even with that active, though, now you have a valid complaint.


Or you can use sensible alerts and ads.

Good taste hasn't died and never will.

And there was never effectively an “ad-free internet”, if by internet you mean Web. We had ads on the Web in 1994.


It was some kind of mailing list solicitation from the author (I actually didn't even bother reading it, just frantically closed it as quickly as possible), so not blocked by an ad blocker. I'm running ad blockers and pihole and I still saw it.


I have uBlock Origin on, and the popup did appear. Maybe it wasn't an ad. I don't really know because I closed it reflexively.


While this is common on desktop, do you have suggestions for how to do this on mobile?


Firefox mobile supports extensions, including ublock origin (and umatrix, if you're into that).


With RSS having gone by the wayside, email has become the sole reliable method of doing non-invasive push notifications.

I wish it weren’t so, but collecting emails was the personal advice of someone highly regarded by both myself and HN so it’s what I do—with an apology in the modal itself.


I don’t really remember popups ‘please add our feed to your RSS reader’, instead there was a link to add the feed which took exactly enough screen space.


It was terrible advice.



