Regarding the "SSH: Faster Crypto" point: you should not enforce one single specific cipher for ssh. The stated reason also seems wrong; modern hardware should achieve better performance with an AEAD cipher (such as AES-GCM or ChaCha20-Poly1305) than with AES-CTR, since the latter also requires an additional HMAC.
If there really is a slow (or insecure) cipher you do not want to use, remove it by prepending a minus sign, for example `Ciphers -3des-cbc`, which keeps all other default ciphers. Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.
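In ~/.ssh/config that looks like the following (a sketch; the minus-sign removal syntax requires a reasonably recent OpenSSH, 7.5 or later):

```
# ~/.ssh/config
Host *
    # Drop only the cipher you distrust; everything else in the
    # default list (including future additions) stays available.
    Ciphers -3des-cbc
```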
>Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.
Is that actually a real concern at this point vs the risk that comes from not white listing a few known reliable ones? It seems like old-style security, favoring modularity and "what if we need to change this someday soon?", but in practice it's turned out to be a lot less valuable than expected and raise significant risks of accidentally using something bad. We seem to be past the point where new ciphers being "better" is actually an event to be expected with any real frequency, prime/elliptic curve seems pretty mature now. Post-quantum could be a new frontier at some point, but may well require significant other changes as well.
WireGuard for example decided to just flat out say that any cipher changes will be tightly coupled with a full on version change. If you're using it you know exactly what you're getting and that's it.
I don't disagree that white listing means care with what is chosen, and picking AEAD-based seems like a better idea anyway (WG is Curve25519/ChaCha20-Poly1305/SipHash/BLAKE2s). Plus there is server compatibility to consider in some cases. But I'm not sure the logic of "you might miss out on better ciphers as they are added" is convincing either, vs setting yourself an alarm to recheck your SSH setup every year or two. Shouldn't any cipher additions/deletions arguably be something you actively consider, rather than have automagically added?
SHA-1 was already broken (Feb 2005) when git was first published (April 2005). But Linus decided that git doesn't need a collision resistant hash function. https://marc.info/?l=git&m=115678778717621
If you know an earlier instance, go ahead and take the crown from the shattered folks.
---
The choice to use SHA-1 was a trade-off of security, size, and performance. If Linus invented git today, I imagine the choice would have been different, because those parameters are now different.
In cryptography, "broken" means "known attack significantly faster than brute force", and such an attack was published in 2005. And cryptographers were advocating for deprecating it several years before that, because the security margin was clearly insufficient.
https://www.schneier.com/blog/archives/2005/02/sha1_broken.h...
The time between a theoretical attack and practical demonstration of an attack should be considered a grace period we can use to migrate to a secure primitive. Choosing SHA-1 for an application which relies on collision resistance after the 2005 papers is plain incompetence.
Git chose SHA-1 because Linus did not consider collisions a problem. The downsides of SHA-256 were pretty small even then (32 instead of 20 bytes, and somewhat slower performance which is still faster than most IO).
> Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.
That sounds like good advice; thank you. Out of curiosity, as someone who is fairly noobish on SSH, are "better ciphers" typically automatically preferred by SSH clients and servers as they are introduced? In other words, do the SSH implementations maintain a rank ordering that prefers "better" ciphers? That would be my expectation, but it seems I am often surprised by the bad defaults when dealing with security.
Generally yes (there are a lot of SSH implementations out there), but that isn't the only thing you want to protect against:
1. If there is a critically broken cipher, an attacker who can perform a MITM attack can claim to each end that it only supports the broken cipher, forcing a connection that uses it and thus breaking your crypto transparently.
This type of attack would be high effort and targeted. Most threat models don't really need to address it, but disabling ciphers is so easy you might as well spend a couple of keystrokes doing it.
2. If the cipher implementation is broken (think OpenSSL's heartbleed) then leaving the cipher available opens you up to being directly attacked by botnets.
This type of attack has a high initial cost for the attacker (developing the exploit) but can be sprayed across the entire internet. This is the type of attack that would affect most people and should be protected against by patching and disabling known bad ciphers.
I wonder if this issue is prone to the "sysadmin writing a verbose config" problem, as with TLS servers, where in my experience ciphers are often pinned in a whitelist.
> Otherwise, you will miss out on better ciphers as they are added and would be stuck on this one forever.
You’ve identified issues with whitelisting but blacklisting isn’t perfect either. For example, many don’t trust NIST and may want to prevent the use of any of their future curves. Blacklisting fails here.
When updating to a newer version of SSH I think it’s good practice to ‘man ssh_config’ and at least look at KexAlgorithms, HostKeyAlgorithms and Ciphers.
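For that periodic audit, a couple of commands make it quick: `ssh -Q` lists what the installed OpenSSH client supports at all, and `ssh -G` prints the options that would actually be used for a given host after all config files apply (`example.com` is a placeholder):

```shell
# List every cipher, key-exchange, and key type this OpenSSH build supports
ssh -Q cipher
ssh -Q kex
ssh -Q key

# Show the effective algorithm lists for a host, after all config files apply
ssh -G example.com | grep -Ei '^(ciphers|kexalgorithms|hostkeyalgorithms)'
```

Comparing the `-Q` output across upgrades is an easy way to notice when algorithms are added or dropped.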
> Don’t try to install things with brew if brew is not installed:
if which brew >/dev/null 2>&1 ; then
brew install jq
fi
This just hides a useful error message (brew not installed). I would rather just see that error message (either interactively or in a log) and have the script fail. Hiding the error message just leads to an eventual failure down the road when jq is invoked.
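A minimal sketch of that fail-loudly approach (the `require` helper name is mine, not the author's):

```shell
# Fail loudly when a required tool is missing, instead of silently skipping.
require() {
    command -v "$1" >/dev/null 2>&1 || {
        echo "error: $1 is not installed" >&2
        return 1
    }
}

# The message now shows up interactively or in a log; use
# `require brew || exit 1` instead if the whole script should stop.
if require brew; then
    brew install jq
fi
```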
If you don't care for maximum POSIX compatibility, i.e. your script is bash-specific, it's better to use "hash", which is going to ignore aliases (but not functions) and also has the benefit of caching the command for further use.
hash foo &>/dev/null || { echo "foo command not found blabla..." >&2; exit 1; }
command: command [-pVv] command [arg ...]
    Execute a simple command or display information about commands.

    Runs COMMAND with ARGS suppressing shell function lookup, or display
    information about the specified COMMANDs. Can be used to invoke commands
    on disk when a function with the same name exists.

    Options:
      -p    use a default value for PATH that is guaranteed to find all of
            the standard utilities
      -v    print a description of COMMAND similar to the `type' builtin
      -V    print a more verbose description of each COMMAND

    Exit Status:
    Returns exit status of COMMAND, or failure if COMMAND is not found.
I never knew about using `help` to describe a builtin shell command. Normally, searching for something like "command" in the bash man page would be very tedious.
I do think 'man command' should bring up the specific man page for this, so there is no need to search through the bash man page. This works with most builtin shell commands; e.g. 'man ls' is a thing.
`command` is a bash builtin, not an external program. The information is in the bash man page (which is hard to find in there because the word "command" is in the man page 7 bajillion times. Look in the major section "SHELL BUILTIN COMMANDS" where you find `cd` and others like it).
It's impractical to determine ahead of time whether Hacker News's undocumented formatting language is going to eat any given angle bracket. I suspect OP wrote something correct and the site has mangled it.
That seems more reasonable, but it's not what the author wrote. He precedes the snippet with "Don’t try to install things with brew if brew is not installed", so his intention does seem to be to swallow errors silently, which is definitely weird.
Before we begin, first note that bashrc refers to something that runs in each and every new shell, and profile refers to something that runs only in interactive shells (used by a user at a keyboard, not just a shell script, for example). They aren’t the same and you don’t want them to be the same.
I'm not an expert on bash exactly, though I am a heavy shell user (POSIX shell for scripting), but this part doesn't sound right. When is .bashrc ever executed when bash isn't interactive? And as far as I can tell, when I open new shells, .profile isn't read. I am using Linux and tmux; I mention tmux because it opens bash as a login shell, so .bash_profile is also loaded. Is this a macOS thing, or the version of bash macOS comes with, which I believe is really old due to some license issues?
Which startup files are read by bash and other shells in which state is very inconsistent, even across distributions of Linux. I've collapsed all of mine into .bashrc and simply source that file from the other possibilities. And on the rare occasion that I care about interactive vs not, I can make that distinction explicitly in the code.
I ran an eBPF program called opensnoop [1] to capture what files were opened during login to a system and then re-launching bash. Looks like both are read during initial login but only .bashrc for non-login shells. Output is below.
Be careful that some of those files explicitly include others. For example my (default) ~/.profile includes ~/.bashrc, my ~/.bash_profile includes ~/.profile, /etc/profile includes /etc/bash.bashrc...
So your capture here doesn't show only the files that bash itself decided to load. You also won't see the fallback files (e.g. bash will open .profile if .bash_profile doesn't exist).
as far as I can tell when I open new shells .profile isn't read
True, iff you have a .bash_profile. Bash only reads the first of ~/.bash_profile, ~/.bash_login, ~/.profile. It will ignore the rest of the list once it's found an existing file.
Still, they're not read with every new interactive shell. They're read only with login shells. From the manual:
> When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login,
> When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc
So, this is wrong:
> bashrc refers to something that runs in each and every new shell
Because .bashrc doesn't run when you execute a shell script, and doesn't run when you use `bash -c`.
And this is wrong:
> profile refers to something that runs only in interactive shells
Because it does run in non-interactive shells when they're login shells, and because it implies that it runs in every interactive shell, which it doesn't. It only runs in login shells.
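This is easy to check empirically with a throwaway HOME (the temp directory and the marker string are made up for the demonstration):

```shell
# Create a fake HOME whose .bashrc just announces itself
tmp=$(mktemp -d)
echo 'echo bashrc-ran' > "$tmp/.bashrc"

# Non-interactive, non-login: ~/.bashrc is NOT read
HOME=$tmp bash -c 'true'                 # "bashrc-ran" does not appear

# Interactive, non-login: ~/.bashrc IS read
HOME=$tmp bash -i -c 'true' 2>/dev/null  # "bashrc-ran" appears

# Login shell: profile files are read instead; ~/.bashrc is not
HOME=$tmp bash -l -c 'true'              # "bashrc-ran" does not appear
```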
Insanely enough, this is set up wrong in Debian and derivatives, where .bash_profile sources .bashrc by default from /etc/skel, leading to damning "not a tty" messages because at some point something calls a tty-requiring command unconditionally.
Note that in macOS every new Terminal (tab) opens a login shell, while most GUI Linux environments don't. (In Tmux/screen it depends on the configuration).
I just moved to Catalina, and the fact that zsh has a clear and reasonable standard for startup files made up for the fact that I had to move my shell init to its files.
> I'm not an expert on bash exactly though I am a heavy shell user (POSIX shell for scripting) but this part doesn't sound right. When is .bashrc ever executed when bash isn't interactive?
I tested this and this is not the case. My .bashrc does a lot of things such as custom completions that are expensive that I would not want run by anything else other than an interactive shell. It also has return at the end so if anything I don't know about adds stuff to it, it won't get executed. This would break other scripts loading .bashrc as well. So as far as I can see this is false.
When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist. This may be inhibited by using the --norc option.
So interactive, non-login shells only.
But also:
When invoked as an interactive shell with the name sh , bash looks for the variable ENV, expands its value if it is defined, and uses the expanded value as the name of a file to read and execute. [A] shell invoked as sh does not attempt to read and execute commands from any other startup files
Thank you, very helpful. That explains why I have to have my .bash_profile source .bashrc for use with tmux, as it executes bash as a login shell. I knew I had to have this but wasn't 100% on why until now. Note to self, RTFM!
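For reference, the pattern in question is just a guarded source in ~/.bash_profile (a sketch):

```shell
# ~/.bash_profile
# Login shells (tmux, ssh, macOS Terminal) read this file but not ~/.bashrc,
# so pull .bashrc in explicitly to get one consistent environment.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```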
I found the OSX-specific XDG section helpful, because I've been setting it to ~/.cache for years! Good to know there is a better way that the OS understands (a bit, at least).
File systems on macOS are case insensitive. The system-provided directories are capitalized by convention, but you may also refer to them in lowercase if you like. It’s no more a “red flag” than `c:\WINDOWS` being capitalized.
I opt for the if-brackets rather than the one line syntax because it is clearer to read. “if this, then that” is less mental effort than remembering just how boolean operators short circuit, for me at least. It's a few extra lines but I prefer the layout, indent, and ease of scanning.
Windows now has optional case sensitivity at a per-directory level. It drove me crazy trying to work out errors in an old C# codebase after a git checkout in WSL, with multiple rounds of screaming THE FILE'S RIGHT THERE!
As of about five years ago, enabling that feature at mkfs time broke tons of apps that assume case insensitivity. Not sure if the situation has improved.
Speaking of filename weirdness.. I changed my Mac's language to Spanish a few years ago, as I was learning the language, which has many bizarre effects, online and off. One is that the Library folder in my home folder appears in Finder as Librería, yet if I try to cd to Librería or Libreria in bash it doesn't work, as it's actually Library. Same for Documentos, Imágenes, Aplicaciones, Escritorio (Desktop), Usuarios, Sistema etc. I haven't looked into how that actually works. But there seems something very wrong about files appearing as not what they're called, which I assume the whole non-English-speaking world puts up with.
Aren't you being a little over-dramatic? I use Uppercase directories (Documents, Downloads, Development) in my ~ as well, in a consistent manner. Why is that ugly?
I dislike CaMelcAse as well, but to each his own. I have stuff backed up from my Windows machine on my BSD machine that has spaces and uppercase; I leave it alone in case I need to restore. It does not mean you cannot take it seriously.
I actually do not own a Mac, but it is a decent OS and performs well and tries to be windows friendly (hence camelcase and spaces) and tries to be stable and secure.
Related fact: the term "case" dates back to the old printing presses, where characters were individual blocks of wood or metal stored in drawers. Capital letters were stored in the drawer above the non-capital letters -- hence why they are referred to as "upper case" and "lower case".
In terms of your general point, upper and lower case letters were originally different stylistic type-faces rather than "modes" of a letter in the same type-face. It wasn't until relatively recently in our writing history that the rules of capital letters became defined rather than a style choice and I'm certain they weren't thinking much about the problems that might cause with string matching on digital systems invented several hundred years later.
I justify it to myself as “$HOME is different.” My fingers are also fast at typing it, and my brain is very well adjusted to “I am unprivileged” xor “I am root” - dev means vastly different things in those contexts, just as rm -r does.
In 1990's I used to have ~/dev in school's Sparcs running SunOS, as I needed some device files that weren't available in /dev. But I was able to mknod them.
I've been also toying with chroots quite a lot and there dev is also quite essential for its original purpose. Therefore "dev" is like a reserved keyword for me in *nix systems.
Personally I use ~/w (like "work") or ~/code mostly. I've had ~/dev once for storing projects, but I had to rename it as it was distracting (to me) :)
I've always found myself using `~/projects/` for my development. It's also general enough that I can put not-quite-software projects like this in there: https://github.com/lelandbatey/custom_cpu--ALU
Just use ~/dev. It's OK to have two folders with the same name, if they're in different places. This is after all why we have folders in the first place!
This is what I use with the specific nit that I actually only use it for "work stuff" and usually have a `~/personal/` for my own pet projects and a `~/scripts/` for anything that I want on my path.
"Docker desktop for mac is closed source software, which is dumb for something that asks for administrator permissions on your local machine. This lameness aside, it runs the docker daemon..." ...Proceeds to run MacOS...
I use docker, docker-machine, and virtualbox, the former two built and packaged from ArchMac. Too bad docker-machine-driver-xhyve is not seeing more love.
Docker for Mac is a mess, it regularly pegs CPUs for no obvious reason at all on idling containers among other things. Plus now it packages k8s which made it balloon in size.
Like every time someone points this out: there are an unreasonable amount of companies that don't let you pick anything else. It's either a macbook, or a macbook, and if you want a different machine go buy it yourself and use it outside of work hours "because we can't afford writing three different copies of the same internal documentation for more than one OS when we're the ones paying for the machine you work on".
And in an unfortunate twist, if you _have_ to run Docker, macos is actually the best choice for that. Installing it on linux or windows is an exercise in "you know what, maybe I should buy a mac for this instead".
> And in an unfortunate twist, if you _have_ to run Docker, macos is actually the best choice for that.
You are aware that Docker is a Linux technology right?
As others have pointed out it is trivial to install on Linux compared to Windows or Mac.
I get it that Mac folks like their Macs, and I will argue your case at work, but please stop the misinformation campaign.
Mac is different than Windows and Linux. Not generally better.
It is better than Linux for running Adobe software.
It is horrible for running gimp.
It cannot run Docker natively but it might have a number of other advantages.
Source: I once used a Mac. Started with great enthusiasm, gave up three years later, massively disappointed. Later I realized the Mac actually is the best computer that exists for Mac users, but not for me and many others.
> Installing [Docker] on linux [...] is an exercise in "you know what, maybe I should buy a mac for this instead".
This quip is outdated. Installing Docker on Linux is only a hassle if you're using ancient distributions like Debian or RHEL from 5-10 years ago. Anything released after about 2016 should have no problems at all running Docker, and probably comes with Docker in the default repos.
The author configures lots of syncing by setting the location of config files, I do it by setting up a whitelist based gitignore in my homedir:
*
!.gitignore
!.bashrc
!.ssh/authorized_keys
It's fast (comparative to a blacklist based 'git status' scan) and less work :)
On a side note, I'm curious about the security implications of the git repository: if the git hosting service is breached, as far as I know there's really nothing stopping the actor from leveraging that access to achieve code execution on my host, right?
I'm aware of commit signing but in the context of a raw git directory synced over ssh an attacker could create and use any valid signature key to commit to the repo. Hosting on Gitlab/Github would require a breach or significant abuse of security controls, but is still possible, too.
So you have a gitignore that says "ignore everything except these three files" -- what does that do? Is it supposed to replace the line where he curls those files to github? Isn't it awkward having all your other git repos in your home dir be under that git repo?
> what does that do? Is it supposed to replace the line where he curls those files to github
No, it is for synchronization. See the article sections titled 'SSH: Move Your SSH Config File Into a Synced Folder' and 'Extra Credit'.
> So you have a gitignore that says "ignore everything except these three files" -- what does that do?
My gitignore is upwards of 100 files, but allows me to track changes and synchronize configurations across hosts, which I do often as I frequently work in short-lived graphical VMs and across multiple hosts. Using a whitelist of tracked files means 'git status' won't take seconds/minutes to scan the entire directory tree under my homedir, which seemed to be the case when using a blacklist when I initially configured it.
> Isn't it awkward having all your other git repos in your home dir be under that git repo?
It breaks stuff like `git add -A` which I haven't fully solved, but don't really feel the need to - most of my commits are 2-3 files at most and I'd prefer to be aware of exactly what's being committed for the additional minor overhead.
There are other alternatives, like rsync, which solve entire-tree synchronization, but that's not what I'd normally want to do, as my ultraportable has a 128GB SSD and my daily driver is a 2TB laptop. I'd be open to hearing other suggestions, but at this point git is a convenient and flexible solution that works well in my environment :)
Edit: I inadvertently stripped out the subdir whitelist; without it, subdirectory files are completely ignored irrespective of whitelist flagging. I don't understand why it works, but it works. The gitignore should be:
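(The corrected listing appears to have been dropped here. A version that works, assuming the same three files, adds `!*/`: git only descends into un-ignored directories, so directories themselves have to be whitelisted before an entry like `.ssh/authorized_keys` can take effect.)

```
*
!*/
!.gitignore
!.bashrc
!.ssh/authorized_keys
```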
ProxyJump was added in OpenSSH 7.3 so systems from a few years ago might not support it. It does the same thing as ProxyCommand with -W %h:%p but you can’t set custom options for the jump connection with ProxyJump.
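For comparison, the two equivalent forms side by side (host names are placeholders):

```
# ~/.ssh/config
Host target
    # OpenSSH 7.3 and newer
    ProxyJump jumphost

Host target-legacy
    # Equivalent on older clients
    ProxyCommand ssh -W %h:%p jumphost
```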
I've had my shell startup scripts modularized for many years now, but I do a much more complicated system where the scripts actually have a function `require` exposed to them that can be used to express dependencies. If you `require` something that's already been loaded it does nothing, so each script is still only just loaded once.
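A minimal sketch of that idea; the function name `require` matches the comment, but the ~/.bashrc.d layout and file naming are my assumptions:

```shell
# Load a startup module at most once, resolving dependencies as it goes.
# Modules live in ~/.bashrc.d/<name>.sh and may themselves call `require`.
require() {
    case " $__required " in
        *" $1 "*) return 0 ;;          # already loaded: do nothing
    esac
    __required="$__required $1"        # mark before sourcing to survive cycles
    . "$HOME/.bashrc.d/$1.sh"
}
```

A module like aliases.sh can then start with `require colors` and be sourced from any entry point without double-loading.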
His tip about running "make clean" on ~/Desktop is okay, assuming people actually use Desktop the way he does. Which I observe that many, possibly most, users don't. Apple says that "The desktop is where you do most of your work", but I see a lot of desktops with no files whatsoever.
urggh... I've seen devs do this too many times and it breaks a bunch of stuff when configured incorrectly. You almost never want "Host *" to have a user entry.
also completely ignores the "don't ssh as root" opsec best practice. I would strongly argue that even for dev environments, it's worthwhile building the opsec muscle memory and spending the effort at the start of a project.
You can load keys into yubikeys if you like (and thus load the same key into multiple devices), but I choose to generate unique keys on-device, so each one has its own (which would make the answer to your question “yes”).
One upside though is that all your keys can go as individual lines in authorized_keys, so there is still only one file to install on remote machines.
There are sites that show how to move your key to multiple Yubikeys. (Basically, backup your keyring before moving to Yubikey, then restore and repeat move to a new Yubikey).
Thanks for the cached ssh connections thing, didn't know that was possible. Not that useful when I'm on my environment with TMUX (typically have each connection in its own tmux pane) but massively useful on all the jump boxes where I don't have a sensible shell environment :)
Using a tor hidden service for emergency ssh access is pretty sweet and I'm going to have to go set that up for myself now. Maybe with an extra bit to auto-publish the hostname so I don't have to write it down every time something changes.
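For anyone else setting this up, the server side is just two torrc lines, and the client tunnels through the local SOCKS proxy (paths and ports here are the common defaults, and the .onion name is a placeholder):

```
# /etc/tor/torrc (server)
HiddenServiceDir /var/lib/tor/ssh_hidden_service/
HiddenServicePort 22 127.0.0.1:22

# ~/.ssh/config (client); the real .onion hostname is written to
# /var/lib/tor/ssh_hidden_service/hostname on the server
Host rescue
    HostName youronionaddress.onion
    ProxyCommand nc -x localhost:9050 -X 5 %h %p
```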
Although amusingly, not exclusively; it also runs on Haiku and NT. (This isn't at all to say that it doesn't fit in an article about Unix, since it's very popular there (just like talking about Firefox on Unix wouldn't be out of place), just a fun side note.)
If you're not browsing with an adblocker installed, then this complaint is basically moot: it's 2019, the ad-free internet died years ago; use a decent adblocker.
If they show up even with that active, though, now you have a valid complaint.
It was some kind of mailing list solicitation from the author (I actually didn't even bother reading it, just frantically closed it as quickly as possible), so not blocked by an ad blocker. I'm running ad blockers and pihole and I still saw it.
With RSS having gone by the wayside, email has become the sole reliable method of doing non-invasive push notifications.
I wish it weren’t so, but collecting emails was the personal advice of someone highly regarded by both myself and HN so it’s what I do—with an apology in the modal itself.
I don’t really remember popups ‘please add our feed to your RSS reader’, instead there was a link to add the feed which took exactly enough screen space.