Following the Hardening Guide link [1] I see that most server configs consist of the same 3 steps:
* Re-generate (and enable) the RSA and ED25519 keys
* Remove small Diffie-Hellman moduli
* Restrict supported key exchange, cipher, and MAC algorithms
As someone ignorant of security matters, my obvious question is: if these are such important things to configure, why don't servers come with them done by default? There must be thousands of people who don't know about this guide, don't necessarily know a lot about security hardening, and would benefit from more secure defaults.
I get "because compatibility" might be a reason, but that would only maybe apply to the last point, wouldn't it?
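For anyone wondering what those three steps actually involve, they boil down to roughly the following. This is a sketch run in a scratch directory so nothing system-wide is touched; on a real server the files live in /etc/ssh, the commands need root, and the algorithm lists should come from the guide itself rather than the illustrative subset below.

```shell
# Scratch directory so nothing system-wide is touched.
mkdir -p /tmp/ssh-harden && cd /tmp/ssh-harden

# 1. Re-generate the RSA and Ed25519 host keys.
ssh-keygen -q -t rsa -b 4096 -f ssh_host_rsa_key -N ""
ssh-keygen -q -t ed25519 -f ssh_host_ed25519_key -N ""

# 2. Remove small Diffie-Hellman moduli. Field 5 of the moduli file is
#    the modulus size minus one; keep only >= 3072-bit groups. Two fake
#    entries stand in for the real /etc/ssh/moduli here.
printf '20230101 2 6 2 2047 5 ab\n20230101 2 6 2 3071 5 cd\n' > moduli
awk '$5 >= 3071' moduli > moduli.safe && mv moduli.safe moduli

# 3. Restrict key exchange, cipher, and MAC algorithms
#    (illustrative subset, not the guide's full lists).
cat > hardening.conf <<'EOF'
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
EOF
```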
Many (most?) releases auto-generate the SSH server keys and set default configuration during OS installation or when the SSH server is enabled. Changing the server key protects you against the case that whoever made the distribution did it wrong: either they used keys duplicated across hosts, or intentionally compromised the key, or used a poor PRNG, or used a less desirable algorithm, perhaps for compatibility purposes.
That is not to say that any of those things actually happened; it’s just that the most paranoid/defense-oriented view is not to trust what the OS did for you, but to do it yourself.
A distro provider is incentivized to find a local optimum between security and compatibility and has no idea about your environment.
You have much more context about your environment- you know what you need to connect to, what needs to connect to you, and what ciphers, key length, and other configuration that you must support. Therefore if you take all that into account you might be able to optimize for security by disabling compatibility options you don’t need.
BTW this answer is pretty much the same when it comes to hardening any security feature of any OS. Vendors/distributors do a good job of hardening but you might be able to lock things down just a little further.
> You have much more context about your environment- you know what you need to connect to, what needs to connect to you, and what ciphers, key length, and other configuration that you must support. Therefore if you take all that into account you might be able to optimize for security by disabling compatibility options you don’t need.
As long as that's what you actually do. If instead you want a checklist to blindly execute to "harden" your system, then the grandparent commenter is correct: either you can accept that your distribution's defaults are actually good enough, or you're removing functionality and will sooner or later run into a hurdle you've left for yourself.
If your OS distributor screwed you that badly (and a short check should be sufficient to determine if they did if the distribution is popular!) you’re deeply screwed anyway. And there is nothing post-facto you can do from within that distribution.
For two years there was an undiscovered Debian-specific vulnerability in their OpenSSL package that borked key generation. But your point about being screwed from within applies to this case too: regenerating the keys on an affected system would not have solved the problem until the package was patched. For those curious, look up ssh-vulnkey.
>Changing the server key protects you against the case that whoever made the distribution did it wrong
I'm unsure of ssh_host_* and its importance; however, on some hosts I manage the "ssh_host_rsa" key is only 2048 bits. Would regenerating "ssh_host_rsa" to be larger (4096 bits) not be an improvement?
These things (the ones you listed) aren't in fact that important to configure. What matters is that you disable password login. The rest of it is pretty performative. Small-modulus finite-field Diffie-Hellman isn't how your server is going to get compromised. The server is going to accept the first algorithm the client proposes that the server also supports, which isn't going to be FFDH at all.
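For reference, that one change is a couple of lines in sshd_config. This is a sketch: the option names are the current OpenSSH ones, and KbdInteractiveAuthentication was called ChallengeResponseAuthentication in older releases.

```
# /etc/ssh/sshd_config (or a drop-in file under sshd_config.d/)
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
```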
The reasons this site gives for disabling certain algorithms are, to say the least, very speculative.
Just to give two examples: it recommends disabling 2048-bit Diffie-Hellman. There isn't even the slightest evidence that any attack on that size might be possible. It's just based on policy recommendations that convert DH size into an equivalent key strength, and 2048-bit DH ends up below 128 bits.
It also recommends disabling HMAC-SHA1. Now, SHA-1 is not collision resistant, but HMAC is not affected by that weakness, so there's no real risk.
Generally, OpenSSH has already improved algorithm security quite a lot over the years and everything that is remotely close to a real attack is disabled by default.
I think the logic (there’s a link to an opinionated article they base their recommendations on) is the same as with TLS1.3, why wait until something gets broken? It selects a few safe choices and disables everything else. You don’t usually need to support more than two different algos, as you usually control both the server and the client.
Slower (as in less bandwidth and higher latency) connections, using more CPU. Not sure that makes any difference unless you use it for a tunnel or open tons of connections for some reason.
If you're using pre-compiled images, OpenWRT has a package manager and repo with OpenSSH in it. I used to do something like this[1] to install and update OpenSSH outside of internal flash, before deciding that exposing SSH on my router was a bad idea in general.
If you're compiling OpenWRT yourself, you can easily just switch them out in your base image, too.
I added a Telegram poll in a private group with a yes/no option as a second factor for all my (ssh) servers (using session pam_exec.so /path/to/poll/binary). If my ssh key gets compromised, I can still deny ssh access. Unless my entire laptop gets stolen while running of course.
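For context, the PAM wiring for a scheme like that is roughly one line. The path below is a made-up stand-in for the commenter's poll binary, and the "required" control flag is one reasonable choice (it makes a denied poll fail the session).

```
# /etc/pam.d/sshd -- hypothetical path for the poll binary
session required pam_exec.so /usr/local/bin/telegram-poll
```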
Is this basically https://www.ssllabs.com/ssltest/ but for SSH? If so, that is great. In the past, it has been great to point my sysadmin team at that site and say "My goal is get to an A+ on SSLLabs, the site will tell us what settings to tweak to get there" or to tell external stakeholders "SSLLabs gives us an A+, now stop telling me that LetsEncrypt is somehow not good enough". I'd love to have a simple tool I can point at my internal SSH servers and get a single letter grade like that.
Per FIPS 186-5 and SP 800-186, Ed25519 looks like it's going to be in the next revision of FIPS 140, which will be nice for those who have to stay aligned with FIPS. I'm hoping that X25519 will come in for KEX as well, but I'm not holding my breath.
But if the NSA had backdoored that and you were working for the NSA, that's exactly what you would say. What if the NSA wants to get people off elliptic curves because it's the other stuff that they can crack?
TOFU is secure if you verify the host key fingerprint against information received via a sufficiently secure channel.
That most do TOFU wrong¹ does not make TOFU itself insecure.
Most do do TOFU wrong, though. In DayJob I run an SFTP-to-Azure-storage relay⁴ for clients to send automated feeds from third-party systems via that method⁵, and it wasn't until one client reported getting the wrong fingerprint that we noticed we were outputting the UAT and Live system values back-to-front in the on-boarding details, and had been doing so for some time, so at least a few other clients⁶ had just blindly accepted a different fingerprint to the documented one…
--
[1] Anyone else remember way way way back² when browsers stopped trusting self-signed certificates, or at least started shouting loudly about them? There was many a heated discussion with some who kept repeating “well TOFU is good enough for SSH” and wouldn't accept that 1. TOFU isn't as secure as they think it is the way they are using it and 2. They were accepting the risk for only themselves³ with SSH & TOFU-done-a-bit-wrong, which might be OK to them, but with HTTPS+TOFU on public sites they were essentially making that risk choice for everyone, which is not OK.
[2] somewhere in the very early 2000s?
[3] and their users
[4] Until recently there was no built-in support for this in Azure, and the recently-left-preview implementation is ~£186 per month per storage account which works out very expensive for our current storage-account-per-client layout (so without a refactor that might breach our client data separation policies running a small VM⁷ with my BIY service built around Blob-FUSE works out far cheaper)
[5] We offer “more modern” APIs for such things too, but SFTP is still the most widely supported method (often the only supported method) by the other software/infrastructure used by our clients
[6] and these are investment banks, who you'd like to think were properly paranoid about a transport mechanism they'll be sending business information and customer PII through…
[7] actually, a couple of small VMs and a little infrastructure for HA
To be honest, this is a case of technology making humans work for it, instead of the other way around. By the very design of TOFU, most users will trust the fingerprint because it looks like a weird string of random stuff and just pressing "Y" gets the question out of the way.
If verifying the fingerprint is such a key aspect of TOFU, then the dialog should be an input instead of an output: "Enter the expected fingerprint here" in order to allow connecting. You'd have received it by some other means, and would type or paste it into the terminal to finish the verification process. Et voilà, safe TOFU completed!
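That flow can be sketched in a few lines of shell, assuming ssh-keygen is available. A throwaway key stands in for the host key a real server would offer, and the "expected" value stands in for the fingerprint the user received out-of-band (here the demo just computes it itself, since it plays both sides).

```shell
# Generate a throwaway key to play the part of a server's host key.
rm -f /tmp/tofu_demo /tmp/tofu_demo.pub
ssh-keygen -q -t ed25519 -f /tmp/tofu_demo -N ""

# The fingerprint the server operator published out-of-band
# (computed locally here, since the demo plays both sides):
expected="$(ssh-keygen -lf /tmp/tofu_demo.pub | awk '{print $2}')"

# What the client computes from the key it is actually offered:
offered="$(ssh-keygen -lf /tmp/tofu_demo.pub | awk '{print $2}')"

# Only proceed when the two match.
if [ "$expected" = "$offered" ]; then
    echo "fingerprint verified"
else
    echo "MISMATCH - do not connect" >&2
fi
```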
The article linked includes OOB verification as a scenario in TOFU. From the perspective of the ssh client it's TOFU (no CA chain for the client to perform a check), sure, that just means it's up to the user to do the work and use ssh safely (either the server has a site posting the fingerprints like GitHub/the AUR or you're setting up the machine and have physical access, or maybe you're using SSHFP).
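For the SSHFP route mentioned above, the client side is a single option. The host name is illustrative, and this only helps where the zone actually publishes SSHFP records and the resolver path validates DNSSEC:

```
# ~/.ssh/config -- trust host key fingerprints from (DNSSEC-validated) DNS
Host example.com
    VerifyHostKeyDNS yes
```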
>If no identifier exists yet for the endpoint, the client software will either prompt the user to confirm they have verified the purported identifier is authentic, or if manual verification is not assumed to be possible in the protocol, the client will simply trust the identifier which was given and record the trust relationship into its trust database.
> TOFU is secure if you verify the host key fingerprint against information received via a sufficiently secure channel.
Verifying the fingerprint out-of-band is the opposite of TOFU. TOFU is generally understood to mean not verifying it, but flagging if it changes later.
Verifying the fingerprint out-of-band is very much what is meant to happen, but it's like reading the EULA, everyone knows that you should do it, everyone knows that very very few people do.
[1]: https://www.ssh-audit.com/hardening_guides.html