SSH-audit: SSH server and client security auditing (github.com/jtesta)
302 points by thunderbong on Oct 15, 2023 | 51 comments



Following the Hardening Guide link [1] I see that most server configs consist of the same 3 steps:

* Re-generate (and enable) the RSA and ED25519 keys

* Remove small Diffie-Hellman moduli

* Restrict supported key exchange, cipher, and MAC algorithms

As someone ignorant of security matters, my obvious question is: if these are so important to configure, why don't servers come with them done by default? There must be thousands of people who don't know about this guide, don't necessarily know a lot about security hardening, and would benefit from more secure defaults.

I get "because compatibility" might be a reason, but that would only maybe apply to the last point, wouldn't it?

[1]: https://www.ssh-audit.com/hardening_guides.html
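Concretely, the three steps above usually boil down to a handful of commands plus a few sshd_config lines. A sketch, pointed at a scratch directory so it can run unprivileged (on a real server you'd target /etc/ssh and restart sshd; the exact algorithm lists vary by guide version and OpenSSH release):

```shell
set -eu
work=$(mktemp -d)

# 1. Re-generate the RSA (4096-bit) and ED25519 host keys.
#    (Guarded so the sketch still runs where ssh-keygen is absent.)
if command -v ssh-keygen >/dev/null; then
    ssh-keygen -t rsa -b 4096 -f "$work/ssh_host_rsa_key" -N "" -q
    ssh-keygen -t ed25519 -f "$work/ssh_host_ed25519_key" -N "" -q
fi

# 2. Remove small Diffie-Hellman moduli: keep only groups >= 3072 bits.
#    (Two fake moduli lines stand in for /etc/ssh/moduli; field 5 is the size.)
printf '%s\n' '20230101 2 6 2 2047 5 ab' '20230101 2 6 2 4095 5 cd' > "$work/moduli"
awk '$5 >= 3071' "$work/moduli" > "$work/moduli.safe"

# 3. Restrict key exchange, cipher, and MAC algorithms in sshd_config.
cat >> "$work/sshd_config" <<'EOF'
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group16-sha512
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com
EOF
```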


Many (most?) releases auto-generate the SSH server keys and set a default configuration during OS installation or SSH server enablement. Changing the server key protects you against the case where whoever made the distribution did it wrong: used keys duplicated across other hosts, intentionally compromised the key, used a poor PRNG, or used a less desirable algorithm, perhaps for compatibility purposes.

That is not to say that any of those things actually happened; it’s just that the most paranoid/defense-oriented view is not to trust what the OS did for you, but to do it yourself.

A distro provider is incentivized to find a local optimum between security and compatibility and has no idea about your environment.

You have much more context about your environment: you know what you need to connect to, what needs to connect to you, and what ciphers, key lengths, and other configuration you must support. Therefore, if you take all that into account, you might be able to optimize for security by disabling compatibility options you don't need.

BTW this answer is pretty much the same when it comes to hardening any security feature of any OS. Vendors/distributors do a good job of hardening but you might be able to lock things down just a little further.


> You have much more context about your environment: you know what you need to connect to, what needs to connect to you, and what ciphers, key lengths, and other configuration you must support. Therefore, if you take all that into account, you might be able to optimize for security by disabling compatibility options you don't need.

As long as that's what you actually do. If instead you want a checklist to blindly execute to "harden" your system, then the grandparent commenter is correct: either you can accept that your distribution's defaults are actually good enough, or you're removing functionality such that you'll hit a hurdle you've left for yourself sooner or later.


If your OS distributor screwed you that badly (and a short check should be sufficient to determine if they did if the distribution is popular!) you’re deeply screwed anyway. And there is nothing post-facto you can do from within that distribution.


For two years there was an undiscovered Debian-specific vulnerability in their OpenSSL package borking the generation of keys. But your point about being screwed from within applies to this case: regenerating the keys on an affected system would not have solved the problem until it was patched. For those curious, look up ssh-vulnkey.


Following the guide's steps wouldn't have helped in the Debian case.

I reckon on average a distro ssh maintainer is more aware than an average end user following a hardening guide, even with mistakes like that one.


>Changing the server key protects you against the case that whoever made the distribution did it wrong

I'm unsure of ssh_host_* and its importance; however, on some hosts I manage, the "ssh_host_rsa" key is only 2048 bits. Would regenerating "ssh_host_rsa" to be larger (4096 bits) not be an improvement?
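For reference, `ssh-keygen -l` prints a key's length as its first field, so you can check what you currently have. Sketched here on a freshly generated 2048-bit key so it runs anywhere; on a real host you'd point it at /etc/ssh/ssh_host_rsa_key.pub:

```shell
set -eu
work=$(mktemp -d)
# Stand-in for an existing host key; the real check is just:
#   ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
ssh-keygen -t rsa -b 2048 -f "$work/ssh_host_rsa_key" -N "" -q

# First field of the -l output is the key length in bits.
bits=$(ssh-keygen -lf "$work/ssh_host_rsa_key.pub" | awk '{print $1}')
echo "host key is $bits bits"
```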


These things (the ones you listed) aren't in fact that important to configure. What matters is that you disable password login. The rest of it is pretty performative. Small-modulus finite-field Diffie-Hellman isn't how your server is going to get compromised. The server is going to accept the first algorithm the client proposes that's also supported by the server, which isn't going to be FFDH at all.


Ironically enough, the one most sensible thing to do, disabling password login, is not included in the linked hardening guide.
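For anyone applying that advice, it's a two-line sshd_config change (assuming you've first confirmed key-based login works, or you'll lock yourself out):

```
PasswordAuthentication no
KbdInteractiveAuthentication no
```

On older OpenSSH releases the second directive is spelled ChallengeResponseAuthentication.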


The reasons this site recommends disabling certain algorithms are, to say the least, very speculative.

Just to give two examples: it recommends disabling Diffie-Hellman with 2048 bits. There isn't even the slightest evidence that any attack might be possible at that size. It's just based on some policy recommendations that convert DH size to key strength, and it ends up being below 128 bits.
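For context, the usual policy mapping (e.g. NIST SP 800-57) between finite-field DH modulus size and symmetric-equivalent strength is roughly:

```
2048-bit FFDH  ≈ 112-bit symmetric strength
3072-bit FFDH  ≈ 128-bit symmetric strength
```

So 2048-bit groups get flagged purely for landing under the 128-bit line, not because of any demonstrated attack.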

It recommends disabling HMAC-SHA1. Now, SHA-1 is not collision resistant, but HMAC is not affected by this weakness. Therefore there's no real risk.

Generally, OpenSSH has already improved algorithm security quite a lot over the years and everything that is remotely close to a real attack is disabled by default.


I think the logic (there's a link to an opinionated article they base their recommendations on) is the same as with TLS 1.3: why wait until something gets broken? It selects a few safe choices and disables everything else. You don't usually need to support more than two different algos, as you usually control both the server and the client.


Is there a negative effect of applying these recommendations?

On another note, this helped me solve an issue [1] where I could not disable some weak HMAC algorithms.

[1]: https://unix.stackexchange.com/q/758814/380381


Slower connections (as in less bandwidth and larger latencies), using more CPU. Not sure if that makes any difference unless you use it for a tunnel or open tons of connections for some reason.


For history's sake: Bitvise SSH server's ability to block by client id killed all bot login attempts for me... nobody recompiles libssh!

I think this will be good security-by-obscurity until OpenSSH offers the same feature.


Neat. Adding three lines to my sshd_config makes lines go green.

Unfortunately, Dropbear doesn't appear to support any Encrypt-then-MAC algorithms, so I can't get my swarm of OpenWRT routers all green.
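(For comparison, OpenSSH's Encrypt-then-MAC variants are the ones with the `-etm@openssh.com` suffix; `ssh -Q mac` lists what the local client supports, which may differ from any given server's set:)

```shell
# List the client's supported MAC algorithms and keep the EtM variants.
ssh -Q mac | grep -- '-etm@'
```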


You can replace Dropbear with OpenSSH on OpenWRT.


After realizing that I'd have to recompile dropbear myself to disable SHA-1, I also thought about that approach.

I'll figure it out. I'm currently planning the update to 23.05, anyway.


If you're using pre-compiled images, OpenWRT has a package manager and repo with OpenSSH in it. I used to do something like this[1] to install and update OpenSSH outside of internal flash, before deciding that exposing SSH on my router was a bad idea in general.

If you're compiling OpenWRT yourself, you can easily just switch them out in your base image, too.

[1] https://openwrt.org/docs/guide-user/additional-software/extr...


If your client puts sensible options first, those are the ones that will get used. So most likely you're using chachapoly and x25519 always anyway.


I added a Telegram poll in a private group with a yes/no option as a second factor for all my (ssh) servers (using session pam_exec.so /path/to/poll/binary). If my ssh key gets compromised, I can still deny ssh access. Unless my entire laptop gets stolen while running, of course.
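A sketch of how that wiring works, with the bot round trip stubbed out. The script path and helper below are hypothetical; pam_exec treats a non-zero exit as failure, so a PAM line like `session required pam_exec.so /usr/local/bin/ssh-poll.sh` denies the session whenever the script fails:

```shell
# Hypothetical pam_exec helper. The real version would post a yes/no poll
# via the Telegram bot API and block until it is answered; here the answer
# is read from a file so the control flow can be exercised offline.
poll_decision() {
    answer=$(cat "$1" 2>/dev/null || true)
    # The function's exit status is the PAM verdict: 0 = allow, 1 = deny.
    [ "$answer" = "yes" ]
}
```

The deny-by-default shape matters: any error (network down, poll unanswered, file missing) fails closed.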


This is very cool. Any idea about the threat surface change? Do you only deny with the poll?


Is this basically https://www.ssllabs.com/ssltest/ but for SSH? If so, that is great. In the past, it has been great to point my sysadmin team at that site and say "My goal is get to an A+ on SSLLabs, the site will tell us what settings to tweak to get there" or to tell external stakeholders "SSLLabs gives us an A+, now stop telling me that LetsEncrypt is somehow not good enough". I'd love to have a simple tool I can point at my internal SSH servers and get a single letter grade like that.


NixOS hardening with (slightly wonky) tests[1], in case anyone's interested.

[1] https://gitlab.com/engmark/root/-/merge_requests/395/diffs


> (kex) ecdh-sha2-nistp521 -- [fail] using elliptic curves that are suspected as being backdoored by the U.S. National Security Agency

That seems a bit paranoid


There's really no reason to use anything besides Ed25519 these days (openssh has supported it since 2014!). As a bonus, the keys are really short too.


Per FIPS 186-5 and SP 800-186, Ed25519 looks like it's going to be in the next revision of FIPS 140, which will be nice for those who have to stay aligned with FIPS. Hoping that X25519 will come in for KEX as well, but I'm not holding my breath.


Azure doesn't allow Ed25519. AWS only just enabled it in the last year or two.


Does Apple's secure enclave thing support ed25519 now? I think that was missing support a while ago, probably other hardware too.

Webcrypto had missing support in some browsers too (though that was rsa-only on firefox for a while, unsure if ed25519 was added)


But if the NSA had backdoored that and you were working for the NSA, that's exactly what you would say. What if the NSA wants to get people off elliptic curves because it's the other stuff that they can crack?


Ed25519 is a DJB thing and the suggestion to replace it.

If they got to DJB, there is nothing we can do except nuke ourselves pre-emptively!


I set up some Go cluster app with Ed25519-signed X.509 certs and it simply wouldn't connect with itself until I made RSA certs :shrug:


The "Security. Cryptography. Whatever." podcast just did an episode about the origin of the nistp curves.

https://securitycryptographywhatever.com/2023/10/12/the-nist...


No. And add the new kyber exchanges also.


It is that.


Backdoored or paranoid?


Paranoid (based on listening to his recent podcast.)


[flagged]


What's "often breakable since 2011"?


I think the GP has read but not comprehended the "best public cryptanalysis" part of that page.


It feels odd how these hardening guides focus on crypto and then pretend that TOFU is perfectly secure.


TOFU is secure if you verify the host key fingerprint against information received via a sufficiently secure channel.

That most do TOFU wrong¹ does not make TOFU itself insecure.

Most do do TOFU wrong though. In DayJob I run an SFTP-to-Azure-storage relay⁴ for clients to send automated feeds from third-party systems via that method⁵, and it wasn't until one client reported getting the wrong fingerprint that we noticed we were outputting the UAT and Live system values back-to-front in the on-boarding details, and had been doing so for some time, so at least a few other clients⁶ had just blindly accepted a different fingerprint to the documented one…

--

[1] Anyone else remember way way way back² when browsers stopped trusting self-signed certificates, or at least started shouting loudly about them? There was many a heated discussion with some who kept repeating “well TOFU is good enough for SSH” and wouldn't accept that 1. TOFU isn't as secure as they think it is the way they are using it and 2. They were accepting the risk for only themselves³ with SSH & TOFU-done-a-bit-wrong, which might be OK to them, but with HTTPS+TOFU on public sites they were essentially making that risk choice for everyone, which is not OK.

[2] somewhere in the very early 2000s?

[3] and their users

[4] Until recently there was no built-in support for this in Azure, and the recently-left-preview implementation is ~£186 per month per storage account which works out very expensive for our current storage-account-per-client layout (so without a refactor that might breach our client data separation policies running a small VM⁷ with my BIY service built around Blob-FUSE works out far cheaper)

[5] We offer “more modern” APIs for such things too, but SFTP is still the most widely supported method (often the only supported method) by the other software/infrastructure used by our clients

[6] and these are investment banks, who you'd like to think were properly paranoid about a transport mechanism they'll be sending business information and customer PII through…

[7] actually, a couple of small VMs and a little infrastructure for HA


To be honest, this is a case of technology making humans work for it, instead of the other way around. By the very design of TOFU, most users will trust the fingerprint because it looks like a weird string of random stuff and just pressing "Y" gets the question out of the way.

If verifying the fingerprint is such a key aspect of TOFU, then the dialog should be an input instead of an output. "Input here the expected fingerprint" in order to allow connecting. You'd have received it by some other means, and would type or copy-paste it into the terminal to finish the verification process. Et voila, safe TOFU completed!
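You can at least approximate that flow today by computing the fingerprint yourself and comparing it against the value received out of band. Sketched on a locally generated key so it runs offline; against a real server you'd pipe `ssh-keyscan host` into `ssh-keygen -lf -` instead:

```shell
set -eu
work=$(mktemp -d)
ssh-keygen -t ed25519 -f "$work/hostkey" -N "" -q

# The fingerprint you'd have received out of band (derived from the same
# key here, since this is an offline demo of the comparison, not the
# secure channel itself). Field 2 of the -l output is the fingerprint.
expected=$(ssh-keygen -lf "$work/hostkey.pub" | awk '{print $2}')

actual=$(ssh-keygen -lf "$work/hostkey.pub" | awk '{print $2}')
if [ "$actual" = "$expected" ]; then
    echo "fingerprint verified"
else
    echo "MISMATCH: refuse to connect" >&2
    exit 1
fi
```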


TOFU literally means don't verify: trust it the first time you see it. Only complain if it changes later.

https://en.m.wikipedia.org/wiki/Trust_on_first_use


The article linked includes OOB verification as a scenario in TOFU. From the perspective of the ssh client it's TOFU (no CA chain for the client to perform a check), sure, that just means it's up to the user to do the work and use ssh safely (either the server has a site posting the fingerprints like GitHub/the AUR or you're setting up the machine and have physical access, or maybe you're using SSHFP).

>If no identifier exists yet for the endpoint, the client software will either prompt the user to confirm they have verified the purported identifier is authentic, or if manual verification is not assumed to be possible in the protocol, the client will simply trust the identifier which was given and record the trust relationship into its trust database.


Which as noted, users won’t generally do (they’ll just whack ‘yes’) and forcing them to do it means it isn’t TOFU?


> TOFU is secure if you verify the host key fingerprint against information received via a sufficiently secure channel.

Verifying the fingerprint out-of-band is the opposite of TOFU. TOFU is generally understood to mean not verifying it, but flagging if it changes later.


Verifying the fingerprint out-of-band is very much what is meant to happen, but it's like reading the EULA, everyone knows that you should do it, everyone knows that very very few people do.


> TOFU is secure if you verify the host key fingerprint against information received via a sufficiently secure channel.

In other words it's secure if you don't do TOFU and do out-of-band verification instead.


Just because the user chooses not to bother with the verification step doesn't mean it's not part of the process.


A better acronym for that process would be VOFU.


Trust doesn't mean blindly trust, it means assert trust.


For those wondering, TOFU means “trust on first use” here



