> Using a non-standard SSH port is a bad example because nmap can see through that deception in a few seconds.
Compared to milliseconds. Do yourself the favor and open one sshd on port 22 and one on a port >10000, then compare the logs after a month. The one on 22 will have thousands of attempts; the other hardly tens, if any at all.
The 99% level we're defending against here is root:123456 or pi:raspberry on port 22. Which is dead easy to scan the whole IPv4 space for. 65K ports per host though? That's taking time and, given the obvious success rate of the former, is not worth it.
Therefore I'd say it's the perfect example: It's hardly any effort, for neither attacker nor defender, and yet works perfectly fine for nearly all cases you'll ever encounter.
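If you want to run that experiment, a second throwaway sshd on a high port is enough. A minimal sketch, assuming OpenSSH on a Debian-style box; the port and log paths are arbitrary examples:

  # Second instance on a high port, logging to its own file instead of syslog
  # (PidFile=none keeps it from clobbering the main daemon's pidfile):
  sudo /usr/sbin/sshd -p 23456 -E /var/log/sshd-high.log -o PidFile=none
  # A month later, compare failed attempts:
  sudo grep -c 'Failed' /var/log/auth.log       # the port-22 daemon
  sudo grep -c 'Failed' /var/log/sshd-high.log  # the high-port instance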
I know we've spoken in another thread, but I think it's important for people to understand that this sshd thing is a perfect example of why it isn't this easy: moving to a non-privileged port reduces log spam, but it also reduces overall security. A non-privileged user can bind to any port above 1023, but can't bind to 22. If sshd restarts for an upgrade, or the iptables rules remapping your high port to 22 get flushed, a non-privileged user who got access via an RCE on your web application can set up their own fake sshd on that port. If they bind it first and you ignore the host key mismatch error on the client side, they can listen in on whatever you send.
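You can see the boundary in question from any shell. The port number here is just an example, and the exact nc flags and error text vary by netcat flavor:

  # An unprivileged user can't bind a low port...
  $ nc -l 22
  nc: Permission denied
  # ...but can bind a high one - exactly what a fake sshd would exploit:
  $ nc -l 22022
  # The cutoff is a kernel setting:
  $ sysctl net.ipv4.ip_unprivileged_port_start
  net.ipv4.ip_unprivileged_port_start = 1024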
Or you can implement real security, like not allowing SSH access via the public internet at all, and not have to make this trade-off.
Here's a counter-example (as I said elsewhere in this thread):
Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
I'll also point out that we're generally talking about different threat vectors here, so it's good to lay them out. I don't think obscurity helps against a persistent threat probing your network, it helps against swarms.
> A non-privileged user can bind to any port above 1023, but can't bind to 22. If sshd restarts for an upgrade, or the iptables rules remapping your high port to 22 get flushed, a non-privileged user who got access via an RCE on your web application can set up their own fake sshd on that port. If they bind it first and you ignore the host key mismatch error on the client side, they can listen in on whatever you send.
This is getting closer to APT territory, but I'll bite. If someone has RCE on the box running your sshd, it honestly doesn't matter what port you're running on; they already have the server. You're completely right that the attack would work if you have separate Linux users for SSH and the web server. Unfortunately that's all too rare in the web servers I see (<10%), as most just add SSH, secure it, and call it a day (even worse when CI/CD scripts just copy files without chowning them). But let's assume it here. Even with that setup, this is a skilled persistent threat we're talking about (not quite an APT, but definitely a PT). They already own your website. Your compromised web/SSH server is being watched by a skilled attacker, and it's inevitable they'll escalate privileges. If they're smart enough to plant fake SSH daemons, they're smart enough to figure something else out. Is your server perfectly patched? Has anyone in your organization re-used passwords between your website and Gmail?
You're right that these events could happen. But you have to ask yourself which of your actions will have the bigger impact:
* Changing to a non-standard SSH port, blocking out ~50% of all automated hacking attempts, or port-knocking to get >90% (just a guess!). (A sketch of the port change follows this list.)
* Using the standard port while an APT who already owns your web server finds other exploits anyway.
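For reference, the port change itself is a one-line config edit; 22022 is an arbitrary example, and the semanage step only applies to SELinux distros:

  # /etc/ssh/sshd_config
  Port 22022

  # On SELinux systems, also allow sshd to bind the new port:
  semanage port -a -t ssh_port_t -p tcp 22022
  systemctl restart sshd   # unit is 'ssh' on Debian/Ubuntu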
>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
Yep! And I should be clear: I am not saying don't change the SSH port. I'm saying that if you care about security, at a minimum disallow public access to SSH and set up a VPN.
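The firewall side of that is two rules. A minimal iptables sketch; the VPN subnet 10.8.0.0/24 is an assumption:

  # Allow SSH only from the VPN subnet, drop it from everywhere else:
  iptables -A INPUT -p tcp --dport 22 -s 10.8.0.0/24 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP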
>Unfortunately that's all too rare in the web servers I see (<10%), as most just add SSH, secure it, and call it a day (even worse when CI/CD scripts just copy files without chowning them).
I'm a bit confused here. In every major distro I've worked on (RHEL/CentOS, Ubuntu, Debian, SUSE), the default httpd and nginx packages are configured to run the service under their own user. I haven't seen a system where httpd or nginx runs as root in over a decade.
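Easy to verify on any box; the user name varies by distro, and note that only the master process legitimately runs as root (it needs to bind 80/443):

  $ ps -o user,comm -C nginx
  USER     COMMAND
  root     nginx      <- master process only
  www-data nginx      <- workers handle all traffic as the service user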
I think the bare minimum for anyone running a business or holding customer/end-user data should be the following:
1) Only allow public access to the public-facing services. All other ports should be firewalled off, or not listening on the public interface at all.
2) Public-facing services should not be running as root (I'm terrified that you've not seen this to be the case in the majority of places!).
3) Access to the secure side should only be available via VPN.
4) SSH is only available via key access, not password (see the sshd_config sketch after this list).
5) 2FA is required.
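For point 4, the directives are standard OpenSSH; a minimal sketch:

  # /etc/ssh/sshd_config
  PermitRootLogin no
  PasswordAuthentication no
  KbdInteractiveAuthentication no   # ChallengeResponseAuthentication on older OpenSSH
  PubkeyAuthentication yes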
I think the following are also good practices to follow and are not inherently high complexity with the tooling we have available today:
1) SSH access from the VPN is only allowed to jumpboxes
2) These jumpboxes are recycled on a frequent basis from a known good image
3) There is auditing in place for all SSH access to these jumpboxes
4) SSH between production hosts (e.g. webserver 1 to appserver 1 or webserver 2) is disabled and will result in an alarm (a crude firewall-level sketch follows this list)
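A crude version of point 4 can live in each production host's firewall; anything hitting the LOG rule is your alarm trigger. The jumpbox IP and production subnet here are made up:

  # Allow SSH from the jumpbox; log and drop it from peer production hosts:
  iptables -A INPUT -p tcp --dport 22 -s 10.0.1.5 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -s 10.0.2.0/24 -j LOG --log-prefix "LATERAL-SSH: "
  iptables -A INPUT -p tcp --dport 22 -j DROP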
With the first set, you take care of the overwhelming majority of both swarms and persistent threats. The second set will take care of basically everyone except an APT. The first set you can roll out in an afternoon.
> With the first set, you take care of the overwhelming majority of situations.
Protecting sshd behind a VPN just moves your 0day risk from sshd to the VPN server.
Choosing between exposing sshd or a VPN server is just a bet on which of the two services is more at risk of a 0day.
If you need to defend against 0days then you need to do things like leveraging AppArmor/SELinux, complex port knocking, and/or restricting VPN/SSH access to whitelisted IP blocks only.
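For what it's worth, basic port knocking needs nothing beyond iptables' 'recent' match. A simplified sketch; the knock ports are arbitrary and this version doesn't punish out-of-order knocks:

  # Knock 7000 -> 8000 -> 9000, each step within 10s, then 22 opens for 30s:
  iptables -A INPUT -p tcp --dport 7000 -m recent --name K1 --set -j DROP
  iptables -A INPUT -p tcp --dport 8000 -m recent --name K1 --rcheck --seconds 10 \
           -m recent --name K2 --set -j DROP
  iptables -A INPUT -p tcp --dport 9000 -m recent --name K2 --rcheck --seconds 10 \
           -m recent --name K3 --set -j DROP
  iptables -A INPUT -p tcp --dport 22 -m recent --name K3 --rcheck --seconds 30 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP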
Except you don't assume that just because someone is on the VPN, you're secure.
If the VPN server has a 0day, they now have... only as much access as they had before when things were public facing. You still need there to be a simultaneous sshd 0day.
I'll take my chances on there being a 0day for WireGuard at the same time there's a 0day for sshd.
(I do also use SELinux, and think you should too, for reasons far beyond just SSH security.)
A remote code execution 0day in your VPN server doesn't give an attacker an unauthorized VPN connection, it gives them remote code execution inside the VPN server process, which gives the attacker whatever access rights the VPN server has on the host. At this point, connecting to sshd is irrelevant.
Worse, since WireGuard runs in kernel space, an RCE 0day in WireGuard would let an attacker execute hostile code inside the kernel.
One remote code exploit in a public-facing service is all it takes for an attacker to get a foothold.
I do not run my VPNs on the same systems that run my other services, so an RCE at most compromises the VPN concentrator and does not inherently give the attacker access to other systems. SSH access to production systems is only available through a jumphost that requires 2FA and ships an audit log of every login to a separate system. There are some other services accessible via the VPN, but those also require auth and 2FA.
If you are running them all on the same system, then yes, that is a risk.
For a non-expert individual who would like to replace commercial cloud storage with a self-hosted server such as a NAS, do all these steps apply equally?
I am limiting the services to simple storage.
It looks like maintaining a secure self-hosted cloud requires knowledge, effort, and continuous monitoring and vigilance.
Most of those are good practices for a substantial cloud of servers that is already expected to have sophisticated configuration management. They're easy to set up in that situation, and a good idea too, because large clouds of servers are an attractive target: they can be expected to hold lots of private data an attacker might want to steal, and lots of resources to exploit.
A single server run by an individual and serving minimal traffic has different requirements. It's a much less attractive target, and most of those measures are much harder to justify. Some things are still easy and always a good idea: run SSH with root login and password authentication disabled, run services on non-root accounts with the minimum required permissions, and don't let things listen on public interfaces that shouldn't be. But setting up VPNs, jumpboxes, 2FA, etc. is kind of pointless on that kind of setup.
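That last check is a one-liner, assuming iproute2's ss is available (older boxes have netstat instead):

  # Listening TCP/UDP sockets and their bind addresses; anything bound to
  # 0.0.0.0 or [::] is reachable from outside, absent a firewall:
  ss -tulnp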
>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
But how much of a threat is this? Who's going to drop an ssh 0day with a PoC for script kiddies to use? If it's a bad guy, he's going to sell it on the black market for $$$. If it's a good guy, he's going to responsibly disclose.
>You're right that these events could happen. But you have to ask yourself which of your actions will have the bigger impact:
>* Changing to a non-standard SSH port, blocking out ~50% of all automated hacking attempts, or port-knocking to get >90% (just a guess!).
But blocking 50% of the hacking attempts doesn't make you 50% more secure, or even 1% more secure. You're blocking the bottom 50% of the barrel in terms of effort, so having a reasonably secure password (i.e. not on a wordlist) or using public key authentication would already stop them.
It makes the logs less noisy, and with much less noisy logs it's easier to notice when something undesirable is happening. Also, from my experience, this 50% is more like 99%.
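Concretely, once the noise is gone, auditing successful logins is a one-liner (Debian-style log path; RHEL-likes use /var/log/secure):

  # Every successful SSH login, with user and source IP:
  grep 'Accepted' /var/log/auth.log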
> Unfortunately that's all too rare in the web servers I see (<10%), as most just add SSH, secure it, and call it a day (even worse when CI/CD scripts just copy files without chowning them).
If you made a list of things like this which annoy you, I would enjoy reading it.
> Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
And with all those compromised servers they could easily scan for sshd on all ports.
Well, there are basically two stances you can reasonably take:
1) SSH is secure enough just by using key-based auth to not worry about it.
2) SSH isn't secure enough just by using key-based auth, so we need to do more stuff.
If you believe #1, then you don't need to do anything else. If you believe #2, then you should be doing the things that provide the most effective security.
Personally, I believe #1 is probably correct, but when it comes to any system that contains data for users other than myself, or for anything related to a company, I should not make that bet and should instead follow #2 and implement proper security for that eventuality.
I'm willing to risk my own shit when it comes to #1, but not other people's.
The range in the figures is surprising. I leave everything on port 22, except at home, where due to NAT one system is on port 21.
On these systems, since 1 September:
lastb | grep 'Sep ' | wc -l
160,000 requests (academic IP range 1),
120,000 requests (academic IP range 2),
1,500 requests¹ (academic IP range 3),
1,700 requests² (academic IP range 3),
180,000 requests³ (academic IP range 3, just the next IP),
80,000 requests (home broadband),
14,000 requests (home broadband — port 21),
5,000 requests (different home broadband, IPv4),
0 requests (same home broadband, IPv6)
The difference between ¹, ², and ³ is odd: all three run webservers (² also runs a mailserver) and they have sequential IP addresses, yet the attempt counts differ by two orders of magnitude.
I don't bother with port knocking or non-standard ports: I want access from everywhere, I want to avoid additional configuration, and I don't really see the point when an SSH key is required (password access is disabled).
Good example, but doesn't help his point, which was:
> This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here
An attacker scanning the whole IPv4 space won't think "ah, there's no ssh on port 22, there's no ssh to attack". They will think "yep, they did at least the bare minimum to secure their server, let's move on to easier targets".
I have 0 in the last 14 days on port 2xxx. Probably depends a lot on your IP range (I'd assume AWS etc. is scanned more thoroughly) and whether you've happened to hit a port used by another service. But even in commercial ranges, I've seen hardly any hits on >10k.
But I have only anecdotal evidence as well, so my guess is as good as yours.
EDIT: Note that a non-standard port comes with other trade-offs, though, as pointed out here: https://news.ycombinator.com/item?id=24445678