
Having written an SSH server that is used in a few larger places, I find the prospect of enabling these features on a per-address basis by default in the future troubling. First, with IPv4 this will have the potential to increasingly penalize innocent bystanders as CGNs are deployed. Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network. With IPv6 on the other hand, it is trivially easy to get a new IP, so the protection method described here will be completely ineffective.

From my experiments with several honeypots over a longer period of time, most of these attacks are dumb dictionary attacks. Unless you are using default everything (user, port, password), these attacks don't represent a significant threat and more targeted attacks won't be caught by this. (Please use SSH keys.)

I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment". It got hacked within 20 minutes of going online, and these mechanisms wouldn't have saved it: the attacker got in on the second or third try.

If you want to read about unsolved problems around SSH that should be addressed, Tatu Ylonen (the inventor of SSH) wrote a paper about them in 2019: https://helda.helsinki.fi/server/api/core/bitstreams/471f0ff...




> With IPv6 on the other hand, it is trivially easy to get a new IP

OpenSSH already seems to take that into account by allowing you to penalize not just a single IP, but also an entire subnet. Enable that to penalize an entire /64 for IPv6, and you're in pretty much the same scenario as "single IPv4 address".
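
For reference, here's roughly what that looks like in sshd_config. A minimal sketch based on the option names in OpenSSH 9.8's sshd_config(5); double-check the exact penalty keywords and defaults on your version:

  # penalize failed auth and crashes, group IPv4 per address and IPv6 per /64
  PerSourcePenalties authfail:5 crash:90 max:600
  PerSourceNetBlockSize 32:64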

I think there's some limited value in it. It could be a neat alternative to allowlisting your own IP which doesn't completely block you from accessing it from other locations. Block larger subnets at once if you don't care about access from residential connections, and it would act as a very basic filter to make annoying attacks stop. Not providing any real security, but at least you're not spending any CPU cycles on them.

On the other hand, I can definitely see CGNAT resulting in accidental or intentional lockouts for the real owner. Enabling it by default on all installations probably isn't the best choice.


IPv6 has the potential to be even worse. You could be knocking an entire provider offline. At any rate, this behavior should not become default.


FYI it's pretty common to get a /48 or a /56 from a data center, or /60 from Comcast.


I can never remember whether /x means "the last x bits are 0" or "the first x bits are 1"

People should write 80/48 or 48/80 to be clear


It's not about how many bits are 1 - it's about how many bits are important. And the first bits are always most important. So it's the first x bits.

If you have a /48 then 48 bits are used to determine the address is yours. Any address which matches in the first 48 bits is yours. If you have a /64, any address which matches in the first 64 bits is yours.


It's about how many bits are 1, in the subnet mask.


The number of bits that are important is the number of 1 bits in the "which bits are important" mask, yes. I thought you couldn't remember how that mask worked.


/48 is netmask of ffff:ffff:ffff:0:0:0:0:0. `sipcalc` can help with this.

  $ sipcalc ::/48
  -[ipv6 : ::/48] - 0
  
  [IPV6 INFO]
  Expanded Address - 0000:0000:0000:0000:0000:0000:0000:0000
  Compressed address - ::
  Subnet prefix (masked) - 0:0:0:0:0:0:0:0/48
  Address ID (masked) - 0:0:0:0:0:0:0:0/48
  Prefix address  - ffff:ffff:ffff:0:0:0:0:0
  Prefix length  - 48
  Address type  - Reserved
  Comment   - Unspecified
  Network range  - 0000:0000:0000:0000:0000:0000:0000:0000 -
       0000:0000:0000:ffff:ffff:ffff:ffff:ffff

I remember how this works because of the IPv4 examples that I have baked into my head, e.g. 10.0.0.0/8 or 192.168.1.0/24. Clearly the first 24 bits must be 1 for that last one to make any sense.

I recently found a case where an "inverted" netmask makes sense - when you want to allow access through a firewall to a given IPv6 host (with auto-config address) regardless of the network that your provider has assigned.


> I can never remember whether /x means "the last x bits are 0" or "the first x bits are 1"

> People should write 80/48 or 48/80 to be clear

The clarity is already implied in your preferred example:

- "80/" would mean "80 bits before"

- "/48" would mean "48 bits after"


... and this is the opposite of the other 2 responses


/x is almost always the number of network bits (so the first x bits). There are some Cisco IOS commands that are the opposite but those are by far the minority.

99/100 it means the first bits.


Maybe the only equivalent is to penalize a /32, since there are roughly as many of those as there are ipv4 addresses.


That may be true mathematically, but there are no guarantees that a small provider won't end up having only a single /64, which would likely be the default unit of range-based blocking. Yes, it "shouldn't" happen.


You cannot reasonably build an ISP network with a single /64. RIPE assigns /32s to LIRs and LIRs are supposed to assign /48s downstream (which is somewhat wasteful for most kinds of mass-market customers, so you get things like /56s and /60s).


As I said, "should". In some places there will be enough people in the chain that won't be bothered to go to the LIR directly. Think small rural ISPs in small countries.


What if it uses NAT v6 :D


i cannot tell if facetious or business genius.


Well seriously, I remember AT&T cellular giving me an ipv6 behind a cgnat (and also an ipv4). Don't quote me on that though.


That’s what Azure does. They also only allow a maximum of 16(!) IPv6 addresses per Host because of that.


Right. It's analogous to how blocking an ipv4 is unfair to smaller providers using cgnat. But if someone wants to connect to your server, you might want them to have skin in the game.


The provider doesn't care, the owner of the server who needs to log in from their home internet at 2AM in an emergency cares. Bad actors have access to botnets, the server admin doesn't.


Unfortunately the only answer is "pay to play." If you're a server admin needing emergency access, you or your employer should pay for an ISP that isn't using cgnat (and has reliable connectivity). Same as how you probably have a real phone sim instead of a cheap voip number that's banned in tons of places.

Or better yet, a corp VPN with good security practices so you don't need this fail2ban-type setup. It's also weird to connect from home using password-based SSH in the first place.


> you or your employer should pay for an ISP that isn't using cgnat

That may not be an option at all, especially with working from home or while traveling.

For example at my home all ISPs i have available use cgnat.


> That may not be an option at all, especially with working from home or while traveling.

Your work doesn't provide a VPN?

> For example at my home all ISPs i have available use cgnat.

Doubtful - you probably just need to pay for a business line. Sometimes you can also just ask nicely for a non-NATed IP but I imagine this will get rarer as IP prices increase.


The better answer is to just ignore dull password guessing attempts which will never get in because you're using strong passwords or public key authentication (right?).

Sometimes it's not a matter of price. If you're traveling your only option for a network connection could be whatever dreck the hotel deigns to provide.


Even with strong passwords, maybe you don't want someone attempting to authenticate so quickly. Could be DoS or trying to exploit sshd. If you're traveling, cellular and VPN are both options. VPN could have a similar auth dilemma, but there's defense in depth.

Also it's unlikely that your hotel's IP address is spamming the particular SSH server you need to connect to.


> Even with strong passwords, maybe you don't want someone attempting to authenticate so quickly. Could be DoS or trying to exploit sshd.

DoS in this context is generally pretty boring. Your CPU would end up at 100% and the service would be slower to respond, but it still would. Also, responding to a DoS attempt by blocking access is itself a DoS vector for anyone who can share or spoof your IP address, so that seems like a bad idea.

If someone is trying to exploit sshd, they'll typically do it on the first attempt and this does nothing.

> Also it's unlikely that your hotel's IP address is spamming the particular SSH server you need to connect to.

It is when the hotel is using the cheapest available ISP with CGNAT.


Good point on the DoS. Exploit on first attempt, maybe, I wouldn't count on that. Can't say how likely a timing exploit is.

If the hotel is using such a dirty shared IP that it's also being used to spam random SSH servers, that connection is probably impractical for several other reasons, e.g. flagged on Cloudflare. At that point I'd go straight to a VPN or hotspot.


Novel timing attacks like that are pretty unlikely; it basically takes someone with a 0-day, because otherwise they quickly get patched. If the adversary is someone with access to 0-day vulnerabilities, you're pretty screwed in general and it isn't worth a lot of inconvenience to try to prevent something inevitable.

And there is no guarantee you can use another network connection. Hotspots only work if there's coverage.

Plus, "just use a hotspot or a VPN" assumes you were expecting the problem. This change is going to catch a lot of people out because the first time they realize it exists is during the emergency when they try to remote in.


I already expect unreliable internet, especially while traveling. I'm not going to have to explain why I missed a page while oncall.


Well, allocating anything smaller than a /64 to a customer breaks SLAAC, so even a really small provider wouldn't do that, as it would completely bork their customers' networks. Yes, DHCPv6 technically exists as an alternative to SLAAC, but some operating systems (most notably Android) don't support it at all.


There are plenty of ISPs that assign /64s and even smaller subnets to their customers. There are even ISPs that assign a single /128, IPv4 style.


We should not bend over backwards for people not following the standard.

Build tools that follow the standard/best practices by default, maybe build in an exception list/mechanism.

IPv6 space is plentiful and easy to obtain, people who are allocating it incorrectly should feel the pain of that decision.


I can't imagine why any ISP would do such absurd things when in my experience you're given sufficient resources on your first allocation. My small ISP received a /36 of IPv6 space, I couldn't imagine giving less than a /64 to a customer.


My ISP has a /28 block, so if they chose to penalize my /32 for some reason, that would include 1/16th of the customers of my ISP. Just guessing based on population and situation, that might include on the order of 50000 people.



> With IPv6 on the other hand, it is trivially easy to get a new IP, so the protection method described here will be completely ineffective.

I’m sure this will be fixed by just telling everyone to disable IPv6, par for the course.


The alternative to ipv6 is ipv4 over cgnat, which arguably has the same problem.


Serious question: why doesn't OpenSSH declare, with about a year's notice ahead of time, the intent to cut a new major release that drops support for password-based authentication?


There are very legit reasons to use passwords, for example in conjunction with a second factor. Authentication methods can also be chained.


Password authentication is still entirely necessary. I don't want to have to set up keys just to ssh into a VM I just set up, as one very minor example.


By the time it gets into distros' package managers, is it not often that long (or more) anyway?


> I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment". It got hacked within 20 minutes of going online, and these mechanisms wouldn't have saved it: the attacker got in on the second or third try.

Is it possible to create some kind of reverse proxy for SSH which blocks password-based authentication, and furthermore only allows authentication by a known list of public keys?

The idea would be SSH to the reverse proxy, if you authenticate with an authorised public key (or certificate or whatever) it forwards your connection to the backend SSH server; all attempts to authenticate with a password are automatically rejected and never reach the backend.

In some ways what I'm describing here is a "bastion" or "jumphost", but in implementations of that idea I've seen, you SSH to the bastion/jumphost, get a shell, and then SSH again to the backend SSH server – whereas I am talking about a proxy which automatically connects to the backend SSH server using the same credentials once you have authenticated to it.

Furthermore, using a generic Linux box as a bastion/jumphost, you run the same risk that someone might create a weak password account. You can disable password authentication in the sshd config, but what if someone turns it on? With this "intercepting proxy" idea, the proxy wouldn't even have any code to support password authentication, so you couldn't ever turn it on.


Passwords are not the issue you think they are. Someone compromising a strong password protected by something like fail2ban isn't more likely than someone finding a 0day that can exploit an sshd set up to only accept keys.


> what if someone turns [password authentication back] on

sshd_config requires root to modify, so you've got bigger problems than weak passwords at this point.


It is a lot more likely for some random admin to inappropriately change a single boolean config setting as root, than for them to replace an entire software package which (by design) doesn't have code for a certain feature with one that does.


Check out the ProxyJump and ProxyCommand options in ssh config. They let you skip the intermediate shell.
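
For example, a ~/.ssh/config entry along these lines (host names and users are placeholders) lets `ssh backend` hop through the bastion transparently:

  Host backend
      HostName 10.0.0.5
      User admin
      ProxyJump jumpuser@bastion.example.com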


Wait, how often do you connect to a ssh remote that isn't controlled by you or say, your workplace? Genuinely asking, I have not seen a use case for something like that in recent years so I'm curious!


GitHub is an example of a service that would want to disable this option. They get lots of legit ssh connections from all over the world including people who may be behind large NATs.


I somehow didn't think about that, even if I used that feature just a few hours ago! Now I'm curious about how GitHub handles the ssh infra at that scale...


GitHub, as I've read[1], uses a different implementation of SSH which is tailored for their use case.

The benefit is that it is probably much lighter weight than OpenSSH (which supports a lot of different things just because it is so general[3]) and can more easily integrate with their services, while also not having to spin up a shell and deal with the potential security risks that entails.

And even if somehow a major flaw is found in OpenSSH, GitHub (at least their public servers) wouldn't be affected in this case since there's no shell to escape to.

[1]: I read it on HN somewhere that I don't remember now, however you can kinda confirm this yourself if you open up a raw TCP connection to github.com, where the connection string says

SSH-2.0-babeld-9102804c

According to an HN user[2], they were using libssh in 2015.

[2]: https://news.ycombinator.com/item?id=39978089

[3]: This isn't a value judgement on OpenSSH, I think it is downright amazing. However, GitHub has a much more narrow and specific use case, especially for an intentionally public SSH server.


Even the number of SSH authorized_keys they would need to process is a little mind-boggling, they probably have some super custom stuff.


Perhaps at a university where all students in the same class need to SSH to the same place, possibly from the same set of lab machines. A poorly configured sshd could allow some students to DoS other students.

This might be similar to the workplace scenario that you have in mind, but some students are more bold in trying dodgy things with their class accounts, because they know they probably won't get in big trouble at a university.


One of my clients has a setup for their clients - some of which connect from arbitrary locations, and others of which need to run scripted automated uploads - to connect via sftp to upload files.

Nobody is ever getting in, because they require ed25519 keys, but it is pounded nonstop all day long with brute force attempts. It wastes log space and IDS resources.

This is a case that could benefit from something like the new OpenSSH feature (which seems less hinky than fail2ban).

Another common case would be university students, so long as it's not applied to campus and local ISP IPs.


I sometimes use this: https://pico.sh/


Git over SSH


> First, with IPv4 this will have the potential to increasingly penalize innocent bystanders... Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network.

So instead of looking, like the author of these new options, for ways to make life for the bad guys harder we do nothing?

Your concerns are addressed in TFA:

> ... and to shield specific clients from penalty

> A PerSourcePenaltyExemptList option allows certain address ranges to be exempt from all penalties.

It's easy for the original owner to add the IP blocks of the three or four ISPs he'd legitimately be connecting from to that exemption list.
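
Something along these lines in sshd_config, with your own ranges substituted (a sketch; check sshd_config(5) for the exact list syntax):

  # never penalize the ranges I legitimately connect from
  PerSourcePenaltyExemptList 203.0.113.0/24,2001:db8:1234::/48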

I don't buy your argument nor all the variations on the same theme: "There's a minuscule risk of X, so we do absolutely nothing but say there's nothing to do, and we let the bad guys roam free!".

There's nothing more depressing than that approach.

Kudos to the author of that new functionality: there may be issues, it may not be the panacea, but at least he's trying.


> So instead of looking, like the author of these new options, for ways to make life for the bad guys harder we do nothing?

Random brute force attempts against SSH are already a 100% solved problem, so doing nothing beyond maintaining the status quo seems pretty reasonable IMO.

> I don't buy your argument nor all the variations on the same theme: "There's a minuscule risk of X, so we do absolutely nothing but say there's nothing to do, and we let the bad guys roam free!".

Setting this up by default (as is being proposed) would definitely break a lot of existing use cases. The only risk that is minuscule here is the risk from not making this change.

I don't see any particular reason to applaud making software worse just because someone is "trying".


> So instead of looking, like the author of these new options, for ways to make life for the bad guys harder we do nothing?

The thing is, we have tools to implement this without changing sshd's behavior. `fail2ban` et al. exist for a reason.
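
For anyone who hasn't used it, the whole setup is a few lines of jail.local (a sketch; exact defaults vary by distro and fail2ban version):

  # /etc/fail2ban/jail.local
  [sshd]
  enabled  = true
  maxretry = 5
  findtime = 10m
  bantime  = 1h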


Sure, but if I only used fail2ban for sshd, why should I install two separate pieces of software to handle the problem, which the actual software I want to run has built in?


Turning every piece of software into a kitchen sink increases its security exposure in other ways.


Normally I would agree with you, but fail2ban is a Python routine which forks processes based on outcomes from log parsing via regex. There are so many ways that can go wrong… and it has gone wrong, in one or two experiences I’ve had in the past.

This is exactly the sort of thing that should be part of the server. In exactly the same way that some protocol clients have waits between retries to avoid artificial rate limiting from the server.


> There’s so many ways that can go wrong

There are a lot of ways a builtin facility of one service can go wrong, especially if it ends up being active by default on a distro.

`fail2ban` is common, well known, battle-tested. And it's also [not without alternatives][1].

[1]: https://alternativeto.net/software/fail2ban/


As I’ve already posted, I’ve run into bugs with fail2ban too.

Also adding firewalling to SSH is hardly “kitchen sinking” (as another commenter described it). You’re literally just adding another layer of security into something that’s literally meant to be used as an out of the box solution for creating secure connections.

If you want to take issue with the “kitchen sink” mentality of SSH then complain about its file transfer features or SOCKS support. They are arguably better examples of feature creep than literally just having the server own what connections it should allow.


> Also adding firewalling to SSH is hardly “kitchen sinking”

sshd is a service. It may be one among dozens of other services running on a host.

Now imagine for a moment, if EVERY service on the host took that approach. Every backend service, every network-facing daemon, every database, every webserver, voip servers, networked logging engines, userspace network file systems, fileservers...they all now take security into their own hands.

Every single one of them has its own fail2ban-ish mechanism, blocklists it manages, rules for what to block and how long, what triggers a block, if and when a block will be lifted...

Oh, and of course, there is still also a firewall and other centralized systems in place, on top of all that.

How fun would such a system be to administer do you think? As someone with sysadmin experience, I can confidently say that I would rather join an arctic expedition than take care of that mess.

There is a REASON why we have things like WAFs and IDS, instead of building outward-facing-security directly into every single webservice.


If you’ve been a sysadmin as long as I have then you’ll remember when services didn’t even manage their own listener and instead relied on a system-wide daemon that launched and managed those services (inetd). Whereas now you have to manage each listener individually.

That was additional initial effort but the change made sense and we sysadmins coped fine.

Likewise, there was a time when server side website code had to be invoked via a httpd plugin or CGI, now every programming language will have several different web frameworks, each with their own HTTP listener and each needing to be configured in its own unique way.

Like with inetd, the change made sense and we managed just fine.

Tech evolves — it’s your job as a sysadmin to deal with it.

Plus, if you’re operating at an enterprise level where you need a holistic view of traffic and firewalling across different distinct services then you’d disable this. It’s not a requirement to have it enabled. A point you keep ignoring.


> Likewise, there was a time when server side website code had to be invoked via a httpd plugin or CGI, now every programming language will have several different web frameworks, each with their own HTTP listener and each needing to be configured in its own unique way.

And still we keep all those webservices, be they in Java, Go, node, C# or Python, behind dedicated webservers like nginx or apache.

Why? Because we trust them, and they provide a Single-Point-Of-Entry.

> Tech evolves — it’s your job as a sysadmin to deal with it.

Single-Point-Of-Entry is still prefered over having to deal with a bag of cats of different services each having their own ideas about how security should be managed. And when a single point of entry exists, it makes sense to focus security there as well.

This has nothing to do with evolving tech, this is simple architectural logic.

And the first of these points that every server has, is the kernels packet filter. Which is exactly what tools like fail2ban manage.

> A point you keep ignoring.

Not really. Of course an admin should deactivate svc-individual security in such a scenario, and I never stated otherwise.

The point is: That's one more thing that can go wrong.


> And still we keep all those webservices, be they in Java, Go, node, C# or Python, behind dedicated webservers like nginx or apache.

Not really no. They might sit behind a load balancer but that's to support a different feature entirely. Some services might still be invoked via nginx or apache (though the latter has fallen out of fashion in recent years) if nginx has a better threading model. But even there, that's the exception rather than the norm. Quite often those services will be stand alone and any reverse proxying is just to support orchestration (eg K8s) or load balancing.

> Single-Point-Of-Entry is still prefered over having to deal with a bag of cats of different services each having their own ideas about how security should be managed.

Actually no. What you're describing is the castle-and-Moat architecture and that's the old way of managing internal services. These days it's all about zero-trust.

https://www.cloudflare.com/en-gb/learning/security/glossary/...

But again, we're talking enterprise level hardening there and I suspect this openssh change is more aimed at hobbyists running things like Linux VPS

> > A point you keep ignoring.

> Not really. Of course an admin should deactivate svc-individual security in such a scenario, and I never stated otherwise. The point is: That's one more thing that can go wrong.

The fact that you keep saying that _is_ missing the point. This is one more thing that can harden the default security of openssh.

In security, it's not about all or nothing. It's a percentages game. You choose a security posture based on the level of risk you're willing to accept. For enterprise, that will be using an IDP to manage auth (including but not specific to SSH). A good IDP can be configured to reject requests from blacklisted IPs, e.g. IPs from countries where employees are known not to work, and even to only accept logins from managed devices like corporate laptops. But someone running a VPS for their own Minecraft server, or something less wholesome like BitTorrent, isn't usually the type to invest in a plethora of security tools. They might not even have heard of fail2ban, denyhosts, and so on. So having openssh support auto-blacklisting on those servers is a good thing. Not just for the VPS owners but for us too, because it reduces the number of spam and bot servers.

If your only concern is that professional / enterprise users might forget to disable it, as seems to be your argument here, then it's an extremely weak argument to make given you get paid to know this stuff and hobbyists don't.


still better trying to improve fail2ban than to add a (yet another) kitchen sink on sshd


fail2ban has been around for so long, people get impatient at some point


Impatient about what exactly? fail2ban is battle tested for well over a decade. It is also an active project with regular updates: https://github.com/fail2ban/fail2ban/commits/master/


What hnlmorg said a few comments up


a system where sshd outputs to a log file, then someone else picks it up and pokes at iptables, seems much more hacky than having sshd support that natively, imo. sshd is already tracking connection status; having it set the status to deny seems like less of a kitchen sink and more just about security. The "s" in ssh is for secure, and this is just improving that.


fail2ban has a lot of moving parts, I don't think that's necessarily more secure.

I would trust the OpenSSH developers to do a better job with the much simpler requirements associated with handling it within their own software.


> why should I install two separate pieces of software to handle the problem

https://alanj.medium.com/do-one-thing-and-do-it-well-a-unix-...


generally i agree with this principle, but fail2ban is kind of a hacky pos.


> but fail2ban is kind of a hacky pos.

It's battle-tested for well over a decade and has accumulated 10.8k stars and 1.2k forks on github, so it seems to do something right, no?

Not to mention that even if it were otherwise, that's not a reason to ignore UNIX philosophies that have served the FOSS world well for over half a century at this point.

Last but not least, there are any number of alternative solutions.


Just because it's 'battle tested' and has stars and is useful does not preclude it from being a hacky pos. Reading logs using regexps and then twiddling iptables is not the cleanest method of achieving this result. I would much prefer if this functionality were either handled in sshd itself or if there was some kind of standardized messaging (dbus?) that was more purposeful and didn't rely on regex.

It's useful because you can hook it up to anything that produces logs; it's hacky because that means you are using regexps. If the log format changes, you're likely fucked, not to mention that regexps are notoriously hard to make 'air tight' and often screwed up by newbies. Add to that that if your regexes start missing, fail2ban will stop doing its job silently.. not great my friend.

It's been a useful hack for a very long time, but I'd like to see us move on from it.


The issue is that the log parsing things like fail2ban work asynchronously. It is probably of only theoretical importance, but on the other hand the meaningful threat actors are usually surprisingly fast.


Yeah, they exist because nothing better was available at that time.

It doesn’t hurt to have this functionality in openssh too. If you still need to use fail2ban, denyhosts, or whatever, then don’t enable the openssh feature. It’s really that simple.


How is baking this into sshd "better"?

UNIX Philosophy: "Do one thing, and do it well". An encrypted remote shell protocol server should not be responsible for fending off attackers. That's the job of IDS and IPS daemons.

Password-based ssh is an anachronism anyway. For an internet-facing server, people should REALLY use ssh keys instead (and preferably use a non-standard port, and maybe even port knocking).
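
The sshd_config side of that is just a handful of directives (a sketch; the port number is arbitrary):

  Port 2222
  PasswordAuthentication no
  KbdInteractiveAuthentication no
  PubkeyAuthentication yes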


It’s better if you want an out of the box secure experience. This might be quite a nice default for some VPSs.

If you have an IDS and IPS set up then you’re already enterprise enough that you want your logs shipped and managed by a single pane of glass. This new SSH feature isn’t intended to solve enterprise-level problems.

Plus if you want to argue about “unix philosophy” with regards to SSH then why aren’t you kicking off about SOCKS, file transfer, port forwarding, and the countless other features SSH has that aren’t related to the “shell” part of SSH? The change you’re moaning about has more relevance than most of the other extended features people love SSH for.


> This new SSH feature isn’t intended to solve enterprise-level problems.

But service level security features have the potential to cause enterprise-level problems.

Sure, in an ideal world, all admins would always make zero mistakes. And so would the admins of all of our clients, and their interns, and their automated deployment scripts. Also in that perfect world, service level security features would never be on by default, have the same default configuration across all distros, and be easy to configure.

But, alas, we don't live in a perfect world. And so I have seen more than one service-level security feature, implemented with the best of intentions, causing a production system to grind to a halt.


> But service level security features have the potential to cause enterprise-level problems.

Only if you don’t know what you’re doing. Which you should given you’re paid to work on enterprise systems.

Whereas not having this causes problems for users who are not paid to learn this technology.

So it seems completely reasonable to tailor some features to less experienced owners given the wide spectrum of users that run openssh.


It would be frustrating to be denied access to your own servers because you are traveling and are on a bad IP for some reason.

Picture the number of captchas you're already getting from a legitimate Chrome instance, but instead of bypassable annoying captchas, you are just locked out.


I have fail2ban configured on one of my servers for port 22 (a hidden port does not have any such protections on it) and I regularly lock out my remote address because I fat finger the password. I would not suggest doing this for a management interface unless you have secondary access
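
If you run fail2ban on a box you need to reach remotely, whitelisting your own ranges avoids most of that (a sketch; the addresses are placeholders):

  # /etc/fail2ban/jail.local
  [DEFAULT]
  ignoreip = 127.0.0.1/8 ::1 198.51.100.7 203.0.113.0/24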


Why would you use password based auth instead of priv/pub key auth? You'd avoid this and many other security risks.


what do you do if you get mugged and your laptop and phone and keys are taken or stolen from you? or lost?

After this party, this guy needed help: he had lost his wallet and his phone. His sister had also gone to the party and given him a ride there, but had left. He didn't know her number to call her, and she'd locked down her socials so we couldn't use my phone to contact her. We were lucky that his socials weren't super locked down and managed to find someone that way, but priv keys are only good so long as you have them.


> what do you do if you get mugged and your laptop and phone and keys are taken or stolen from you? or lost?

My ssh keys are encrypted. They need a password, or they are worthless.

Sure, I can mistype that password as well, but doing so has no effect on the remote system, as the ssh client already fails locally.


You can and you should back up your keys. There isn't a 100% safe, secure and easy method that shields you from everything that can possibly happen, but there are enough safe, secure and easy ones to cover vast majority of cases other than a sheer catastrophe, which is good enough not to use outdated and security prone mechanisms like passwords on network exposed service.


I use a yubikey. You need a password to use the key. It has its own brute force management that is far less punishing than a remote SSH server deciding to not talk to me anymore.


but what do you do if you don't have the key? unless it's implanted (which, https://dangerousthings.com/), I don't know that I won't lose it somehow.


My keyboard has a built in USB hub and ports. The key lives there. The keyboard travels with me. It's hard to lose.

I have a backup key in storage. I have escrow mechanisms. These would be inconvenient, but, it's been 40 years since I've lost any keys or my wallet, so I feel pretty good about my odds.

Which is what the game here is. The odds. Famously humans do poorly when it comes to this.


If I present the incorrect key fail2ban locks me out as well. Two incorrect auth attempts locks out a device for 72 hours. The idea is for regular services which depend on ssh (on port 22) to work regularly (because of key auth) but to block anyone attempting to brute force or otherwise maliciously scan the system.

Doesn’t change the advice, if this is your only management interface, don’t enable it :)

Also you know you can have MFA even with pw authentication right? :)


What's the alternative? If you get onto a bad IP today, you're essentially blocked from the entire Internet. Combined with geolocks and national firewalls, we're already well past the point where you need a home VPN if you want reliable connectivity while traveling abroad.


What happens when your home VPN is inaccessible from your crappy network connection? There are plenty of badly administered networks that block arbitrary VPN/UDP traffic but not ssh. Common case is the admin starts with default deny and creates exceptions for HTTP and whatever they use themselves, which includes ssh but not necessarily whatever VPN you use.


Same as when a crappy network blocks SSH, you get better internet. Or if SSH is allowed, use a VPN over TCP port 22.


Better internet isn't always available. A VPN on the ssh port isn't going to do you much good if someone sharing your IP address is doing brute force attempts against the ssh port on every IP address and your system uses that as a signal to block the IP address.

Unless you're only blocking connection attempts to ssh and not the VPN, but what good is that? There is no reason to expect the VPN to be any more secure than OpenSSH.


If you're using an IP address that's being used to brute force the entire Internet, it's likely that lots of websites are blocking it. If that doesn't matter to you and all you need is to get into a particular SSH server, and also the network blocks VPNs, you're still fine if the SSH is on port 9022 and VPN is port 22. If it's not your own SSH server and it's port 22, then you're still fine if your own VPN is port 22 (on a different host).

Hacking into the VPN doesn't get the attacker into the SSH server too, so there's defense in depth, if your concern is that sshd might have a vulnerability that can be exploited with repeated attempts. If your concern is that your keys might be stolen, this feature doesn't make sense to begin with.


> If you're using an IP address that's being used to brute force the entire Internet, it's likely that lots of websites are blocking it.

Websites usually don't care about ssh brute force attempts because they don't listen on ssh. But the issue isn't websites anyway. The problem is that your server is blocking you, regardless of what websites are doing.

> If that doesn't matter to you and all you need is to get into a particular SSH server, and also the network blocks VPNs, you're still fine if the SSH is on port 9022 and VPN is port 22. If it's not your own SSH server and it's port 22, then you're still fine if your own VPN is port 22 (on a different host).

Then you have a VPN exposed to the internet in addition to SSH, and if you're not rate limiting connections to that then you should be just as concerned that the VPN "might have a vulnerability that can be exploited with repeated attempts." Whereas if the SSH server is only accessible via the VPN then having the SSH server rate limiting anything is only going to give you the opportunity to lock yourself out through fat fingering or a misconfigured script, since nobody else can access it.

Also notably, the most sensible way to run a VPN over TCP port 22 is generally to use the VPN which is built into OpenSSH. But now this change would have you getting locked out of the VPN too.


The situation is the SSH server is exposed everywhere, and you also have an unrelated VPN, maybe even via a paid service you don't manage. The VPN just provides you with an alternative IP address and privacy when traveling. It matters a lot more if someone hacks the SSH server.


It would also be very rare. The penalties described here start at 30s, I don't know the max, but presumably whatever is issuing the bad behavior from that IP range will give up at some point when the sshd stops responding rather than continuing to brute force at 1 attempt per some amount of hours.

And that's still assuming you end up in a range that is actively attacking your sshd. It's definitely possible but really doesn't seem like a bad tradeoff


lol. Depending on where you travel, the whole continent is already blanket banned anyway. But that only happens because nobody travels there, so it is never a problem.


There is nothing wrong with this approach if enabled as an informed decision. It's the part where they want to enable this by default I have a problem with.

Things that could be done is making password auth harder to configure to encourage key use instead, or invest time into making SSH CAs less of a pain to use. (See the linked paper, it's not a long read.)


> So instead of looking, like the author of these new options, for ways to make life for the bad guys harder we do nothing?

Yes, because as soon as the security clowns find out about these features, we have to start turning it on to check their clown boxes.




Don't use fail2ban

Use keys


Alternatively: Use both


According to the commit message, the motivation is also to detect certain kinds of attacks against sshd itself, not just bruteforced login attempts.


It's not quite fair, but if you want the best service, you have to pay for your own ipv4 or, in theory, a larger ipv6 block. The only alternative is for the ISP deploying the CGN to penalize users for suspicious behavior. As a classic ip-based abuse fighter, Wikipedia banned T-Mobile USA's entire ipv6 range: https://news.ycombinator.com/item?id=32038215 where someone said they will typically block a /64, and Wikipedia says they'll block up to a /19.

Unfortunately there's no other way. Security always goes back to economics; you must make the abuse cost more than it's worth. Phone-based 2FA is also an anti-spam measure, cause clean phone numbers cost $. When trying to purchase sketchy proxies or VPNs, it basically costs more to have a cleaner ip.


I like being able to log into my server from anywhere without having to scrounge for my key file, so I end up enabling both methods. Never quite saw how a password you save on your disk and call a key is so much more secure than another password.


This is definitely a common fallacy. While passwords and keys function similarly via the SSH protocol, there are two key differences. 1: your password is likely to have much lower entropy as a cryptographic secret (i.e. you're shooting for 128 bits of entropy, which takes a pretty gnarly-sized password to replicate), and 2: SSH keys introduce a second layer of trust by virtue of you needing to add your key ID to the system before you even begin the authentication challenge.

Password authentication, which only uses your password to establish you are authentically you, does not establish the same level of cryptographic trust, and also does not allow the SSH server to bail out as quickly, instead needing to perform more crypto operations to discover that an unauthorized authentication attempt is being made.

To your point, you are storing the secret on your filesystem, and you should treat it accordingly. This is why folks generally advocate for the use of SSH Agents with password or other systems protecting your SSH key from being simply lifted. Even with requiring a password to unlock your key though, there's a pretty significant difference between key based and password based auth.
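
Rough numbers for that entropy target, assuming truly random characters (real-world passwords are usually far worse):

  128 / log2(94) ≈ 19.6  ->  ~20 random printable-ASCII characters
  128 / log2(62) ≈ 21.5  ->  ~22 random alphanumeric characters

versus an ed25519 key, whose 256-bit secret comes straight from a CSPRNG.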


I’ve seen lots of passwords accidentally typed into an IRC window. Never seen that happen with an SSH key.


I heard that if you type your password in HN it will automatically get replaced by all stars.

My password is **********

See: it works! Try it!


So if I type hunter2 you see ****?


A few more things:

An SSH key can be freely reused to log in to multiple SSH servers without compromise. Passwords should never be reused between multiple servers, because the other end could log it.

An SSH key can be stored in an agent, which provides some minor security benefits, and more importantly, adds a whole lot of convenience.

An SSH key can be tied to a Yubikey out of the box, providing strong 2FA.
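
For reference, generating a hardware-backed key is a single command with a FIDO2-capable Yubikey and OpenSSH 8.2 or newer (the comment string is arbitrary):

  $ ssh-keygen -t ed25519-sk -C "laptop+yubikey"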


Putting aside everything else. How long is your password vs how long is your key?


It's this, plus the potential that you've reused your password, or that it's been keylogged.


It's more secure because it's resistant to MITM attacks or a compromised host: the password is sent, the private key isn't.


My home IP doesn’t change much so I just open the ssh port only to my own IP. If I travel I’ll add another IP if I need to ssh in. I don’t get locked out because I use a VPS or cloud provider firewall that can be changed through the console after auth/MFA. This way SSH is never exposed to the wider internet.
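
The host-level equivalent on a plain Linux box is a one-liner with ufw (the address is a placeholder), though as noted above a provider-level firewall you can edit from the console is the safer place for this rule:

  $ sudo ufw allow from 198.51.100.7 to any port 22 proto tcp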


Another option is putting SSH on an IP on the wireguard only subnet.
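
i.e. bind sshd only to the WireGuard interface address (a sketch; 10.8.0.1 stands in for your wg0 address):

  # sshd_config
  ListenAddress 10.8.0.1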


I've recently done this for all my boxes, but with tailscale instead of barebones wireguard. So fucking awesome. I just run tailscale at all times on all my boxes, and all my dns, regardless of what network I'm on, goes to my internal server that upstreams over tls. It's great, and tailscale is a snap to set up.


Use TOTP (keyboard-interactive) and password away!
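
Assuming a PAM TOTP module (e.g. pam_google_authenticator) is already wired into /etc/pam.d/sshd, the sshd_config side looks roughly like this sketch:

  UsePAM yes
  KbdInteractiveAuthentication yes
  # PAM prompts for password and TOTP code in one keyboard-interactive exchange
  AuthenticationMethods keyboard-interactive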


And even with IPv4, botnets are a common attack source, so hitting from many endpoints isn't that hard.

I'd say "well, it might catch the lowest effort attacks", but when SSH keys exist and solve many more problems in a much better way, it really does feel pointless.

Maybe in an era where USB keys weren't so trivial, I'd buy the argument of "what if I need to access from another machine", but if you really worry about that, put your (password protected) keys on a USB stick and shove it in your wallet or on your keyring or whatever. (Are there security concerns there? Of course, but no more than typing your password in on some random machine.)


You can use SSH certificate authorities (not x509) with OpenSSH to authorize a new key without needing to deploy a new key on the server. Also, Yubikeys are useful for this.


Just a warning for people who are planning on doing this: it works amazingly well but if you're using it in a shared environment where you may end up wanting to revoke a key (e.g. terminating an employee) the key revocation problem can be a hassle. In one environment I worked in we solved it by issuing short-term pseudo-ephemeral keys (e.g. someone could get a prod key for an hour) and side-stepped the problem.

The problem is that you can issue keys without having to deploy them to a fleet of servers (you sign the user's pubkey using your SSH CA key), but you have no way of revoking them without pushing an updated revocation list to the whole fleet. We did have a few long-term keys that were issued, generally for build machines and dev environments, and had a procedure in place to push CRLs if necessary, but luckily we didn't ever end up in a situation where we had to use it.
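
Concretely, issuing one of those short-lived certs is a single ssh-keygen invocation on the CA host (a sketch; the key paths, identity, principal and one-hour validity are just examples):

  # sign alice's public key, valid for 1 hour, allowed to log in as 'deploy'
  $ ssh-keygen -s /etc/ssh/user_ca -I alice@example.com -n deploy -V +1h id_ed25519.pub

  # on the servers, deployed once rather than per key (sshd_config):
  # TrustedUserCAKeys /etc/ssh/user_ca.pub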


Setting up regular publishing of CRLs is just part of setting up a CA. Is there some extra complexity with ssh here, or are you (rightfully) just complaining about what a mess CRLs are?

Fun fact: it was just a few months ago that Heimdal Kerberos started respecting CRLs at all, that was a crazy bug to discover


There's extra complexity with ssh: it has its own file of revoked keys in RevokedKeys and you'll have to update that everywhere.

see https://man.openbsd.org/ssh-keygen.1#KEY_REVOCATION_LISTS for more info

And unlike some other sshd directives that have a 'Command' alternative to specify a command to run instead of reading a file, this one doesn't, so you can't just DIY distribution by having it curl a shared revocation list.
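
In practice, revocation means regenerating a KRL with ssh-keygen and shipping it to every host yourself (a sketch; paths are examples):

  # generate a key revocation list containing the compromised key
  # (add -u to update an existing KRL instead of overwriting it)
  $ ssh-keygen -k -f /etc/ssh/revoked_keys stolen_key.pub

  # and in sshd_config on every server:
  # RevokedKeys /etc/ssh/revoked_keys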


The hard part is making sure every one of your servers got the CRL update. Since last I checked OpenSSH doesn't have a mechanism to remotely check CRLs (like OCSP), nor does SSH have anything akin to OCSP stapling, it's a little bit of a footgun waiting to happen.


Oh wow... That's pretty nuts. I guess the reason is to make it harder for people to lock themselves out of all their servers if OCSP or whatever is being used to distribute the CRL is down.


Not necessarily. There is a fork of OpenSSH that supports x509, but I remember reading somewhere that it's too complex and that's why it doesn't make it into mainline.


You might want to check out my project OpenPubkey[0] which uses OIDC ID Tokens inside SSH certs. For instance this lets you SSH with your gmail account. The ID token in the SSH certificate expires after a few hours, which makes the SSH certificate expire. You can also do something similar with SSH3 [1].

[0] OpenPubkey - https://github.com/openpubkey/openpubkey/

[1] SSH3 - https://github.com/francoismichel/ssh3


Why not just make the certificate short-lived instead of having a certificate with shorter-lived claims inside?


You can definitely do that, but it has the downside that the certificate automatically expires when you hit the set time and then you have to reauth again. With OpenPubkey you can be much more flexible. The certificate expires at a set time, but you can use your OIDC refresh token to extend certificate expiration.

With a fixed expiration, if you choose a 2 hour expiry, the user has to reauth every 2 hours each time they start a new SSH session.

With a refreshable expiration, if you choose a 2 hour expiry, the user can refresh the certificate if they are still logged in.

This lets you set shorter expiry times because the refresh token can be used in the background.


With normal keys you have a similar issue of removing the key from all servers. If you can do this, you can also deploy a revocation list.


My point is that, at first glance, this appears to be a solution that doesn't require you to do an operation on all N servers when you add a new key. Just warning people that you DO still need to have that infrastructure in place to push updated CRLs, although you'll hopefully need to use it a lot less than if you were manually pushing updated authorized_keys files to everything.


Easier to test if Jenkins can SSH in than to test a former employee cannot. Especially if you don't have the unencrypted private key.


Monkeysphere lets you do this with tsigs on gpg keys. I find the web of trust marginally less painful than X509


> I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment".

pam_pwnd[1], testing passwords against the Pwned Passwords database, is a(n unfortunately abandoned but credibly feature complete) thing. (It uses the HTTP service, though, not a local dump.)

[1] https://github.com/skx/pam_pwnd
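
Wiring it in is a single PAM line (a sketch; see the pam_pwnd README for the exact control flag and module arguments it expects):

  # /etc/pam.d/sshd
  auth  required  pam_pwnd.so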


meh. Enabling any of the (fully local) complexity rules has pretty much the same practical effect as checking against a leak.

If the password has decent entropy, it won't be in the top 1000 of the leaks, so it won't be used in blind brute force attacks like this.


I'd love to penalize any attempt at password auth. Not the IP addresses, just if you're dumb enough to try sending a password to my ssh server, you're going to wait a good long time for the failure response.

Actually I might even want to let them into a "shell" that really screws with them, but that's far outside of ssh's scope.


I certainly don't want to expose any more surface area than necessary to potential exploits by an attacker who hasn't authenticated successfully.


Yeah you're right, the screw-with-them-shell would have to be strictly a honeypot thing, with a custom-compiled ssh and all the usual guard rails around a honeypot. The password tarpit could stay, though script kiddie tools probably scale well enough now that it's not costing them much of anything.


I had a similar experience with a Postgres database once. It only mirrored some publicly available statistical data, and it was still in early development, so I didn't give security of the database any attention. My intention was anyway to only expose it to localhost.

Then I started noticing that the database was randomly "getting stuck" on the test system. This happened a few times until I noticed that I had exposed the database to the internet with postgres/postgres as credentials.

It might even have been some "friendly" attackers, or maybe even the hosting provider, that changed the password when they were able to log in, to protect the server. I should totally try that again once and observe what commands the attackers actually run. A bad actor probably wouldn't change the password, to stay unnoticed.


How did you accidentally expose it to the Internet, was your host in the DMZ?


I saw a Postgres story like this one. Badly managed AWS org with way too wide permissions, a data scientist sort of person set it up and promptly reconfigured the security group to be open to the entire internet because they needed to access it from home. And this was a rather large IT company.


Yeah on some cloud provider, the virtual networks can be all too confusing. But this story sounded like a home machine.


DMZ setting on a router makes this pretty easy.

I once pointed the DMZ at an IP assigned by DHCP. Later, when the host changed, I noticed traffic from the internet getting blocked on the new host and realized my mistake.


docker compose, I accidentally committed the port mappings I set up during local development.


Interesting paper from Tatu Ylonen. He seems quick to throw out the idea of certificates only because there is no hardened CA available today. Wouldn't it be better to solve that problem, rather than going in circles and making up new novel ways of using keys? Call it what you want; reduced to their bare essentials, in the end you either have delegated trust through a CA or a key administration problem. Whichever path you choose, it must be backed by a robust and widely adopted implementation to be successful.


As far as OpenSSH is concerned, I believe the main problem is that there is no centralized revocation functionality. You have to distribute your revocation lists via an external mechanism and ensure that all your servers are up to date. There is no built-in mechanism like OCSP, or better yet, OCSP stapling in SSH. You could use Kerberos, but it's a royal pain to set up and OpenSSH is pretty much the defacto standard when it comes to SSH servers.


> Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network.

According to the article, you can exempt IPs from being blocked. So it won’t impact those coming from known IPs (statics, jump hosts, etc).


most places barely even have the essential monthly email listing essential services' IPs in case of a DNS outage.

nobody cares about IPs.


Agreed. In addition to the problems you mentioned, this could also cause people to drop usage of SSH keys and go with a password instead, since it's now a "protected" authentication vector.


> innocent bystanders as CGNs are deployed

SSH is not HTTPS, a resource meant for the everyday consumer. If you know that you're behind a CGN, as a developer, an admin or a tool, you can solve this by using IPv6 or a VPN.

> Worst case, this will give bad actors the option to lock the original owner out of their own server

Which is kind of good? Should you access your own server if you are compromised and don't know it? Plus you get the benefit of noticing that you have a problem in your intranet.

I understand the POV that accessing it via CGN can lead to undesirable effects, but the benefit is worth it.

Then again, what benefit does it offer over fail2ban?


Yes, I agree. This seems like a naive fix.

Just silencing all the failed attempts may be better. So much noise in these logs anyway.


Fail2ban can help with that


Just throw away that document and switch to kerberos.

All the problems in this document are solved immediately.



