Some thoughts on OpenSSH 9.8's PerSourcePenalties feature (utcc.utoronto.ca)
75 points by jandeboevrie 4 months ago | hide | past | favorite | 80 comments



I suspect the distributed cracking will move to the same pattern the SMTP/POP3 brute-force guys use: one IP per x+1 seconds, where x = the ssh penalty window. We have seen this on our customer-facing SMTP server, where hundreds of compromised remote IPs each try one password per 30-60 min. Still, I welcome this change, as there are enough single-source attackers out there that this will help cut down on the size of the logs to process / digest.


Actually, this is already the state of the art in cracking. My honeypot sees several different IPs brute-forcing concurrently, and they seem unrelated. But once you let one of them log in, it quits immediately and all those IPs go quiet after ~15 sec. Then one of them logs in again to deploy a miner.


Next level: let them login and forward the ssh connection to the digital equivalent of a room full of mirrors.


Reminds me of using the old MIRROR target in iptables back in the day, before it was removed because it's ridiculous. We used to watch script kiddies trying to brute-force their own hosts, but even then we knew it was ripe for abuse.

https://www.linuxtopia.org/Linux_Firewall_iptables/x4448.htm...


Probably for the best, since it sounds like that could be used for DDoS amplification and/or reflection.

For example, an attacker could spoof traffic to get two different reflectors hall-of-mirror-ing each other, or use a botnet that spoofs traffic to get one collection of dupes to slam a single victim in response, etc.


How would you spoof multiple valid packets in a TCP-based protocol requiring a sequence of interactions when you can't receive any of the ACKs (because they'll be sent to not-your-IP)?


Depending on the protocol you can probably do reflection attacks over tcp with TFO.


It was beautiful to see people nuke themselves in winnuke era.


This is already the practice in my experience. Fail2ban became completely useless for ssh about 5-6 years ago. It's always just one to three tries per IP address.

So it looks like this OpenSSH feature is a decade late.


That doesn’t make it useless. It still severely limits the rate of brute force versus having no limit.


For what it's worth if you have control over both client and server and don't want to limit access using a strict IP whitelist, an alternative solution that will keep your logs quieter and add additional protection is to use good old fashioned port knocking. knockd on Linux helps with automating this on the server side. Client side you can use anything (although knockd does include a dedicated client) to send your sequence of packets before actually connecting.
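For the curious, a minimal knockd setup looks something like this (ports, the set-up command, and timeouts are illustrative placeholders; adapt to your firewall):

```
# /etc/knockd.conf -- illustrative example, values are placeholders
[options]
    UseSyslog

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /usr/sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

Client side, something like `knock -v server.example.com 7000 8000 9000` followed by a normal `ssh` would open the door.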


I really think this solution is underrated. Port knocking is robust, doesn't use any special technology, and servers using it can't reasonably be scanned for. The only real disadvantage is that any passive observer can see your knock sequence in "plaintext" (so that includes anyone logging netflow).

Even so, I don't know why OpenSSH hasn't implemented it instead of the silly fail2ban theatre we're discussing in these comments.


One thing to help with the passive observer would be to make the knock sequence time-varying, like a TOTP. It's still a very thin addition, but sometimes the more defense in depth the better.


lol, hadn't read all the comments before posting mine. Have an upvote! Actually, why not do both: vary the knock code and the resulting ssh port using successive codes.

I just checked the knockd man page and it turns out it can use a one_time_sequences file that contains a sequence of port-knock combinations. I wonder if this file is checked dynamically, or loaded and parsed at startup? Or could one simply echo the TOTP code straight into that file and HUP the knockd service each time (say, with the TOTP interval set to something like 5 minutes)?



Well, that's the answer. Thank you.


I wonder, could you combine command-line TOTP tools with port knocking for fully time-based, unique knock codes? Or even use the TOTP code for the ssh port?

I'm totally gonna do this.
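A sketch of the idea, assuming both sides share a secret and roughly synchronized clocks. The function name and parameters are made up for illustration; this isn't part of knockd or any existing TOTP tool:

```python
import hashlib
import hmac
import struct
import time

def knock_ports(secret, counter=None, interval=300,
                n_ports=3, port_min=10000, port_max=60000):
    """Derive a time-varying knock sequence from a shared secret, TOTP-style.

    Both client and server compute the same ports for the current
    time interval; a passive observer's captured sequence expires.
    """
    if counter is None:
        counter = int(time.time()) // interval
    # HMAC the interval counter, then slice 2 bytes per port.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    span = port_max - port_min
    return [port_min + int.from_bytes(digest[2 * i:2 * i + 2], "big") % span
            for i in range(n_ports)]
```

The server would regenerate its expected sequence (e.g. rewrite knockd's one_time_sequences file) each interval; the client runs the same function before knocking.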


Because it's a stupid low-entropy key put in front of a service that you should be using MUCH harder keys on instead of passwords, as of circa the '90s.

You're wanting to add a screen door on a sub, and it's just a feel-good option for those who don't understand the math involved.

The proper solution is to stop using passwords and use keys or proper cert auth.


I think it goes without saying that you would still want to be using keys instead of passwords for the actual authentication. Port knocking should always be an additional layer, not a replacement layer.


I find adding dynamic dns entries to my firewalls much more efficient and to have a more meaningful protection value.

A timed job that checks the IP of your clients and updates the firewall every 30 seconds seems a much more secure method than having a magic sequence of ports that can be captured in the wild.

It’s hard to spoof a full tcp connection (with a key) needed to update your ddns.

Best part is you can leave your ddns to a separate box or service which complicates the compromise of a single host
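A sketch of that timed job, assuming an nftables set named `ssh_allow` already exists in table `inet filter` (the set name, hostname, and function name are hypothetical); by default it only returns the commands it would run:

```python
import socket
import subprocess

def refresh_ssh_allowlist(hostname, apply=False):
    """Resolve a DDNS name and rebuild an nftables allowlist set with it.

    With apply=False this is a dry run that just returns the commands;
    set apply=True on a host where nft exists and you have privileges.
    """
    ip = socket.gethostbyname(hostname)  # current address of the client
    cmds = [
        ["nft", "flush", "set", "inet", "filter", "ssh_allow"],
        ["nft", "add", "element", "inet", "filter", "ssh_allow", f"{{ {ip} }}"],
    ]
    if apply:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

Run it from cron or a systemd timer every 30 seconds, as described above.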


What's the difference from fail2ban? Though I feel neither of the two works now.


Seems similar, except that this is built-in to sshd vs having to install a separate tool. It's also enabled by default here in sshd.


fail2ban works just fine with sshd. I combine this with GeoIP blocking of certain troublesome locations in firewalls. 98% of my scanning / exploiting comes from 11 countries.


fail2ban is great, but only works on the local host.

The post says: "Right now our perimeter firewall is blind to whether a brief SSH connection was successful or not"

(I suspect there's a way to set up centralised logging, with fail2ban looking at those centralised logs and sending updates to a perimeter firewall, but that's not a typical deployment of fail2ban. Or at least it wasn't when I was heavily using it a while back.)


How about a service that lets bruteforcers "in" after some number of failed attempts, but what they get is just a fake command prompt that accepts all of their commands? I'm sure hackers would eventually adapt, but it would annoy them for a while.
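A toy version of that fake prompt fits in a few lines (a sketch only, not a real honeypot; tools like Cowrie do this properly, with logging and a simulated filesystem):

```python
import socketserver

class FakeShell(socketserver.StreamRequestHandler):
    """Present a prompt and answer every command with a plausible failure."""
    def handle(self):
        self.wfile.write(b"$ ")
        for line in self.rfile:
            cmd = line.strip().decode(errors="replace")
            if cmd in ("exit", "logout"):
                return
            self.wfile.write(f"{cmd}: command not found\n$ ".encode())

# To run the decoy (port 2222 is an arbitrary choice):
# socketserver.ThreadingTCPServer(("0.0.0.0", 2222), FakeShell).serve_forever()
```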


They would just connect, issue some command like ‘ls’ and evaluate the response, not sure that would slow them down much more than they already are.


I can't tell if that's sarcasm, but that's a thing already, called honey pot. Very often, you set it up to let attackers in so you can analyze what they're doing and protect yourself better.


This thread has some mentions of memory-safety issues. Has anyone tried a Go or Rust sshd in production?

I've had some dreams of a "tighter" sshd for my universe and have toyed (unsuccessfully) with a Go one.

Anyone want to share their experience?


Hmm... there is PerSourcePenaltyExemptList to whitelist specific hosts, so while I agree it might create a bit more complexity (not much different than handling fail2ban and co.), it's not "blocking" as described.


I usually just IP whitelist inbound sshd connections and then drop all other packets


If you must expose SSH to the internet, this can be a helpful feature.


PerSourcePenalties-like abuse mitigations are not very subtle, but often very effective.

My own experience with this is not as much with SSH as with SMTP. If your particular IP emits a single non-deliverable message every 30 minutes or so, that's fine.

But: more than a handful of SPF failures for the same sender/recipient combo within 10 minutes or so? Yeah, you're now on the general-deny-list. And persisting even then? Automagically tar-pitted on the firewall, and have fun...


> But: more than a handful of SPF failures for the same sender/recipient combo within 10 minutes or so? Yeah, you're now on the general-deny-list. And persisting even then? Automagically tar-pitted on the firewall, and have fun...

What is the advantage of bothering to do this?

To me, it seems like you're risking blocking legitimate traffic (as two obvious examples, in cases where a sending mailserver is temporarily misconfigured, or if a malicious mailserver is using a cloud IP that is later recycled to a completely unrelated user), for basically zero upside.


> What is the advantage of bothering to do this?

For SMTP: log optimization.

Like: yeah, I don't care that you're self-DDOSing if you're an actual spammer. But, just in case you're a legitimate sender lacking SPF records, and you just need some help, I still want some kind of signal.

For SSH, I guess you might want something similar, just to catch authorized-yet-misconfigured remotes...


The future is bleak: attackers can just rent a /48 IPv6 block for a few dollars and have billions of IPs at their disposal.


You can control the block size for this, though: https://man.openbsd.org/sshd_config.5#PerSourceNetBlockSize
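For reference, tuning this in sshd_config might look like the following (values are illustrative; check sshd_config(5) on your version for the exact keywords and defaults):

```
# Penalize sources that repeatedly fail authentication (OpenSSH 9.8+)
PerSourcePenalties authfail:10 noauth:1 max:600
# Aggregate penalties per /32 for IPv4 and per /56 for IPv6
PerSourceNetBlockSize 32:56
# Never penalize these networks (e.g. your own management hosts)
PerSourcePenaltyExemptList 192.0.2.0/24,2001:db8::/48
```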


Such blocking was never about protection, but about not having 100500 error messages in your logs every hour. The attackers that get blocked are the kind that cast a wide net and see what gets caught.


I'm sure people will quickly start blocking /48s or even larger blocks once that becomes common.


They already do. The smallest publicly routable IPv6 prefix is a /48, so a /48 is what ~everyone uses as the analogue of the v4 /32 for rate-limiting purposes.


Everyone? I’ve had ISPs provide me with anything from an /48 to a /64. For the latter, you’d be vastly over counting with /48-based limiting.


We have two perspectives, the normal user and the scraper. The normal user either acquires a /64, /56 or /48, depending on the whim of their ISP. There is typically no cost difference between these options, which means that the scraper (or their upstream proxy) always chooses a provider which offers a /48.

Thus, the default unit of IPv6 blocking must be a /48. This situation will persist for as long as /48s are readily available at the same price as /64s.

Perversely, the reason we don't have this issue in IPv4 space is that the address space is of the same order of magnitude as the number of potential users. That artificial scarcity means a routable /24 costs 256× an end-user /32, so the unit of blocking can be a /32.
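That default blocking unit is easy to express in code; a sketch using Python's stdlib ipaddress module (the function name is made up for illustration):

```python
import ipaddress

def rate_limit_bucket(addr):
    """Collapse an address to its rate-limiting bucket:
    /32 for IPv4 (one end user), /48 for IPv6 (largest common end-user block)."""
    ip = ipaddress.ip_address(addr)
    prefix = 32 if ip.version == 4 else 48
    return str(ipaddress.ip_network((addr, prefix), strict=False))

# Two hosts anywhere inside the same /48 land in the same bucket:
# rate_limit_bucket("2001:db8:1234:ff::1") == rate_limit_bucket("2001:db8:1234::2")
```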


> Perversely, the reason we don't have this issue in IPv4 space is because the address space is of the same order of magnitude as the number of potential users.

Do we not? And is it really?

There are some /32 IPv4 addresses hosting many users, e.g. with CG-NAT, and it's already an issue with regards to blocking/rate-limiting.

Just like there are single-user /48s and multi-user /64s, there can be single-user /32s and /32s hosting tons of users behind a CG-NAT.


Sure, but that's the same argument I'm making: the unit of blocking will be the largest unit that is routinely allocated to a single user. In IPv4 space that's a /32, so people block by /32. In IPv6 space that's a /48, so people block by /48. Check out Let's Encrypt's rate limit policy, for example.

The difference I'm pointing out between IPv4 and IPv6 is that nobody is giving single IPv4 users /24s for their own use. But IPv6 /48s (which are theoretically somewhat equivalent to IPv4 /24s) are freely available. This is a problem because it makes over-blocking even more likely than it already is. And as you point out elsewhere, over-blocking is already an issue in IPv4 space.


But the /64 would not be publicly routable (i.e., be visible in the global BGP table).

I guess in practice, it's more like IPv4 /24 = IPv6 /48, though. The closest analog to an IPv4 /32 would probably be an IPv6 /64.


How do routeability and BGP matter for the purpose of traffic source identification and rate limiting?


If you have two addresses in the same /64, you know almost certainly they are on the same LAN.

If you have two addresses in neighboring /64s (same initial 63 bits), or in general within the same /48 (same initial 48 bits), you know almost certainly that they are somehow within the same organization. They could be within the same building, or in the same company, or using the same ISP or cloud provider; you don't know, but they are somehow related. How do you know? Well, since a /49 isn't individually routable in BGP, they have to _somehow_ originate at the same upstream network. There has to be some sort of cooperation between them (possibly through an ISP as a middleman).

But if they are in _neighboring_ /48s, you don't have this kind of guarantee. They could be from completely different organizations. Most likely, they are on the same continent (since they were given out by the same regional internet registry; RIR), but even that is not really guaranteed.

So when you are bucketing addresses for rate limiting purposes, a /48 is a reasonable place to start doing that, just like /24 is for IPv4. Of course, you may need to get smarter than that (e.g. an attacker could have access to a /32), but it's a reasonable starting spot.


> So when you are bucketing addresses for rate limiting purposes, a /48 is a reasonable place to start doing that, just like /24 is for IPv4.

I've encountered assumptions such as this one as a user, and they're really frustrating.

More than once I've found myself banned from being able to log in, view a site etc. because of somebody else's bad behavior I temporarily share a CG-NAT or large public Wi-Fi with, or more likely because somebody topologically close to me got hacked.

Meanwhile, actual attackers are using pretty much the entire IPv4 space worth of compromised embedded devices spread across the globe...


How exactly is a non-routable IPv6 address going to try to log in to your ssh server?


By being part of a larger, routable prefix.

gnfargbl didn't say the _address_ was unroutable, but that effectively, routing policies mean that /48 is a common minimum unit for administrative purposes (similar to how /24 has a special “minimum size” meaning for IPv4).


Good to know my single ipv6 address that my ISP gives my home router can be used to get the whole /48 ratelimited by ~everyone


Most ISP customers get either a /48 or a /56. The IPv6 equivalent of a random 1.2.3.4/32 consumer IP address would be something like 2001:abcd:1234::/48 or 2001:abcd:1234:ff::/56 in IPv6.


It's certainly an annoying future. I fully expect some sort of ASN-reputation map and service to become de rigueur in the next decade. Pretty sure heavier-duty SIEM/IPS systems already do that.


I wish the openssh folks would implement a UDP based "whole auth key or no talk at all" protocol.

i.e. Single Packet Authorization

Wrapping your ssh with wireguard (because wireguard doesn't respond without a full key) doesn't feel too good.


I’ve been thinking that we need a new ‘WireGuard of SSH’ for a while. SSH is such a complex behemoth now. Before WireGuard came along VPNs were horrible to work with, the cryptography was bad and they were too configurable. Just do one thing and do it well - provide a way to establish an encrypted remote shell. Let others build on top of that if they need more.

Vulnerabilities lurk in overly complex software with a thousand bells and whistles. By reducing the code paths you’re making software that is much easier to audit and fuzz.


I built a HTTP-based shell system on top of a configuration management tool. It uses public key cryptography via JWTs, and generates noise to obfuscate keystroke timing. Since it's all over HTTP, you don't really get any port knocking, and you can expose access using proxies and middleware.

https://etcha.dev/docs/guides/shell-access


I agree directionally, but the frankly stunning security track record of OpenSSH makes that a hard argument to really prosecute.


I'm curious about what works and what doesn't: many experts don't trust OpenBSD's security implementation (I don't want to argue the point, just stating a fact - many don't). Yet many do trust OpenSSH's security implementation, and OpenSSH is of course an OpenBSD project.

What works in OpenSSH that doesn't work in OpenBSD? Maybe it's as simple as, though under the same umbrella OpenSSH uses a different team, methodology, etc.


I’m genuinely interested to hear criticism re: security in openBSD if any one has an interesting link or take


That's easy: OpenBSD pioneered OS-wide dragnet code audits, and also an ethos of minimally-invasive, parsimonious OS security features (PID randomization and syscall pinning are two emblematic examples). The rest of the world caught up to OpenBSD on dragnet audits, and then surpassed it; meanwhile, the OpenBSD ethos of minimal, modest security features was probably less effective than the Linux approach of features that bend the whole universe around security challenges or that thread deeply through the operating system.

More than anything else though, it's not so much that OpenBSD is less resilient than Linux (I think a case could be made), and more that OpenBSD isn't materially more secure.

They should have killed the "only N K holes in Z time" tagline a long time ago.


I have written poorly worded criticisms of the OpenBSD project on HN before. They boil down to this: from my observation, it looks like the approach to security in the OpenBSD project is adding more code to solve security issues when it should be the opposite. Code is a liability. You should write as little of it as possible to solve the problem at hand and not spare a single line that isn't needed. In this case the problem is getting a shell on a remote host. Why do you need so many configuration options to solve this single problem?

The OP is a prime example of the opposite happening - it's adding code to prevent repeated authentication failures. Why would this be needed in the first place? If you have configured OpenSSH correctly (that is, using public key authentication instead of password auth, which should not even be an option), then repeated authentication failures should not be a problem. At worst, they take up some CPU time.

Much of the code in OpenBSD and the wider OpenBSD projects also addresses memory-safety issues, which would not be issues in the first place if they just used a memory-safe language. Yet they push ahead using C in the full knowledge that there are better options available. Java, Go, Python, Rust - I literally don't care; anything would be better than C. Developers should not need to spend hours carefully poring over each other's patches to find critical memory mistakes. They should not need to spend hours reading C development guidelines or rely on mailing-list oracles. By eliminating memory errors as a class, mental capacity is freed up to identify logic errors.


I can't speak for OpenBSD specifically, but I can speak to some of my thoughts on why an operating system continues to use C. Supporting a language ecosystem is not easy; the fewer "default" languages needed to bootstrap the core system, the better. The nice part about C is that it's one of the few languages suited to both kernel space and user space. Of the alternatives you listed, the only language that could even seriously be considered for kernel space is Rust, and even that took a lot of back and forth to get to that point in the Linux kernel. Higher-level languages carry a larger range of assumptions, and you have to drag those accommodations into kernel space if you want to use them there. There is also the issue of memory management in kernel space being a much more complicated environment than user space. How do I teach the borrow checker about my MMU bring-up and maintenance?

I am also skeptical of your claim that removing memory bugs frees up brain space for logic bugs, at least for Rust. Rust has grown quite a number of language features that, in my experience, result in a higher cognitive load compared to C. If you seriously reduce your reliance on the C macro system (as Plan 9 has shown possible), the language itself is quite simple.


In very simple terms, their approach to security is auditing to try and remove all bugs, but offering very little to protect against the case where there is a remote root hole, such as they have had in the past.

Things have gotten better in recent years with things like pledge and doas, but they are quite lacking compared to proper MAC or sandbox implementations. Worse, many of the OpenBSD devs don't seem to know much about those technologies and are kind of dismissive. I remember an OpenBSD user or enthusiast trying to argue pledge was equivalent to SELinux, which was pretty bad.

I'd take a slimmed down hardened Linux install over OpenBSD any day of the week.


> In very simple terms, their approach to security is auditing to try and remove all bugs, but offering very little to protect against the case where there is a remote root hole, such as they have had in the past.

I'm not sure what is meant by "protect against the case where there is a remote root hole". Do you mean, to mitigate harm from existing holes? They secure things from top to bottom, but maybe you mean some kind of authorization issues? To proactively prevent holes? They do a lot of engineering around the latter (and around other attacks) - to the extent that many question the wisdom of solving 'problems' for which there is no proven exploit. And they have had very few remote holes - whatever the reason, there's not evidence that they fail to prevent them.


> I'm not sure what is meant by "protect against the case where there is a remote root hole".

I don't understand how this is ambiguous. I mean limit the damage that an attacker can do if they get root - this is something RHEL can do and OBSD pretty much refuses to.

> They secure things from top to bottom

Eh. Kind of. The devs are against security technologies that they think add too much complexity to their system regardless of benefits. That's why they don't have any kind of RBAC or MAC, just plain old DAC. You get root, you get everything - pledge and unveil won't help too much there.


> I don't understand how this is ambiguous.

It wasn't an attack, but a genuine question, for which I provided two possible interpretations. I was (am) interested in what you were saying.

> That's why they don't have any kind of RBAC or MAC, just plain old DAC. You get root, you get everything - pledge and unveil won't help too much there.

Thanks for explaining.


> It wasn't an attack, but a genuine question, for which I provided two possible interpretations. I was (am) interested in what you were saying.

No worries at all! I wasn't taking it as an attack and apologies if my response seemed combative. I just honestly didn't understand where the point of confusion was.

> Thanks for explaining.

My pleasure! If you're still interested in discussing, I am interested in the point you made that 'They secure things from top to bottom' - if I may ask, why do you think this is the case? It's not a statement I would ever make myself.


I don't think OpenSSH's affiliation with OpenBSD really means anything; it's an accident of history that the people most likely to want to build something like OpenSSH happened to have been involved with OpenBSD at the time. Just my take.


> Many experts don't trust OpenBSD's security implementation...

Does this distrust go to the extent where confronted with a public Internet-facing, stock OpenBSD with a TLS-secured SSH connection, and a GSA hardened RHEL 8 [1] with a similar SSH configuration, they’ll pick the RHEL instance?

[1] https://github.com/GSA/ansible-os-rhel8


RHEL has put a lot of work into their SELinux policy. Without a doubt it's more secure than OpenBSD.

If both OSes had a remote root hole, on OpenBSD you would have carte blanche to do whatever, on the RHEL system you would be able to do very little.


Many experts will pick RHEL over OpenBSD yes. It's impossible to isolate a single reason; generally you want a server to do something. (I guess if you're talking about SSH only then maybe you want it as a bastion host? In my experience people will generally use the same OS as their other servers though; the risk of misconfiguring an unfamiliar operating system outweighs any security improvement from picking one or the other)


SSH (v2) has always been a complex behemoth of a protocol.


I've not used it, but this ssh wrapper was mentioned here a few days ago: https://github.com/mrash/fwknop


Telnet over WireGuard?


You'd really have to think about that, I'd personally reject it just on defense in depth grounds. SSH over WireGuard is probably the correct solution.


That's not easy; if you run ssh -vvv you will see that it's a long back and forth of negotiations and information exchanges.


Why? What does that get you?


Then it would be vulnerable to MITM.


How so?


You fool a valid sender into thinking you're the recipient, he passes you his one and done key, which you use to login and takeover.


You're only passing public keys with WireGuard; it's asymmetric, not symmetric, encryption.

It doesn't matter if an attacker gets your (or the server's) public key.



