
If you’ve been a sysadmin as long as I have then you’ll remember when services didn’t even manage their own listeners, and instead relied on a system-wide super-server (inetd) that owned the listening sockets and launched each service on demand. Whereas now you have to manage each listener individually.
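For anyone who never ran inetd: a sketch of what a typical /etc/inetd.conf entry looked like (fields are service, socket type, protocol, wait flag, user, server program, and args; paths here are illustrative):

```
# service  type    proto  wait    user  server-program        args
ftp        stream  tcp    nowait  root  /usr/sbin/in.ftpd     in.ftpd -l
telnet     stream  tcp    nowait  root  /usr/sbin/in.telnetd  in.telnetd
```

inetd bound every listed port itself and forked the named daemon once per incoming connection, so the services themselves never had to manage a listener.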

That was additional initial effort but the change made sense and we sysadmins coped fine.

Likewise, there was a time when server-side website code had to be invoked via an httpd plugin or CGI. Now every programming language has several different web frameworks, each with its own HTTP listener and each needing to be configured in its own unique way.
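To make that concrete, here's a minimal sketch of the modern pattern using only Python's standard library as a stand-in for any such framework: the service opens and owns its own HTTP listener rather than being invoked by a shared httpd.

```python
# Minimal sketch: the service owns its own HTTP listener, no inetd,
# no httpd plugin. Stdlib only, standing in for any web framework.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request logging for the demo
        pass

# The service binds its own socket; port 0 asks the OS for a free port.
server = HTTPServer(("127.0.0.1", 0), Hello)
# Serve a single request in the background so the demo terminates
# (a real service would call server.serve_forever()).
threading.Thread(target=server.handle_request, daemon=True).start()

port = server.server_address[1]
reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(reply)
server.server_close()
```

The point being: the process itself decides its bind address, port, and connection handling, which is exactly the per-service configuration burden described above.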

Like with inetd, the change made sense and we managed just fine.

Tech evolves — it’s your job as a sysadmin to deal with it.

Plus, if you’re operating at an enterprise level, where you need a holistic view of traffic and firewalling across distinct services, then you’d disable this. It’s not a requirement to have it enabled. A point you keep ignoring.




> Likewise, there was a time when server-side website code had to be invoked via an httpd plugin or CGI. Now every programming language has several different web frameworks, each with its own HTTP listener and each needing to be configured in its own unique way.

And still we keep all those webservices, be they in Java, Go, node, C# or Python, behind dedicated webservers like nginx or apache.

Why? Because we trust them, and they provide a Single-Point-Of-Entry.
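As a sketch of that single point of entry, here's what an nginx server block proxying to two backend services might look like (hostnames, ports, and paths are made up for illustration):

```nginx
# nginx as the single point of entry; the backends never face the network
server {
    listen 443 ssl;
    server_name example.com;

    location /api/ {
        proxy_pass http://127.0.0.1:8081;   # e.g. a Go service
    }
    location / {
        proxy_pass http://127.0.0.1:8082;   # e.g. a node app
    }
}
```

TLS, access control, and rate limiting can then all be handled once, at this layer, instead of per backend.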

> Tech evolves — it’s your job as a sysadmin to deal with it.

Single-Point-Of-Entry is still preferred over having to deal with a bag of cats of different services, each with its own ideas about how security should be managed. And when a single point of entry exists, it makes sense to focus security there as well.

This has nothing to do with evolving tech, this is simple architectural logic.

And the first of these points of entry, one that every server has, is the kernel's packet filter. Which is exactly what tools like fail2ban manage.
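For illustration, with its iptables backend fail2ban maintains firewall configuration roughly like the following (chain name matches fail2ban's convention for an sshd jail; the banned address is an example, and these rules need root):

```shell
# Per-jail chain that fail2ban creates and routes SSH traffic through
iptables -N f2b-sshd
iptables -I INPUT -p tcp --dport 22 -j f2b-sshd

# On repeated auth failures, fail2ban inserts a per-source block...
iptables -I f2b-sshd 1 -s 203.0.113.7 -j REJECT --reject-with icmp-port-unreachable

# ...while everyone else falls through to normal processing
iptables -A f2b-sshd -j RETURN
```

So the enforcement lives in the kernel's packet filter either way; fail2ban is just the userspace tool deciding what goes into it.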

> A point you keep ignoring.

Not really. Of course an admin should deactivate service-level security in such a scenario, and I never stated otherwise.

The point is: That's one more thing that can go wrong.


> And still we keep all those webservices, be they in Java, Go, node, C# or Python, behind dedicated webservers like nginx or apache.

Not really, no. They might sit behind a load balancer, but that's to support a different feature entirely. Some services might still be fronted by nginx or apache (though the latter has fallen out of fashion in recent years) where the proxy's threading model handles connections better than the service itself. But even there, that's the exception rather than the norm. Quite often those services are standalone, and any reverse proxying is just to support orchestration (eg K8s) or load balancing.

> Single-Point-Of-Entry is still preferred over having to deal with a bag of cats of different services, each with its own ideas about how security should be managed.

Actually, no. What you're describing is the castle-and-moat architecture, and that's the old way of managing internal services. These days it's all about zero-trust.

https://www.cloudflare.com/en-gb/learning/security/glossary/...

But again, we're talking enterprise-level hardening there, and I suspect this openssh change is more aimed at hobbyists running things like a Linux VPS.
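Assuming the change being discussed is OpenSSH's built-in per-source penalty mechanism (introduced in OpenSSH 9.8), the enterprise opt-out described above is a one-line sshd_config setting:

```
# sshd_config: turn off sshd's own penalising of misbehaving source
# addresses, e.g. when a central firewall or IDP layer handles this instead
PerSourcePenalties no
```

Which is why "you'd disable this" is cheap at enterprise scale while the secure-by-default still benefits unmanaged boxes.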

> > A point you keep ignoring.

> Not really. Of course an admin should deactivate service-level security in such a scenario, and I never stated otherwise. The point is: that's one more thing that can go wrong.

The fact that you keep saying that _is_ missing the point. This is one more thing that can harden the default security of openssh.

In security, it's not about all or nothing. It's a percentages game. You choose a security posture based on the level of risk you're willing to accept. For an enterprise, that will mean using an IDP to manage auth (including, but not specific to, SSH). A good IDP can be configured to accept requests only from non-blacklisted IPs (eg blocking countries where no employees are known to work), and even to only accept logins from managed devices like corporate laptops.

But someone running a VPS for their own Minecraft server, or something less wholesome like BitTorrent, usually isn't the type to invest in a plethora of security tools. They might not even have heard of fail2ban, denyhosts, and so on. So having openssh support auto-blacklisting on those servers is a good thing. Not just for the VPS owners, but for the rest of us too, because it reduces the number of spam and bot servers.

If your only concern is that professional / enterprise users might forget to disable it, as seems to be your argument here, then it's an extremely weak argument to make, given you get paid to know this stuff and hobbyists don't.



