Show HN: A proactive way of securing your Linux server (github.com/prashantgupta24)
143 points by prashantgupta24 on Sept 3, 2020 | 107 comments



This just feels like moving ssh auth to another port, using an obscure "authentication format" that nearly no one uses.

If you're using pubkey or certificate auth, have disabled password auth, and restrict what users are allowed to ssh in, this just feels like added complexity and points of failure (not to mention a possible source of crypto vulnerabilities) with not much added benefit.

Having said that, it's a cool project in general, and I feel like it could be useful to dynamically manage access to large groups of servers (perhaps not just ssh; you could use it to manage access to https interfaces or other things). Then again, if you have a large group of servers, you should have those ports blocked to the world and only allow access through a VPN and/or jump boxes.


> Then again, if you have a large group of servers, you should have those ports blocked to the world and only allow access through a VPN and/or jump boxes.

Even with just one server it can make sense to use a VPN.

In my case I am running WireGuard on my server, and I use that VPN among other things to remote in to my grandfather's computer when he needs help. Since we are both behind NAT at our respective homes, a VPN was needed, and I chose WireGuard. WireGuard is super easy to set up, and works on FreeBSD (my server and my desktop run this), macOS (my laptop and my grandfather's desktop), iOS (my phone) and Linux (I don't currently run Linux, but I have for many years and certainly will again on some machines). I don't use Windows, and mostly haven't for a decade aside from a few small exceptions, but I think WireGuard is available for Windows as well.

So anyway, back when I set up WireGuard I decided: hey, now that all my machines and my phone are on this VPN, there's really no reason to expose SSH to the internet anymore. So ever since then I have had sshd listen only on the VPN. It works great.
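
A minimal sketch of what that looks like in sshd_config, assuming the server's WireGuard address is 10.0.0.1 (substitute your own):

  # /etc/ssh/sshd_config -- bind sshd to the WireGuard address only
  ListenAddress 10.0.0.1

Then restart sshd and it simply stops answering on the public interface.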


This. For any amateur server setup that is connected to the internet, WireGuard is a one-stop solution to cut down all exposure to the internet: run all services only on localhost or the WireGuard interface, and keep only the WireGuard port open to the internet.


Agreed - it looks like security through obscurity to me. It has value by shifting the goalposts away from common mass scanning, but to a motivated attacker the security is equivalent.


Relying on security through obscurity alone is a terrible idea, but there is absolutely a place for it. Most of us do not specifically get targeted, but will be affected by automated attacks that look at a large block of ips.


Yeah, just moving your SSH daemon from port 22 to some other port number alone can help with that.


We have continuous distributed dictionary attacks on our SSH servers, with attempts from literally thousands of IP addresses at a cadence designed not to trigger rate alerts.

We moved from port 22, and within days all ports were scanned and the attacks started again on the new port.

We are working on a different scheme to thwart the attackers.


I faced a similar issue, with tens of thousands of login attempts from dozens of unique IPs daily on my server. Moving off port 22 helped with some of the traffic, but not all. With port knocking, however, nobody has ever even hit the right combination, and I've had no login attempts since.


You could generate a random sequence to determine which port to open on any given day.


Port knocking, huh?


Is there a practical benefit to pubkey or certificate auth? I feel like using ufw with limit on port 22 and using fail2ban covers most cases, assuming good password decisions.


Yes, when you use key-based authentication to a server, you never actually give the server a secret; you just cryptographically prove you have the private key that corresponds to your public key. This is important because the server can be entirely compromised and you can still log into it without leaking any secrets.

Compare this to a secret password: if someone hijacks the SSH connection and you accept the host key (which everyone says yes to on the first connection), you give away your password, which an attacker can then use to get access to the real server.


Never assume good password hygiene. Always use public key auth, turn off password auth ASAP. And run ssh-audit against yourself often.


Well, I know to immediately block your IP if you try to use password authentication. Saves a bit of time dealing with the criminals. Plus, users sometimes use their work passwords at home for other services. Password managers aren't as much of a thing as I'd like.


fail2ban has become a lot less useful in recent years, as brute-force password attacks seem to have mostly moved from "1000 attempts from one single IP" to "one single attempt from each of 1000 IPs".

Besides, set up your keypair, set "PasswordAuthentication no", and you don't have to worry about brute-force password attacks anymore (if the "log noise" bothers you, as some people claim, you can filter those out).
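
For anyone following along, that whole setup is roughly a couple of commands and one config line (user and host names here are placeholders):

  # on your client: generate a keypair and install the public half on the server
  ssh-keygen -t ed25519
  ssh-copy-id user@server.example.com

  # on the server, in /etc/ssh/sshd_config, then reload sshd:
  #   PasswordAuthentication no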


Cert auth is the way to go. Here's a good blog post I found by searching: https://smallstep.com/blog/use-ssh-certificates/
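
The gist of certificate auth is that hosts trust a CA key instead of individual user keys, so adding or rotating users doesn't mean touching every authorized_keys file. A rough sketch (file names and the identity are made up):

  # create a CA keypair (keep ca_key somewhere safe/offline)
  ssh-keygen -t ed25519 -f ca_key

  # sign a user's public key, valid for 52 weeks
  ssh-keygen -s ca_key -I alice -n alice -V +52w alice_key.pub

  # on each server, trust the CA in /etc/ssh/sshd_config:
  #   TrustedUserCAKeys /etc/ssh/ca_key.pub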


Against a securely generated password in a password manager? No. The only real advantage I can think of is ergonomics: you can safely reuse a public key/certificate across multiple hosts, and even if one of them becomes compromised, your key isn't.


This isn’t true: because the secret never leaves your client machine, mitm attacks can’t be trivially leveraged to access other systems you have access to.


Isn't that what I said? How are you going to get access to other machines if each machine has a unique password?


Sorry, missed that: the problem with a password is that, if the session is MITMed, your password is compromised and the person running the attack can log in to the machine whenever they want. SSH keys prevent this attack: a MITM only grants access for the duration of the session.


This is very similar to port knocking, although more complicated. While a proper JWT is "more secure" than a sequence of integers in the 0-65535 range, I contend that having complicated and/or unvetted logic as your first line of defense is more problematic than secure.


On the positive side, it's probably pretty easy to encrypt the RPC traffic. On the negative side, you've traded the fairly well known risk of running SSH for some extra set of software with less well known vetting and its own possible security problems. This is basically relying on security through obscurity because of that.

SSH is almost definitely handled by your OS package management, so any problems with SSH will be patched immediately as found, and automatically distributed to you if you opt in to that (or you will be notified there are updates to apply, and you can do so). If this system is not packaged and distributed by your OS, you're going to have to keep up to date on any security problems with it yourself. It may have fewer permissions by default because it's running in a container, but that doesn't always help (kernel exploits are a thing; a lot of successful hacks start by getting the ability to run unprivileged arbitrary code, then use a kernel exploit to gain privileged access).

There's something to be said for well engineered, well vetted simplicity. This is an interesting solution, but I would much rather just lock down the set of IPs allowed to connect to SSH to a very small subset of known good ones, including the IP address of a $5 DigitalOcean VM that I keep off but can turn on, connect to, and use to jump to that box if I'm at a location I can't connect from directly (and you can set a timed firewalld rule to allow access from your current IP if it's not permanent, and shut off the VM again). Or use the free tier of AWS if you can get a specific IP for free as well (I don't recall what you can get in the free tier other than the smallest VM).
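
By a timed firewalld rule I mean something like this, assuming firewalld and a made-up source address; the rule expires on its own after the timeout:

  firewall-cmd --add-rich-rule='rule family="ipv4" source address="203.0.113.7" service name="ssh" accept' --timeout=3600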

Edit: Oh, and disable password auth. That's SSH hardening action #1, I kind of just assume everyone does that (because they should, if they can).


Yeah, I'd want to see a mention of port-knocking in the README, indicating that the author was familiar with this technique, and the known critiques of it, and why they think this method (or port-knocking in general?) withstands it. Cause it's definitely in the vicinity.

With no mention, it makes me wonder if the author is familiar with the history of this approach, which is not a good sign.

And yeah, I'm skeptical. Is this new, uncommon/unvetted, and somewhat complex "access" app just another place for potential vulnerabilities that give someone the keys to the kingdom?


> With no mention, it makes me wonder if the author is familiar with the history of this approach, which is not a good sign.

Seeing as the word "proactive" keeps reappearing in much the same way "decentralized" does for people promoting their blockchain app (invariably a web app hosted on Google Cloud or AWS, using Ethereum nodes from Infura, and administered by a single permissioned smart contract), yeah, it sounds like OP is excited they invented this new principle.


Technically they could be equally secure, if the port knocking converts the knocked ports into a signature-based authentication mechanism ;)

But at that point, you’ve potentially introduced exploitable bugs in the network stack.


Setting this up makes much less sense than setting up a tested VPN, such as WireGuard or OpenVPN, or even a persistent SSH tunnel using autossh to your home RPi.

I would never allow my prod systems to be potentially exposed by an api that runs as root. (And the documentation is incorrect on that; it should run as an unprivileged user with sudo privs to only run a wrapper script that runs firewall-cmd).

This also makes little sense in the context of configuration management, which should be enforcing a static set of iptables rules.


Personally I’d make a binary run as a non-root user with just CAP_NET_ADMIN and CAP_NET_RAW permissions to cut the scope down even more.
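
As a sketch (the binary path is made up), that's just:

  # grant the two capabilities on the binary itself; no setuid root needed
  sudo setcap cap_net_admin,cap_net_raw+ep /usr/local/bin/firewalld-rest
  # verify
  getcap /usr/local/bin/firewalld-rest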


Having not used setcap, would that also require the setuid bit to be set with ownership as root (like sudo)? And reside on a filesystem without the nosuid mount option?


> would that also require the setuid bit to be set with ownership as root (like sudo)?

Doesn't setuid just change to whichever user owns the file? So with setuid, root only needs to own it if you want full administrative privileges.

I also haven't played with setcap myself. My understanding is that capabilities don't rely on file ownership but instead on extended file attributes (which can't be changed by ordinary users on a correctly configured system). So root doesn't need to own the file, but of course granting any capabilities to a binary for which a non-root user has write permission seems like it would be a really bad idea in general.

> And reside on a filesystem without the nosuid mount option?

Yes, it appears the nosuid mount option disables file capabilities. (As one would hope!) But I'm not really seeing how that's an issue since (for example) bind mounts and btrfs subvolume mounts both allow changing the nosuid option.

(man 7 capabilities)

> 1. For all privileged operations, the kernel must check whether the thread has the required capability in its effective set.

> 2. The kernel must provide system calls allowing a thread's capability sets to be changed and retrieved.

> 3. The file system must support attaching capabilities to an executable file, so that a process gains those capabilities when the file is executed.

(https://wiki.archlinux.org/index.php/Capabilities)

> Capabilities are implemented on Linux using extended attributes (xattr(7)) in the security namespace.

(man 2 execve)

> The capabilities of the program file (see capabilities(7)) are also ignored if any of the above are true.


I like to run fail2ban in conjunction with a non-standard SSH port on which only public key auth is available.

This way, most of the junkware that does rude things to port 22 is banging on a closed door; the slightly more effective junkware that actually finds the SSH port gets banned immediately, because I know anyone trying to log in with a password is full of it.


Wouldn't it make more sense then to keep SSH on 22? Think of it like a honeypot. The more accessible you make it, the more bans you get. There was a guy on here saying that fail2ban banning SSH bruteforcers also reduced the number of HTTP bruteforcers, because they overlap.


Collecting bans isn't a good thing today with the scale of background noise malicious behavior. You will very quickly collect thousands of IP addresses doing this and need to implement ipset - an iptables plugin that allows O(log n) lookup time on a list of IP addresses.

Another issue: the overlap between SSH scanners and hosts also running HTTP/S attacks is negligible.

From experience, what makes sense is shifting your SSH port away from 22, disabling password-based authentication, whitelisting your IP address in your cloud provider's firewall, and still aggressively auto-banning incorrect logins with fail2ban.

Then, for good measure, implement a WAF to protect your HTTP/S traffic as well.

Do not turn your production system into a honeypot. Only do this with a separate system that contains no valuable data.


Why can't there be an O(1) iptable? Just a hashmap?


For IPv4 it can be done, but it would take at least 512 MiB of kernel memory (2^32 addresses at one bit each). A hashmap would be inefficient in that case; just use a bit array. For IPv6, however, you run out of memory with a bit array. Using a dynamically allocated hashmap in the kernel would help with the initial allocation, but then you can easily be DoSed into exhausting kernel memory by triggering lots of bans.


This is a noob question, but could you not set up fail2ban to ban bad IPs on both ports, even if you're not using SSH on one? I'm wondering if it's possible to close port 22 and set fail2ban rules to ban any IPs that try to SSH there, while changing your SSH port and also setting up fail2ban on that port, too. Does a closed port work like that?


You could have the firewall log connection attempts on port 22 and plug that into fail2ban.


fail2ban works by tailing authentication logs and matching regexes to find failed attempts and the IP address of the source. So if nothing is listening on the port, there won't be anything to spit out logs that fail2ban can read.

You could potentially create a fake daemon that listens on port 22 and just logs every access attempt, and set up fail2ban to block any IP that even opens a connection to the port, regardless of what they try to send over it.

Actually, I think iptables has a LOG target that can log connection attempts even without something listening, so that could work too and be much simpler.
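
A rough sketch of that last route (the log prefix and regex are made up):

  # log new connection attempts to the now-closed port 22
  iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j LOG --log-prefix "SSH-DECOY: "

  # then a fail2ban filter keyed on that prefix, roughly:
  #   failregex = SSH-DECOY: .* SRC=<HOST>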


Just make sure to exclude that port from service checks ;)


> anyone trying to log in with a password is full of it

Dang! But so true


I usually just disable password auth and update regularly, when I have an SSH server open to the internet.

Short of an 0day in the SSH service, I expect brute forcing the private key(s) to take longer than I have years to live.


There are bots out there that will attack from thousands if not tens of thousands of IPs... It's impossible to ban the individual IPs by hand, so I let them tell me who they are by leaving the SSH port open and having fail2ban ban, for a month, any IP that fails to authenticate.

This had an unintended effect... most of my automated HTTP(s) attacks stopped as well-- presumably because the IPs of compromised machines were already blocked.


That's interesting! Care to share your fail2ban config for this?


Sounds like just a vanilla sshd filter with bantime cranked up to 1 month instead of the default (and possibly maxretry=1 or something low like that)
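
i.e. something along these lines in jail.local (the exact values are guesses based on the description above):

  [sshd]
  enabled  = true
  maxretry = 1
  # 30 days, in seconds
  bantime  = 2592000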


But it takes CPU, fills up log files, etc.

fail2ban is the first thing I install. It's also easy to understand, which I love. It's basically regex on log files.


It really doesn't make that big a difference. I have single core, 515MiB VMs that idle at <1% utilization with background noise. Most distros come with logrotate anyways.


> Most distros

Most distros these days come with journald, and to the best of my awareness journald does not have per-unit (or better, per-unit-per-message-type) configurable retention.

Still in the TODO list today: https://github.com/systemd/systemd/blob/908dbc70d6abeb9f6562... (since at least 2016), with corresponding issue gathering no traction: https://github.com/systemd/systemd/issues/9519


Yes, but don't most of them also come with syslog forwarding enabled and a syslogd (rsyslog, syslog-ng, etc.) and logrotate installed?


What decade are your servers from? A failed key authentication doesn't even involve any cryptographic operation unless they also know the hash of your public key.


Raspberry Pi. It was taking considerable CPU with many attempts per second.


There is no CPU use. The hash compare of the key will fail, if I understand correctly.


The idea is better than port knocking, in the sense that you take active action to associate your host with the server, but there are a few issues:

* Something this simple shouldn't need k8s, unless it was intended as an exercise by the developer for that reason

* It combines the idea of using a non-standard port with certificate based authentication, which you can already do with SSH-- it's functionally the same with more steps

Hypothetically, this approach can be more powerful as a centralized service running elsewhere (i.e. cloud, your remote DC) and used for a whole bunch of jump boxes and end users. End users could run a "check-in" script wrapping SSH that notified the service that a user was imminent, and then checked the server to see 1) that the bearer token is accepted for the destination server, and 2) that the destination server checked in, saw the new incoming request, and has already opened the port -- and then proceeds to run the SSH command if all is well, or fails with an appropriate error.


Nope nope nope.

This has serious quality issues, and any competent and nice sysadmin should say _nope_ to running this in any serious environment, for your own good ;-)

IP addresses are just strings? At least parse/validate for IPv4/IPv6.

Why yet another database? Can't the rules be loaded from the running system?

Why not just ipset-persistent + knockd + portsentry? I know it is easy to get overexcited about a new pet project, but just be careful not to put this kind of stuff in a production system, kiddos.


Thank you for the feedback!


Anytime.

I saw you are already implementing some ideas from my inputs, that is great, please keep it going.

Eventually, on notorious systems, you may face issues from having too many /32 rules; a good idea is to implement some mechanism to defragment them into single/bigger contiguous CIDR blocks. There are other firewall implementations that can serve as inspiration.

OR

I recommend using ipset with a single iptables rule to match/block from the set. This way you don't need to reload stuff every time. (Just add the IPs into the set.)
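
For reference, that pattern looks roughly like this (the set name is arbitrary):

  # create the set once and reference it from a single rule
  ipset create banned hash:ip
  iptables -I INPUT -m set --match-set banned src -j DROP

  # banning is then just a set insertion, no rule reloads
  ipset add banned 198.51.100.23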

PS: I hope you don't mind my snarky comments, HN is fun.


I do not mind at all. I appreciate constructive criticism. Plus you gave me points to work on, I really appreciate that. So much to learn!


I don't like this approach very much because it's much more complicated than it would have to be. Security is about layers, and this is essentially one layer that acts as the sole guardian for sshd.

The way that I like to do this is to have a common 'entry point' for all my cloud systems. Instead of whitelisting IPs on every VPS or cluster I build, I just add them to the ACL on my management server. All the other systems only allow SSH connections in from the bastion server. In practice, it works like this:

* Add IP to the whitelist file in my change control

* Run the Terraform script to update the DigitalOcean ACL

* Start an SSH agent locally and add the key for the bastion, as well as the key for the destination

* Connect to the destination server by using ProxyJump

So, connecting to a box would always route through my bastion system first, like this:

    ssh -J mgmt.mydomain.net cool-app-server.mydomain.net
I've been doing this for a couple years, and it works great. I practically never see login attempts on my systems. And, since I use an ssh agent to forward the keys through the bastion without ever actually storing them there, a compromise of that system doesn't really give the attacker anything other than access to port 22 on a bunch of systems that they wouldn't know where to find. Only the most sophisticated attack (https://xkcd.com/538/) would lead to a real compromise.
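
You can also bake the jump into ~/.ssh/config so a plain 'ssh cool-app-server.mydomain.net' does the right thing (the host patterns here are just illustrative):

  Host *.mydomain.net !mgmt.mydomain.net
      ProxyJump mgmt.mydomain.net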


I love that XKCD, but this particular cartoon gives a false impression in order to be humorous. (You can't fault a comic artist for this of course.)

In my experience the #1 real attack vector is the shouty middle manager demanding that: "Everything be opened right now because some outsourced admin says he needed it".

It's nigh impossible to use technological measures to defeat a CxO or a department head.


I like this "proactive" solution :) Endlessh: an SSH Tarpit https://nullprogram.com/blog/2019/03/22/


> 6. Possible enhancements

> Rate limiting the number of requests that can be made to the application

So this just moves the brute forcing target from ssh to a web app. A lot of work for no added security.


If I have public key authentication set up for ssh, should I even bother with fail2ban, firewalld-rest, port knocking, etc.? There's no way anyone is brute forcing my ed25519 key, so what's the point? Sure security should be layered and all that but it seems like public key auth is so strong by itself anything on top seems unnecessary.


There's no point to this at all, it's just making services more fragile.

If "brute force" attacks are a concern, you are a fool that has password authentication turned on.

If you have password authentication turned off, why does it matter?


Given some of the other attacks on SSH over the years, isn't it safer to use fail2ban?


It's very easy for brute-force attacks from botnets to consume all your SSH process slots and then you are locked out.


Can confirm, had this problem the other day. CI started failing and after a short investigation I figured out sshd was just dropping my connections. Installed fail2ban first, but then just used a firewall since all valid IPs are known.


That ain't the truth.


> If I have public key authentication set up for ssh, should I even bother with fail2ban

If someone's bruteforcing their way in via ssh it's probably worthwhile blocking them before they start trying other services.


Might protect against some ludicrous attack that we're not considering. Limiting attempts seems to me to always be a good idea.


You are reducing attack vectors. How do you know your ssh implementation is secure?


How can you find out if you only get three tries a month?


If you do not need the more granular firewall configuration options there is also classic port knocking (https://en.wikipedia.org/wiki/Port_knocking) where the daemon sits behind the firewall so all ports can be closed by default.


A dynamic IP filtering list is still reactive because you haven't actually secured the box or its services. You've just made it slightly inconvenient to brute force. You might as well use port knocking, because even with a fancy schmancy authentication system (and the attack surface of a custom web app...) I can MITM your active connections just fine either way, or spoof your whitelisted IP.

I know everybody likes Fail2ban, but these two iptables rules (or something just like them) actually work better and don't fill up your logs:

  iptables -t mangle -A PREROUTING -p tcp --dport ssh -m conntrack --ctstate NEW -m recent --set
  iptables -t mangle -A PREROUTING -p tcp --dport ssh -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 4 -j DROP
To actually protect the network layer, use some kind of VPN with MFA.


This is interesting in that it combines the concept of port knocking with a REST interface, which I'm assuming is up to the user to create a front end for.

Unfortunately it also relies on Kubernetes, which means that using it for a single system isn't practical. At least, not for this server owner.

My own approach is simply security by obscurity (a non standard port) with APF/BFD doing the needful for locking bots out if they figure out the port. I've had to change ports only once in 6 years, so it's working to keep bots out rather nicely.

And really that's all these things are- a way to keep bots out. A determined attacker will figure this stuff out anyway.


Secure SSH and you won't have to worry about rogue login attempts, which happen to everyone. If it really bothers you then move it to another port where it will happen less.

But install a new firewall management system? Sounds like it will definitely introduce more risks than the problems it solves, which for the SSH example isn't really a problem at all.


Have you tried knockd https://linux.die.net/man/1/knockd ? You send a special sequence of "knocks" to the server (packets to different ports) and it executes a command such as allowing your IP for a time period. No JWTs.
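
A minimal knockd.conf, along the lines of the example in the man page (ports and timeout are arbitrary):

  [openSSH]
      sequence    = 7000,8000,9000
      seq_timeout = 5
      tcpflags    = syn
      command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT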


My go-to is fwknop (https://www.cipherdyne.org/fwknop/), which is actually similar to the OP's thing but battle tested. Only downside is it's not available on iOS, so recently I set up WireGuard for my iPad.

And yes, ssh pubkey + fwknop + WireGuard + fail2ban is ridiculous overkill, but hey, it's my homelab server. That's how I learn this stuff.


This is very nice and renders the OP useless. The OP itself is another attack vector. Do you know what syscall API knockd uses to listen on the link layer?


If you're gonna go through all of this work -- including creating and maintaining private keys -- why not just restrict the SSH server to only permit key-based authentication (optionally, signed by your CA)?

If having 22/TCP open to the world is an issue, then set up Wireguard on the host and only allow SSH connections that are coming in over the Wireguard interface.

Got a bunch of machines to deal with? Set up a jumpbox or two running OpenBSD, lock it down, give your users access to it via SSH (optionally, over a Wireguard connection) and then only allow SSH access to all of those other hosts from the jumpbox(es).

Then there's the fact that I have a whole lot more trust in the security of OpenSSH than I do some random web application!

To me, this just seems kinda pointless -- there's a bunch of other, better (IMO) ways to deal with this -- but I guess if it fits your needs ...


Step 1: assume every service listening on your network has an auth exploit that has not been found.

Step 2: deploy this.

Step 3: ...

Step 4: profit!

The line between something like this and locking the service behind a VPN is very narrow. They both achieve the same thing: protecting high risk/value targets by not even allowing unauthorized people to talk to them at the network level.


> you can go to jwt.io and generate a valid JWT using RS256 algorithm (the payload doesn't matter). You will be using that JWT to make calls to the REST application, so keep the JWT safe.

The JWT you got after you plugged the private key into a random website is going to protect access to your machine?


Adding a JWT authenticated API layer to something is not a first choice for adding additional security.

If you want something like this look into platform level firewalls (ex: AWS security groups) or run spiped in front of your SSH server. I’d trust that a hell of a lot more than a REST API.


Requiring ssh keys, disabling password auth, and using ssh bastion hosts is my preferred approach.


Yeah, it's not clear to me why exposing the ability to configure your firewall, protected by a keypair, to the internet is safer than just letting someone bang their head against sshd, which is protected by a keypair and has presumably had a lot more eyes on it for security.


And the distributions can also do the SELinux stuff that I don't use day to day.


This looks like a reincarnation of the "Lock & Key" feature of Cisco routers, available since the late 90s. There were two major issues that led to its abandonment and hindered adoption. The first is that it's an extra step: extra installation, extra complexity, an extra single point of failure, and a dependence on the availability of the key service. The second is that instead of thwarting attackers, you're thwarting yourself, every single time. It breaks so many use cases, for example if you have a new machine, if you want access through a jump host, or from an ssh client on a phone, etc.


On EC2, I've had a great experience using AWS Systems Manager [1]. I don't need any ports open, and it works great with normal shell tools and emacs given a .ssh/config like this:

  host i-* mi-*
      ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

[1] https://docs.aws.amazon.com/systems-manager/latest/userguide...


Documentation and idea framing is too snowflaked.

This is a JWT-based port-knocking-over-HTTPS framework that would be useful when used in a much broader sense.

Framing it as a proactive fail2ban is technically correct, but it also masks the other, more powerful use cases.

I could see this in use as a vpn bypass for a prosumer production system, where normal operational commands go over a secured vpn but in a pinch, you can disable that restriction for direct control during a partial failure.


A proactive way of increasing your attack surface no doubt.


A proactive way to shoot yourself in the foot for sure.


Please don't use this software. From a security perspective this is horrible.

Option A) Expose all firewall rules via some hacky web-based API.

Option B) https://nvlpubs.nist.gov/nistpubs/ir/2015/NIST.IR.7966.pdf


If you're going to take this approach, a bigger question is why bother running the SSH jumpbox 24/7 in the first place? Shut down the SSH jumpbox when you don't need it (thus achieving SSH isolation) and start it up with your public IP in user-data to enable access to you and only you when you do need it.


This sounds like a complex port knocking setup with bigger overhead. If you want better security and UDP is not a problem, consider using a WireGuard VPN. It is passive and silent; random attackers won't even know it is there.
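
For anyone who hasn't tried it, the server side is only a handful of lines of config (addresses and keys below are placeholders):

  # /etc/wireguard/wg0.conf (server)
  [Interface]
  Address    = 10.0.0.1/24
  ListenPort = 51820
  PrivateKey = <server private key>

  [Peer]
  PublicKey  = <client public key>
  AllowedIPs = 10.0.0.2/32

Bring it up with wg-quick up wg0; since WireGuard doesn't respond to unauthenticated packets, a scanner can't even tell the port is there.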


Running SSH on a non-standard port gets rid of the brute force login attempts, too.


In my experience, a non-standard port reduces the number of brute force attempts, but it doesn't eliminate them. I just do a simple rate limit on syn packets to the port and use SSH keys for auth, and that feels like plenty of security.


(1) The auth interface is per IP address, as visible by the listening server. (2) The word "NAT" is found 0 times in the text. (3) There is no point 3 from the practical standpoint, please move along :(


Much simpler to stick with SSH and an IP Address whitelist...

Then you can't even get login attempts, unless they're from the same IP... am I missing something?


I still think that running ssh over spiped is more secure. Every byte being passed to sshd is cryptographically verified (and encrypted).
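
Roughly, spiped decrypts on a public port and forwards to sshd bound to localhost, and the client wraps its end the same way (ports and key path below are made up):

  # server: accept encrypted traffic on 8022, forward to the local sshd
  spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/ssh.key

  # client: expose a local port that tunnels to the server
  spiped -e -s '[127.0.0.1]:8022' -t 'server.example.com:8022' -k ssh.key
  ssh -p 8022 user@127.0.0.1

Both ends share the same pre-generated secret key, so anything that can't authenticate against that key never reaches sshd at all.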


Can anyone explain what's wrong with "pubkey only, ignore attempts" method?


I just whitelist IPs. You can do this easily using iptables or sshd config.
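
e.g. with iptables (addresses are placeholders):

  iptables -A INPUT -p tcp --dport 22 -s 203.0.113.7 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP

or the sshd_config equivalent, AllowUsers myuser@203.0.113.7, which rejects everyone else even if they present a valid key.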


Why is SSH open to the public internet at all, ever? Use WireGuard as VPN tunnel to your servers network, then SSH via internal IP.


> Why is SSH open to the public internet at all, ever?

Wasn't SSH made for that?


Yeah, I trust the OpenSSH developers a lot more than almost anyone else I can think of: if I’m going to trust any services open to the public internet, I’ll trust OpenSSH


Sometimes you literally just have a server on the internet, and there's not a network to connect to with wireguard. You could connect directly to the server with wireguard, but I actually trust OpenSSH more than VPN software built into the kernel (not to mention I can update it easily without rebooting).


Why not avoid VPN and fully embrace the end-to-end ideology – only authenticate against services, never against networks. Abolish private networks! :P


Wow - why didn't I think of just setting up a firewall instead of running fail2ban. Great idea.


eh, fail2ban uses iptables to ban?


Fail2Ban can execute scripts etc to do things. I have mine making API calls to OPNsense to block IPs there.


Fail2ban uses whatever one wants, to do whatever one needs, based on log pattern matching :-)



