Serious question: how much more maintenance is required? Could I get away with unattended-upgrades and nginx+wsgi+PostgreSQL?
I ask because actual servers seem like dark magic to me so I want to try to build a product with them, but I can't find anywhere if it's possible to run a reasonably secure server without years of studying.
If you're serving static content, installing Apache, nginx, or any other web server will do just fine. Make sure to set the document root to a directory you're fine being public.
If you're running something dynamic like WordPress, stay extremely on top of patches, unfortunately, and be super cautious about what plugins you use. (This is one of the better reasons to use a static website.)
If you want to run a Postgres for your dynamic website, configure it to listen only to localhost or only via UNIX sockets.
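For example (a sketch, assuming a Debian-style layout; paths differ by distro):

    # /etc/postgresql/*/main/postgresql.conf
    listen_addresses = 'localhost'    # or '' to disable TCP entirely and use UNIX sockets only

    # /etc/postgresql/*/main/pg_hba.conf - local socket connections, peer auth
    local   all   all   peer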
Make sure you keep your software up-to-date. unattended-upgrades is a great idea for OS-provided software.
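On Debian/Ubuntu that's roughly (a sketch; package names differ on other distros):

    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades
    # which writes /etc/apt/apt.conf.d/20auto-upgrades:
    #   APT::Periodic::Update-Package-Lists "1";
    #   APT::Periodic::Unattended-Upgrade "1";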
Be careful about where you get software from. More than just "get it from somewhere trustworthy," the big concern here is to get it from someone who is applying software updates. For most OS-ish things, you want to get them from your distro; try to avoid downloading e.g. PHP from some random website, because you won't get automatic updates. For a few things - especially things like WordPress - I wouldn't trust the distro to keep up, largely because the common practice is to release security fixes by releasing new versions, and distros are going to want to backport the fixes, which is slower and not always guaranteed to work.
As another commenter mentioned, turn off remote password logins and set up SSH keys. (Most VPS providers will have some form of console / emergency access if you lose access to your SSH keys.)
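In sshd terms that's roughly this (a sketch; keep a second session open until you've confirmed you can still log in):

    # copy your public key to the server ('you' and 'your-server' are placeholders)
    ssh-copy-id you@your-server

    # then in /etc/ssh/sshd_config:
    PasswordAuthentication no
    PermitRootLogin prohibit-password

    # reload sshd
    sudo systemctl reload ssh    # the service is 'sshd' on some distros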
I run my sites, all static, on a VPS, but I do the authoring in a single multi-site WordPress install and use the 'Simply Static' plugin to publish the result. The benefits are pretty awesome:
heaps of templates (because I'm often lazy), a one-stop shop for patches, locked-down plugins (child sites can't install plugins, only enable/disable them), and only one place to look for problems (and you can lock the WordPress site to a single IP if you always use it from a single place).
FWIW, I never grokked AWS, and its ping times in my country are about half as good as local providers' (15-30ms for local vs 50-100ms for the nearest AWS region). Speed matters.
Also, my use case is allowed to 'fall over' (meaning: fail/stop working/be unresponsive) under DDoS, whereas I know many here are 'must not fail' (with varying levels of acceptability). So, I write concise, low-bandwidth websites that appear instantly (to my local market users).
Thank you for the advice. I've tried out passwordless login and found it more convenient, so that's not a problem. I'd want to be deploying a Python app I wrote myself, and some static files.
It’s not that bad. A day or two initially, then an hour or two every six months, depending how much work you put into automating it. It’s definitely a good way to learn.
Write everything down! Every command you type. You don't want to come back in six months' time and have to relearn what you did the first time.
If you’re feeling ambitious you can script almost the entire deployment from provisioning a machine through to rsyncing the content. It’s pretty fun to run a bash script or two and see an entire server pop up.
As a former sysadmin, this is still a big pain in the ass. One Terraform file keeps my S3 + CloudFront sites configured; I run it once a month to make sure the Let's Encrypt certs are rolled, and done.
Having maintained enough servers for a lifetime, I'd rather be coding!
Thanks for the advice! I was stuck thinking I'd have to learn something like Ansible to automate deployments; bash scripts are a great idea.
I have Linux on my laptop and I've been trying to document what I configure with heavily commented bash code, but I've run into issues with editing config files. I frequently want to say something like "set this variable to this value", but sed feels too fragile and easy to mess up silently, replacing the entire file quietly breaks if other entries in the config change in an update, and appending to the file so the last entry overrides feels hacky and doesn't always work.
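The closest thing I've found to a clean answer so far is drop-in override files, where the software supports them. A sketch (assumes a recent OpenSSH whose default sshd_config includes sshd_config.d/*.conf near the top, so the first value wins):

    # leave the packaged /etc/ssh/sshd_config alone; overrides go in a separate file
    cat > /etc/ssh/sshd_config.d/99-local.conf <<'EOF'
    PasswordAuthentication no
    EOF

    # systemd units work the same way: 'systemctl edit nginx' writes an
    # override.conf under /etc/systemd/system/nginx.service.d/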
Managed cloud products seem like dark magic to me. A VPS or EC2 VM is just like the computer I'm using right now. There's no magic. If something goes wrong, I can fix it as if it were on my local machine, since it's often literally the same kernel version, same architecture, same shared libraries, same software from the same package manager. Performance tests on the local machine very closely predict performance on the server. On a serverless cloud product, to fix something deep, the tools at my disposal are a maze of buttons on a web GUI or a CLI that sends the same opaque API calls the web console does.
Do not fear running your own server. There is no such thing as perfect security, and the cloud isn't inherently secure either. Many of the infamous data leaks you've heard about in recent years occurred on cloud-hosted systems. Ultimately, if security is a concern, you need someone who understands security, regardless of where it's hosted.
What do you mean? I asked because on HN I constantly hear that running my own server is better and cheaper, and also that running a server is really hard if you didn't grow up memorizing binders of man pages.
I felt the same way last year, before I had ever deployed a server publicly. It's really not that bad for small things. I run nginx and some Docker containers and proxy to those containers for certain subdomains. Now that I know how to do it, I moved from AWS to DO and the new setup took probably 20-30 minutes, including Let's Encrypt.
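The per-subdomain proxy part is just a few lines per site (a sketch; the names and port are made up):

    # /etc/nginx/sites-available/app.example.com
    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;   # the container's published port
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }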
You can change the ssh port and use an ssh key instead of a password. Don't worry about a firewall or fail2ban. That's about all. Also run everything from root.
Repeat the above steps once your VPS provider goes out of business (as someone else also pointed out).
> you can change the ssh port and use a ssh key instead of a password.
I'd advise against changing the ssh port - I don't think the (small) inconvenience is worth the (tiny) benefit to obscurity.
I would always recommend turning off password authentication for ssh, though.
(along with disabling direct root login via ssh, but root-with-key-only is now the default - and if you already enforce key based login, it's a bit hard to come up with a real-world scenario where requiring su/sudo is much help for such a simple setup).
I would probably amend your list to include unattended-upgrades (regular, automated security-related updates - but I guess that's starting to be standard, now?).
You will probably need an ssl cert, possibly from let's-encrypt.
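With certbot that's typically a one-liner (a sketch; the domain is a placeholder), and the package sets up automatic renewal:

    sudo apt install certbot python3-certbot-nginx
    sudo certbot --nginx -d example.com -d www.example.com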
At that point, with only sshd and nginx listening to the network - avenues of compromise would be a kernel exploit (rare), an sshd exploit (rare) or an nginx exploit (rare) - compromise via apt or Let's Encrypt (should also be unlikely).
Now, if the site is dynamic, there's likely to be a few bugs in the application, and some kind of compromise seems more likely.
Anecdotally, changing the ssh port on a very low-budget VPS is worth the effort because the CPU time eaten by responding to the ssh bots can be noticeable.
This has been my experience as well. I remember having a VPS with DigitalOcean a long time ago, and it was getting hammered badly with bots. Changing the port, switching to pubkey-only authentication, and installing fail2ban for future pesky bots did the trick for me.
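The fail2ban part is just a few lines in /etc/fail2ban/jail.local (a sketch; tune the numbers to taste):

    [sshd]
    enabled = true
    maxretry = 5
    # seconds
    findtime = 600
    bantime = 3600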
To be honest, I don't think the people controlling those bots want to deal with those of us who make it harder for them to gain access. Instead, why not happily hammer away at everyone else's port 22 with the bare-minimum configuration? Those who enhance the security were never the targeted audience to begin with.
> Those who enhance the security were never the targeted audience to begin with.
This is pretty insightful. Statistically, attackers are probably mostly looking for badly configured machines which are easy to exploit rather than hardened systems that take a long time to penetrate.
State actors and obsessed attackers are different, of course. But statistically, even taking the simplest precautions keeps one out of reach of the broad majority of such attacks.
I'm more familiar with AWS. There I just firewall SSH to just my IP (with a script to change it for the laptop case, or use mosh), and thus spend no CPU time responding to ssh bots.
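The "change it for the laptop case" script is basically one lookup plus one CLI call (a sketch; the group id is a placeholder, and cleaning up the old rule is left out):

    #!/usr/bin/env bash
    set -euo pipefail
    MYIP="$(curl -s https://checkip.amazonaws.com)"
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 \
        --cidr "${MYIP}/32"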
Do VPS providers offer some sort of similar firewall service outside your instance?
I don't think low-budget VPS providers typically allow this. That said, fail2ban works OK, as does manual iptables (now nftables) - unfortunately /etc/hosts.allow is deprecated[1].
If you don't know that you'll be able to arrive from an IP or subnet - another option would be port knocking. (eg: knockd). Although, I'd try to avoid adding more code and logic to the mix - that goes for both fail2ban and knockd.
[1] ed: Note, the rationale for this is sound: the firewall (pf or nftables) is very good at filtering on IP - so better avoid introducing another layer of software that does the same thing.
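For reference, the nftables version of "only my subnet can reach sshd" is short (a sketch; the subnet is a placeholder, IPv6 and ICMP are left out, and you'd put this in /etc/nftables.conf so it survives reboots):

    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept
            iif lo accept
            ip saddr 203.0.113.0/24 tcp dport 22 accept
            tcp dport { 80, 443 } accept
        }
    }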
I'm inexperienced, but relatively confident that if I use an off-the-shelf login module to protect everything but the login page, the handful (literally) of users with credentials are internal to the organization and trusted with the underlying data anyway, and the data itself is essentially worthless to outsiders, then I'm pretty safe.
My thinking is that even if I, for example, fail to sanitize inputs to the database or inputs displayed to other users, that won't lead to an exploit absent a bug in the off-the-shelf login module or someone attacking their colleagues (in which case there are other, weaker links).
The organization I'm building this for has other moderately sensitive systems on an internal network, but the server I'll be managing will be on the public internet. The site I'm building will export CSV files to be opened with Excel, so I suppose if it were compromised it could be used to get an exploit onto a computer in the network. Still, I presume that if they're facing that kind of attack they'll have plenty of other weak links, like documents spearphished to people, and I'm pretty sure the sensitive systems are on a separate internal network.
But I also think I would trust e.g. Apache/nginx basic auth more than login/session handling at the application level (php/ruby/... with users in a db).
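The nginx version of that is two directives plus an htpasswd file (a sketch; htpasswd comes from apache2-utils on Debian/Ubuntu):

    # create the credentials file
    sudo htpasswd -c /etc/nginx/.htpasswd someuser

    # then inside the relevant server/location block:
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;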
Assume at least one user has a dictionary password, and suddenly you'll want to enforce 2fa via otp or similar - for peace of mind.
As a general rule, I tend to assume a targeted attack will succeed (no reason to make that too easy, though) - what I aim to avoid are the bots.
They'll likely be brute forcing passwords, blindly trying sql injection - along with a few off the shelf exploits for various popular applications (eg: php forum software).