Ask HN: Tips to get started on my own server
56 points by ctxc 5 months ago | 107 comments
What I want to do: get hands-on experience hosting and maintaining a Linux server (perf, sec, etc.). I love the abstraction cloud services provide to let me build stuff without having to understand the nitty-gritty - but I think the knowledge will help. The plan: rent a server in the cloud for about $10/mo if possible, and build an application for personal use with everything residing on the server. Nothing critical.

-----

Why: I've always wanted to. I read some HN posts this week that have inspired me to finally make the leap. For example:

- I read this and realised I don't know what any of these commands are (I've always used Windows), and it's time to start. https://www.brendangregg.com/blog/2024-03-24/linux-crisis-tools.html

- The SingleFile post. I already built one with Supabase that I use now, but I want to do one with the CLI, my own DB etc.

-----

Where I'm at: I have quite a bit of experience building both FE and BE for applications, mostly utilizing cloud services (serverless, hosted DBs, etc). I've also hosted a few applications locally for personal use but not open to the internet, like Postgres and Nginx (all on Windows). Some DevOps experience. I'm serious about security but have no hands-on experience with networking; I want to actually understand and reduce the attack surface and so on without just flipping a switch.

-----

What I'm asking for: Any suggestions on how to go about it, resources, links, advice - please feel free to share your experiences as well. Thanks!




Some tips:

1. DigitalOcean offers small VPSes for $5 per month. That's a 50% saving right away!

2. Stick with Ubuntu in the beginning. It's not the best, but it's 100% good enough and has so much support and tutorials out there.

3. If you have a small VPS with not much RAM, definitely set up a swapfile. It acts as overflow "virtual RAM" for doing RAM-heavy things on a small VPS (a minimal sketch is at the end of this list).

4. Use the virtual firewalls offered by your host rather than the server firewalls in the beginning. If you mess up a server firewall you may have to get your host to reset it for you. If you mess up a virtual firewall you can amend it through a web UI and get back to doing things quickly.

5. Learn to read man pages and log files. Between the two you can figure out how to do stuff, and then figure out why it isn't working correctly.

6. In terms of security: use a recent distro, use a firewall to close everything you don't need, use SSH keys, set up secure passwords for everything else, and you will avoid a lot of problems.

7. Keep an eye on resources; programs like top, uptime, free, df, and du will let you see what's using up CPU, RAM, or disk space.

8. Learn a relational database. MySQL or Postgres are good choices. This skill will keep you employed for years, almost every business uses a relational database in one way or another!

9. Have fun :)
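
For point 3, a minimal sketch, assuming Ubuntu/Debian and a 1 GB size (adjust to taste):

# create and enable a 1 GB swapfile
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# verify
swapon --show
free -h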


Thanks! Especially for the swapfile, hadn't heard of that one before. And your comment history makes me want to read up about ansible as well :D


Ubuntu Server is superb! You can install it with one click in AWS Lightsail, configured securely with or without apps, with a public IP.


I will add 6.1: use containers and keep your VPS clean.
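
For example, a rough sketch (nginx:alpine here is just a stand-in image for whatever you actually want to run):

# run a service in a container so nothing gets installed on the host itself
docker run -d --name web -p 8080:80 nginx:alpine
# check it, then clean up
curl -I http://localhost:8080
docker rm -f web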

P.S. As an alternative to Ubuntu you could check out Alpine Linux (for both the root VPS and containers).

Good luck!


Curious why you say Ubuntu is not the best. What would you consider better?


I think it's a question of how sharp the tool is, and how 'idiot-proof' it's been made. Ubuntu is as idiot-proof as Linuxes go, but the more knowledgeable you get the more you find use for a sharper tool that fits your domain requirements better.

Debian is molded to the server domain pretty closely; Red Hat / Rocky is ideal in regulatory environments; Alpine when the priority is to be lightweight and reliable; I run Void on my personal machines because it's the most BSD of the Linuxes and stays out of my way when I need to do weird stuff.

If you're a beginner, you can append 'ubuntu' to your google searches instead of 'linux' and get an answer that makes sense. You'll know when to move on from ubuntu when you have to search 'foo bar linux' to get a usable answer.


I tend to append archlinux to google searches to get quality answers from ArchLinux Wiki.


Heh, fair. I should see if that does any better.


Personally I think Debian is the best server OS to learn. It's sufficiently similar to Ubuntu that you can get away with either one, really; they aren't very different, as Ubuntu is based on Debian.


I'd recommend Debian as the uncontroversial Linux server distro.

But for someone who might have some Ubuntu experience and little other Linux experience, running an Ubuntu server might lead to fewer surprises. Ubuntus do ship with things that Debians don't have. I can't name any, since I don't use Ubuntu, but they're the things about which you either say "how nice!" or "how dull!"


Not OP, but Ubuntu is among the best documented and supported, which is critical for beginners.

What may not be good about Ubuntu for some is that it's rather bloated, and it's not difficult enough to learn for someone who wants to hang their hat on that.

Moving from Ubuntu to another Debian-based installation, including Debian itself, is no big deal.


Nix?


My two primary tips:

1) Keep notes as you set it up. Include problems you encountered and what the solution was, develop lists of how to install and configure the various services so you can refer to them should you need to reinstall things in the future.

2) Do it one service at a time and get each service completely running before starting on the next. Your server isn't a single thing, it's a home for multiple things. Do each of those one at a time.


On the off chance you don't keep notes, I've found reconstructing what happened on the server from its history to be invaluable.

I once joined a company after they ran out of backend engineers and had to restore a server from a backup. Figuring out how to bring all the services up based on history was a fun ride.
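
If you ever have to do that, something along these lines is a reasonable starting point (paths are the usual defaults, adjust as needed):

# what was run, and when (if timestamps were recorded)
export HISTTIMEFORMAT='%F %T '
history | tail -n 50
# other users' histories and recent logins
sudo cat /root/.bash_history
last -n 20
# what's actually running and listening right now
systemctl list-units --type=service --state=running
sudo ss -tlnp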


Troubleshooting problems caused by someone else this way is also fun... "I see, they edited the nginx config ... let's check out what they did."


It may not be appropriate for a complete beginner such as the OP, but ideally setups should be documented by using a configuration management tool like Ansible, Puppet, Terraform, Docker, etc. That way the setup is easily reproducible. The best notes are executable notes.


Yeah I think I'll try Ansible out. Maybe play around a bit with the bare instance and redo it with Ansible.


> Keep notes as you set it up.

This. I still have a 10 year old server setup file that I refer to whenever I am setting up a new server.


I'd recommend a note taking app, something like Obsidian, and create a page for each new service you set up. Copy the commands and add notes to this file.

Later, when you upgrade or modify the service, you can keep adding to these notes.

Also, copy any configuration files OFF the server and into your notes when you are done editing them.
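
For example, pulling a config back into your notes folder might look like this (hostname and paths are placeholders):

# copy an edited config from the server into your local notes
scp myserver:/etc/nginx/nginx.conf ~/notes/operations/nginx/
# or grab a whole config directory
rsync -av myserver:/etc/nginx/ ~/notes/operations/nginx/etc-nginx/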


I use obsidian, but brought my old hierarchical folder structure mentality into it. I now think I just should've created single files and let the auto-linking take care of the hierarchy.

What do you think?


I have a folder called operations/recipes and I put separate files in that folder. I have files like nginx, supervisor, etc. etc.


Thanks, I try to do it with my normal dev process as well. "preferably one unknown at a time" I call it, although I always end up wishing I had more notes. At least my blog would have more than 5 entries if I had half decent notes.

But never too late eh :P


One quick (but possibly retro) suggestion is to just host something small on your home network. Could be a raspberry pi. Getting the os and network to the point where you can host a simple website - perhaps with the aid of a dynamic DNS provider - could be a huge step.

What if you set a goal for yourself - that whenever someone clicks the user page for ctxc on HN, it contains a link to a web page on this server you have built.

Anything further (in terms of hardware, not-self hosting, etc) is incremental.


Completely agree. Use a raspberry pi or just run a VM on your desktop. Do your initial learning on something contained in your home network before you experiment (and potentially get compromised) on the internet.


I wouldn't suggest hosting publicly open services from your own local network. Without proper maintenance and experience, this is asking for trouble imo. VPSes are so cheap these days (far cheaper than $10 if you don't need much performance), I'd recommend leaving the local network out of the question.

When searching for cheap VPS, I usually peruse https://www.serverhunter.com/


I think it's a useful exercise in simple port forwarding, proxying (via nginx, for example), and certificate management. So long as you're not serving port 80 beyond redirecting to 443 and those are the only two ports being forwarded to the pi, it's really not that risky.
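
With certbot's nginx plugin, the certificate and the 80-to-443 redirect are a couple of commands (assuming Ubuntu/Debian; the domain is a placeholder):

# get a certificate and let certbot rewrite the nginx config,
# including the plain-HTTP to HTTPS redirect
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com --redirect
# renewals run automatically; this just checks they work
sudo certbot renew --dry-run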


> host something small on your home network

I came to suggest the same thing. Another comment raised a good point about rpi vs a tiny x86—ARM may introduce some extra hurdles.

At the end of the day, especially when dealing with network configuration and security, you're going to make mistakes and get locked out. Having physical keyboard access can be a lifesaver and is how many of us "retro" people got started with linux administration


It might be "retro" in the sense that it isn't the primary way most businesses host their services, but I still don't think there is any better way to actually learn the ins and outs of networking and server management early on than to have non-obfuscated control and responsibility for everything you're doing.


I'm hosting my blog and my private Ghidra server on my Synology NAS. The NAS and the ISP router are already always-on, so it "only" costs me ~20€/year for a domain name.

Even accounting for the electricity and network I'm going to use anyways, it's quite cheap.


I started with Linux by paying 5$/month for a Linode server, and I’ve never regretted it. I’m all for setting up your own hardware as well at some point, but it’s a different experience from having a server with 100% uptime to play with.


Thanks for the great suggestion! I think I'll start with the hosted server and then move to this.

I've been burnt too many times (by myself), buying domain names and arduinos and cool things I never got around to using .-.


I've worked in managed webhosting quite a bit. I would really recommend starting with the cheapest VPS you can get, something like a Vultr VPS, unless you're already familiar with DNS.

If you are familiar with DNS, I would actually recommend using a cheap SBC and then utilizing Cloudflare tunnels for DNS.

I've written about hosting at home a few times. While the posts are a bit out of date now, they may be helpful:

https://absurd.wtf/posts.cgi?post=posts/2020-05-04_setup-wor...

https://absurd.wtf/posts.cgi?post=posts/2022-03-12_Setting-u...

https://absurd.wtf/posts.cgi?post=posts/2021-10-04_scalable-...

https://absurd.wtf/posts.cgi?post=posts/2022-11-05_vaultward...

https://absurd.wtf/posts.cgi?post=posts/2022-04-14_hosting-a...


wtf, those are absurd. But I'll read em anyway!

Vultr is the way I'm planning to go, with the sive.rs tech independence list.


Here's a very cheap way to get started:

I'm going to assume you have a Windows computer that you can leave on 24/7.

Go get Virtual Box (an open-source VM application) https://www.virtualbox.org/ and download a .iso from a distribution of your choice. (Maybe even try a few different distributions.)

Install Linux on the VM. Whenever you set up a service, open up a port for it on your router. Use Dyndns https://account.dyn.com/ or No-ip https://www.noip.com/ to set up a domain.

The advantages of the above approach are that it requires little (or no) money to start and allows you to try a lot of different things. The nice thing about VMs is that you can make a few of them, and back them up before you make changes, so it's easy to make mistakes and go back.

If / when you're ready to spend some money, you can either move to a physical computer or a VM hosted somewhere. Just hold off on doing this until after you've made a few mistakes.


This would be a good opportunity to test out BSDs or Illumos variants, as well.


Hadn't heard of dyn dns, thanks for that! Will check it out.


Derek Sivers — Tech Independence

https://sive.rs/ti

Also what's kinda cool is he encourages e-mailing him / reaching out. He's pretty great.


This is exactly what I wanted. I think I'm going to follow these steps and then branch out and experiment. Thanks a ton!


For sure !! ^-^ Happy to be able to help.

Definitely e-mail him if you have questions —

think he really enjoys the Hacker News crowd & is always game to be of service


If you use a Mac and just want to mess around with Linux, try something like Orbstack (https://orbstack.dev/) to start up VMs and mess around. The benefit of this is that you can break things a bunch as you get started. Going from there, I'd start looking at automating the deployment of the various components the 'old fashioned' way, aka writing shell scripts/using SSH. Once you do that, then move on to using things like Ansible or Terraform etc.


Windows, but yup - I'll start with the raw stuff and move to Ansible.

Till a while back, ssh sounded scary though xD


Sounds like the easiest thing to do would be install Linux to a VM locally and configure the same services you've hosted in Windows, but now in Linux. Test it out from your client to make sure everything that worked on Windows is also correct on the new setup.


> "I read this and realised I don't know what any of these commands are (I've always used Windows), and it's time to start."

I'm a great admirer of Brendan Gregg but I have to say that 1) that's trying to run before you can walk; those tools require fairly advanced Linux knowledge to operate properly and you say that you've only used Windows, and 2) if you're primarily interested in the performance analysis aspect, analogous performance measurement tools exist in Windows, like Windows' built in performance counters and PerfMon.

If you're unfamiliar with Linux and want to get to a point where you understand the tools Brendan Gregg mentioned, I'll say something similar to what others have said: get an inexpensive used 1-liter PC¹ at home, load Ubuntu or some other popular Linux distribution, and start with Nemeth's "UNIX and Linux System Administration Handbook" to learn your way around the system. That will give you the context you need to be able to start to understand how performance analysis tools are used. After that, you can move to the cloud whenever you feel you're ready.

¹ The description at https://www.servethehome.com/introducing-project-tinyminimic... explains what they are and why they are a good value. I wouldn't recommend the Raspberry Pi because not all diagnostic tools are available on ARM and a PC is no more difficult to re-image if you make a mistake than a Raspberry Pi.


That was a good article! I saw the handbook recommendation on another HN post as well, will take a look. Thanks!


My experience is I did a systems administration internship and I had to play with all the services; it taught me a ton about tools and layers below where I usually operate (Go APIs and backend services). If you're just wanting to get skills in system administration and Linux, I'd start with either 1) an old machine you have lying around (or pluck one from a recycling center) or 2) a virtual machine, and 3) once you have confidence on a single box, start to use multiple to make clusters/replica sets of things like multiple nodes of an HTTP server, or multiple nodes of a mongodb, etcd or other distributed systems. Try doing work on them and pulling the plug on one mid-execution to see what happens.

From there try to play with a bunch of the following things: (OTOH, no particular order)

- set up a domain name server like bind (<something>.localhost?)

- set up an SMTP service

- set up an IMAP/POP3 service

- set up an HTTP server like nginx, lighttpd, apache

- try communicating with the HTTP service over telnet (see the sketch at the end of this comment)

- set up an HTTPS certificate with Let's Encrypt

- try the same telnet-style session against HTTPS, piped through gnutls

- set up nightly backups to an external HD (try rsync)

- set up a spam filter for your SMTP server

- try to run excalidraw on your HTTP service

- build an API that sits behind your HTTP(S) server

- use tcpdump and wireshark to inspect network traffic

Most of that should be google-able :)

Edit: MOAR

fail2ban, ssh-server, iptables firewall, irc server, sftp host, kubernetes cluster of virtual machines, hadoop/spark for a map reduce workload, check out https://www.cncf.io/projects/ and start to learn to build with those projects/abstractions.
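
To make the telnet/gnutls item above concrete, here's roughly what speaking HTTP to your own server by hand looks like (example.com is a placeholder):

# plain HTTP over telnet: type the request lines, then press Enter twice
telnet example.com 80
GET / HTTP/1.1
Host: example.com
# the same conversation over TLS, piped through gnutls
gnutls-cli --port 443 example.com
GET / HTTP/1.1
Host: example.com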


Learning to make snapshots at different steps so you can restore to them and not have to build from scratch is a big help too.


That's a good list, thank you! Will keep the ~Google~ duck overlords busy :)


Dig up an old laptop, put a beginner OS on it (like Ubuntu). Pick something you want to do on it - like a shared network folder. Then figure out how to build it.

If you need a full server to do it - that's a toy that'll get in your way.


Self-hosting is a lot easier than it was 10 or 15 years ago.

Many of the things about self-hosting back then that led to cloud tools becoming more popular are quite a bit better now, and people are starting to circle back to compare, or learn.

Start simple, use intermediate tools first, and gradually get more and more complex. There's lots to absorb and learn, but the good thing is the same body of knowledge is way easier to get today than in the past.

Learn something like Yunohost first to get the concepts down on administering apps, and then move to installing one from scratch.

Sticking with super well documented and community supported things can help. So that means Debian and usually Ubuntu, as well as enough docker to be dangerous (portainer is OK to use as well).

DevOps today is much higher level than what it was when it was self hosting. You will find just by starting and reinstalling over a few times, how much the dots start to connect between what you already know.

Find some content on YouTube that you like. The ability to pause and replay explanations and screenshots can help.

In the beginning only do what there are videos and tutorials for (in that order) until you get your feet underneath you to start exploring and yes troubleshooting.


This is a great thing to learn. There are a few basic pieces to this.

- Start with any server, like a DigitalOcean droplet for about $5 per month

- Read some blog posts and set up basic security (like locking down ssh access to root, setting up a firewall and so on)

- Think about the kind of app you want to use or host

- This will entail setting up the things it needs, e.g. Postgres and other dependencies

- Come up with ways to start and stop services and such (see the sketch just below)
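
For the last point, on most distros that means systemd, roughly like this (nginx is just a stand-in for whatever service you end up running):

# start/stop/inspect a service and have it come up on boot
sudo systemctl start nginx
sudo systemctl status nginx
sudo systemctl enable nginx
# read its recent logs
journalctl -u nginx --since "1 hour ago"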

In the end, as you start to see patterns for all this, you will find it beneficial to script everything in some way so that you can easily reuse patterns and lessons on other servers and apps.

I tend to use Ansible for this, and here is a concrete example of all sorts of things you might find interesting: https://github.com/scancer-org/setup

It sets up a server, locks it down, adds a Python app with a worker set and so on.

Good luck on your learning journey!


The repo looks like something for me to explore, will check it out. And I plan to take a look at ansible too. Thanks!


>Rent a server on the cloud for about $10/mo if possible

It can be cheaper to buy your own "mini" server (e.g. a small N100 ITX computer) to use on your home network. You'll get more RAM and more disk space (e.g. 16GB ram 512GB disk) to play with compared to datacenter rental prices cumulatively adding up over a year.[1]

Unless the specific aspect of renting a public-facing server is part of your self-learning curriculum and/or you also need to access it from outside your home without NAT port-forwarding or Cloudflare reverse-proxy tunneling, you can self-host your playground server at home.

[1] compare the rental costs and cross-reference GB of RAM and GB of disk they include for the monthly price range you want to pay.

https://www.digitalocean.com/pricing/droplets

https://www.linode.com/pricing/#compute-shared


Instead of having the sever hosted, you could also get a cheap PC and install it yourself. You’d have more control and more choice over the distro. Install CLI only, so you don’t lean on the GUI, and so you can get a much cheaper system (which will be cheaper than your $10/month hosted system after a few months).

For what it’s worth, that list of crisis tools… I haven’t heard of any of them either, and I spent over a decade working in a datacenter dealing with everything that broke, from hardware, to OS, to app… across Windows, Linux, Solaris, and more. I had some escalation points, but a lot of that was on outage calls where people are talking about and showing what they are doing, and we’d always write down useful stuff so we’d have it. I worked 3rd shift, so we tried to handle everything ourselves as much as possible. Those commands might be useful, but apparently you can survive for a decade in a large production environment without them. Don’t take that one page as your bible.


Haha, thanks! Just trying to narrow my "unknown unknowns"...your experience sounds cool! You could write about some of it, I'm sure it'd be an interesting read :)


You might like the t-series servers from AMZN

https://aws.amazon.com/ec2/instance-types/t2/

the t2.large costs about $10 a month and is particularly good for the bursty loads you'd expect for a lab server. You can get a t2.micro on the free tier as well which is a pretty weak machine. I would watch out because I once ran OpenVPN on a t2 instance that had way too little RAM and it went swap crazy and ran up a $200/month I/O bill. (For the life of me I cannot understand why AMZN doesn't support a branded VPN server that "just works")

Note Azure has the B-series which is similar to the AMZN T-series

https://azure.microsoft.com/en-us/pricing/details/virtual-ma...

and my understanding is that this is about as good.


If someone wants to use Amazon, as beginner I would rather recommend Lightsail. It includes some data transfer and I think it has a free tier of 3 months. https://aws.amazon.com/lightsail/?nc1=h_ls


Oracle Cloud. Always free.


TIL ram swap! I always thought processes would get shut down when ram is full...thanks!


Here's my strategy:

- Setup VMs locally, on your development machine. (This eliminates the cost of hosting but gives you all the technical learning opportunities). My development machine is macOS and UTM has been an excellent app to manage these VMs. You can eventually model your VM's configuration around what resources your VPS will have on AWS/DO (e.g. 1GB RAM, 2 vCPUs, etc).

- Learn the basics of Ansible, in order to provision a server (local or remote). I did the course on KodeKloud.com and found it great for getting me going quickly.

- Write Ansible playbooks to provision your local VM as you would want your VPS on AWS/DO/etc to work. Ansible Galaxy is a repository of many community-supplied roles for common tasks/services. You could consult these for best practices on building your own playbooks or totally offload provisioning onto those roles.

- Once you're comfortable getting your local VM setup, point your Ansible playbook at an AWS/DO VM and put it online!

My high-level roadmap has been to build my own Ansible playbook to provision a Ubuntu server to CIS Level 2.

CIS benchmarks define security controls for a few of the more common aspects of DevOps work (e.g. Ubuntu OS hardening, AWS account security, Docker host, etc). They're freely available and there's many well-maintained scripts that can both audit and provision your host to the standard. I've been using the benchmarks as an easy to way to self-teach security aspects (and validate I've done it correctly). Level 2 is the standard used to handle financial information and medical records, so it's probably the most secure you'll ever need to go.

Once I have a provisioning playbook to stand up a secure host with some services (Nginx, Redis, etc), the next goal on my roadmap is to learn Terraform to configure + deploy a personal cloud of services to AWS/DO/etc.
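
Mechanically, pointing the same playbook at a local VM and later at a cloud VM is mostly just a different inventory entry, something like this (the IP, user, and playbook name are placeholders):

# a one-host inventory; swap in the AWS/DO address when you go remote
printf '[servers]\n192.168.64.5 ansible_user=ubuntu\n' > inventory.ini
# dry run against the local VM first, then apply for real
ansible-playbook -i inventory.ini site.yml --check
ansible-playbook -i inventory.ini site.yml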


This is pretty comprehensive. Thanks! KodeKloud looks interesting.


Provisioning is something that may be outside the picture you described (Terraform, Ansible, templates, whatever): more than creating that server, try to recreate it after destroying it (and set up backups while you're at it). Monitoring using your own tools instead of (or along with) your provider's tools will give you a view of not just when it is working wrong, but also when it is working right. Security is another thing that applies both to the cloud and to your own server, but for long-running instances firewalls and vulnerabilities in what is exposed get more urgent.

Regarding performance, if you want to dig into that topic, don't wait for a crisis to use those tools; try to understand how the system runs, even if not under high load. Gregg has more tools (https://www.brendangregg.com/linuxperf.html) and a few great books.


Yup, I got to focus on monitoring as well! Will check the tools out


Tons of great tips in this thread already.

For cheap virtual machines/virtual private servers there’s options beyond the $5/ish a month at Digital Ocean/Vultr:

* many of them offer starting credits for new accounts

* lowendbox often has offers for VMs as cheap as $20 a year

* Scaleway Stardust is super cheap as well. The web interface never shows them in stock, but I have always been able to create them via their CLI tool.

What will definitely help is to look for a community (there are multiple great subreddits that are welcoming to beginners). Don't be afraid to ask questions, even if they may sound obvious. That's the best way to learn!

Good luck, and enjoy!


Will check the other recommendations out, I was planning to go with vultr. Thanks!


I'm surprised nobody has mentioned SDF yet.

http://sdf.org/

They're an org with a super long legacy in the technology / linux server space and they offer a unix shell for free. It's a great starting place to play around, and there is a built-in community of hackers that are also hacking around on the shell, so it's easy to find answers for "beginner" questions.

I know as well that for a nominal fee they offer VoIP telephony, every flavor of database under the sun, and lots of other fun stuff. Great place to start tinkering with these technologies.


Distros are largely a matter of taste; I use Debian Server. First, practice locally on an old laptop, thin client, Raspberry Pi, or whatever you have. Proxmox LXC is very useful for quickly bringing up a new instance in case you mess up.

UFW is an easy-to-use firewall. Use Docker with a reverse proxy (Traefik/Nginx). You can VPN into your server and close nearly all ports with that.

Try my installer script: diy-smart-home.ei23.com. You can also use it to build a regular dockerized server. Happy learning!


My advice: Get a cheap vps (LowEndBox / DigitalOcean) and be prepared to break it a lot and re-install it a few times. It's part of the learning curve.

Things I'd focus on:

1. Use SSH keys to log in, disable password login (easy; sketch after this list)

1.5 Understand users/groups, chmod, chown

2. Set up Nginx, learn how to configure it, set up LetsEncrypt (easy)

3. Have a play with Uncomplicated Firewall (UFW), don't lock yourself out of ssh ;) (medium)

4. Hook up Github Actions with your server and get some auto-deployments going. (medium/harder)
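
For 1 and 3, a minimal sketch (the key part runs on your own machine; mind the ufw ordering so you don't lock yourself out):

# on your machine: generate a key and copy it over
ssh-keygen -t ed25519
ssh-copy-id user@your-server
# on the server: disable password logins once the key works
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh    # "sshd" on some distros
# basic firewall: allow ssh BEFORE enabling
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose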

Beyond that, you'll quickly realise the cloud is great but for the most part managing a few servers isn't that hard.


SSH always sounded scary...on track to getting over it after this xD


I would have suggested Linux HOWTOs (https://www.linuxdoc.org/HOWTO/HOWTO-INDEX/categories.html) but they are ancient by this point. Really sad that as a community we abandoned HOWTOs for blog posts and StackOverflow. Some of the guides may still be relevant, and they might give you at least an idea of the general landscape.

I think you'll learn more faster by buying the cheapest, junkiest old PC you can (literally any PC will do, you can't find one too old) and plug that into your wifi router's ethernet ports. Set up the OS by following the guide for whatever distribution you want. If there's a problem, you have the keyboard, monitor, network, USB drives, etc to recover it.

Linux From Scratch is a great resource to start working with the underlying OS, understand how applications work, libraries work, compiling, and the files and components needed to start an OS. But you could also use something more polished like Debian or Slackware which are more hands-on.

To learn more about Linux commands, look up lists of Linux commands and then read their manual pages, cover to cover. If the manual doesn't make much sense, Googling it can give more context / examples. Definitely read the Bash manual (https://www.gnu.org/software/bash/manual/html_node/index.htm...) cover to cover; it's very long, but you will be using it for the rest of your life [if working with non-Windows tech].

Once it's networked, set up whatever server software you want. Set up multiple computers, see what it's like to make them work together. Try to understand how the low level networking stuff works. Look up tools that let you troubleshoot networking.

Set up an Apache or Nginx web server, and some web application and database that uses them. You can play around with that for a long time. Try adding bugs to your software and look for tools to help you debug them [assuming you didn't have the source code]. After that point, there's a whole universe of server software for setting up large-scale computing.


Try CapRover on DigitalOcean to try things out.

This will save you a ton of time; there are a lot of applications already packaged with Docker there and you can add your own. What is great about this is that the nginx proxying and other bits are already working together to give a working system: the port mapping, the domain/subdomain management, HTTPS with certificates, etc. Applications are installed in one click and are configured mostly with env vars.

Then dig into it with Docker, or install a plain Ubuntu system, install stuff, and see how to configure it, etc.


CapRover sounds good. I think I'll try it on Vultr like the other commenter said, thanks!


Buy a Raspberry Pi. Get it on your network, and get used to SSH'ing into it. You can learn a lot with a Raspberry Pi and if you break it you can just redo the SD card for the OS.

I also recommend finding an old laptop and installing Ubuntu on it, you would be surprised how often doing just this brings new life to an old laptop. I usually get it a new SSD for $50 and it feels like a brand new laptop whenever I do this.

If you really want a VPS then OVH, DigitalOcean or even BuyVM offer cheap options. BuyVM will get you a KVM slice for $24 per year.


Yup, this is what a lot of recommendations say...will test the waters and go the hardware way!


Depending on what you want to specifically learn, I'd start with the easier distributions, no need to waste hours unable to install or use an OS. Then if you want to "REALLY LEARN" I'd suggest trying out something like Slackware first before jumping straight into the Gentoo or Arch Linux bandwagon. What you'd learn from Slackware is things like partitioning during installation, and how useful package managers can be ;)

Also some commands you want to always use:

man is #1 cause it will show you documentation on any other command and how to use them

almost every program has a -h or --help that tells you how to use them as well.


One great goal when setting up your own server is to be able to do it faster each time: deployment, hardening, running services.

Get used to your terminal, use SSH, understand how firewalls work and run a couple of services such as nginx, fail2ban etc.

Once you have the hang of it, try to do it using Ansible. It will be a valuable skill if you decide to provision, for example, a swarm of servers for a cluster. And it will give you reproducibility and a trail of steps that you will ensure get executed for each server you deploy.


Digital Ocean droplets can be created within your price range with a very vanilla version of Debian installed. My personal website runs off one.

I've used AWS a lot for larger projects but they do a lot more to create vendor lock-in which breaks the idea of a "Linux server", i.e. it's easy to end up inadvertently depending on Amazon tools.

Running your own physical server is also a valuable experience, but it's a significantly larger commitment which is less reward-per-effort in my opinion.


Try to run your application for a year, and when you run into problems don’t just fix them, find solutions to keep it from happening ever again.

Stick to some LTS linux distribution so you don’t have to think about that stuff too much, it’s annoying otherwise.

When you’re ready for hard mode start doing it on a raspberry pi on like a 0.5MB/s link, and add random reboots by pulling the plug.

Once all that’s done you’re probably close enough to found a device management company.


Haha, I aim to get my website running on it!


Do you mean a dedicated machine, or an actual PC running in your household?

You can start with self-hosting some software, they usually come with good instructions/tips on how to set-up a server, eg: https://docs.uxwizz.com/installation/setup-uxwizz-server/ubu...


It hasn't been suggested yet so here it is: I found plugging a Raspberry Pi or a refurbished/second-hand NUC into my home LAN made some things click faster than VMs on my computer or a single remote server. After a while it slowed my progress down though, but it depends on what you actually do.

You really need to understand the basics of IP and networks though, so get that right first.


- have a firewall and configure services on an as-needed basis.

- move sshd to a port other than 22; there will still be brute forcers but it will keep the spam down in the logs. You can have a .ssh/config entry with your hostname and that port set, so you don't always have to ssh hostname -p 23456 (example below).

- speaking of sshd, fail2ban is useful

- contabo.com has cheap vps
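
The client-side config for the moved ssh port looks something like this (hostname, IP, and user are placeholders):

# append an entry to ~/.ssh/config on your own machine
cat >> ~/.ssh/config <<'EOF'
Host myserver
    HostName 203.0.113.10
    Port 23456
    User deploy
EOF
# now a plain "ssh myserver" picks up the port automatically
ssh myserver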

Apart from that, what's there to say. I can't think of anything right now.


If you want to have the real experience, get a real server.

It makes it harder due to having a real BIOS/UEFI, and you need some KVM solution they offer.

It makes it much clearer how much harder it is to reformat your disk etc. and how you can recover from an outage.

You also need some daily drivers to use it for. Like plex for your media collection perhaps.


Don't do one-off setups and quick fixes. All of that knowledge will be lost on the next instance failure. Launch and provision the instances automatically with well-maintained scripts. Start with a cheap VPS to initially avoid dealing with the entire domain of problems related to physical servers.


I used DDNS to put a mail server up. The ISP didn't like port 80 for webmail so I had to use 8080. Later, I got a static IP. One Friday, I said to myself, Ubuntu must have fixed that upgrade bug (right?) and I spent the rest of the weekend putting the mail server back together.


There are a million good reasons to run your own server.

This is a great HOWTO for setting up an industrial grade email host:

https://www.purplehat.org/?p=1446

This is a recent comprehensive overhaul of this long established guide...


I was using a VPS for a while for a couple of services I use (mostly miniflux) and then, after getting FTTH, I got a Raspberry Pi and deployed everything there. If it's not critical (power/ISP outages), it's just fine enough for a couple of services :-)


You can find super cheap VPS offers on lowendtalk.com. I wouldn't trust many of them for production, but even the cheapest one should be good for learning. Also, Hetzner is always a reliable low-cost option too.


For what it’s worth, try setting up Arch from scratch in a VM. It’s a massive undertaking, but I learned so much about the unix environment and tools. Plus the added benefit of building out your own environment.


My suggestion would be setting up a FreeBSD server via a Hetzner VPS. I'd argue it's more hands-on than Ubuntu and this is a positive thing.


I switched from DigitalOcean to Hetzner for my cloud servers. They have a nice web firewall that you can use. I use it to prevent access on some ports, for example on port 22...

But wait, how can you connect to Port 22 then? I use Tailscale as a VPN for this. I installed Tailscale on my local computer and my servers, so I can use the VPN to connect to this port. The less open ports, the better.

What else... Use fail2ban or CrowdSec for banning IPs that probe your server. Don't use passwords for authentication. Don't use root, use another user. ... stuff you maybe already heard of or know. Here is a link to get started: https://blog.codelitt.com/my-first-10-minutes-on-a-server-pr...

If you knew this already, sorry, I wasn't sure where to start.

Maybe use a hoster that provides 'Snapshots'. So you can safely play around and if something happens, just revert to your last snapshot, so you don't lose your previous work.

If you spin up a server, maybe use cloud config. You can use it to 'bootstrap' your server and let it install your main tools while it gets provisioned.

It looks something like this:

#cloud-config for installing fail2ban
package_upgrade: true
packages:
  - fail2ban
runcmd:
  - [ systemctl, enable, fail2ban ]
  - [ systemctl, start, fail2ban ]

If you start and mess up something, it's handy if you have a cloud init to start a new server and don't have to install the basic stuff again ;-)


I didn't know that - so thank you! I'll start with the link :D


You could also go for Oracle Cloud free tier.

There you get 2 tiny VPSes for free.


I am using it, BUT, they keep archiving my instance because what I'm running on it is too lightweight and doesn't activate the CPU enough: https://twitter.com/XCSme/status/1770601125869126144


I'd suggest setting up a Raspberry Pi and building your own service, locally hosted at home!


I would honestly just set up Arch or Gentoo Linux (or NixOS if you’re feeling adventurous) from scratch on either a VM or a NUC / cheap local machine as a home server to start. This was how I learned Linux and it’s pretty satisfying having a server at home.

Once you’ve got that under your belt you can either use DynDNS or rebuild your setup on a VPS. Bonus points if you can start using a single implementation across both instances so your homeserver and remote are replicas / peers.


Make and host a simple web app:

Django(Python) - gunicorn - nginx

You could run it on a raspberry pi for no ongoing cost
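
The moving parts, very roughly (mysite is a placeholder project name; nginx then reverse-proxies to the gunicorn port):

# create the app and run it behind gunicorn
python3 -m venv venv && . venv/bin/activate
pip install django gunicorn
django-admin startproject mysite && cd mysite
gunicorn mysite.wsgi:application --bind 127.0.0.1:8000
# point nginx at 127.0.0.1:8000 as a reverse proxy for ports 80/443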


Oracle Cloud has a great always-free tier. You can get a 4 core 24 GB ARM server which is awesome. Of course, it is obviously ARM, so while support for that is generally pretty good there may be some quirks.


They ban people for no reason, and they harass people with "we would like to talk to you" emails. They refuse to delete your data, name, email address and so on after you've been banned.

They also require their own 2fa app instead of other common apps e.g. Google authenticator. And when you delete it, after you've been banned for no reason, you can't use it to request deletion of your data.

Do not go there. Oracle "free" tier isn't free. It's a ploy to grab your data.

You have been warned.


Honestly I'm more surprised that there still exist people willingly using anything from Oracle or advise doing it. Oracle is The Asshole.


I don't mind having a free server from them! I ignored a few sales emails but otherwise it's been drama-free.

If you're looking for a place to experiment I don't see any better deal.


look into the selfhosting hobbyist scene (eg r/selfhosted) and find some cool stuff you want to run for yourself. then either run it on your lan on a spare machine or get a cheap vps and set it up in the cloud. doing stuff with docker is the easiest imo but setting stuff up by hand by installing packages, modifying config files etc is useful for getting your hands dirty. interrogate chatgpt about anything you don't understand.

the bash `history` command is your friend.


Ah, forgot about reddit. Will check that one out!


I recently dove into setting up my own server as well. I also wanted to understand how to self-host my own solutions and get my hands dirty while doing so and not just rely on the hosted solutions. I'm only a bit farther than you are on this, but I'll share what I've learned so far and the resources I've found helpful.

After looking through many server providers, I ended up buying a server with Hetzner[1]. They are a better bang for your buck than DigitalOcean, Vultr, Linode, etc.. and are much more reliable than anything you will find on lowendbox. According to the StackOverflow 2023 Developer Survey[2], they are one of the most popular cloud providers, and the most admired cloud platform. I was surprised when I learned about this because I had never even heard of Hetzner before I started this journey.

The friends over at r/selfhosted are a great resource. They seem to be more about self hosting your own solutions to common software you use, like email and cloud storage. It isn't really my use case, I am more exploring hosting my own sites and apps and databases like you, but they have good tips and advice on getting your own server up and running.

DigitalOcean offers a 'Getting Started With Cloud Computing' tutorial series[3] that I found immensely helpful. They assume no knowledge at all, and walk you through everything, from explaining what web servers are, to the best practices around security practices for your server. It is a denser read than some other tutorials recommended here, but I highly recommend it if you are looking to learn and explore deeper into the subject.

If setting up your server declaratively (i.e., writing your server settings like you would a config file, instead of running commands like `sudo systemctl enable nginx`) appeals to you, then I would recommend looking into NixOS. I'll warn you, it is not for the faint of heart. NixOS is known for not having the best documentation, for some answers not being a quick google search away, and for sometimes having to look into other people's code just to find how to do something. I set up my server first with Ubuntu, then went back and did it all again with NixOS. I found the learning curve well worth it. It enhanced my learning and understanding, as server settings aren't me just copying some commands from a tutorial and forgetting what I did the next day. With NixOS you are mindfully crafting your server, and usually reading documentation along the way. And you won't forget what commands you ran to get your server to where it is, as it's all in code. If that sounds interesting, I highly recommend VimJoyer's guide to NixOS[4] to get started.

[1]: https://www.hetzner.com/cloud/

[2]: https://survey.stackoverflow.co/2023/#technology

[3]: https://www.digitalocean.com/community/tutorial-series/getti...

[4]: https://www.youtube.com/watch?v=a67Sv4Mbxmc -- VimJoyer is an absolute goldmine for all things NixOS and NeoVim. I love his videos. I accidentally stumbled upon his NeoVim video and have been using NeoVim ever since.


Pull that old laptop from the closet, the one with the broken screen and keyboard which made you so sad to put out to pasture since it did have plenty of memory and CPU to keep up. Install Debian on the thing followed by Proxmox Virtual Environment (PVE) [1]. Since you have 16GB of RAM in that laptop (or 8, but 16 is nicer) you should be able to run a number of containers [2].

Here's an idea, more or less based on a number of servers I configured for friends and family, based on 8GB Raspberry Pi 4 hardware with 2/4TB USB SSD. Your laptop will offer better performance.

- Create 4 or 5 containers and name them 'auth', 'serve', 'base', 'backup' and 'mail' (if you want to run your own mail that is, otherwise skip that one). Their functions are:

> auth runs LDAP, Kerberos (if you want that), a central letsencrypt instance which takes care of all your certificate needs and anything else related to authentication and authorisation

> base runs databases, that means Postgresql, Mysql/Mariadb, Redis, RabbitMQ and whatnot - all depending on what you need.

> serve runs services, that means nginx or another web server which is used as a reverse proxy for the other web-related things you want to run: 'cloud' services like Nextcloud with everything that comes with it (e.g. Collaboraoffice or Onlyoffice to replace whatever web-based office things you currently use), communications services like XMPP, application-specific proxies like Invidious/Nitter/Libreddit, media services like Peertube/Airsonic/Ampache, a Wiki like Bookstack, search services like SearxNG, etc. - the size of your server is the limit.

> backup runs Proxmox Backup Server and is used to backup everything to some external drive and to some outside repository.

> mail runs mail services, only if you want to run those. I always say 'do it' but many people have an irrational fear of running their own mail services. That fear is not grounded in truth, running mail is not hard and offers many advantages over hosted solutions.

While it is possible to separate all the mentioned services out into their own containers I think this adds needless complexity for little to no gain. Separating out database services makes sense since those can end up quite taxing and as such might well be moved to their own hardware in some (possibly not too distant) future. Separating out authentication services makes sense since that lowers the attack surface compared to running them together with externally available services. The same goes for mail services which is why I put those in their own container.

Once you've got this up and running you can create a few more containers to play around with. If you just want to try out services something like Yunohost [3] or Caprover [4] can come in handy but I do not see these as viable alternatives to installing and running services which you intend to keep around for a long time.

Of course you can do most of this on a VPS as well but I prefer to keep things in-house - the fewer dependencies, the better.

[1] https://proxmox.com/en/

[2] containers perform better and take less memory than VMs but if VMs are your thing that is possible as well

[3] https://yunohost.org

[4] https://caprover.com/


This is holistic, thanks! Will go through it.

The fear of email hosting isn't the work but the delivery reliability and continuity if we're unable to get to it for any reason - but yes, that's an experiment I want to do at some point...


I do this quite a bit and have many small sites that I manage and create. My go-to VPS hosting service is Digital Ocean (DO) [0] as they offer reasonable rates ($5 - $10/mo) for reasonable machines and have a very nice interface. There's also LowEndBox.com [1] to scour for cheap VMs but I've had mixed results for many of the services listed.

I don't do anything fancy and usually have a VM that runs Ubuntu and Apache or Nginx with mostly static content and some custom Python CGI, where appropriate. I'm not above hand-writing HTML but I also use some type of static site generator when Markdown, or something similar, is easier to write in. I haven't found any static site generators that I particularly like, though I've used mkdocs [2].

Personally, I favor minimal front end styling so I tend to gravitate to things like Skeleton.css [3] or Bootstrap [4]. I tend to use Javascript heavy web applications and I personally hate the new breed of front end development frameworks so I prefer the very boring option of jQuery [5] or plain Javascript. I appreciate D3 [6] but I don't really use it. I've found better utility through Pixi.js [7], Two.js [8], Three.js [9]. MathJax [10] is also nice if you're writing math formula. Whenever I get worried about a new front end framework taking over completely, I just have to wait a few years before it drops off and gets replaced with another one. Many of my sites have been running for many years without issue because I use minimal dependencies.

I'm usually a solo developer working on small projects, without the need to interface with a team, have clients update web content, etc. so the above works for me. I think it makes more sense to use heavier duty front end web frameworks or back end CMS frameworks, like Wordpress, in those scenarios.

I'll just briefly mention websocketd [11] as a nice little tool to play around with websockets. As I said, I usually focus on static site stuff but Mariadb [12] is my go to DB, just out of familiarity (from mysql), though if I can get away with it, I try to use Sqlite [13]. I have various cron jobs running to update various static sites or do other tasks on the server and elsewhere. Analytics are a big blind spot for many of my projects but I've used Piwik in the past (now Matomo [14]) to good effect. A more DIY approach is audience minutes [15] which is a small JS widget that sends periodic pings to the web server that show up in the logs so that you can track usage.

I'm not as security conscious as I should be as I'm such a small fish on the big internet but my attack surface is not open in the ways that others would be because I don't use an off-the-shelf CMS or other standard frameworks that are the low hanging fruit that attackers try to exploit. My threat model is mostly guarding against attackers trying to take advantage of software exploits. I'm not at all protected against any sort of DDoS attack but considering the stakes, I prefer to focus on creating interesting things than expending large amounts of effort preempting an attack that has minimal critical impact and low reward for an attacker.

The advice that I'll give is to not get hung up on doing things the "right way". It's more important to do something, even if imperfect. Once something is in place, it's easier to improve or upgrade. If it's a blog you want to do, write it in HTML if you have to, or write it in Markdown and then use pandoc to convert to HTML. Once that gets too clunky, especially if you're writing blog posts consistently, then upgrade to a static site generator, or write your own minimal one. If that's too clunky, install Wordpress, etc. The same goes for other content.
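
The pandoc step really is a one-liner, for example:

# turn a Markdown post into a standalone HTML page
pandoc -s post.md -o post.html
# optionally with a title and a small stylesheet
pandoc -s post.md --metadata title="My post" -c style.css -o post.html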

There's other miscellaneous "awesome" lists on front end, back end, static site generators, etc. that you can look at to see what strikes your fancy.

Here's a curated list of some sites or projects I've created:

https://mechaelephant.com/

https://calebharrington.com/

https://meowcad.com/

https://mechaelephant.com/dev/

https://mechaelephant.com/nymlist/

https://mechaelephant.com/ResonatorVoyantTarot/

https://mechaelephant.com/hsvhero

https://mechaelephant.com/whatisthislicense/

https://mechaelephant.com/noixer/

https://mechaelephant.com/notenox/

https://mechaelephant.com/feed

All of the above code and content is libre/free/open and can be found on GitHub:

https://github.com/abetusk

https://github.com/abetusk/www.mechaelephant.com

https://github.com/abetusk/calebharrington.com

https://github.com/abetusk/dev

With some of the above projects having their own repo.

Also happy to chat further if that's useful at all.

EDIT: link formatting

[0] https://www.digitalocean.com/pricing/droplets#basic-droplets

[1] https://lowendbox.com/

[2] https://www.mkdocs.org/

[3] https://github.com/dhg/Skeleton

[4] https://getbootstrap.com/docs/3.4/

[5] https://jquery.com/

[6] https://d3js.org/

[7] https://pixijs.com/

[8] https://two.js.org/

[9] https://threejs.org/

[10] https://www.mathjax.org/

[11] http://websocketd.com/

[12] https://mariadb.org/

[13] https://www.sqlite.org/

[14] https://matomo.org/

[15] https://github.com/berthubert/audience-minutes


Thanks for the great links! Love the aesthetics of noixer.

True about the frameworks though. I've tried react, svelte and vue because why not, but they keep changing, annoyingly. Runes...ffs xD




