Docker is the de facto standard in the community now (and, to a lesser extent, alternatives like LXC or Podman). The daemon should be run rootless if possible, or at least the containers should run as non-root if not.
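To make the "containers as non-root" half concrete: the big lever is simply not letting the payload run as root inside the container. A minimal compose-style sketch (the service name, image, and uid:gid here are only examples, not a recommendation):

    services:
      web:
        image: nginxinc/nginx-unprivileged:alpine   # image variant built to run without root
        user: "1000:1000"                           # run the process as an unprivileged uid:gid
        cap_drop:
          - ALL                                     # drop capabilities the service doesn't need
        ports:
          - "8080:8080"                             # the unprivileged image listens on 8080

Running the daemon itself rootless (Docker ships dockerd-rootless-setuptool.sh for that) is the stronger option, but even the per-container step limits what a compromised service can do.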
You can still use VMs, and some people use them as an additional layer of isolation because they're virtualizing anyway (the performance overhead is really negligible).
I've been self-hosting on my home server for at least 5 years now, and I think I've only seen two or three vulnerabilities across all the services I know about, none of which were ever really exploitable.
100% not worth it. If you need multi-host for some reason (beyond “I want it” - and you don’t), then try Docker Swarm.
It’s your home environment. You want it to be easy. You want to use the tools you run, not maintain them. If you want to learn k8s for professional growth, learn it separately from a home server.
I went with Docker Swarm on the same advice from someone else, and tbh, it's unnecessary overhead as well. And at least on RPis, it's very fragile and not as self-healing as I'd hope it to be. My stacks are well compartmentalized, but weird database locks will still happen, or the swarm will just become unreachable, and I gotta go power-cycle a node or two to get things back up again. (I mean, we're talking once every few weeks or something, but still not okay.)
I've been moving workloads to an old gaming rig running NixOS with varying levels of isolation (some containers, but really just good user/group/permissions management), and it runs super well.
Of course, you could do the same with just Docker Compose and no Swarm, and I think you'd still be better off than using Swarm.
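For anyone who hasn't tried it: a plain compose stack already gives you the compartmentalization with none of the quorum/overlay-network machinery. Rough sketch, where the images, names, and the password are just placeholders for whatever you actually run:

    services:
      app:
        image: nextcloud:apache
        ports:
          - "8081:80"
        volumes:
          - app-data:/var/www/html
        depends_on:
          - db                              # bring the database up first
      db:
        image: mariadb:11
        environment:
          MARIADB_DATABASE: nextcloud
          MARIADB_USER: nextcloud
          MARIADB_PASSWORD: change-me       # placeholder
          MARIADB_RANDOM_ROOT_PASSWORD: "1"
        volumes:
          - db-data:/var/lib/mysql

    volumes:
      app-data:
      db-data:

docker compose up -d brings it up, docker compose pull followed by docker compose up -d updates it, and there's nothing cluster-shaped left to become unreachable.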
Yea, I had a not dissimilar experience. I didn’t have as many issues, but I pretty quickly realized a single old gaming PC was way easier than a half dozen Pis stacked up in the closet trying to coordinate. Autoscaling and balancing seem nice at work… but that complexity is rarely needed at home.
The main reason Swarm is better than other options for clustering, IMO, is networking. Nodes can be set up to share published ports across all devices and route traffic back to the correct container on whichever host it’s running on, so you can decouple the target IP:port from the container’s location.
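Concretely, that's Swarm's ingress routing mesh: publish a port in the stack file and every node answers on it, forwarding to wherever the replicas actually landed. A toy stack file (service name and port are arbitrary):

    services:
      whoami:
        image: traefik/whoami        # tiny HTTP echo service, listens on 80
        deploy:
          replicas: 2
        ports:
          - target: 80
            published: 8082
            mode: ingress            # any node's :8082 routes to a running replica

Deploy with docker stack deploy -c stack.yml demo and port 8082 works on any node in the swarm, whichever hosts the two replicas ended up on.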
My iPhone is a pet. It’s a pet with a great backup system that turns a new pet into exactly my pet. But it’s still a pet.
There’s only one and it changes manually as I need features to change. I download and install things as needed, from the GUI, with no version control or script to manage it. It’s a pet.
It sounds like for you: hand-operated -> pet, automated/script-operated -> cattle. I think the whole point of the analogy is whether, when something gets slaughtered, you can furnish a new one without batting an eye. If yes, then cattle, not pet. So I guess the question is: if someone stole your phone right now, would you blink?
> if someone stole your phone right now, would you blink?
Yes absolutely. I can afford a new one, and I would immediately buy a new one (well I’m already waiting for the newly released one but still). I would still be quite upset and my life would be interrupted at least a little.
I took the pet/cattle analogy to be about how manual the setup is, and how replaceable it is. I think Apple has smartly blurred that line with great backup tech, but I would still consider the “lovingly” hand-customized aspect of maintaining a phone solidly a pet. Some version of my current phone has been around for ~10 years through various hardware iterations, all restarted from a backup image. I would be distraught if I had to recreate it without a backup: just finding my apps, logging in, finding wallpaper, rearranging icons, setting up shortcuts, etc. Maybe that’s the ideal state for a home server - a nearly no-op backup and restart process that you still manage as you need it.
Proxmox + Proxmox Backup Server + external storage (I use my NAS) means I don't really have to worry about disaster, as such, because every VM is backed up nightly. VMs and the hypervisor can all be pets and I can just restore a backup if something happens.
If you're doing something for a hobby, treat it like the special snowflake it is to you. If you're doing something just to get things done, treat it like the utility it is. If you're at home playing around with machines in a homelab, feel free to baby your servers.
As far as disaster is concerned, it's not that difficult to install software that really needs minimal maintenance. But it comes down to what you want out of the software and hardware that you run.
I have no experience with it, but generally my view is that a home server is NOT a “devops” project; it’s more like an iPhone. You want backups, and you want whatever is running to restart if you lose power (whether that’s a new toaster tripping a breaker or the weather killing power, it happens), but you don’t need “infra as code” and all sorts of automation. Just update as you go, and move on. Docker et al. have enough tooling that you can run everything as its own container (basically a phone app) and you’re done.
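The restart-after-power-loss part really is about one line per service plus letting the Docker daemon start at boot. A tiny sketch (the image is just an example of a self-contained service):

    services:
      vaultwarden:
        image: vaultwarden/server
        restart: unless-stopped      # comes back on its own after a reboot or power cut
        ports:
          - "8083:80"
        volumes:
          - vw-data:/data

    volumes:
      vw-data:

With the daemon enabled at boot, restart: unless-stopped brings every container back after a tripped breaker without you doing anything.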
If you want to try out <insert tech here> to learn something, then just learn it; don’t try to fit it into your normal life and let it eat into your existing stuff. Don’t replace your Mac with a Chromebook just because you’re learning webdev, and don’t replace your home server with Terraform just because you’re learning it. What if you learn it but stop needing it, or never use it professionally? You’ll now need to maintain that skill just to maintain something at home.
If you want something more than a blank Linux box for your home server, check out HASS.io, Synology, QNAP, TrueNAS, or one of the many “hold your hand” distros/tools designed to make it less work. Even Portainer/Proxmox will give you a bit of a GUI without being too opinionated. I use a blank Linux box primarily, but only because I live with other SWEs who all want to mess with the shared server; everyone wants their own thing and we couldn’t agree on anything else. We plan to switch to TrueNAS and give everyone a VM but haven’t coordinated the switch yet…
Kubernetes alone recommends at least 1 GB of RAM just for itself IIRC, so that may push it out of some home servers such as RPis or smaller NUCs, depending on the actual service load.