Ask HN: Show Your Homelab and Home Server Setup for Inspiration
47 points by Brajeshwar 10 months ago | 57 comments
I’ve been tinkering with quite a bit of self-hosting and home-lab stuff, starting with a few Raspberry Pis, a 10+ year-old MacMini, and a few laptops-as-servers. I’m willing to make mistakes, learn, and be inspired by what you do. I believe many of us would love your tips, tricks, and all the gotchas in between.



Used corporate mini PCs make excellent home servers if you don't need too much storage (typically they support two SSDs) and want low power consumption, low noise, and a small form factor. Right now ones with a 7th- or 8th-gen i5 are a pretty good deal, often around 50-120 USD on eBay, a bit more in the EU. Idle power consumption is about 10W. If you want to play around with virtualization, a higher-end model with an Intel NIC is recommended due to some Linux driver issues (at least with Proxmox).

https://www.servethehome.com/introducing-project-tinyminimic...

Corporate thin clients with a Pentium J5005 or similar also make decent Raspberry Pi replacements, usually just not at as good a price-to-performance ratio as a proper mini PC. Certain models have a PCIe slot, which makes them ideal DIY routers when fitted with a PCIe NIC and pfSense/OPNsense as the OS. If you're currently using a consumer router, such a project can make a lot of sense.

For a NAS, I recommend building your own. Personally I use an ASRock J5040 mini-ITX board in a Node 304 case with Unraid as the OS; it houses 4x 3.5" drives and so far I'm happy with it. For maximum reliability you might want a motherboard that supports ECC memory, though.


I agree with ewweezdsd. I've been doing the same with slimline HP/Dell towers, off-lease C-tier corporate equipment. 7th-gen i7 / 32 GB DDR3 boxes for under $200, usually. Low noise, heat, and power consumption. I never could justify having an actual server room in a house, with the loud fans whizzing and the heat.


Just running my old desktop PC with Proxmox.

The most important thing I ever did for my homelab was starting to mess around with VMs. First with virt-manager to have a GUI, then just plain KVM (actually probably QEMU) on the command line using scripts to auto-install VMs, and now a combination of Proxmox and Ansible to make a VM with just a few keystrokes. The freedom of messing around as much as I want without danger of breaking my system is the best.

The next most important was setting up DNS and DHCP automation: whenever the Ansible playbook makes a VM, it immediately reserves a static IP for that MAC and makes a DNS entry for that VM's name pointing to its IP, so I can easily reach all my VMs by name without having to go through the effort of setting that up. Great for when a test VM becomes permanent and needs to be accessible. This, combined with using Ansible to automatically create my user and put my SSH pubkey into its authorized_keys file, makes everything SO smooth; no friction at all to work on different VMs and experiment.
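
For anyone wanting to copy the idea without the full Ansible setup, here's roughly what that reservation step boils down to if dnsmasq is handling both DHCP and DNS — a minimal sketch; names, paths, and addresses are placeholders, and the actual playbook isn't shown here:

    #!/usr/bin/env python3
    # Rough sketch of the reservation idea, assuming dnsmasq handles both DHCP
    # and DNS. dhcp-host= pins the MAC to a fixed IP, address= adds the DNS
    # record; restart dnsmasq afterwards to pick up the change.
    import sys
    def reserve(name, mac, ip, conf="/etc/dnsmasq.d/homelab.conf"):
        with open(conf, "a") as f:
            f.write(f"dhcp-host={mac},{name},{ip}\n")   # static DHCP lease
            f.write(f"address=/{name}.lan/{ip}\n")      # DNS name -> IP
    if __name__ == "__main__":
        # e.g. ./reserve.py testvm 52:54:00:12:34:56 192.168.1.50
        reserve(*sys.argv[1:4])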

Also, when the SSD Proxmox was installed on died, it was super easy to restore all the VMs.

The next step is probably automating public DNS, port forwarding, and reverse proxy settings. So far I'm doing that manually, but I just got a new phone and refuse to log into Google on it, so I've had to set up a bunch of services, kinda running my own cloud, and they all need port forwards and reverse proxy entries. Speaking of reverse proxies... they're a wonderful tool: set up SSL/HTTPS ONCE on the proxy and use certbot to manage certs in one place, and you can test any service by just forwarding it to the VM without SSL, no hassle. For critical things I guess I'd still recommend setting up SSL for your internal network too, defense in depth and all that.


RPi4 running DietPi, with a 1 TB USB drive, all strapped to the back of the TV. Syncthing for backing up phone media, Hugo for my blog posts, Home Assistant for my Zigbee devices, Jellyfin for my media stack (which network-mounts my Synology NAS with auto wake/suspend), a Moonlight client to game in the living room with a PlayStation Sixaxis controller, private git. All reverse proxied, since I'm behind CG-NAT, through a single public-IPv4 VM I pay maybe 10€/year for.


> RPi4 running DietPi

> Jellyfin for my media stack

How does it handle transcoding TV shows and movies? I had a similar setup, and my Raspberry Pi slowed to a crawl because it couldn't handle 1080p videos.


How did you implement auto wake/suspend for your Synology NAS?


In the Synology settings you can tell it to auto-sleep after X minutes and to wake on LAN activity. You can have an NFS share mounted on your Pi, and the NAS will wake up as soon as you try to access the mount.
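
And if you ever want to wake it explicitly rather than waiting for the NFS access to do it, a wake-on-LAN magic packet is trivial to send from the Pi — a minimal sketch, with the MAC and broadcast address as placeholders for your NAS and LAN:

    #!/usr/bin/env python3
    # Minimal wake-on-LAN sketch: a magic packet is 6 bytes of 0xFF followed by
    # the target MAC repeated 16 times, broadcast over UDP port 9.
    # MAC and broadcast address below are placeholders.
    import socket
    def wake(mac="00:11:32:AA:BB:CC", broadcast="192.168.1.255"):
        payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, 9))
    if __name__ == "__main__":
        wake()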


It’s been through quite a few iterations, but the current setup is a 35U Sysracks cabinet (not filled, I just like the space) with:

  * 3x Dell R620 running K3os on clustered Proxmox
  * 1x X11 Supermicro 2U with ZFS handling the disk array
  * 1x X9 Supermicro 2U, also with ZFS, that is a backup target for the other one (WoL to ingest, then shutdown)
  * UniFi Dream Machine Pro
  * UniFi 24 port PoE 1G switch
  * 2x UniFi UAP-AC-PRO APs (not in the cabinet, obvs)
  * APC UPS with battery extension
The Dells have Samsung PM863 3.84 TB drives for Ceph, which Proxmox handles. They have some ancient 2.5” enterprise pull SSDs for boot.

The Supermicros have some cheap ADATA NVMe SSDs for boot (the X9 has a modded BIOS to make this possible).

In general I like it, but the current in-progress work is rewriting all of the Ansible playbook that generates QEMU images for Proxmox, and setting up PXE so I can grab IPs on boot for new images. The latter will enable me to shift off of K3os in favor of Talos. Technically I can do so now, but it requires manual input to set the IP addresses.

Beyond that, I want to get a 10G switch so a. RBD-backed devices are faster b. ZFS send/recv is faster c. Just because.


With your clustered Ceph Proxmox, does Ceph there provide the storage to your VMs running in Proxmox?

And if so, how resilient are the running VMs to the (temporary) loss of a Proxmox node?

Asking just because I'm trying out some clustered Ceph Proxmox stuff currently, and with 3 nodes (so far) the VMs aren't coping well with the loss of a node. They seem to either freeze, fault, or hard reset (after a while) when it occurs.

I'm going to try with 5 nodes next and see how it goes. Am new to both Ceph and Proxmox, so it could also be something to do with my current newbie-ness. :)


At the moment, Ceph is a tangential storage solution only used for testing (the bulk of my pods use Longhorn), but yes, that’s what it does – Proxmox sets up Ceph on the NVMe drives on its nodes, and then exposes the pool over RBD to VMs when you add a disk. In that regard, it’s very much like EBS from the perspective of the VM.

Nothing stops me from shifting to Ceph for existing workloads, but I have it in my head that I need to shift the K8s hosting to Talos first. I have used it successfully to run MySQL and Postgres instances FWIW. Never had any issues with either of those, even with abrupt VM shutdowns.

A VM, if set up as HA, can be live-migrated to a different node to perform maintenance with zero downtime. If the node just goes away, of course, you can’t do much about that.

I confess I haven’t used Ceph heavily while dropping a node, so unfortunately I can’t give you a good answer on that behavior, other than ensuring that you have adequate bandwidth for both Ceph and Proxmox. I know they both say 10G networks are preferable. FWIW I have a completely separate switch (UniFi Flex Mini) for Proxmox’s corosync; in theory the main switch would be fine, but for an extra $30 I had more peace of mind.


Thanks, that's all good info.

Yeah, the live migration bit is a useful looking feature. I'm just investigating ways to reduce single points of failure. :)

Live migration is working fine with storage off the cluster, e.g. a local iSCSI server sharing storage to the nodes. But then the iSCSI server is the single point of failure.

Almost done setting up the 5 node cluster, so will test that shortly. Then probably need to read a bunch more docs. ;D


The 5-node cluster with Ceph on all the nodes is working "ok" (resilience-wise) when losing a node. The VMs keep on going.

Unfortunately, it's not a complete win though. Doing a live migration of a VM between nodes actually hung (locking the VM completely), and needed the VM to be restarted then the migration re-run (which then worked). The logs for the failed migration were completely useless. :(

So, a 5 node setup seems like it might be a better way forward, but things like migration hangs need eliminating first. More experimentation and learning coming up. :)


Do all of your nodes have identical CPUs (or if not, a generic CPU type selected for the VMs)? That’s the main thing off the top of my head that can cause a freeze.


Yeah, the whole setup (for now) is just running virtualised on my desktop (Ryzen 5950X, 64 GB RAM).

So each node has 6 cores and 8 GB RAM, with two network interfaces (one public, one for Ceph cluster traffic).

After I have more experience I'll look to replicate it in hardware, but starting with a virtualised setup lets me test different configurations easily. :)


And another migration has just gotten stuck, this time of a VM running the Debian 12 installer.

Nothing is looking obviously weird in the output log:

    2023-12-24 10:05:33 starting migration of VM 101 to node 'proxmox4' (10.1.1.101)
    2023-12-24 10:05:33 starting VM 101 on remote node 'proxmox4'
    2023-12-24 10:05:36 start remote tunnel
    2023-12-24 10:05:38 ssh tunnel ver 1
    2023-12-24 10:05:38 starting online/live migration on unix:/run/qemu-server/101.migrate
    2023-12-24 10:05:38 set migration capabilities
    2023-12-24 10:05:38 migration downtime limit: 100 ms
    2023-12-24 10:05:38 migration cachesize: 256.0 MiB
    2023-12-24 10:05:38 set migration parameters
    2023-12-24 10:05:38 start migrate command to unix:/run/qemu-server/101.migrate
    2023-12-24 10:05:39 migration active, transferred 58.6 MiB of 2.0 GiB VM-state, 101.7 MiB/s
    2023-12-24 10:05:40 migration active, transferred 131.4 MiB of 2.0 GiB VM-state, 92.7 MiB/s
    2023-12-24 10:05:41 migration active, transferred 233.6 MiB of 2.0 GiB VM-state, 96.8 MiB/s
    2023-12-24 10:05:42 migration active, transferred 341.5 MiB of 2.0 GiB VM-state, 108.9 MiB/s
    2023-12-24 10:05:43 migration active, transferred 457.1 MiB of 2.0 GiB VM-state, 107.5 MiB/s
    2023-12-24 10:05:44 migration active, transferred 557.6 MiB of 2.0 GiB VM-state, 108.2 MiB/s
    2023-12-24 10:05:45 migration active, transferred 661.4 MiB of 2.0 GiB VM-state, 113.3 MiB/s
    2023-12-24 10:05:46 migration active, transferred 773.7 MiB of 2.0 GiB VM-state, 57.4 MiB/s
    2023-12-24 10:05:47 migration active, transferred 851.0 MiB of 2.0 GiB VM-state, 127.4 MiB/s
    2023-12-24 10:05:48 migration active, transferred 973.5 MiB of 2.0 GiB VM-state, 141.0 MiB/s
    2023-12-24 10:05:49 migration active, transferred 1.1 GiB of 2.0 GiB VM-state, 119.3 MiB/s
    2023-12-24 10:05:50 migration active, transferred 1.2 GiB of 2.0 GiB VM-state, 113.5 MiB/s
    2023-12-24 10:05:51 migration active, transferred 1.3 GiB of 2.0 GiB VM-state, 142.3 MiB/s
    2023-12-24 10:05:52 migration active, transferred 1.4 GiB of 2.0 GiB VM-state, 122.8 MiB/s
    2023-12-24 10:05:53 migration active, transferred 1.5 GiB of 2.0 GiB VM-state, 174.4 MiB/s
    2023-12-24 10:05:54 migration active, transferred 1.7 GiB of 2.0 GiB VM-state, 141.7 MiB/s
    2023-12-24 10:05:55 migration active, transferred 1.8 GiB of 2.0 GiB VM-state, 102.6 MiB/s
    2023-12-24 10:05:56 migration active, transferred 1.9 GiB of 2.0 GiB VM-state, 121.8 MiB/s
    2023-12-24 10:05:57 migration active, transferred 2.1 GiB of 2.0 GiB VM-state, 179.7 MiB/s
It got to that point and stopped. It's been 10 minutes. The VM itself is completely frozen meanwhile.

That sucks. :(


Looks like I'll need to put some time aside to properly debug this in the new year and figure out what's going wrong.

I'd like to think it's just some kind of dumb setting I should have changed (being a newbie with Proxmox), but so far this "feels" more like a bug of some kind.

Will hopefully be able to get that sorted. :)


Ansible caught my interest! I set up a dev server and have been wondering about using Ansible to install my development dependencies on a new OS.


You’re welcome to take a look [0] at what I use now; it’s designed to be used with Packer to build QEMU images for Proxmox.

Also it’s not really set up how I now prefer to do things, hence the total rewrite that’s underway. When that’s done, I’ll archive this repo with a link to the new one.

[0]: https://github.com/stephanGarland/packer-proxmox-templates


At home I run:

Raspberry Pi 4 (8 GB) with a 22 TB external HDD and a 1 TB external SSD. For, uhhh, media. Runs the *arr stack and Samba. Raspberry Pi OS (Debian) with everything defined in a Docker Compose file. Also runs Tailscale. Having my entire media library anywhere I have one of my devices is pretty sweet, and setup was stupid easy. No Plex.

An i3 NUC for hosting a game server for some friends. I think it runs Ubuntu Server. Runs Valheim. Also in Docker Compose.

A Mitac board with dual GbE and a low-power Intel N3-something CPU. Runs pfSense.

For learning and tinkering I spin up VMs in Hetzner and delete them when I’m done. We use AWS via Terraform at work, and I’m not likely to work somewhere where I'd deal with physical servers, so using the kind of interface I’m most likely to be put in front of makes more sense for me. Having everything in Terraform really is lovely; from networks to machines, it’s all in my editor.


How do you stream the media on your devices?


I run Samba via Docker Compose and bind mounts. I use Infuse on my Apple TV and iOS devices.


Is the drive raided (NAS)?


No RAID. All the pictures and music I have duplicated on other devices. I’m ok with losing my video media.


Personally, I've really enjoyed using a 1U 16-core Atom server as a single-node Kubernetes cluster. I started on Raspberry Pis and Synology NASes too, but consolidated them earlier this year and haven't looked back. It runs a whole bunch of stuff now and I blogged about the setup at https://bensblog.meierhost.com/20230705-home-lab-infrastruct....

Totally understand that it's a bit more expensive than second-hand hardware and smaller setups, so I'd only suggest going down that path once you know you'll get the utility out of it.

I've used it to replace my reliance on Google Photos/Docs and lean more on self-hosting, though that means backups, disk mirrors, and runbooks for restoring everything are doubly important!


I have been running a server in the attic for 20yrs, now relocated to the shed (not the same hardware).

I am getting a lot of joy out of VNC Connect at the moment. I have Ethernet to my shed, with a Pi firewall between my attic and the garden. VNC Connect means I don't need to faff about with port forwarding, etc.

I use rclone to sync my saas business backups from the server to the cloud. That may not sound secure but I used to be an Infrastructure Lead and I have seen how government agencies left doors open in data centre cages.

One piece of advice. Don't waste all your holidays on such projects, pace yourself :-)


I've been running TrueNAS for years on commodity hardware that's been upcycled from my personal workstation after major upgrades.

Current iteration is: a Skylake-era CPU, 16 GB DDR4, 4x 14 TB WD Reds, 2x 512 GB SSDs, and an M.2 boot drive.

Used primarily for shared storage, backups, media serving, and lightweight Docker containers (Home Assistant, Gitea, etc.) in an Ubuntu VM.

It's been super stable (paired with a UPS) and was surprisingly economical for what it is. I've also learned a lot in the process.

I do wish my CPU had a few more cores though.


Mine is mostly various cast-off enterprise hardware, living in a semi-finished room in the basement. From bottom to top:

- Mid-'90s 42U IBM server cabinet

- Two 1500VA APC UPSes (non-rackmount)

- 1U Dell R610 (PFSense firewall)

- 1U Dell R620 (ESXi host)

- 2U Dell R720XD (60TB NFS/CIFS storage server)

- 4U rackmount case containing a Windows gaming/workstation PC

- 4U rackmount case containing a Linux workstation (the previous iteration of the gaming/workstation PC)

- Black Box KVM for the two workstations

- TP-Link unmanaged 10GbE switch

- Cisco managed 1GbE switch

- Cable modem

- Two 1U PDUs plugged into the UPSes - half-ass dual power path for the machines that support it.

It's a pretty nice setup; I have a bunch of system service VMs on the ESXi host (IDS, Splunk, etc) and can spin up a new one in a few minutes whenever I have a new project or want to try something out. The cables from the KVM go through the floor to my office on the main floor so I don't have to listen to fans and can switch with a key combination. And of course I have plenty of storage for whatever - I back up all of my VMs and my workstations to it, I can download pretty much anything I need to, etc.

And since it's been asked before, yes, electricity is oddly inexpensive where I live.


https://sschueller.github.io/posts/wiring-a-home-with-fiber/

I run Proxmox for my router (OPNsense) and several servers on it, like GitLab, Mastodon, Matrix chat, Lemmy, and some others.


Bought three Dell T420s for like $120 each. Loaded them up with RAM. They're actually quieter than some NUCs in normal operation. On the downside, they have obnoxiously bright blue LEDs on the back and the little LCD display. A bit of rubylith tape took care of that.


> rubylith tape

Hadn't heard of that stuff before. Looks cool. :)

* https://en.wikipedia.org/wiki/Rubylith

* https://www.youtube.com/watch?v=q_zInegHR40


My homelab setup is simple:

I have no homelab.

I have a decade-old Synology (a DS115j, if you're wondering) as a cheap-ass NAS and a torrent client; it's still running because I got it for $0 and it's more convenient to torrent from a separate device than from a WiFi-connected notebook.

I have two AD networks, a two-tier PKI, Nextcloud, Netbox, Gitea, fileservers, at least two RDS servers, a Syncthing discovery server and clients, probably WSUS and WHMCS (haven't touched them for years, literally), and some other things I forgot and am not at the PC to inventory.

Between 5 and 7 servers around the world which can be used as, or already are, proxies.

A Zabbix server, and its proxies on the aforementioned servers.

I don't have a homelab.


It's a "distributed lab" or "cloud lab" then maybe. :)


I use an Intel NUC. It's small and quiet, and you can choose some components as you like. For a homelab, be sure to have a CPU that supports virtualization (VT-x) so you can play with VMs. On top of that, Proxmox.

Like someone already mentioned, /r/homelab for more ideas.


Many comments on Reddit and such complain about the noise made by their NUC, and I hate my NUC8i3BEH because of the noise the cooling fan makes. I think some people are sensitive to constant sounds at a constant frequency and some are not. I.e., for me it is not about decibels, but more about spikes in the sound spectrum at a certain frequency that persist as long as the computer is powered on.


Wow I feel like something may be wrong with me.

This isn’t the first comment I’ve read about the noise, but I’ve had a NUC8i5BEH for almost 5 years (just replaced a dead fan last week!) and a NUC13ANHi5. I never notice them; they’re essentially fanless to me.

Currently sits at 40 dB in my bedroom with the fan at 100% 24/7.

Is it a high-pitched whine? Or is it like… unfavorable harmonics? I have an old Brocade switch that had a very discordant sound; I did end up replacing those fans.


If constant 40 dB noise doesn't bother you, then you might actually want to see an audiologist... It's possible your hearing is much weaker at some frequencies. Treat yourself to a proper hearing test, if nothing else just to satisfy your curiosity.


Really? We have yearly hearing tests in my country but nothing has ever come back from them. I'll look more into it!


Whines and drones (whines of lower frequency) are what I hate if they're present all or most of the time. It's not discordance that bugs me.


I feel you. I was very disappointed with the noise made by my NUC7i5BNH. Did a bit of research and found out that a) the default BIOS fan settings are crap and b) the installed fan is crap as well.

Found some recommendations for the fan [1] and BIOS fan settings [2] on the internet and got my NUC completely silent. Perhaps that works for you as well?

[1] https://www.amazon.de/gp/product/B07C8H5WHP (Out of stock and does not fit your model anyway, but perhaps the brand has fans suitable for your nuc.) [2] https://community.intel.com/t5/Intel-NUCs/NUC7i5BNK-fan-spee..., https://community.intel.com/t5/Intel-NUCs/Lautst%C3%A4rke-L%...

Installed the recommended fan and it was SO MUCH quieter.


I can recommend going for fanless cases, like the ones from Akasa, Streacom, or Impactics. In my case I am using several Akasa Max MT and Impactics D2NU1 (D2NU2) cases and they are perfect for my all-silent home lab.

Sadly the Turemetal cases are not available anymore :(


FWIW, at least on current-generation NUCs, you can set the fan profile to "Quiet" from within the BIOS. Doing this solved the problem for me.


I’ve got a Raspberry Pi 4 booting from an old SSD I pulled from my MBP 2011.

I run it as a “PiVPN” TAP server and it has Samba installed.

Using Tunnelblick on my MBP, I can access my LAN from wherever I am. TimeMachine works too.

However, I mostly use the VPN to get to UK TV when I’m abroad. Sky Go, Netflix and iPlayer mostly.

I’m planning to use my second Pi 4 as a TAP client with its WiFi as an access point. I’ll take it with me next time.

I’m going to try to get a Sky Q Mini working with the system to get UK TV on an extended holiday in Portugal. Both ends have symmetric 1 Gbit connections so it might work.

Edit: I’ve also had a Pi 4 running N64/PSX/arcade emulators (EmulationStation), overclocked to get a good frame rate. PS Bluetooth controller.


The Marginalia search engine ran for over a year on an AMD Ryzen 9 3900X with 128 GB of non-ECC RAM and a mix of NAS drives and SSDs of various types, ranging from consumer crap through enterprise drives and even an Optane. 16 TB mechanical storage, 4 TB SSD. All on domestic broadband.

It survived the HN front page and even bigger traffic spikes. Did blip out when Elon Musk tweeted a link to one of my blog posts, but only momentarily.

Now that server's been relegated to run a test environment and to perform various odd jobs.

I do think everyone interested in programming should have some sort of server in the house. Being able to run processing jobs for a few days really does radically expand what you're able to do.


I've been running FRR (Free Range Routing) for networking, using OSPF layer-3 routing between my hosts. This allows dynamic routes to be populated throughout and makes a switched layer-2 network optional, since switches tend to be expensive and obnoxiously loud, and a star topology is not necessary with a layer-3 network.

I like the Supermicro Xeon D boards because I can power 6 of them off a single power supply (the GPU cables can be converted to 4-pin CPU connectors).

I also use systemd-nspawn (w/ dnf --installroot or debootstrap) or Docker to attach instances to the network, where each has its own layer-3 address distributed by FRR.


I bought a Biostar J4125NHU motherboard and put it in a small case with two 2.5" HDDs and two SSDs, and am using it to download shows and movies from a Usenet provider and stream them with Jellyfin.

The motherboard was a huge pain to work with, and I had to return two (!) units; I made the third one work. The first two would not boot in the same configuration.

I set up the box with NixOS. Here is its flake: https://github.com/fnune/bilbo

The NixOS experience was fun!


Dell Optiplex 7060 Micro i7 running Proxmox on bare metal. Works awesome and lets me experiment very quickly spinning up VMs for each of my projects.


I'm repurposing an old laptop (an Asus X53SV with an i7, 8GB RAM, and 1TB HDD) for a simple server at home that I use for Plex/Jellyfin, Navidrome, and some self-hosted webapps, all using Docker. Can't handle transcoding media much, so I do that separately. Obviously it's an old laptop, a decade old at this point, but it handles everything I use it for.


I have a Synology DS918+. Not exactly the best bang for your buck, but the form factor is tiny and fairly good fit for a small apartment. For access, I have a PiVPN setup on a Dell Wyse 3040 because Raspberry Pis were extremely hard to obtain. Eaton 5S 550i for keeping both powered.

Need to get a new router though. I have a MikroTik hAP ac and it seems like it's hanging on for dear life.


I’ve got five Raspberry Pis running Arch Linux ARM and a Mac mini for Plex video encoding. Plus shared storage.


I've got a NAS, a really nice NetGear router, and four mini PCs


My homelab is a single 4U with Proxmox and a crappy GPU I never got to work

The 4U sits on the floor in a "closet", I think you can picture it


Network: an Aruba 1930 24x GbE with PoE and 4x SFP+ in the basement; a MikroTik CSS610 8x GbE with 2x SFP+ in the home office. Linked with 10GbE fiber, and a ton of VLANs.

Server: Dell Precision Rack 7910 with a single E5-2690v4 (14C/28T), 64GB DDR4 ECC (4x16GB), 4x 3.5" HDDs (a hack) with 30TB net storage in RAID1 and 2x small SSDs (OS on RAID1, and temporary data/caches without RAID). I put in the 2x GbE + 2x SFP+ NIC. It's connected with 10GbE and 1GbE.

My old system was nice, but upgrading meant I had to get an expensive CPU; luckily the Dell fell into my hands. With this one, I could double the core count cheaply (it's a 2S system and the CPU is really cheap) and IIRC have 192 or 256 GB of memory per socket (with the cheap 16 GB modules).

The above hardware plus all the PoE stuff (5x UniFi APs, a switch behind the TV, a DECT-VoIP gateway), the DSL modem, and a few ESPs draw 110 W. The server itself is in the 80 W ballpark (that's 15€/month worth of electricity). This could be reduced with more modern or less powerful hardware. What I did was replace my old, big array of 12 disks with four much bigger disks. Since the OS and caches are on SSD, the disks can spin down a lot. I only insert one PSU, since the second one adds 17 W of idle power draw.
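
For context, the 15€/month is just the 80 W ballpark running 24/7 at roughly 0.26 €/kWh (the price is an assumption, not something stated here):

    # 80 W around the clock, at an assumed ~0.26 EUR/kWh
    kwh_per_month = 80 * 24 * 30 / 1000   # ~57.6 kWh
    print(round(kwh_per_month * 0.26))    # ~15 EUR/month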

Compared to other home servers that can be pushed much lower, it's still okay, since it replaces a bunch of services I'd have to pay for and allows much faster transfers to my workstation (for backups etc.) than the 32 MBit/s DSL. The old server was running out of cores... The only external service I still spend money on is mail.

The server runs bare-metal Arch Linux (it's not cattle) and two VMs (QEMU): OPNsense and Home Assistant. Samba serves files from the RAID to the network. Services are partially native, some in Podman: Cockpit, Mealie, Foundry, step-ca, scanserv, Vaultwarden, Mosquitto, zigbee2mqtt, Pi-hole, TasmoAdmin, Uptime Kuma, the UniFi manager, Samba, Lancache, Heimdall, and Plex. Plus two custom services. It also interfaces with the PV system using RS485. A 20 m USB cable connects it to the Zigbee stick on the second floor. I'm looking forward to adding InfluxDB/Grafana for long-term monitoring of our heat pump and BEV power consumption.

The OPNsense does DHCP, WireGuard, and local DNS.

The first step was to set up Cockpit, since I like using it to configure IPs and access the VMs. Then OPNsense for inter-VLAN routing and firewalling, and routing to the outside world.

Since I wanted to encrypt the traffic even locally, that was the second step: I have a step-ca that serves certs using ACME. My nginx acts as a reverse proxy for most services and gets certs from the step-ca. The CA is limited to the .lan so it can't be used to intercept other traffic. Also, DHCP puts hosts on .dhcp.lan, so random hosts can't just try to pick any domain name inside the network and get certs for that.
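
A quick way to sanity-check that chain from any client on the LAN — a minimal sketch; the hostname and root-cert path here are placeholders:

    #!/usr/bin/env python3
    # Check that an internal service presents a cert signed by the lab's
    # step-ca root, and nothing else. Hostname and CA path are placeholders.
    import socket, ssl
    def check(host="vaultwarden.lan", port=443, ca="/etc/ssl/step-root-ca.crt"):
        ctx = ssl.create_default_context(cafile=ca)   # trust only the lab CA
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(tls.version(), tls.getpeercert()["subject"])
    if __name__ == "__main__":
        check()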

With that done, I looked at services I use or that sounded useful, and spun them up.


I'm not gonna show anything, but I've done quite a bit. My house is a four-story townhouse with the second floor being a single room where the kitchen, living room, and dining room are in an open floor plan. We put most of our stuff there. My wife built shelving into the corner of the den area, wrapping the corner and going up to the 12-foot ceiling. She also added a rail-mounted rolling ladder, so we've got a nice library setup. Beneath that are locking cabinets for craft materials, and I keep spare electronics and cabling in them. We have the same model of cabinet mounted underneath the nook for our television where the cable hookup is, and that's where I keep most of the homelab.

I've got an OPNsense appliance router running FreeBSD. I made no attempt to modify it. I tried to build a Linux router but realized I can't make anything that takes up this little space, and the PCIe NICs needed to get enough Ethernet ports cost a lot more than buying an appliance. I built the NAS server myself, using an ASRock Rack mini-ITX motherboard with an AMD CPU that has graphics integrated onto the chip. It's got a 1 TB SSD cache and 8 spinning drives. The machines I use as servers are six Minisforum small form-factor PCs, similar to NUCs but fanless, cheaper, and with AMD 6-core processors. I don't think they outperform them or anything, but having more cores makes it easier to pin many VMs. These, plus the NAS server and television, are plugged into two Cisco switches that support 10 GbE and two UPSes that typically give me about 30-40 minutes in the event of the frequent power outages we get in Texas.

I've been tempted forever to try building a "real" server, but they're power-hungry and loud and way more than I need. The small form-factor PCs have done the job and I can run them in a closed cabinet that looks identical to all my other cabinets. The only modification I needed to do was install USB fans in the cabinets themselves to ventilate the heat, but they don't make any level of noise I can perceive.

I've got Aruba Instant-On WiFi access points, one for each floor of the house. They run five separate networks, one for work devices, one for IoT, one for televisions that aren't hard-wired, one for guests, and one for a main non-work WiFi network. Everything except the main network is forbidden from sending packets to local IPs. I don't know there's much benefit outside of that, but it allows me to set the television network to be optimized for streaming, low QoS on the IoT and guest networks, and make the main network WiFi 6. It's also pretty funny to ban porn on the guest network and see which houseguests notice and complain about it.

Being in a townhouse, the only walls I have running floor to ceiling are either shared or external, which means they're very tightly insulated, and running cable between floors was a pretty serious challenge. If you're ever going to do it, I would recommend doing it immediately upon moving in, before you do anything else, before you even move in furniture. I'm pretty serious about cable management and keeping things neat, so I run everything I can through walls and/or floors. No cable is loose except at the last mile.

As for the self-hosted services, I don't use any sort of on-prem management layer like vSphere or Proxmox or anything. It's all Arch Linux with libvirt running on hardware-accelerated KVM. I at least automated the Arch builds by putting provisioning scripts on USB drives with a "cidata" label, since the Arch installer comes with cloud-init and you can use this for unattended installs by just plugging in two drives at first boot instead of one. Most everything else runs on Kubernetes, with Longhorn as a storage provider. I use Ansible playbooks to install Kubernetes, and the applications are installed and configured with GitOps. So the external services are a Git server and MinIO on the NAS acting as a backup target for anything that will back up to S3, as well as a package mirror and image registry so I can provision everything without Internet access. I load-balance the control plane with kube-vip and Ingress with MetalLB, using the L2 advertisement features.
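
The "cidata" trick works because cloud-init's NoCloud datasource looks for a filesystem with that label containing a user-data and a meta-data file. A minimal sketch of generating such a seed (the contents are illustrative placeholders, not my actual provisioning):

    #!/usr/bin/env python3
    # Sketch of a NoCloud seed: write user-data and meta-data to a (FAT32) USB
    # stick labeled "cidata" and cloud-init picks them up on first boot.
    # Hostname, key, and script path below are illustrative placeholders.
    from pathlib import Path
    META_DATA = "instance-id: arch-node-01\nlocal-hostname: arch-node-01\n"
    USER_DATA = """#cloud-config
    users:
      - name: admin
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... admin@laptop
    runcmd:
      - [/root/provision.sh]
    """
    def write_seed(mountpoint="/mnt/usb"):
        Path(mountpoint, "meta-data").write_text(META_DATA)
        Path(mountpoint, "user-data").write_text(USER_DATA)
    if __name__ == "__main__":
        write_seed()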

Unbound on my router is configured as the default DNS for the LAN. This forwards to NextDNS. I block outbound port 53 to try and ensure everything is actually using it, but there isn't much you can do to block DNS over HTTPS without a MITM proxy. Pretty much every known ad, telemetry, and tracking domain is blackholed at both the Unbound and NextDNS layers. It doesn't seem to break too much. The Paramount+ app stopped working, but their actual content is streamable through Prime Video using the same subscription. I should probably just cancel it, but they have SEC and NFL football that I still geek out for, plus I've been rewatching Aeon Flux and Daria and they own the MTV back catalog.
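
If you block port 53 like this, it's easy to verify the rule actually bites from a LAN host. A crude sketch — TCP only, and as noted, DoH needs other measures; 8.8.8.8 is just an arbitrary public resolver:

    #!/usr/bin/env python3
    # Crude check that outbound DNS to public resolvers really is blocked:
    # try a public resolver on TCP 53 and expect the connection to fail.
    # Only covers TCP; UDP 53 and DNS-over-HTTPS aren't exercised here.
    import socket
    def port53_open(resolver="8.8.8.8", timeout=3):
        try:
            with socket.create_connection((resolver, 53), timeout=timeout):
                return True
        except OSError:
            return False
    if __name__ == "__main__":
        print("leak: port 53 reachable" if port53_open() else "outbound 53 blocked")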


try /r/homelab


Been homelabbing in some capacity for decades, current iteration ~5y in now.

Here are some pointers from me to you, with the assumption that you will end up with some form of personal production workloads and that extended unplanned downtime, security breaches and data-loss would be stress-inducing experiences you want to avoid.

- For any future hardware you add, get minimum 2 of everything. You never know when or why the extra will prove invaluable.

- Get a managed L2 PoE switch with more ports than you think you need. Used Brocade/Ruckus gear can be found on eBay, for example. Until you do, at your current scale you can get away with several cheaper, smaller unmanaged switches. But I'd consider getting more serious gear when you start growing out of 10-12 Ethernet ports, if you aren't already sick of cables and wall-warts by that point.

- The STH forum is an amazing resource. Identify and scour the megathreads relevant to you.

- Segregate your networks. Don't run your servers on the same network where you have your WiFi AP and user clients. Ideally, your servers won't even have a default gateway and will be firewalled to only allow internal traffic, even for outgoing connections. You will set up not only reverse proxies for the incoming (Caddy will be the easiest to get started with if it's all the same to you) but also for the outgoing (squid still seems to be the sane default for HTTP?). You can still proxy HTTPS via an HTTP CONNECT proxy without having to care about TLS, certs, or MitMing (see the sketch after this list).

- One piece of the above would be setting up a "bastion host": the gateway and firewall between your labnet and the world. You want something with a minimum of 2 NICs. My personal experience with USB NICs has not been great. I strongly advise you to consider this use case for the next piece of hardware you get alongside the switch. A cheap SBC (get minimum 2!) should be fine.

- WireGuard

- Use configuration management and resist the urge to manually configure stuff by SSHing and editing files. You want to be able to reproduce it and keep track of changes you made. Ansible is popular but there are many others - pick whatever feels smoother for you.

- Backups: do them. Learn about the 3-2-1 rule and apply it. You could let one of your RPis with an HDD be a dedicated backup sink.

- Virtualization: I'd say you shouldn't bother with this at all for now, unless learning virtualization is a goal in itself. Hypervisors like Proxmox make a lot of sense if you have 1 or 2 huge hosts. You have a larger number of smaller hosts already. Makes more sense to scale horizontally and use containers (Docker/LXC) to separate workloads within one host. If you get something more beefy than the Mac Mini down the line, it can start making sense to look into, though.

- Just Do It. You don't need any of the above to start iterating and prototyping today. Configuration management will make it easier to fearlessly set up and tear down your setup.
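
On the outgoing-proxy point a few bullets up: HTTPS from a firewalled server through squid really is just an HTTP CONNECT tunnel, so TLS stays end-to-end and no certs need touching. A minimal sketch (the bastion hostname and port are placeholders):

    #!/usr/bin/env python3
    # Sketch of HTTPS egress through an HTTP CONNECT proxy (e.g. squid on the
    # bastion). TLS is tunneled end-to-end; the proxy never sees plaintext.
    # Proxy host/port are placeholders for whatever your bastion runs.
    import urllib.request
    PROXY = "http://bastion.lab:3128"
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": PROXY, "https": PROXY}))
    resp = opener.open("https://example.com/", timeout=10)
    print(resp.status, len(resp.read()))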


As for my own main setup:

- 48 port Brocade PoE switch

- 2x "Bastion": N100-based Mini-PC with 4 NICs. Qemu+libvirt, running several VMs for proxies, firewalls, VPN gateways and such. These are the only hosts connected directly to the network. The 2 are identical and made redundant using VRRP.

- 9x Pi 4B-equivalent ARM SBCs: 3x Consul + 3x Vault + 3x Nomad servers. I'm considering virtualizing these onto 3x of something similar to the bastion hosts.

- 3x x86 server boards with ECC RAM and various drives, virtualized: GlusterFS servers, database servers, Nomad clients. A distributed filesystem was probably premature here and I regret GlusterFS, but it's still chugging along.

- A few other hosts: Nomad clients for workloads not suitable on the above for whatever reason.

- x86 mini PC with 2x 16 TB HDDs as a dedicated backup sink (zrepl)

All Debian base. Running most of my own needs on this platform.

I also have some home-automation stuff with HomeAssistant and Zigbee devices etc but that's a completely different setup and network.


Sorry but you should just swallow your pride and rent a box from your favorite cloud hosting megacorp if you really want to get the job done



