Home Lab Beginners guide (linuxblog.io)
636 points by ashitlerferad 7 months ago | 387 comments



The article is good, but it is intimidating by its size and scope. A homelab could be a single NUC on a desk. I have one with 64GB RAM and it can fit so many things. NUCs are very efficient to run 24/7, but become noisy with sustained CPU load. From there, one could grow the setup with OptiPlex or Precision Tower (can haz ECC) SFF machines from eBay. These Dell SFFs are very good: quite small yet proper desktops/servers with proper quiet fans, they can fit 10G Mellanox ConnectX-3 cards (also from eBay for $40), and you can stack one on top of another horizontally. Don't go older than OptiPlexes with 12th-gen CPUs; the electricity and space older ones take can become a constraint. The used ones with an i5-12500 are already very cheap. With LGA1700 you could put an i9-14900 (non-K) in there if needed.


In addition to that, I don't think people should overthink the rack as an essential component. I like racks as much as the next guy, and homelab racks look really cool, but for usability and learning, just go with whatever you like.

I personally have 4 boxes stacked in a corner that are connected to a "rack", which is just two rack side panels bolted to the back of a workbench that I screw components into, plus a NAS and multiple Raspberry Pis on a shelf. I really like the mess and have learned a lot.

Just use what you have and expand as you need. Racks are cool once you're into the hobby enough to care about style points.


From what I've seen on reddit's /r/homelab, the neat thing about racked systems is you can buy very cheap used hardware that is discarded from enterprise use. There's very little market for an old server or rackmount network switch, so you can get very good bargains.

Personally I buy new cheap mini PCs and consumer networking stuff. That's another way to get bargains, since consumer hardware is so cheap. But I'm a bit jealous of the enterprise capabilities like proper managed switches or remote managed servers.


The downside to the old enterprise servers is the power draw. I had 2 Dell blade server chassis and 1 HP blade server chassis that used Xeon-based blades. The electric bill was UNREAL with 10+ blades running at once.

I do use Dell PowerConnect managed switches from eBay, but tbh for what I do, TP-Link switches work really well and are cheap. They support VLANs, MAC-based VLANs, SNMP, SSH/web configuration, etc. - basically fully managed switches. And they have been rock solid as 'edge' switches for each room in the house. The PowerConnects basically work as top-of-rack switches for pulling all the different rooms/racks into one feed to the cable modem.


Agreed. The power draw, and often the noise, of enterprise equipment is ridiculous in a home setting. I try to stick to small form factor (SFF) or ultra-small form factor (USFF) computers whenever I can, which is what the author is using. I used to use Raspberry Pi boards for some things but always ran into problems w/ microSD cards dying, even with log2ram and buying industrial cards. I haven't had a single issue after switching to tiny x64 machines w/ real SSDs in them.

I picked up an HP managed switch that I thought I might use, but it draws more power than the ThinkCentre M93p Tiny computers do and it's actively cooled w/ super noisy 1U fan(s), so I'm going to ditch it.


Yeah - I skipped the rPi because back when I started replacing stuff, ARM still had some gotchas with software that hadn't been ported. I do use rPis for the 3D printers though.

I've found the TP-Link stuff to be good for home switches: cheap, quiet, and basically fully featured/managed. Stuff like SNMP, VLANs/MAC-based VLANs, port mirroring/monitoring, etc. Also SSH and web interfaces... I have 2-3 currently, 1 per room feeding back to the PowerConnects (NOISY), and haven't had a single issue in 3-5 years of use :-)


One of the coolest additions to my home lab recently was an Orange Pi 5 Plus/Pro - an 8-core 64-bit ARM CPU and 32GB of RAM, all with awesome power usage and zero noise, plus an up-to-date kernel. I skipped the rPi as well a while back, but these newer boards are worth looking at! :)


Oh nice! - I'll have to take a look

Thanks!


The Omada line makes pretty decent APs too. A step above Ubiquiti in terms of reliability but also dead simple. I'd have gone Ruckus if I could have, but the mailman stole my $200 eBay R550 score and I didn't feel like shelling out $500 for one.


Which x64 machines have you been using? I'm looking for a successor to PC Engines APUs


I've been sticking w/ very compact units, mainly ThinkCentre M series "Tiny" and Dell OptiPlex "Micro" form factors, which are practically identical in size. Both use a laptop power supply. You can get current versions for many hundreds of dollars or generation(s)-old ones for as cheap as $30 USD. My most recent ones have older 3rd- & 4th-gen i5 & i7 CPUs, but they still blow a current Raspberry Pi, etc. out of the water, and they have M.2 for networking & SATA for storage. A Dell I have that's a generation newer than these adds NVMe too. It's hard finding these small PCs with multiple Ethernet ports, but I just added a gigabit Ethernet M.2 card to one of them.


On paper, these look promising; no direct experience. Also on Amazon US/CA.

$170, N100, 16GB LPDDR5, dual 2.5GbE, https://aoostar.com/products/aoostar-n-box-pro-intel-n100-mi...

$210, N100, dual 3.5" SATA, M.2 NVME, dual 2.5GbE, https://aoostar.com/products/aoostar-r1-2bay-nas-intel-n100-...

$250, Ryzen 5700U, https://aoostar.com/products/aoostar-cyber-amd-ryzen-7-5700u...


I love the ODROID H series [0]. Cheap, quiet, and only about 4W idle draw.

0) https://ameridroid.com/products/odroid-h3


I was also looking for a successor to the PC Engines APUs and came across https://teklager.se/en/ that lists some possible alternatives that you might find interesting.

Personally I was looking to build a router so I ended up buying a fanless N100 based mini PC from Aliexpress (e.g.: search term is "N100 firewall appliance") and have been very satisfied with it so far (Proxmox homelab with OPNsense running as a VM).
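
In case it saves anyone some searching: the NIC handling for that kind of setup is only a couple of commands. A rough sketch below; the VM ID (101), PCI address, and bridge names are made up, and the PCIe passthrough line assumes a q35 machine type and IOMMU enabled in the BIOS/bootloader.

    # find the PCI address of the port you want to dedicate to OPNsense
    lspci -nn | grep -i ethernet
    # hand that port to the firewall VM (VM 101 here), then restart the VM
    qm set 101 --hostpci0 0000:03:00.0,pcie=1
    # or, simpler: keep the NICs on the host and give the VM virtio bridges
    qm set 101 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1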


The N100 is the current thing. It is in a large variety of form factors with a variety of available connectivity options.

N100 sips power while performing as well as a 60w quad core Skylake.


Power draw also comes with terrible noise. They were designed to be in a server room, not a living/bed room.


> The down side to the old enterprise servers is the power draw.

It depends on the generation of the gear. Stuff coming off-lease in the last 3 - 4 years has been of a generation where lower power draw became a major selling point. In Dell land, for example, the power draw differences are dramatic between the Rx10, Rx20, and newer series. (Basically, throw Rx10 servers in the trash. Rx20 servers are better but still not great.)


What homelab people who don't work with/in a data center daily don't realize is how noisy and power-hungry the equipment they procure is.

They got a managed switch, nice. It has VLANs, stacking, etc., nice. How many 48-port switches can you possibly stack in a home? You got a Sun ZFS 7420. Great. That's a rack with two controllers with 1TB RAM each. That thing screams. An entry-level 7320 is maybe workable, but you can't do a 7420 at home. Same for HPE/Dell/NetApp storage boxen.

Doing InfiniBand at home is another thing I can't understand. The fiber cables are extremely expensive. Copper is range-limited and thick, and unless you do HPC, its latency and speed don't mean much. Go fiber Ethernet. Much more practical.

I understand the curiosity and willingness to play, but it's not that practical to play with these at home (noise-, space-, heat-, and power-wise). I also buy high-end consumer hardware. TP-Link has 4/8-port smart managed SOHO switches, for example, and they even have PoE. Power your RasPi cluster via PoE with fewer cables and get some respectable computing power with no noise. Neat! No?

Dell's rack servers are generally not noisy at idle, but a 2U server under some real load is unbearable. What's funny is that an Intel N95 has about the same performance as an i7-3770K, with better SIMD support, while being 4x more efficient, and that thing is not slow by any means.


I don't think fiber cables are expensive. I ran CAT6 in my house and at the same time pulled an OS2 single-mode 30m fiber cable to my office for core switching. The 30m fiber cable was $17 on Amazon, plus two 10G-LR modules at $18 each. The equivalent CAT6 setup is more power-hungry, has less range, the cable is the same damn price, and the 10GBASE-T SFP->RJ45 transceivers are twice the price of the LR transceivers at $50 each.


Infiniband uses its own cables with dedicated/integrated transceivers. It's not compatible with Ethernet and can't be swapped (unless you plan to talk plain Ethernet with your IB card, which is possible).

A 5m, 200gbps cable is $1,149.00 a pop [0].

Mind you, this is neither the longest, nor the fastest cable.

[0]: https://esaitech.com/products/mellanoxmfs1s00-h005v-200gbps-...

More cables: https://esaitech.com/search?type=product&options%5Bprefix%5D...


The caveat to this is noise. People who have never seen a rack server in person don't understand how loud they really are. PSUs, case fans, CPU fans (if applicable) - all gotta go. You kind of have to go into it knowing you are going to spend an additional $100-200 replacing it all, and that's assuming you will not have to 3D print a custom bracket to house the new fans.


Yep, the noise is serious. It may not even seem that bad at first, but if you're going to hear it for prolonged periods, it starts to get really unbearable after 20 minutes. I don't know anybody who isn't irritable after a couple of hours.

And the heat is enough to make a room very uncomfortable. I started my home lab in my office, and it didn't take long to realize that was not an option. Luckily I was able to find a nook under the stairs where I could stash it, but the heat and noise are the polar opposite of a productive office environment.


I feel like it's a harder sell than it used to be because post-Ryzen there have been real increases in CPU performance. Modern NUCs and prebuilts having native NVMe is a boon too.

All depends on what you’re going for though.


I haven't seen any mention of going to scrap yards in this thread, but just about every general-purpose yard I've been in has had racks of some kind. Hard to beat scrap price, as racks themselves tend to be light when empty.


I did just the same. Built myself a little movable 6 node rack out of wood.

With 3D-printed power-brick holders on the sides and a switch with a 10G uplink so all servers can use their full 1G NICs.

It warms my home office in the colder months and can be put in the garage in the warmer months.

Photo front: https://pictshare.net/qi7zv2.png

Photo back: https://pictshare.net/llloiw.png

Photo of the 3d printed power brick holders: https://pictshare.net/kgffy6.png

[edit] Specs:

- 80 Watts during normal load

- 50 cores

- 9TB nvme storage

- 300GB RAM

- 3 of them are running proxmox

- 3 of them Docker swarm


I can't quite see which ThinkCentres those are, but the M920x can fit a 10GbE PCIe NIC and provide it enough power. I have one on my desk with 2x2TB NVMe, 32GB RAM and a 10GbE fiber connection to it.


In my experience, homelab means high-powered or at least a lot of compute. People like me who have tiny, power-saving devices are in the self-hosted communities.

There's no definition like that, but it happens to turn out that way.


You can add "data hoarder" to the Venn diagram too!


Fair enough, kinda works with neither group. Usually not enough compute for homelab, but care way too much about storage for selfhosted stuff.


Racks are nice from an organization perspective as well, but agreed it’s an unnecessary complication and expense and limits your options.

I run a half dozen Lenovo ThinkCentre tiny PCs I picked up off eBay super cheap. Sure, I could go buy some rack mounts for them all… to the tune of $750. It would look really nice. That’s about it. If I’d decided that was a “requirement” I never would have done any of this.

For anyone considering diving in to any of this stuff, don’t let the pursuit of “perfect” stop you before you’ve even started.

My PCs are quite content sitting in a pile with some scraps of wood shoved between to help with airflow.


Or as a compromise, you can buy strips of appropriately spaced/sized square-punched metal angle. Before I moved I built a shelf unit designed so that the depth of the shelf was correct for a rack (19" between posts plus a bit for the sheet metal thickness); I had access to the side, so I had a combination shelf & rack. People do similarly with Ikea tables ('lack rack'). The nice thing about the shelves was that I could buy a short strip (it was maybe 12U) and if I'd outgrown it I could've just removed a shelf and added another strip.


Thing is, you can get pre-built racks on Amazon for < $100 that are both sturdy and designed for it, with removable sides, or casters, or whatever. For a little more you can get nice wooden audio racks that are spouse-friendly.. ish.


Ok? I chose to build something custom-fit to my space, and the rack mounting strips were <£10 on eBay. Obviously the wood brings it more to the '<$100' realm, but I was also just building a shelving unit. My point wasn't really cost anyway, I didn't mention it; my point was just that 'rack' doesn't have to mean a floor-standing 4-sided 42U behemoth from a supplier surprised to be delivering to a residential address, on which point we're agreeing.


When I started using SBCs for my home setup, I skipped rPis. At the time, being ARM-based was a bit limiting (IIRC, some stuff still hadn't been ported to ARM). So I went with the ODROID H2/H3 [0]: x86-based, lots of memory to work with, and much cheaper than a NUC - and it uses much less power than my old Dell/HP blade servers from eBay :-)

0) https://ameridroid.com/products/odroid-h3


In the spirit of "your setup, should be your setup. As traditional or as custom as you'd like!" as mentioned in the article, I went with a vertical-mount rack. It's not enclosed, so it has enough ventilation.

The advantage with a vertical rack for me was space savings. Key parts of my home lab are near the low voltage panel (lucky to have in newer house) which does have foot traffic from other household members. Much less risk of someone bumping into the rack.


I didn't care about racks either, but I got a 6u desktop rack and mounted a couple of things in it and turns out I love it.

- shelf with two mini systems
- power strip with individual switches
- power strip always-on outlets front/back
- ethernet switch
- "broom" thing to feed cables through
- many-port USB 3 hub

other systems are under the desk.

It's really convenient. I have hooked some systems to power switches, others are always on. It is organized, takes up space but doesn't fall over, and gives me desktop access to ports if I need them.

That said, many years ago I had a server setup where I really organized everything, zip ties for all cables for neatness, etc. It turned out to be a pain every time I rewired something (cut zip ties, etc.). I eventually learned there's a balance you have to maintain between neat-and-organized and flexibility.


From a home perspective, the best way to think about a (full-depth four-post) rack is as extremely high-density shelving for specific types of things. You won't make any gains with one or two machines, or several SBCs in small cases, that you couldn't have made with similar non-rack shelves. Having a few pieces of gear with rack ears not in a rack will forever call to you, though. In general I'd say either commit to it (42U, and understand you'll be paying a premium for shelves/gear/accessories), or regular shelving will suit you fine.


Also there are great fanless cases for NUCs and RPis.

Actually, Asus has taken over NUCs from Intel and they are releasing fanless machines, so now there is not even the need to mod them.


I am honestly kind of anti-rack at this point.

"Deals" on used enterprise are IME dreadful unless you can get in on your own employer offloading old stuff, which requires you to a) even have an onprem setup at all and b) be friendly enough with the IT/NOC people that they give you a tip when it's going to happen. (For me b seems to happen naturally so it's really all on a).

eBay is a fool's game, terrible prices. Craigslist is usually the same sellers too. Maybe some people live behind an Amazon datacenter or something; I unfortunately live in a city where the only reason we have Amazon 2-day shipping is because we're a major shipping hub. And anything that does resemble a good deal is ancient Opterons that are literally e-waste these days.

So anyway, sorry, my whole point here is that most people actually aren't going to be running a ton of old enterprise stuff that is rackmountable with good sliding rails. You're either building whiteboxes in a 4U Rosewill case or you're putting stuff on rack-mountable shelves. The Rosewill cases theoretically have sliding rails but they're awful, just awful. So you're either using shelf rails or literal shelves, which tends to throw off efficient utilization of the U's available in the rack - shelf rails usually result in a dead U either above or below them, and shelves obviously have similar effects.

So yeah. As someone who owns a rack (4-post open rack), if you asked me, I'd say just go buy one of those big heavy-duty Lowe's shelf units and ask for extra shelf inserts so you can have more, but less-tall, shelves. Boom, you now have something that'll hold anything and is way better than fucking around with rack nuts and shelf rails and stuff.

Love homelabs in general, though.


10-15 years ago the deals were real. I picked up a lot of off-lease and/or used stuff on eBay really cheap. As more and more companies move to cloud, and as more people get into homelabs, prices on decent gear have gone up. I was starting to think the good times were over in 2018, and in 2020 I stopped defaulting to eBay when I saw a great lot that was being broken up and sold cheap suddenly get pulled and relisted at much higher prices - "the scrappers are on to us" is what I realized.

These days I've been replacing the guts of my old servers with mini-itx setups for the most part. It does what I need and lets me shove 2 or 3 nodes into a 2u case so I can keep the aesthetic of "a personal rack".


FWIW Supermicro slides work with those 4U Rosewill cases. I was pushed in that direction because my rack is just slightly narrower than 19 inches.

Honestly I don't mind wasting 1 or 2U between machines because the tops of server cases function as great "temporary" shelves.

I do agree the meme of used enterprise deals is completely overblown, unless you actually want 15+ compute nodes and are prepared to suffer power, heat, and noise for the density. Honestly I wish they made those Rosewill cases in 5U so I could put any full height cooler in there instead of being somewhat restricted.


Get a DIN-rail and some mounting brackets


This is because like many things this is a rich man's hobby. See mechanical keyboards for example.


Whereas a "poor man's hobby" is a 1970s muscle car and a home workshop...

Sure you can spend thousands on racks of servers and UPS/generators but you can also spend less than $500.

Personally I rent a unit in a quadplex and have:

    Old 42" rack -  $50 from old employer
    10 year old HP Microserver G7 for storage - $250 new
    repurposed old desktop virtual machines - Free
    Cheap dumb switch - $50



Most of the things on r/homelab and elsewhere are not wooden racks. Most are expensive server racks retrofitted to be vanity projects to serve videos to at most 2 people in a home. It's fine, it's just that people should be honest about what these are.


Interesting you mention 2 people.

I got kids, so I got a lot of toys in the living room. The small office I do have contains a printer, a PC, a space for my laptop plus dock, a server, a switch, and a small cluster. But what I don't have is more space. I don't have space for a rack.

Furthermore, I've been replacing industrial-grade fans (in switches, the desktop, the cluster and the like), which make a lot of noise (and cool very well), with Noctua fans, which are expensive and have less CFM but are silent enough.


I don't know if I'd call myself a "hobbyist" for keyboards, but my mechanical keyboard that I am typing this on, and that I bought in 2021, cost me $35 after tax. Obviously not the cheapest keyboard in the world, but also within the reach of nearly any employed person in a rich country like the US or UK.

I posted this in a sibling comment, but it doesn't really have to be expensive regardless. I got started with the homelab stuff with a partially broken laptop. Considering that that laptop was about to go into the trash, I consider that (sort of) free to get started, but even if you don't have an old laptop lying around you can purchase a used "Craigslist Special" for basically nothing, generally less than $200, and often much much much less because you probably have a friend or coworker that has an old laptop lying around that they'd be happy if you took off their hands. Old laptops are kind of in this frustrating category of being "too good to throw away, but not good enough to use", so people will often hoard them inadvertently. Additionally, they're extremely ubiquitous; most households in the US have at least one laptop, and often at least one per person.

Even rack mount servers (used) cost very little, at least upfront. My server with 128gb of RAM and 24 cores was $150, but of course the power draw is where most of the cost for that comes from, so for those you could argue that they're a rich man's hobby.

ETA:

For those interested, the listing I ordered from a few years ago on Amazon appears to have mutated into a different item, but this one appears to be an identical product. I think for the price it's actually a really decent keyboard. I'm able to get about 120WPM on MonkeyType with it, for what it's worth.

https://www.amazon.com/Mechanical-Keyboard-Ultra-Slim-Anti-G...


You can get into custom mechanical keyboards for under 50 dollars. I don't think you understand what you are talking about.


Don't be disingenuous; he's obviously not referring to that part of the mechanical keyboard community.


I'm not being disingenuous. Calling it a "rich man's hobby" might dissuade people from partaking which I wouldn't want to happen. Yes, there is a high-end to mechanical keyboards where it can get really expensive but is cooking also a rich person's hobby because there are $1000 knives? If you don't want to play in that side of the pool you don't have to. You can get a lot of mileage out of $50-$100 in mechanical keyboards.


Intel NUCs are so *insanely* good for homelabbers.

I got a few recently, and they're just great, particularly in terms of power usage. I hooked mine to Tasmota-powered smart plugs, and they idle at something like 6W... Granted, I always tune hardware (from BIOS config) and software (tuned profile) for low power... But long story short, my NUCs rarely spike over 30W.
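
For the software side, it's roughly this kind of thing; the package and profile names are the common Debian/Ubuntu ones, so treat it as a sketch rather than a recipe:

    # one-off: see what's drawing power and flip the easy runtime knobs
    apt install powertop tuned
    powertop --auto-tune
    # persistent: pick a power-oriented tuned profile and confirm it stuck
    tuned-adm profile powersave
    tuned-adm active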

Literally half of one of those old tungsten-based lightbulbs.


Personally, I thought the NUC-style boxes were really expensive. I like the ODROID H2/H3 SBCs [0] - they have much less compute power, but support large memory (DDR4 up to 64GB), SSD/NVMe, and dual onboard NICs (up to 2.5Gbps), are x86-based and just sip power (2 watts idle, 15W stress, 18W CPU+GPU stress).

0: https://ameridroid.com/products/odroid-h3


NUCs may be expensive, until you see a 40% discount on Amazon :) I got mine this way. Now that they are discontinued, one may expect to get them cheap on eBay. Anyway, I'm talking about the box itself and MB/CPU only. Vendors/people often sell a good box with bad RAM/disks at a very low price.


They were discontinued by Intel but the brand was picked up by Asus [0], so not discontinued [1].

[0] https://www.asus.com/us/content/nuc-overview/ [1] https://www.howtogeek.com/say-goodbye-to-intel-nuc-as-asus-t...


I got a Ryzen 7 5800H with 32GB of RAM and a 1TB drive for 320€... I don't have the dual onboard NIC, but besides that it's quite great CPU-wise with Proxmox: multiple VMs on it, doing transcoding, DNS, handling downloads and search, Home Assistant. Of course transcoding 4K live is costly, but that's an exception. I can use NVMe and SATA drives, which is nice.

And I can upgrade the RAM and use ECC.

Not bad, all in all!

Still, I got a Dell SFF with an i5 and 16GB of RAM for less than 100€ - upgraded it with a 4-port 2.5Gbps NIC, which will be good for OPNsense... if I don't switch to a Mellanox ConnectX-3.


Very Nice :-)

I don't really need that much horsepower (My stuff is like 90% idle - git server, etc.) but I _really_ appreciate the lower noise and power usage compared to my old Xeon blades


I didn't know ODROID was in this market, that's awesome! Definitely checking them out next time I need hardware, the H3 looks perfect.


I sound like a shill - I have no affiliation, just a customer but I _LOVE_ the H series :-)

plenty of power for my needs, silent, low power ... and CHEAP! I especially love the large RAM capacity as it lets me run a lot of docker/vm/etc without swapping. Most of my stuff is idle like 90% of the time (stuff like git servers, etc) so the CPU is fine for me.

I've had mine for a few years now (I had the first gen..) and never any problems!


Don't worry, I'm actually (the original?) ODROID shill!

https://news.ycombinator.com/item?id=32619740

:)

Glad to hear they never stopped doing what they were good at, sounds like they've gotten even better!


One of us...One of us :-)

I really hope more people hear of the H2/H3 stuff...I'd love for Odroid to keep making/improving them! :-D


Oh hey - that post was pretty cool :-)

Amazing how much a kind gesture sticks with you! Glad to see stuff like that these days :-D


Not only do NUCs sip power, but they're embarrassingly fast for most "I want to play with servers" jobs. RAM is cheap and 32GB goes a long way.


Any power tuning tips?


How do they compare to SFF (like thinkcentre tiny) or other minipcs (e.g. Minisforum um790)? Power usage seems to be fairly similar (5-7W) but maybe there are less obvious perks?


My first homelab thing was a laptop with a broken monitor. It wasn't terribly hard to get Ubuntu Server working on it; it had a gigabit Ethernet port built in, and it had USB 3.0, so it wasn't too hard to get reasonably good hard drive speeds to use it as a NAS.
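
For reference, if you go the Samba route the NAS part is just a few lines of config; the share name, path and user below are made up (install samba, add the user with smbpasswd -a, then restart smbd):

    # /etc/samba/smb.conf - minimal share for a USB drive
    [media]
        path = /mnt/usbdrive
        read only = no
        guest ok = no
        valid users = me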

You can buy old used laptops (especially with broken monitors) for basically nothing (or maybe literally nothing if you already have one, or you have a friend or family member who has one they're willing to donate). If your goal is just to use it for Plex, it will work fine, and the power draw on laptops (particularly when idle) isn't that much more than a NUC's.

I use a proper rack mount server now largely because I thought they were neat, but I think for most people an old laptop is fine.


That's my setup. I decided to use old laptops instead of proper servers because power cuts are common in the region I live in, especially when it rains. Laptops have a built-in UPS (their batteries), which can hold power for some minutes in the worst cases. It sounds like a poor man's home lab, but it works.


I wonder if it is hard to configure graceful shutdown when batteries run low and automatically come back up when power is restored?
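
The shutdown half isn't too bad on Linux: a small script run from cron or a systemd timer can watch the battery and power off cleanly. A rough sketch, assuming the battery shows up as BAT0 and picking an arbitrary 15% threshold:

    #!/bin/sh
    # run every few minutes from cron or a systemd timer
    BAT=/sys/class/power_supply/BAT0
    CAP=$(cat "$BAT/capacity")
    STATUS=$(cat "$BAT/status")
    # shut down cleanly once we're discharging below the threshold
    if [ "$STATUS" = "Discharging" ] && [ "$CAP" -le 15 ]; then
        shutdown -h now "battery at ${CAP}%, shutting down"
    fi

Coming back up automatically depends on the firmware: some laptops have an "AC recovery" / "power on after power loss" option in the BIOS; otherwise Wake-on-LAN from another always-on box is a workaround.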


Honestly, I think that's probably smarter than what I do now. My personal laptop idles around 15-20W, tends to peak at about 80W. If I bought a dozen laptops, the total energy usage would be roughly the same as my server at idle, and I could use Docker Swarm or something to glue everything together, while also having a lot more power resiliency because of the batteries (like you mentioned).

I also think it's good to prevent ewaste. There's a lot of raw computing power that just ends up in landfills. It's almost certainly better for the environment to get as much use out of existing stuff as possible than it is to buy new stuff. I can't imagine it's good for the world for a bunch of old lithium batteries to lie dormant.

If I can find a good deal on a dozen old laptops, I might honestly put this plan into action.


I agree. For most people just starting out, it's a lot more worthwhile to get a single cheapo repurposed desktop or a single Raspberry Pi to run PiHole or something on and then expand from there. My homelab[0] started as a single Pi running PiHole and has expanded to four machines running everything I need from Jellyfin to Calibre to DNS, etc.

That being said, when I finally got around to rackmounting and upgrading some of the other hardware in my lab, this "beginner"'s guide was really helpful.

[0] https://blog.janissary.xyz/posts/homelab-0


Please, do not use a Raspberry Pi for a homelab unless you are 100% sure your workload is OK with it. I've just sold mine after ~2 years of it sitting in a box in a closet. It's just too weak, too useless. I value my power socket slot more than the RPi. If ARM is important, especially Apple's M-series, the lowest Mac Mini is not that expensive. The RPi is close to zero in performance; it could be just some unnoticeable VM in Proxmox/another hypervisor, performance-wise.


I'll second this. My "home lab" is several small computers crammed wherever they can either be totally hidden, or where they can justify their presence. An old Celeron NUC lives under my couch, and it runs Pi-Hole, Syncthing, and some diagnostic stuff. It's extremely useful, and the impact on my electric bill is negligible. A Lenovo mini PC lives behind my TV, and serves double duty as a Syncthing node and an HTPC. I'll probably make it do other stuff eventually.

It's not the most sophisticated setup on earth, but it works great, it's dirt cheap, and it's apartment-friendly.


Personally, I really like the ODROID H2/H3 units (0). Much cheaper than NUCs, DDR4 SO-DIMMs up to 64GB, very low power draw, and x86-based so no compatibility issues. Nowhere near as powerful as desktops, but more than enough for running basic home lab stuff.

I used to use a half rack full of Xeon-based Dell and HP blades - but the blade chassis are huge/heavy and the power usage was HUGE!

0) https://ameridroid.com/products/odroid-h3


How much is "huge"? My HP Proliant idles around 175W, and even running Folding@Home it only peaks at around 300W. Considering that I used lots of incandescent light bulbs throughout my house, each about 60W, 175W doesn't really seem that bad to me.

Granted, this is one server. If you have a bunch of them running at high load all at once then of course that can get pretty pricey power-wise.


Not the person you are replying to, but "huge" will largely depend on how much you pay for electricity. Electricity is very expensive in Europe.

In Sydney, Australia, running your server just at idle would cost just over AUD $50 a month, assuming it is powered by the grid.
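
(Back of the envelope: 175 W continuous is about 175 × 24 × 30 ≈ 126 kWh a month, so at a typical Sydney retail rate of roughly AUD $0.40/kWh that works out to around AUD $50.)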

For about AUD $400 you could replace it with a 7th- or 8th-gen Intel i7 mini PC and throw a 10GbE NIC in it, and that thing will cost you AUD $7 a month to run.

It gets tricky though if you want ECC RAM. Most of the cheaper low-power systems available take UDIMMs, and ECC UDIMMs become prohibitively expensive if you need a decent amount of RAM.


> Considering that I used lots of incandescent light bulbs throughout my house, each about 60W, 175W doesn't really seem that bad to me.

Well, your monthly power bill would be much higher than needed with either those light bulbs or a server like that.

Those light bulbs can be cheap to replace with more efficient variants. This was already true approx 30 years ago, back in the 90s.

I got solar power. Without that, I wouldn't be having a Xeon server right now.

Don't get me wrong, to each their own! That is the fun thing with homelab. You can optimize for power, price, noise, size, heat, or a combination of these. Then you can decide to have a couple of machines dedicated or have one large VM. Then you pick the OSes.

For example, right now I am settled on my homelab server. I got a cluster to play with, too. I got a 10 Gbit switch (which I only turn on if I need the speed; otherwise I use a second-hand Brocade with a 10 Gbit SFP+ uplink). Loads of fun. But my router is just an EdgeRouter Lite. EdgeOS is dead, and it doesn't have failover, nor does it run a VM. If I run it in bridge mode (which I do now) and use WireGuard, it doesn't saturate 1 Gbit at all. So I am looking for an alternative which is a bit future-proof, i.e. with SFP+, because I will get fiber instead of DSL eventually.


A bit of a typo, I meant to write that I used to have dozens of 60W light bulbs; all the lights in my house are LEDs now.

I've been debating trying to find a smaller, low-power computer to reduce energy use; it's hard to find a small one that has 10GbE or that would take a PCIe card. Maybe I can find a Thunderbolt one.


Right, I guess most of us used to, but we save a lot of money that way. Still, why waste money on energy when you don't have to, if you could instead invest some money in different, more efficient hardware?

Yeah, Thunderbolt would work. I use LAG on my NAS with 4x 1 Gbit, and I got some devices with USB-C to 2.5 Gbit NICs; these are cheap (30 EUR or so). I'd then use that managed 10 Gbit switch which I keep off by default. If I need the speed, I turn it on. Thunderbolt is even better (way faster and cheaper, multipurpose, though you'd need to set up TCP/IP over USB), but I prefer AMD over Intel.

If you want a TB4 one with Intel, there are many Intel NUCs available, very cheap on Ali. Many of the ones I saw not from Ali are just rebrands (some, admittedly, with Coreboot).


Annoyingly, I can't really do 2.5G because when I was looking to upgrade my network, I ordered a bunch of 15-year-old 10GbE switches, because I mistakenly assumed that they would work with 2.5G as well. It turns out that 2.5GbE came after 10GbE, so compatibility is not guaranteed, and I ended up having to upgrade everything in my house to 10GbE to make the switches work. Fortunately, used 10GbE cards from data centers go for around $40 on eBay. Thunderbolt cards cost a lot more, so I only have one for my MacBook, and it's a big chunky boy that weighs half a pound.

I was looking into some of those single-purpose computers, some of which will sometimes have a PCIe slot.


The Dell chassis has a 2500W power supply for 6 blades. The HP chassis has a 3000W PSU for 8 blades.

I actually ran 3-4 extra circuits to the computer room to run it all...


One additional NVMe slot and 10GbE NICs would make the H3+ an object of absolute perfection. So close, and yet so far.


And the n100 cpu


The N100 is overhyped IMO; I would prefer vendors focus on the connectivity and application requirements first and let the choice of SoC follow. The most important factor for this type of application is really PCIe lanes. If someone (other than Broadcom) could introduce a basic fanout switch with a PCIe 5.0 upstream and loads of PCIe 3.0 downstream ports, that would hopefully open the floodgates.


Heh - maybe in the H4 ?? :-) Though you can prolly stick a 10GB nic in the PCIE slot ??

Edit: Oh shit...just checked, it does NOT have a PCIE slot..I was sure it did :-(


My homelab (or infrastructure if you will) consists of 2 ARM SBCs distributed geographically and connected via a VPN & syncthing. They are fanless, powerful enough (8 cores and 8GB of RAM is plenty for what I do), and handle 90% of the automation my old desktop was doing.

I don't need Proxmox, tons of containers or anything. If I do, I can always throw a N95 into the mix with ample RAM and storage.


Unless something has recently changed, the SFF Precisions and OptiPlexes use weird, smaller PSUs and generally have really poor airflow. The lower-end Precision mid-towers also flow air terribly and get hot and whiny. There's no way I would recommend an SFF system if you can get the MT variant, as your options for fixing an MT system when the Dell PSU inevitably fails are much better than getting a regular ATX PSU, cutting a hole to feed the cords in, and bolting it to the side of the case, which is what I've done repeatedly for SFF Dell systems with blown PSUs.

Additionally, one challenge I have with all of these is that none of them have anything like a BMC. If you have more than one system, moving peripherals around sucks, and KVM technology is kind of iffy these days. It seemed easier to just jam an old Supermicro motherboard that had IPMI enabled in a reasonable ATX case with a lower power Xeon variant, and call it a day.


Thermals are perfect with these Dell SFF things. I tested it a couple of times - not 24/7 at 100%, but e.g. for 5 mins at 100% before vacuuming the outsides and after. NUCs are much more sensitive to dust; I have to *push* air through my NUC like every half a year (not just vacuum-pull). But the Dells are fine.

Just do not put CPUs with >65W TDP in there, or power-hungry PCI cards.


A BMC is not a requirement these days, as there are options like using a PiKVM or AMT/vPro on certified Intel systems.

Personally I turn IPMI off. The difference at idle can be up to 20W. Not to mention the outdated interfaces are a security risk.


I actually did just that and set up my homelab with my "old" laptop [1].

[1] https://thin.computer/index.php/2024/03/08/self-hosting-my-s...


I've been thinking about upgrading my current "homelab" (just an old PC), and I've been waiting for a small form factor with an integrated power supply and a bunch of NVMe slots. Does that exist yet?


For my homelab, I bought an OptiPlex 7000 with 3 NVMe slots (one PCIe 4.0, one 3.0, one 3.0 short 2242), 1 PCIe x16, 1 PCIe x4, 3 SATA and 3 DisplayPorts for 300 EUR nearly 2 years ago.

It was so good actually that I've made it my main PC. I already had 3 monitors and this SFF just worked fine with them. No games, of course.

I also have a Precision Tower with a Xeon/64GB ECC RAM and 1 native NVMe slot. I use PCIe-to-NVMe adapters there (2 slots); it works well with a ZFS mirror.


OptiPlex seemed to be the best option when I was browsing around.


> …I've been waiting for a small form factor with an integrated power supply and a bunch of NVMe slots. Does that exist yet?

Have you considered using a NAS as your homelab? I run a few containers (no VMs yet, although I can) and just do that on my QNAP.

The TBS-h574TX (https://www.qnap.com/en-us/product/tbs-h574tx) has five E1.S/M.2 slots. The HS-453DX-8G has two SATA 3.5-inch drive bays and two M.2 2280 SATA SSD slots (https://www.qnap.com/en/product/hs-453dx). Or you can just throw SATA SSDs into any NAS.


These NASbooks are sooooo close to what I want!

My problem is that none of them have an internal power supply. They all require a power adapter that's decidedly less easy to source than an IEC cord. I have had 3 separate experiences where I give someone instructions to grab a box and the power adapter... and they show up with the box and without the power adapter (or with a power adapter in a box of similar power adapters and no volt/amp/size rating on the box to disambiguate). In short, the internal power supply is something I actually care about.


It's harder and harder to find small computers with internal power, because they've found that nobody counts the power brick in the "size" (remember the original Xbox?).

The best bet might be to wait until they've all finalized on USB-C.


Yep, that's probably how it has to be. Hopefully USB4NET can save us from "gigabit ethernet is enough for everyone," too.


It's easier to turn an OptiPlex into a NAS. Actually, you can pass through the entire SATA controller as a PCI device into TrueNAS running in a Proxmox VM, so it sees the SATA ports natively, with all the goodies such as SMART.
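
The passthrough itself is only a couple of commands once IOMMU is enabled; a rough sketch (the VM ID and PCI address are placeholders, find yours with lspci):

    # identify the onboard SATA controller
    lspci -nn | grep -i sata
    # hand the whole controller to the TrueNAS VM (VM 110 here)
    qm set 110 --hostpci0 0000:00:17.0
    # inside the guest the disks then appear as plain /dev/sdX, SMART included

One caveat: every disk hanging off that controller goes with it, so the Proxmox boot disk has to live on something else (NVMe, USB, etc.).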


If you just need to run a few containers, you might want to take a look at TrueNas (Scale) rather than Proxmox.


Depending on the use case, this is definitely an option. I personally prefer using Proxmox under my Container Orchestration hardware because, as it seems to always eventually do, my home lab has become production, and being able to do software patches of the underlying system without service outages is really desirable. I've been agonizing and going back and forth between various potential solutions for this for a couple months now, with the most recent attempt being "Deploy k3s, with a fully separate control plane from the worker nodes, and run all of those atop a Proxmox cluster because being able to migrate VMs across nodes and also stand up isolated workloads to test out other solutions makes changing things in the future far less risky".

That said, I was running the Homelab on effectively a single Unraid hyperconverged NAS/compute layer before then, and that worked wonders for quite a long time. I wouldn't consider Unraid my first choice now, given the advancements in TrueNAS SCALE in the time since I chose it as the solution, but the concept of NAS + Container Orchestrator as hyperconverged homelab today is quite attractive.


> ... being able to do software patches of the underlying system without service outages is really desirable.

Yeah, that's what I was looking at Proxmox (8.x) for recently as well, but it never made it through basic qualification testing. VM migrations would randomly hang forever about 25% of the time. Definitely not a case of bad network interconnect either. :(

I'm using Ryzen 5000 series cpus on all the test gear though, and recently saw a mention that it may have been a known problem that's fixed. But not super keen on wasting more time.

What sort of cpus are your hosts using, and are you using your VMs with Ceph as their storage? :)


Great question! I haven't gotten that far yet. I have Proxmox running on a 12th gen i5 currently, and my other two compute nodes, a 3rd gen i7 laptop and a 7th gen i5, are currently still serving production workloads, so I haven't had a chance to set up a cluster yet. That said, I don't mind limiting features to lowest common denominator for the k3s control plane node, as I don't need gobs of power for it or advanced CPU features, and I imagine that's the pain you might be facing (though I'm new to Proxmox, so who can say?)

That said, I do have a NAS that I can provide both NFS and iSCSI storage to, so I imagine if I run NFS as the backing storage, I shouldn't have to worry about clustered file systems. Each compute does have local storage, but the plan is to provide that via a CSI for workloads that don't need to be highly available or where fault tolerance isn't a concern.


As a data point, the vm migration hangs aren't happening in testing any more. Looks like there really was a fix added to Proxmox recently.

Can proceed with more testing now. :)


Maybe give this thing a look? It's newish and a little expensive, but pretty dang well equipped (not internally powered though).

https://store.minisforum.com/products/minisforum-ms-01


It's a mobile CPU (with the H suffix). Unless it's fanless, it will sound like a rocket launch, or it will be thermally throttled under any sustained load. And it looks too expensive.


After massively overthinking things for my homelab / firewall, I am ordering one this weekend, just deciding which cpu.


For the "bunch of NVMe slots", the OWC PCIe NVMe adapters look interesting:

* https://eshop.macsales.com/shop/ssd/owc-accelsior-4m2 (4x NVMe drives)

* https://eshop.macsales.com/shop/ssd/owc-accelsior-8m2 (8x NVMe drives)

I'd personally get them without drives included, as the OWC drives seem pretty expensive.


There are SFF PCs with integrated power supplies that have a few PCIe slots. PCIe-to-NVMe adapters are readily available (not my website or article, but posting a link to a review site seemed more genuine than posting Newegg or Amazon links to specific products): https://thunderboltlaptop.com/best-nvme-pcie-adapters/


The cheapest one from Amazon works well. Just make sure the PCIe generation and lane count match the drive for full throughput: a PCIe 3.0 x4 slot (~3.9 GB/s) is fine for a Gen3 NVMe drive, but a Gen4 drive needs a PCIe 4.0 x4 slot to reach full speed.


I think you could DIY that, but I found that a few little PCs can be stuffed into a drawer with some holes drilled in the back. No complaints about suffocation yet! I bought some Beelink mini PCs with pretty impressive specs. No idea if I am now being spied on by China.


I actually picked up a refurb desktop from Walmart with a Ryzen 3500 for $400; it runs basically everything without breaking a sweat. Proxmox running Home Assistant, Docker, my seedbox, media server, etc., and it averages 3% CPU usage.

I did not know just how much heat a 16TB disk can put out up until that point, though.


Yep, agreed. When living in a high cost of living area (i.e. where lots of tech professionals tend to live), space is limited.

I'm annoyed at the lack of ECC-capable sub-ITX NAS boards. I have a Helios4, but I've no idea what I'll migrate to when that dies.


If you want cheap but non-performant, there are AMD R-series chips like those found in Dell/Wyse thin clients. They're pretty weak machines though, 2-4 cores, but they support single-channel ECC DDR3.


N100 boards are DDR5-compatible, and DDR5 has on-die ECC.


Internal ECC doesn't protect the data in transit. It also doesn't have error reporting, so you could be operating with the error correction already operating at maximum capacity without knowing it, leaving no error correction available for unusual events. It's not a substitute for traditional ECC.


DDR5's on-die ECC is an internal mechanism just to make DDR5 viable given the speed and node size. It doesn't increase reliability; it enables not having a reduction in reliability compared to DDR4.


> A homelab could be a single NUC on a desk

then a better name than homelab would be your "NUC/NOC-nook".

with regard to TFA, I don't trust "labs" with neat wiring.


A bit of a tangent but I want to sing the praises of Proxmox for home servers. I've run some sort of Linux server in my home for 25 years now, always hand-managing a single Ubuntu (or whatever) system. It's a huge pain.

Proxmox makes it very easy to have multiple containers and VMs on a single hardware device. I started by just virtualizing the one big Ubuntu system. Even that has advantages: backups, high availability, etc. But now I'm starting to split off services into their own containers and it's just so tidy!
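
For anyone curious what "splitting off a service" looks like in practice, a container is a one-liner once a template is downloaded. A sketch; the container ID, hostname, storage names and the exact template version are examples, not gospel:

    # fetch a Debian template, then create and start a small container
    pveam update
    pveam download local debian-12-standard_12.2-1_amd64.tar.zst
    pct create 201 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname pihole --memory 1024 --cores 1 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp --storage local-lvm
    pct start 201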


+1, Proxmox is awesome. I set up a cluster with two machines I wasn't using (one almost 10 years old, one fairly modern with a 5950X) and now I don't have to worry about that single Debian box with all my services on it shitting the bed and leaving me with nothing. VMs are useful on their own, but the tools Proxmox gives you for migrating them between machines and doing centralized backup/restore are incredibly freeing. It all works really well, too.

Most recently I set up a Windows VM with PCIe GPU passthrough for streaming (Moonlight/Sunshine) games to the various less-capable machines in my house, and it works so well that my actual gaming PC is sitting in the corner gathering dust.

My only complaint is I wish there was a cheaper paid license. I would love to give back to them for what I'm getting, but > $100/yr/cpu is just too much for what amounts to hobbyist usage. I appreciate the free tier (which is not at all obnoxious about the fact that you aren't paying), but I wish there could be a middle ground.


> My only complaint is I wish there was a cheaper paid license. I would love to give back to them for what I'm getting, but > $100/yr/cpu is just too much for what amounts to hobbyist usage.

I don't use Proxmox anymore at home (1), but as a user I would have loved to pay, say, $5/month and just get access to the updates repository (the stable one) or maybe something similar. $100/year/cpu was just too much for me as a hobbyist.

[1]: not because I don't like it, for life reasons I had to downsize that hobby a bit and I don't need proxmox anymore.


My thoughts exactly.

Anyone from Proxmox reading? Could it be that you're leaving money on the table here?


Proxmox dev here, albeit speaking personally, so do not make more out of it than the educated opinion it is.

No, I do not think we leave much money on the table.

In short, we're targeting enterprise users with a mix of a (soft) stick (e.g. a pop-up to note that one isn't having the most production-ready experience) and a carrot (way better tested updates and, depending on the level, also enterprise support). Homelab users aren't in the target of our subscription services at all, and if we'd target them with cheap prices, that would just be misused by companies too; we know this for a fact because the project is over 15 years old, and a lot was tried out before getting to the current design. And while there might be some protection mechanisms we could set up, we'd rather avoid DRM'ifying Proxmox projects and avoid wasting time playing cat-and-mouse games with entities trying to abuse this.

Note though, that we still cater to the homelab in other ways even if it isn't our main target audience, like being very active on our community forums for all users, or simply having 100% FLOSS software, with no open-core or other, in my opinion rather questionable, open source models. And the price of a pop-up, or of using the still very stable no-subscription repository, is IMO also quite small compared to the features one gets 100% for free.

If one wants to contribute to our project but either cannot, or does not want to, afford a subscription, then I think helping out in the communities, submitting elaborate bug reports and thought-out feature requests, and spreading the gospel at the companies they work with/for is not only cheaper for them, but also worth much more to the project.

PS: The actual license is the AGPLv3, which is always free; what we sell are subscriptions and training for additional services.


Thanks for the reply. I guess if the enterprise repo is the primary reason most organizations buy a subscription (implying that your existing subscriptions are already skewed toward the community license), and you added a homelab-tier subscription that also offered it, you might indeed see some cannibalization. I'm not sure what "complete feature-set" means—the free offering already seems pretty complete to me, though obviously my use case is not very demanding.

Still, I wonder how many homelab users would be willing to pay something like $10-20/yr (or maybe a one-time license purchase) just to make the nag dialog at login go away. I certainly would, just as I've paid for shareware-style licenses in the past simply because I wanted to support the developer, and not because they made their software unusable if I didn't.

Either way, thanks for your excellent work on Proxmox.


> I'm not sure what "complete feature-set" means—the free offering already seems pretty complete to me, though obviously my use case is not very demanding.

I worded that part a bit oddly (sorry, not a native speaker), I meant that one already gets all features, as we don't do any feature gating or the like. I.e., you can use all features of the software with or without subscription, the latter only provides extra services.

> Still, I wonder how many homelab users would be willing to pay something like $10-20/yr (or maybe a one-time license purchase) just to make the nag dialog at login go away.

Actually the "stick" part of the carrot-and-stick design is just as important for enterprise users, as surprisingly a significant number of enterprise users do not care about not getting any software updates ("it's enterprise, it just needs to magically work, even for new unforeseen events!11!!1"), and the subscriptions and enterprise repo existed before the nag. I cannot share all details, but you will have to believe me that it made a big difference. It also showed that a lot of companies are willing to pay for projects supporting their infrastructure, but they need to be made aware of the possibility quite actively.

But yeah, I can understand the opinions of the home lab community here, but I also hope that I could convey that the current system was carefully optimized over many years for our business case, and that means it can pay a (nowadays not so small number of) developers' salaries to extend, maintain and support the growing number of Proxmox projects.

If anybody who stumbles over this still does not believe in the necessity of the system, or knows how it would be better if we did XYZ (how, if you have zero insight into our data and obviously do not run your own company doing this?), I just have to be blunt and recommend using an alternative; for PVE et al., the single nag on login is the price you pay to get a full-blown cluster & hypervisor stack.

> or maybe a one-time license purchase

I already feel like some FLOSS evangelist, but that's something I just have to correct: we sell no licenses at all; our projects are, and will stay, AGPLv3 licensed. And w.r.t. the related question of a lifetime subscription with a one-time fee: not planned, reasons as above.

BTW, thanks for the discussion. Most often I read this sort of thing in a more demanding voice, which makes it hard to stay polite when responding - that wasn't the case at all here.


> I worded that part a bit oddly

I actually read that off the website—not your fault at all!

> Actually the "stick" part of the carrots and stick design is just as important for enterprise users, as surprisingly a significant amount of enterprise user do not care of not getting any software updates ("it's enterprise, it just needs to magically work, even for new unforeseen events!11!!1"), and the subscriptions and enterprise repo existed before the nag, I cannot share all details, but you will have to believe me that it made a big difference. It also showed that a lot of companies are willing to pay for projects supporting their infrastructure, but they need to be made aware of the possibility quite actively.

That's funny, but makes total sense. Thanks for the responses!


Hey, thanks for your reply!

Just wanted to say proxmox is great and you and your colleagues are doing an amazing job!

Keep rocking!


Hard to tell really. I do wish they had cheaper access to their enterprise repos, perhaps a no-support option...


Proxmox is incredibly helpful, especially for homelab beginners familiar with Linux. You can easily create and destroy various things as you learn. The only thing that's not so easy is storage in general: it's very difficult to truly understand the consequences of various choices, and therefore it's kinda difficult to properly set up the backbone of a NAS unless you're fairly comfortable with ZFS, LVM, LVM-thin, re-partitioning things, etc.


One thing Proxmox is missing (IMHO) is a simple storage sharing system. I ended up just running the kernel NFS server on my Proxmox hypervisor, then sharing disks via a virtual-only LAN to the guests.
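
For reference, the host side of that is tiny; something like this, where the bridge subnet and path are whatever you use (nfs-kernel-server installed, then exportfs -ra to apply):

    # /etc/exports on the Proxmox host - share a directory to guests
    # on an internal-only bridge (10.10.10.0/24 here)
    /tank/shared  10.10.10.0/24(rw,sync,no_subtree_check)

Guests then mount it with something like mount -t nfs 10.10.10.1:/tank/shared /mnt/shared.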

Proxmox' official solution for this is Ceph but that seems like overkill for a small home setup.


Hyperconverged Ceph is both trivial to set up and amazing. Would highly highly recommend it, it makes everything so much easier when your VMs are backed by RBDs.


Any good tutorials for dummies like me?


I've followed the official one with good success. They do assume some familiarity with ceph but it's not too much.

https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Clu...
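
If it helps, the guide boils down to roughly this per cluster (network and device names below are examples; check the doc for the exact flags on your PVE version):

    pveceph install                       # on every node
    pveceph init --network 10.10.10.0/24  # once, on one node
    pveceph mon create                    # on each of the first three nodes
    pveceph osd create /dev/sdb           # once per data disk, per node
    pveceph pool create vm-pool --add_storages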


Another handy feature of it is snapshots, very useful before trying out some modification.


As someone still in monolithic home server tech debt I've been meaning to migrate too. The part I always wonder about is storage - said server is a NAS, serving files over SMB and media over Plex. From what I hear some people mount and export their data array [1] directly on Proxmox and only virtualize things above the storage layer, like Plex, while others PCI-passthrough an HBA [2] to a NAS VM. I suppose one advantage of doing the former is that you might be able to directly bind mount it into an LXC container rather than using loopback SMB/NFS/9p, right?

I also hear some people just go with TrueNAS or Unraid as their bare-metal base and use it for both storage and as a hypervisor, which makes sense. I might have to try the Linux version of TrueNAS now that it's had some time to mature

I also rely on my Intel processor's built-in Quicksync for HW transcoding. Is there any trouble getting that to run through a VM, except presumably sacrificing the local Proxmox TTY?

[1] As in the one with the files on, not VHDs. I default to keeping those separate in the first place, but it's also necessary since I can't afford to put all data on SSDs nor tolerate OS roots on HDDs. Otherwise a single master ZFS array with datasets for files and zvols or just NFS for VM roots would probably be ideal

[2] Which I hear is considered more reliable than individual SATA passthrough, but has the caveat of being more coarse-grained. I.e. you wouldn't be able to have any non-array disks attached to it due to the VM having exclusive control of all its ports


> I suppose one advantage of doing the former is that you might be able to directly bind mount it into an LXC container rather than using loopback SMB/NFS/9p, right?

Yep, I do this and it's very simple to give LXCs access to parts of my storage array.
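
For anyone wondering what that looks like, it's roughly one command per mount point; a sketch with a made-up container ID and paths:

  # bind-mount a host directory into LXC 101 (ID and paths are examples)
  pct set 101 -mp0 /tank/media,mp=/mnt/media

  # unprivileged containers may additionally need uid/gid mapping
  pct reboot 101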


>I also hear some people just go with TrueNAS or Unraid as their bare-metal base and use it for both storage and as a hypervisor, which makes sense. I might have to try the Linux version of TrueNAS now that it's had some time to mature

Unraid is especially meant for that. But TrueNAS Scale is also good, it has Docker and Kubernetes support too.

There is also openmediavault; it's a bit more Linux-y than Unraid or TrueNAS Scale, but a lot of people love it.


Obviously you could have just run containers on that Ubuntu system, and nothing's stopping you from backing it up either. But you might as well start with a hypervisor on a new server; it gives you more flexibility to add other distros and non-Linux stuff into the mix too (I run an OPNsense router, for example).

I use xcp-ng rather than Proxmox.


How exactly does running a VM in proxmox give you high availability?


Proxmox comes with baked-in support for live VM migration[0], high availability[1], and clustering[2] if you run multiple host nodes. I'm assuming that's what OP is referring to.

0: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_mig... & https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_guest...

1: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_how_i...

2: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapte...
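
The CLI side of this is small once a cluster exists; a rough sketch (VM ID and node name are placeholders):

  # live-migrate VM 100 to another node while it keeps running
  qm migrate 100 pve2 --online

  # let the HA manager restart/relocate VM 100 if its node dies
  ha-manager add vm:100 --state started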


It doesn't on its own, but it simplifies things that are hard to roll by hand and that do, like observer nodes, consensus algorithms, Ceph storage, live migrations, etc.


Some defense against the class(es) of availability problems that occur from the OS upwards. No defense against the class(es) that occur below that (metal).


>Proxmox makes it very easy to have multiple containers and VMs on a single hardware device

I just wish there was something like Unraid's Docker support but in a FOSS distro.

Proxmox with LXC is just not the same. Unraid is good, I love the UI and the ease of use too, but the proprietary paid Linux is just something I don't like.


FWIW https://tteck.github.io/Proxmox/ has a docker LXC container. A bit meta but works fine for my use cases. But I agree with your general point.


Proxmox is ultimately just Debian Linux. You can install Portainer on the host, or create a VM running Portainer.
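
The host route is basically Portainer's standard CE install; a sketch only (tags and ports may differ from their current docs):

  # run Portainer CE against the local Docker socket
  docker volume create portainer_data
  docker run -d -p 9443:9443 --name portainer --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce:latest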


Most folks who need Docker in Proxmox just set up a VM for it. It's a bit goofy but works fine. Portainer is a popular UI frontend for Docker.

I like LXC alright but I do miss the ecosystem of DockerHub.


Especially now with the rumors of Unraid moving from pay-once to an annual subscription model.


I believe this was confirmed and they aim to switch by end of the month. Those interested in Unraid who prefer the old model may wish to pick up a perpetual license now.

https://unraid.net/blog/pricing-change

disclaimer - I'm not an Unraid user.


They're still continuing to have a lifetime license option, it's just going up in price.


Combining that with Proxmox Backup Server is awesome.


And it's a huge win that it makes distributed storage so easy. Ceph is built in and works well. Being able to add disks as needed is bliss (no RAID planning up front), and the rebuild situation is better than any RAID. Performance is adequate for a homelab even on a 1 Gbps network. I have about 10 SSDs, a mix of 1TB and 2TB, across 5 hosts (old OptiPlex units).
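
Adding a disk really is about one command per OSD once Ceph is set up on the node; a sketch, with a placeholder device name:

  # optionally clear old partitions, then add the disk as an OSD
  ceph-volume lvm zap /dev/sdX --destroy
  pveceph osd create /dev/sdX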


That sounds interesting, I'm only tangentially familiar with Ceph so forgive these stupid questions: So you have the Ceph "array" running on the Proxmox nodes, and then are your VHDs like objects on Ceph? Do you also use Ceph for direct data storage or are all your files and data within VM's VHDs?


What's the main value vs. using something like k3s?

I've mostly just been defaulting to k3s since I use k8s at work, so lots of my knowledge maps over.


I know it's becoming the new "I use arch btw" meme but if you go down this route I highly recommend using nix as the distro. Ideally you can just leave these systems running once they're working and using nix means all your system state is logged in git. No more "wait, how did I fix that thing 6mo ago?" or having to manually put the system back together after a Ubuntu dist upgrade explodes. Every change you've made, package you've installed, and setting you've configured is in the git log.

I find myself referring to my git repo frequently as a source of documentation on what's installed and how the systems are set up.
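
The day-to-day loop is basically edit, commit, rebuild; a sketch, with a made-up flake repo and hostname:

  # apply the configuration for this host from the flake repo
  cd ~/nixos-config
  git commit -am "enable tailscale on homelab-01"
  sudo nixos-rebuild switch --flake .#homelab-01

  # roll back if the new generation misbehaves
  sudo nixos-rebuild switch --rollback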


I use nix btw.

I totally agree. It (nix and nixos) is/are such a pain so often. But, wow... when you get things to work it is glorious.

Quickly and easily syncing system/user/application configurations across multiple devices is incredible. The declarative approach means that the often laborious effort it takes me to get things to work pays off when I finally figure it out and realize I never have to do that again - and since it's declarative, I now have a documented functional configuration/example to grow and expand from.

My current project is to learn Nix development environments by building "Linux From Scratch" with Nix. I am making progress slowly, but every success is cemented and replicable.

LLMs definitely help distill the sparse/scattered documentation, but I am finding a good amount of grit is still required.

All in all, I am very much enjoying Nix and hope to utilize it more in my homelab.


I just feel it doesn't have to be this way. We can get the benefits of nix without the hardness of nix. It's just that someone hasn't taken the time to make this "thing" yet.

It's annoying enough that I won't use nix personally. I've used it at work, so I'm quite knowledgeable about it.


IMO, this thing is Nix, it just needs better UI and a lot better documentation.

The problem, I've realized, is that for many core Nix developers, the source is the documentation. Which doesn't scale.


Luckily the ecosystem is literally getting better by the hour right now. There's been a huge uptick in interest and lots of people are writing really good documentation and starter templates.


> It's just someone hasn't taken the time to make this "thing" yet.

DietPi [1] has one massive (15KLoC) bash script [2] for system config.

[1] https://dietpi.com

[2] https://github.com/MichaIng/DietPi/blob/master/dietpi/dietpi...


Not exactly an easy nix alternative


Easy for users, one step backup and restore of system-wide state.

Less easy for developers, but code is mature and stable in production for years.


> sparse/scattered documentation

I've been hearing more and more from people switching over to NixOS, so hopefully the docs will get better as well


I got into Guix and then failed. Now I realise it's because I didn't fully understand the nix philosophy underneath. Can you recommend any good tutorial points for an intermediate/experienced unix person?


I wish Nix had a shallower learning curve (or that I was smarter). It's been on my list of wishes to play with, but that list is long, and getting Nix over the hump hasn't been something I've been able to do yet. The concept makes sense to me, but when I see a new package or app I want to play with, I can get there with docker-compose and a VM more quickly than I can wrap my head around doing it in Nix.


I’m going through the NixOS journey right now, and it’s a combination of easier than I thought, but also more frustrating than it should be.

Getting a basic system going seemed about as easy as Arch (if you consider Arch easy). But adding flakes and home manager to the mix is where things have run off the rails a bit for me.

I think a lot of the steepness of the learning curve comes from trying to go all in on flakes/home manager up front, and a lot of the online discussions and quick starts being oriented towards this goal. I’ve since scaled back my aspirations a bit and I’m focusing on just understand core NixOS with a monolithic config file until it makes more sense to my brain.

If you haven’t get, check out vimjoyer’s channel on YouTube. His content has been the most helpful for me by far, and it got me a basic setup going that I’m now iterating on. What I do love is that I feel like I can iterate with confidence. I have no idea what I’m doing for the most part but I know I can get myself back to a working state.

I think it will be worth the time investment in the long run, but yeah, it’s been a bit weird and the lack of a canonical approach makes it hard to break into the ecosystem.


It's just a matter of repetition. It will eventually click if you go back enough times (at least it did for me, after about 4 tries). The biggest thing to getting your mind adjusted to the syntax is to understand that Nix is just one big JSON blob. All the syntax is there to facilitate merging/updating that ball of JSON in a deterministic way.

You might try generating your docker containers using Nix as a first step. Not sure if it will be more approachable for you, but at least it's one less new thing to adjust to.


If you get the base system up you don't have to exclusively use their containerization system. I have a mix of nix environments and k3s deployments that are saved off in another git repo. There are advantages to purity but practically speaking you can just do what's convenient.


For what it's worth, I personally do use Arch for my homelab and still achieve cattle, not pets, because the Arch install media comes with cloud-init. I simply keep a cloud-config with the modules to build a server from scratch on a separate USB stick whose filesystem is labeled "cidata", and plug that into a machine along with the install media before hitting the power button. I don't know if Nix would have been easier, but cloud-init just lets you run scripts that you presumably already know how to write if you're the kind of person who does this sort of thing. No need for a DSL.

Someone is going to chime in with "boo yaml" and sure, but a person interested in running a homelab is almost certainly someone who manages a bunch of servers for work too, and there's no escaping cloud-init, Ansible, Kubernetes, or something else using yaml that you're not going to have a choice about. Assuming the point of a homelab is at least partly to practice what you do on the job without the possibility of destroying something owned by someone else (which is what this guy says his reason was), you may as well get used to the tools you're going to have to use anyway and not make your home setup too different from the environment you manage at work.
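
For anyone wanting to try this, the NoCloud datasource only needs a small volume labeled cidata with user-data and meta-data files at its root; a rough sketch (device name and the config contents are just examples):

  # prepare the seed stick (double-check the device node first!)
  mkfs.vfat -n CIDATA /dev/sdX1
  mount /dev/sdX1 /mnt

  cat > /mnt/meta-data <<'EOF'
  instance-id: homelab-01
  local-hostname: homelab-01
  EOF

  cat > /mnt/user-data <<'EOF'
  #cloud-config
  users:
    - name: admin
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... admin@laptop
  runcmd:
    - pacman -Sy --noconfirm docker
  EOF

  umount /mnt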

Of course, I know a lot of people on Hacker News are not like me and this guy and use "homelab" to mean they want to self-host media servers, chat, and photo sharing and what not for their family and friends, not have a mini data-center as a practice ground for the real data center they manage in their day job.


Same, I originally had a bunch of RasPi's in my lab running differing versions of Raspbian until I got tired of the configuration drift and finally Nixified all of them. Writing a single Nix Flake and being able to build declarative SD card installation images for all of them makes managing a bunch of different machines an absolute dream (tutorial here[0], for those interested).

The only issue is remotely deploying Nix configs. The only first-party tool, nixops, is all but abandoned and unsupported. The community driven tools like morph and deploy-rs seem promising, but they vary in terms of Flakes support and how much activity/longevity they seem to have.

[0] https://blog.janissary.xyz/posts/nixos-install-custom-image
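
Once the flake is in place, the image build is one command per host; a sketch, assuming the config imports the sd-image module and using a made-up hostname:

  # build a flashable SD image for one of the Pis
  nix build .#nixosConfigurations.pi-livingroom.config.system.build.sdImage

  # flash it (the image lands under result/sd-image/, possibly zstd-compressed depending on nixpkgs)
  sudo dd if=result/sd-image/nixos-sd-image-*.img of=/dev/sdX bs=4M status=progress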


I can vouch for deploy-rs. Used it for years without issues. Flake support is built in and activity is pretty good.

Disclaimer: I am a relatively active contributor


I'm really happy with nixinate, if you haven't tried it. It basically does the bare minimum, so there's no real concern over continued development.


> ...or having to manually put the system back together after a Ubuntu dist upgrade explodes.

I avoid this by running Gentoo Linux on my Linux machines. I've heard people say good things about Arch, but I've never used it.

Ubuntu is just bad... I say this as someone who used Ubuntu for a few years waaay back when (and was _really_ pleased that I finally could recommend a reliable, easy-to-use Linux distro to nontechnical folks), but swore off of it after the second or third in-place upgrade (in a row) that left my system nonfunctional in ways that required the sysadmin knowledge I'd gained from going through the well-documented Gentoo setup process (and from tinkering over the years) to repair.

Friends don't let friends use Ubuntu.


Agreed - I'm willing to use Ubuntu LTS on servers that are basically glorified LAMP machines (LCMP?) - but for home servers, nothing has beaten Gentoo, not only for the full understanding of "what I have" but also for having direct control over what I've got running, including quite a bit of customizability if needed.

Gentoo on ZFS is kinda nice.


I want to love arch, but it is way too complicated. I _love_ complicated things, but arch just didn't seem worth it.

My home lab journey was:

manual installs -> Ansible -> Docker + Docker Compose -> Kubernetes (I told you I loved complicated things)

I kept everything source controlled. This repo documents my long journey: https://github.com/shepherdjerred/servers

Docker and Docker Compose are nearly perfect. Here's an example of what that looked like for me: https://github.com/shepherdjerred/servers/blob/839b683d5fee2...

I mostly moved to Kubernetes for the sake of learning. Kubernetes is very cool, but you have to put in a large amount of effort and learning before it becomes beneficial.

Ansible is nice, but I didn't feel like it met all of my needs. Getting everything to be idempotent was quite hard.


I use a single Ansible playbook to set everything up, including Docker and Docker Compose. Most applications run in Docker Compose, but Ansible lets me handle all the hard drive mounts and other stuff that can't be run inside Docker containers.


That's not a bad idea!

I didn't mind manually installing Docker, configuring fstab, etc., since it was such little work.


I actually went the complete opposite way and I run RHEL (Red Hat Enterprise Linux - with the developer subscription) or RockyLinux at home.

Because:

> "wait, how did I fix that thing 6mo ago?"

Such things just don't happen. That thing (RHEL/RockyLinux) just doesn't break. I do update software once every six (or more) months, reboot and it just works.

I have notes on how I did stuff, if necessary. I used to keep Ansible playbooks, but they were just overkill. Does it even make sense to write a playbook for a system that I'll reinstall in three or four years? Probably not.

One of my main systems at home has been running RHEL 8 for 3-4 years, still happily getting updates. I plan on replacing the whole box, with another one running RockyLinux 9.

And I'm moving from RHEL to RockyLinux because I just don't want the yearly annoyance of having to renew the Developer Subscription license.

That's how reliable it is: I have to look up my notes once a year, and it's too much.

I run most of my stuff in containers anyway, and podman is just great.


>I have notes on how I did stuff, if necessary. I used to keep ansible playbooks but they were just an overkill. Does it even make sense to write a playbook for a system that I'll reinstall in three or four years ? Probably not.

Yea, sure, except what if the one-time configuration was in a Nix file that will continue being exactly as useful for years?

Folks applying the Ansible mindset to NixOS are missing the point. NixOS is *not like a runbook*. It's declarative, it's prescriptive, it's not some god-forsaken state machine that tries to paper over the horrors of modern computing. It IS the definition of my computers, not some glorified pile of scripts and conditional checks encoded in fking jinja. The same as it's been mostly for 8 years now.


> Yea, sure, except what if the one-time configuration was in a Nix file that will continue being exactly as useful for years?

Still useless. Software changes; the configs I make today might not make sense or even work in 3-5 years.

Nix and the like (Ansible etc.) only make sense if you have to reinstall your systems multiple times in a single week (fine if your hobby is reinstalling Linux on your laptop).

But if you do a full reinstall once every five years, then it's largely an exercise in futility.


Except that Nix literally solves that problem by abstracting over the underlying config.

Y'all don't know what you're talking about, and once again the Ansible comparison gives it away.

>reinstall once every five years then it’s largely an exercise in futility.

You are as wrong as you think you are right. I'd know - I've been using NixOS for 8 years. You're just wrong.


x doubt


Which part? Like, that's what NixOS is. My config repo is declarative and describes in its history every machine I've owned. You could rebuild them at any point in time and get an exact, identical machine image out.

Idk, this is why it's hard to talk about Nix with people who think they know it but then also openly doubt things that you will one day take for granted if you take the time to learn it.


When people say Nix is declarative do they mean like SQL/Prolog?


Nearly every time I do an Ubuntu dist upgrade, there is some relatively easy to solve but critical problem. A couple examples from a few years back: default route was lost, network interfaces were renamed. In both cases, remote connectivity was obviously lost. Good thing it was in my basement...


Ansible code in git solves the same problem for ANY distro


Once you stop templating yaml you will never go back.

Seriously as someone who has used ansible a lot and nix a bit, I'd take NixOS over Ansible+Debian/Fedora any day.


No, no it doesn't. It doesn't solve the same problem, at all, certainly not holistically like NixOS.


> Location

For a few years I put mine inside an IKEA FRIHETEN sofa (the kind that just pulls†).

    Pros:
    - easy access
    - completely invisible save for 1 power wire + 1 fiber (WAN) + 1 eth (LAN)††
    - structure allows easy cable routing out (worst case you punch a hole on the thin low-end)
    - easy layout/cable routing inside
    - noise reduction for free
    - doubles as warming in winter; a seated butt is never cold
    - spouse enjoys zero blinkenlights
    - spouse didn't even notice I bought a UPS + a disk bay

    Cons:
    - a bit uncomfortable to manipulate things inside
    - vibrations (as in, spinning rust may not like it) when sitting/seated/opening/closing
    - heat (was surprisingly OK, not worse than a closet)
    - accidental spillage risks (but mostly OK as design makes it flow around, not inside the container, worst case: put some raisers under hardware)
    - accidental wire pull, e.g. when an unsuspecting spouse is moving furniture around for housecleaning (give 'em some slack)
† https://www.ikea.com/us/en/images/products/friheten-sleeper-...

†† that I conveniently routed to a corner in the back then hidden in the wall to the nearest plugs, so really invisible in practice.


I had a similar thought about the IKEA KIVIK sofa (non-sleeper variants), which has wide boxy armrests that are open to the underside, and could fit towers of little SFF PCs, or maybe rackmount gear sideways. There's also space underneath the seat, where rackmount gear could fit.

One of the reasons I didn't do this was I didn't want a fire with sofa fuel right above/around it. I wasn't concerned about the servers, but a bit about the UPS. Sheet metal lining of the area, with good venting if the UPS battery type could leak gases, I'd feel a bit better about, but too much work. So the gear ended up away from upholstery, in rack/shelving, where I could keep an eye on it.


This is amazing. Do you have an actual photo with your servers inside or anything?

I would be really worried about ventilation and heat / fire risk.

Reminds me of the lack rack.

https://archive.is/Uf2k3


Unfortunately no. I moved its content out when I added big spinning rust to the system.

Although I did some yelling^Wsitting tests and found it surprisingly impervious to those, it was not so resistant to the furniture-moving scenario.

That, and the UPS being the unlikely-but-real fire hazard. Old man printers-and-shotguns, y'know.

(disclaimer: I'm an internet rando, not a fireman, seek professional guidance)

But then again it's going to be a fire hazard anywhere except in a fireproof enclosure: it may give warm fuzzies that it's kept in sight but a) these things pack a ton of energy and would burn fast and long; b) fire propagates stupidly quick in residential locations; c) you're not looking at it 24/7, far from it.

In a way, OK, the whole thing is fuel surrounding it, but that fuel first needs to be consumed for the fire to fully break out. IKEA products are also already designed with some fire resistance in mind (e.g. textile and foam won't burn as easily as you'd think). It won't resist for long, but it may comparatively buy a few seconds of delay before it makes things worse and the fire breaks out into the whole room.

This time could be improved by lining the interior with fire retardant panels, including the underside of the sitting area. Given the roomy boxy shape of it it should be quite easy to do. If done right it could even be near-airtight, so that fire could delay/choke itself due to lack of O2. This would need active ventilation for thermals, but it could shut itself down on a reasonable heuristic indicating trouble. Heck now I'm thinking automatically injecting CO2/spraying water droplets to further quench. Oh, an engineer's imagination running wild.

The most recommended - and much cheaper - investment, though, would be a fire/smoke detector, because whether the gear is in sight or not, with such fires the most sensible general recommendations one could make would be:

a) be alerted as early as possible, preferably before any flame/smoke is visible.

b) immediately get out, like, RIGHT NOW. Think after. Life is the one thing of true value that cannot be replaced.

I'm not kidding. You have ~30s tops from the beep to get the whole family out. After that smoke is irrevocably incapacitating and flashover is only a short minute away.

Watch this: https://m.youtube.com/watch?v=JGIICiX2CNI&t=141

Then watch it again, roleplaying yourself having to be woken up by the alarm, then having to navigate through smoke, heat, and panic to get the spouse and kids out. These 30s are going to be awfully shorter IRL than they already appear watching the video.

End of interlude.

Ventilation was passive yet OK; temps were higher than open air of course, but good. A thing that helped is that I did not put your run-of-the-mill Xeon server blade in there - that would be dumb, although it would make for a nice synesthetic experience when watching Top Gun.

So I aimed for passive cooled stuff (ISP ONT+router, RPis, bunch of USB portable SSDs and HDDs) or active but low thermal requirements (headless Mac Mini, 5-disk bay, UPS). I monitored temps and they were well within parameters, I've seen much worse numbers in actual server rooms. The most noisy things were the 3.5" 7200rpm drives spinning up and seeking (constant rotation was inaudible) and the disk bay fan (lousy unlubricated bearing).

Given the side panel size, pretty sure one could hack active ventilation with big near-silent fans.


The whole home lab scene is great.

Everyone has some set of goals... low power, interesting processors, data ownership, HA, UPS/whole home UPS... and the home is the only common intersection of all these overlapping interests, and bits of software. Even more fascinating is the types of people it attracts, from professionals playing to people far outside the industry.

I have really taken the plunge and it recaptures some of the magic of the early internet (at least for me).


The community is absolutely amazing. It's incredibly active on Reddit and Lemmy. People are always quick to help you find solutions and give you advice on setting things up using best practices. It's an absolute gem if learning this stuff is interesting to you.


The home lab community is a very leaky abstraction ;)

ServeTheHome, and Level1Techs on YouTube...

It intersects with proxmox community, home assistant community, all the NAS distros...

The spread-out and overlapping nature of these groups really adds to the early internet vibe!


Absolutely. As a prolific Reddit user I have a love-hate relationship with the site, but the homelab and self-hosted communities have restored my faith in internet communities.


which lemmy instance / community do you recommend?


I like selfhosted@lemmy.world but there are probably plenty of others


As an alternative perspective, this is my home lab:

* Location: Sitting on a shelf in my basement office. Ventilation is okay, the WiFi is fine but not great.

* Hardware: An old PC I picked up at a neighborhood swap meet. I added some RAM taken from another old PC and bought a hard drive and WiFi card.

* Software: Debian stable and podman/podman-compose. All my useful services are just folders with compose files. I use podman-compose to turn them into systemd units.
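
For the curious, one way that last step can look (container and unit names are just examples, and this is only a sketch of my flow):

  # bring a compose stack up, then generate and install user units for it
  cd ~/services/photos
  podman-compose up -d
  podman generate systemd --new --files --name photos_app
  mkdir -p ~/.config/systemd/user
  mv container-photos_app.service ~/.config/systemd/user/
  systemctl --user daemon-reload
  systemctl --user enable --now container-photos_app.service
  # (you may also need `loginctl enable-linger` so user services start at boot)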

If the stuff in the article is the kind of thing you're into, that's awesome, go ham! But you absolutely do not need to ever, and you certainly do not need to do it right away. I run a bunch of services that my family uses daily on this thing, and we use less than half the 16GB RAM and never get over 5% CPU usage on this old, ~free PC.


My favorite home lab setup was when I had an old (originally XP, switched it to Ubuntu) laptop and a couple of USB drives in the top drawer of a scavenged file cabinet. I cut a hole in the back for the wires and it never generated enough heat for airflow to be an issue. I hosted my personal website on it, including a webcam of my aquarium, hooked it up to my doorbell, XMPP, used it for document storage... anything I needed a server for. I switched to an old Mac Mini for NAS after I moved and haven't had much time to mess with it since.


Similar, my homelab is an old gaming rig shoved in a corner running a bunch of docker compose services via systemd.


This could almost be me. The main difference is that my "server" is an off-lease Dell Micro PC that I maxed out the RAM on, running Proxmox with a mix of VMs, with everything stored on a Synology NAS, all sitting in my basement office closet. I have Tailscale set up so I can access it remotely.


I see this a lot with certain home labs. A ton of RAM for VMs. Why not just run on the metal?


Easier to mess with or reinstall OS remotely if you're not sitting in front of it, I'd imagine. Also can be easier to back up the whole system or take a snapshot of the system state, or nuke it and start over if you really screw it up.

I've dabbled with VMs for test environments but generally just install on bare metal.


I have about a dozen VMs running on a couple of large-ish servers (16 cores, 128 gigs RAM, 8 TB storage). Many of these are small, experimental environments (8 gigs RAM max, a few hundred gigs storage). Running all those on metal would be expensive and annoying.


I have a rather extensive homelab, painstakingly set up over time. It works great, I love it. Few questions for you guys:

- My real problem is disaster recovery. It would take me forever to replicate everything, if I could even remember it all. Router configurations, switch configurations, NAS, all the various docker containers scattered across different vlans, etc. I mapped out my network early on but failed to keep it up to date over time. Is there a good tool to draw, document, and keep up-to-date diagrams of my infra?

- Backup and upgrading is also a persistent problem for me. I will often set up a container, come back to it 6 months later, and have no idea what I did. I have dozens of containers scattered across different machines (NUCs, NAS, desktops, servers, etc). Every container service feels like it has its own convention for where the bind mounts need to go, what user it should be run as, what permissions etc it needs. I can't keep track of it all in my head, especially after the fact. I just want to be able to hit backup, restore, and upgrade on some centralized interface. It makes me miss the old cattle days with VM clone/snapshot. I still have a few VMs running on a proxmox machine that is sort of close, but nothing like that for the entire home lab.

I really want to get to a point, or at least move towards a solution where I could in theory, torch my house and do a full disaster recovery restore of my entire setup.

There has to be something simpler than going full kubernetes to manage a home setup. What do you guys use?


I think the core problem you have is a lack of consistency. I'd try to trim down the number of different device types, deployment methods, etc. you have. It's too hard to scale wide as one or two people, but you can go very deep in one stack, get to know it very well, and end up in a spot where either everything works or nothing works because it's all exactly the same, which ultimately means everything has to work well.


I accidentally wiped the drives on my server last year, but it wasn't so bad due to my setup.

My strat is having a deploy.ps1 script in every project folder that sets the project up. 80% of the time this is making the vm, rcloning the files, and installing/starting the service if needed. Roughly 3ish lines, using custom commands. Takes about 100ms to deploy assuming the vm is up. Sometimes the script gets more complicated, but the general idea is that whatever system I use under the covers, running deploy.ps1 will set it up, without internet, without dependencies, this script will work until the heat death of the universe.

After losing everything I reran the deploys (put them into a list) and got everything back.

I'm with you on some of the routing setups, mine aren't 100% documented either. I think my router/switch config is too complex honestly and I should just tone it down.


Oh, I wish companies like Framework or System76 would launch a reproducible manufacturing process where you code your hardware similarly to how Nix/Guix manages builds. Disaster recovery would be much easier. Perhaps Super Micro Computer can do this already, but they target data centers. One can only dream.


I think the complexity of your infrastructure is not helpful - so I would start in reducing it.

I personally use one single server (Fujitsu D3417-B, Xeon 1225v5, 64GB ECC and a WD SN850X 2TB NVMe) with Proxmox, and an OpenWRT router (Banana Pi BPI-R3). The Proxmox box draws ~12W and the OpenWRT around 4.5W at idle, and with Node.js I can run MeshCommander on OpenWRT for remote administration.

Since I use ZFS (with native encryption), I can do a full backup on an external drive via:

  # create backup pool on external drive
  zpool create -f rpoolbak /dev/sdb

  # create snapshot on proxmox NVMe
  zfs snapshot -r "rpool@backup-2024-01-19"

  # recursively send the snapshot to the external drive (initial backup)
  # pv only is there to monitor the transfer speed
  zfs send -R --raw rpool@backup-2024-01-19 | pv | zfs recv -Fdu rpoolbak

An incremental backup can be done by providing the -I option and two snapshots instead of one to mark the start and end of the incremental range:

  # create new snapshot
  zfs snapshot -r "rpool@backup-2024-01-20"

  # only send everything between 2024-01-19 and 2024-01-20
  zfs send -RI --raw rpool@backup-2024-01-19 rpool@backup-2024-01-20 | pv | zfs recv -Fdu rpoolbak

That is easy, fast, and pretty reliable. If you use `zfs-auto-snapshot`, you can rewind the filesystem in 15-minute steps.

This is also pretty helpful for rewinding virtual machines[1]. Recently my NVMe (a Samsung 980 Pro) died, and restoring the backup via ZFS (the backup shell commands in reverse order, from rpoolbak to rpool) took about 2 hours for 700GB, and I was back online with my server. I was pretty happy with the result, although I know that ZFS is kind of "experimental" in some cases, especially encryption.
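
For completeness, the restore I mention is essentially the same pipe in the other direction; a sketch using the snapshot names from above:

  # from a live system: import both pools, then send the backup back
  zpool import -f rpoolbak
  zpool import -f rpool    # or recreate rpool first if the NVMe was replaced

  zfs send -R --raw rpoolbak@backup-2024-01-20 | pv | zfs recv -Fdu rpool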

1: https://pilabor.com/series/proxmox/restore-virtual-machine-v...


I've been trying to re-bootstrap my own homelab and I'm sort of gridlocked because I know I will forget or, yes, "can't keep it all in my head"... so I've been spending a lot of time documenting things and trying to make it simple, vanilla, and community-supported when possible. As a result I have no homelab, or "homeprod" as some others have coined it here :)

I also found a friend who I am convincing of the same issues and so we may try mirroring our homelab documentation/process so that the "bus factor" isn't 1.


I think people mix up "homelab" and "homeprod". There are some other comments here mentioning homeprod. To me, what you describe sounds more like a homeprod setup. But I like the "lab" part, where one can just relax and experiment with lots of stuff.

I do have an ECC instance for data, to be backed up to a remote S3. I could torch it. It still remains a lab. I could just buy a new hardware and restore. But it's only about the valuable data (isolated to a couple of containers), not "restore of my entire setup".


I’m quite a homelab novice and I’m always tinkering so I have the same problem as you.

So I spent time creating a robust backup solution. Every week I backup all the docker volumes for each container to an external NAS drive.

Then I also run a script which backs up my entire docker compose file for every single container in one gigantic compose file.

The benefit of this is that I can make tweaks to containers and not have to manually record every tweak I made. I know that on Sunday at midnight it all gets backed up
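
Something in the spirit of that volume backup, as a rough sketch (the NAS mount path is made up):

  # dump every named docker volume to the NAS as a dated tarball
  for v in $(docker volume ls -q); do
    docker run --rm \
      -v "$v":/data:ro \
      -v /mnt/nas/docker-backups:/backup \
      alpine tar czf "/backup/${v}-$(date +%F).tar.gz" -C /data .
  done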


I just use Ansible to "orchestrate" the containers. I only run one of each, configured to restart on fail, so I don't need anything smarter than that.


Maybe something like Nix/NixOS can help you?


k8s is a lot easier for homelabs than it used to be, and imo it's quicker than nix for building a declarative homelab. templates like this one can deploy a cluster in a few hours: https://github.com/onedr0p/cluster-template

here's my home assistant deployment as a single file: https://github.com/pl4nty/homelab/blob/main/kubernetes/clust...

I deliberately nuked my onprem cluster a few weeks ago, and was fully restored within 2 hours (including host OS reinstalls). and most of that was waiting for backup restores over my slow internet connection. I think the break even for me was around 15-20 containers - managing backups, config, etc scales pretty well with k8s.


Since last year I’ve been configuring and maintaining my homelab setup and it is just amazing.

I’ve learned so much about containers, virtual machines and networking. Some of the self hosted applications like paperless-ngx [1] and immich [2] are much superior in terms of features than the proprietary cloud solutions.

With the addition of VPN services like tailscale [3] now I can access my homelab from anywhere in the world.

The only thing missing is to set up a low-powered machine like a NUC or any mini PC so I can offload the services I need 24/7 and save on electricity costs.

If you can maintain it and have enough energy on weekends to perform routine maintenance and upgrades, I would 100% recommend setting up your own homelab.

[1] https://docs.paperless-ngx.com/ [2] https://immich.app/ [3] https://tailscale.com/


Defo invest in a mini PC. I got a powerful HP EliteDesk that sips 7W at idle and runs 38 containers.


If your homelab gear is in a non-tech-nerd living space, also think about noise, lights/displays, and otherwise being discreet.

As an apartment-dweller, for a long time I had it in a closet. Once I moved it to living room, my solutions included:

* For discreet, IKEA CORRAS cabinet that matched my other furniture. I had rackmount posts in it before, but got rid of them because they protruded.

* For noise, going with gear that's either fanless, or can be cooled with a small number of Noctua fans. (I also replace the fans in 1U PSUs with Noctuas, which requires a little soldering and cursing.) I tend to end up with Atom servers that can run fanless in a non-datacenter except for the PSU.

* Also for noise, since my only non-silent server right now is the 3090 GPU server, I make that one spin up on demand. In this case, I have a command I can run from my laptop to wake it via Wake-on-LAN (a rough sketch follows after this list). But you could also use IPMI, kludge something with a PDU or IoT power outlet, find a way to spin down the 3090 and fans in software, make Kubernetes automate it, etc.

* For lights, covering too-bright indicator LEDs with white labelmaker tape works well, and looks better than you'd think. Black labelmaker tape for lights you don't need to see.

* For console, I like the discreet slide-out rack consoles, especially the vintage IBM ones with the TrackPoint keyboards. If I was going to have a monitoring display in my living room, I'd at least put the keyboard in a slide-out drawer.

* I also get rid of gear I don't need, or I'd need more than twice the rack space I currently have, and it would be harder to pass as possibly audiophile gear in the living room.

* For an apartment, if you don't want to play with the router right now (just servers), consider a plastic OpenWRT router. It can replace a few rack units of gear (router, switch, patch panel), and maybe you don't even need external WiFi APs and cabling to them.
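
The wake command mentioned in the noise bullet is about one line with any of the usual tools, assuming WoL is enabled in the BIOS/NIC (MAC address and interface are placeholders):

  # nudge the GPU box awake from the laptop
  wakeonlan aa:bb:cc:dd:ee:ff

  # or, with etherwake (needs root and the right interface)
  sudo etherwake -i eth0 aa:bb:cc:dd:ee:ff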


I’m a math grad student and want to play with LLMs as well, so I have been thinking about getting a 3060 or 3090, due to budget. However, I’m using a MacBook Pro from 2011, thus not very convenient for these kinds of things.

What do you think the lowest budget or spending that I should have, given my “requirement”? Or would you say that vast.ai or similar websites would suffice?


You might try to find forums where people are doing the kind of LLM work you want to do, and see what they're recommending for how to do that work.

Personally, for my changing needs, my GPU server is already on its 4th GPU, after I kept finding I needed more VRAM for various purposes. The RTX 3090 seems to be a sweet spot right now, and my next upgrade right now would probably be a 2nd 3090 with NVLink, or some affordable older datacenter/workstation GPU with even more VRAM. Other than the GPU, you just need an ordinary PC with an unusually big PSU, maybe a big HDD/SSD (for data), and maybe a lot of RAM (for some model loading that can't just stream it). https://www.neilvandyke.org/machine-learning/


I'm older than 30 so I just call this having a home network and more than one computer.

Why do I have a home network? Like many here, I develop network applications. I'm not a network technician, but being able to truly understand how the various components of the internet work, like TCP/IP, DNS etc., is really useful and sets me apart from many developers. I also just like being in control and having the flexibility to do what I want with my own network.

Why do I have multiple computers? Playing with different operating systems etc. isn't really the reason as virtualisation is pretty good these days. It actually comes down to locality of the machines. I want my hard disks to be in a cupboard as they are quite noisy, but I want my screens and keyboard to be on a desk. So I've got a NAS in a cupboard and a (quiet) PC at my desk, plus a (silent) media centre in my living room and other stuff.

One thing I would say is don't be tempted by rack mounted server gear. Yeah, it looks cool and you can get it second hand for not much, but this isn't suitable for your home. Use desktop PC cases with big fans instead. Rack mounted network gear is pretty cool, though.


> One thing I would say is don't be tempted by rack mounted server gear.

That depends on the server. Super Micro makes some rackmount servers that are shallow enough for a network cabinet/wall-mount rack and fairly low power. They also have a real OOB mgmt system, which can be really helpful. https://www.supermicro.com/en/products/system/iot/1u/sys-510...

I agree that typical deep-rackmount servers are more trouble than they are worth for a home lab.


> They also have a real OOB mgmt system, which can be really helpful.

You can find regular-PC-case-shaped motherboards that have real IPMI management baked in, but you do have to be willing to pay more for it, as they're usually (always?) found in "workstation" boards.

However, you do need to do a bit of research to make sure the advertising claims match up to reality. Some of these "IPMI" ports require custom Windows-only software and don't let you remotely control much at all.


I built my home server with a 4RU case filled with desktop components. Thankfully 4RU seems tall enough to fit fans large enough to be quite quiet. *Actual* rack-mounted equipment though? I highly recommend that anyone even remotely thinking of bringing this stuff home find some way to hear how bloody loud it is beforehand. I couldn't imagine living with that.


Rackmounted gear is fine depending on where it is "racked" - some climates can get it into a garage, basement, etc.

But unless you have a real rack it's gonna be a bit of a pain, because it WILL end up being stacked on other pieces of gear in a rackish tower that requires full dismantling to get to the piece you want to work on.

If you go off-lease rack equipment, go whole hog and get a rack and rails too. Rails can be a bit pricey, check the listings for those that include them - rails often work for much longer than the servers so the big companies that liquidate off-lease equipment don't include them; smaller sellers often do.


IMO one of the biggest frustrations I currently have with homelab-scale gear (and edge compute in general) is how everything has massively regressed to only offering 2.5Gbit connectivity at best.

Try to find one of these hundreds of small form factor products that has anything faster than a 2.5Gbit nic and despair! What good are these machines if you cant even get data in and out of them?!

The list of hardware with 2x10GBe or better connectivity is amazingly short:

  - Minisforum MS-01
  - GoWin RS86S*, GW-BS-1U-*
  - Qotom Q20331G*
From major manufacturers, only the following:

  - Mac Mini (1x 10GBase-T option, ARM)
  - HPE Z2 Mini* (1x 10GBase-T option)
Please someone, anyone, build a SFF box with the following specs:

  Minimum of 2 NVMe
  Minimum of 2 additional storage devices (SATA or more NVMe)
  Minimum of 2 10GbE
  Dedicated or shared IPMI/OOBM
  Minimum of 8 Core x86 CPU
  Minimum of 64GB RAM
Make sure the nics have enough PCIe lanes to hit their throughput limits, give the rest to storage. Stop with the proliferation of USB and thunderbolt and 5 display outputs on these things, please already!


You should check out the Xeon D offerings from Supermicro. They make some Mini-ITX motherboards with embedded Xeon D chips that sport dual 10Gb Ethernet. I used one in a FreeNAS system for years.

Looks like the X10SDV-TLN4F[1] has everything you mentioned besides a second M.2 slot. Although it's 8 cores, they are a few generations old and low power (45W TDP on x86). For true compute it's not exactly fast, but for something like a high-performance file server in 1U connected to a disk shelf, they are really nice.

[1] https://www.supermicro.com/en/products/motherboard/x10sdv-tl...


I actually do run a production edge cluster with these exact boards; they are just too old. The price/performance is so far off the mark WRT modern hardware they are simply not worth using anymore. I have started hitting CPU instruction set issues with current software due to the ancient microarchitectures in these lowend parts.

On one hand they are able to still sell them because nobody is competing, but what is the reason for the lack of competition?

I have a theory about Broadcom buying and blocking up the low end PCIe switch market to purposely hold back this segment, but I'm not sure how significant that is.


I use one of these. For a system that’s honestly mostly idle, it’s plenty for proxmox with Plex and pihole and whatever else, and it’s good on power and heat and noise.

I made the mistake of getting Seagate 10TB rock tumblers, but they’re the only source of real noise. WD would probably have been quieter.

I have this and my switches and a ham radio base station all racked up on top of the cabinets in my laundry/utility room, out of the way but not hidden away, so I can still notice if the fans are too loud or the wrong lights are blinking or not blinking or there’s weird smells etc.


> Stop with the proliferation of USB and thunderbolt…

If you don't need it built-in, external adapters are a good way to get to 10GbE and even 25GbE.

https://www.qnap.com/en-us/product/qna-t310g1s

https://www.sonnettech.com/product/thunderbolt/ethernet-netw...


By "good" I assume you mean "possible" because there is nothing really good about a connectivity solution that is both technically inferior and vastly more expensive than it ought to be.

I'm not opposed to Thunderbolt though; let me know when someone ships an 8+ port Thunderbolt interdomain/multi-root PCIe switch. That would be a good solution.


I understand that more is better in some sense, but why the emphasis on having two (2) 10GbE instead of one for a single device that isn't a router?

What's the second link for? Just more bandwidth, or is there some crucial use case I'm missing?


In my experience, a second nic is important to build anything with resilience; it's not just for routing. I see it as the most important hardware to "double up" on. If you are building a small cluster, for instance, you generally want to have one interface for the backend interconnect and one interface for client traffic. Or maybe you want to be able to upgrade or change your networking without taking traffic offline -- you'll need two physical connections for that, too.

Of course you can still build routers and everything with a single interface (which is why I noted the single port hardware in the first place), but if you are going to design something specific to the mini-server market, the cost difference between a single port and dual port controller is negligible enough that there's no reason not to include 2 or even 4 ports. The GoWin machines simply toss in an OCP nic slot and call it a day. That would be a fantastic solution to see adopted by other vendors.


If a home lab is for tinkering, it’s something to play with first of all, but it also helps facilitate re-patching things or moving things around, which in many cases you can do one port/cable at a time without interrupting services.

I once moved a server from one room to another by moving its redundant power cables one at a time to a UPS on a cart, then swapping its short network cables for longer ones one at a time, wheeling the thing to the other room, then "reversing" the procedure in the new location. Zero downtime!

Zero point really, but fun in the way I reckon a home lab is meant to be.


>For this article’s purpose, I won’t recommend any specific servers because this will vary a ton depending on what you will be hosting. NAS storage, VMs, web servers, backup servers, mail servers, ad blockers, and all the rest. For my requirements, I purchased a ThinkCentre M73 and ThinkCentre M715q; both used off eBay.

This is the way. HP, Lenovo, and Dell all make nearly identical small form factor PCs that are more than enough for most people at home. They are also very quiet and power efficient.

I have a Lenovo M720 Tiny and a Dell OptiPlex 3080 Micro (they are virtually the same). You can change parts, there are ample ports available, and you can pretty much run any OS you want.


I've been pretty happy with the EliteDesk G3 I got off eBay a couple months ago. I actually use it for light work (mostly spreadsheets and emails) too.

I think I paid less than $130 including shipping and sometimes it (anecdotally) has better performance than my home PC which is a full tower I bought new 4-5 years back.


I love my G3. I found one with a beefy 8-core i7 CPU that sips 7W at idle. These machines are gold.


> For my requirements, I purchased a ThinkCentre M73 and ThinkCentre M715q; both used off eBay.

This deserves a shout out. These things are everywhere on eBay, and the Ryzen ones rock.


This article has such a focus on hardware, when that's like... 1% of my homelab decision making.

Was hoping this article would delve into hosting your own email, websites and services and the steps to expose them to the public internet (as well as all the requisite security considerations needed for such things).

A homelab is just a normal PC unless you're doing homelab stuff.


This is always my area of interest, and I rarely see it covered. Maybe all the people who do it just understand it so well that it's just an afterthought for them?

I've tried on numerous occasions to host my own services at home and use dynamic DNS to map to an IP, but when things don't work, it's hard to know if it's the hardware (consumer modems, routers, etc.), dirty ISPs who want to force you into a business tier to open up anything, some combination of the two, or something else altogether. Are the problems so one-off and specific that they all require bespoke problem-solving?

I would love it if anyone could share foolproof resources on the subject.


I think it is because most of those things are just regular sysadmin things and homelab more refers to setting up hardware or software that you wouldn’t normally find at work.

As for your actual problem, a lot of ports are blocked by ISPs because a LOT of computer worms have used them to spread malware in the past. However you usually can test to see if the port is open as well as try alternative uncommon ports like 47364 to see if they work.

Some ISPs, although I believe mostly mobile ones, also use something called CGNAT, where your internet IP is actually an internal network IP within a part of the ISP; you can usually test for this by checking whether your IP is in a private range (or the 100.64.0.0/10 shared range used for CGNAT).
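
Two quick checks that can narrow it down (hostname and port are placeholders):

  # from outside your network (e.g. a phone hotspot): is the port actually reachable?
  nc -vz your-dyndns-name.example.org 443

  # compare what the internet sees with your router's WAN IP;
  # a mismatch (or an address in 100.64.0.0/10) usually means CGNAT
  curl -s https://ifconfig.me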


Did you try r/selfhosted on Reddit? That's where you'll get support for something like that.


This kind of harms the appeal of a homelab. If you search YouTube, you will find many creators sharing their labs. But when you look at the services they host, 80% of it is just stuff to maintain their infra (hypervisors, monitoring tools, high-availability stuff, etc.).

All of this is very interesting, but not very useful. I'm personally most intrigued by how people are maxing out their old hardware for years, like many examples in this thread


I recently bought an old Mac Pro 2013 (trashcan) with 12 cores/24 threads and 128 GB ECC RAM for my upgraded "always-on" machine - total cost $500. Installed Ubuntu 22.04, which works out of the box (23.10 had some issues). Unfortunately it's hard/impossible to completely suspend/disable the two internal AMD Radeon GPUs to lower power consumption. I got it down to ~99W consumption when idle by using "vgaswitcheroo" to suspend one of the GPUs (and it set the other one to the D3hot state). My Intel NUC consumes almost nothing when idle (my UPS reports 0W output while it's running, even with 4 NVMe disks attached via a Thunderbolt enclosure). I don't want a 100W heat generator running 24x7, especially when I'm away from home, so I will need to stick with the NUC...
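
For anyone fighting the same thing, the vga_switcheroo knob lives in debugfs; a sketch only, since it can only power off a GPU that isn't driving a display and behavior varies a lot by kernel/GPU:

  # inspect which GPUs vga_switcheroo sees and their power states
  sudo cat /sys/kernel/debug/vgaswitcheroo/switch

  # try to power off the inactive GPU
  echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch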


GPUs are bad with power. My RTX 3060 still consumes around 20W doing nothing, no display or real load whatsoever - more than the rest of the machine combined at idle. I wanted to sell the entire box, but instead made that machine on-demand (it's mostly off). It's also noisy, since it's relatively powerful overall. Yes, NUCs are great for 24/7 and light/spiky loads.


I get an electronics workbench or a woodworking workshop, but I still don’t quite understand the purpose of this „home lab“. You can just run stuff on your laptop, or a mini computer in some corner if it’s supposed to run 24/7?


Yeah, it's the same kind of thing as running something 24/7 in a mini computer in some corner, but scaled up.

It's fairly common for people that like to muck around with this stuff (or learn it for whatever purpose). Think of it like a hobby I guess. :)


Like with every hobby, YouTube and Reddit push people from having an interest in something to pursuing it to extreme lengths.

Many people get so struck by gearlust they lose sight of what piqued their interest in the first place.

I'm not immune, I don't think I'd have a Unifi setup at home if I hadn't seen the flashing lights and convinced myself I needed some.


It can be for fun or practical. Having access to ECC hardware is a big plus for some stuff so the server racks aren't all for show. It just depends on your goals.


Looks like it's tailored for SRE-related hobbies.


yeah was expecting/hoping for an electronics/hardware "lab" setup


Surprised author didn't mention https://labgopher.com/ for finding used enterprise hardware at low cost on ebay. This hardware is cheap and typically out-of-service for enterprise, but great for home labs.


The big problem with off-lease enterprise hardware is the power usage can quickly outrun the original costs - but that may not be an issue for everyone.

The nice thing about off-lease enterprise hardware is there is so much of it, and it's enterprise. You can easily find replacement parts, and working on that stuff is a joy, everything is easily replaceable with no tools.


THIS! My Dell and HP blade chassis/servers EAT power (10+ Xeon blades). I actually shut them down and started moving to SBC-based stuff like the ODROID H2/H3 [0]. Basically silent and it just sips power. And much less heat (I live in a desert, so that's important)!

0: https://ameridroid.com/products/odroid-h3


For what it's worth, we're just starting to get to where enterprise stuff that cares about power usage is coming off-lease. Some of the newest stuff actually sips power relatively when not under load.

My biggest (current) problem is finding something that supports a ton of SATA drives and still is low power (though at some point even the power usage of spinning rust can become noticeable).

I live in the icy wilderness, so excess power to heat isn't a problem for most of the year.


Yeah - I don't have a need for a lot of drives, but it's hard to come by... but the ODROIDs have a PCIe expansion slot, so you might be able to add an add-on controller...

Given how huge drives are now, 1 NVMe and 1 SATA/spinning rust is basically all I'd ever need (I don't do video/music/etc.).

Edit: Oh shit... just checked, it does NOT have a PCIe slot... I was sure it did :-(


For the price of an 8th-gen off-lease PC on eBay you can buy a new Alder Lake-N mini PC with a 6W TDP.


Some people even use an Alder-Lake-N miniPC as a "user-facing machine" (i.e., as the only computer in the room). I do, and I love it so far (after about 45 days of using it) because it doesn't have a fan to make noise and gradually become noisier as the machine ages.

My guess is that if I was used to a much faster machine, then the machine's slowness would bother me, but it in fact does not bother me. (The machine has an NVMe interface, and I have a fast SSD in there. I have the N100 CPU because I couldn't find the N200 or the i3-N305 for immediate delivery from a non-Chinese manufacturer.)


The notion of a homelab is really, more of a suggestion. I have, what you might call, a "homeprod."

"Be aware, the video server and sync server will be offline between the hours of 8PM and 10PM on Friday, March 8th for hardware upgrades. Expected downtime should not be more than 20 minutes but please plan around this disruption to essential services. Other services should not be affected during this work."

Like someone else mentioned in this thread: It was really best described as a "home network with more than one computer." But you can't really put that on your C.V. under "experience."

"Improved SLA uptime on client's network to five nines. Upgraded storage controllers in a live environment without disruption to established services. Deployed k8s cluster into a production environment serving thousands of hours of compressed video files with real-time format transcoding."

I will freely admit, my home network probably needs to be pared down a bit. The problem is the salvaged hardware and good deals at the liquidation auctions accumulate faster than I can cart it all off to the street corner. There are also far too many laptops floating around the house doing nothing useful that could be donated away. A lot of the homelab is salvage or bought at (in-person) liquidation auctions. There's some new stuff, but I rarely need the latest and greatest for my setup. In my home network diagram I try to document everything that touches the network so I can stay on top of the security patches, and what has access to the outside world and in what way.

https://justinlloyd.li/wp-content/uploads/2023/11/home-netwo...


> Attic

> Pros: Less noise, easier cable runs.

> Cons: can get hot depending on where you live, roof leaks, humidity/condensation, and creepy at night.

Where do attics not get hot? I live in a northern climate, and ours regularly gets hotter than 140F during sunny summer days.

And I say hotter, because the remote temp sensor I have up there maxes out at 140F. I wouldn't be surprised if it actually gets up to 150F or more.


My attic is insulated at the roof sheathing, not the ceiling. My attic is basically 'inside' the house, from an airflow perspective. It peaks around 87°F in late June. Not ideal, but workable.


Conditioned attics are the new rage. Once it's conditioned, you can move other mechanical devices up there like HVAC etc to reduce the amount of ducting your home needs (especially if it's single-story).

It's nice because that loose, shredded insulation is garbage and so easy to mess up (walking on it can change it from R40 to R20, for example). With a conditioned attic you use standard batting in between the roof members -- impossible to mess up.


Sounds like a nightmare for your energy bills. At least with interior ductwork the losses go to heating/cooling your home instead of your attic, which even with insulation is going to act like a giant heatsink that you get nothing out of.


I don't think so -- you _are_ paying to condition that part of your house, sure, but it becomes just another part of the overall envelope and acts like a buffer between the weather and the actually livable parts of your home. Anecdotally, in the summer months when my attic was unconditioned and hit 50 degrees, it literally radiated heat down through the ceiling into the rooms on the floor below. The attic acts like a heat battery -- long after the sun goes down it is still radiating heat into the upper floors of my house. Not sure why it retains heat so well; it's got louvres and vents ...


When you have ductwork in the attic your heating/cooling efficiency is hurt by the amount of heat lost through the roof and out into the environment. When the ductwork is all in interior walls, there's no efficiency lost because those losses go into the rooms the system was trying to heat/cool already.

Where I live, we have extreme ranges of temperature (both hot and cold) and almost all ductwork is on the interior for this reason. You'd be spending a fortune in the winter heating an attic when it's getting lost to -17C air temps outside...

It sounds like your attic needs more insulation.


If someone made a UPS in the form factor of a laptop battery, we could be off to the races. I thought a laptop battery would be good for this, but they do bad things if you leave them plugged in all the time.

No reason to go crazy with your home server if it is only you using it. You can get away with a crappy laptop that your roommate is done with.


So what should you do, remove the battery on these laptop servers?


Yeah, especially after a few years. Perhaps there is a way to get it to charge and discharge cyclically, but that would probably take some extra hardware.


Unless you simply want to play with all the various bits of hardware, a standard PC tower will be more power efficient and save a lot of room. I use that with unRaid plus a simple router and switch, and it's power efficient, doesn't make much noise, is cheaper, and takes up much less room.


> a standard PC tower will be more power efficient

How much does it average in power draw (watts) from the wall?


I can't speak to if it's more or less efficient, but I have one PC taking backups that's running at 43W idle. A different PC based server, which is a much more highly specced retired desktop, is running at 72W idle.


For standard PCs (i.e., with an ATX-based power supply), the 43W one seems pretty decent at idle.

The small form factor gear though can have much more efficient power supplies, so some of those reportedly idle as low as 6w. That's total power draw at the wall. Much more efficient than a standard desktop PC.

---

That aside, apparently that newer 12-volt-only power supply standard being pushed by Intel (ATX12VO) is good for low idle power too. Hopefully that gains traction and reduces everyone's power bills. :)

https://hackaday.com/2021/06/07/intels-atx12vo-standard-a-st...


If you're old and grumpy[1] and don't want a homelab but paradoxically want homelab functionality, I can't recommend the Firewalla[2] enough. And heck, it would fit right into a homelab situation, as well.

It allows you to go deep into settings and configure anything to your heart's content. But, if you want to ignore it for a year and not mess with it, it will run as smoothly as an Eero or a similar consumer product like that.

I have a Purple model[3]. I initially balked at the price, but it performs well, and there hasn't been a single day in the past couple years where I've regretted it or looked at something else.

It supports just about any router feature. Has robust and easy to configure device/family profiles. The quality of the Firewalla software and the mobile app is excellent.

---

1. I manage a quite large WAN during the day. I don't want to do it at night, too.

2. https://firewalla.com/

3. https://firewalla.com/products/firewalla-purple


I've seen it mentioned on HN before and was intrigued as a replacement/simpler solution to a home pfSense setup. It seems there are issues with it though: https://old.reddit.com/r/firewalla/comments/18wvlqa/steer_cl...

The software is open source apparently? https://github.com/firewalla/firewalla Anyone running this on their own hardware? Or in a VM homelab :) ?


It'd be so cool if we could have permanent routable IPv6 addresses for every machine. Home Labs would become 1000x more useful, and I bet Digital Ocean and similar would see an exodus of small site hosters like me


I'll go against the grain and recommend running your lab on Windows 11 Pro and Hyper V (often included with reclaimed PCs). Hyper-V is a very capable hypervisor and Windows hardware compatibility is excellent. Remote mgmt can be done over RDP, SSH or WMI

Nearly everyone has a spare PC lying around collecting dust, and within an hour it can be running a dozen VMs. Reclaimed HP & Dells are cheap and include a Windows Pro license; otherwise a license can be had for $20.

I recommend Alpine for the guest OS. It's fast to set up, natively supports Hyper-V hosts, and is compact (about 1/10th the size of a Debian guest).

https://learn.microsoft.com/en-us/windows-server/virtualizat...

https://wiki.alpinelinux.org/wiki/Hyper-V_guest_services
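In case it saves someone a search: if I remember right, the guest-side setup on Alpine is just the hvtools package plus its OpenRC services, roughly like this (double-check the names against the wiki page above):

  apk add hvtools                      # Hyper-V KVP/VSS/fcopy guest daemons
  rc-update add hv_kvp_daemon default  # host<->guest key-value exchange
  rc-update add hv_vss_daemon default  # lets Hyper-V take quiesced checkpoints
  rc-service hv_kvp_daemon start
  rc-service hv_vss_daemon start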


How do you handle the fact that Windows 11 is a desktop OS and tends to restart for updates and such whenever it feels like it? I would imagine it's a bit of a pain to have the system reboot at random, killing your VMs etc.


Windows 10 hardly restarts that often. There's no pain.

If I reboot windows, my Debian VM comes right back up, with all the docker-compose services running (a la `restart: always`). Like nothing ever happened.

So I just don't think this is a problem. But that's just my experience...
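For the curious, that compose behaviour is just Docker's restart policy; a rough CLI equivalent (the container name and image here are only examples):

  # Start a container that Docker brings back automatically after a host reboot
  docker run -d --name pihole --restart=always pihole/pihole

  # Or retrofit the policy onto containers that are already running
  docker update --restart=always $(docker ps -q)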


I manage this by scheduling restarts. Maybe 2-3x / month.

fwiw my linux machines get periodic kernel & firmware updates that also need restarting


This is how I have my setup running, a moderately-outdated, but still beefy (E5-2699v4, 512GB ram, etc), Dell Precision tower, which originally started as a design machine with a handful of Autodesk and EDA tools. I wanted to try out proxmox, but I was too lazy to want to move all of those installed things off onto another host, reinstall, then move it back into a Windows VM. I suppose I could still run proxmox in a HyperV instance and then VMs within there, but it feels like too many layers of indirection.

HyperV is perfect for what I want to do, run a dozen or two sandbox and experimental VMs. I can pass through hardware, assign raw volumes, partition out network interfaces for routers and other shenanigans and it's stable too. And with RDP, I just use the mac version of the remote desktop client to use it interactively, which also works surprisingly well even over tailscale on another continent!


Sounds like an awesome setup. This is what I tell my friends: just run Hyper-V. The VMs can easily be checkpointed and moved to another hypervisor.

Use what you got!


I absolutely agree. Windows Pro + Hyper-V + Linux = super good. And I'm a Mac guy, if anything.

Remote Desktop is really good, too. I use it all the time from my MacBook (though I mostly SSH, naturally). WinSCP + midnight commander are all the extras I need.

If you have enough memory/CPU on your workstation/gaming machine, Just 1 PC works great.


You could use that built-in OEM license from a Windows VM in Proxmox. Just search for that on the Proxmox forum.

Hyper-V is very limited in hardware support; you won't be able to pass through stuff. And you always pay for the VM layer, while in Proxmox you could use containers, which give almost host-level performance.


Hyper-V Server 2019 (not the same as Hyper-V inside Win 10/11) is free and has GPU passthrough

https://learn.microsoft.com/en-us/windows-server/virtualizat...

https://www.microsoft.com/en-us/evalcenter/evaluate-hyper-v-...


That's changed in recent years. You now have access to hardware and the GPU for inference, for example.


Maybe in the Server version. I haven't seen anything useful in Windows Pro. I could not set up SR-IOV even with PowerShell after wasting a lot of time searching for help.


Not sure why you are downvoted; Hyper-V is perfectly fine.

Hyper-V Server 2019 is also free, you don't have to pay for anything: https://www.microsoft.com/en-us/evalcenter/evaluate-hyper-v-...

And you can install a full GUI too https://gist.github.com/bp2008/922b326bf30222b51da08146746c7...


that's an interesting option. does it boot into cmd.exe by default?


Yes, only that, but you can install a GUI + a web interface.


Tiny PCs make excellent low-power servers for lots of purposes, but personally in the end I just decided to build my own desktop PC from cheapish previous-gen parts to have more flexibility. Now I have an AM4 consumer-board-based system with an 8-core Ryzen 7, more ECC RAM than I will ever need, 4 enterprise-grade SSDs with ZFS, an Intel NIC, and an RTX 3060 for some AI experimentation. Thanks to an efficient PSU choice the whole thing idles at just 30W running Linux, about 10 of which is due to the dedicated GPU. I should even be able to game on it through GPU passthrough to a Proxmox VM, though I haven't tried it yet.

For networking I can recommend Fujitsu Futro S920 thin clients, especially for EU users. They are dirt cheap here, and with aftermarket riser & intel NIC they are great routers with opnsense/pfsense. Apart from that, I have a couple of raspberry pi's running dedicated tasks, and space heaters (e-waste tier enterprise desktops) running BOINC during coldest months.


At the risk of yucking others' yum: I wish there was more of an effort to get people to do this sort of thing without buying new equipment (including perhaps a reduction of the idea that you need/want a "rack").

There are just SO MANY perfectly good computers that are bound to be e-waste that would be good for this.


Most people don't buy "new" hardware. Power efficiency has changed drastically, to the point that anything before ~2016 is not worth plugging into the wall, and the move from SATA to NVMe drastically improves responsiveness, which didn't become the base standard until 2018-ish. Most people use an old desktop or buy old server equipment on eBay.


Yeah - old eBay server equipment can be had for like $200/node and will burn at least that much in electricity every year :-/ Also, the noise is hellacious!

Old laptops / desktops can be better - especially since you can get newer/more power efficient units without all the noise. But I started moving to even smaller SBC based stuff (0)

0: https://ameridroid.com/products/odroid-h3


Alternatively, if you don't fancy patching your own cables you can get female-to-female RJ45 keystone jacks which take plugged/crimped cables. The longer cable run is plugged in behind the patch panel, and a shorter cable in front to keep your cabinet or rack clean.


I run a more mundane setup using the most boring choices possible (lol) and it's been reliable. I occasionally restart random things and pull network cables to fuck with it, to make sure that it can recover.

* Hardware: A regular but small (SFF) computer + Synology NAS

* Software: Ubuntu + bare Docker containers

* Configuration: Git-versioned Dockerfiles and docker-compose.ymls

* Storage: The NAS with Synology Hybrid RAID

* Backup: TBD... haven't figured out a cheap and automatic way to backup 20 TB of data off-site

I run cameras, a media server shared with friends, a shared place to dump project files, a DNS server (just in case the Internet goes down so that everything local still works) and supporting services.
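If it helps anyone picture the workflow, the day-to-day loop with git-versioned compose files is roughly this (the repo path is made up):

  cd ~/homelab && git pull                 # config lives in the repo
  docker compose up -d --remove-orphans    # converge running containers to the files
  docker compose ps                        # sanity check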


One bit of guidance I've been trying to figure out is this: how does one properly segment one's homelab to be able to firewall potentially dangerous stuff (e.g. IoT gear, hosting publicly-available websites via Cloudflare tunnels or something, all while running on the usual "virtualize all workloads" thing with Docker/Proxmox, etc)?

The best I've come up with so far is segmentation via VLANs, then adding many rules to deal with inter-VLAN reachability (e.g. should it be forced to transit to the firewall and back? Can my laptop reach out to the IoT devices, even if I block connections in the reverse direction?).

Beyond this, I'm not sure how to best handle similar segmentation on my few servers (all running proxmox), e.g. whether to expose VLANs to the VMs (which I kinda need to do for the pfsense firewall, maybe?).

The biggest thing is how to keep these network configs in sync, e.g. when adding a new VLAN, etc. I basically have an Org file where I write all this down, but I'm guessing there's a better way.

Also, if anyone wants a cheaper alternative to Ubiquiti, Mikrotik has served me well for years. I'm considering switching to Ubiquiti stuff though, as they seem to have a solution to synchronize state between multiple switches that is much better (vs. manually using Winbox on each individual router).
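(For concreteness, by "expose VLANs to the VMs" I just mean ordinary 802.1Q tagged subinterfaces on the host, e.g. with iproute2 -- the interface name and VLAN ID below are made up:)

  ip link add link eno1 name eno1.30 type vlan id 30   # VLAN 30 = IoT, say
  ip link set eno1.30 up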


Shameless plug, but if you're looking for awesome software to run in your home lab, I've got just the thing:

https://awsmfoss.com

The quality of software out there these days is insane -- there are projects that cover so many crucial functions (tracking expenses, playing music, managing data, etc), that it's becoming really hard to track them all.

I've been working on converting the site to a directory and it's not quite there yet -- but some people do like the massive wall of projects.


This looks really cool, but the title has to be a joke right? This is not 'beginner'.

I have a beginner setup: a single NUC and a cheap NAS with NFS.

I have yet to come across something that this setup cannot handle.


My setup is even more beginner. I have an old Elitedesk running Windows with some external hard drives for extra media storage.


> ...or in a cardboard box

When I saw the picture of the server mounted in a cardboard box, my first thought was about fire hazards. It turns out that might not be as big an issue as I initially thought, since paper won't spontaneously catch fire until reaching ~200C [1], and the operating temperatures of computers generally shouldn't exceed 100C. However, I'd still be a little wary about a cardboard setup, considering the potential for electrical faults, the heat generated by power supplies and the risk of inadequate ventilation leading to hotspots. These factors could present a significant fire risk. My initial judgment of the setup may have been too quick but not entirely unfounded I guess!

[1] https://en.wikipedia.org/wiki/Autoignition_temperature


There are a ton of resources about HW aspects of home labs for beginners but not so much for what to run on them and why. There are lists like https://github.com/awesome-selfhosted/awesome-selfhosted but they are confusing for absolute beginners like me. Are there any good SW project guides you know?


I’m running a scraper for Dune movie tickets on mine. Before that I didn’t really have anything to do on it besides learning about Proxmox, VMs, networking, and cloud-init. Maybe try to run something you’d usually spin up a VPS for.


Admittedly, it makes way more sense once you have more than one person in your household. I ended up running most things in docker on a Raspberry Pi 4.

Started with

- PiHole to cut down on ads and try to keep the kids from clicking malware links

- added network attached storage to make it easier to share files with the family

- added Jellyfin to share music with people in the house (kids gotta grow up listening to good music)

- added syncthing to be able to ditch dropbox and still keep multiple computer file folders synced

- ran a Minecraft server, then a Terraria server, then shut those down since we moved on to other games

- turned on an ntp server since ddwrt router wasn't getting the time from the internet reliably

- added librespeed and ittools for quick wifi-speed-testing and to convert docker-run commands to docker-compose commands

- needed a webpage with links to start keeping track of all of this, so lighttpd it is...

- smokeping and blip were fired up when the internet got flaky and I needed more data to explain to the ISP what I was seeing

- Got a 3D printer, and Octoprint made it easy to start and monitor prints without having to pull a microSD card out of the socket every time.

- Figured, while we're here, that I might as well explore the smart-home thing with HomeAssistant -- now we have a handful of lightbulbs and smart power switches that mean you don't have to get up to change the lights quite as often (not everything is on a smart control, just a few out-of-the-way things. Oh, yeah, handy during Christmas-time to turn the tree lights on and off without having to fumble behind the tree for the little switch.)

- That, of course, lead to an MQTT broker (mosquitto), which is kind of a budget pub-sub broker

- And now that I have docker logs with things I need to debug, Dozzle serves those logs in a handy webpage. Not secure, but it's not like this is enterprise server.

- Then I wanted to play around with timeseries data (just to learn), so (running out of RAM at this point, so I couldn't just fire up Grafana) kept digging until I found VictoriaMetrics, which was quite happy to catch metrics (fired at it from Benthos) and visualize some 3 years of hourly Beijing weather data. (Seriously, I'm still amazed how lightweight, simple, and easy to use VictoriaMetrics is)

- currently thinking of kicking the tires on the NATS message broker and see if I can get the multi-node clustering to work. No real purpose other than see what it's like and dream about silly developer projects.

- also thinking of exploring SeaweedFS -- might be nice to have horizontally scalable storage, as each of my nodes are scattered around the house depending on where I can put an Ethernet drop.

- after a few years going along this journey, it has the perk of having helped me get my current job, which is pretty cool.

So... it's mostly a plaything, a sandbox, but a handful of things run "in production" at home and make life around the house slightly more convenient and less risky. By being choosy with the software and a bit aggressive on the container RAM limits, I've got 27 containers running on a Raspberry Pi. Hey, gotta take my bragging rights somewhere.
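(The "aggressive RAM limits" bit is nothing fancy, just Docker's memory flag; the image and limit here are only examples:)

  # Cap a container so one runaway service can't take down the whole Pi
  docker run -d --name mosquitto --memory=128m --restart=unless-stopped eclipse-mosquitto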



I took a look, and I have it running with nothing hooked up. I haven't decided which of the remote log-shipping tools I want to hang my hat on. Two docker plugins I tried (Loki and elastic, I think) didn't seem to be compiled for ARMv7, so... maybe I'll try to get benthos streams mode to tail the logs, format them as json, and fire them into the VictoriaLogs webhook. But who knows when I'll figure that out.

I did get a 3-node NATS cluster running across 3 servers this weekend, that was neat how easy that ended up being.


Rack mount equipment is often more expensive. Also noisy.

I have mostly consumer (non-rack) equipment in a small Ikea closet. Looks less nerdy, which also improves the WAF.


If you go the eBay route, the price isn't bad... But people can literally hear my servers on the phone (1 adjacent wall) if I fire all of them up. Also, power draw was crazy - each blade chassis pulls 2500W-3000W!

SBC-based stuff is plenty powerful for most things now, MUCH cheaper than NUCs, and SILENT! (Great for my sanity)


Some of it is quite quiet nowadays, when not under load.

I can hear my racks but they're about as quiet as the furnace until I do a really big compile.

On reboot they go into full speed fan mode until the controller comes online, that sounds like a jet taking off.


It is, mostly, but if you’re handy with a soldering iron, you can easily solve that problem with a lot of equipment as long as you have good airflow and cooling where your rack is located.


I have an obscene amount of retired Chromeboxen that I have flashed with Mr.Chromebox[0] EFI and running as either K8s, Proxmox or independent Linux/Windows nodes at any given time.

These are mostly 1st-gen i7-4600U Haswells, upgraded to 16GB RAM and 128GB+ SSDs. I ripped them out of their cases, added heatsinks to the new SSDs, and then "racked" them by wedging the front-bottom mainboard corner (if plugs are oriented to the rear, with the heaviest plug, AC, at the rear-bottom for balance) into a slotted piece of super-strong, right-angled cardboard that came from inside some reinforced shipping box (I think it was an X-mas tree).

I assembled all of this juuuuuuust before COVID while learning K8s and craving the physical plan aspect, as I had just transitioned to a remote SaaS role from one that involved multiple physical DCs and travel.

My "go small lab" initiative turned into some sort of Tribble fable.

[0] https://mrchromebox.tech/


I’m trying to come up with the best 3/2/1 solution for me. Two forms of storage seems bizzare, are people really backing up to tapes? What does that workflow even look like? What about your offsite? Backblaze b2 seems expensive compared to backing up on a big external drive off site at my desk at work.


I rotate two 8 TB USB hard drives between home and my locker at work, every month or two. Full-disk encrypted, of course. This covers my entire system and media files.

For /etc and my source code repositories, I use restic and cloud storage, because this data is small (< 1 GB). (Also, I'm on rural internet, which is metered and slow. So I can't use cloud backup for the big stuff.)

The whole system is not quite 3/2/1, but it's close enough for me.
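If the restic part is the unfamiliar bit, it's only a handful of commands; a sketch with a made-up B2 bucket (the B2 backend also wants its credential env vars set):

  export RESTIC_REPOSITORY=b2:my-bucket:restic
  export RESTIC_PASSWORD_FILE=~/.config/restic/pw
  restic init                                    # once, to create the repo
  restic backup /etc ~/src                       # encrypted, deduplicated snapshots
  restic forget --keep-daily 7 --keep-weekly 8 --prune   # thin out old snapshots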


what would be the typical use of such a lab?

Why are so many different components needed?


The responses are kinda a mixed bag here - there are two totally different use cases.

If you want something for practical purposes, /r/selfhosted actually has you covered. It's usually Jellyfin for movies, etc. And you use a NAS or a PC, but the idea is normally that you set up something reliable, don't lose your data, and it doesn't require too much tinkering. Maybe once in a while run some docker container to test it out. That's what I have now.

Then there are people who get many Raspberry Pis or NUCs and basically play the 'build a datacenter at home' game. It's good for tinkering and you often mess it up and start from scratch. I've done that, it's kind of fun, a cool experience, I learned a lot about networking, installed Kubernetes, then sold the machines.

Then a third group are people who build a whole data centre at home AND run serious applications on it. These guys need to buy themselves a motorbike or something.


>Then a third group are people who build a whole data centre at home AND run serious applications on it. These guys need to buy themselves a motorbike or something.

I don't think the insult to a generalized group of people is necessary.

Some enthusiasts have overkill racks, 40Gbps fiber, enterprise-class switches and vlans, etc, etc ... because it's their enjoyable hobby. The over-the-top components and making it all work is all part of their fun.

It's not a hobby I'd personally pursue but I do understand why it can attract a passionate set of people.


It’s not meant to be insulting, it’s a joke in good spirit

I think it communicates the point that their setup requires a level of effort and commitment that’s not for everyone


> their setup requires a level of effort and commitment that’s not for everyone

Depends!

A lot of people doing this kind of stuff probably do this stuff professionally. That drops most of the “learn how everything works” part of the project and substantially lowers the effort.

I’ve got a half dozen Lenovo Tiny PCs sitting in a cobbled together wooden holder and plugged in to a cheap managed gigabit TP-Link switch. One runs TrueNAS to provide persistent storage, and the rest run Debian+k3s.

Total initial monetary investment was maybe $500 for the PCs, RAM upgrades, some additional hard drives, etc.

Total initial time investment was probably my evenings for a week. And the majority of that was fighting with the friggin’ Debian installer so I could PXE boot the machines (or any future machines) and have them automatically come up fully installed and configured and joined to the cluster with no interaction.

As far as ongoing commitment… it’s been very minimal. It just runs. If anything fails, needs an upgrade, or anything else happens… I just reboot the node and wait 10 minutes while it blows itself away and sets itself back up. Managing workloads is just editing some YAML files and pushing them up to the GitLab instance running on there.

Barely more work than running any of this on AWS and it costs substantially less than the $4k/yr or something I was paying before.
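For anyone about to fight the same fight: the moving parts are a TFTP/proxy-DHCP server plus a preseed file. A minimal dnsmasq sketch, with addresses and paths invented:

  # Serve PXE boot files alongside the existing DHCP server (proxy mode)
  dnsmasq --port=0 \
          --dhcp-range=192.168.1.0,proxy \
          --enable-tftp --tftp-root=/srv/tftp \
          --pxe-service=x86PC,"Install Debian",pxelinux

  # The kernel line in the pxelinux config then points d-i at the preseed, e.g.
  #   auto=true priority=critical url=http://192.168.1.10/preseed.cfg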


  > It’s not meant to be insulting, it’s a joke in good spirit
Indeed. That said it rings a bit true for me.

The day I sold my upgraded Macintosh and bought a motorcycle was, in hindsight, one of the best decisions I ever made.

I was without a home computer for about 3 years before I got another one, though I was a programmer by trade.

But the bike, well, just going to say far, far more ups than downs. Never regretted that trade.


I am literally thinking of doing the same thing :)


>It’s not meant to be insulting, it’s a joke in good spirit

Oh ok. It's hard to tell you're joking because you've made condescending comments about generalized groups of people in the past. But I've never downvoted them and just moved on. I stopped to reply in this case because I recently saw a bunch of "insane petabyte homelab tour" videos where the owners took a lot of pride in showing their setups. It sounded like the "buy a motorcycle" advice was dismissive of their hobby.


Personally I run frigate, gitea, jellyfin, file storage, and a bunch of other minor test environments.

I find it highly convenient personally. I have a data cap, so moving large files around happens over 10GbE locally instead of over the internet to AWS or whatever. It's also all mine, so from a security and privacy perspective it has a lot of advantages.


Do you do any offsite backup?


Piling on, my homelab is currently running

* calendar/contact sync (webdav)

* personal crm

* notes webapp

* email (yes, my primary one. helps when you're running business internet from home rather than a random ip on a vps)

* mediawiki for writing

* rss reader webapp

* internet file sharing (basically selfhosted dropbox)

* virtual tabletop for d&d-like games

* reverse proxy/DNS/cert management (a Let's Encrypt post-renew hook that scps the certs where they need to go; sketched below)/zfs

This runs on an old rackmount server under a table using KVM/QEMU (once you get the commands written in your notes it's not so bad to manage purely from the command line), a cheap managed switch, and an x86 router. I back up important stuff (email on the server, then lots of stuff on my desktop) to Backblaze.
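The cert bit sounds fancier than it is; assuming certbot, the post-renew hook mentioned above is basically one script dropped into the deploy-hooks directory (the host names here are invented):

  #!/bin/sh
  # saved (and chmod +x'd) as /etc/letsencrypt/renewal-hooks/deploy/push-certs.sh
  # certbot sets RENEWED_LINEAGE to the live dir of the cert that just renewed
  scp "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" mail.internal:/etc/ssl/
  ssh mail.internal 'systemctl reload postfix dovecot'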


For me, mostly for experimentation with networking. For a practical example: most of my complaints and gripes with IPv6 stem from trying to actually run an IPv6 network, and realizing that folks proclaiming that IPv6 is ready for everyone probably haven't tried to administer a non-trivial network. Or they're getting enterprise vendor help.

Looks like the author of the story is also dabbling in reliability engineering; having hardware can be a little helpful for that too, since there are often vendor quirks.


It means different things to different people, but I'd say for most it's some mixture of the following:

1. Enjoyment of building an enthusiast system from select components,

2. Having a more capable and flexible system,

3. Learning how networking works by being your own network admin.

You don't need all the components. Many will get more just because they want to (see point 1), but mostly you get them because you need to (see point 2).


I'll dogpile on here with my uses:

- AdGuard / PiHole

- Plex server (plus the whole suite of *arr -- I gave up Netflix during the password sharing debacle)

- Home Assistant (I'm on a huge data-ownership crusade this year)

- Stats tracking for my car (TeslaMate -- this is such an amazing piece of software, cannot recommend it enough if you've got a Tesla)

- general tinkering

I'm currently building a house as well and I plan on deploying a way more complicated networking setup (full-ubiquiti) so I can get rid of all my internet-connected security devices (cams and doorbell for example).

Honestly... I used to build PCs (I still do when I need one) and now this fills that gap for me. I spend way too much money on it and it makes me happy 'cause I'm having fun. I get to tinker and break things in the privacy of my home. I get to learn about Linux administration (this is honestly the least fun part) and networking.


> what would be the typical use of such a lab?

Usually self-hosting things and making things more complicated for fun.

> Why are so many different components needed?

Learning experience. Many home lab enthusiasts are sysadmins(-adjacent) and use it as a learning experience.


I use mine for:

* Streaming media server with Plex

* Jenkins build server

* Game servers (Minecraft, Factorio, etc.)

* Photo syncing with Immich

* File syncing with Syncthing

* Alternative frontends for Reddit and YouTube with Teddit and Invidious

* Home automation with Home Assistant


Partially to practice things you do (or want to do) at work, partially because it’s just fun.

I set up a Mellanox mesh network between my three compute nodes a couple of weeks ago. 56 Gbps Infiniband is pretty sweet, although my CPU is too slow to fully take advantage of it (I don’t have RoCE working yet, so it’s IPoIB). Still, a ~20x speed up for my Ceph cluster on NVMe is lovely, and makes it actually viable for use.


Typical use in my case is to test things before deploying them somewhere else.

For example - Proxmox, XCP-ng, VMware ESXi, Kubernetes, Docker, etc, etc. All of this requires knowledge and experience.


>Proxmox, XCP-ng, VMWare ESXi, Kubernetes, docker, etc, etc.

It will be a bit general but what's the best use case here?

I mean, say you want to build a home server and self-host some services; there are so many options.

- Proxmox, run a VM for each service. And it supports LXC too.

- A vanilla Linux server which runs a type 2 hypervisor (QEMU) for each service

- Docker can also be done on a single server, but some people do it with something like Unraid (which is for NAS but supports virtualization too). Very similarly, you can install Docker in OpenMediaVault too.

Feels like there are so many ways to achieve the same thing.


It comes down to what you want to do and what you're comfortable with.

For the last 4 years, I've used a Proxmox and LXC-based setup. I create LXCs for specific services like nfs, jellyfin, and minecraft where I can fine-tune the memory and cpu specs or turn off the LXC when not in use. Then I have an LXC for all of my docker-based services.

I manage the LXC provisioning with ansible, the source code is available on github. https://github.com/andrewzah/service-automation/tree/master/...
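For anyone curious what that provisioning boils down to (whether driven via the API or the CLI), by hand with Proxmox's pct tool it looks roughly like this -- the template name and VMID are placeholders, see `pveam available` for real ones:

  pveam update                                   # refresh the template catalogue
  pveam download local debian-12-standard_12.2-1_amd64.tar.zst
  pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --hostname jellyfin --cores 2 --memory 2048 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp
  pct start 110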


But how do you install Proxmox in a VM (QEMU/KVM)?

Sometimes you need hardware that you can trash within a day or two.


Proxmox + portainer


Depending on your use case, some of the components might not be required, like a discrete GPU.

You can start self hosting on a mini PC if you want.


>That said, be careful with the power consumption! You don’t want to end up with… mortgage of $1,100, a power bill of $1,200, the feeling when walking into your home lab… NOT priceless.

This is the biggest thing for me, I would love to have a big fancy lab setup at home, but when you start adding up the power consumption and put a monthly dollar amount to it, then it becomes less exciting. Especially if you live somewhere like California where home electricity bills are incredibly high nowadays, it is extra painful.

There are some low-power setups that are great, but it seems like once you start using rack-mounted equipment, the machines get more power-hungry.


I have a lab, and I had a lot of configuration state that was annoying to maintain and fragile. I switched to VMUG because vSphere has a semi-decent Terraform provider, and I now put most things in TF.

TF to bootstrap VMs + an RKE cluster. TF to deploy Helm charts. Still some per-app config/backups to manage, but I just copy container volumes to all the machines in the cluster with the CSI plugin and then back everything up to a ZFS NAS via TrueNAS over NFS.

Runs primarily on 3 Dell Rx30-series boxes, and then my desktop and HTPC are also in vSphere running GPU-centric things when I'm not gaming with them.

VMUG is pretty perfect for this -> $180/yr = 16 or 36 physical sockets.


I'll add a #protip about UPS. Instead of getting CyberPower or APC, I recently picked up an EcoFlow River 2 Pro[1] (portable power station). It has 768Wh capacity, 800W output, and pass-through charging. Crucially, it switches fast enough not to cause any outage to any of my gear (Ubiquiti, HP ProLiant, Raspberry Pi, Synology). As a bonus, you also get the ability to do solar charging and USB power outlets.

[1] https://us.ecoflow.com/products/river-2-pro-portable-power-s...


I have two gaming computers that I’d like to vent up into my attic. Anyone tried this?

I considered moving them to the garage and then running USB-over-fiber to access them but have a ton of flight sim gear I need powered as well.


I'm stuck trying to understand what exactly a home lab is: Apparently a computer at home that has similarities with common production servers?

Here's my naive question: What is the point of a home lab, i.e. what makes it better than running stuff on the machine you're developing? I can see some advantages in networking applications (as mentioned in the article), but I have the impression this is not too critical for most applications and devs.

Don't get me wrong, I'm a fan of building stuff like this, and will always be able to find an excuse to do it anyway. :)


>what exactly a home lab

It's murky because there is a bit of backstory. There used to be a homeserver subreddit. The people running commercial heavy rack servers got fed up with the peasants and created the homelab sub. Then the overall quality & vibe in the homeserver sub dipped, so everyone migrated over to homelab. So now you've got everything under the sun that is at home and switched on 24/7 being called a "homelab", leading, understandably, to people being confused about the term.

>What is the point of a home lab

People do it for different reasons - "own your own data", learning (IaC/docker/kubernetes/networking). It's also quite empowering for experimentation. Most of these setups are running hypervisors, so creating a VM is a 20-second affair. That sort of blank slate is useful to me for dev work. Local GitLab server, local CI runners, local Kubernetes cluster, Postgres server. It's all there: fast, accessible and, ignoring the initial outlay, effectively free.


It's a server, so the goal is to have it run something you care about (whatever that may be) 24/7, not because you cannot live without it, but to get experience with administrating a server.

In contrast, your dev machine may need to be rebooted any time, or switched off if you leave for the weekend.

A second defining characteristic of a homelab is that part of the rationale for owning it is in order to tinker with it, to try out new software packages, to play with tools & concepts (VMs, containers, admin and remote admin tools, KVMs).

Ultimately, you may define a homelab as a "physical server setup for learning and for running personal services". Homelab operators may also take pride in the independence from large IT or cloud providers attained by operating their lab.


It's at least one server (_any_ computer that never turns off) hosting things you would use (and probably pay for) on a day-to-day basis.

The most important point is to be available all the time on multiple devices that you own; that's why some folks don't want to do these things on their main machines. A laptop you could take away, and your family at home would/could lose access to the things you're hosting. A beefy desktop might be a bit too power hungry to be on all the time.


It's a hobby. It doesn't need to have a point. It's kind of like playing games - it doesn't need to have a point, but it's enjoyable, and you get a community out of it.


Yeah, what exactly is Lego? Painting? Playing music? Model railroad? Ham radio?

It’s a place to learn while playing pretend. LARPing enterprise IT if you like. Why wonder what the point is?


It's linked there, but I found this interesting: To get some more inspiration, visit:

https://www.reddit.com/r/homelab/

https://www.reddit.com/r/minilab/

My "homelab" for now is a single Raspberry Pi (I want it silent and not take much power when idling). But from there, I can power-on my desktop, via wakeonlan.


Do you have easy ways to monitor power usage on any power plug over time, without needing to check a display on the plug or install some shady Chinese Android app?

I have a nice used Dell PowerEdge T320 with a single Xeon E5-2428L that's been running on only an SSD (and then a RAID 1 of 3.5" HDDs) for over 8 years, but electricity prices are rising and I wish I could measure it precisely, idle and under load.


While not broken out per plug, APC UPS network management cards provide total power output data (current, voltage, frequency, power) via SNMP, which you can log using a wide variety of tools.

And even without external tools, historical power usage logs are available via the APC Web UI.

While I don't currently log anything externally, I use an xbar[1] script[2] to display UPS output current in my Mac menu bar.

[1] https://xbarapp.com

[2] https://jasomill.at/apc-nmc-status.5s.sh
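The polling side is a one-liner per sample; a sketch with a made-up address, and OIDs from memory that should be verified against APC's PowerNet MIB:

  # Output load (%) and output current (A) from the NMC, logged however you like
  snmpget -v1 -c public 192.168.1.20 \
      .1.3.6.1.4.1.318.1.1.1.4.2.3.0 \
      .1.3.6.1.4.1.318.1.1.1.4.2.4.0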


> For my home lab, I won’t be needing CAT8 or CAT7 anytime soon

Has anyone ever needed CAT7 (or better)? CAT6 works for 10 GBit at a short distance, but in my experience (in corporate IT) I've literally never seen a 10 GBit Ethernet port! SFPs are the shit, direct attach SFPs are cheap as chips and fiber ones aren't even that expensive anymore. And of course fiber will handily beat twisted pair in signal integrity at any distance.


I'll toss mine out there. A single computer. Ryzen 1700 with 128GB RAM, six or seven disks totalling 20TB storage or so. SSD for VM images. Proxmox for OS, on a UPS.

I have one of those SoC Intel boxes with 4x2.5Gb LAN running Proxmox, which runs OPNsense for my router. I had to get a fan to keep it from overheating.

Proxmox is a great SDN/VM host and I haven't fully explored monitoring and automating it.


I have two 4GB RockPro64s with the NAS case in my closet. Each with an SSD RAID. There are no fans in this setup and the SBCs don't take much space. No heat either. More than happy with these, good uptime too. Tow-Boot gives me a UEFI environment so I can use some normal distros; I run the latest Fedora Server.


What's your RAID setup with the SSDs? I see 4TB SSDs have marched down to $200. Would be nice to never hear spinning rust again.


BTRFS raid1 for now. I’ll try to push for raid10 maybe by the end of the year since SSDs draw very little power.

Device is finicky with SATA adapters. I'm using some Marvell controllers at the moment.


My "home lab" is a raspberry pi 4 and a few more devices thrown into a drawer. Not pretty (especially the gordian cable knot), but it works.


"safely fail" what's wrong with just taking my laptop and do my experiments there? Why is a "lab" needed?


It’s completely unclear to me from the post or comments here what a homelab is actually for


So running 24/7 and having no ECC is not a problem at all?


Not ... really?

https://www.klennet.com/notes/2022-11-25-zfs-and-ecc-rant.as... is a pretty good overview. ECC is "nice to have if you can get it cheap enough" but it is not a ride or die.


> Whatever method you use to prevent tampering with your backups (e.g., when a cryptovirus silently corrupts your data) should catch the RAM problem too. If it does not, you need to change the method rather than rely on ECC RAM.

> If your backup and recovery strategy relies on ZFS using ECC RAM, you should rework your backup strategy. After you do not need ECC RAM, you can buy it as a matter of convenience to save time troubleshooting and/or restoring from the backup.

What's the solution the article alludes to?


If you buy a non-mini machine that takes UDIMMs, not SO-DIMMs, some generic ECC DDR4 is very cheap, e.g. Kingston Server. Works well with a Dell Precision Tower.


how/where can a complete noob learn all these skills to set up something like this at home?


The non-noobs also don't have all the skills. The main skill is being able to progress towards a goal even though you don't know how, by searching for others that have done similar things and adapting their instructions.

Some starter goals:

* Get a Raspberry Pi

* Assemble the case for it, format the SD card, start the OS

* Install PiHole & Tailscale

* Setup Tailscale on your phone so that you can use PiHole on the go

* Adjust your router so that most devices go through PiHole

* Get a more powerful router or OpenWRT software for the router so that you can do hairpin NAT to reroute everything to the PiHole.

* Attach a hard drive to your Raspberry Pi...
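A sketch of the PiHole & Tailscale step from the list above, using each project's documented install scripts (read them first if piping curl into a shell makes you nervous):

  curl -sSL https://install.pi-hole.net | bash      # Pi-hole installer
  curl -fsSL https://tailscale.com/install.sh | sh  # Tailscale installer
  sudo tailscale up                                 # authenticate the Pi to your tailnet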


Or just use whatever old piece of hardware you have laying around, cheap is decent, but free is gold.


First: try running just about anything from https://awesome-selfhosted.net . Pick something useful to you.

I chose Nextcloud and Jellyfin when I started in 2020. Before then I was running a few random things like Syncthing and Samba mostly just for myself. In 2020 I decided to try and apply a more thoughtful, disciplined process. Good security, reliable backups, useful for my whole family, etc.

Shameless self-promotion: https://selfhostbook.com . My book covers justification for self-hosting, fundamental skills, and homelab (or perhaps "homeprod") setup with Ubuntu, Ansible, Docker, and Traefik.

I wrote it for folks with some light sysadmin/programming skills. It's a good general starting point for self-hosting.


I started a few months ago. Reddit's /r/homelab has lots of great resources, and you can get some good deals through /r/homelabsales, though the best deals usually go quickly.

Like with learning a programming language, having a specific goal helps a lot.

Reading other people's posts about their hardware, how they use it, and what problems they solved with their homelab can be a source of inspiration that sets you on the right path.

Like others on this thread have said, while it's tempting to immediately dive into racks, patch cables and network diagrams, you can achieve A LOT with something simple like a small NAS or a NUC. Even a spare laptop that you can repurpose might be more than sufficient.

Unless you're trying to simulate managing enterprise hardware (and I mean physically managing it), the greatest value lies in learning the software side of things, IMHO.


Lovely guide


Can we stop with all the "homelab in living room = divorce", "one less closet = unhappy wife" etc jokes?

Reminds me of all the "so simple your <older female relative> could use it" comments on older tech product advertising.

Just smells of yet more boys club humour that adds to the difficulty of including women in the hobby + industry. It's the small jokes that we're supposed to just quietly smile along with, as well as the big, that cause problems.

Otherwise, nice post! I've had my homelab for years now and have been looking at expanding, definitely some things there I'll be looking at.


Yes, agree with this. On the homelab subreddit I've seen people suggest it's a bad thing if someone's wife isn't okay with a mess of wires accompanying a rack mount server and some other older PCs located in a living room. They then proceed to question the poster's "manliness" (in other words) for suggesting they plan on cleaning it up or moving it.

Anyways to your original point I feel as if online people feel like marriage is a bunch of negative compromises that are always one sided or something. Homelabs are great but please don’t put loud servers and PC’s in the living room. :)


> ...I feel as if online people feel like marriage is a bunch of negative compromises that are always one sided or something.

Bitching about the negative things in life with one's buddies is an activity that stretches back for hundreds of years, if not for as long as we've had language. It's a bonding activity, and an essential one.

The big difference between -say- the 1980s and now is that one's buddies often congregate in clubhouses that post their conversations publicly, rather than just shooting the shit verbally. So, folks "these days" get to see how much convivial bitching between buddies there actually is out there, rather than being blissfully unaware of it.

Yes, there are certainly folks who are dreadfully serious about their complaints and hate their SO with a burning passion or whatever... but those folks are certainly in the minority of complainers.


It's not just that the posts are public, it's also that some unknown portion of your "buddies" on a place like Reddit or HN are actually female.

Slightly misogynistic jokes with a group of male friends who know you and know your SO (and presumably know how your relationship with her actually is) land completely differently than the same jokes in an anonymous online forum.


Then, honey, I shall put the homelab in the bedroom!


I generally agree, I really do. However, I also feel that the entire article shares a very personal perspective of running a home lab - and that personal perspective probably includes having a wife who sometimes gets annoyed by blinky stuff in plain sight. And when we remove that personal perspective, we also lose... personality, which I desperately appreciate in a world where most things are hyper professional and polished to the point where it feels... soulless and bland.

I would have no problem reading an article which makes jokes about a husband being driven to insanity by a home server; I would have appreciated the diversity of perspective.


Yes, of course, there is a chance that's "just" the experience. But these remarks tend to fall into the always same sexist, normative stereotypes - usually the only occurrence of "woman" in the story. In any case, the author does know he isn't exactly the first one to make that joke... In the end his experience may be a self-fulfilling prophecy. Communication does not happen in a vacuum, how something could be perceived usually is considered on some level - the author may not have seen it as a problem... which kinda is the problem.

Honestly, I wouldn't want for anyone to act the way pro forma either, or think the author has done anything wrong, or even misogynistic, as others have suggested.

It isn't something wrong, it is unfortunate. I think these jokes are sad, because any author would have an imaginary reader in mind, someone who might find the joke funny. And I don't think that imaginary reader is a woman, here - and it shows. So, for any woman reading these "boy's club" guides and write-ups, they will always feel othered, always have to overcome these tiny, but omnipresent obstacles in the way of their (new & fragile) interests. (Radiolab's 'Alone enough' episode comes to mind[1])

I just wish people would introspect their motivations and intentions behind those remarks some. And especially when writing a guide, maybe consider if the take is worth discouraging even one girl's newly found interest in computers and blinky chaos, because her projected role as the opposition has just been reproduced sufficiently to stick. I wish authors would be aware of the problem of implicitly alienating some readers by these reproductions, so writing texts welcoming to everyone would be intrinsic motivation.

[1] https://radiolab.org/podcast/alone-enough


Yeah, the one size fits all is grating, for example my wife is not really into computers but she likes to see what I put around the house and just to ask questions and understand what I'm doing. She didn't have anyone to really ask questions growing up, and online wasn't so great so now we just talk and seeing a new box is just an occasion for discussion.


I agree, actually, but "wife-acceptance-factor"[0] (or; "wife approved") is useful terminology; I would love if there was some colloquially acceptable gender neutral term for that.

Some have suggested some truly cumbersome terminology to replace it which is unfortunate because you very quickly run into the same problem the vegans have: If your choices are too unpalatable then reasonable people (who don't care or share your values 100%) will avoid them.

You need to make the choice easier. (hence why a lot of Vegan focus right now is on having enjoyable alternative meals).

[0]: https://en.wikipedia.org/wiki/Wife_acceptance_factor

EDIT: I'm not going to argue this because if you're reading my comment you've already decided whether you agree or not and you will not be productive to talk to.

I will answer some common suggestions and why they're not reasonably sufficient and you can argue amongst yourselves about it. I'm not a sociologist or psychologist but I am human so take it with that lens.

* Partner as a replacement for "wife" is unfortunately more syllables making it just a smidge more annoying to speak, so people won't; but it's the closest to what is intended and mentally equivalent for most people. As a bonus it even encompasses the unmarried! Partner is also (commonly) said lovingly so it doesn't create distance in the mind of the speaker.

* Spouse is an uncommon enough word that it doesn't come to mind quickly enough; it also unfortunately comes across as unfamiliar (IE: not from a place of warm affection) and clinical.

A good replacement term should be something endearing because it's important to point out that this isn't a term of derision; the true meaning behind the phrase is that they love their partner and want them to be happy. It's just a boomer-humour type joke to hide behind "because I don't want a divorce" or "I fear my wife", which is part of the supposed humour -- so it needs to feel familial and affectionate.


The article suggests "Significant Other Acceptance Parameters" (SOAP), which I like better anyway since it involves parameters which seem targeted to the individual, rather than a broad factor targeted at the device.

The device hits all my SOAPs, vs the WAF is high.


Replace with "partner" or "spouse"? Why is that not good enough?


Happy spouse, happy house.


Cohappytant, coolhabitat.


It should really be something like family acceptance factor. Your spouse is usually not the only person living with you for long periods of time you'd ideally like to accomodate as a permanent co-user of shared spaces.

That said, this writer never says anything about the general idea being wife-approvable as far as I can tell, just that putting a homelab in the living room will lead to a divorce. That is already gender-neutral but at least so far also not true. My homelab is in my living room and I haven't divorced. I had to wallmount a very large media cabinet in the television nook and cut holes for USB fans into it, and use fanless mini-PCs rather than 500 decibel used servers, but you can't even tell it's there if I don't tell you.

The article you linked even says right up top this is widely thought to be a sexist term but also different than what the guy here means. The article here is about being considerate of how you use limited shared space but the Wiki is about design elements being attractive to people who care what something looks like instead of just what it does. Stereotypically, it would have been the woman in the heteronormative relationships that cared but certainly that isn't the case in my house. My wife loves gadgets and buys random crap all the time and leaves them laying around wherever is most convenient. I'm the one that cares deeply about neatness and my living space actually looking like a living space and not like I live in a data center.


I started my comment with an agreement to the sentiment you're expressing and the sentiment expressed by the parent.

The only thing I added was a caveat that there is a useful term I'd like to see a replacement for.


Why not make the replacement term: "even reasonable people wouldn't object"?


You'd be implying that your partner is unreasonable if they did object, which is a bit cold.

"Wife-Acceptance Factor" is also a spectrum, not a boolean.

What is acceptable to one partner may be unacceptable to another, understanding what is more generally acceptable than another solution is useful, because it's likely that you understand where your own partner is at mentally and how much of the home needs to be tidy and nice.


Partner. Spouse. This seems easy.

edit: "it's just a joke" - please re-read OP.

5 to 6 syllables is too many? Why aren't you saying WAF (PAF) if that's a concern :)


I'd probably say "would it be wife-approved?" or some other thing given the context of looking at a device that is hideous and not in line with my girlfriend's intended aesthetic of the home.

But an additional syllable is indeed enough friction here, and if you think it's not then that's fine: but it is.

Please understand me here, I'm not arguing or fighting with you: it's not something you can argue rationally, it's something subliminal that all humans will do and I'm trying to point it out. It's a form of lossy optimisation in our brains, we will always choose phrases that roll off the tongue better. This is the science of soundbites, and how they are weaponised in marketing for populist political agendas.


I hear you and that makes sense for speech, but let's come back to the fact that this thread begins with a guide put online for anyone interested in the subject to read. Ease of saying a phrase out loud has less importance for written communication and there's a difference between speaking about one's own specific situation and generalising that to tips for your entire audience.


I agree with you in principle, but in this case the author is describing their own situation. They do have a wife, I'm not sure they should be required to remove the gender of that actual person.


It's a "guide for beginners" offering advice to people in general, not just a post about their specific situation.

>Here’s a quick list of Pros vs. Cons I’ve compiled to get us thinking about all the possible home lab locations. Choose wisely.


I think, I don't know, that the modern day general understanding with that statement is "your partner finds this annoying."

I don't think it has anything to do with genders per se; it's an old adage with any hobby. The historical implications do point to a bias in the understanding of 'roles', but I'm more comfortable with the approach that instead of rephrasing old adages we become comfortable with the idea that our understanding of the specific adage has changed.

I think not changing the adage and instead focusing on the shift of understanding can also point to how much we grow as a society, and we can relish in our evolution of understanding as it pertains to historical beliefs.



