Rebuilding my homelab: Suffering as a service (xeiaso.net)
106 points by xena 5 months ago | 62 comments



My contrarian take is that eventually k8s will be recognized as the overcomplication it is, and better methods of managing fewer than 10,000 VMs will be researched and used.


People don’t use Kubernetes at home because it’s the easiest tool for the job.

They use it in home labs because it’s a safe and easy environment to learn, practice, and explore.

Most importantly: It’s a low-consequence environment. If you accidentally bring everything down at your homelab, the only person who suffers is you. You don’t get that degree of safety to explore and experiment when you’re using company resources.


Incorrect, sir. I am way more concerned about my wife and kids as end users than I am about any employees.


You're more concerned about your home Plex server going down than the service you run at work?


It's way cheaper and easier to swap jobs than to swap spouse and family. If I have to swap jobs, my old job isn't entitled to half my stuff plus a large chunk of my ongoing income. The spouse and family also have access to me at my most vulnerable, while I sleep. If you really think about it, my position is the obviously correct one.


Doubtful. But things like mail, documents, file storage, etc might be another story.

It depends on what the person in question is hosting, and how it relates to the life of the SO. There are a lot of options around; just take a look at the awesome-selfhosted repo on GitHub.


Homelab. Emphasis on lab. Sounds like you (also?) have a "home prod".


Isn’t the result of that research Kubernetes?

What do you expect about the new solution that will be better?

Genuine question, btw. While I see how Kubernetes can feel overcomplicated, it has always felt like a consequence of how complicated it is to run such a large number of workloads in a scalable and robust manner.


I switched my home lab to nomad, which I find much easier to wrangle, but we’ll see what happens with the IBM acquisition.


I really hope that is the case too, but for now Kubernetes sucked all the oxygen out of the room for everything else :(


Funny, I am a firm believer of the opposite: Kubernetes is the perfect level of abstraction for deploying applications.


This is an incredibly popular take, and anti-k8s sentiment like this gets rapidly upvoted almost every time.

The systemd hate has cooled a bit, but it too functions as a sizable attractor for disdain & accusation hurling. Let's look at one of my favorite excerpts from the article, on systemd:

> Fleet was glorious. It was what made me decide to actually learn how to use systemd in earnest. Before I had just been a "bloat bad so systemd bad" pleb, but once I really dug into the inner workings I ended up really liking it. Everything being composable units that let you build up to what you want instead of having to be an expert in all the ways shell script messes with you is just such a better place to operate from. Not to mention being able to restart multiple units with the same command, define ulimits, and easily create "oneshot" jobs. If you're a "systemd hater", please actually give it a chance before you decry it as "complicated bad lol". Shit's complicated because life is complicated.

Shit's complicated because life is complicated. In both cases, having encompassing ways to compose connectivity has created a stable base (from starting point to expert/advanced use) that allowed huge communities to bloom. Rather than every person being out there by themselves, the same tools work well for all users, and the same tools are practiced with the same conventions.

Overarching scope is key to commonality being possible. You could walk up to my computer and run `systemctl cat` on any service on it and quickly see how stuff was set up (especially on my computers, which make heavy use of environment variables where possible); before, every distro and, to a sizable degree, every single program was launched & configured differently, requiring plucking through init scripts to see how or whether the init script had been modified. But everything has a well-defined shape and form in systemd. A huge variety of capabilities for controlling launch characteristics, process isolation, ulimits, user/group privileges, and special tmp directories is provided out of the box in a way that means there's one man page to go to, instantly visible with every option detailed, so we don't have to go spelunking.
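To make that concrete, a unit along these lines (service name, binary, and env file invented for illustration) gives you oneshot semantics, ulimits, environment variables, and tmp/privilege isolation declaratively, and `systemctl cat wipe-cache.service` prints exactly this text back at you:

    [Unit]
    Description=Illustrative oneshot maintenance job

    [Service]
    Type=oneshot
    EnvironmentFile=/etc/wipe-cache/env
    ExecStart=/usr/local/bin/wipe-cache
    LimitNOFILE=65536
    PrivateTmp=yes
    DynamicUser=yes

    [Install]
    WantedBy=multi-user.target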

The Cloud Native paradigm that Kubernetes practices is a similar work of revelation, offering similarly batteries-included capabilities. Is it confusing having pods, replicasets, and services? Yes, perhaps at first. But it's unparalleled that one just POSTs the resources one wants to an API server and lets the system start & keep that running; this autonomic behavior is incredibly freeing, leaving control loops to do what humans have had to shepherd & maintain themselves for decades: a paradigm break turning human intent directly into consistent, running, managed systems.

The many abstractions/resource types are warranted; they are separate composable pieces that allow so much. Need to serve on a second port? Easy; a new Service, since the Service is separate from the Deployment. Why are there so many different types? Because computers are complex, because this is a model of what really is. Maybe we can reshuffle to get different views, but most of that complexity will need to stay around, perhaps in refactored shapes.
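To make the "POST your desired state" point concrete, this is roughly the whole declaration for a small web workload (names and image purely illustrative); `kubectl apply` it and the control loops keep it true, and a second port is just another Service object alongside, with the Deployment untouched:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.27
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
        - name: http
          port: 80
          targetPort: 80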

And like systemd, Kubernetes with its desired-state management and operators creates a highly visible, highly explorable system; any practitioner can walk up to any cluster, start gleaning tons of information from it, and easily see it run.

It's a wrong-headed view to think that simpler is better. We should start with essential complexity & figure out simultaneously a) how to leverage it and b) how to cut direct paths through our complex, capable systems. We gain more by permitting and enabling than by pruning. We gain more by being capable of working at both big and small scales than we gain by winnowing down / down-scoping our use cases.

The proof is in the pudding. Today there are hundreds of guides one can go through in an hour to set up & get started running some services on k3s. Today there are colossal communities of homelab operators sharing Helm charts & resources (ex: https://github.com/onedr0p/home-ops), the likes of which have vastly outclassed where we stood before. Being afraid of & shying away from complexity is a natural response, but I want people to show that they see the many underlying simplicities & conceptions we have gotten from kube that do make things vastly simpler than the wild-west, untamed world we came from, where there weren't unified patterns of API servers & operators handling different resources in an alike & consistent way. To conquer complexity you must understand it, and I think very few of those with a true view of Kubernetes complexity have the sense that there are massive opportunities for better, for simpler. To me, the mission, the goal, the plan should be to better manage & better package Kubernetes, to better onboard & help humans through it, to try to walk people into what these abstractions are for & shine lights on how they all mirror real things computers need to be doing.

(Technical note: Kubernetes typically runs zero VMs; it runs containers, the notable exceptions being snap-in OCI runtimes like Firecracker and Kata, which do host pods as VMs. Kubernetes relies on containers, which are far more optimizable; works like PuzzleFS and ComposeFS CSIs can snap in to allow vastly more memory- and storage-efficient filesystems to boot. So many wonderful pluggable/snappable layers; CNI for networking too.)


I once joined a project which had decided against Kubernetes years prior

For my entire stay there, half of the time was spent on reinventing the wheel, but worse.

There surely are lots of bloated and overly complex projects out there, but I'd say for what Kubernetes does, it's a very elegant solution to a very, very complex problem and not one of those.


This is the exact talking point I used to defend React on HN. Yes, you may not like it or even hate it, but the fact that it has some degree of industry-standardness makes React a good choice simply because you can hire for it easier than for your bespoke javascript framework.


No. Real suffering in the homelab is getting N 20-year-old servers and swapping parts between them to get N-M servers that work. I feel like the project will be successful if I get all the drives wiped, and I am within sight of that, although I discovered the wiping was going to be a process of triage: some drives did not spin up, and one drive took 14 hours to wipe whereas a normal drive would take about 30 minutes. My collaborator will use the bad drives for target practice with a black powder rifle that shoots round balls.

Noisy fans take the "home" out of the homelab; the machines are 64-bit Intel but top out at 4GB. The latest version of Ubuntu installs fine but the desktop struggles. I think I'm going to install desktop Ubuntu again just to see if I can watch YouTube with it, but the plan now is to install the server edition and give it to my collaborator to run an occasional Minecraft server, which might free up a (much more powerful) i3 machine to watch videos from my Jellyfin server on the TV downstairs, something the Xbox One oddly can't handle. (No patent licenses for codecs if it's a game console?)

At least I dug the old VGA-supporting monitors out of mothballs, so I'll be ready to play around with the RISC-V and eZ80 SBCs I have, which are, at the very least, a lot quieter.


That is retrocomputing, not homelabbing. Look into the sub-$200 Intel N100 based systems with 16GB RAM.


I recently bought 2 of these and they are EXCELLENT! I can leave them running 24/7 without worrying about how much electricity they're using. The performance, flexibility and reliability far exceed the Raspberry Pis that are confined to the cupboard, and they're probably a fair bit faster than my old desktop PCs that I rarely switch on any more.

I've gone uber-minimalist and only have NVMe drives attached via USB 3. One's connected via ethernet, the other has a wifi connection. Personally I don't need any more and I've retired my old servers for now.


> reliability far exceeds the Raspberry Pis

In which ways? Most commenters say the exact opposite, that cheap N100 mini PCs are less reliable than Raspberry Pis.

I'm just now trying to decide which way to go. The Raspberry Pi 5 is definitely much, much more interesting from the nerdy point of view, but would cost about the same as some cheap N100 for half the compute power. Though half the electricity usage too.


If you can get away with a 1 or 2 GB RAM version I'd say the Pi 5 is worth it, otherwise cheap mini PCs are better. I've found mini PCs to be more reliable than the Pis I've owned as well (including the 5), though a lot of that is mitigated if you go for an M.2 hat on the Pi. In general though, if you're thinking of using a Pi more like a regular computer, there isn't really anything special about it that makes it worthwhile. Well, beyond "I get to tinker more" if you particularly like assembling the thing, picking out the exact case, finding your favorite M.2 hat, etc. Even if you enjoy doing all that, by the time you're done you still end up with a PC that is worse than one you could just buy, though maybe with a claim that you saved $20 doing so.

Where I like the Pi is use cases that don't quite fit with a normal mini PC like IP KVM.


My use case is Home Assistant, Pi-hole and such things. I would go for the 8 GB model and the NVMe hat, ending up somewhere around 170-180€. So practically the same price as N100.

With RPi I do like the tinkering aspect, an actual community, and lower power consumption -- I'd prefer passive cooling but I can't see a way to do that with the NVMe hat.

With the N100 I would get double the CPU power, RAM and storage. But I don't actually need that extra power, and the concept of buying a cheap pre-built PC is just a lot less appealing than something more nerdy. So I'll probably end up getting the RPi for, err, shall we say, emotional reasons.


I think we need a different word for collecting a lot of old, underpowered computers and tinkering endlessly.

I understand the attraction of playing with old, cheap hardware. However, hardware has come so far that it’s easy to build a 16-core server with a lightly used AMD consumer chip and 64-128GB of RAM for under $1000. It will have more power and use far less energy than these clusters of old machines that I see people assembling.

> Noisy fans take the "home" out of the homelab;

Again, a completely unnecessary thing to suffer if the goal is a homelab. It's really easy to make a near-silent PC with modern parts and cooling that will outperform an entire rack of 20-year-old PCs. Even 10G switches that are quiet or fanless are common.

I get it. It can be fun. But I don’t think this is homelabbing.


So in summary, a former NixOS user now uses a preconfigured OS dedicated to running one single platform (k8s) and still suffers from having to tinker with everything.

It would have been fun or wise if she had gone back to Nix in the end.


I still use Nix to build Docker images; part of this is to see how bad the rest of the industry really is. It's slightly worse than I imagined it would be.
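For the curious, it's roughly this shape (package and names illustrative, not my actual build); `nix-build` produces a tarball that `docker load` ingests, with no Dockerfile anywhere:

    # default.nix -- sketch
    { pkgs ? import <nixpkgs> { } }:
    pkgs.dockerTools.buildLayeredImage {
      name = "hello";
      tag = "latest";
      contents = [ pkgs.hello ];
      config.Cmd = [ "${pkgs.hello}/bin/hello" ];
    }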


> a former NixOS user

Why did the author stop using NixOS? This is the first time I'm hearing about a veteran NixOS user giving up on it.


It mostly seems due to some drama going on in the Nix community rather than some technical reason.

https://xeiaso.net/blog/2024/much-ado-about-nothing/


I would call the situation "a complete failure of management" instead of "drama".


I used "drama" because I'm completely detached from the Nix community, and I have no idea what's going on over there :)


Can't believe I'm witnessing another node/io.js thing in 2024.


The CoreOS diversion was interesting to read. I've been daily driving CoreOS+i3 for the past year (I might be the only one in the world). I thought having a tiny immutable base OS would make the system easier to manage over time, but unfortunately that hasn't been the case. It's been an adventure but I'm ready to give up and switch to something more vanilla.


I had the same experience.

Immutable OSes are great when you have a team of people to manage the tooling and process complexity of deploying and managing them, end users who only want to use the software running on those servers (not the servers themselves), and servers that you're bringing up and down all the time. Then you actually reap the benefits of standardization and infrastructure as code.

But when it's just you, you want to interact directly with the OS, and it's just one device, it's just foot guns all day.


I keep running Fedora CoreOS on my home server. My biggest issue with it is that it is very cloud-oriented and doesn't seem to allow rerunning the provisioning config on an already existing machine. This turns the thing back into a stateful pet instead of a "one-cow cattle". Although I do very much like the rollback feature, which has allowed me to temporarily roll back an update a couple of times.


> doesn't seem to allow to rerun the provisioning config on an already existing machine

In theory, a generic existing machine could have been compromised by malware, in which case the configuration may not match the previously provisioned version.

With OS launch integrity to guarantee absence of tampering, and prove that current=expected config+binaries, it could be feasible to rerun provisioning config.


I've wondered if that would be possible for a while, but I didn't imagine anyone would actually do that. What are the upsides and downsides of doing this? I'd love to read a writeup of how you did that and what you'll miss when you move away.


I'm no blogger but here's a quick writeup.

# Setup

Setup was a process, no clicking through a nice UI for this one. I had to set up a web server on a second machine to serve the ignition yaml to the primary machine.

It was a very manual process despite CoreOS's promise of automation. There were many issues like https://github.com/coreos/fedora-coreos-tracker/issues/155 where the things I wanted to configure were just not configurable. I had some well-rehearsed post-setup steps to rename the default user from "core" to my name, set keyboard layout, move system dirs to a set of manually-created btrfs subvolumes, etc.

# Usage

The desktop and GUI worked flawlessly. All I had to do was install i3 and lightdm via rpm-ostree. Zero issues, including light 2D gaming like Terraria.

Audio was a pain. My speakers are fine. My mic worked out of the box in ALSA, but Pipewire didn't detect it for some reason, so I had to write some manual pipewire config to add it manually. Also, I had to learn what ALSA and Pipewire are...

I ran just about everything, including GUI apps, in distrobox/arch containers. This was very nice: Arch breaks itself during updates somewhat often and when that happens I can just blow the container away and install pkglist.txt and be back in 5 minutes. I get the benefits of Arch (super fast updates) without the downsides (upgrade brittleness). I plan on keeping distrobox even once I leave.
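The reset loop is roughly this (container name from my setup; flags from memory, so check `distrobox --help` on your version):

    distrobox rm --force arch
    distrobox create --name arch --image docker.io/library/archlinux:latest
    distrobox enter arch
    # then, inside the container:
    sudo pacman -S --needed - < pkglist.txt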

# Updates

I disabled Zincati (the unattended update service) and instead I ran `rpm-ostree upgrade` before my weekly reboots.

This is the reason I'm leaving. This was supposed to be the smoothest part of CoreOS, but those upgrades failed several times in the past year. To CoreOS's credit my system was never unbootable, but when the upgrades failed I had to do surgery using the unfamiliar rpm-ostree and its lower level ostree to get the system updating again. As of now it's broken yet again and I'm falling behind on updates. I could solve this, I've done it before! But I've had enough. I'm shuffling files to my NAS right now and preparing to hop distros. If anyone wants to try to sell me on NixOS, now's the time ;)


This was a tremendous write-up. I appreciate the detail including your ingressd setup. I agree, though, that this is a pain. It's for this reason that we made Cloud Seeder [1] so you can have hands-free setup of your homelab and IPv6rs for painless ingress [2] </shameless>

[1] https://github.com/ipv6rslimited/cloudseeder

[2] https://ipv6.rs


I didn't know about IPv6rs, but thanks for supporting raw WireGuard configs[1]! Being able to use just WireGuard without having to install any additional daemons or wrappers is always appreciated.

If only I had known about this service before AWS launched its eu-south-2 region (Spain), I would have seriously considered it. But unfortunately I already have my stuff all set up and working, tunneling through an EC2 instance.

Still, bookmarked. I won't say I'll be using it in the future, but I'll definitely keep it in mind for the next time I need to change/update my homeserver stuff.

[1]: https://ipv6.rs/raw


I'd set up IPv6 on there, but the problem is that I use Flannel (which is IPv4 only) and my 8 gigabit fiber ISP only gives me IPv4 connectivity. I'll look into more details, but I've slightly given up on IPv6 for now. Maybe I'll set up Calico or something, but IPv6 seems to have been made artificially difficult by everything in the stack. I hate it.


> 8 gigabit fiber ISP only gives me IPv4 connectivity.

Just like my ISP (but not 8 gbit!). Luckily, IPv6rs actually tunnels through IPv4 (or 6) and provides an IPv6 address. You don't need one to start!

I don't know the ins and outs of Flannel, but maybe you could set up IPv4 internally and use IPv6 (and an IPv4 reverse proxy) for the public internet?

I agree, though, IPv6 on its own can be hard but thanks to WireGuard, tayga (NAT64), and nginx/caddy/etc. (reverse proxy), it's definitely quite usable!


Can you email me at hackernews@xeserv.us? I'd like to hear more.


I'm on it! Looking forward to the dialogue!


Can you add AAAA records to cdn.xeiaso.net as well?

It seems to have the same IPv4 from fly.io as the main domain, but you forgot to add it to the CDN subdomain.


Uhhhh, I thought I did that! I'm gonna go fix that, sorry!


Did that fix it?


> What's not fine is how you prevent Ansible from running the same command over and over. You need to make a folder full of empty semaphore files that get touched when the command runs...

> One of my patrons pointed out that I need to use Ansible conditionals in order to prevent these same commands from running over and over.

Yes-ish. As I'm pretty sure OP figured out due to the pre-made roles comment, there exists a `community.general.dnf_config_manager` module that would handle this specific issue. A general rule in Ansible, as ansible-lint [0] will tell you, is that if you're using `ansible.builtin.{command, shell}`, there's a decent chance you're Doing It Wrong (TM).
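For the repo-enabling case specifically, it boils down to something like this (repo ID invented; option names as of recent community.general releases, so double-check against your installed version):

    - name: Enable the CRB repository, idempotently
      community.general.dnf_config_manager:
        name: crb
        state: enabled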

The biggest problem I have with Ansible (and I say this as someone who uses it extensively with Packer to build VM templates for Proxmox in my homelab) is that, like its underlying Python, there are a dozen ways to do anything. For example, if you wanted to perform a set of tasks depending on arbitrary conditions, you could:

0. Use `when` and rely on things like named host groups

1. Use `when` and rely on manually handling state throughout plays

2. Use handlers

3. Break down the tasks into logically-scoped roles, and manually call roles

4. Do #3, but rely on Ansible's role dependencies instead

5. Use semaphore files / `creates` as OP did

6. Probably something else

[0]: https://github.com/ansible/ansible-lint


It's true, the flexibility can be both a boon and a curse. There should be a little more "best practices" info out there that's not too prescriptive. It doesn't help that a lot of pre-made roles on Ansible Galaxy vary widely in style and quality. Certainly no one wants to inherit Ansible code that's nothing but shell and command modules, but sometimes those are crucial gap fillers when an idempotent module isn't available for the task or is missing needed functionality. And even then, specialized (as opposed to general-use) modules are only idempotent within themselves; you still sometimes need to check the state of things, stick it in a registered variable, and pass it between tasks with conditionals if the tasks depend on each other or require a specific ordering.

I think a good generalized "best practice" is to keep those inter-task dependencies and conditionals to a minimum though. Small chunks or no chunks at all. It's always better to find a way to just run tasks independently with no knowledge of each other. The block module with "rescue" is useful for failing out a host gracefully if there's a bundle of finicky inter-dependent tasks that just have to run together though.
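For reference, that last pattern is just this (task contents invented):

    - block:
        - name: Finicky step one
          ansible.builtin.command: /usr/local/bin/step-one
        - name: Finicky step two that needs step one
          ansible.builtin.command: /usr/local/bin/step-two
      rescue:
        - name: Drop this host from the play instead of failing the whole run
          ansible.builtin.meta: end_host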


> I ran a poll on Mastodon to see what people wanted me to do. The results were overwhelmingly in favor of Rocky Linux. As an online "content creator", who am I to not give the people what they want?

Truman would be proud!

> the NAS. It has all of our media and backups on it. It runs Plex and a few other services, mostly managed by docker compose.

Does the NAS run NixOS?


Yes, it runs NixOS and I am too cowardly to bother changing that any time soon. It's got everything on a giant ZFS array and most distros have poor ZFS support.


> and most distros have poor ZFS support.

This is the complete opposite of my experience. For most "server"-style distros (i.e. not Arch/Arch derivatives) you just install the zfs modules and forget about it. Ubuntu even has them pre-baked into its kernel.

Arch gets complicated because it's a rolling release, so the kernel versions supported by the zfs module get out of sync with the latest kernel version, which can prevent system updates due to unresolvable requirements. That can go on for days/weeks as they play catch-up with each other, but you can just install the LTS kernel and mostly avoid the issue.

For (mostly) every other distro they are a lot more coordinated in their releases so the zfs module will work with the newer kernel without any issues.

Other than that, I can't think of what poor support even means; once it's installed, it works. You can even have ZFS on root for most distros.
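On Ubuntu, "install the zfs modules and forget about it" really is about this much work (pool name illustrative; Fedora/RHEL go through the OpenZFS repo instead):

    sudo apt install zfsutils-linux
    sudo zpool import tank    # an existing pool just shows up
    zfs list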


After running NixOS for 6+ months on my homelab and also re-using part of the configuration on my work machine, I feel the same way as Xe each time I'm interacting with a non-declarative OS. There's just no simple way to share configuration between machines or to automagically clean things up after making changes.
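For contrast, sharing config between machines on NixOS is roughly this shape (hostnames and nixpkgs branch invented), with every host importing the same common module:

    # flake.nix -- sketch
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
      outputs = { self, nixpkgs }: {
        nixosConfigurations = {
          homelab = nixpkgs.lib.nixosSystem {
            system = "x86_64-linux";
            modules = [ ./common.nix ./hosts/homelab.nix ];
          };
          workstation = nixpkgs.lib.nixosSystem {
            system = "x86_64-linux";
            modules = [ ./common.nix ./hosts/workstation.nix ];
          };
        };
      };
    }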

Ansible feels like a thin layer of ice upon a deep ocean of the OS state, hiding in a multitude of non-tracked configuration files. It is simply not enough to build a layer of YAML around an OS which is imperative by nature.

Unfortunately, I can see the downsides of NixOS as well. Being radically different from what we usually expect in a Linux distribution, adopting it in an already established environment will no doubt be hard. The steep learning curve, the idiosyncrasies of the Nix language (although after reading parts of the Nix thesis [1], I find it much more understandable and deeply thought out), just explaining Nix to people who don't have much experience with the functional way of doing things, let alone taking the functional approach all the way to defining an entire operating system: all of this sounds like a tough barrier to cross.

And yet, the desire to keep things reproducible and declarative (not to mention going back in time) persists once you've had the taste of NixOS.

[1] https://edolstra.github.io/pubs/phd-thesis.pdf


I’m picking this nit:

> When is a build reproducible?

> “A build is reproducible if given the same source code, build environment and build instructions, any party can recreate bit-by-bit identical copies of all specified artifacts.”

> Neither Nix or NixOS gives you these guarantees.

This really makes me question whether all of the quirkiness of Nix is worth it if it can’t actually “pay off” with true reproducibility.

[1] “NixOS is not reproducible” (2022) https://linderud.dev/blog/nixos-is-not-reproducible/

[2] “non reproducible issues in NixOS” https://github.com/orgs/NixOS/projects/30


Nonetheless, Nix/NixOS is more reproducible than the majority of other build systems and distros out of the box. But yes, if this is a hard requirement, you’ll be better off with a different choice.

Keep in mind that this is but one of the features NixOS provides. I would say the config-driven approach to OS management is extremely powerful.

As an example, I could bring up my homelab’s external reverse proxy on a generic VPS in a few minutes over SSH using a single command. This includes SSH keys, Telegraf, Nginx with LetsEncrypt certs, and automatic OS upgrades. No Ansible needed :)
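The invocation is roughly this (flake attribute and address invented):

    nix run github:nix-community/nixos-anywhere -- --flake .#edge-proxy root@203.0.113.10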

See: https://github.com/nix-community/nixos-anywhere


It isn't worth it; if you care about freedom and configurability, Gentoo exists.

>reproducibility

I would like to see people reproduce software that embeds a build timestamp into the binary.


Does Guix offer guarantees of build reproducibility?


For populating a homelab Kubernetes cluster, onedr0p has a very nice Flux template: https://github.com/onedr0p/cluster-template


Well, a translation can be: a NixOS user tried something from the dark modern age of IT and found out how much it sucks. Nothing extraordinary; unfortunately most people fail to even understand that Big IT has an interest in crappy solutions because they enable commerce while good ones do not.

Back then was the full-stack-virtualization era, where a large flock of biped sheep said that VMware "is the future, today", of course on x86 with the incredible overhead it has. For some time the psycho-PR drug worked, then paravirtualization started to become known, with Docker first and k8s after. Again the aforementioned flock ran after the mania, again ignoring the fact that it's nonsense.

Ladies and gentlemen, this solution has only ONE purpose: to sell pre-built crap to those who do not know how to do the work themselves, mostly the Dev part of DevOps, who on average have issues even deploying their own private personal desktop at home, never mind a homelab. They do not care about the tremendous overhead ("hw is relatively cheap and I'm rich enough to buy more"), and they consider the abysmal level of complexity normal. So we see in-production Docker images built by no one knows who, perhaps with his/her SSH keys authorized ("oops, I forgot to remove them") and so on.

NixOS unfortunately has a terrible language, and Guix System is unfortunately mostly focused on HPC/academic usage instead of homelab/desktop usage, but that has been the "modern IaC" for years. They are incredibly simpler and lighter than the modern crap pushed to sell pre-made images, "cloud services" of various kinds and so on. Yes, Hetzner or OVH or Amazon need to be able to sell, so they need these monsters, but users need to AVOID them and re-learn how to use and deploy OSes and apps, perhaps in a less archaic way than the '80s style...


>Big of IT have interest in crappy solutions because they allow commerce while good ones do not.

This is really true. I don't want to go into specifics, but I've witnessed a standards committee advertising how much "vendor added value" their product allows (which is fancy language for the standard being incomplete and unusable without extensions). Oh, and don't forget arbitrary limits on capabilities for no good reason, so that in a few years marketing teams can release a new version with a fancy nickname to keep the hype running.


The big issue is that we can't go much further that way. That's why, for instance, Chinese industries advance while we regress. Interoperability and diversity are keys to success; cutting them means cutting the branch we are sitting on...

These days many developers do not even know how to deploy their apps behind some third-party API, finding a sysadmin is extremely hard, and most are just "homelab guys with a bit of experience". How can we build something with these holes under our own feet?

A stupid example: photovoltaics are now relatively popular, and it's clear the sole reasonable use we have so far is self-consumption. We also have LFP storage, still high-priced (compared to China) but at a price tag sufficient to buy some storage at home, like a giant UPS for a house; those are actually 400V batteries, most common inverters recharge directly from their MPPTs, and the very same batteries are in most BEVs. Well, NO DAMN SINGLE VENDOR offers DC-to-DC direct charging for cars; apparently nobody has even thought of that. We even have a standard, https://www.iso.org/standard/77845.html, but nobody seems to have implemented it commercially. We have a significant set of IoT appliances; only very few offer even an open standard protocol for integration, and most of those offer only Modbus, something from the '70s, nice for certain uses even today but way too limited for many other possible uses. MQTT is a more complicated option, still valid, but very few implement it. Two more easy open options exist (Kafka, Matter) but nobody seems to implement them. We have a gazillion VoIP solutions and almost none are really simple and ready for anyone to deploy. Mumble/Murmur are the easiest for voice, but they are just chat rooms with no calling ability; GNU SIPWitch is a lightweight SIP alternative, but it's far from comfy for SOHO usage, and Asterisk/Yate are simply too complex for most users. Similarly, the first voice + screen sharing demo was in 1968, "The Mother of All Demos", and we still lack a broadly comfy screen sharing + voice solution; we even lack global per-host IPv6 to ease any of this.

Long story short, we waste a gazillion resources maintaining immense piles of crap, following the classic https://xkcd.com/2347/, creating fragile monsters and struggling to move past this sorry state.


Good place to tell people about CKD8S. It acts as an IaC tool translating TypeScript to Kubernetes YAML, greatly simplifying working with K8s. Developers shouldn't be writing YAML.


Hot take: infrastructure shouldn't be created imperatively; you wind up with fun surprises too often.

Hotter take: devs shouldn't be running infra at all, DevOps was a mistake. Return to specialized roles.


You don't have to use CDK8s in an imperative way. The benefit is you get _really_ good typing, which makes writing infra much easier than YAML.
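A minimal chart looks roughly like this (using the API objects generated by `cdk8s import`; names and image illustrative), and `cdk8s synth` turns it into plain Kubernetes YAML under dist/:

    import { App, Chart } from 'cdk8s';
    // KubeDeployment comes from the typed bindings generated by `cdk8s import`
    import { KubeDeployment } from './imports/k8s';

    const app = new App();
    const chart = new Chart(app, 'homelab');

    new KubeDeployment(chart, 'web', {
      spec: {
        replicas: 1,
        selector: { matchLabels: { app: 'web' } },
        template: {
          metadata: { labels: { app: 'web' } },
          spec: {
            containers: [{ name: 'web', image: 'nginx', ports: [{ containerPort: 80 }] }],
          },
        },
      },
    });

    app.synth(); // writes dist/homelab.k8s.yaml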


cdk8s*

The project, afaik, is abandoned, but it does still work quite well. I use it for my homelab: https://github.com/shepherdjerred/servers/tree/main/cdk8s


I really enjoy reading about what people do in their homelabs. The homelab has turned into a platform that encourages actual 'doing' in contrast to just 'thinking about it'. When I am interviewing to hire a DevOps or Systems Engineering person, I love it when they have an example of how they pushed their knowledge outside of their core job. Just because your company isn't doing something doesn't need to block you from trying it.

There are some really neat things going on in the micro-homelab space as well: using very small machines with a very low power footprint opens up lots of possibilities. The N100 price/power point has opened up more options alongside the Raspberry Pi and its friends. So cool!

Then there is this guy: https://youtu.be/-b3t37SIyBs

What does that nutjob run? Oh wait, I know! Keep on lab'ing and learning!



