
I enjoy reading about homelabs that aren't using kubernetes or some other scheduler and how they're doing it/what the full stack looks like. This is just another "how I installed kubernetes" thing only without any real scale behind it. There are other ways to deploy and run things, and it takes people toying around with alternatives for something unique to spring up.



My "homelab" is just debian stable, with a moratorium against containerization. Systemd keeps the services going. Updates are manual. It's a bit incredible how clean and easy such a set-up can be. Like there's almost nothing to say. It's extremely reliable and just keeps ticking.

I actually used to have a bunch of virtualization and kubernetes and crap, but got rid of it because it ate literally half the systems' resources, in a death-by-a-thousand-containers kind of way. There was also a lot of jank; stuff was constantly breaking and I was always trying to piece together why from half a dozen out-of-date tutorials. Felt like herding cats.


I've transitioned from a single server running services on baremetal Debian, to running services with Docker on Debian, to running services with Docker on Debian VMs under Proxmox (aka Debian with a repo), to running services on a three-node Kubernetes cluster using k3os (I was using Talos, but it doesn't have Longhorn support yet, and Rook/Ceph was a nightmare).

The baremetal server was the easiest to set up, obviously, but the hardest to maintain. Where did I put that random config? Wait, why did this break when I upgraded X? Switching to Docker (and later Compose) wasn't that difficult, and made maintaining services much easier. Going to k8s has been challenging, mostly because of how I'm doing it: a control plane and worker node on each physical node in Proxmox. Writing the YAML is, well, YAML.

I'm mostly looking forward to not having to keep my VMs updated. I automated a lot of that (VMs are Packer templates) with Ansible, but it's still annoying. Upgrading k8s nodes is a lot easier, IMO.


Yeah, I think I missed the whole web-complexity bus and am probably out of touch, but I still don't get the use case for docker and containers and kubernetes and orchestration and all that stuff, just for a simple home setup. I serve a tiny web site, email, backups, a NAS and a few other internet services for my family, and my "stack" is vanilla Debian Stable.

Maybe I don’t know what I don’t know, but my setup works for me and I don’t really have any problems maintaining it so I figure why add all the complexity?

It always feels weird to see threads and threads of people talking about dozens of software programs I’ve never even heard of, let alone used. Maybe I’m living in the past but to me a “stack” is: OS, server, database, application. Like LAMP. Wonder when this changed!

It makes me curious about what kinds of stuff people do in their home networks that I never even considered doing.


"This is just another "how I installed kubernetes" thing only without any real scale behind it."

It is just a learning project; for me, K8S didn't 'click' until I tried to configure it myself on a few machines.

There is something about doing it on real, physical things that aids learning. You could read all the chemistry textbooks in the world, but until you actually try it yourself it's not quite the same.

I think it's important to keep your 'precious files/services' and your experiments separate.


> but I still don't get the use case for docker and containers and kubernetes and orchestration

For docker there's a simple motivating case: some services are difficult to configure securely with minimal permissions, and having a standard docker image provided by people who know what they're doing would be a big net win for security on the internet. There are a lot of poorly configured and insecure http servers out there. Think about how many vulnerable http servers are running on routers.
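To make that concrete, here is a rough Compose sketch of the kind of hardening an upstream-maintained image makes cheap; the image tag, port, and capability list below are illustrative examples, not a recommendation from this thread:

```yaml
# Illustrative only: run an upstream-maintained nginx image with a read-only
# root filesystem and a minimal capability set. Tags and ports are examples.
services:
  web:
    image: nginx:1.25-alpine        # image maintained upstream, pinned to a version
    ports:
      - "8080:80"                   # only this port is exposed to the host
    read_only: true                 # container filesystem is immutable
    tmpfs:
      - /var/cache/nginx            # the few paths nginx actually needs to write
      - /var/run
    cap_drop:
      - ALL                         # start from zero capabilities...
    cap_add:
      - CHOWN                       # ...and add back only what nginx needs
      - SETUID
      - SETGID
      - NET_BIND_SERVICE
    restart: unless-stopped
```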


I would argue that poorly configured docker images are a far bigger issue than insecure http servers: https://www.infoq.com/news/2020/12/dockerhub-image-vulnerabi... Security wasn't the motivation imho, but being able to package your app with all its dependencies, including OS libs, and deliver it extremely easily sure was.


> I would argue that poorly configured docker images are a far bigger issue than insecure http servers

I think the number of router vulnerabilities alone refutes this argument. Yes, security wasn't the original motivation, but it could be one good motivation.


It doesn't make sense, but often people who set up K8s at home do it for educational reasons, or just to have a playground so they're more capable of administering systems like that at work.


Sure, using K8S in your learning environment because you're managing 800 different services in your day job is fine. Personally, I have my learning environment as part of my day job though.

At home I just want things to work, hence a nice, simple LAMP stack.


Agreed. I don't even want Kubernetes at work.


I can see the benefits of something like ansible if you have a command-and-control environment where you deploy 300 machines from a central database. It still allows you to destroy 300 machines with one single misconfiguration or mishandled error though, ouch. If you have a federated environment where many actors all manage their own machines, though, I don't see the point.

The number of times I have to do something to the 160 machines I'm responsible for is about 3 times a year. Clusterssh does the job in 10 minutes, and inevitably shines a light on some unexpected error somewhere. If I was doing it on a weekly basis though, using ansible would be beneficial.

Personally I don't like ansible's way of throwing everything into a file in /tmp and then running it as root. Auth logs get sent to my central syslog servers, so I've got a record of what happened. With ansible I'm never quite sure, and I'd have to look in two different places for logs.

Instead we just have a lightweight phone-home agent which reports processes/diskspace/mounts/interfaces/hostname/ownership etc., so you can identify who owns a device on your network and what it's responsible for without consulting out-of-date documentation that relies on humans to do things.

Kubernetes, I guess, is useful if you're building a cluster of 100 machines scaling and running the same service because you're handling a million users a day. Most companies don't do that; most services don't need that.

Use the right tool for the right job, and don't assume the stuff google uses is the right tool (unless you're building a lab because you want experience to get a job working for google; if you tried that shit on my network you'd be starting your job search far earlier).


I want multiple conflicting versions of Elasticsearch on the same host. My OS packages are built to support a single deployment. I could manually set up users, config files, systemd services, etc. for the second deployment, or I could just run them in containers, which is the lower-effort option.
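For example, a hypothetical Compose sketch of that; the versions, ports, and volume names are placeholders, not the poster's actual setup:

```yaml
# Hypothetical: two Elasticsearch versions on one host, each with its own
# data volume and host port, instead of fighting the distro packaging.
services:
  es7:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
    volumes:
      - es7-data:/usr/share/elasticsearch/data
  es8:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.12.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false     # demo only; leave security on for real use
    ports:
      - "9201:9200"                      # second version on a different host port
    volumes:
      - es8-data:/usr/share/elasticsearch/data
volumes:
  es7-data:
  es8-data:
```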


> I want multiple conflicting versions of Elasticsearch on the same host

Why do you actually want this? Even having one ES is a bit of a smell that you're probably over-engineering things.


Dependencies of software I didn't write (e.g. Graylog)


This is what I'm getting at. That is enormous overkill for a single server, and exactly the problem with kubernetes: it drives so much complexity that you kind of need this type of heavy-duty, enterprise-grade solution even in an area where you really shouldn't need it.

If it's just a single server, you're fine with logrotate and grep.


I like practicing with enterprise tools at home; it makes me feel more comfortable using them at work, where they're needed.


So first of all, I have to clarify that I just use bare podman containers deployed by ansible and not a whole k8s setup, so I'm defending containers and not k8s. But anyway:

Who said graylog is only collecting logs from that single server? That server is by far the biggest, and hence why I run multiple services with big dependencies on it, but I have that server, two cloud servers (one running my personal web page and other publicly accessible services, one running a couple of game servers), plus a couple of Raspberry Pis.

Even if I didn't have all that, a web interface where I can configure it to email me on SSH logins, or to send mobile notifications about whether my offsite backups are (or aren't) still happening for each device, would still be a nice feature, even if you personally feel it's too complex or that you could write a cron job for each of those things.


I've used everything under the sun and for homelab use IMO nothing beats using docker-compose and salt.

1. Unless you have crazy dynamic scaling needs, there's literally no point in using k8s or Nomad.

2. Unless you want RBAC and fine-tuned ACLs, there's literally no point in using k8s or Nomad.
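For reference, the salt side of this can be as small as one state that ships a Compose file and restarts the stack when it changes; the paths and state IDs below are made up for illustration:

```yaml
# Hypothetical Salt state (e.g. /srv/salt/media/init.sls); paths are examples.
media-compose-file:
  file.managed:
    - name: /opt/stacks/media/docker-compose.yml
    - source: salt://media/files/docker-compose.yml
    - makedirs: True

media-stack-up:
  cmd.run:
    - name: docker compose -f /opt/stacks/media/docker-compose.yml up -d
    - onchanges:
      - file: media-compose-file
```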

I hate this idea that you need a Ceph cluster and a 3 node HA cluster to have a home lab.

Not saying don't do it because you want to play/test/learn any of those platforms; just that if you are going to run a service, maybe consider whether you are doing it to learn a task orchestration platform or just want to run XYZ reliably.


I was settled on Docker-Compose and Ansible for a long time, and while it was indeed quite easy to use once everything was tweaked correctly, what finally got me to move to k8s was HA. Having a single node with nginx, DNS resolution and the Unifi controller running on it meant that if I ever took the node down for something, everything broke. Plex being down because I wanted to play with something unrelated wasn't ideal.

Agree that Ceph is ridiculously overkill though, I tried. Longhorn is much easier and perfectly fine for distributed storage.


Unifi controller isn't a dependency for your network to continue working. Config is distributed to the devices themselves, the controller is just a UI and collector of data.


I really like nomad for a single node homelab, it's super easy to maintain and deploy stuff.


I also run everything with docker stack(s) currently. Not using salt and haven’t heard of it. Why would I want salt and not ansible or something similar?


Salt (https://saltproject.io/) was an almost-winner around the time that Ansible blew up. It's become pretty irrelevant nowadays with terraform and kubernetes around, but it's still a very, very good tool.

I'd suggest if you already know Ansible then don't change. If you have no idea which is which then try them both and let us know :)


I used saltstack at a decent scale.

One reason I used it was that the agent (called a "minion") initiates the connection towards the master, which can be handy if you're behind NAT. It was interesting to us because incoming connections are easier to manage at scale than outgoing ones.

Another reason was that the windows support was much more mature (though not perfect) and our environment was mostly Windows servers.

That said, installing the minion agent was easy, much easier than enabling winrm.

If you have sunk significant time into ansible I wouldn't recommend switching, but it's definitely not dead as per the sibling comment. I personally found it more enjoyable and easier to work with once I understood the DSL pattern and added a few custom modules; it's very simple underneath.


Personally I don’t use Kubernetes for the scalability, but because I haven’t seen anything else remotely maintainable (and yes, Kubernetes still has lots of room for improvement).

I don’t want to build a good chunk of Kubernetes myself just to host my software. It’s nice to be able to Google for solutions to my problems or to ask on various forums/chat rooms (or in many cases, not have the problems in the first place because someone else committed the fixes to Kubernetes long ago).


I run mine using HashiStack (Nomad, Vault, Consul), highly recommend going that route over Kubernetes which I have also used.


I've come from bare metal, to virtualisation, to Cloud. I then went from AutoScaling Groups and AMIs to K8s and Docker Images. Recently I went from K8s to Nomad+Consul and Docker Images.

Now I'm building a small platform to host a business I've started and I'm going back to ASGs with simple EC2 Instances.

There's really no need for the features Nomad/K8s offer at almost any scale outside of a few big companies.


It depends. Orchestrators like Nomad can give you a lot of leverage as a small team. It's not very difficult to manage and can give you lots of redundancy, control, and a path to scalability while following best practices, so you don't have to think about it. There are lots of ways to do this, and not every app needs it, but as you begin to get traction you have a choice of either outsourcing your devops or bringing it in-house, which happens way before you are even a big company. Nomad solves the latter problem.

Also, I'm curious why you didn't just end up sticking with Nomad now that you know it.


Orchestration engines can give you a lot of leverage if you run a managed solution, because all you're doing there is deciding on the size you need and providing a credit card number. This is absolutely a great way of taking your DevOps to the next level with little effort.

If you're going DIY, however, then you're not doing yourself any favours by starting out with an orchestration engine.

In the case of Nomad you'll also need Consul for it to be truly useful. That means six EC2 Instances (for example) for a truly production-grade system: 3x for Nomad, 3x for Consul. Of course you need mTLS as well, so you may as well throw Vault into the mix too... that's another three EC2 Instances to support that cluster. Now you're 9x EC2 Instances into the DIY solution, all you have is an orchestration engine, and you've not even started on the workers yet. The cost so far is USD$30,416.04 for the reference architecture for all nine nodes: https://learn.hashicorp.com/tutorials/consul/reference-archi...

On the other hand, if you keep it simple and just work with AMIs you bake yourself when you need to release something new, you need two EC2 Instances. The cost of the compute resources for this for 12 months: USD$330.

There's no raft consensus which needs three nodes to be stable (but a single AZ failure can result in split brain, so really you need five AZs), no new technologies to understand, no new vendor to work with or pay capital to, nothing more to support.

These things - these orchestration engines - have their place, and that place is outside of 99.9% of businesses (large or small).

As for moving away from Nomad: too expensive and too complex. I ran Consul, Nomad and Vault on the same master instances (which is a risk and against best practice) without mTLS (just TLS) and it took a week to get it all running. None of that includes monitoring, backups, auto-scaling, etc.

I believe the work involved with going DIY on the orchestration front is not worth the benefits in the short to medium term. In the long term you'll likely outgrow your own simpler solution, at which point you'll likely have the capital to switch to a managed service anyway /shrug

It's all relative, after all.


There's no need to go truly production grade initially. I run some projects in production on single node deployments (with Nomad, Vault, and Consul all on one node, i.e. bootstrap_expect = 1) and it's solid. The hardware specs required are no different than if you were to just use systemd to run everything.

The beauty of the HashiStack is that you don't need to go the full monty. There's also no need to use ACLs, which I agree add a lot of unnecessary complexity if you're only at the MVP phase or doing things solo/in a small team. And you can run everything on a Digital Ocean $10/mo droplet -- it's very lightweight. I even have one HashiStack deployed on a Raspberry Pi running Home Assistant and other ancillary wares, and it works great. But there's always the path to building a proper cluster if you need the redundancy.

I'm very bullish on HashiCorp. They can do a much better job though of communicating that not every deployment needs to be enterprise-grade and offer better ways of bootstrapping the type of cluster that fits your needs. I think a lot of that messaging gets lost due to their ongoing war with Kubernetes and what large companies are looking for.


I actually totally agree with your approach, but not the technology stack choice.

I think if you're at the MVP/solo-team stage, then a single server that you push the app to with Ansible/Puppet Bolt, run via systemd, and then monitor, is even simpler, easier and "cheaper" (in time, which is money) than having to learn and use the HashiStack.
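Roughly, something like this; the app name, paths, and unit template are made-up placeholders, not an actual project:

```yaml
# Rough sketch of "push a binary, run it under systemd" with Ansible.
# "myapp" and the paths are placeholders; myapp.service.j2 is assumed to exist.
- hosts: app
  become: true
  tasks:
    - name: Copy the application binary
      ansible.builtin.copy:
        src: dist/myapp
        dest: /usr/local/bin/myapp
        mode: "0755"
      notify: Restart myapp

    - name: Install the systemd unit
      ansible.builtin.template:
        src: myapp.service.j2
        dest: /etc/systemd/system/myapp.service
      notify: Restart myapp

    - name: Enable and start the service
      ansible.builtin.systemd:
        name: myapp
        state: started
        enabled: true
        daemon_reload: true

  handlers:
    - name: Restart myapp
      ansible.builtin.systemd:
        name: myapp
        state: restarted
        daemon_reload: true
```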

My app compiles into a single binary thanks to Go's binary packing feature. Does deploying a single binary reeeeaaallly need the entire HashiStack? Seriously? Do I need TWO operating systems (the host's and the container's) to run a binary?

Honestly I just think people over think this stuff and are easily swayed by cool features.


But how do you provision the baremetal?


What do you use for storage?


If it's images or files, then something like S3. If it's a database, then I mount it locally on the host.


What do you do if the host goes down?


Nothing happens to the data -- it's still intact when the host comes back up. Happy to clarify further though if that's not what you meant.


Ah, so your systems are down until you fix a node, and if the drive(s) fail you have a dead node and you're stuck restoring from backups.

I guess that's fine, I'm just used to having labs to test how I can avoid single instance failures. :)


The path to redundancy is there if you need it but it's not something you have to go with from the start. You can also run a database with replicas to handle any drive failures, etc. I have been running a production-grade system with thousands of customers like this for a few years.


> homelabs that aren't using kubernetes or some other scheduler and how they're doing it/what the full stack looks like.

I've had great success with Ansible+Proxmox. The proxmox CLI is good enough that you can basically do a DIY terraform-like setup straight against the CLI, without any agents/plugins etc. (I know, declarative vs imperative, but hopefully the comparison makes sense.)
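For example, a couple of tasks that just run qm directly; the template ID, VM ID, name, and delegate host below are made-up placeholders:

```yaml
# Hypothetical: drive the Proxmox CLI from Ansible instead of a provider plugin.
# Template ID 9000, VM ID 120, "web01" and "proxmox-host" are examples only.
- name: Clone a VM from a template
  ansible.builtin.command: qm clone 9000 120 --name web01 --full
  args:
    creates: /etc/pve/qemu-server/120.conf   # crude idempotency check
  delegate_to: proxmox-host

- name: Set CPU and memory on the clone
  ansible.builtin.command: qm set 120 --memory 4096 --cores 2
  delegate_to: proxmox-host
  changed_when: false      # qm set always succeeds; don't report a change blindly
```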

Lately I've been toying with local gitlab-CI building and deploying images directly into the environment so that any changes to the code redeploy it (to prod...this is home after all haha)

Considered terraform too but not a big fan of their mystery 3rd party plugin provider system


Yeah, I'm with you. I run a rack at home with servers running a few applications orchestrated with a bit of Puppet. It's 100x more reliable than K8s, and I'm not having to do weird hacks to make persistent storage work, edit dozens of YAML files, or run a whole host of dodgy control plane services.

Sadly, I learned this the hard way by setting up a 4-node k8s cluster and realizing bare-metal support is abysmal and the maintainers simply don't care about you if you're not running a cluster on AWS or GCP. Unfortunately my day job still involves k8s, and people give me strange looks when I suggest bare metal, which is substantially less complex, cheaper, more performant, and easier to orchestrate and maintain.


I'm building my own education platform around getting people into IT and towards a career in Cloud and DevOps.

A few months back I built a Nomad based cluster with Consul and was like: wow! This is super cool... but it's also expensive and complex.

I'm going to be building out the platform with an ALB, two EC2 Instances in an AutoScaling Group based on an AMI I bake with Ansible and Packer. I'll use Terraform for the entire infrastructure.

The release pipeline for content updates (it's an MkDocs/Material based book with videos blended in) will be me updating the AMI and firing off a `terraform apply`.

I won't be using Docker. I won't require or need an orchestration engine.

It's a static website delivered via a custom web server written in Go (as I need custom "Have you paid to view this content?" logic). That's about as complex as it gets for now.

I'm actually looking forward to implementing it and watching as something so simple hopefully goes on to allow me to build and scale a business.


Kubernetes is just young. The story for bare metal is improving all the time. You just have to pick the right components for things like storage and load balancing, which is tedious but no more so than picking the right combination of tools to use to build your own Kubernetes replacement. In time even this pain will go away once someone builds a decent bare metal distro with all of this stuff available out of the box (k3s is trending in that direction).


I'm currently building out my single-node "homelab" with coreos + podman + systemd. Most of the config is pulled from one ignition file during the coreos installation. It's been a learning experience but also very lightweight.
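For anyone curious what that can look like, here's a rough Butane-style sketch (the kind of source file that gets compiled into the single Ignition file); the unit name and container image are placeholders, not the poster's actual config:

```yaml
# Hypothetical Butane config (fcos variant); the `butane` tool turns this
# into an Ignition file. The service name and image are examples only.
variant: fcos
version: 1.4.0
systemd:
  units:
    - name: uptime-kuma.service
      enabled: true
      contents: |
        [Unit]
        Description=Uptime Kuma under podman
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStartPre=-/usr/bin/podman pull docker.io/louislam/uptime-kuma:1
        ExecStart=/usr/bin/podman run --rm --name uptime-kuma -p 3001:3001 docker.io/louislam/uptime-kuma:1
        ExecStop=/usr/bin/podman stop uptime-kuma
        Restart=always

        [Install]
        WantedBy=multi-user.target
```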


For my setup I just have ansible deploy to a couple of hosts running Arch. Generally I just use OS packages and have ansible deploy the appropriate configs; for some software where that's more hassle, I use podman rootless containers. Such software includes e.g. Graylog, which wants particular versions of Elasticsearch and mongo, or postgres, which requires a manual data migration for annual version upgrades.
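The container half of that can be a plain task from the containers.podman collection; the image tag, names, and port binding here are illustrative placeholders:

```yaml
# Hypothetical rootless container task (containers.podman collection).
# The MongoDB version, names, and port mapping are examples, not a recommendation.
- name: Run MongoDB for Graylog as a rootless podman container
  containers.podman.podman_container:
    name: graylog-mongo
    image: docker.io/library/mongo:6.0
    state: started
    restart_policy: always
    ports:
      - "127.0.0.1:27017:27017"   # only reachable from the host itself
    volumes:
      - mongo-data:/data/db
  become: false                    # rootless: stay as the unprivileged deploy user
```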

k8s and nomad are on my "maybe eventually later" list though.



