
Reading this charitably: I guess I agree. k8s is definitely overpowered for my needs. And I'm almost certain my blog or my business will never need that full power. Fully aware of that.

But I'm not sure one can find something of "the right power" that has the same support from cloud providers, the open source community, the critical mass, etc. [1]

Eventually, a standard "simplified" abstraction over k8s will emerge. Many already exist, but they're all over the place, and some are vendor-specific (Google Cloud Run is basically just running k8s for you). Then, if you need the full power, you can eject. Something like Create React App, but for Kubernetes: Create Kubernetes App.

[1] Though Nomad looks promising.




Curious why you run it at all? The cost must be 10 times higher this way. Is it mostly for the fun of learning?

I come from the opposite approach. I have 4 servers: two DigitalOcean $5 instances and two Vultr $2.50 instances. One holds the db, one serves as the frontend/code, one does heavy work, and another serves a heavy site and holds backups. For $15 I'm hosting hundreds of sites and running plenty of background processes. I couldn't imagine hitting the point where k8s would make sense just for myself, unless for fun.


Sounds like your setup lacks high availability. If you don't believe you need that, then yeah, kubernetes is overkill.


Few people actually need high availability.

If you do, the recipe is to reduce the number of components, get the most reliable components you can find, and make the single points of failure redundant.

Saying you can use Kubernetes to make whatever stupid crap people tend to deploy with it highly available is like saying you can make an airliner reliable by installing some sort of super fancy electronic box inside. You don't get more reliability by adding more components.


> Saying you can use Kubernetes to make whatever stupid crap people tend to deploy with it highly available is like saying you can make an airliner reliable by installing some sort of super fancy electronic box inside. You don't get more reliability by adding more components.

This is a bit funny, considering Airbus jets use triple-redundancy and a voting system for some of their critical components. [1]

[1] https://criticaluncertainties.com/2009/06/20/airbus-voting-l...


What about application upgrades?

Are you ok with your application going down for each upgrade? With Kubernetes, it's very simple to configure a deployment so that downtime doesn't happen.
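For illustration, a minimal sketch (the deployment name, image, and health-check path are made up): a RollingUpdate strategy that never drops below the desired replica count, plus a readiness probe so traffic only reaches pods that actually answer.

    # Sketch only: "myapp", its image, and /healthz are placeholders.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels: { app: myapp }
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # keep every old pod until its replacement is ready
          maxSurge: 1         # bring up one new pod at a time alongside the old ones
      template:
        metadata:
          labels: { app: myapp }
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:1.2.3
            ports:
            - containerPort: 8080
            readinessProbe:   # only route traffic once the new pod responds
              httpGet: { path: /healthz, port: 8080 }
    EOF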


If and only if your application supports it. Database schema upgrades, for instance, can be tricky if you care about correctness.

On the other hand, atomic upgrades by stopping the old service and then starting the new service from the Linux command line (or a GitLab runner) can be done in 10 seconds (depending on the service, of course; dynamic languages/frameworks are sometimes at a disadvantage here). I doubt many customers will notice 10-second downtimes.
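A rough sketch of that kind of stop/swap/start deploy, e.g. run over SSH from a GitLab job (the service name and paths are made up):

    # Sketch: "myapp" and its paths are placeholders.
    systemctl stop myapp
    mv /srv/myapp/releases/myapp-new /srv/myapp/current/myapp   # swap in the new build
    systemctl start myapp
    systemctl is-active --quiet myapp || exit 1                 # fail the job if it didn't come back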


And that downtime can even be avoided without resorting to k8s. A simple blue-green deployment (supported by DNS or a load balancer) is often all that's needed.
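A sketch of the load-balancer variant, assuming nginx in front of a "blue" and a "green" instance (ports, file names, and the include layout are all made up):

    # nginx.conf contains:  include /etc/nginx/upstream-active.conf;
    # upstream-blue.conf:   upstream app { server 127.0.0.1:8080; }
    # upstream-green.conf:  upstream app { server 127.0.0.1:8081; }
    ln -sf /etc/nginx/upstream-green.conf /etc/nginx/upstream-active.conf   # cut over to green
    nginx -t && nginx -s reload   # reload only if the config is valid; old connections drain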

K8s only makes sense at near-Google scale, where you have a team dedicated to managing that infrastructure layer (on top of the folks managing the rest of the infrastructure). For almost everyone else, using it is damaging and introduces a lot of risk: either your team learns k8s inside out (so a big chunk of their work becomes managing k8s), or they cross their fingers and trust the black box (and panic when it fails).

The most effective teams I've worked on have been the ones where the software engineers understand each layer of the stack (even if they have specialist areas of focus). That's not possible at FAANG scale, which is why the k8s abstraction makes sense there.


It takes a couple of minutes at most for an average application upgrade/deployment; a lot of places can deal with that. Reddit is less reliable than what I used to manage as a one-man team.


So if a deployment is 2 minutes of downtime, you are limited to 2 per month if you still want to hit 4 9s of availability with no unexpected outages.
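The back-of-the-envelope math:

    99.99% uptime over a 30-day month:
    30 days x 24 h x 60 min = 43,200 min
    43,200 min x (1 - 0.9999) ≈ 4.3 min of allowed downtime
    -> two 2-minute deploys fit; a third already blows the budget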


You can get a k8s cluster on DO for around $15 p/m. And that itself can host all your apps.


How do you do automated deployments, though? I don't like using K8s for small stuff, but I am also extremely allergic to having to log on to a server to do anything. Dokku hits the sweet spot for me, but at work I would probably use Nomad instead.


Set your pod's image pull policy to Always and have an entrypoint shell script that clones the repo; then kill the pod so that on restart you get your code deployed.

You could run an init container with Kaniko that pushes the image to the repo, and then a main container that pulls it back, but for that you need to do kubectl rollout restart deploy <name>.
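Either way, the restart step itself is just (the deployment name is a placeholder):

    # After a new image has been pushed under the same tag (imagePullPolicy: Always):
    kubectl rollout restart deploy myapp   # pods are recreated and re-pull the image
    kubectl rollout status deploy myapp    # block until the new pods are ready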

If you are looking for pure CI/CD, GitLab has awesome support, or you could use Tekton or Argo. They can run on the same cluster.


What's wrong with logging in to a server? I love logging in to a server and tinkering with it. Sure, for those who operate fleets of hundreds it's not scalable, but for a few servers that's a pleasure.


The problem is when you need to duplicate that server or restore it due to some error, you have no idea what all the changes you made are.

Besides, it's additional hassle and a chance for things to go wrong. The way I have it set up now, production gets a new deployment whenever something is pushed to master, and I don't have to do anything else.


But this is a solved problem since... well, at least since the beginning of the internet. I have managed 1000s of Linux & BSD systems over the past 25 years, and I've had scripts that automate all of that since the mid '90s. I never install anything manually; if I have to do something new, I first write + test a script to do it remotely. Also, all this 'containerization' is not new; I have been using debootstrap/chroot since around that time as well. I run multiple cleanly separated versions of legacy (it is a bit scary how much time it takes to move something written in the early 2000s to a modern Linux version) + modern applications without any (hosting/reproducibility) issues, and have done since forever (in internet years anyway).


That's great but then you're not doing what the commenter above said and "logging in to a server and tinkering with it"


True; I learned many years ago that that is not a good plan, although I, too, love it. But I use my X220 and Openpandora at home to satisfy that need. Those setups I could not reproduce if you paid me.


> The problem is when you need to duplicate that server or restore it due to some error, you have no idea what all the changes you made are.

A text file with some setup notes is enough for simple needs, or something like Ansible if it's more complex. A lot of web apps aren't much more than some files, a database, and maybe a config file or three (all of which should be versioned and backed up).
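For instance, the "setup notes" can just be a small re-runnable script (package names and paths below are only examples):

    #!/bin/sh -e
    # Example only; adjust packages and paths to the app in question.
    apt-get update
    apt-get install -y nginx postgresql
    cp ./config/myapp-nginx.conf /etc/nginx/sites-enabled/myapp.conf   # config lives in the repo
    systemctl reload nginx
    systemctl enable --now postgresql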


I would be a lot more confident trying to back up my old-school apps than the monstrosity we have on Kubernetes at present.


Make a backup of /etc and the package list. Usually that's enough to quickly replicate a configuration. It's not like servers are crashing every day. I've been managing a few servers for the last 5 years and I don't remember a single crash; they just work.
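On a Debian/Ubuntu box that can be as simple as (filenames are just examples):

    # Backup
    tar czf etc-backup.tar.gz /etc
    dpkg --get-selections > packages.list

    # Restore on a fresh install
    dpkg --set-selections < packages.list
    apt-get dselect-upgrade -y
    tar xzf etc-backup.tar.gz -C /   # tar stripped the leading /, so this lands back in /etc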


I'm logging into a server because I need to, not because it's a 'pleasure'.

I don't hate it, but if you need to log in to a server regularly to do an apt upgrade, you should enable automatic security updates instead of logging in every few days.
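On Debian/Ubuntu, for example:

    apt-get install -y unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades   # turns on the periodic security-update runs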

If your server's disk fills up because of some log files or similar, you should fix the underlying issue rather than needing to log in to the server.

You should be able to trust your machines, regardless of whether it's only one machine, 2, 3, or 100. You want to be able to go on holiday and know your systems are stable, secure, and doing their job.

And logging in also implies a snowflake. That doesn't matter as long as the machine runs and you don't have that many changes, but k8s actually makes it very simple to finally have an abstraction layer for infrastructure.


Devs went on holiday before k8s.


Yes true, not sure what point you are trying to make.


Flux or Argo can help with this. The operator lives on your cluster, and ensures your cluster state matches a Git repo with all your configuration in it.

Flux - https://github.com/fluxcd/flux

ArgoCD - https://argoproj.github.io/argo-cd/


Write a script that runs remotely over SSH and trigger it on the appropriate event in your CI/CD host.
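Something along these lines as the deploy step (host, user, and paths are made up):

    # Sketch of a deploy step run by the CI job
    scp ./build/myapp deploy@app.example.com:/srv/myapp/myapp.new
    ssh deploy@app.example.com '
      # a rename on the same filesystem is effectively atomic
      mv /srv/myapp/myapp.new /srv/myapp/myapp &&
      sudo systemctl restart myapp
    '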


This is what I like to do. In my case, even the CI/CD host is just a systemd service I wrote.

The service just runs a script that uses netcat to listen on a special port that I also configured GitHub to send webhooks to, and processes the hook/deploys if appropriate.

Then when it's done, systemd restarts the script (it is set to always restart) and we're locked and loaded again. It's about 15 lines of shell script in total.
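The systemd side is roughly this shape (unit and script names here are just for illustration):

    # /etc/systemd/system/deploy-hook.service
    [Unit]
    Description=Tiny webhook deploy listener

    [Service]
    # deploy-hook.sh listens with netcat, handles one webhook, then exits
    ExecStart=/usr/local/bin/deploy-hook.sh
    # systemd re-arms the listener after every run
    Restart=always

    [Install]
    WantedBy=multi-user.target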


That's quite elegant.


Do you manage to aggregate all logs in a single place? Do you have the same environment as staging? How do you upgrade your servers? Do you have multiple teams deploying their own components? Do you have a monitoring/metrics service? How do you query the results of a cron job? Can you roll back to the correct version when you detect an error in production?


Remote syslog has been a thing for how many years?! As has using a standard distribution for your app, with a non-root user per app, easily wiped and set up on every deploy (hint: that's good for security too!). Monitoring was also a solved problem, and I guess cron logs to syslog. Rollback works just like a regular deploy? (I wonder how well k8s helps you with db-schema rollbacks?)
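For the remote syslog part, e.g. with rsyslog (the hostname is a placeholder):

    echo '*.* @@logs.example.com:514' > /etc/rsyslog.d/90-forward.conf   # @@ = TCP, @ = UDP
    systemctl restart rsyslog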


Setting all of that up from scratch is not really that easy, and I wouldn't consider "monitoring" to have been a "solved problem". Syslog over TCP/UDP had many issues, which is why log-file shippers happened, and you still need to aggregate and analyze the logs. Getting an application to reliably log remotely is IMO easier with k8s than with remote syslog, as I can just whack the developer again and again until it logs to stdout/stderr and then easily wrap that however I want.

Deploying as a distribution package tends not to work well when you want to deploy it more than once on a specific server, which quickly leads to the classic end result of that approach: a VM per deployment, minimum (been there, done that, still have scars).

Management of cron jobs was a shitshow, is a shitshow, and probably will be a shitshow except for those that run their crons using non-cron tools (which includes k8s).


Yes, k8s makes it easier and more consistent. But it's not like all the stuff from the past suddenly stopped working or was impossible, as the GP made it sound ;)


Are you using a framework (cPanel etc) for this or just individual servers talking to each other? Need to move my hosting to something more reliable and cheaper...


I'm learning Elixir now, and it's quite confusing to me how one would go about deploying Elixir with K8s, and how much you should just let the runtime handle.

How much of K8s is just an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.


> But I'm not sure one can find something of "the right power" that has the same support from cloud providers, the open source community, the critical mass, etc.

I totally agree. I would dearly like something simpler than Kubernetes. But there isn't a managed Nomad service, and apparently nothing in between Dokku and managed Kubernetes either.


I've been very pleased with Nomad. It strikes a good balance between complexity and feature set. We use it in production for a medium-sized cluster and the migration has been relatively painless. The Nomad agent itself is a single binary that bootstraps a cluster using Raft consensus.


I was about to, but it seems like you answered the question yourself with that footnote.



