Kubernetes for personal projects? No thanks (carlosrdrz.es)
481 points by carlosrdrz on Oct 4, 2018 | 269 comments



Oh man, the original article went way over the author's head. The point of the original article was that even though Kubernetes is primarily useful for tackling the challenges involved with running many workloads at enterprise scale, it can also be used to run small hobbyist workloads at a price point acceptable for hobbyist projects.

Does that mean that Kubernetes should now be used for all hobbyist projects? No. If I'm thinking of playing around with a Raspberry Pi or other SBC, do I need to install Kubernetes on the SBC first? If I'm thinking of playing around with IoT or serverless, should I dump AWS- or GCE-proprietary tools because nobody will ever run anything that can't run on Kubernetes ever again? If I'm going to play around with React or React Native, should I write up a backend just so I can have something that I can run in a Kubernetes cluster, because all hobbyist projects must run Kubernetes now, because it's cheap enough for hobbyist projects? If I'm going to play around with machine learning at home and buy a machine with a heavy GPU, should I figure out how to get Kubernetes to schedule my machine learning workload correctly instead of just running it directly on that machine, because uhhh maybe someday I'll have three such machines with powerful GPUs plus other home servers for all my other hobbyist projects?

No, no, no, no, no. Clearly.

But maybe I envision my side project turning into a full-time startup some day. Maybe I see all the news about Kubernetes and think it would be cool to be more familiar with it. Nah, probably too expensive. Oh wait, I can get something running for $5? Hey, that's pretty neat!

Different people will use different solutions for different project requirements.


> But maybe I envision my side project turning into a full-time startup some day.

The state of the art for cluster management will probably be something completely different by then. Better to build a good product now and if you really want to turn it into a startup, productionize it then.

> Maybe I see all the news about Kubernetes and think it would be cool to be more familiar with it.

If learning Kubernetes _is_ your side project, then perfect, go do that. Otherwise it's just a distraction, taking more time away from actually building your side project and putting it into building infrastructure around your side project.

If what you really wanted to build is infrastructure, then great, you're doing swell, but if you were really trying to build some other fun side app, Kubernetes is just a time/money sink in almost all cases IMO.


> taking more time away from actually building your side project and putting it into building infrastructure around your side project.

I generally dislike this way of thinking. Infrastructure is a core component of whatever it is you're building, not an afterthought. Maybe you can defer things until a little bit later, but if you can build with infrastructure in mind you'll be saving yourself so many headaches down the road.

You don't need to build with the entire future of your project's infrastructure in mind, but deploying your project shouldn't be an "ok, now what?" moment when you're ready, like it came as a big surprise.


> Infrastructure is a core component of whatever it is you're building

That's true in some sense -- but you can get surprisingly far using a PaaS like Heroku to abstract that infrastructure away.

I'm a big fan of Kubernetes, and use it in production at my company, but I would not recommend using k8s in a prototype/early-stage startup unless you're already very familiar with the tool. The complexity overhead of k8s is non-trivial, and switching from Heroku to something like k8s doesn't involve undoing much work, since setting up Heroku is trivial.


I am a big fan of K8S too; not only do I use it in production, but I was also the one who set it up for my team. I agree that, unless you are already familiar with it, it is not always useful at the prototyping stage.

There is something to be said about having the infrastructure in mind though. That's why I'm inclined to use something like Elixir/Phoenix for web-based projects. Some (not all) of the ideas that K8S brings to the table are already built into the Erlang/OTP platform.

As for Heroku, there was a recent announcement that I think shifts things quite a bit: https://blog.heroku.com/buildpacks-go-cloud-native ... having standardized container images that run buildpacks.

The ecosystem and tooling are not quite there yet, but I can see this significantly reducing the investment needed to Dockerize your app for K8S.

At that point, for the hobbyist, it might be:

Prototype -> Heroku -> K8S with an Operator that can run the buildpack

K8S is really a toolset for building your own PAAS. If there were a self-driving PAAS (using Operators) targeting small hobbyists that ran cloud-native buildpacks, the barrier to entry for a hobbyist using K8S would be much lower.


> K8S is really a toolset for building your own PAAS.

I don't agree with this; that's one of the things you can do with it for sure, but multi-tenant isolation is actually one of k8s' weak points -- for example by default you can access services in any namespace, and you need something quite specialized like Calico and/or Istio to actually isolate your workloads. Plus you're still running workloads in containers on the same node, so inter-workload protection is nowhere near as good as if you're using VMs.

I see the big value add of k8s as making infrastructure programmable in a clear declarative way, instead of the imperative scripting that Chef/Puppet use. This makes it much easier to do true devops, where the developers have more control over the infrastructure layer, and also helps to commoditize the infrastructure layer to make the ops team's job easier, if you ever have a need to run your own on-prem large-scale cluster.
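
For example, the entire desired state of a small web service can be one short manifest that the cluster continuously reconciles. A minimal sketch, with hypothetical names and image:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2                   # desired state; k8s keeps two pods running
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: example/web:1.0  # hypothetical image
            ports:
            - containerPort: 8080

Re-applying the same file is a no-op; bumping the image tag and re-applying is the whole deployment step, which is the declarative property I mean.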


Cloud Foundry dev (pcf dev) or minishift will give you your own PaaS running in a VM


As nijave said, minishift and minikube are Kubernetes distros that will run on your laptop given VirtualBox, KVM, xhyve, or Hyper-V, and at least 2 cores, 2 GB of RAM, and 20 GB of disk.
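
To give a rough idea of the setup, it's on the order of (a sketch, assuming minikube and one of those hypervisors are already installed):

    minikube start --cpus=2 --memory=2048 --disk-size=20g
    kubectl get nodes    # should report a single Ready node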


>The complexity overhead of k8s is non-trivial, and switching from Heroku to something like k8s doesn't involve undoing much work, since setting up Heroku is trivial.

This sounds like exactly what I'm advocating. I wasn't saying that you need to build specifically with Kubernetes in mind, but some people aren't even thinking about what it means to deploy to Heroku. Maybe you do some small amount of reading and research to know "switching from Heroku to Kubernetes is a viable migration path." What I strongly dislike is this mentality:

> taking more time away from actually building your side project

None of what I've mentioned is time taken away or lost. Time spent doesn't just pause when you are ready to deploy or productionize. SWEs need to think more holistically about their software's lifecycle.


>Infrastructure is a core component of whatever it is you're building, not an afterthought.

Great, so use something appropriate for a small side project which in 99.99% of cases will not be k8.

Unless the small side project is learning cluster management. In which case go nuts.

K8's operating cost ($) might be manageable for a small project, but that doesn't mean the upfront learning and implementation costs (hrs) don't exist.

Your time isn't infinite.


>Unless the small side project is learning cluster management.

I think one of the underlying assumptions that this article missed about the original is that building a side project often isn't about what the end result will be. It's about the skills you pick up along the way.

If you have a killer product idea and you need to get it to market as quickly as possible then by all means, take the fastest route. But maybe you're working on something that's just a copy of an existing app so you can get the flavor of a new language or framework.

If that's the kind of project that someone's working on then whether or not k8 is applicable to the scale of the finished product is irrelevant. What matters is 1) do they see a benefit in increasing their knowledge of k8, and 2) can they do so without significantly ramping up the cost of the project.

The original article answers question 2, and provides a few arguments as to why the answer to 1 might be yes in the current job market.


Ideally, it shouldn't be too hard to build a side project in such a way that these sorts of infrastructure things are not only a decision that you can defer indefinitely, but also one that you can change later, if you figure out a better way to do it on down the line.

I don't really want to get into a debate over the relative merits of k8s or any other way of doing things in production, but I do want to throw out a general observation: The technologies we use always solve problems that we have. This cuts two ways: If you have a problem, you'll find a technology to solve it. On the other hand, if you have a technology, then soon enough you'll find a problem that it can solve -- even if you have to create the problem for yourself first.


I agree, so for my side project I thought a lot about what scale would look like if anyone else wants to use the janky accounting system that works for me. Anything below 20,000 unique (but not concurrent) users could easily be handled by a couple of servers and a decent database server. I figured if I get over 50 users I can start thinking about kube and containers.

Until then I think my efforts are best spent making a simple monolith.


A million times yes. You have to pick your battles.


> The state of the art for cluster management will probably be something completely different by then

Hasn't it only changed twice in the past two decades, first with VMs and now with containers? I don't think this is something you have to worry about long term.


Uh, I don't think VMs or containers are cluster management in themselves. They are just technologies. What you use to orchestrate them is entirely different, and yes, that has changed over the years many times.


For example? Most early cluster management was found in products doing general fleet provisioning, HPC/HTC in academia, proprietary commercial offerings like relational databases, or proprietary in-house solutions like Google's Borg.

I'd describe cluster management as four things - provisioning + configuration, resource management (CPU+RAM+Disk), scheduling (when & where to put things based on availability/resources), and deployment (running things). At least, these are the things I'm concerned about when managing a product that requires a cluster (besides monitoring).

Early cluster management tools were often just doing provisioning and configuration (CFEngine). We see Puppet, Chef, and eventually Ansible further refine the solution as we enter the "Bronze Era", where standing up new servers takes a fraction of the time. We no longer even bothered to name each server by hand when going through the installation process after booting up the OS - servers had become cattle, and they were tagged appropriately.

Around the same time (2003-2006) we see virtual machines begin to catch on, culminating in the 2006 debut of Amazon Elastic Compute Cloud (EC2) and the birth of modern cloud computing. We now had a general purpose mechanism (VMs) to spin up isolated compute units that could be provisioned, configured, and managed using tools like CFEngine & Puppet. IT departments begin spending less on the SANs that dominated the early aughts and shift budgets to AWS or VMWare ESXi.

Then in 2006 we see Hadoop spin off from Nutch, and the MapReduce paradigm popularized by Google leads to an optimization of the resource and scheduling problems thanks to YARN and Mesos and related tools. Non-trivial programs and workloads can now be scheduled across a cluster with vastly improved confidence. Distributed programs are now in reach of a much larger audience.

Suddenly new Hadoop clusters are springing up everywhere and the "Silver Era" begins. Tools enabling greater efficiency hit the scene like RPC serialization frameworks (Thrift, Protobuf), improved on-disk storage (O/RCFile, Parquet), distributed queues (RabbitMQ, SQS), clustered databases (Cassandra, HBase), and event streaming and processing (Storm, Kafka).

Coordination becomes more complicated and essential, so we get ZooKeeper and etcd and Consul. Managing passwords and other secure data leads to tools like Vault. Logstash and Graylog make log management less of a nightmare. Solr leads to Elasticsearch, which leads to Kibana, and we now have logging and visualization for both server and application monitoring.

Developers also begin to take advantage of VMs and it's not long before tools like Vagrant help bridge the gap between development, staging, and production. Our profession takes a collective sigh of relief as getting our distributed programs started on a new developer's machine goes from three days of trial-and-error to minutes thanks to well-maintained Vagrantfiles.

Still, deployment is a big pain point. A single cluster could be home to Hadoop (which benefits from YARN) and thirty other things being run by teams across the organization. One day you discover that your mission critical web app is sharing resources with your BI team after your CEO calls you in a panic to tell you the website is down. Turns out a DBA transferred a 20TB backup across the same switch shared with your customer-facing website because somebody forgot to isolate a VLAN to prevent backups from interfering with other traffic.

This doesn't even take into consideration the abominations we call deployment scripts that developers and DevOps wrangle together to get our programs deployed.

Then Docker and containers become a thing. Developers are now able to set up incredibly complex distributed programs with little more than text files. No more coordinating with DevOps to make sure your servers have the latest version of libpng, or spending months fighting to get Java upgraded to the latest version. I shout in glee as I delete Chef from my machine because progress. Then the beer goggles dissipate and the love affair ends and we realize things are shitty in a different way.

Docker Swarm and Kubernetes emerge, and that brings us to today, which I'll call the "Golden Era". We now have incredible tooling that deals with provisioning and configuration, resource management, scheduling, and deployment. Like any new era there are rough spots, but I'm positive incredible new tech will pop up.

Throughout all of this, virtual machines and containers were the fundamental building blocks that enabled improved clustering and cluster management. They're inextricably tied together. But all-in-all, things have changed VERY little (cough LXC cough) in 20 years compared to the rest of the landscape. We're solid for another 10 years before anything like this is going to happen again.


> Better to build a good product now and if you really want to turn it into a startup, productionize it then.

Well, you gotta deploy with something. Why not k8s?

Obviously if you're developing locally, k8s would not help in the least.


Personally, as somebody who is building a small Kubernetes cluster right now at home just for the fun of it: I think using Kubernetes for small projects is mostly a bad idea. So I appreciate the author warning people so they don't get misled by all the (justified) buzz around it.

For your average developer who just wants to get something running on a port, Kubernetes introduces two barriers: containerization and Kubernetes itself. These are non-trivial things to learn, especially if you don't have an ops background, and both of them add substantial debugging overhead. And again for that developer, they provide very, very small gains.

I think the calculus changes if that developer starts to run multiple services on multiple servers, wants to keep doing that for years, and needs high uptime. I have a bunch of personal services I run in VMs with Chef, and I'm excited to convert that over to Kubernetes, as it will make future OS upgrades and other maintenance easier. But my old setup ran for something like 6 years and it was just fine. For hobbyists whose hobbies don't include playing with cluster-scale ops tooling, I think it's perfectly fine to ignore Kubernetes. It's the new hotness, but it doesn't provide much value for them yet. They can wait a few years; by then the tooling will surely have improved for low-end installs.


> For your average developer who just wants to get something running on a port, Kubernetes introduces two barriers: containerization and Kubernetes itself. These are non-trivial things to learn, especially if you don't have an ops background, and both of them add substantial debugging overhead.

From the application-developer side, I'd dispute this. I was told to use Docker + Kubernetes for a relatively small work project recently, and I was able to go from "very little knowledge of either piece of tech" to "I can design, build, and deploy my app without assistance" in about 1 week of actual study time, spread out over the course of the project. And although I have several years' experience, I'm not some super-developer or anything.

What surprised me most is how well-documented (for the most part) everything is. The Kubernetes and Docker sites have a ton of great information, and the CLIs are rich and provide a consistent interface for all the details about your environment. (To tell the truth, that alone makes the time investment worth it.)

After this, there's no way in hell I'm going back to Heroku or similar and trying to piece together their crappy, one-quarter-documented "buildpack" system. I'd take a Kubernetes-and-Docker-first PaaS at a reasonable markup any day of the week.


Did you already have a Kubernetes cluster set up? If so, that seems plausible to me. And yes, if you're deploying to a PaaS already, Kubernetes is a fine one. But compared to "start up a process and leave it running", I think any PaaS requires a fair bit of learning to deploy anything complicated.


I think the original article made a good point, and one that seems to be overlooked by pretty much every critical response: if you don't already know standard sysadmin tooling and procedures, then for small projects the value proposition of k8s is probably comparable to that of standard sysadmin tooling. At least with k8s, you can take that knowledge with you when you want to scale.

I don't know if I'm totally convinced by that argument alone, but it would be nice if every critical response didn't seem to assume that every hobbyist is born understanding systemd, ansible, packer, qemu, etc.


This is the thing I think most people miss when they complain about the complexity of Kubernetes. If you already know how to use standard sysadmin tools it looks like a lot of additional knowledge to achieve similar outcomes. These people seem to forget that there is a lot of complexity in their sysadmin tools too.


Do you really think you can set up and run a Kubernetes cluster, even a small one, without significant understanding of standard sysadmin tooling? I know a fair bit, and I didn't find it easy.


Absolutely. I set one up from scratch on a cluster of Raspberry Pis and I don't know shit about Packer, Ansible, systemd, and probably a bunch of things I'm only nominally familiar with. And that was a bare metal cluster, and it was hard; however, I don't think it was hard for lack of sysadmin knowledge. It was hard because I was learning K8s, I had to learn about ingress and MetalLB, K8s requires you to modify cgroups (things that I don't know shit about), plus various RPi/docker/architecture tedium and other K8s particularities that aren't "standard sysadmin" things. More importantly, as the original article mentioned, cloud providers do this setup for you for an affordable price, so you absolutely don't need to know the standard sysadmin things.


If neither you nor somebody available to you knows standard sysadmin things, you shouldn't be deploying a production system. When things inevitably do go wrong, you'll need that sysadmin knowledge to get things sorted out.


But here we're talking about personal projects. That's how a lot of people learn exactly what production systems take.


You certainly can, but you need to find that one easy way to do it, that only appears obvious in hindsight.


> These are non-trivial things to learn

Right, but again I think the point being made is that, if those are skills you do want to learn, or plan on using later down the road, it's worth knowing that you can use K8s even at a small scale.

Obviously, in most cases it could be premature optimization, but for some people (including you), it can be fun to learn.


OK, but this is a point made in the article the comment attempts to refute!


I do agree with you, but I don't think I really missed the point of the original article. From the original article:

> However popular wisdom would suggest that Kubernetes is an overly complex piece of technology only really suitable for very large clusters of machines; that it carries a large operational burden and that therefore using it for anything less than dozens of machines is overkill. I think that's probably wrong.

I don't think that is wrong. I do think it is probably overkill, and IMO it does introduce operational burden and complexity. That doesn't mean you shouldn't do it, though, if you're interested in exploring the technology, for example.


I don't understand what the operational burden is. We literally do nothing to our K8s cluster, and it runs for many months until we make a new updated cluster and blow away the old one. We've never had an issue attributed to K8s in the 2 years we have been running it in production. If we ever did, we'd just again deploy a new cluster in minutes and switch over. Immutable infrastructure.

It is not like I haven't done it the "old" way. I spent many years doing hand deploys, making deployers, running Ansible/Chef. It is just that we always found we can never confidently update servers running many apps as it would step on other applications. So we'd just make new ones, test and switch. This was not an easy process either. Plus we'd encounter issues like oh someone didn't make a startup script or filled up /var with logs, or had something eat up all the memory. All of these operational problems are gone with K8s. I know what you are thinking "well you did it wrong". Yes sometimes developers do things wrong. But in container/K8s land that wrong stuff is contained, and if you don't do things "right" you can't even run.

So we had operational issues there. Now we have a universal platform where someone can ship their app anywhere and have it run the same. That is a huge win. All for no extra work.


Operational burden comes when you have to troubleshoot an issue. Simply deleting and recreating doesn't solve recurring problems.


I have had the same experience and the same journey from hand deploys, using configuration management, and all of that.


Why is your comment gray?


> using it for anything less than dozens of machines is overkill

The question isn't really whether you need dozens of machines, it's whether you can foresee eventually maybe needing dozens of machines.

Remember the bad old days when people said that relational databases were worthless because they "don't scale", that using Mongo and other NoSQL databases was practically a necessity for doing anything modern and "web-scale", because otherwise after you got your big break and you got popular you would need to keep up with all the new traffic and not crash? A lot of engineers have this tendency to worry about scalability long before it's ever a problem. Something about the delusions of grandeur incurred by people who got into engineering because they were inspired by great people building big things.

Starting out by running Kubernetes on a three-node cluster is actually the correct call for a small project if you can reasonably foresee needing to elastically scale your cluster in the future, and don't want to waste days or weeks porting to Kubernetes down the line to deal with your scalability problems that you foresaw having in the first place.

Again, that doesn't mean that Kubernetes is right for every hobbyist project. But there is definitely a (small) subset of hobbyist projects for which it is not overkill.


> A lot of engineers have this tendency to worry about scalability long before it's ever a problem.

"Premature optimization is the root of all evil" -- Donald Knuth

Translating the old suggestions [1] to the realm of devops, I think the point really is: if you are fairly certain that your optimization (k8, docker and so on) will result in "better" code and practices right away, then you should do it. If not, you shouldn't.

I personally find this stuff to be way overkill in a lot of cases. Does Kubernetes really accelerate your development process? If you are a two-person startup, your objective is to find ways to deliver value as quickly as possible, not to play at being Facebook. When you become Facebook (or even just Basecamp), you will then have enough resources to do this optimization. But if you feel your development process is really so much better with K8 and friends (because it's what you used in a previous job or something), by all means go for it.

[1] http://wiki.c2.com/?PrematureOptimization


100% agreed. If a dev thinks they might want to put something in Kubernetes eventually, there are a few best practices that they could adopt early on to make that easy. But basic hygiene aside, they should wait.

The number of projects that do scale up is much, much smaller than the number that might. If I actually want my project to serve a zillion users, the right place for me to focus my effort is not on Kubernetes, but on user context interviews, user tests, and fast iteration based on the results of experiments.


Kubernetes has a definite whiff of NoSQL - a massively hyped tool/technique originating from Google with oversold benefits.

I tried it about 6 months back with the intent of using it in a corporate prod environment and getting set up was... a massive pain in the ass to say the least - compared to the existing ansible set up. It was supposed to solve headaches, not cause them.

I wasn't impressed. I wouldn't be surprised if it ends up being "Angular 1.0" to someone else's react.


I set up a Kubernetes cluster a year ago at work and a private one last weekend.

Last year it took, I think, 2 days. My private one was up and running within ~1h, including writing the Ansible role to first install binaries/dependencies and join the cluster as a worker node.

Either you didn't use kubeadm to set it up or... I have no idea how you could've possibly failed.

It's pretty much

    (all) ${packagemanager} install docker-ce kubectl kubelet kubeadm
    (master) kubeadm init -> prints token
    (node) kubeadm join ${token}


Jeff Geerling even wrote an Ansible role to do all of the heavy lifting for you. I've used it alongside Vagrant to spin up a three-node cluster in ~15 minutes.

https://github.com/geerlingguy/ansible-role-kubernetes
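
Usage is roughly an `ansible-galaxy install geerlingguy.kubernetes` plus a playbook along these lines (a sketch, assuming you already have an inventory of cluster hosts; see the role's README for the variables that pick master vs. worker hosts):

    - hosts: all
      become: yes
      roles:
        - geerlingguy.kubernetes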


I've used OpenStack a good bit, but not Kubernetes directly, and I have never set it up. Is there an up-to-date, in-depth tutorial around?



He's written so many useful and well-maintained Ansible roles.


I didn't fail, I just couldn't see a strong ROI after doing a spike.

I did use kubeadm. It required considerably more than just 3 simple steps to get a basic working cluster up. Two days was more like it.


Managing Kubernetes yourself is a headache, which is why I more or less only consider managed services like Google Kubernetes Engine for real use-cases. That's why the original article showing that you could install and run it on a set of 3 micro preemptible instances for ~$5/month was so compelling to me.



How does kubeadm take two days? I had to rebuild my cluster recently and it literally took me 20 minutes.


I didn't use it last time. It was marketed as beta iirc


You don't really need to jump into Kubernetes to be prepared to eventually migrate to it. As long as you're using some kind of compatible containerization platform it is not incredibly hard to shift those workloads to Kubernetes. If you're really only needing to run a small handful of containers, running them directly in Docker or rkt isn't challenging and has very little operational overhead compared to running a full k8 cluster.
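
For that small handful of containers, "running them directly" really is about one command per service. A sketch, with a hypothetical image name:

    docker run -d --name myapp --restart unless-stopped -p 8080:8080 example/myapp:1.0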


This echoes my thoughts as well. You can get pretty far with just a simple Docker-Compose file for many personal projects, and still leave the door open to a relatively easy transition to more advanced orchestration tooling should the project grow large enough to require it.

Personally I really like Docker Compose files - virtually no overhead to maintain alongside a handful of Dockerfiles, coupled with really simple syntax for expressing the relationships between them. Containers themselves seem conceptually challenging for some newcomers in my experience (concepts like image immutability don't have many analogues if you are new to containers, I'd argue, and I've seen some very experienced developers get stuck trying to map them conceptually to VM images, which is not a good fit), but the payoff in the ease of deploying your work is huge. It's nothing a reasonable developer can't learn in a few hours though, and the documentation is pretty good.
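
A compose file for a typical small project really is only a handful of lines. A sketch, assuming a hypothetical app image plus Postgres:

    version: "3"
    services:
      app:
        build: .
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:10
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata: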

I especially find Docker great for projects I worked on years ago - I no longer really need to keep track of how to install/configure the software dependencies of a side project I haven't touched in ages; just hit 'docker-compose up' and I'm running.

I also really like Docker-Compose as a replacement for things like Vagrant scripts to create developer environments in some scenarios - way less overhead than a vagrant script spinning up multiple VMs on your laptop, and generally much faster deployments.


> The question isn't really whether you need dozens of machines, it's whether you can foresee eventually maybe needing dozens of machines.

Kubernetes doesn't manage the machines. It manages the applications on machines that are managed with something else.

You have to do something else to manage the machines.


cluster-autoscaler can manage the machines easily. The point is that Kubernetes is an abstraction layer, and how far down you use that abstraction is up to the user: should I keep going down to containers, or just use the node itself? In a production environment it makes all the sense in the world to use something with the potential to scale infinitely, but for hobby projects the mental overhead doesn't justify using k8s.


You could say that Kubernetes "manages the machines" from the point of view of the application, by providing a suitable environment to execute in (including actually starting the application)


I have to disagree. It doesn't introduce burden and complexity. They're already there whether you use Kubernetes or not.

The difference is that if you did build it all by hand as the author said, if it ever scales, you're going to have double the job to make it scale.

It's all a question of: do I think my software will succeed?

If it's a hobby project that will never get big, it's not worth the hassle. If it actually has a chance of succeeding, the small added complexity of Kubernetes will pay dividends extremely quickly when the system needs to scale.

Even with as little as two machines, I'd argue k8s is already adding more value than managing those by hand. People can say otherwise because they're used to it, but being used to it is not the point of the discussion.

The author also talks about Ansible which is another piece of complexity that would be comparable with doing it in k8s. I'd argue you have less YAMLs with k8s than with Ansible for a small project.

The only argument I see for doing anything by hand today is if it's a play thing.


I see what you mean, but I don't really agree. You can introduce that burden and complexity whenever you want. If you spend the start of your project working on this, you will be prepared for scaling (if you ever need it) but you could have used that time on actually working on the project and checking if you actually will need scale at some point.

I don't know about your projects, but in my case most (all) of them don't really need any kind of scale. Hell, this blog runs on a tiny $5 DO machine and is still happily serving traffic from HN. I do understand not all projects are small blog instances, though :)

I guess in my case I prefer to just keep it simple and see how far I go with that setup than spending time working on making the perfectly-scalable project that is never serving more than 2 requests per second. If it ever grows, I will need to work to make it scale, sure, but on the other hand that is a good problem to have.

Anyway, I understand this is pretty subjective and depends on how you think about your projects and your requirements, so I do understand there will be people both in agreement and disagreement.


I completely agree it's overkill to run your own cluster. That'd be good for a learning experience, but way too complex to use/maintain otherwise.

However I read somewhere you had experience with Kubernetes, right? That means there is no extra work to learn the technology.

Now let's take your blog as an example. I'm gonna guess there's an official Docker image for whatever software you use and you could create an image, deployment + service + ingress in less than an hour for it (pretty much every example out there is about how to setup a blog, heh).

If you have to do all that manually through SSH, I'd argue it takes pretty much the same time and the complexity is the same. You will simply change the tools/concepts but won't be caught by manual gotchas.


> do I think my software will succeed?

With hobby projects for the vast majority of people, the answer is somewhere in the territory of "you can worry about that after it succeeds."


Let's talk about complexity for a moment, for one aspect of a simple service: ingress.

Kubernetes:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: test-ingress
    spec:
      rules:
      - http:
          paths:
          - path: /testpath
            backend:
              serviceName: test
              servicePort: 80
Nginx:

    location /testpath {
        proxy_pass http://127.0.0.1:8080;
    }
Which is less complex? Which is beta, and thus could be changed over time (it happens a lot). Which one requires major (and breaking) infrastructure updates every 3 months?

> I'd argue you have less YAMLs with k8s than with Ansible for a small project.

Since you typically need one yaml document per K8s resource, and can describe multiple independent and idempotent actions in one Ansible document, I think this is easily demonstrable as false for small projects (and likely big projects as well).


The first one is a complete configuration you could kubectl apply into a cluster that sends traffic to a backend service that may be running across multiple instances on multiple machines.

The second is a configuration fragment that is useless by itself that would send all traffic to a single instance running on localhost.
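
(For completeness, the backing Service that the Ingress references is also only a few lines. A sketch, assuming pods labeled app: test listening on 8080:)

    apiVersion: v1
    kind: Service
    metadata:
      name: test
    spec:
      selector:
        app: test
      ports:
      - port: 80
        targetPort: 8080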


You can just drop it in a conf.d folder created by the package install and it would work just fine.

Also we're talking simple projects, and simple projects are typically not distributed; typically don't need to be distributed.

That said, supporting multiple backends with different load balancing algorithms is also pretty simple to write as well.


Ok, great... I have a bunch of simple projects like that, one web instance running on a single host.

How do I safely upgrade it without downtime? Ensuring that the new version starts up properly and can receive traffic before sending it requests and stopping the old one?

With k8s: kubectl set image deployment/my-deployment mycontainer=myimage:1.9.1 (or just use kubectl apply -f)

With your nginx config: ????


That moves the goalposts a bit. We've gone from a simple service to a fleet of highly available services on multiple hosts with zero downtime requirements.

At which point, sure, use Kubernetes.


No.. I didn't move the goalposts at all.

Still a single service on a single machine, how do I safely upgrade it?

A rolling deployment in k8s does work the same way on a single host with minikube as it does on a 1000 node cluster, but I'm still just asking about the single host case.


> Still a single service on a single machine, how do I safely upgrade it?

Depending on your OS-package's provided init script (here Ubuntu), it's as simple as `service nginx reload`, (`service nginx upgrade` if upgrading nginx itself).

Or skip the init script entirely with `/usr/sbin/nginx -s reload`.


I am asking how one would safely upgrade the service that nginx is proxying to, not how to restart nginx.


You could do "blue-green" deployments with port numbers... service rev A is on port 1001, service rev B is on port 1002... deploy new rev... change nginx config to point to 1002... roll back, you repoint to 1001...


So I need to write some code to keep track of which of A or B is running, then something else to template out a nginx configuration to switch between the two. Then I need to figure out how to upgrade the inactive one, test it, flip the port over in nginx, and flip it back if it breaks.

Or

kubectl apply -f app.yaml

Which is less complex?


Depends on what's inside app.yaml, no?

At minimum it requires good-enough health checks so that k8s can detect if the new config doesn't work, and automatically rollback, otherwise you're looking at "no downtime except when there's a mistake" situation.

...and to really check that the health check and everything else in your .yaml file actually works, you will probably have to spin up another instance just so that you can verify your config, unless you like debugging broken configs on live. Well, of course you can always fix your mistake and go "kubectl replace -f", but that kinda goes against the requirement of "no downtime".

I grant that k8s makes it easier to spin up a new instance for testing.
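
For reference, the health check in question is just a readinessProbe in the Deployment's pod spec. A minimal sketch, assuming a hypothetical /healthz endpoint:

    # inside the Deployment's container spec
    containers:
    - name: mycontainer
      image: myimage:1.9.1
      readinessProbe:
        httpGet:
          path: /healthz      # hypothetical health endpoint
          port: 8080
        periodSeconds: 5

The rolling update only retires the old pod once this probe passes on the new one; without it you do indeed get "no downtime except when there's a mistake".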


I'm pretty sure I could write a shell script to do that, with a bunch of grep/sed hackery in under an hour or two. For a single server, personal project, this is probably the simpler approach for me.


sudo apt-get upgrade?

Let's not make it more difficult than it has to be.


The new version fails to restart properly. Your site is now down. You upgraded the package so now you no longer have the old version on disk and can't easily start the previous version.

Let's not pretend that blindly upgrading a package is the same thing as safely rolling out the new version of a service.


I think this is proving my point. They're basically the same, but one is completely dynamic and the other will have to be changed as soon as anything changes.

We've been using Kubernetes in production for almost two years now and I have yet to face a big API change that breaks everything. The core APIs are stable. There's a lot of new stuff added, but nothing that breaks backwards compatibility.

As you just said, if it's a simple project, you can upgrade the infrastructure by clicking "upgrade" on GKE. We've only ever hit problems when upgrading while using the bleeding-edge stuff, plus the occasional bug (once between Kubernetes 1.1 and 1.10 for a large Rails app).

Regarding the yaml document per resource, I mean... that's spaghetti Ansible. If you want to have a proper Ansible setup you will have separate tasks and roles. If we're going down the route of just having "fewer files" you can have all the Kubernetes resources in a single YAML file. I would definitely not recommend that, though.

While Kubernetes is a lot more verbose, it is light years better than Ansible's weird Jinja2 syntax. Even someone that has never heard of Kubernetes can read that Ingress resource and guess what it does. If we're being really picky, actually, the proxy_pass should be pointing to some "test" thing that would have to be an upstream, which would already make the NGINX config more obscure.

I feel like people just hate Kubernetes to avoid change. These arguments are never about Kubernetes' strengths or faults but about why "I think my way is better". You can always do things in millions of ways and they all have tradeoffs. The fact is: ROI on Kubernetes is a lot bigger than any other kind of manual setup.

I'll repeat what I said on another comment: the only reason to do anything by hand today is if it's a play thing


Are you using GKE for your production environment? Kubernetes has lots of great features, and definitely imposes some good operational patterns, but it's no panacea, and for those of us who can't put their applications in the cloud (or don't want to) Kubernetes can be a complex beast to integrate.

Are you running databases in Kubernetes? Do you have any existing on-prem infrastructure that you want to utilize (or are required to, because sunk costs) to integrate with Kubernetes?

Is your company a startup with no existing legacy application that you need to figure out how to containerize before you can even think about getting it into Kubernetes? We've seen benefits from running it, for sure, but I'm honestly not sure if the amount of work it took (and still takes) to make it work for us was worth the ROI.

Sometimes I feel like Kubernetes is a play by Google to get everyone using their methodology, but only they can provide the seamless experience of GKE.


I think the key element here is that you are using GKE. Managed cloud Kubernetes, and self-hosted on-prem Kubernetes are two different beasts. Yes, it's easy when you don't actually have to run the cluster yourself.

Ansible has its warts, but it is great for managing and configuring individual servers.


Yes I agree. But in the context of this article, we're talking about small projects. So I wouldn't expect the need for an on-prem setup.

In fact I've used Kubespray[1] to setup a cluster with Ansible before with mild success. Nothing production ready, but it's actually a good tool for that job. At the end of the day you can't run Kubernetes on Kubernetes :D

[1]: https://github.com/kubernetes-incubator/kubespray


I really think that all Kubernetes related posts and articles need a footnote -

"Kubernetes allows you to scale simply!"*

"Kubernetes takes minutes to set up!"*

*When using GKE or another managed service.


Honestly I cannot refute that!

Haven't had the experience myself outside of GKE so I can only imagine the complexity of it.

It is a young project still. My guess is that with some more time this will improve.


I guess it depends on your definition of "small projects". I agree with the article that if you are interested in getting something out there for people to use and see what kind of interest you get, then adding Kubernetes to the mix doesn't really get you there faster. If anything, I think it would slow you down, unless we are talking about a very trivial app.

I was responding more to the comments I had been reading, not the premise of the article.


How do you configure it with a master Postgres DB that is persistent over reboots, hooked up to two hot slaves? With the ability to do off-site backups?

All deployments I have seen so far have been immutable, infinitely scalable webapps. That is easy.


That's true. Stateless apps are a lot easier. I have myself had to put a stateful service in K8s even before they had StatefulSets and it wasn't a walk in the park.

However doing what you described is hard... anywhere.

Even if you do it all by hand it's going to be hard and most likely brittle. It might take a bit more time/effort to do that on Kubernetes, but I would say you would end up with a better solution that can actually sustain change.

As stated before by many, Kubernetes is not a magic wand. But it does force you to build things in a way that can sustain change without a big overhead.

We all know that different tools have different purposes and I'm not advocating it's perfect by any means. All I'm trying to say is that this idea - that a lot of people have - that Kubernetes is 10x more complex than doing X by hand is an illusion.
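
To make that concrete: plain persistence over reboots is the part K8s gives you fairly cheaply, via a StatefulSet with a volume claim. A sketch with hypothetical names (replication, failover, and off-site backups still have to come from Postgres tooling or an operator):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: postgres
    spec:
      serviceName: postgres
      replicas: 1
      selector:
        matchLabels:
          app: postgres
      template:
        metadata:
          labels:
            app: postgres
        spec:
          containers:
          - name: postgres
            image: postgres:10
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data   # survives pod restarts and node reboots
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi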


I think the number of containers rather than the number of machines should be the leading factor.

IMHO it makes sense for most setups that have multiple micro-services that need to interact with each other. A single node cluster running a single container is kind of pointless; I agree. And you are not going to run much more than that on a micro instance. So, I agree with the main point of the article that this probably is not an appropriate setup for any kind of home setup unless of course you really want to have kubernetes (which would be a valid reason for attempting this).

If you run multiple microservices you have most of the problems that kubernetes solves out of the box, and attempting to solve those by manually cobbling together bits of infrastructure outweighs the financial overhead of running kubernetes. So any moderately small setup where you are in any case going to have 2 or 3 machines running multiple containers, you probably should be looking at kubernetes.

So, if you are on Google or Amazon, hosted kubernetes is definitely worth considering. You probably want a loadbalancer as well. So, at that point you are looking at ~$50-100+ per month anyway for a couple of instances, a LB and whatever else you need (e.g. RDS, S3, etc).

For anything running commercially, that's entirely defensible. Yes you can run cheaper on bare metal, but people tend to forget that all the hours doing devops stuff also cost you. A day of a competent dev will easily pay for running kubernetes for quite some time. Unless your devs are super bored, make them spend their hours on more valuable stuff than reinventing wheels.


Do Kubernetes the Hard Way and soon you might realize it's not really a nice shiny machine. It's a complex system which you can put together in a way that helps... or hurts, completely dependent on your abilities.

https://github.com/kelseyhightower/kubernetes-the-hard-way


>Oh man, the original article went way over the author's head. The point of the original article was that even though Kubernetes is primarily useful for tackling the challenges involved with running many workloads at enterprise scale, it can also be used to run small hobbyist workloads at a price point acceptable for hobbyist projects. Does that mean that Kubernetes should now be used for all hobbyist projects?

Well, it shouldn't be used for any but a very, very few (if any) hobbyist projects.

Which is closer to this post, than the original article.

>But maybe I envision my side project turning into a full-time startup some day.

Facebook managed this just fine as a simple PHP project on some guy's laptop.

https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it


I don't see anywhere that you've explained how the original article "went way over the author's head", and your examples of suitable vs non-suitable small projects aren't in the original article at all; you just decided all of that yourself.

The original article concludes with "It's my contention that Kubernetes also makes sense for small deployments and is both easy-to-use and inexpensive today". This article contends that it's not at all easy to use and price-wise he hasn't compared like-with-like.


>But maybe I envision my side project turning into a full-time startup some day. Maybe I see all the news about Kubernetes and think it would be cool to be more familiar with it. Nah, probably too expensive. Oh wait, I can get something running for $5? Hey, that's pretty neat!

Exactly. Or maybe "I would love to advance my career and work for a larger company that is using Kubernetes, and I can get some hands on experience without breaking the bank."


FYI the last part of the conclusion mentions this as well:

"Do you want to do all of this because you think is fun? Or because you want to learn the technology? or just because? Please, be my guest! But really, would I do all of this just to run a personal project? No thanks."


I wish it had been around 4 years ago. 4 years ago I made a website where users can post 100k messages. I used Meteor which was fine but it was frustrating as hell that it wasn't "complete stack".

I don't know what term to use, but "full stack" apparently just means front end (html/css/js in browser) and backend (server software). Full stack is missing backup and restore, deployment, seamless upgrades (pushing new versions without taking down the service), scaling, testing infrastructure for both front and backend, and staging.

As such, I have a hacked manual backup for my site. Anytime I want to update my site I have to take it offline for at least a moment. If any users were on it they lose their work. If it ever gets too much traffic I'll have to manually figure out how to scale it. It also took at least a week to get it where it is, as in a week doing stuff that wasn't my actual site, just learning how to deploy at all.

I can see no reason all of that can't be 100% provided out of the box and if Kubernetes is the path there I'm all for it.

Ideally I want to do something like

    git clone complete-stack
    install deps
    edit server-main.js (or other lang)
    edit client-main.js (or other lang)
    deploy --init
Then I want

    edit server-main.js (or other lang)
    edit client-main.js (or other lang)
    stage
    test

    edit server-main.js (or other lang)
    edit client-main.js (or other lang)
    stage
    test

    deploy
And

    scale --num-servers=2

I would use it for any web based projects that aren't static servers.

I'm told by people who provide tech support for people using Kubernetes that my dream of all of this being provided out of the box is about 10yr out.


It's getting there. The building blocks are being created one by one -- Minikube (single-node K8s for devs), Helm (packaging for K8S), Operator framework (and individual Operators for self-driving infrastructure). The most recent piece is Heroku's announcement for cloud-native buildpacks, allowing for standardized container images to run your buildpacks.

Stateful workloads (the database servers) are not quite there yet and remains one of the most challenging parts of K8S. We are just starting to see Operators written for specific datastores (Mongodb, Postgresql, Redis, etc.)

I don't know about 10 years ... but it is not that turn-key right now, yet.


Openshift already provides this experience. It's pretty much your own heroku on k8s.


> But maybe I envision my side project turning into a full-time startup some day. Maybe I see all the news about Kubernetes and think it would be cool to be more familiar with it. Nah, probably too expensive. Oh wait, I can get something running for $5? Hey, that's pretty neat!

He says it's fine to do it if you want to learn the technology, but points out (rightly, I think) that if your concern is that you might need it at some point down the road, worry about it at some point down the road and not when you're trying to get started.


It is a really weak argument for Kubernetes for personal projects when the only realistic scenario is when the personal project is to learn Kubernetes.


> If I'm thinking of playing around with a Raspberry Pi or other SBC, do I need to install Kubernetes on the SBC first?

No! Although you could deploy it as a docker container easily.

Simple heuristic: how many containers and machines am I dealing with? More than one machine and one container? Consider moving to Docker Swarm or K8s. A single container? What is there to orchestrate?


Doing a small project is the best way to learn a new system. If you never plan to use Kubernetes then I would agree, why would you use it at all? But if you're exploring it there could be no better way to get your feet wet than with a small project.


Kubernetes is way overkill for personal projects. If a side goal is to learn Kubernetes while deploying your projects, then go for it. If not, docker-compose is good enough and is actually very good for local development.


I totally agree with this article. I'm giving the point of view of a pure developer who knows nothing about DevOps things and managing servers. I kind of know what NGINX is and barely know how to configure something like systemd.

I recently set up a DigitalOcean droplet and set up my blog there to actually understand how it works. It was great because I learned a ton and feel in control. Pretty simple setup - single droplet, Rails with Postgres, Capistrano to automate deploys and a very simple NGINX config. It took me multiple days to set up everything, compared to the 5 minutes Heroku would have required - and it's not as nice as what Heroku offers.

Still, I'd wait as long as I can before moving off something as simple as Heroku for _anything_. I understand it gets expensive quickly, but I really want to see the cost difference of Heroku vs the time spent for the engineering team to manage all the complexities of devops, automated deploys, scaling, and I'm not even mentioning all the data/logging/monitoring things that Heroku allows to add with 1 click.


> I understand it gets expensive quickly, but I really want to see the cost difference of Heroku vs the time spent for the engineering team to manage all the complexities of devops, automated deploys, scaling, and I'm not even mentioning all the data/logging/monitoring things that Heroku allows to add with 1 click.

Well, if you use a k8s cluster on GKE for example, you will have literally all those things by default. Not even a click needed.
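
A sketch of what "by default" looks like in practice, assuming the gcloud CLI plus a hypothetical cluster name and zone:

    gcloud container clusters create my-cluster --num-nodes=3 --zone=us-central1-a
    gcloud container clusters get-credentials my-cluster --zone=us-central1-a
    kubectl get nodes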

IMO running your own Kubernetes cluster for a company is insanity unless you have a very good reason to do so.


GKE is nice, probably the best K8s experience, but it's still a heck of a lot more complicated than Heroku.


Unless you already know how to do it. I can set up a production-ready cluster in under a day and, as long as things are containerized, run them lickety-split. Hardly insane once you know how it works.


An on-prem "production ready" k8s cluster in under a day? What kind of businesses are you setting this up for? Are these all greenfield projects? I'm sorry, but I find it hard to believe that.

We run on-prem k8s, on bare metal and VMs. Integration with existing storage (we use NetApp Solidfire and NFS), load balancers, firewalls, backup strategy, DR, etc. takes weeks, if not months of work. But we may disagree on what "production ready" means.


Who said anything about on-prem? On-prem anything takes weeks and has nothing to do with Kubernetes. AWS, production ready. I could probably do Google too, but it would take a couple weeks to write the automation first. Also, if your on-prem system is already well organized, the Kubernetes portion can also be done in 1-2 days. If it takes you weeks to attach storage, load balancers, and configure your firewalls, none of that has anything to do with a production-ready Kubernetes cluster.


My apologies if I misread what you said; it looked like your reply was to "anyone running their own cluster is insane" in the previous comment. And as I said, to me that means "on their own bare metal servers".

I'm talking about how getting a Kubernetes cluster (of any kind really, but specifically on-prem) _integrated_ and doing useful things with existing legacy workflows inside your company, using existing infrastructure, is a larger task than I see it made out to be.


Key word there being "on-prem".

The hard part of Kubernetes is setting up the cluster. If you are using an existing k8s master, it should be fairly easy.

One day of course seems like a bit of an exaggeration to me :)


I thought the GP comment was replying to this in the GGP comment:

>IMO running your own Kubernetes cluster for a company is insanity unless you have a very good reason to do so.

To me, "running your own" means "on-prem", and it did seem to me that the response of "I can create production ready in a day" was directed at that, which I whole-heartedly disagree with.


> giving the point of view of a pure developer

Kubernetes really looks like it was designed by software developers for software developers: dump all the configs of your services in one place and imagine that they run on the network. The uninteresting parts of the job (like managing nodes and ingress, fixing the internal overlay network and DNS, adding services for centralized logging) aren't mixed in with the actual services. Obviously, package management is solved by using containers (essentially OS images) as the package format.


There’s been a bit of a full circle going on in the industry. PaaSes like Heroku were rejected by many because they couldn't tinker inside the box. Docker and Kubernetes change that, and there will be new Heroku-like experiences built on them.

But for now we are in this weird mode where the Kubernetes momentum is eclipsing even Docker, even though raw K8s reminds me of Linux in the Slackware days. There is so much FOMO that people don't consider Heroku or anything off the Kubernetes wagon, except maybe AWS Lambda.


As someone who knew how to set up a server beforehand, I found Heroku a real pain in the arse to use.


To learn, or to utilize on an ongoing basis? I'm curious what you found difficult, because since remapping my mind to 12-factor years ago, I still haven't found anything as simple as just using Heroku.


I only used it once or twice, so to learn.

Every time I've had an EC2 instance, a Rackspace equivalent, or some other server, I knew what I was doing, apart from what felt like some minor stuff. I guess it's kind of related to the comment that people didn't like not being able to tinker with the box. (Heroku kind of felt like a black box to me, far more difficult to debug than a "normal" server.)


A lot depends on where and how you work. If you are responsible for development and end to end operations down to the VM, K8s and Docker make a ton of sense.

If someone is doing that for you, i.e. roles are separated, or you don't want to take it on and prefer to outsource it, then a higher-level abstraction like Heroku makes more sense. Debugging is different (still possible, just different from if you owned the full box).

This split also accounts for why AWS Lambda is growing in popularity (and debugging is probably its main gating factor).


It is weird to hear developers say this. The tooling around your app is essential to understanding how it runs; why would someone not know this stuff?


Because they don't care: what they're doing is good enough, and at the end of the day they want to go home and enjoy what they like together with their loved ones. Shocking, truly :p

I know this is a dirty thought, here on Hacker News.


I don't think it's a dirty thought; it's a different perspective. But I do suspect that one of the qualities of a good hacker is to be relentlessly curious about how things work, to set something up and then be curious about how it works "under the hood", and to dedicate some serious time to learning it. Inevitably, that has an effect on relationships and general life.


I'm not in the camp I described in the previous comment, I was just making an observation.

> Inevitably, that has an effect on relationships and general life.

I'll make another observation: I don't think so. There are plenty of people who lack curiosity and lead happy lives. I don't think anyone can prove a correlation between curiosity and success in life.

Heck, one could joke that there's a reason most proverbs say the opposite, in almost every language out there ("curiosity killed the cat" in English, "curious people die young", in my native Romanian).


The people that built all this tech that pays your bills were the kind that would think about the sorts of things I bring up. I don't see that in the latest gen of developers.


You're not looking hard enough. Every generation has them, you're just starting to develop "get off my lawn syndrome". If your username is any indication of your age, it's a bit early for that :D


I don't know how to design an IC either...


You do if you're programming in assembly.

Developers need to understand how the layer underneath their code works. If you're writing Java, you should understand the JVM. If you're writing C, you should understand the compiler.


One layer, maybe, but should you be able to deploy any type of web server on every OS if you're writing a web app? I'm not so sure.


> One layer

Yes, that's why I said "layer beneath" and not "layerS beneath"


Correct, which is why I was agreeing with you, but questioning the premise of a couple of comments above yours. I.e., how far up the infrastructure stack should a developer be expected to know things intimately?


Did you try preconfigured Dokku images on Digital Ocean?


I've heard of Dokku as a "run your own Heroku" solution. Is there a good guide for this combo?


The "quick-start" installation process on our homepage is `curl | bash` on a plain Ubuntu/Debian/Centos server (you can also run the commands manually, which are also outlined on that page). If you go through the web installer - which allows you to add your ssh key and setup a global domain - you'll be redirected to our deployment tutorial here: http://dokku.viewdocs.io/dokku/deployment/application-deploy...

- Dokku homepage: http://dokku.viewdocs.io/dokku/
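
To give a rough idea of the flow once the installer has run, here's a hedged sketch -- the app name, host, and config value are placeholders of mine, not from the tutorial above:

    # on the server: create an app and optionally set some config
    dokku apps:create myapp
    dokku config:set myapp NODE_ENV=production

    # on your laptop: add the server as a git remote and push to deploy
    git remote add dokku dokku@your-server.example.com:myapp
    git push dokku master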


It took you multiple days the first time; how long would it have taken you the tenth? Optimizing for the first time is rarely a good approach if you're working with those tools regularly, in my opinion.


I work as an engine mechanic full time, and I'm learning programming as a hobby. Kubernetes to me is like the shade-tree mechanic vs the professional.

Professional mechanics use high grade tools that can cost thousands of dollars each. We have laser alignment rigs, plasma cutters, computer controlled balancing and timing hardware, and high performance benchmarking hardware that can cost as much as the car you're working on. We have a "Kubernetes" like setup because we service hundreds of cars a month.

The shade-tree mechanic wrenching on her Fox-body Mustang, on the other hand? Her hand-me-down toolbox and a good set of sockets will get her by with about 90% of what she wants to do on that car. She doesn't need to perform a manifold remap, so she doesn't need a gas fluid analyzer any better than her own two ears.

I should also clarify that these two models are NOT mutually exclusive. If I take home an old Chevy from work, I can absolutely work on it with my own set of tools. And if the shade-tree mechanic wants to turn her Mustang into a professional race car, she can send it to a professional "Kubernetes"-type shop that will scale that car up proper.


The analogy doesn't quite resonate with me. It's about using the right tool for the job. Most small-scale personal projects, by their very definition, don't require an orchestration framework.


It's an old cliché, but car analogies rarely work for explaining computer phenomena. You won't be running a thousand little Kubernetes cars, and that's what the k8s approach is all about: running thousands of disposable little servers at Google scale.

With any car it's about making that single car more reliable or performant.

K8s doesn't care about the reliability of any single instance, just the uptime of the whole service.


The analogy shouldn't be hobbyists versus professionals.

It's more like you building your own car versus a Toyota manufacturing plant. You may think procuring and programming the robots to be an overkill for a single car, but it makes sense for a factory.


This is an interesting perspective because I view k8s as the "shade-tree" version of a robust cloud platform. It's cheap, quick, dirty, and probably can take off a few of your fingers if not done carefully, but the payoff is in being able to spin up lots of resources very quickly.

What do the pros use, then? I hear of things like DC/OS and OpenStack, and I know that Google's got "Borg", which is like professional k8s.


I work for one of those tech giants that I think falls into your "pro" category in a role that you could call dev ops. I used to work in a much smaller startup. The comparison of a tech giant's stack to large, growth stage startup tech stacks is like comparing trains to trucks. It's a qualitatively different problem because the difference between a few thousand and a few hundred thousand servers is Murphy's law.

In other words, I think there are two answers to your question of "what do the pros use?" The first answer is "Kubernetes, because that's the right tool for the job." The second answer is "My product division has an internal team the size of a growth stage startup, and it's specifically dedicated to solving server scaling problems, and that's just my product division."

Another analogy would be the question "how would an F1 team solve this problem?" One answer is "you don't need an F1 team for that", and the other is "first, hire an F1 team, then have them build all of the custom tooling the F1 car needs."


This doesn't make sense, and it's really the opposite of what you're saying. K8s is hard, but for large scale you need something robust that's tested at that scale. A collection of shell scripts is more like a beginner tool set. Yes, it's easy to learn, but it will break when you get serious.


For large scale you have to first make your application capable of scaling. That's a bit that's missing from most of these conversations. Only the simplest programs can "scale up" by just adding more programs - most have to be re-architected to move all the state in the program to some other service.

That is to say, if scaling is your primary concern, you have a dozen other things more important to fix than your choice to use shell scripts vs. Kubernetes.

And, fwiw, Linux has run professional services somewhere around 10x longer on those "beginner tools" than containers have even existed.


I think you mean "scale out" [1]? These days, it shouldn't really require rearchitecting a system. Almost all modern web frameworks are designed to be stateless anyway, externalizing ownership of state to databases and other services (themselves stateless APIs to databases).

[1] https://en.wikipedia.org/wiki/Scalability#Horizontal_and_ver...


There are a lot of fully professional, large-scale shops not using Kubernetes, so the analogy seems unfortunate, to me. But I guess that shows you how successful the marketing for this stuff has been, if you think that anyone not using it is acting like a "shade tree mechanic."


I've been thinking about setting up a small Kubernetes cluster for hosting some smaller client projects (read: websites, shopping carts, APIs, admin panels).

My current setup uses a couple of Hetzner dedicated machines, and services are deployed with Ansible playbooks. The playbooks install and configure nginx, install the right version of Ruby/Java/PHP/Postgres, and configure and start systemd services. These playbooks end up copied and slightly modified for each project, and sometimes they interfere with one another in subtle ways (different Ruby versions, conflicting ports, etc.).

With my future Kubernetes setup I would just package each project into its own self-contained container image, write a Kubernetes deployment/pod spec, update the ingress, and I'm done.
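
For anyone curious, a minimal sketch of what such a spec could look like (the names, image, and port below are placeholders of mine, not from any real project; an Ingress rule pointing at the Service would be a third, similar block):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myproject
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myproject
      template:
        metadata:
          labels:
            app: myproject
        spec:
          containers:
          - name: web
            image: registry.example.com/myproject:1.0.0   # placeholder image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myproject
    spec:
      selector:
        app: myproject
      ports:
      - port: 80
        targetPort: 8080
    EOF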


If you can find the time to get up to speed on Kubernetes, I would say do it.

I actually have a weirdly similar setup to yours (I run on Hetzner and still use Ansible), and I've written about it, most recently when I switched my single-node cluster to Ubuntu 18.04 [0]. In the past I've also run single-node Kubernetes clusters on CoreOS Container Linux, Arch, and back to CoreOS Container Linux, in that order, from versions 1.7~1.11.

[0]: https://vadosware.io/post/hetzner-fresh-ubuntu-install-to-si...


Thanks, I'll check it out.

I have quite some experience working with Kubernetes clusters for my larger clients. Usually for clients that are big enough to have their own AWS account.

The thing I am still on the fence about is whether I should go for a DIY Kubernetes setup on one or more Hetzner dedicated machines (cheap, more work, less scalable) or if I should just shell out for AWS and run an easily scalable cluster with Kops (which is what I use for some clients) and take advantage of all the AWS goodies like load balancing and RDS.


Well, I think that's more of a cost question -- AWS can get expensive pretty quickly. Three t2.micros (one coordinator, 2 nodes) are absolutely pitiful in terms of processing power, but that's already ~$30/month when you could get a way beefier machine on Hetzner, whether dedicated or cloud (also Scaleway[0]).

I'm a fan of Hetzner because I think their cheap dedicated machines are worth the operational costs for me, and the issues with upkeep I'll face are good for me because that knowledge has value. Also, I want to note that if you actually start subscribing to the immutable infrastructure movement that's going on right now, once you look past all the buzzwords it's a fantastic way to run servers stress-free -- as long as your data is backed up/properly managed, just shoot servers left and right, spend a lot of time to get them into the right state ONCE, and never worry about it again. You can even use Terraform to provision Hetzner. Again, this kind of thinking and the related tooling is catchy/popular right now because it's useful at larger scales, but it can also free you of a lot of worry at lower scale. For example, I have a post on using Container Linux to go from a brand-new machine to a single-node k8s cluster, with one file.

To be honest though, setting up a Hetzner dedicated machine is very very easy -- they've got great utilities. You could even go with Hetzner Cloud and things will be more managed.

I would say go with AWS if you want to experiment with AWS technology as well -- and want to use their value added services. If you run kubernetes on a dedicated machine on hetzner you're definitely not going to get the rest of that, of course.

BTW, kubeadm is better/less complex than kops -- it's almost impossible to fuck it up, but there are subtle things in kops due to the AWS integration that make things ever so slightly more difficult.

[0]: https://www.scaleway.com/pricing/

[1]: https://vadosware.io/post/k8s-container-linux-ignition-with-...


I'd be really interested in how you're approaching persistence? I've also found self-managed clusters provisioned with kubeadm fairly hassle-free until persistence is involved. Not so much setting it up (e.g. Rook is fairly easy to get going with now), but the ongoing maintenance, dealing with transitioning workloads between nodes, etc.


tl;dr - Rook is the way to go, with automatic backups set up -- using Rook means your cluster resources are Ceph-managed, so you basically have a mini EBS -- Ceph does replication across machines for you in the background, and all you have to do is write out snapshots of the volume contents from time to time, just in case you get spectacularly unlucky and X nodes fail all at once, in just the right order to make you lose data. Things get better/easier with CSI (Container Storage Interface) adoption and support -- snapshotting is right in the standard and restore is as well -- so barring catastrophic failures you can just lean super hard on Ceph (and probably one more cluster-external place for colder backups).

I'd love to share! In the past I've handled persistence in two ways:

- hostPath setting on pods[0]

- Rook[1] (operator-provisioned ceph[2] clusters, I free up one drive on my dedicated server and give it to rook to manage, usually /dev/sdb)

While Rook worked well for me for a long time, it didn't quite work in two situations:

- Setting up a new server without visiting Hetzner's rescue mode (which is where you would be able to disassemble RAID properly)

- Using rkt as my container runtime. The Rook controller/operator does a lot of things which require a bunch of privileges, which rkt doesn't give you by default and I was too lazy to work it out. I use and am happy with containerd[3] (and will be in the future) as my runtime however, so I just switched off rkt.

Right now, I actually use hostPath volumes, which isn't the greatest (for example you can't really limit them properly) -- I had to switch from Rook due to my distaste for needing to go into Hetzner rescue mode to disassemble the pre-configured RAID (there's no way currently to ensure they don't RAID the two drives you normally get after the automated operating system setup). Normally RAID1 on the two drives they give you is a great thing, but in this case I actually don't care much about the main server contents, since I try to treat my servers as cattle (if the main HDD somehow goes down it should be OK), and I know that as long as Ceph is running on the second drive I should have reliability -- as long as I have more machines, which is the only way to really improve reliability anyway.

Supposedly, you can actually just "shrink" the RAID cluster to one drive and then remove the second drive from the cluster -- then I could format the drive and give it to Rook. With Rook though (from the last time I set up the cluster and went through the RAID disassembly shenanigans), things are really awesome -- you can store PVC specs right next to the resources that need them -- this is much better/safer than just giving the deployment/daemonset/pod a hostPath.

These days, there are also local volumes[4], which are similar to hostPath but offer a benefit in that your pod will know where to go, because the node affinity is written right into the volume. Your pod won't ever try to run on a node where the PVC it's expecting isn't present. The downside is that local volumes have to be pre-provisioned, which is basically a non-starter for me.
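
For reference, a hedged sketch of what a local PersistentVolume looks like (the path, size, and node name are placeholders) -- the nodeAffinity block is what ties pods using this volume to the node that actually has the disk:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-example
    spec:
      capacity:
        storage: 10Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage       # paired with a no-provisioner StorageClass
      local:
        path: /mnt/disks/vol1               # pre-provisioned disk/directory on the node
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - my-node-1                   # placeholder node name
    EOF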

I haven't found a Kubernetes operator/add-on that can dynamically provision/move/replicate local volumes, and I actually wanted to write a simple PoC one of these weekends -- I think it can be done naively by maintaining a folder full of virtual disk images[4] and creating/mounting them locally when someone asks for a volume. If you pick your virtual disk filesystem wisely, you can get a lot of snapshotting, replication, and other things for free/near-free.

One thing Kubernetes has coming that excites me is the CSI (Container Storage Interface)[5], which is in beta now and standardizes all of this even more. Features like snapshotting are right in the RPC interface[6], which means once people standardize on it, you'll get consistent behavior across compliant storage drivers.

What I could and should probably do is just use a Hetzner storage box[7].

[0]: https://kubernetes.io/docs/concepts/storage/volumes/#hostpat...

[1]: https://rook.github.io/docs/rook/v0.8/

[2]: http://docs.ceph.com/docs/master/start/intro/

[3]: https://github.com/containerd/

[4]: https://kubernetes.io/docs/concepts/storage/volumes/#local

[5]: https://kubernetes.io/blog/2018/04/10/container-storage-inte...

[6]: https://github.com/container-storage-interface/spec/blob/mas...

[7]: https://www.hetzner.com/storage-box?country=us


This is an amazing response, thank you!


Absolutely no problem -- hope you found some of it useful!


This is the sort of thing that I think a containerised deployment platform can really help with, and I'm doing much the same – specifically because it completely avoids interactions between applications' environments.

But I would recommend looking at the Hashicorp stack as a possible alternative, which might be entirely suitable for your use-case without the complexity of Kubernetes. This involves running Nomad and Consul to provide cluster scheduling and service discovery respectively - these are both single binaries with minimal configuration. Then you'd need some kind of front-end load-balancer like nginx or traefik which uses Consul to decide where to route requests.

It doesn't cover all the use-cases and features that Kubernetes does, but it does have the benefit of being much more straightforward to work with, so definitely worth considering!
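
As a rough sketch of how few moving parts that is (dev-mode flags only, not a production setup, and the job file name is made up):

    # service discovery and the scheduler, each a single binary (dev mode for trying it out)
    consul agent -dev &
    nomad agent -dev &

    # describe the workload in a Nomad job file, then submit it
    nomad job run myapp.nomad

    # see where it was scheduled
    nomad status myapp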


One thing to consider is packaging projects as container images, but continuing to deploy them via ansible. That could solve some of your problems without requiring the time investment and complexity of running kubernetes.


Your current setup is probably already quite stable. Why switch to another? You probably underestimate the learning curve and the ongoing effort required to keep your stuff running in a long-term maintainable way on k8s.


I will just link to my recommendation of CaptainDuckDuck:

https://news.ycombinator.com/item?id=18128575


I recommend having a look at DCOS


I recently started working with DCOS, and so far it seems that while it might have been a more mature and ready platform 2-3 years ago, today I have to deal with issues I don't have to care about on K8s.


Curious to know which issues.

We run a DCOS cluster with 40+ machines and we have to deal with pretty much nothing.


So basically, ignore 1/2 of the reasonable problems that are solved in the first article and then look, no need to learn anything!!!

As someone who can set up and run a Kubernetes cluster in my sleep, I can tell you that it is a superb production-ready platform that solves many real-world problems.

That in mind, Kubernetes has constraints also: running networked Elixir containers is possible, but not ideal from Elixir's perspective; dealing with big data takes extra consideration; etc.

All said, if you have an interest in DevOps/Ops/SysAdmin type technologies, learning Kubernetes is a fine way to spend your time. Once you have a few patterns under your belt, you are going to run way faster at moving your stack to production for real users to start using, and that has value.

I think the initial author (not this article, the other one) was just pointing out that you can indeed run Kubernetes pretty cheap, and that is useful information and a good introduction. This article is clickbait designed to mooch off the other's success.


> So basically, ignore 1/2 of the reasonable problems that are solved in the first article and then look, no need to learn anything!!!

I think the point is... do you actually have those problems? A lot of people jump immediately to worrying about having thousands of requests per second when it doesn't make any sense.


Sharing code and getting it running on other collaborators' workstations? Yes, that's a very real developer problem.

Deploying without downtime? Yep, it's nice to have, because your favorite customer will have been testing the site during the exact 2 minutes of downtime in which you deploy it... believe me, Murphy's law rules here.

Staging and production environments that are the same, so I don't get surprises going from local development to production release? Yep, another real problem that will slow the momentum of development.

I suppose if you are developing a personal project of garbage that no one will ever see, then these problems don't exist. But if you are actually developing a product, these problems exist.


> Do you want to do all of this because you think is fun? Or because you want to learn the technology? or just because? Please, be my guest! [...]

Kubernetes is likely here to stay. If you're interested in running a cluster to understand what the hype is all about and to learn something new, you should do it. Also, ignore everybody telling you that this platform wasn't meant for that.

Complexity is a weak argument. Once your cluster is running you just write a couple of manifests to deploy a project, versus: ssh into machine; useradd; mkdir; git clone; add virtual host to nginx; remember how certbot works; apt install whatever; systemctl enable whatever; pm2 start whatever.yml; auto-start project on reboot; configure logrotate; etc. Can this be automated? Sure, but I'd rather automate my cluster provisioning.


I too invite everyone to try it, and I also believe K8s is here to stay. I think K8s makes a lot of sense for lots of workloads, but I don't think it makes sense to maintain a k8s cluster to run personal projects.

About complexity, what you're saying is true, but I think "once your cluster is running" is making a lot of assumptions about what is actually running in the cluster in terms of infra and what workloads you can run there.


I agree with you, I think K8s is great to learn so you know what it can do, but I wouldn't use it for personal projects. For that, I recommend something like Dokku, which is very easy to get started with.

For Kubernetes, I found the docs a bit lacking; the starting concepts are very easy to grok, but the docs obfuscate them. I wrote a very short article on the basics [0], for anyone who might be interested in learning. After reading the article, reading the docs should be much easier, as you'll know the terms much more intuitively.

[0]: https://www.stavros.io/posts/kubernetes-101/


> About complexity, what you're saying is true, but I think "once your cluster is running" is making a lot of assumptions about what is actually running in the cluster in terms of infra and what workloads you can run there.

Already answered in a way:

> Can this be automated? Sure, but I'd rather automate my cluster provisioning.

If I need more computational power or a specific 3rd-party service that I don't have available at this point, I simply tear down my current cluster and deploy it elsewhere.


> Complexity is a weak argument

I see you've never tried to upgrade a running Kubernetes cluster or been in an on-call schedule for one. It's a new technology that is still maturing, but it has a lot of moving parts, all of which require a fair bit of understanding and which change on a regular basis.

Hell, just a few months ago the ACM agent totally got rewritten and now you have a choice between alpha software or a deprecated project!


I treat my cluster as immutable. My setup is open source: https://github.com/hobby-kube/guide


> just write a couple of manifests to deploy a project, versus...

Whenever anyone says "just do something" these days, it usually means that it hasn't been thought through properly. Is that only my personal experience?


No, I have the same reaction, too. Especially when it's nestled into a paragraph that, essentially, starts off by highlighting that something is an example of essential complexity.


Totally agree with the author, for my side projects in Node.js, I use the following:

- pm2 for uptime (pm2 itself is set up as a systemd service; it's really simple to do and pm2 can install itself as a systemd service)

- I create and tag a release using git

- on the production server, I have a little script that fetches the latest tag, wipes and does a fresh npm install, and does a pm2 restart (roughly sketched below).

- nginx virtual host with SSL from Let's Encrypt (setting this stuff up was a breeze given the amount of integration and documentation available online)

Ridiculously simple and I only pay for a single micro instance which I can use for multiple things including running my own email server and a git repo!

The only semi-problem that I have is that a release is not automagically deployed. I would have to write a git hook to run my deployment script, but in a way I'm happy to do manual deployments as well, to keep an eye on how they went :)
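
For what it's worth, a hedged sketch of what such a deploy script could look like (the repo path and pm2 process name are placeholders, and "wipe" here just means removing node_modules):

    #!/usr/bin/env bash
    set -euo pipefail

    cd /srv/myapp                                   # placeholder path to the checkout
    git fetch --tags origin
    latest=$(git describe --tags --abbrev=0 origin/master)
    git checkout "$latest"

    rm -rf node_modules                             # the "wipe" before a fresh install
    npm install --production

    pm2 restart myapp                               # placeholder pm2 process name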


Honestly, you could do that in almost the same time on Kubernetes.

I understand why people might not want to invest the time into learning a new technology, but that's not a reason to say it's a bad fit. If you know how to use Kubernetes, writing those bash scripts and writing a few YAML files take basically the same time, and the end result will be vastly superior on Kubernetes.


Good luck setting up an email server on kube. Or a git repository for that matter


I would say: good luck setting up an email server anywhere

Really not sure what Kubernetes has to do with this argument though.


Why exactly do you think someone needs luck with that? I'm honestly curious.


Drone CI could make this whole process automated and preserve your ability to inspect the logs.


I looked into using Kubernetes for my personal servers, but I abandoned the idea when I saw that the minimal Kubernetes setup uses more compute resources than all the services [1] it's supposed to manage combined (e.g. 0.5 GiB RAM vs. 0.25 GiB, which is substantial on a 1/1 VM). And that's before you consider that a single-server setup is not The Right Way (TM) in k8s land.

[1] Gitea (Github clone), Murmur (voice-chat server), Matrix Synapse (instant messaging), Prosody (XMPP), nginx (for those services and for various static websites)


IMHO Kubernetes only makes sense if you can, or want to, run multiples of things that are either stateless or clustered in some way, or another copy for a different purpose.

Run two instances of something if you want to survive a single crash or a node update. Run another copy of your application stack if you want to try out a different version or config.

Without looking at the docs, most of the things in your list are single-instance stateful applications, so unless you plan to run another copy of them for a different purpose, K8S is overkill.


Kubernetes is an operating system. Using it to run your software is as overkill today as running your own Linux server was overkill before that. Maybe you just need to run on Heroku and don't need the complexity of writing systemd service files?

In the end, the steps you take to deploy with rsync and run your systemd service are the same (conceptually) as the ones you'd take to run on K8S, but translated to some YAML and a docker push. In one case you need to learn a new paradigm, in the other case you deal with something you already know. Not having to learn something new is an argument, but it doesn't mean your bare-Linux approach is simpler than the K8S approach. You just know it better.


Kubernetes is just one of many development ecosystem tools that solve real problems and, once you know them, make your life easier. The arguments in this article can apply to any development tool or practice.

Why separate your code into multiple files? Why write tests? Why use a code linter? Why use virtual environments? Why write a Makefile?

If you're working on a small personal project, or you're a newer developer learning the ropes, or the project is temporary, not important, doesn't need to scale, etc. then it's simply a matter of personal choice. It doesn't make sense to get bogged down learning a lot of tools and processes if there's no compelling business need and you're just trying to get the job done.

If you already know how to use these tools, though, they usually make your life a whole lot easier. I learn how to use complex systems in my career, where they're necessary and helpful. I apply these same tools and practices on my personal projects now, because once you know how to use something like Kubernetes, there's little cost to it and many of the benefits still apply.


> tools that solve real problems and, once you know them, make your life easier.

Yep, I think you nailed it here.


In my opinion, the best technology for personal projects is the one that you don't know yet. They are a great opportunity to play around with stuff, which really is the only way of learning something new.

Unless the personal project is something that you really care about, potential startup or something like that, then obviously you choose something that you are already proficient in because then it's about getting stuff done and moving forward.

So while it may make sense to discuss what technology is good or bad for some kind of companies, I think we won't arrive at any ultimate conclusion like "X is good/bad for personal projects".


I agree that Kubernetes for personal projects is likely going to be totally overkill for many, but I disagree that containers themselves are overkill, which this author also suggests. These are arguably two separate issues entirely, and lumping them together is extremely misleading. I happily run all my (very small) side projects in containers without Kubernetes and it's really pretty simple to do so.

As soon as this author mentioned he was happy with using Ansible, systemd, etc. instead (which are all reasonable tools for what they are), he lost me - this is collectively much more work for me as the sole developer than a simple Docker container setup for virtually all web app projects in my experience. If you understand these relatively complex tools, you can likely learn Docker well enough in about an hour or two; the payoff in time savings in the future will make this time well spent.

In my experience "Dockerising" a web app is much, much less time consuming than trying to script it in Ansible (or Chef, Puppet, <name your automation tool>) and of course much less error prone too. I've yet to meet an Ansible setup that didn't become brittle or require maintenance eventually. If you are using straight forward technologies (Ruby, Java, Node, Whatever) your Dockerfile is often just a handful of lines at most. You can even configure it as a "service" without having to bother with Systemd service definitions and the like at all.


Ansible + Docker play well together and are both a weekend's effort to learn and use on a side project. Both work fine for me on a cheap dedicated machine. Ansible is good documentation of how the bare-metal host is configured, including the Docker setup and compose-like config[0]. Docker handles app setup, isolation, and local networking nicely.

Playing with Kubernetes on a private project, then, would only have résumé value for me.

[0] https://docs.ansible.com/ansible/latest/modules/docker_conta... (check other docker modules too)


Out of curiosity, how do you deploy new versions of your app without downtime (start new containers, wait for the new containers to be ready, switch traffic to the new containers, shut down the old containers)?


For me personally, zero-downtime upgrades are a little beyond the scope of a "personal project" and veer into something more production-quality.

If I really had to for one of these, I'd probably just do something at the load balancer to start routing users to the new container stack, then shut down the old ones, much as you might have done in the pre-container days. I can just wait the old-fashioned way (by sitting in my chair for a minute) for them to start.


Ok, this is basically what I do with an Ansible script, but I see it as a bit messy and non-standard, which is why I'm attracted to Docker swarm mode and Kubernetes (and maybe Nomad).


Fair, but I think it's arguable there isn't really a "standard" way for container orchestration, and Docker Swarm to me is starting to feel like a dead horse, despite protestations to the contrary from Docker. The requirements of different software always make each project's needs for zero downtime upgrades different enough, especially if you are dealing with legacy software.

Pick the one that works best for you and your projects goals (within reason...).


Right, there isn't really a "standard". There are so many existing solutions: Kubernetes, Docker swarm mode, Nomad, Spinnaker, etc. What I meant by "standard" is something used by more than one team :-)

Agreed about Docker swarm mode feeling a bit abandoned.

Do you have any recommendation for a solution that I would have missed?


Frankly the article is filled with FUD and the author justifies everything with "i think/what if/my way is fine for me".

You don't need to run a new cluster for every project. You can deploy multiple projects in a single cluster. I was running close to 5 different projects in a single cluster, backed by about 3-6 machines (machines added/removed on demand).

Kubernetes is basically like your own Heroku. You can create namespaces for your projects. No scripts. You can deduce everything (how a service is deployed, what the config is, what the architecture is) from the config files (YAML).
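
A hedged sketch of what that looks like day to day (the project and directory names are made up):

    # one namespace per project, all in the same cluster
    kubectl create namespace blog
    kubectl create namespace shop

    # deploy each project's manifests into its own namespace
    kubectl apply -n blog -f blog/k8s/
    kubectl apply -n shop -f shop/k8s/

    # everything about one project is inspectable in one place
    kubectl get all -n blog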

> Is a single Nginx virtual host more complex or expensive than deploying a Nginx daemon set and virtual host in Kubernetes? I don't think so.

Yes it is. I wonder if the author has actually tried setting this up themselves. I do realise I had similar opinions before I had worked with Kubernetes, but after working with it, I cannot recommend it enough.

> When you do a change in your Kubernetes cluster in 6 months, will you remember all the information you have today?

Yes, why does the author think otherwise? Or, if this is a real argument, why does the author think their "ansible" setup would stay at the top of their head? I had one instance where I had to bring a project back up on prod (it was a collection of 4 services + 2 databases, not including the LB) after 6-8 months of keeping it "down". Guess what: I just scaled the instances from 0 to 3, SSL is back, all services are back, everything is up and running.

This is not to say you wont have issues, I had plenty during the time i started trying it out. There is a learning curve and please do try out the ecosystem multiple times before thinking of using it in production.


> Frankly the article is filled with FUD and the author justifies everything with "i think/what if/my way is fine for me".

It is just my opinion after all. I'm just trying to share my thoughts :)

> Yes it is. I wonder if the author has actually tried setting this up themselves.

I've used K8s for months in production, maintaining a few clusters at my previous job.


Warning to those who think Fargate is green pastures: it has its own learning curve. Also, it costs roughly 1.8-2.5x the price of standard EC2 for the convenience. Don't waste your money on it for long-running containers that will rarely need to scale.


Thanks for sharing your thoughts! I wrote about Fargate because I like the idea of having a service that manages both masters AND workers, where you only need to care about the API, but I haven't really tried it yet. That was my impression as well, though. Even the use cases mentioned on the pricing site are just containers running for a few hours a day, not long-running services like servers.


The smallest fargate container in us-east-1 will cost you USD 13 per month if you never shut it down.

Avoid using a load balancer as they are quite pricey (although it will allow you to create and use auto-managed SSL certificates for free.)

Of course you will also pay for egress traffic.

The nicest part of Fargate is that:

* you can define your whole cluster using a docker-compose like format.

* you can manage your cluster using the ECS CLI. No extra tool needed.


This whole line of debate is really getting tiresome. Kubernetes has proven its value in production use cases across a wide variety of application domains. That doesn't mean everybody should be using it, any more than the proven value of containers means that everyone should be deploying in them. I've been working with k8s for three years and run multiple production clusters, but if I had some little thing I might very well toss it up on a paas like app engine, or just install it on a free micro instance as the OP suggests. Or... maybe I would create a cluster and run it there. Point is kubernetes is an alternative that I can take advantage of where it makes sense because I've gotten some experience with it. It might make sense for you, it might not, but it's not essential that all developers immediately come to agreement on whether or not all software projects should migrate to k8s by tomorrow.


I wholeheartedly agree with this. It's one of many HN debates that boil down to whether X is the right tool. Thing is, you can't judge the usefulness of a tool without factoring in its users. If you and your team have experience with Kubernetes and its ecosystem, you'll have no problems reaping its advantages even for small deployments. If that's not the case, then by all means, pick something else.


I can't agree more with the author ;).

In my day-to-day work I am 100% dedicated to automating the Kubernetes cluster lifecycle, maintaining clusters, monitoring them, and creating tools around them. Kubernetes is a production-grade container orchestrator; it solves really difficult problems, but it brings some challenges too. All of its components work distributed across the cluster, including network plugins, firewall policies, the control plane, etc. So be prepared to understand all of it.

Don't get me wrong, I love Kubernetes and if you want to have some fun go for it, but don't use the HA features as an excuse to do it.

But overall saying "NO" to rsync or ansible to deploy your small project just because it's not fancy enough it sounds to me like "Are you really going to the supermarket by car, when there are helicopters out there?"

Great article!


For random toy projects, spinning up a whole Kubernetes cluster is absolutely overkill (unless part of the project is learning Kubernetes). The thing is as you get further along, for some applications, it becomes harder and harder to move to a container-based design as you have to unwind all the weird dependency mappings. I've got an app I've been involved with containerizing for a client at work, and they're dead set on sticking with an Ubuntu 14.04 base container, because they legitimately don't know if it'll even function on a more modern base, and don't feel they can spare the development cycles to figure it out. Thing is, it started as a toy application, deployed to a server by manually SSHing in and doing a git pull from the repo (not even rsync!) and restarting all the services, and that's still how it's deployed in production today.

Containers (and thus Kubernetes) aren't the magical solution to every problem in the world. But they help, and the earlier you can get to an automated, consistent build/deploy process with anything that'll actually serve real customers, the better off you are. Personally, I'd rather design with containers in mind from day one, because it's what I'm comfortable with. There's nothing wrong with deploying code as a serverless-style payload, or even running on a standalone VM, but you need to start planning for how something should work in the real world as early as you reasonably can.


FWIW, cedar-14 stack is Ubuntu 14.04 and that's been the base of Deis Workflow (now Hephy Workflow) for years. We've been meaning to upgrade to Heroku-16 stack (and eventually Heroku-18) but our resources are limited too, and we've had to fight other dragons like getting a website together, and figuring out the build system. (Deis Workflow was EOL'ed last year, and Team Hephy is the fork/continuation of development, which we can do because Deis developers were all gracious enough to keep everything OSS.)

So, back to the point, I'm sure you couldn't deploy your app on Heroku if that's your requirement (because cedar-14 is deprecated, and not available for new deployments anymore) but if you seriously wanted to try containerizing it onto Kubernetes, and if you don't have other obstacles to 12-factor design that you're also not prepared to tackle, then Hephy Workflow v2.19.4 might actually work for you.

https://teamhephy.com and https://docs.teamhephy.com

I'm sure this probably won't work for you, for reasons you may not have explained, but ... maybe you'd like to look?

I'm not doing a great job selling it, the one redeeming quality I've mentioned is that it runs an outdated stack that you need ;)


The point about complexity is exactly right.

Every new thing that you add, adds complexity. If that thing interfaces with another, then there is complexity at the interfaces of both.

Modern tools that atomise everything reduce density (and thus complexity), but people aren't paying attention to the number of abstractions they are adding and their cost.


Dokku works quite well for personal projects and is easy to get started with even without much devops experience: https://pawelurbanek.com/rails-heroku-dokku-migration


I don't even want to use it for commercial projects.

It needs a certain scale before the overheads are worth it.


Didn't expect this comment to come back from being voted down so much! Guess it splits opinion.


People are crazy (literally) about Kubernetes.


Why is there always this foregone conclusion that everyone is going to do their hobby projects on AWS or GCP or paid cloud platforms? You can get fast baremetal servers off the gray market super cheap, pay once and you're set.


Also something like Linode, Digital Ocean or even NearlyFreeSpeech are all happy compromises. That way you can have a static IP and not have to worry about power, cooling, or networking.


Containers can be easy to use, once you drop devops. Containers are simple, a much more flexible alternative to VMs.

Unfortunately, the devops community always wanted to promote itself as the only option for containers, and even though these tools were based on the LXC project, they did not explain the technical decisions and tradeoffs made, as they did not want users to think there are valid alternatives. And this is the source of the fundamental confusion among users about containers.

Why are you using single-process containers? This is a huge technical decision that is barely discussed; a non-standard OS environment adds significant technical debt at the lowest point of your stack. Why are you using layers to build containers? Why not just use them as a runtime? What are the tradeoffs of not using layers? What about storage? Can all users just wish away state and storage? Why are these important technical issues about devops and alternative approaches not discussed? Unless you can answer these questions, you cannot make an informed choice.

There is a culture of obfuscation in devops. You are not merely using an nginx reverse proxy or HAProxy but a 'controller'; using networking or mounting a filesystem is now a 'driver'. So most users end up trying Kubernetes or Docker and get stuck in the layers of complexity, when they could benefit from something straightforward like LXC.


I feel that Kubernetes has a lot of "upfront" cost that needs to be tackled - containerization, manifests for all the pods you want to set up, potentially setting up the right persistent storage if needed, user access, logging, etc. And this is still if you use a "hosted" solution with Amazon/Google/Microsoft, if you set it up yourself there is a ton more complexity.

Using something you are familiar with, even if it's just a 10-line bash script, a simple virtual private server, and adding an nginx config there, is usually faster than having to orchestrate everything. If you want to invest the time in setting up Kubernetes for all your personal projects, it would probably make sense.

Basically, is it worth it? https://xkcd.com/1205/


You could also have said that it is a very complex system where one still just runs Bash scripts to solve exactly the same problems as on bare metal, VMs, etc.


That's why I never use computers, everything can be done with pencil and paper, it just takes a bit longer.


Can't hear you, still waiting for your reply letter! :-D


I started using Kubernetes with GitLab months ago on the free account. Today I'm using it for a side project. I've never used kubectl since then. The project is very simple, but it is still split into 3 different repos (all connected to the same cluster). Even though the documentation is not clear about how to connect different repos to the same cluster, after a couple of hours of clicking some buttons in the GitLab interface, Auto DevOps was enabled in all 3 projects.

At the office we do not work with Docker, containers, cloud, etc.; we run legacy ASP.NET 2.0 on-premise without any kind of automation (just a couple of us coordinating the releases and copy-pasting onto the customer's Windows Server 2008).

Kubernetes for personal projects? In my case, after 10 years of on-premise deployments, VMware, SQL clusters, web.config, IIS, ARR, and the rest of the things related to them: YES!

I absolutely want 3 hosts for less than $100, a GitLab account for $4, a free Cloudflare account, code, and deploy.


Hello, Community Advocate from GitLab here.

We are glad to hear that you like using GitLab!

Regarding the documentation, have you checked out the following doc? https://docs.gitlab.com/ee/user/project/clusters/index.html#...


I would say to the author that for most small/personal projects, Dokku is probably about as extreme as you want/need. Use Docker for (Windows|Mac|Linux) locally as needed, and use Dokku for your deploy target. A $40 DO/Linode VM will go a long way for small-scale work. I've even set up round-robin load-balanced deployments, where the app just deploys to two hosts with the exact same config. Works great on smaller things.

Of course, if you're in a workplace on a project likely to see more than a few hundred simultaneous users in a given application, definitely look at what K8s offers.

Edit: as to deploys, get CI/CD working from your source code repository. GitLab, Azure DevOps (formerly VSTS), CircleCI, Travis and so many others are free to very affordable for this. Take the couple hours to get this working, and when you want to update, just update your source repo in the correct branch.


Having to worry about redundancy and scaling out on a one-man personal project is a very enviable problem to have. Personally, I'm just going to stick with some kind of paas that gives me a managed IIS or Apache type webserver that I don't have to frig with, and focus my energies on actually building the project.


I don't get why Docker Swarm mode is so underrated. Docker Compose is excellent for local development, and Docker Swarm mode uses the same file and is almost the same thing. Some minor additions in extra docker-compose files and it's ready for production in a cluster. For the same $5 in a full Linux VPS with Linode or DigitalOcean (or even the free tier in Google Cloud) you can get a full cluster, starting with a single node, including automatic HTTPS certificates, etc. Here's my quick guide, going from a fresh Linux install to a prod server in about 20 min, including a full-stack app: https://github.com/tiangolo/full-stack/blob/master/docker-sw...
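
For anyone who hasn't tried it, the jump from Compose to Swarm mode is roughly this (the stack name is a placeholder, and the compose file needs to use the version 3 format):

    # turn a single $5 VPS into a one-node swarm
    docker swarm init

    # deploy the same compose file you use locally as a "stack"
    docker stack deploy -c docker-compose.yml myproject

    # inspect or scale services later
    docker service ls
    docker service scale myproject_web=2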


What are people classing as personal projects here? I have a bunch of raspberry pi's running docker that I throw some things on to run, and have some custom scripts to pull a new image and restart the container when I want to do an update.

But they're tiny, tiny things that are very personal (i.e. they have 1 user - me)

If you're getting to the point where you need to scale things using a Kubernetes cluster or whatever, it seems to me like that thing has graduated from "personal project" to an actual product that needs the features of Kubernetes like resilience and so on.

I mean, I'd love the idea of having a Kubernetes cluster to throw some things onto, but I really don't have the patience to set it all up right now; it seems like way too much cost and effort.


I have deployed by hand, with Capistrano, with Chef, with Heroku, with systemd, with Docker, with AWS EC2, and with Kubernetes.

Like everything, there are tradeoffs. If there were a fairly easy way to do a one-node Kubernetes setup (say, Minikube), I would probably just go that route. One doesn't have to use the full feature set of Kubernetes to get one or two things that are advantageous.

As it is, I set up Minikube for the dev machines for the team I am on. I might consider Kubernetes for my personal side project if I knew Minikube would do well on machines under 1 GB of memory (it doesn't really).

The pre-emptible VMs that cost less than $5 is interesting, and I might do something like that.


Good response. I agree with the intent of the article. For one, I have a project where I just use Bash to set everything up (no containers whatsoever). It's simple and convenient. I have it set up with Let's Encrypt, SELinux, and git push deploy. The whole script is maybe like 100 lines of Bash + 2 configs (nginx and an SELinux policy module).

For anybody who is interested in understanding these basic building blocks, I decided to write https://vpsformakers.com/.
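
The git-push-deploy part in particular is tiny -- a hedged sketch of the idea (the bare repo path, work tree, and service name are placeholders):

    # on the server: a bare repo plus a post-receive hook
    git init --bare /srv/git/myproject.git

    cat > /srv/git/myproject.git/hooks/post-receive <<'EOF'
    #!/usr/bin/env bash
    set -eu
    GIT_WORK_TREE=/srv/www/myproject git checkout -f master
    systemctl restart myproject          # placeholder service name
    EOF
    chmod +x /srv/git/myproject.git/hooks/post-receive

    # on your laptop: push to deploy
    git remote add prod user@your-server:/srv/git/myproject.git
    git push prod master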


I read the article. I'm sorry, but my impression of it is the following: why use Kubernetes in your personal projects when you have no idea about it? One thing I have experienced after working with Docker for two years is that once you know it, you will put every service inside a container. It only takes 5 minutes to do and the benefits are huge. Kubernetes might be overkill, but containerizing the apps is another thing.


Back when I was trying to do a startup full-time, I avoided Kubernetes because of the steep learning curve. Now that I'm back to being a regular employee, I've learned Kubernetes out of necessity, and it's great. So if I go back to working on the startup, I'll do it on Kubernetes, because it will save me time and ops grief. But it will save time and grief because I already know it now.


I’m itching to replace my home server’s FreeBSD with Linux and Kubernetes. I use it (& build dev tools for others) at work plenty so for me, the learning curve is in the past. I’m not sure if I would recommend this journey for others, but I also wouldn’t recommend FreeBSD to anyone, either. In both cases, you know what you’re getting in to - something complex, opinionated, powerful, and industrial strength.


You'll know this already, but I'd say when you're coming from FreeBSD you'll be disappointed with Docker in particular, because it serves no purpose in the (usually) well-organized BSD world, where the good stuff is built from source and developed to POSIX guidelines most of the time anyway. Docker is just a workaround for the perceived mess of shared libraries in the Linux world of multiple OS vendors (by not using shared libs in the first place, which could also be solved by statically linking everything), and FreeBSD's jails are IMHO superior as a sandbox technology anyway.


I found the jails ecosystem as old and creaky as FreeBSD itself; the technology is good, and everything makes sense, but after using Docker on Mac and Linux for a while, I've started to prefer the lighter-weight and more user-friendly abstractions.

If there was a Jailfile equivalent for FreeBSD and a command-line tool with the same interfaces as docker, namely `docker run --rm -it ...`, I might be staying on FreeBSD.


Try Unraid on your home server: it boots from USB and runs in memory; no RAID risks, but you can have a network share span several HDDs, use parity HDDs, and use an SSD for cache.

No Kubernetes involved - just a web interface to run containers and install Docker containers as "apps" on your server. And Unraid is Linux; you can, but don't need to, tinker.

Unraid is how I started using Docker and became happy friends with my home server again. (tm)


I think you misunderstand me: my objective is to tinker. Before, I was tinkering with FreeBSD, now I will be tinkering with Kubernetes. Thanks for the recommendation though.


I agree with the idea the post is trying to convey, but not with the justifications. I think it's wrong to compare Kubernetes with rsync or Ansible; it solves a different problem, container cluster management. A more appropriate comparison would have been against simpler solutions in the same domain, such as Docker Swarm or Nomad.


Or with something even simpler, like condo (https://github.com/prepor/condo)


I think the whole point of the article is that k8s solves a different problem: orchestrating large sets of images if your application uses a significant portion of a datacenter.

The problems one _actually_ has on a personal project are indeed solved with simple tools like rsync.


Kubernetes has more advantages than pure horizontal scaling capacity. There are services, secrets, networking, etc., which are useful for small projects in the same way as for big ones. I agree that it can be overkill, but I would not throw away Kubernetes entirely based only on assumptions about the size of the project.


I am sort of a kubernetes person at work, and I just wasted 3 hours trying to get a cluster up with Rancher. Everything worked fine, except somehow the network started isolating namespaces and the nginx ingress couldn't reach my service.

So I'm calling it quits for now. Just running the cluster requires a small ops team.


For me, the overhead also played a big role: just to get a bare Kubernetes cluster you already need 3 nodes + 1-2 load balancers.

In case you are using GKE, you actually need two ingresses to support IPv6 + IPv4

This adds up to like 10 times the cost of a single droplet. For personal projects this seems kind of wasteful to me.


He makes some valid points, but I think if you want to write a blog entry about why "B" is not good for a case and you'd rather use "A" (which you already know), then you should at least try technology "B"; it doesn't seem like the author has even tried k8s.


I absolutely love docker for development. It saves so much set up time, and of course is handy later.


If you can devote some time and learn the underpinnings, Kubernetes is great for personal projects. I personally use it for 1 business website, 1 blog, two 3-tier applications (the backends are SQLite though), and one 3-tier client project. Where Kubernetes shines is that it handles most things in a principled and self-consistent manner -- once you've made your way up the learning curve, you can think in terms of Kubernetes without having many hiccups.

I'd argue that a lot of the complexity people find in Kubernetes is essential when you consider what it takes to run an application in any kind of robust manner. Take the simplest example -- reverse proxying to an instance of an application, a process (containerized or not) that's bound to a local port on the machine. If you want to edit your nginx config manually to add new upstreams when you deploy another process and then reload nginx, be my guest. If you find and set up tooling that helps you do this by integrating with nginx directly or with your app runtime, that's even better. Kubernetes solves this problem once and for all, consistently, for a large number of cases, regardless of whether you use haproxy, nginx, traefik, or whatever else for your "Ingress Controller". In Kubernetes, you push the state you want your world to be in to the control plane, and it makes it so or tells you why not.
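To make the "push the desired state" point concrete, routing a new hostname to a service is roughly the sketch below (the hostname and service name are invented; the Ingress API group has moved around over the years, this is the current networking.k8s.io/v1 shape):

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
    spec:
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80
    EOF

Whatever ingress controller you happen to run (nginx, traefik, haproxy) watches for that object and rewrites its own config; you never touch an upstream block by hand.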

Of course, the cases where Kubernetes might not make sense are many:

- Still learning/into doing very manual server management (i.e. systemd, process management, user management) -- ansible is the better pick here

- Not using containerization (you really kinda should be at this point; if you read past the hype train, there are valuable tech/concepts underneath)

- Not interested in packaged solutions for issues that Kubernetes solves in a principled way but that you could solve relatively quickly/well ad hoc.

- Launching/planning on launching a relatively small amount of services

- Are running on a relatively small machine (I have a slightly beefy dedicated server, so I'm interested in efficiently running lots of things).

A lower-risk/simpler solution for personal projects might be something like Dokku[0], or Flynn[1]. In the containerized route, there's Docker Swarm[2] +/- Compose[3].
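For a sense of how small the Compose end of that spectrum is, a two-service stack is on the order of the sketch below (the image names, port mapping and env var are arbitrary, not from any particular project):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
      app:
        image: example/myapp:latest      # hypothetical application image
        environment:
          - DATABASE_URL=sqlite:////data/app.db
    EOF

    docker-compose up -d    # or, with swarm mode: docker stack deploy -c docker-compose.yml mystack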

Here's an example -- I lightly/lazily run https://techjobs.tokyo (which is deployed on my single-node k8s cluster), and this past weekend I put up https://techjobs.osaka. The application itself was written generically, so all I had to do for the most part was swap out files (for the front page) and environment variables -- this meant that deploying a completely separate 3-tier application (to be fair, the backend is SQLite) only consisted of messing with YAML files. This is possible in other setups, but the number of files and things with inconsistent/different/incoherent APIs you need to navigate is large -- systemd, nginx, certbot, docker (instances of the backend/frontend). Kubernetes massively simplified deploying this additional, almost identical application in a robust manner. After I created the resources, the various bits of Kubernetes got around to making sure things could run right, scale if necessary, retrieve TLS certificates, etc. -- all of this is possible to set up manually on a server, but I'm also in a weird spot where it's something I probably won't do very often (making a whole new region for an existing webapp), so maybe it wouldn't be a good idea to write a super generic ansible script (assuming I was automating the deployment, just not with kubernetes).
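The "swap the environment variables" step could look something like this between two otherwise identical Deployments (the names, image and variable below are invented for illustration, not the actual manifests):

    # osaka.yaml is identical to tokyo.yaml except for the name and one env value
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jobs-osaka                      # tokyo.yaml: jobs-tokyo
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: jobs-osaka
      template:
        metadata:
          labels:
            app: jobs-osaka
        spec:
          containers:
            - name: web
              image: example/jobs-site:latest   # same image in both regions
              env:
                - name: REGION
                  value: "osaka"                # tokyo.yaml: "tokyo"
    EOF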

Of course, Kubernetes is not without its warts -- I have more than once found myself in a corner off the beaten path, thoroughly confused about what was happening, and sometimes it took days to fix; but that's mostly because of my penchant for using relatively new/young/burgeoning technology (for example kube-router recently instead of canal for routing), and the lack of business value in my projects (if my blog goes down for a day, I don't really mind).

[0]: http://dokku.viewdocs.io/dokku

[1]: https://github.com/flynn/flynn/

[2]: https://docs.docker.com/engine/swarm/

[3]: https://docs.docker.com/compose/


This seems like a grumpy and lame rationalization of not wanting to learn something new


Yes, there is a learning curve, but once you have your system in place (for local Kubernetes development as well as deployment), then even for websites it is a breeze. But yes, you still need to be organized.


What about using Kubernetes to manage several personal projects simultaneously? Different namespaces, but with a single point of administration.


> Of course, they won't work for any project.

I assume "any" should be "every"?


Any as in just any.


Sure. It seems much more ambiguous written as it is though.


Thank you for writing this. We've reached peak Kubernetes fanboyism.


Minor word nit: "alright" is not a word. Or shouldn't be. It's not "all right".


kubernetes for some, miniature american flags for others.


"I think the point I'm trying to make is: do you actually need all of this?"

Yep! For anything which goes beyond the initial viability test, I make an OS package. SmartOS has SMF, so integrating automatic startup/shutdown is as easy as delivering a single SMF manifest and running svccfg import in the package postinstall. For the configuration, I just make another package which delivers it, edits it dynamically and automatically in postinstall if required, and calls svcadm refresh svc://...

It's easy. It's fast. The OS knows about all my files. I can easily remove it or upgrade it. It's clean. When I'm done, I make another ZFS image for imgadm(1M) to consume and Bob's my uncle.
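For readers who haven't seen SMF, the plumbing being described is only a few commands in the package scripts; the manifest path and service name below are invented placeholders:

    # package postinstall: register the service and start it
    svccfg import /opt/local/lib/svc/manifest/myapp.xml
    svcadm enable -s svc:/application/myapp:default

    # config package postinstall: pick up the edited configuration
    svcadm refresh svc:/application/myapp:default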


"Oh man, the original article went way over the author's head."

No, the author of the Kubernetes article missed the point so completely, so utterly, that it's not even funny: none of those Kubernetes complications are necessary if one runs SmartOS and, optionally, as a bonus, Triton.

Since doing something the harder and more complicated way for the same effect is irrational, and the author of the Kubernetes article presumably isn't irrational, I'm compelled to presume that he just didn't know about SmartOS and Triton, or that the person is just interested in boosting their resume rather than researching what the most robust and simplest technology is. If resume boosting with Kubernetes is their goal, then their course of action makes sense, but the company where they work won't get the highest reliability and lowest complexity that it could. So, good for them, suboptimal for their (potential) employer. And that's also a concern; moreover, it's a highly professional one. I'm looking through the employer's eyes on this, but then again, I really like sleeping through entire nights without an incident. A simple and robust architecture is a big part of that. Resume boosting isn't.


We detached this subthread from https://news.ycombinator.com/item?id=18138716.


The reason being...?


It relates primarily to the article and not the parent comment.


If you go to joyent.com you'll see "Multi-Cloud Kubernetes" is on their homepage above the fold and listed in their products & services category. SmartOS is an operating system. Triton is a data center for running container instances. Kubernetes is a container orchestration tool. These are all solving separate problems...


Triton can orchestrate containers[1] (they're actually Solaris zones[2], to which constraints in terms of project definitions[3] can optionally be applied, and those constraints are actually containers).

[1] https://www.joyent.com/content/11-containerpilot/chart.15076..., https://docs.joyent.com/private-cloud

[2] http://illumos.org/man/1M/zonecfg

[3] http://illumos.org/man/1M/projadd


While that's great, and I thought about plugging my own containers project here too to get it some exposure and possibly help someone (we're trying to help people get on the train, and that's commendable; I didn't downvote you), you've now gone from something bespoke to something basically singular, whereas Kubernetes is a leading industry standard with the weight of at least[1] 77 vendors who have sought and achieved conformance[2] with their own distributions of Kubernetes.

I haven't heard of Joyent or SmartOS in years! I am super surprised to hear of anyone recommending it today as a competitor to Kubernetes, and I have no facts or deep understanding of that platform so I won't belabor you with an argument about how Kubernetes is better. (I can't say if it is or isn't.) It's just not in the same ballpark. I'm glad it works for you. I'm especially glad to hear about another option (that we could potentially replace our bespoke deployments with), because the more of these things I know about, the louder I can clamor to upper management about the fact that we're not using any of these technologies yet, and we should be (to sleep through the night!)

I learned about Kubernetes through Deis Workflow. It took years to understand Kubernetes from end-to-end, and I was already a container veteran when Deis moved to k8s. I resisted! I caved. I came over, now I have years of experience with Kubernetes, and I can't say I'd recommend anything else. "Those complications" are all very hard to get over, but then ... you get over them! And largely don't have to do that again.

If you are on Kubernetes, then you are not locked in to any cloud provider (unless you have opted into another technology that made you locked in.) I can't say the same for Triton.

For the purposes of disclosure, I am a member of Team Hephy, the open source Deis Workflow fork. (Deis Workflow is EOL and Hephy is the continuation.) Workflow is how I learned Kubernetes, and I would still recommend it highly to anyone else that wants to learn Kubernetes. But I will not kid anyone into thinking it's going to happen overnight. (With Workflow though, you can absolutely start using it productively in about an hour.)[3]

[1]: https://docs.google.com/spreadsheets/d/1LxSqBzjOxfGx3cmtZ4Eb...

[2]: https://www.cncf.io/certification/software-conformance/

[3]: https://web.teamhephy.com or https://blog.teamhephy.info/#install


"If you are on Kubernetes, then you are not locked in to any cloud provider"

...but I would be beholden to GNU/Linux and have to do the same thing I do with SmartOS in a far more complex way, built on an operating system substrate which cannot provide the reliability that I need to be able to sleep through my nights without an incident.

Kubernetes, Docker and Linux are a time sink that I can never get back, spent on things which Solaris solved far better and more reliably 20 years ago. I don't want to go from a Pegasus to a donkey.


> the same thing I do with SmartOS in a far more complex way

Citation needed. Yes, Kubernetes runs on the Linux kernel, and until someone ports it to use something other than iptables and the Linux cgroups API, that will be true. But I could say the same thing about being locked into SVR4/OpenSolaris, and I bet you a Coke that a lot more people will agree with me.

The Kubernetes Slack community has over 48,000 members and nearly 1000 are online right now. Can you say that about SmartOS? There's a ton of value in that vibrant community. This is not meant to be a dig; seriously, where do you go when you need SmartOS support?


Don't waste your time, he's a troll that pops up on every thread that has anything to do with things like docker or k8s. It doesn't matter what the problem is, smartos is the solution.

SmartOS is solid technology, but he doesn't have the slightest clue what k8s does, yet he is completely certain that smartos somehow does "the same thing" better.

They aren't even mutually exclusive, there's no reason you couldn't write a CRI backend to run k8s on top of zones on smartos.


Personal attacks will get you banned here. Please make your points without stooping to that in the future.

https://news.ycombinator.com/newsguidelines.html


I suppose my comment appears harsh if you haven't seen month after month of his "smartos solves every problem and linux users are too stupid to know how to computer properly" type comments. Read the guy's bio:

> There exist no words in any of the languages I speak which can express my hate of GNU and GNU/Linux.

> I was raised on IRIX, HP-UX and Solaris. Huge illumos / SmartOS fan and FreeBSD sympathizer.

It's hard not to attack someone personally when their entire persona is tied to promoting a particular technology every chance they get.

The funny thing is all of his comments do absolutely nothing to convince anyone that smartos is worth looking at, which is a shame because it's actually quite nice.


"There exist no words in any of the languages I speak which can express my hate of GNU and GNU/Linux."

And I stand by that statement: if I didn't, it wouldn't be on my biography. I think it's fair and honest disclosure, rather than philosophy and playing with words. I mean what I say and I say what I mean.

Still, ad hominem attacks on someone with an opposing view do little to further your argument; at best, they come across as nothing but opinion.

For my part, I provided citations to make my point. I regret that you didn't seem to read through them before answering.


Hard but doable.

If you think someone is abusing HN, the thing to do is email us at hn@ycombinator.com so we can look into it.


"Don't waste your time, he's a troll that pops up on every thread that has anything to do with things like docker or k8s."

Thank you for an off-topic opinion. Yes, I do "pop up" on every Kubernetes and Docker topic because both are nothing but hype and bullshit (this topic with "no thanks" is a rare exception, and I wholeheartedly agree with the author of it, even though I vehemently disagree with some of his methods). What the hell do I need an "orchestrator" like Kubernetes for if all my components are packaged and I have a software deployment server where I can mass deploy to all my systems at will? (Hint: I don't.) Kubernetes is a solution to a string of bad decisions, making the situation even worse.

And Kubernetes is nothing but a provisioning solution. If you think Kubernetes is an orchestrator, you've never seen one. Nolio is an orchestrator.

So what if Joyent has Kubernetes support? Joyent isn't infallible; just because they make SmartOS doesn't mean they're right in everything. It's not like they can do no wrong. Let's refrain from fanboyism; being Joyent doesn't automatically make them right in everything they do.


The personal attack wasn't ok, but please don't "pop up" on every thread to say predictable things. Pre-existing agendas are tedious, and tedium is what we all come here to avoid.


If we're here to avoid pre-existing agendas, then Docker and Kubernetes promotional articles tick every box on the "tedious" agenda.

The best part is, the article itself says "no thanks" to Kubernetes, which I wholeheartedly agree with, and I have stuck to commenting relevantly on the topic, backed up by plenty of citations.


Perhaps those articles are tedious, but that's changing the subject. The point is that users who show up to grind the same axe in thread after thread are lowering the signal/noise ratio of the site, so we moderate them, and ban them if they won't stop.


It's not so much an axe to grind as it is a fight for better working conditions. If you had to go look for a job, and all everybody ran was this shitty Kubernetes and Docker, how would you feel being forced to work with that when you knew something better was available, but you couldn't find a job using it because everyone runs Linux, Docker and Kubernetes because they don't know any better? How would you raise awareness about better alternatives, with the goal of creating better working conditions and better jobs?


I mentioned our talk to my coworker, his immediate response was "does he work for Oracle?"

I'm only half seriously asking but, do you? (Given the certifications you mentioned, I think that even if you don't, there is a vague subjective case that you do...)


This comment breaks the site guidelines. Please review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting here.


I apologise, I thought that was a fair question and civil...

The missing context from another comment was:

> ... amount of institutional inertia I've come up against while trying to get any part of our Development or Production stack shifted over to Kubernetes, which I consider myself fairly expert in, I think you'd understand that "containers on Solaris" is not going to go over any better for me than containers on GNU/Linux,

Both SmartOS and GNU/Linux are open platforms, so it really wasn't fair of me to accuse a person of shilling for Oracle. I think I understand.

(That wasn't my intention, but if you read it that way, that's my mistake.)


You didn't accuse me of working for Oracle; your coworker implied it, but it also shows how little she or he is versed in the subject matter: the very reason top Solaris kernel engineers work at Joyent is that they vehemently oppose the Oracle corporation; it's that same reason why OpenSolaris was forked into illumos, from which SmartOS is built. Please let your coworker know my answer.


That is the answer I was looking for. And also shows what I know, as I just learned that SmartOS is not the same as OpenSolaris. (Thanks!)

> I'm against object oriented programming.

I just went back and read your profile again. Just wondering, what do you support instead? (At a guess I'd say functional programming?)

I've often heard and suspected for myself after gaining some "industry experience" that the Object-Oriented principles taken by themselves without a strong lead designer who is vocal about (his or her) strong opinions and willing to call out bloated, poorly thought-out designs... will simply tend toward generating a Big Ball of Mud, or "Shanty-town" code.

Is this generally how you feel about the subject? I think we'd probably get along well and I'd certainly like to hear from you again.


I'm against object oriented programming because code written that way is needlessly complex and unmaintainable. Experience has taught me that the procedural and functional approaches produce code which is easy to understand and therefore to debug and maintain.


I am in no way affiliated with either Oracle or Joyent. I have never worked for or at either of those companies.


> he doesn't have the slightest clue what k8s does, but is completely certain that smartos somehow does "the same thing" better.

I stand by this statement. You don't have any idea what k8s does.

If you actually cared about convincing people that you don't need k8s and smartos is better, you'd write your own article describing in detail how you can easily use smartos for running personal projects on a 3 node cluster. Put up or shut up. YOU are nothing but hype.


"If you actually cared about convincing people that you don't need k8s and smartos is better, you'd write your own article describing in detail how you can easily use smartos for running personal projects on a 3 node cluster."

How about you read the documentation, where it's already documented? You know, that thing called manual pages? On illumos-based operating systems we have manual pages which are actually useful. With detailed examples in the EXAMPLES section! And lots of them! So, how about you "warm up the chair" and read some docs for a change? Thanks. If you need some pointers, I can tell you where to start.

"Put up or shut up."

You or anyone else may not tell me what to do, nor will I listen to you unless I feel like it. Which I don't.


"Citation needed."

https://wiki.smartos.org/display/DOC/Managing+Images

https://wiki.smartos.org/display/DOC/How+to+create+a+Virtual...

https://wiki.smartos.org/display/DOC/Managing+NICs

https://wiki.smartos.org/display/DOC/Using+the+Service+Manag...

...and that's just a small, tiny sampling of what can be done, most of it doable across datacenters with Triton. With SmartOS, one virtualizes datacenters.
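As a taste of what those docs cover, day-to-day image and zone management is a handful of commands; the image name is just an example and zone.json is a placeholder manifest file:

    imgadm avail | grep base-64     # list importable images
    imgadm import <image-uuid>      # fetch one (uuid elided)
    vmadm create -f zone.json       # create a zone from a JSON manifest
    vmadm list                      # show the zones on this node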

"seriously where do you go when you need SmartOS support?"

To read the comprehensive manual pages, then fire up DTrace, then mdb, then to the SmartOS mailing list, and if I want professional support, I pay Joyent, like I'd pay redhat, SuSE, or Canonical.

I'm a computer science major with formal education and certification in Solaris. Those tests were notoriously difficult to pass. I know C. I know UNIX. I know how to program. I know how to debug. I know how to administer UNIX systems, and I know how to design physical servers from the ground up: everything someone engineering large-scale systems needs. How much support could I need, with such excellent and comprehensive manual pages with lots of examples in them?

"The Kubernetes slack community has over 48,000 members and nearly 1000 are online right now"

...maybe that should tell you how "simple" Kubernetes is then, unless they're all there to grill s'mores, hang out and sing "Kumbaya"?


So, am I understanding correctly that this is a virtualization solution? Isn't that not the same thing at all? Virtualization has a lot of overhead, and that would be my first concern, so can you address that?

(My platform team balked at me when I suggested OpenShift, because they confused it with OpenStack and didn't want to incur the overhead of a virtualization layer. It wasn't until I had said for the fifth time that it's a container solution and does not require any virtualization that they actually tuned back in and stopped asking what hypervisor it used.)

My point is not that I couldn't use it, but that again, we're not talking about the same ballpark. A virtualization solution is not a containers solution. Wait. Wait wait... I'm wrong, aren't I? From a quick google, it looks like Solaris Zones are almost exactly like containers in this way. So this actually does both, huh?

> maybe that should tell you how "simple" Kubernetes is then, unless they're all there to grill smores

I told you in my first post, I'm not going to sugarcoat it. Kubernetes isn't simple; there is some learning curve, but once you get over it, you are actually over it.

Can you honestly say there is no learning curve to SmartOS?

> I'm a computer science major with formal education and certification in Solaris. ... How much support could I need with such excellent and comprehensive manual pages with lots of examples in them?

I'll take that as a "no". And from what you're saying, it sounds like there is actually no community Slack? I don't believe you; come on, where do you go to gripe when you find something stupid? I've seen some projects that appear to only use GitHub, but when you dig a little deeper, they also have a Slack or some other community-organizing platform where you can reach a person who uses the software like you do, without an established billing relationship (oftentimes right away), and simply ask about their experiences.

Maybe it's an IRC channel? Maybe you are all consummate professionals and just use mailing lists, but I am skeptical of that.


" A virtualization solution is not a containers solution. Wait. Wait wait... I'm wrong, aren't I? From a quick google, it looks like Solaris Zones are almost exactly like containers in this way."

Yes, and they are true containers: fully functional UNIX servers, running at the speed of bare metal, because the OS is virtualized and not the hardware. I was running zones back in 2006 when project Kevlar first came out. Few months later we were running eight 4GB Oracle database instances in a single T2000 server, each database inside of one zone... GNU/Linux didn't even know what a container was yet, and wouldn't for several years...

Yes there's an IRC channel. Don't remember what it is. I've never needed to go there. I pop up on the SmartOS mailing list every few years to ask not a technical, but an architectural question. That's about it.

"Maybe you are all consummate professionals and just use mailing lists, but I am skeptical of that."

A lot of us are professional system and/or kernel engineers or architects, yes. Many of us are freelancers and consultants. Most of us are true engineers, with university degrees in engineering. But there are a lot of newcomers too, especially from ISPs, where SmartOS is a popular hosting solution for infrastructure and customers. The kind of stuff that has to work and has to be unbreakable because it's making money.


Well, I work in higher-ed IT, where we don't make money (sheepish grin), and if I told you the amount of institutional inertia I've come up against while trying to get any part of our Development or Production stack shifted over to Kubernetes, which I consider myself fairly expert in, I think you'd understand that "containers on Solaris" is not going to go over any better for me than containers on GNU/Linux, a kernel and platform that we already support broadly.

I appreciate that you engaged in good faith and took a lot of downvotes; I responded because I have had a serious problem here getting institutional support for a modern devops stack. (I've got a CS degree myself, but I mostly don't work on infrastructure, separation of duties and all that... I am a software developer in an environment where "buy, not build" is the number one advice.)

So when I see a stack mentioned that I haven't delved into before, I tend to want to know more about it. Like I said, thanks for humoring me and explaining.

We actually got a one-node Kubernetes instance stood up, onto which I was able to trivially install Helm and Jenkins via the stable Jenkins chart for Helm. We use it now every day for our CI to build and test our internal apps. That Jenkins server took about a day to get together, and maybe a week to get it nailed down with Ansible roles so that it would be reproducible.

Even if I had more control over these decisions, I can't see ever switching to SmartOS unless it had a vibrant packaging community and management system like Helm charts and kubeapps. That story is more than half of the value proposition for me. The other half is sleeping at night, and I'm glad you have that worked out ;-)

Another example: the other day I wanted to spin up WordPress so I could try out a novel plugin made by some friends of mine. I haven't run a PHP app in years and have forgotten how to go about setting that up, and I definitely haven't run a MySQL server at any time in the last 3 years, so you can see this is getting non-trivial even if it sounds like it should not be.

Well, Bitnami has contributed a WordPress and a MariaDB chart to the stable charts repo, so it was about 5 minutes of effort to get this stood up in a production-ready style. (I wouldn't call it production-ready exactly, but only because it took about 5 minutes and I barely reviewed it at all before I was up and running, ready to install the plugin and give it a go.)

The MariaDB charts are certainly production grade, with persistent volumes hosted in a StatefulSet and easily configurable scaling with replication, again built to be as opaque and easy to deploy whole-cloth as the person configuring it desires.
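Roughly, those 5 minutes with Helm (v2-era syntax, which is what's current as of this writing; the release name is arbitrary) amount to:

    helm repo update
    helm install --name my-wp stable/wordpress   # pulls in the MariaDB chart as a dependency
    helm status my-wp                            # prints the chart notes (how to get the URL and admin password)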

(edit: OK, but seriously I went looking, and sure enough... SmartOS has a documented path to install both WordPress and Jenkins. I guess I need to find some new examples...)

I'm telling you this because I disagree that Kubernetes is not an orchestrator; if you count Helm it is most certainly capable of orchestrating complex workloads.

I'm not trying to convert you, but I am trying to show that K8S has got some advantages that you can't easily recreate in SmartOS, and to emphasize again that for many of us, it's all about the community!

Have a great day, ^_^\/


"I can't see ever switching to SmartOS unless it had a vibrant packaging community"

SmartOS has close to 15,000 packages and growing[1], because it uses NetBSD's pkgsrc, which is also used by FreeBSD. Since the Joyent engineer responsible for this is also a pkgsrc developer, pkgsrc now has full upstream support for illumos / Solaris based operating systems. It also means that SmartOS has full access to FreeBSD's software library, which is a wealth of software: unless it's Linux kernel specific or extremely badly written, pretty much anything you can imagine on GNU/Linux is available on SmartOS. For those few isolated cases of badly written software, one can always run a Linux (lx) branded zone inside of SmartOS at bare metal speed[2][3].

[1] https://www.perkin.org.uk/posts/building-packages-at-scale.h...

[2] https://wiki.smartos.org/display/DOC/LX+Branded+Zones

[3] https://docs.joyent.com/sdc6/working-with-images/list-of-sup...



