Hacker News | tenfourty's comments

OP here, I agree. You can get super far with a service like fly.io/Heroku/Netlify/Vercel, etc. (pick the one that works with your stack). VMs are also an anti-pattern for the early stages of an application or startup.


I find those solutions end up being harder once you have to integrate something new you didn't know you needed. And the cost, at least for something like Heroku, is MORE than using something like GKE, where I don't need a new dyno for every service. I consider GKE (and DO's and AWS's equivalent k8s solutions) to be on the same platform level as fly.io/Heroku/etc.


[OP here] It feels bizarre saying this, having spent so much of my life advocating for and selling a distribution of Kubernetes and consulting services to help folks get the most out of it, but here goes! YOU probably shouldn't use Kubernetes and a bunch of other "cool" things for your product.

Most folks building software at startups and scale-ups should avoid Kubernetes and other premature optimisations. If your company uses Kubernetes, you are likely expending energy on something that doesn't take you towards your mission. You have probably fallen into the trap of premature optimisation.

Please don't take this post to be only aimed against Kubernetes. It is not. I am directing this post at every possible bit of premature optimisation engineers make in the course of building software.


Doesn't it depend on what staff you have? I would have agreed with this in 2018, but the world has moved on.

If you have a bunch of people who know it, you can deploy a cluster into gcloud or aws with a few clicks or lines of IaC.

Would I recommend a startup team learn kube while trying to ship a product? No.

Would I think it's a red flag if a team who already know it choose it as their preferred platform? Also no.


Even if they know it, unless they absolutely need to implement it, why would you waste such valuable resources on, of all things, infra? (Unless your product IS infra.) No engineer I know who's smart enough to effortlessly deploy k8s on their own would want to do that as their job. There's a million other interesting things (hopefully?) that they can be doing.


Not at all. If you know Kubernetes well and your needs are fairly simple it takes no time at all.

On a project a couple of years ago, my cofounder and I opted to use Kubernetes after running into constant frustration while following the advice to keep it as simple as possible and just use VMs.

Kubernetes makes many complicated things very easy and lets you stop worrying about your infra (provided you’re using a managed provider).

On our next project we used a PaaS offering because we had a bunch of credits for it and it was painful in comparison to what we had with Kubernetes. It was way more expensive (if we didn’t have credits), the deployments were slow, and we had less flexibility.

Kubernetes isn’t perfect, far from it. But for those who know it well it is usually a good option.


Often to get access to things that integrate well with kubernetes.

Personally I don't see too much difference between kube yamls, systemd units, CloudFormation, or whatever, and the "cluster maintenance" itself is not the burden it used to be if you stay on the "paved road" provided by your cloud.
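
For what it's worth, a minimal Deployment manifest for a hypothetical stateless service (I'm calling it "web"; the image is a placeholder) carries about as much information as a systemd unit plus a bit of proxy config:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                    # hypothetical service name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: registry.example.com/web:1.0    # placeholder image
            ports:
            - containerPort: 8080

The main difference is that the scheduler acts on this for you instead of you babysitting a unit file on a box.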


What about maintenance of that cluster (of possibly only 1 machine) though?


You can get away without an orchestrator right up until your ARR hits about $10mm.

Hiring people with k8s experience in 2022 is not a difficult task - it's not an obscure technology anymore - and when your initial crop of devops folks decides to leave, the new guys coming in can scan your deployments and namespaces and pretty much hit the ground running within 48-72 hours. That's a big, important part of running a business: being able to hire people with the right skill set, and being able to keep the lights on.

Bespoke systems built on top of EC2 require time, extra documentation, and a steep learning curve - not to mention that the bespoke system probably isn't getting security audits or being built to modern standards or best practices, and its tooling isn't being kept up to date. I can build a mechanical wristwatch in my garage, but if I had full access to the Rolex factory floor for free, I'd probably take that option.


So much for theory.

I just came out of a project where Kubernetes performance issues involved wild guesses and blind tuning until the so-called experts, with support from the cloud vendor, actually found out why the cloud cluster was behaving strangely.

And good luck making sense of all the YAML spaghetti available for bootstrapping the whole cluster from scratch.

72 hours? They'd better be a 10x DevOps team.


This is our world right now. A kubernetes cronjob sometimes fails and we have no idea why.
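
For anyone else in that spot, the CronJob fields that keep failed runs (and their pod logs) around for inspection look roughly like this - the name, schedule, and limits are made up for the sketch:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report               # hypothetical job name
    spec:
      schedule: "0 3 * * *"
      failedJobsHistoryLimit: 5          # keep the last 5 failed Jobs (and their Pods) for debugging
      successfulJobsHistoryLimit: 1
      jobTemplate:
        spec:
          backoffLimit: 2                # retries before the Job is marked failed
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: report
                image: registry.example.com/report:1.0   # placeholder image

With failed Jobs retained, `kubectl logs` on the leftover pods usually tells you why they died.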


If your devops guys are struggling, you are hiring the wrong devops folks, or at the wrong end of the pay band. Most devops guys I know are paid in the same band as a senior developer.


Sure, it is like bug-free C code: it is only a matter of having the cream of the crop. Pity there aren't enough of them in the world, including on the cloud vendors' support teams.


Running self-hosted Kubernetes is indeed ... questionable, but managed Kubernetes? That's a pretty sane thing to do IMO. The alternatives are running your Docker containers manually on some EC2 instance or bare-metal server, which makes deployments a nightmare, or using something like Elastic Beanstalk, which is even worse.

For me at least, Kubernetes has become something like a universal standard: if you're already running Docker or other containerization, "using Kubernetes" is nothing more than a clear, "single source of truth" documentation of what infrastructure your application expects. It certainly beats that Confluence document from 2018, last updated in 2019, on how the production server is set up - the one that didn't even match reality in 2020, much less today.


A managed cluster is not as managed as you might think. You still have to configure and install a lot of things yourself.


Of course, "managed" means you don't have to take care of, say, creating your own X.509 CA and issuing certificates, tending to their expiry, installing Tiller, setting up pod networking, etc.

All of these are much more annoying and harder than `helm upgrade --install`ing some charts into your own cluster.


To be fair, installing Tiller hasn't been a thing for years. And pretty much everyone is using cert-manager and Let's Encrypt, which makes the whole X.509 story pretty much a no-brainer.
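
For anyone who hasn't seen it, the Let's Encrypt side boils down to something like this cert-manager ClusterIssuer - a sketch assuming the ACME HTTP-01 solver and an nginx ingress controller; the email and secret name are placeholders:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: ops@example.com                  # placeholder contact address
        privateKeySecretRef:
          name: letsencrypt-prod-account-key    # secret cert-manager will create for the ACME account key
        solvers:
        - http01:
            ingress:
              class: nginx                      # assumes an nginx ingress controller

Once that exists, referencing the issuer from an Ingress (e.g. via the cert-manager.io/cluster-issuer annotation) gets certificates issued and renewed automatically - no hand-rolled CA, no expiry calendar.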


That is true. But if you compare it to, for example, managed web hosting, then it is a very different managed experience.


What about the standardisation that comes with using a framework like Kubernetes? Without k8s you end up with ad-hoc deployment methods and clunky workarounds for handling networking, policies, secrets, etc. Using Kubernetes, or even ECS, signals that the team or developer is looking to use a fixed set of rules for infrastructure. Also, k8s scales well even for smaller apps.
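
To make the "fixed set of rules" point concrete: policies and secrets are first-class objects rather than ad-hoc scripts. A sketch with made-up names - a default-deny ingress policy plus an opaque Secret for one namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: my-app                # hypothetical namespace
    spec:
      podSelector: {}                  # selects every pod in the namespace
      policyTypes:
      - Ingress                        # all ingress is denied unless another policy allows it
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials             # hypothetical secret
      namespace: my-app
    type: Opaque
    stringData:
      DB_PASSWORD: change-me           # example value only; use a proper secret store in practice

Every team member reads and writes the same kinds of objects, so there's nothing bespoke to reverse-engineer.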


Seriously, I use k8s for the same reason I use docker: it's a standard language for deployment. Yeah I could do the same stuff manually for a small project.. but why?


Last year I tried to "simplify" by doing things mostly the old way (though we used containers for a bunch of things later on).

The experience pushed me to decide "never again" and to plonk k3s, or at least something like podman, onto the server from the start.


wrong, read it all again!


will Windows base images be made available on the Docker Hub? How will that work?


> will Windows base images be made available on the Docker Hub?

Yes!

> How will that work?

The same way it works now. Find an image you want to run, and docker run it.


So Windows licences will be free?


I suspect that you'd still need a Windows Server license to start off with. It may or may not need a VM under the hood, but that's a MS thing to figure out. They certainly couldn't license a container the same way as a full Windows VM or bare metal install.


I'd expect that, since the idea is having your Windows container running on a Windows host, each version of Windows Server will have a "this edition allows X number of containers" limit. This would line up with how they treat VMs. Let's hope the number is 5x the current VM limits.


> Let's hope the number is 5x the current VM limits.

Why, let's hope they are unlimited. :)


Great question.

I'll be honest and say I don't have an answer. However, we're working on deriving one and you'll know as soon as we do.


I thought the whole point of containerization was that you ran apps in a container, but the container doesn't contain an operating system. (You have several containers on one instance of the OS, not several VMs, each with its own OS. That's how it works in Linux, anyway.)


I completely agree that open source is taking over - indeed, has taken over - the world, and that that is a fundamentally good thing for everyone on this planet.

I did notice some things that aren't quite right, though, so here are some comments from a Red Hat employee - these are my opinions and not necessarily those of my employer.

> The tenet of open source has always been to give away the “open core” for free, and then charge for additional features.

and

>Companies can try a simple version of an open source app for free, and if they like it, can pay for value-added enterprise features.

This is not actually true of the way Red Hat sells its products; in fact we do something that is quite the opposite, and far more powerful. All features are in our community projects like Fedora or JBoss WildFly; these features might be a mix of whatever is the latest and greatest that people want, as well as all the normal bits you'd expect. When we make an enterprise-ready product out of these community projects, like Red Hat Enterprise Linux (RHEL) or JBoss Enterprise Application Platform (EAP), our focus is actually on stability and longevity - in other words, something we can stand behind and support for up to 10 years or more! So we actually remove, or don't pull in, features from the project that aren't ready yet (which we know because we employ a lot of the professional engineers who are committers). We then have a rigorous testing process before releasing these enterprise-ready products to our subscribers. One of the key bits people don't understand is that once we release an enterprise product, we will then back-port all security and bug fixes from the latest and greatest community version for the supported life of the enterprise product.

Two things to note about this: the enterprise versions are open source as well, and we follow an upstream-first approach, so bug fixes and security patches are applied to the upstream version as well as to the stable enterprise version. That means upstream has all the latest fixes, BUT it also has all the latest features and changes, so it's more like our R&D version - something you can play with and do some development on, but not something we would stand behind for production environments.

I wanted to point this out because it is fundamentally different from the open core model you describe, which I seriously object to as an approach because it still locks you into the vendor who is selling you the extra bits.

> The first wave of open source leaders, including Red Hat, relied almost entirely on the community to build their products.

I'm not sure this statement is entirely true: Red Hat has been a top committer to the Linux kernel and to the JBoss projects for many, many years, and in fact around half of our company are professional engineers working on open source projects!


> > The first wave of open source leaders, including Red Hat, relied almost entirely on the community to build their products.

> I'm not sure this statement is entirely true

Red Hat is a good open source community citizen, and has contributed tons back, without a doubt. However... out of the total lines of code they distribute, if you start to consider Apache and the GNU tools, and various libraries, I wonder what the percentages look like? I think all things considered, it would reflect well on Red Hat for all they have done, but still likely be a small-ish percentage.


I guess the point here is that Enterprise Linux represents a significant share of the market and if Docker doesn't work on it then it can hardly be a standard. With Red Hat behind this project it's a significant boost to Docker becoming a standard around Linux containers. Does that make sense?


A few thoughts on this. Firstly, you are right to an extent - Docker doesn't give you complete portability, due to binary dependencies - but if, for example, I wanted to move my app from one major version of CentOS or RHEL on a virtual machine to something running the same major version in the cloud, it should be relatively straightforward. You are right, though, that this won't work going from something like Ubuntu to SUSE, for example.

LXC currently has a gaping hole around security because it doesn't use SELinux - you might want to read this article: http://mattoncloud.org/2012/07/16/are-lxc-containers-enough/

The key bit is that we are starting to converge on a standard container for the Linux OS, and with Red Hat working to get this playing nicely with SELinux, we should have a pretty awesome container for our apps.

Finally, Docker adds a lot more on top of LXC, which is why people love it so much - you can see a comprehensive answer by Solomon Hykes here: http://stackoverflow.com/questions/17989306/what-does-docker...

Given the above, I'd rather standardise on an open project that adds a lot more value on top of LXC and hides that complexity away, and given that the largest Linux vendor is putting its weight behind this, I'd say this is rather awesome!


Who says LXC doesn't support SELinux?

https://github.com/lxc/lxc/blob/99282c429a23a2ffa699ca149bb7...

      <title>SELinux context</title>
      <para>
        If lxc was compiled and installed with SELinux support, and the host
        system has SELinux enabled, then the SELinux context under which the
        container should be run can be specified in the container
        configuration. The default is <command>unconfined_t</command>,
        which means that lxc will not attempt to change contexts.
      </para>
IBM guide for SELinux-protected containers: http://www.ibm.com/developerworks/library/l-lxc-security/#N1...

Additionally, LXC is "Open" since it is GPL2, it has been around since before 2006 so it's as much a de-facto standard as you can get, it's already supported by multiple other projects and distros, and it doesn't lock you into "the Docker way" of doing things - you get to choose how you implement it. It's flexible, lightweight, simple, and stable. Docker will work for many use cases, but LXC will work for all of them. It's lynx vs wget, basically.

The libvirt-sandbox project looks like a nice way to manage sandboxed selinux-supported lxc instances that you can convert to qemu/kvm depending on your needs.


I absolutely agree with you: Docker is where the community seems to be converging for standardisation around Linux containers. This is a great first step towards application portability across physical, virtual, and PaaS environments. Now it will be interesting to see who/what will win when it comes to the cartridge/buildpack side of things.


Is it really where the "community" are converging?

Docker is well-known because DotCloud is pushing it, so there's money for marketing and hype.

Docker is an attempt to make DotCloud relevant.

Docker will live on as an open-source project, but if it doesn't gain traction fast enough for DotCloud to sell its services or raise extra funding, DotCloud will die.


I'd say this is a very negative comment because you are purely criticising without proposing where the community is actually going in your point of view. It might be more constructive to give your opinion on where containers should go if not Docker. I agree with you that this has been a pivot by dotCloud, but my hat goes off to them for it and I totally respect them as a startup for doing it!


Hopefully the community will continue moving forward by getting actual work done. These technologies are all cute, but nothing here, except the scale of marketing, is groundbreaking.


I really hope so. With the work that Red Hat and dotCloud are doing, it will be possible for a Docker container to run on any Linux OS, which is a significant first step towards ubiquity. Whether Heroku's buildpacks can then be layered on top of a Docker container is the question - not impossible, as they are open source, but will Heroku come on board? It seems Red Hat's OpenShift is making moves towards this being possible with their cartridges, which are the equivalent of Heroku's buildpacks. The key bit is that the community seems to be converging around some standards that could lead to true application portability for PaaS, something we don't really have yet.


Jeff Lindsay wrote https://github.com/progrium/dokku which uses Heroku Buildpacks to build an app and then run it using Docker - it is a very neat little tool : )


Very true - this would be more compelling if Heroku got behind the project themselves, because their buildpacks would be very portable and possibly a future standard. It seems that Docker is becoming the container of choice, but how apps are layered on top is still very much up for debate.


Hi Chris, I don't think the Docker team have gotten that far yet in knowing where they are going to go, but you can imagine that in the future I blogged about, you might be able to layer OpenShift's gears on top of Docker - now that would be very cool! There is certainly some convergence going on here, which is really exciting for PaaS.

