Watching this from a distance, I vaguely understood Docker. Then everyone was talking Kubernetes, and I vaguely grasped it was some kind of meta solution for something or other. And evidently really complicated but really buzzwordy. And then came Serverless, which seems to really mean 'sort of stateless' and reminds me of MTS, an early Microsoft effort that let you run COM objects as a service on an NT server, but only if they were stateless. And now this thing, which I can't begin to fathom. In short, I am lost, but I'm beginning to wonder if all the cloud complexity might one day wrap back around to something very simple again.



You are exactly right, serverless is very much like COM (and wow, haven't thought of MTS in decades), except (!) that there is no mechanism for defining standard interfaces provided by a component, and no registry in which to indicate that said component provides said interfaces...yet. The "ServiceCatalog" cloud product will be the vehicle through which the concept of interfaces is delivered to serverless. Absent interfaces, you might think serverless is spaghetti messiness, and you would be right.

We are in a very messy infrastructure stage at the moment, which really just revolves around three concepts: being able to run code on a lot of machines as "easily" as on one machine (Kubernetes), being able to support multiple runtime languages consistently in that context (Docker), while also protecting the machines the code runs on from malicious and/or poorly behaved code (Kubernetes and Docker together). So maybe we will soon get back to a place of "simplicity" with a stateless, function-oriented programming model that looks a lot like COM/DCOM, hopefully with better ergonomics.
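To make the COM analogy concrete, here's a minimal sketch of what "interfaces plus a registry" could look like for stateless serverless components. Everything in it (the interface, the component, the registry) is hypothetical, invented purely for illustration; no cloud product works exactly like this today:

    from typing import Protocol

    class GreeterInterface(Protocol):
        """A standard interface a component can declare it provides."""
        def invoke(self, request: dict) -> dict: ...

    class HelloComponent:
        """A stateless component: no instance state survives a call."""
        def invoke(self, request: dict) -> dict:
            return {"body": f"Hello, {request.get('name', 'world')}!"}

    # A registry mapping interface names to implementations, playing
    # the role the COM registry (or a future "service catalog") would.
    REGISTRY: dict[str, GreeterInterface] = {"Greeter": HelloComponent()}

    def dispatch(interface: str, request: dict) -> dict:
        # Look up a component by the interface it claims to provide.
        return REGISTRY[interface].invoke(request)

    print(dispatch("Greeter", {"name": "HN"}))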


Viewed through the lens of a pure app developer, yes, these technologies tend to be just more to grok.

But the benefits aren't really reaped by the app developers - they are reaped by operations. These technologies save money by reducing operational complexity. This happens by decoupling the infrastructure from the app. For MTS you still had to configure and manage your server or pay someone a lot to do it. I guess we have come full circle but it's probably more flexible and cheaper this time around.

Here is my 2 cents.

Instead of your team managing a fleet of servers and their operating systems (which includes security patching, user management, log rotation, etc.), you move to raw containers running on a self-managed server fleet. Now you have (largely) decoupled the app from the infrastructure it's deployed to via the container interface. Remember CodeDeploy scripts? You don't need those any more, since devs are delivering immutable images; that logic moves into the Dockerfile.
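For instance, a minimal sketch of such a Dockerfile, for a hypothetical Python app (the base image, file names, and start command are all placeholders):

    # Build steps that used to live in deploy scripts now live here,
    # producing one immutable image that runs the same everywhere.
    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]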

OK, so you've gotten this far and it's great, but how do you do autoscaling, failover, etc.? Well, you can have your ops team do a bunch of work to make it happen on your self-managed container host, or you can run your containers on a managed Kubernetes fleet. Bam, now you have autoscaling, failover, etc. And the best part is that the Kubernetes API is platform-independent, so it's much easier to move to a different cloud. Or you go with Fargate here if Kubernetes is too complex for your needs.
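As a rough sketch of what that buys you, assuming the hypothetical image from above: a Deployment gives you failover (Kubernetes reschedules dead pods) and a HorizontalPodAutoscaler grows the replica count under load. All names and numbers here are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webapp
    spec:
      replicas: 3                 # pods are rescheduled if a node dies
      selector:
        matchLabels:
          app: webapp
      template:
        metadata:
          labels:
            app: webapp
        spec:
          containers:
            - name: webapp
              image: example/webapp:1.0   # the immutable image
    ---
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: webapp
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: webapp
      minReplicas: 3
      maxReplicas: 20
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70    # add pods above ~70% CPU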

But your app is dead simple and you don't care about configuring all this crap. So you ditch Kubernetes (or you jump straight here) and you go serverless. Now the ops work is dead simple: deploy the serverless app with ten configuration options. No autoscaling to manage, way less security to manage, no user accounts to worry about. Just make sure it's written correctly and runs on the specified interpreter.
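For a sense of scale, here is roughly what the entire deployable can shrink to, sketched as an AWS Lambda-style handler in Python (the event shape is hypothetical; memory, timeout, and runtime live in those few configuration options, not in the code):

    import json

    def handler(event, context):
        # The platform invokes this per request and scales it for you;
        # no servers, SSH keys, or autoscaling policies to manage.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }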

IMO there is no way that these technologies are going away. They provide way too much convenience. If you're a small startup (maybe no ops specialists?) and need to run a self-managed, off-the-shelf OSS server, do you want to waste time dealing with SSH keys, choosing the OS, configuration, etc.? Or "just" grab the container and throw it into your EKS cluster and forget about it (this step will get easier)? Need to deploy a stateless app? Throw it into Lambda and don't even worry about EKS.

Disclaimer: yes, each benefit I mentioned is achievable as you go down the layers. That's because the higher layers are built on the lower ones ;-) This post is about how much work it takes to go from zero to live and to manage the live system indefinitely.


I'm currently working on a project that I've suspected from the very beginning is way too complicated for a serverless solution. Nine months into it and I'm thoroughly convinced. The scaled container approach (Kubernetes/Docker) you've described would have been much more pragmatic.


> wonder if all the cloud complexity might one day wrap back around to something very simple again

Yes, all these solutions are gradually converging back to CGI.
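Half-joking, but the structural resemblance is real: a CGI script is a stateless program the web server runs once per request, which is more or less the serverless contract. A minimal Python CGI script for comparison:

    #!/usr/bin/env python3
    # The server spawns this per request; it reads the request from
    # environment variables, writes a response, and exits. Stateless,
    # per-invocation -- squint and it's a Lambda handler.
    import os

    print("Content-Type: text/plain")
    print()
    print(f"Hello, {os.environ.get('REMOTE_ADDR', 'world')}!")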


CGI? CICS!


How so?


Actually, it's not very complicated. The main problem is that people on both sides of the fence, i.e. customers and the AWS/Google/Microsoft people, are conflating things, either deliberately or simply through lack of understanding.

I'm going to simplify some of the things; please don't get your pitchforks out if I gloss over some aspects.

Consider a web app: you most likely store your data in a database, have some code for the business logic, say PHP, and a bunch of front-end stuff, HTML and JS. You set up a web server, say Apache, with an interpreter for your business-logic code, say mod_php, and a DB server, MySQL, on the same physical (or virtual) server. Your www.something.com is up and running.

Fast forward some time and your business grows. You notice that access to your website is very slow, since there are so many people trying to use it. You move the DB onto another physical server to free up resources for the business logic. This doesn't last long and your website is slow again. You also notice that your DB server is not doing that much, yet the app server is being hammered.

You set up another app server, call it ww2.something.com, rename your original one to ww1.something.com, and connect both of them to your DB server. You also set up a third server, way smaller than the app servers, that on requests to www.something.com, based on some heuristic, will HTTP-redirect the user to either ww1. or ww2. [glossing over the DB details here]. You see this works and you keep adding wwxxx.something.com.

One day, on the 20th of December, after you've finished deploying ww1337.something.com, you realize you have another problem: how on earth are you going to deploy the brand-new version of your webapp, which your team worked hard to finish before Christmas, in time for the 25th? And there's a new CVE published: Apache version 1234 is vulnerable to the Grinch exploit. Not to mention that you expect a huge spike in traffic from the 25th to the 10th of January, so you need to add more servers. You also need to pay for those extra servers all the way to the 25th of January, and you start to wonder whether this is worth it. And ww1256 has a hardware failure. Or was it ww1265?
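As an aside, that little third server is doing something like the following, sketched in Python (the heuristic here is just random choice; a real one might hash on the client IP or track per-backend load):

    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BACKENDS = ["http://ww1.something.com", "http://ww2.something.com"]

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            # Pick a backend and bounce the client to it.
            target = random.choice(BACKENDS) + self.path
            self.send_response(302)
            self.send_header("Location", target)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), Redirector).serve_forever()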

Some time after, you have a chat with the CEO of this hosting company. He wants you to stop using bare metal and instead move your stuff onto virtual machines; they will make sure that all your VMs run on their cluster with an uptime of 99.99%. They also give you some tooling to build virtual machine images, so a new deploy now takes hours instead of days. No more worries about hardware failures [glossing over], and if you find vulnerabilities, you can react in hours.

With the extra money made by being able to quickly deploy new code, and hence develop new features, you hire more people and grow your business even more. One of the new people pokes around for some time and comes up with a list of things she thinks could get better. For example, your web app has power users, and the HTTP-redirect load balancing makes everybody who happens to end up sharing a server with such a user terribly miserable; you should load balance per request, not per session. Also, most of the features are fairly light, resource-wise, and just a handful are heavier; you should offload those requests to another set of beefier, more expensive VMs, so that while the main app servers wait for a forwarded request to complete, they can answer a bunch of the lighter requests. You agree and add the final piece of the puzzle, the application load balancer, and move some of the features to a different set of resources.

Hope this clarifies what some of these pieces do. Next, a brief list of real-life equivalents for the above:

1) VMs: any technology that allows you to run a piece of code in isolation. VMware images, Virtual Private Servers, Docker containers; this is NOT a new technology.

2) cluster: any group of CPU/memory/storage you can use to run VMs. AWS Fargate is one, the entire EC2 is another, and an ECS cluster of EC2 instances is yet another.

3) tooling: whatever you use to create VMs and instruct the cluster to run them. ECS is the pure AWS solution. Docker Compose is another one. Kubernetes is another one. EKS is the AWS offering for managed Kubernetes; GKE is the Google version.

/edit: GKS -> GKE



